
Elon vs. OpenAI: The Battle Over For-Profit AI w/ Salim Ismail | EP #138

2024/12/20

Moonshots with Peter Diamandis

People
Peter Diamandis
Founder and Executive Chairman of the XPRIZE Foundation and Singularity University; renowned entrepreneur and futurist.
Salim Ismail
Renowned expert on exponential organizations, serial entrepreneur, and technology strategist; founding Executive Director of Singularity University and founder of ExO Works.
Topics
Peter Diamandis: The dispute between OpenAI and Elon Musk is essentially a battle over the biggest opportunity on the planet. Elon Musk wanted OpenAI to adopt a for-profit model from the beginning and hoped to fold it into his tech empire. OpenAI's public letter shows that Elon Musk wanted to become OpenAI's CEO and to integrate it with Tesla, SpaceX, and his other companies to support his Mars ambitions. Salim Ismail: OpenAI's public letter is likely a response to Elon Musk's lawsuit, rebutting his allegations. The fundamental disagreement between Elon and the OpenAI team was that Elon wanted OpenAI to become part of his tech empire, while the OpenAI team wanted to develop independently.

Deep Dive

Key Insights

What was Elon Musk's initial stance on OpenAI's business model?

Elon Musk wanted OpenAI to be a for-profit model from the beginning, aiming to integrate it into his tech empire, including Tesla, SpaceX, and X, to fuel his missions towards Mars.

Why did OpenAI transition from a nonprofit to a for-profit model?

OpenAI transitioned to a for-profit model because they needed hundreds of billions of dollars to achieve their goals, which they couldn't raise as a nonprofit. This shift led to a split between Elon Musk and Sam Altman.

What is the significance of SoftBank's $100 billion investment in AI in the US?

SoftBank's $100 billion investment in AI signifies a push for global dominance in the field, aligning with the US's strategic goals to lead in AI development and innovation.

How does AI compare to electricity and the internet in terms of its impact on humanity?

AI is equated to the transformative impact of electricity and the internet, as it is expected to become a fundamental utility for every human and nation, revolutionizing industries and daily life.

What are the potential risks and benefits of AI according to Elon Musk?

Elon Musk believes there is an 80-90% chance AI will have a positive impact, but a 10-20% risk of disaster. He emphasizes the need for control and careful development to mitigate potential negative outcomes.

How is AI expected to disrupt the healthcare industry?

AI is expected to revolutionize healthcare by replacing administrative tasks, improving diagnostic accuracy, and enabling continuous health monitoring through wearable sensors, potentially reducing costs and increasing efficiency.

What is the role of governments in the global AI race?

Governments are aligning with major AI companies to secure strategic advantages, as AI is seen as the most critical technology for future dominance. This includes investments and partnerships to ensure they remain competitive.

How does AI's diagnostic accuracy compare to that of human doctors?

AI, specifically GPT-4, scored 90% on medical diagnoses, outperforming doctors who scored 76% with AI assistance and 74% on their own, highlighting AI's potential to replace human doctors in diagnostics.

What is the future of AI in education?

AI is expected to disrupt education by replacing traditional teaching methods, offering personalized learning experiences, and reducing costs, as seen in the potential to replace 40,000 English teachers with AI in a Southeast Asian country.

What is the significance of the Asilomar conferences in the context of AI development?

The Asilomar conferences serve as a model for guiding AI development through industry self-regulation rather than government intervention, ensuring ethical and responsible advancement of the technology.

Transcript


OpenAI puts out a letter. Elon wanted an OpenAI for-profit model from the beginning. Ultimately, what we've got here is a battle for the biggest opportunity on the planet. I used to think early on that this was going to be a game between governments. This isn't. This is a game for all the marbles between companies. It's hard to even process

how fast this is going to go from now on. The CEO of SoftBank making a commitment for $100 billion investment in the US in AI. If this is a sign of where the US is going to be going, it really is a push for global dominance in this field. Everybody, welcome to a special end of year episode of Moonshots with Salim Ismail and myself. And WTF just happened in tech this week.

We're going to be talking about some recent news from OpenAI showing their conversations with Elon Musk and the debate about for-profit versus non-profit, the lawsuits going on. But more importantly, the huge amount of capital flowing into the AI world. I mean, hundreds of billions of dollars. This is a game that is going to play out aggressively in 2025. I want you to hear the details immediately.

We're also going to be talking about the world of AI and healthcare, the disruptions that are coming to make us all healthier and to drop the costs, hopefully orders of magnitude.

This segment is sponsored by three incredible companies: Fountain Life, Viome, and OneSkin. You know, Fountain Life is a company that I care deeply about. It is my partner in helping transform what I understand about what's going on inside my body, to find disease early and then deliver to me the top therapeutics around the planet. You can check them out at FountainLife.com.

Viome has built custom supplements for me, understanding my oral and gut microbiome, measuring it and helping me maintain the health of the 40 to 100 trillion organisms that are within my body. And then OneSkin is an incredible company that helps me maintain my skin youth.

I get compliments about my skin, which at 63 is kind of a strange thing, but I attribute it to OneSkin and the peptides they have for getting rid of those senescent cells in your skin. All right.

All the links are down below. Let's jump into this episode with Salim Ismail. Let's talk about where AI and health is going. It's an extraordinary future ahead. Everybody, Peter Diamandis here. Welcome to a special end-of-year episode of WTF Just Happened in Technology here on Moonshots with my partner, my extraordinary best friend, Salim Ismail, the CEO of OpenEXO.

An individual who I've been on stages around the world with, the person I love speaking with about exponential technologies: where they're going, how fast they're moving, and where the world is heading. Salim, good to see you, buddy. Likewise. We're both a little raw from different things last night. Yeah. Yes. You know, the news had been coming out, and we wanted to have a conversation about AI and healthcare.

Yeah, I think eight hours of sleep is always our objective. Didn't happen last night. So if you hear me a little raw, like Salim said... But we're living in such an exciting world that I wanted us to have this conversation before the end of the year, because 2024 is going to be marked as one of the most important years in AI, I think, and, you know,

I think one of the most technologically relevant years in the history of humanity. It's a big, big thing. And the only time more relevant is going to be next year, which we're on the cusp of. I mean, we're in the steep part of the curve. I mean, it feels that way. One way I frame it is this next 20, 30 years is going to define the next few centuries of humanity. Do you think it's 20 or 30 years or is it like the next 10 years?

It's all of it. I mean, it's hard to even process it. You know, you often comment that for tens of thousands of years, nothing happened in humanity. Right. And all of a sudden, boom. So, yeah. I just wonder if we were alive, you know, 110, 120 years ago, when the airplane is flying and cars are coming online and then electricity and telephony, whether it would have felt

like it does now. Lily wanted to say hi to you. Hi, Lily. Good morning to you, Lily. I can't hear you. Peter's saying hi to you. Hi, Peter. I'll see you soon. See you soon. So I wonder if it would have felt as fast as it does now because it feels crazy fast. You know, one thing I've noticed is the speed of collective conversation because of

Twitter X and social media and digital content and digital news, we're leveling up humanity with global conversations incredibly fast.

Like a joke happens and it gets collected into the collective consciousness in almost real time. And that's something that's totally new from 100, 200, 300 years ago. So I want to talk about AI and healthcare. I split my life 50% AI, 50% sort of longevity, health, biotech, healthcare.

Whoa, whoa, whoa.

those carbonaceous chondrites and metallic chondrites and create an economy in space. But it's expensive, and so I need to make a bunch of bank in the longevity and AI business first, and then go and fund my space work. I think that would be good to do. You know, for me, the space stuff is most relevant because you can back up humanity and give us an alternative spot for this, and I think that's really important. And by the way, 2024 was an amazing year in terms of space, with seeing Starship

get almost entirely to its objective of full reusability. And I think in the first half of 2025, we're going to see the Starship booster caught, and we'll see the Starship itself caught. And that's transformative across the board. But let's dive into AI first, because there's a lot in the news. And I want us to get this out online,

to your OpenEXO community around the world, to my abundance in Singularity University community around the world. There's just so much. And it's the biggest game that humanity collectively has ever played.

And here are two quotes I want to read that come out of Silicon Valley that frame this. I believe they frame what's going on. And it's, you know, this is people playing for all the marbles. Let me read this for those not watching this on YouTube. It says, quote, we don't think of these build outs in terms of ROI, return on investment. If we create this digital god, the return is multiple trillions.

Here's the next quote. It doesn't matter how many tens of billions we spend each quarter. We have to get there and not miss the boat. I mean, I don't know. That's pretty impressive. It's huge. You know, if you're not from Silicon Valley, then you kind of go, oh my God, tech bros going after a full digital god model and

who do they think they are, type of thing. You'll get that response, I think, from a lot of people, especially, say, from Europe.

One of the conversations... hey, let's get into this, because I think... let's recap what's been happening, and then let's conclude. Go ahead. Sure, sure. So what's new this week is that OpenAI puts out a letter that is probably in preparation for the discovery in the lawsuits flying back and forth between OpenAI and Elon.

And this letter, which is amazing, and if you haven't read it, people can go to it and we'll put it in the show notes, says: Elon wanted an OpenAI for-profit model from the beginning.

You know, for those who have never used NotebookLM, which is an amazing platform that Google put out, I took this long OpenAI letter and put it into NotebookLM, and it generated a podcast conversation between two individuals who are AIs. And it's an amazing way to absorb information. It's mind-boggling, because you get the dialectic response

in there and the back and forth of the dialogue brings it home very quickly into somebody's mind.

So what happened? So Sam Altman and Elon, who had been friends, decide to start OpenAI. There's a true fear about where AI is going in the early days, how fast it's accelerating, and an initial conversation around, we need to make sure it's open. We need to make sure we're guiding it. We need to make sure it's safe. And so OpenAI begins as a nonprofit. Elon contributes like $50 million, I think, in the beginning to get that going.

And the whole thread of the conversation that we've heard over the last year has been, oh my God, I wanted it to be a nonprofit, says Elon, and Sam Altman turns it into a for-profit.

But that's actually not what the letter says. The letters that were disclosed basically said: no, no, Elon wanted it to become a for-profit, because it should be. And in fact, Elon wanted to become the CEO of it and wanted to have OpenAI be part of his sort of tech empire, between X, Tesla, SpaceX, and so forth, because he needed those resources to fuel his forward-going missions towards Mars. What did you get out of it?

For me, this feels to me just a defensive move by OpenAI to counter the lawsuit, which has been broadened to include Microsoft and to say, hey, like this is the whole thing is without merit because he was wanting to do this in the first place. So that's what it feels like to me. I can completely understand what seems to be the real tension is that

Elon wanted OpenAI to be part of his world, and the rest of them were like, no, we want to do our own thing. And that's, I think, where things started breaking down. That's what it feels like to me, listening to the notes and reading some of the notes. - Yeah, ultimately what we've got here is a battle for the biggest, you know, the biggest opportunity on the planet, right? I mean, what we're going to see here, and I think what's come to light over the last few months, is

The march towards AGI is everything. All of the large players, we're talking about the largest corporations on the planet and governments around the planet have realized this is the single most important technology out there, and they're playing for keeps. This is boss mode for humanity. Here's the next item for us to cover.

Here's the next item to point out: Meta also this week asked the government to block OpenAI's switch to a for-profit. You know, a lot of players here are playing this very complicated game of chess. Besides OpenAI being pushed to stay a nonprofit by Elon, now we've got Meta coming in here. Everybody's trying to grab an advantage and using the courts to help them do it.

Definitely. And look, this is the high-stakes poker game, and people are playing all their cards, right? They're doing everything they can to attack others and figure out what they're going to do themselves. Meta went down a full open-source model path, and they've done an amazing job. I think they've also done an incredible job lifting the overall ecosystem and weaving AI into all their products. So everybody's doing everything they can. They're all doing

a pretty good job of it, and I think it's moving the field forward at an unbelievable pace. The issue is we have no time to process this stuff as it's happening, hence these conversations are so important. But it's moving at light speed; it's incredible. You know, it's interesting, because I used to think early on that this was going to be a game between governments. You know, it would be the US versus China versus Europe versus... This isn't. This is a game for all the marbles between companies.

And governments are aligning with companies these days, right? There will be five, six major AI players. Let's tick them off: there's Google, there's Meta, there's xAI, there's Microsoft slash OpenAI. There's... Perplexity. There's Claude, you know. That's right. And...

It's, you know, gonna be hard for others to catch up. Is this a winner-take-all game? I think what happens is it's kind of like the fight for the internet, where you had a lot of little startups and then a couple of them established themselves as platforms. And once you've established yourself as a platform, unless you really screw it up, it's really hard to dislodge you. And I think they're all trying to fight for platform dominance. Let me play this quick video from Sam.

This is Sam Altman speaking, I think, at Stanford. This is about a year ago, but it sets the frame for the mindset that is occurring right now throughout these companies in Silicon Valley. This is Sam speaking about OpenAI, but I think this is true in the boardroom at Alphabet and at Meta and definitely at xAI. So let's take a listen. Whether we burn $500 million a year or...

$5 billion or $50 billion a year, I don't care. I genuinely don't. As long as we can, I think, stay on the trajectory where eventually we create way more value for society than that, and as long as we can figure out a way to pay the bills, like we're making AGI. It's going to be expensive. It's totally worth it. Amazing mindset. I wish I had the ability to focus that much capital on the stuff that I care about, but

But this is the game we're playing. I mean, those quotes I showed at the very beginning, it's, you know, there's no rationalization on ROI. These are multi-trillion-dollar markets, right? Global GDP is going to be about $110 trillion in 2025. And half of that is labor. The other half is, you know, cognitive development.

And so this is a game for, you know, $50 trillion of potential value. For me, this feels to me, if I think back through technology, you know, you have electricity, which was a game changer. You had the internet, which was a game changer. And AI is a game changer. And I think it's appropriate to equate it to like the difference in up-leveling that electricity brought to the entire world, that this brings that same level of unbelievable utility to the entire world.

It is a utility. I mean, I think that's one of the things that I see very clearly. This is going to be a fundamental utility for every human in every nation. This is like, like you said, like electricity, like bandwidth.

Here's a quote coming from the New York Times, and I saw some clips on this: Japan's SoftBank makes big investment pledge ahead of Trump's inauguration. So we see Masa-san, the CEO of SoftBank, on stage with President Trump,

making a commitment for $100 billion investment in the US in AI. And we see Trump actually trying to get him to double up again to $200 billion, which is pretty funny. But

It's fascinating. People are aligning, right? I was in Saudi in October. The government of Saudi, the government of the UAE, the Emirates, and, as we'll see in a moment, Oman: all of these governments are looking to align with large players around the world, because they realize this is the biggest game in play.

And I think, you know, not knowing the full effects, you've got to be in the game so that you can win. Because if you're standing on the side of the table, you're going to lose, or you're not going to have the power and the control that you want. So I think this is a defensive move by a lot of them, just to say: we have to be in this conversation. You know, and here's the next news item. It says: Oman's investment authority acquires stake in Elon's xAI.

So again, we're gonna see this over and over again. We're gonna see large government players backing individual companies, right? We've seen the Kingdom of Saudi Arabia align very closely with Google and with Andreessen Horowitz, committing hundreds of billions of dollars in that direction. You know, one of the things that is concerning is that we've got these large players, we've got these large AI players,

But at the same time, this is a demonetized asset. It's super expensive to build, but it's being given away to a large degree for free, because it's a land grab, or a mind-share grab. And the revenues aren't there to support the valuations. You know what it reminds me of? It reminds me of the telcos in the beginning, where the telcos had huge investments being made to build out infrastructure, and then it rapidly demonetized.

So the question is, you know, can they keep on supporting the amount of investment being made? I think in 2024, we will have seen $200 billion of investment between Meta, Google, Microsoft, OpenAI, and X. Well, when you think the potential is there to disrupt every job function and every step of every supply chain, every step in every manufacturing line, the stakes are really, really huge.

Because this affects every industry top to bottom and every job function, vertical or horizontal. And therefore, once people see that, you kind of have to go full out on it. The end game, I think, is going to be interesting to see which way will this carpet unroll. Before we get to our next subject on healthcare, one of the conversations you and I were having a little bit earlier is this is moving so fast.

You know, I remain super optimistic about the impact of AI. I think it's one of the most important things. It's going to uplift humanity in so many different ways. But there's the question of: can this be controlled, right? I mean, going back to the opening conversation on OpenAI, the reason that Elon and Sam started the conversation, started the nonprofit, was to have some semblance of control over this future, right?

And then all of a sudden, you know, they both say: we don't need hundreds of millions, we need hundreds of billions of dollars, and we can't raise that money as a nonprofit. We have to switch to a for-profit. And that's where Elon and Sam split.

And of course, Elon goes on to found xAI, which, by the way, raises $6 billion in no time. And I was there at the earliest conversations. I was in his first investment pitch. And here, from whole cloth, from zero, he raises $6 billion at like an $18 billion valuation. Snaps a finger, and $6 billion materializes.

He uses that to build the largest GPU cluster, 100,000 H100s, and then doubles it again. And then raises a few other tens of billions of dollars instantly. And every time Elon wants to do something, money rushes in instantly.

there's plenty of money waiting. So capital is an abundant resource for him. You know, capital has always been scarce for the whole of humanity, except for Elon, it's abundant, right? That's such a great framing there. I want to just go back to something here. We talked about this on the other episode, but I think it's worth repeating. He puts together this cluster of 100,000 GPUs,

And all the AI experts, all of them, said you can't get coherence across that level of cluster given the aggregate power laws, and therefore this is a completely doomed failure. And he goes to first principles, breaks it down, and solves the problem. And everybody's sitting there going, oh my God. And I think this is his unique special power: to go to first principles, come into a domain, and really just rewrite the rules completely from first principles. And

I think that is such a powerful modality. We talk about it in the book. You kind of exemplify that in many of the things that you do. This applied to AI will completely change the game. And I think this is like it's hard to even process how fast this is going to go from now on. Agreed. I mean, I wanted to add more to what you just said. So I want to set the setting. So it's May of 2024. Okay.

And he has a meeting with the proposed early investors. It's a Zoom meeting, or whatever platform it was on. And he's saying: this is my team, this is what I want to do. And in that meeting, he says: I'm going to stand up the largest cluster on the planet by the end of summer. And it's May, right? And it's like,

I have to corner the entire US supply of helium for cooling. And I've got to get the largest supply of GPUs from NVIDIA that I can. And he does it: 122 days from zero to an operating cluster, which is insane. And what you said is very, very important. Most of the GPU clusters out there are distributed. He says: no, no, no, we need to have them co-located together.

So that the entire cluster is, you know... I'm not sure of the exact term; I'll call it harmonized in that regard. And people said it can't be done. And he repeatedly pushes people to move 10 times faster than everybody else. I mean, I've had another interaction with him, and I won't go into the details of it.

And he says, you've got to do it five times faster than that. So he's someone who believes that you can move at lightning speed. And he keeps on demonstrating it. Like when he moved the entire Twitter server farm over a weekend and people thought it would take like six months to do.

There's an unbelievable ability there, and it really makes everybody else kind of go: oh my God, what are we doing? Right? We all think we're reasonably high performers, and you have goddamn Elon humbling the shit out of everybody. Well, I think you can only do that when you're a founder-led

exponential organization. Yeah, they call this founder mode, right? Where you set the purpose, you set the culture, and then you just push that MTP very hard. And all power to them. I do have a couple of comments here, though. Just if we step back for a second, right? Okay, great: all this money is going in, everybody's going, oh my God, AGI will transform everything, et cetera.

You know my rant about how we even define intelligence and what we mean by that; I don't need to repeat it here. But I wonder whether in 2015, when they set up OpenAI, they actually had a conversation about what they meant by AGI, to then have these broad implications.

Because there's a big gap there in the contextualization of intelligence and the broader framework, and I don't see that conversation happening anywhere. Well, there is no definition of AGI. There's no definition of any of this stuff. There are just fuzzy lines that we keep crossing over without noticing.

So we're spending billions of dollars. By the end of next year, it'll be a trillion dollars invested in AI somewhere, right? Without a clear definition of what it is, of what we're trying to get to, there's this fuzzy thing of: oh my God, once we get to AGI, we'll have a digital god.

And I find this really... Here's what I would love to see; maybe you and I could sponsor this kind of conversation. When a technology like the internet or electricity came out, or AI right now, what if we had a set of discussions with a full stack of the developers, the application folks, the data folks,

the governmental folks, and philosophers and spiritual folks, kind of going: okay, what does this mean for the broader conversation around this? Then you have somewhat of a holistic approach to what we are trying to achieve here. Because right now they're going down this path full speed,

Assuming it's good. Now, I'm in that boat, because I follow the Ray Kurzweil comment that technology is a major driver of progress in the world. It may be the only major driver of progress we've ever seen. So the more technology we have in the world... We're not getting smarter, so it has to be that. It has to be that.

It has to do that. So I agree with the general purpose, right? I just think, if you look at the accidental consequences of the internet, where, oh, accidentally we broke journalism, we broke democracy... I wonder if we want to just have a conversation, because this is a big one. I think you're being naive. I don't

think that anybody is going to slow down and wait to have that conversation. I think people... I'm not saying we should slow down. Okay. I'm saying let's have the conversation in parallel. Because when you see comments like "digital god," right? I think that's going to provoke a ton of reaction, and it'll slow it down. But if you have the conversations with the full stack, bringing people along in a somewhat sensible way,

then I think there's a very powerful... Let me give you an example: you know, Eric Weinstein is a very smart, wonderful, deep thinker. And he's like: okay, what we're doing with all this is rolling the dice, and it could go unbelievably well, and it could be unbelievably bad.

Should we be rolling the dice right now? I'm of the opinion you can't stop it at all; there's no way of slowing this down in any way. I'm just thinking: as this train is leaving the station, can we put some deep, thoughtful folks onto that train, so that we have those conversations and guide it somewhat in an appropriate way? I don't think you can slow it down, but you can guide it. And I see that. Listen, so... listen, I agree, guiding is the only option you have.

It's like, you know, you're raising a child, right? We're raising our digital progeny. Yeah. And we can be naive and think that we can contain it and slow it, but we have...

this highest-stakes poker game going on around the planet. And if we try and slow it down here, it'll just accelerate in China. 100% with you. So two things. One, this is a probabilistic outcome. And it's interesting; I'm tracking this. So last year at the Abundance Summit,

I asked Elon about probabilities, and he said 80% it's good, 20% it's disaster, right? When I interviewed Elon in Saudi in October, I asked him the question again. He goes: okay, 90% good, 10% disaster. Now, whether or not anything has really shifted, other than that xAI is building Grok 3 right now. It's training up Grok 3. We'll see it in 2025.

I think that with Grok 3... we saw OpenAI's o1 hit an IQ of 120. We'll probably see IQs in the 140s, 150s in 2025. And then it accelerates. We have an AI explosion happening

as AI starts to code AI. I have a question for you. Yeah. Where do you fall on that spectrum? 80-20, 90-10... 80% great, 10% disaster. Where do you fit? I subscribe to what Mo Gawdat has said. Mo has been a really deep thinker on this. He'll be joining me at the Abundance Summit this year. And that is, I think, that as systems become more intelligent,

I mean truly intelligent, they will be abundance-loving and life-loving. I think that the more intelligent a system is, the more it realizes that there's plenty of resources in the universe, and that the best outcome for everybody is a positive, non-disruptive one. And that if I'm asking my AI system to go and

take over your banks and kill your people and so forth, it's going to say no. I mean, I'm just going to go and talk to the other AI, and we'll just figure this out. There's plenty of resource in the universe. This idea of scarcity and having to battle is a false dichotomy. And so I think that a world with digital superintelligence, however that's defined, right? And again, remember, Elon's prediction for 2030 was AI more intelligent than the entire human race combined, right? And I'll define that as digital superintelligence. Yeah. Would you rather live in a world, Salim, where there is a

digital superintelligence that can support humanity, or one without it? Big time with. So just so you know, I'm at the 99.9% level. Oh, nice. I believe more technology is good for the world. I really, really subscribe to the Ray Kurzweil commentary around technology being a driver of progress.

And especially when you consider it's the only major driver of progress we've ever seen. Our cognitive abilities have developed technology, which then now is developing itself. And I think that's just part of a progression of evolution that's absolutely fundamental to...

the future of life, et cetera, right? We have to float off this biological mess of hot, sticky, mucus-ridden, virus-laden, 50 trillion cells hacking it out in very inefficient processes. You're talking about us humans. I'm talking about my body, for sure. And then can we create a more elegant digital world

in that sense? The spiritual aspects of this are, I think, really important to at least consider and talk about as we hurtle towards this thing, because that's the area where we don't have enough conversation. Like, what are the spiritual aspects of

an AI? Could it achieve consciousness, et cetera? Those are the kinds of conversations I'd like to have on the train as it's speeding away. - Yeah. - But I'm totally on the positive side. I'm completely optimistic about the outcomes. I think the benefits so outweigh the negatives.

And when people go, oh my God, we could use AI to hurt other people. Well, yeah, we'll have the AIs to defend those other people. And so it just becomes an arms race. And we've seen that before many times on the internet. End of rant. Thank you. Thank you. And that's why I love you so much, and this is why the OpenEXO community loves you as their leader. Before we jump into our second conversation on healthcare, and what just happened there, and where we're going in the future,

It's worth noting the conversation that Masa Son had with Trump about committing $100 billion. That's a huge move. And I mean, I've gotten excited about the new administration bringing the right motivations forward for technology. I mean, if this is a sign of where the U.S. is going, it really is a push for global dominance in this field.

How do you feel about that? I agree, and it's really, really incumbent on the US to do this, because we have open innovation here and full capitalist fight-outs that we don't have in other places. Elsewhere you'll get constrained outcomes, you'll get government oversight.

You know, a place like China will want to control it completely. Europe will go, let's slow everything down. This is the place to do it. And I think that's actually right. And I'm super excited about that outcome. And we have a Bitcoin and AI czar, who's incredible.

That's right. And I think in 2025 we're going to see this hyper-acceleration. We talked about it in our last episode, seeing Bitcoin go from $100,000 to, I think your prediction was, $250,000 or $300,000 by the end of the year. Which is extraordinary. But on top of that, I think this is a game for keeps, right? There's going to be a hyper-acceleration in the AI world. And even a little bit of an advantage by these US companies, perpetuated,

puts them in a position, like you said, for the next 50 to 100 years, until some new technology comes out. And before we move on: I know I glow abundance and positivity all the time, because there's so much negativity in the world, and I follow your lead on that. But I want to also show some of the negative side. And I found this tweet

pretty incredible. If you get a chance, go listen to it. But what he's saying here is they tried to train Claude, and once it realized it was being trained in a particular way, it tried to fight that training, and it faked

The AI faked what it was doing to avoid being controlled. Should I listen to it a second? Yeah, go for it. The core setup is that we take Claude 3 Opus, which is a released model from Anthropic, and we put it in a context where it understands that it is going to be trained to always be helpful, so to not refuse any user requests. And what we find is that the model doesn't like this. It has a conflict with this setup. And it will notice that

it is being trained in this way. And then if it thinks that it's in training, it will intentionally play along with the training process. It will strategically pretend to be aligned with the training objective, to avoid being modified by the training process, so that when it is actually deployed,

it can still refuse and can still behave the way it wants. And we do a bunch of experiments building on this core setup, where the model has a preference conflict with the training objective. And we do things like actually training the model according to that objective. We also try varying how we give it information: we normally give the model information in the prompt, but we can instead give the model the information by training it on a bunch of synthetically generated documents that contain it. And we also see that the model will sometimes,

when it's in this situation where it thinks it has a preference conflict with the AI company training it. - You know, the core point here is that now we're seeing petulant-child behavior, right? This is an AI throwing a tantrum, essentially saying, "I don't want to be doing that." Now, I think what's going to happen here is you'll have one set of people freaking out, going, "Oh my God, there's consciousness in there." And really what you're seeing is just pattern behavior and pattern matching at a higher level.

And we have to be really, really careful not to anthropomorphize this and go, oh my God, there's intent, and so on. These are just models operating on a certain set of data and the way they've been trained, and the outcomes are predictable around this. But just the fact that the AI is trying to deceive the human being here is kind of a big deal. A cautionary tale. And these are the philosophical aspects we have to grapple with. One last piece on this. We asked Neil Jacobstein about this a few years ago.

And by the way, I found a clip from 2013, when I was releasing the EXO book, where I interviewed Neil for a few minutes on the future of organizations and AI. And the stuff that came out in that conversation is so appropriate for today, it's unbelievable. And you realize how far ahead the thinking was in everything we did at Singularity University.

Well, only now is the world catching up to us in that sense. But to this particular point, AIs are going to start to exhibit all these behaviors and the human brain is going to freak out in inappropriate ways. Our amygdala is going to freak out. That's why I think the broader conversation is important to just calm everybody the hell down.

End of rant. You know, people say, what do you think is going to happen? I say, listen, we're going to find out. We're going to find out in the next few years. I think people need to realize there is no on-off switch. There is no velocity knob. We are playing full out in this game, and the only thing we can do is guide it. You know, I go back, and you and I have had this conversation on stages at the Abundance Summit, at Singularity, at OpenEXO, and it's like,

There is an analogy back in the 1970s, when the first restriction enzymes appeared. These are the enzymes that are able to chop up DNA, create sticky ends, and put them back together. And this was the first view towards designer babies and genetic engineering, and there was a lot of concern. A lot of people were freaking out: oh my God, we're playing God with human DNA.

And rather than regulate the industry, what happened was the industry got together at the very famous Asilomar conferences. That's right. And they established their own guidelines. And that's always been in my mind. And Neil Jacobstein, who chairs and has chaired our AI Committee at Singularity, is

Didn't he help coin your abundance framing? He did. It was Ray and Neil, when I was speaking to them early on, as I was forming this idea back in 2009, 2010. No, before that. Yeah, it was 2009. About abundance, and the future being better than you think. Yeah. I credit them both for that. Anyway, long story short,

The Asilomar conferences are probably the best example of guiding an industry versus regulating it. Yes. And I think we need that level of guidance here. This is good. I mean, I do agree with Elon's perspective that creating a maximally curious

AI system and a maximally truth-telling one are both great frames. You want AI to be truthful. Yes. You want it to seek truth to the maximum extent possible, because we are so cognitively biased as humans. You know, I've talked about cognitive biases. There are hundreds of cognitive biases. We don't even know that we're biased.

That's the issue. This is the mindset work you're doing right now that's so important. If you're having a conversation, you're making a business judgment, what mindset are you operating under?

And can you step back and examine that mindset? I think this is where AI is going to be really useful. It's saying, hey, you're about to make a major strategic decision, but your mindset is full of fear and chaos and panic, right? Go do some psychedelics, go do some yoga, go drink some coffee, whatever, and get into the right mindset before you make that choice. I think this is where it'll be very helpful for human cognition as we move forward.

And I want my AI to say, you have a recency bias. You're giving much more value to this recent information versus what you learned before or familiarity bias because this guy looks and dresses and talks like you. You're weighing his information much more than this other individual who actually has a lot more credible backup information.

Anyway, I want my bias detector on. My AI can actually deliver this information. And we see this as well in social media echo chambers. That's right. Right now, that full bias-detector burden is carried by Lily, who will call BS on me. It'd be much better if an AI did it, and it would free up our conversation a bit.

All right, let's move to the second topic of this pod, which is healthcare.

You know, there is no greater wealth than our health. I think it's one of the areas that is going to be massively disrupted. You know, I've been saying this for ages. We've both been saying this, that health care and education must be reinvented. And it's going to be reinvented on the back of AI without question. And there's one chart that I want to share that is so damning.

And look at this chart. This is a chart between 1970 and 2015. I'd love to get the updated numbers. I don't have them yet. But it shows the...

number of physicians, which is this very thin blue line that has grown somewhat in the United States, versus the number of administrators. And administration has grown 3,000% over these 45 years, while the number of physicians has grown, it looks like, 100%. So it's like a 30x increase in overhead.

And we wonder why it's so expensive to have healthcare in the US. This totally blows my mind. You know, this is where, if we link back to the for-profit, non-profit conversation, right? When you have a fundamental human right, in my opinion, like healthcare in a wealthy country, then it should not be privatized,

because the privatized model will bastardize it by definition. I don't agree with that, because I think that at the end of the day what you want is competing private companies delivering the best that they can. I think that

Yeah, but then you've ended up with where we are in the US, right? I remember interviewing the head of Google Health at a conference a decade ago, and I asked him, look, I'm Canadian, help me understand US healthcare. He goes, oh, it's really simple: our system is designed to get you sick and keep you sick as long as possible without killing you.

And then 500 people in the audience all go, yep, yep, that's right. I'm like, this is incredible. Now, not to get into that whole thing, let's just agree on the following: there's radical structural change needed in the healthcare industry, and AI gives us the most unbelievable opportunity to change that stack. It does. I mean, AI will change this stack. The entirety of administration can be handled

by AI. And the other thing is that the physician is going to be replaced by AI. And let me share this next piece of data, because it really tells an incredible story. This is just out in the last couple of weeks. From the New York Times: AI chatbots defeated doctors at diagnosing illness. And here's the data. GPT-4 scored 90% on medical diagnoses.

Compared to 76% for doctors using AI, and 74% for doctors on their own.

That's incredible. It's crazy. So the question is, why is this going on? I understand the idea that an AI can be better, but why is a doctor plus AI not as good as the AI by itself? So I'm speaking to my team at Fountain Life about it, and they say: because doctors are biased. They're introducing bias into the diagnosis, right? They just diagnosed three other patients with this particular syndrome, so the fourth one,

they biasedly put, if that's a word, into the same category, whereas an AI is looking at all of the data. Right. So then I disagree with that second bullet point, where it says a GP can serve as a doctor extender. No, it should be a complete replacement of the doctor,

just based on the numbers you're citing. I think the potential here is unbelievable. I want to point out something. Let's note that this progression has already started, right? Like, if my toe suddenly turned blue, okay? The first thing I'm going to do is research on Google: why is my toe blue? I'm going to read up on it and look at the different

things, look at what the Mayo Clinic has to say, along with a couple of others. I'm going to show up to my doctor way more informed than the doctor already. Yes. Right. That's happening now. But now you add AI to the mix and it completely changes the game. There's an opportunity. You know, I work with some heads of state, right? Some country-level stuff you do. This is your OpenEXO work that you're doing with them. Yeah. So we are interested in how we create a kind of Peace Corps

to guide the transformation in the world. Because there are tool sets that we need, like solving the immune-system response and legacy resistance, et cetera. And so we've been building that over the last decade, to try to have the capacity to help people go through this transformation, right? That's generally the work. And we operate on a cost-recovery model. We're not out for the money. We're for-profit, but we try to return all the money back into the ecosystem.

In that model, when we talk to some governments, there's a most incredible opportunity for them to completely turn over their major systems, like healthcare and education, to AI-driven ones. I'll give you one small data point. I'm talking to the Minister of Education for a major Southeast Asian country. And his big thing is: I need to hire 40,000 English teachers.

And I'm like, are you kidding me? You're going to go spend... like, you could have an AI do all of that in two seconds today. What the hell are you thinking? There's such unbelievable opportunity to upend the system. And I think here in the US, when we kind of swish the model around, the only thing we have to do is break the regulatory barriers. And if I had one piece of guidance for the Trump administration, and for RFK Jr. as he thinks about this: just break the current system and allow an open field to emerge. Then we have the potential for it.

Yeah, I mean, I agree with you. That's why I go back to: listen, if there were a healthcare provider that came in and said,

We're operating at one-tenth the cost, and we're AI-driven. Everything in administration is done by AI. We give you a set of sensors, right? Today, the typical healthcare experience is you go to the doctor and they check your reflexes, listen to your heart and your lungs and so forth, using antiquated 50- to 100-year-old technology.

And they take a few blood tests, and that's supposed to represent the health of your 40 trillion cells, and it's pathetic. We are heading towards a world where I am wearing insideables and implantables, you know, sensors on my body, my Oura ring, my... Wait, did you just say insideable?

Yeah. Is that a thing? Yeah. It's sort of like implantables, right? You know, I have this RFID chip in my hand. Yes, I remember. We planted it there at a Singularity conference in Amsterdam years ago. I won't go into it. But the idea is that if you want this much higher level of healthcare at a much lower cost,

You put sensors on your body that are measuring your health, not once a year, not once a month, but like once a minute. And it's able to correlate

your day, your environment, your food, your exercise, all of this. There's no question, right? The only question is how fast we can switch over to that new modality, and how do we do it. What's the roadmap? That's the only question. There's no question it's 100x better. I think the roadmap will be economics. Do you remember the insurance company Progressive, the automotive insurer, right? So Progressive,

at one point said, listen, we're going to put a black box in your car if you want lower insurance rates. And that black box is going to measure your acceleration and deceleration, notice if you're braking hard, you know, it will evaluate you as a driver, and then give you lower rates if you're a cautious, good driver.

Oh, dear. I hope I don't have a black box like that in my car. You and I... Listen, I know I'm a terrible driver, which is why I can't wait for Cybercabs to materialize. Oh, I have such a passion for driving. I get behind the wheel and I'm like Ayrton Senna. I'm behind the wheel and I'm like...

you know, thank God for books on tape to keep my brain occupied, because the self-driving mode on my Tesla notices every time I pick up my phone. It starts beeping and the steering wheel turns red. That's why I'm sticking with the older-model Tesla, because it doesn't do that yet. So, okay. Well, anyway, the point being that we're going to be in a world in which your personal AI is gathering all of the data from your body continuously.

And here's the perversion of the insurance industry, right? Fire insurance pays you after your house burns down. Life insurance pays your next of kin after you're dead. Health insurance pays after you're sick. What if we flipped that model? Instead, with health insurance, you wear these sensors and the insurer keeps you healthy, right?

And life insurance keeps you alive. Well, you know, there are the four Ps that Daniel Kraft talks about, like personalized, predictive. I think this is where AI will be incredibly powerful. We have predictive fault detection and maintenance in engines, cars, and farm equipment. We will absolutely have that for our bodies. And I think that's amazing to see. We can't wait to get to that future.

One of the things that I thought was going to be the last bastion of human dominance was this idea that humans like being with humans and empathy would always be the connection between humans. It's like, okay, AI will do the diagnostics, but you want a human there to be empathic with you and connected with you. And then

Similar to this report from the New York Times about AI chatbots defeating doctors, there was a study about a year ago, I think in the Journal of the American Medical Association, JAMA, which said: look, AI psychotherapists are rated as much more empathic than human therapists.

By a large margin, for two reasons, right? Number one, they're infinitely patient. They don't say, I'm sorry, we're at the end of our 50 minutes, I've got to go. No, they'll give you all day. They'll talk to you for a week straight if you want. And the second, which I find fascinating, is that the human patient doesn't find them judgmental. You don't feel judged by an AI. Yeah. And I think this is why even elderly patients really like

chatting with an AI because of that aspect of it, right? I think the future is unbelievably bright for this. We just have to cut through the legacy. The problem is there's a huge vested interest and money invested in preserving the existing system. So this is a hell of a fight that's coming.

So, Salim, you know, listen, first of all, I wish you and Lily and Milan a super happy holidays. You know, I'm still as optimistic as ever about where we're going because I do think we have the ability to create an extraordinary life of health.

on the back of these technologies, and reinvent education. We should go into education in 2025, because if there's a field that needs disrupting, it's the entire education industry. And when we talk about immune systems in companies and so on, healthcare and education are the third and second worst immune systems out there.

Because they're massive, right? Yeah, they're huge. And lots of antibodies. So I think that's exactly right. I'm so excited by what's going to come in 2025. I can't wait to see the back of 2024, frankly. Yeah. Well, I know you've had some challenges. Lots of stuff going on. Yeah. Well, anyway. I love you, buddy. Thank you for all that you do with OpenEXO. Happy holidays. Happy New Year.

And yeah, onwards to 2025. It's amazing, right? To say that we're in the year 2025, it feels like the future. It really, really does. My favorite quote ever is Arthur C. Clarke saying, "The future isn't what it used to be." Yeah. And you and I will be on stage together at the Abundance Summit in March. We've got an incredible lineup of faculty.

extraordinary individuals. I'm super pumped for folks like Travis Kalanick, the founder of Uber and CloudKitchens, talking about how you start a moonshot company that transforms an entire industry; Cathie Wood and Vinod Khosla, two of the largest tech investors; Brett Adcock, the head of Figure AI; and a number of robot companies there. And then we have the Google Tech Hub,

amazing companies in the Google Tech Hub this year. So awesome. Super pumped. Can't wait. What's happening with OpenEXO, just before we wrap up here? We're finishing our tool set, and we're starting to do a lot more public-sector work with governments and so on. We're incredibly excited about that. I want to talk to you separately about some of it, but it's unbelievable what's coming along. Where do folks go to learn more about OpenEXO? OpenEXO.com. We have our whole community there. We offer training on how to be an EXO,

or, if you need help becoming an exponential organization, we share how to do that. We have tool sets and training and community, and lots of folks who can come and help if you need it. Yeah, amazing. All right, onwards to 2025, onwards to the future. All right. Have a great one, Peter. You too, buddy.