
AI Experts Debate: AI Job Loss, The End of Privacy & Beginning of AI Warfare w/ Mo Gawdat, Salim Ismail & Dave Blundin | EP #176

2025/6/3

Moonshots with Peter Diamandis

People
Dave Blundin
Mo Gawdat
Salim Ismail
Renowned exponential organizations expert, serial entrepreneur, and technology strategist; founding executive director of Singularity University and founder of ExO Works.
Topics
Mo Gawdat: I believe AI will cause mass unemployment in certain industries within the next two to three years, with unemployment possibly reaching 10% to 40%. The existing capitalist system and social ideology keep us from responding to job loss quickly. Universal basic income (UBI) sounds like socialism, so it is hard for governments to accept. If people have no purchasing power, is economic growth or productivity growth even meaningful? We should not always paint the optimistic picture; we should think about the worst-case scenario and guard against it.

Salim Ismail: I think the speed at which people adapt to change is underestimated. Governments are underprepared when it comes to experiments with universal basic income (UBI) and four-day work weeks. If there were a score called the "personal entrepreneurial quotient," people would find a way out. When you give people opportunity, they seize it.

Dave Blundin: I think many jobs will be lost in the short term, but even more opportunities will be created at the same time. We are entering an intentional world, and the design of the future is up to us. We must think seriously about how to design a world that is good for humanity. If we focus on creating things that make people happier, more productive, and give them a sense of value and purpose, we can avoid the dystopian outcome.


Transcript


In my mind, jobs will be lost. When they are lost, they're going to be lost massively. Far more people are in denial or doing nothing than are overreacting. I do think governments are woefully underprepared. Now it comes down to, are we going to design a world that is good for people or not?

We all know that AI will go out of control within the next five to ten years. And yet we're building autonomous weapons after autonomous weapons, knowing for a fact that every other opponent in, you know, anywhere in the globe is building them too.

Trump taps Palantir to compile data on Americans. This is not a tech problem. This is an accountability problem. What can I build that is making people happier and more productive and feeling valuable and having a sense of purpose? And if we focus on that, we actually can avoid the dystopian outcome. We have the ability to create an intentional future. This future is not happening to us. We have the ability to guide where it goes. Now that's a moonshot, ladies and gentlemen.

Everybody, welcome to Moonshots and our weekly episode of WTF Just Happened in Tech. You know, it's the real news going on. Those of you who go and watch the crisis news network, what I call CNN, you can learn about all the crooked politicians and all the murders on the planet, or join us here to learn about the technology that is transforming every aspect of our lives, every company, every industry, every entrepreneur's outcome.

I'm joined by three moonshot mates today. Dave Blundin, head of Link XPV. Dave, good morning. Looks like you're at home today. At home, yeah. Princeton graduation on Tuesday, MIT graduation on Thursday, and then off to Stanford tonight. All right, fantastic. Look forward to seeing you hopefully this week. Salim Ismail, the CEO of EXO.

OpenExO. And Salim, where do I find you today? At home, just outside New York City, and looking forward to this episode. Yeah, yeah, me too. And good morning or good evening, Mo. Mo Gawdat.

The one and only Dubai. Is that where you are? Dubai today, yes. Happy to be inside because it is boiling outside. Ah, yeah. I'm in Santa Monica, just back from a few days in Hong Kong. You know, it's crazy. Literally, you have no idea where your friends are these days, literally around the world. We're just tied together by this digital network of Zoom and multitude of people.

Anyway, a crazy week in AI and in a whole slew of different technologies, and I'm excited to get into it. Before we start, anything new, Dave or Salim, you want to add? Well, first of all, thanks for getting up at 5 a.m. to do the podcast. Hard to tie together Dubai and L.A., but it is much appreciated. So I'm in the middle zone here, so it's very easy for me. But thank you. You're welcome. You guys are worth getting up for.

All right. Let's jump in. As always, every I don't know, it feels like every week is going at a pace that would have been unbelievable. I'm just trying to remember back 10 or 20 years ago, the number of breakthroughs or announcements.

that were occurring on a regular basis. And I can't find any analogy. I mean, I remember in the dot-com world, there were all these crazy new dot-com companies being announced every week. But here, it's not just crazy companies. It's...

It's fundamental capabilities that are coming online. And also we're going to see later in the podcast a predicted trillion dollars a year of CapEx going forward, which I checked. That's the equivalent investment that we made mobilizing in World War II, inflation adjusted.

And so if it feels crazy, it should, because it's historic in scale. All right. Let's jump into a subject on a lot of people's minds. We've heard a lot of news about this. This is AI and job loss. We'll begin with a short segment of Dario Amodei,

the CEO of Anthropic, talking about job loss. Let's take a listen and then discuss it. I really worry, particularly at the entry level, that the AI models are very much at the center of what an entry-level human worker would do. A little bit more worried about the labor impact simply because it's happening so fast that, yes, people will adapt,

but they may not adapt fast enough. And so there may be an adjustment period. In terms of inequality, I'm worried about this. There's an inherent social contract in democracy where ultimately the ordinary person has a certain amount of leverage because they're contributing to the economy.

that if that leverage goes away, then it's harder to make democracies work, and it's harder to prevent concentration of power. And so, you know, we need to make sure that the ordinary person maintains economic leverage and has a way to make a living, to make our society, our social contract, work. And that's why you've previously described a future where cancer is cured, the economy grows at 10 percent a year, the budget is balanced, and 20 percent of people don't have jobs.

The quote you just splashed is maybe too optimistic, maybe too sanguine, about the ability for people to adapt. You know, people have adapted to past technological changes, but I'll say again:

Everyone I've talked to has said this technological change looks different. It looks faster. It looks harder to adapt to. It's broader. The pace of progress keeps catching people off guard. I think the benefits are massive. And, you know, we need to find a way to, you know, to achieve benefits and mitigate or prevent problems.

prevent the harms. And, you know, the second thing I would say is, look, there are, as you mentioned, six or seven companies in the U.S. building this technology, right? If we stop doing it tomorrow, the rest would continue. If all of us somehow stop doing it tomorrow, then China would just beat us. And I don't think China winning in this technology is, you know, I don't think that helps anyone or makes the situation any better.

Every week, I study the 10 major tech meta-trends

that will transform industries over the decade ahead. I cover trends ranging from humanoid robots and AGI to quantum computing, transport, energy, longevity, and more.

No fluff, only the important stuff that matters, that impacts our lives and our careers. If you want me to share these with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important meta trends 10 years before anyone else, these reports are for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive companies.

It's not for you if you don't want to be informed of what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends. That's diamandis.com/metatrends, to gain access to trends 10-plus years before anyone else.

All right. A lot there. We've heard this, you know, at an increasing pace and intensity. Dave, thoughts on Dario's commentary here? Yeah, Dario looks really worried there, doesn't he? He's got the wrinkled forehead, and then Anderson Cooper looks even more worried. And, you know, rightfully so. I think the short-term job displacement is imminent. When I

talk to people at random, you know, people in power, far more people are in denial or doing nothing than are overreacting. So it's actually good for Dario to be saying these things to at least wake up the masses to the immense amount of change that's imminent.

I do think there is a lot of short-term job loss coming, but far, far, far more opportunity being created. So it's kind of a foot race between creators, entrepreneurs, reinventing what people do

versus automators coming and just automating away white-collar jobs, and then ultimately robotics and blue-collar jobs. Mo, you've been speaking about this for a while. Is Dario overplaying this or is he spot on? Oh, no, he's underplaying it for sure. Underplaying. My prediction is 10%, 20%, 30%, 40% unemployment in some sectors. In what time frame? In the next two to three years.

I, you know, I think there are always three questions to answer. One is: does anyone on this call believe that the technology is not going to catch up to some of those jobs, like a graphic designer or a video editor, for example? Those sectors are gone. I mean, today, with Veo 3 giving you

a minute of video that's better than Avatar for 17 cents, you can create the movie Avatar for around $1,500, even if you make mistakes on the way. Right? So, you know, I don't know how we can save those jobs, to be quite honest.

If that's the case, then the next question becomes financial, right? Because, you know, we are stuck in a system of capitalism where the entire profitability of a business, and the legal requirement of a CEO, is to prioritize shareholder gains, and accordingly, I do not see a situation where people will be given money, or shorter working weeks, you know, paid the same.

The third is ideological, to be very honest, because even things like UBI sound quite a bit like socialism or communism to me. So there will be quite a bit of resistance before we can get to the point where governments accept that these are systems they will adopt. So in my mind, jobs will be lost, in some sectors earlier than others, and we can name quite a few of those. But when they are lost, they're going to be lost massively. On the other hand, the ideology and the existing system will not allow us to replace that quickly enough, because we're not awake to it yet.

And I think the more interesting one in the statement that Dario mentioned is that he says 10% economic gains. I wonder, because how much of the U.S. economy is actually consumption? 62% plus of the U.S. economy is consumption. So with people having no buying power, is that economic growth, or productivity growth without buyers to buy what we make? Yeah. And one of the arguments, of course, is you're demonetizing the product's cost, because a latte is now made by a robot instead of a human, and it's, you know, a quarter of the price. Salim, you've been...

you know, in agreement on this. Anything else you want to point out? I'd like to take the counterpoint, you know, which is when we see like a huge raft of people standing at the job lines or at the food banks, et cetera, then I think we need to worry. I think we're underestimating how quickly people can adapt. Let's say I'm a... Let's use the video editor. If...

By the way, there's a video editor listening to this video right now. That's funny. But the minute that you automate that, the video editor moves and does a whole bunch of other stuff that's necessary for producing a podcast like this, right? There's lots of other work to be done. I still see – I go back to the 1970s bank ATM example, which we've talked about before –

I do think governments are woefully underprepared. We should be running a ton of experiments on UBI or four-day work weeks, managing that, getting used to that paradigm, and knowing how we will roll it out if it needs to be rolled out. So why aren't they doing that? And, just as a separate thing, we do almost zero experimentation in government, and we could be doing a lot more of that. I think that's one area to look at. But, you know, we'll talk about truck driving in a bit, but

when you go talk to a trucking company, which I actually went and did talking about, okay, there's all these 3 million jobs that could be lost, et cetera. The truck driving company goes, I would hire a thousand truck drivers today if I could. I just don't, they're not there. I need big automation to do the work. So I kind of tilt towards that side. Now I tend to be biased on the optimistic side. So I will grant you that. Let's see how this works out over time. Let's throw some numbers out here.

just from the Bureau of Labor Statistics. So 11% of the workforce is office and admin jobs, which have a very high probability of going away.

6% are business and financial operations. 7% are management. 6% are education, training, library. 6% are healthcare. 9% are sales-related jobs. So there's large swaths of the labor force that, at least according to my search, are likely to be automated jobs.

And the question is, can they all be upleveled? Right. So, just in the quick research I did, we're talking something on the order of 40 percent of jobs that have a reasonable probability of being automated away over the next three to five years. And we'll get to this conversation a bit later. The issue is not whether they can be upleveled to a different position. The question is the social unrest in the interim:

how hard is that going to hit society? Dave, what are you thinking? Well, you know, we're moving into an intentional world. You know, we evolved in a world dictated by nature. And then we went through this transition where we're in right now, but the future is our design. It's not dictated by tidal forces. And it drives me nuts when the economists, you know, are extrapolating and predicting, but they never reference self-improvement.

They never reference the exponential rate of change. And the intentionality of the world design is completely dominant from here forward. So it's what we decide to do. You know, I think Dario worries all night

about CBRN, chemical, biological, radiological, and nuclear threats from AI. And he's dead right. If you unleash AI into the hands of 8 billion people, some crazy person out there is going to turn it into a weapon. You have to actually put some thought into this design. But that was inevitable. That started with the nuclear era. And so now it comes down to, are we going to design a world

that is good for people or not. And so I think it's completely in our control. I also really believe that, yes, there's huge amounts of job displacement coming because, as Mo pointed out, the natural capitalist action is to say, what can I automate away, reduce the cost by 99 percent? That all becomes bottom line profit. So the valuations of companies that automate are going to go way, way up and create a huge amount of wealth.

Where does that wealth land? And, you know, as I've been saying on this podcast, it's naturally going to land in relatively few hands if nothing changes. And that's what's going to create all kinds of social unrest in transition. But the amount of value and the greenfield opportunity is so much bigger than the amount of job loss. And so if we're quick and intentional and we turn a lot of that AI horsepower into working on what can we build,

You know, if I can write three million lines of software in a single night, that's the equivalent of hundreds of millions of dollars of R&D in a single night. What can I build that is making people happier and more productive and feeling valuable and having a sense of purpose? And if we focus on that, we actually can avoid the dystopian outcome. I love that: an intentional future. Let me move forward to this next set of slides here.

Not to go into detail, but we see a plan for Tesla to roll out Model Y cars, fully autonomous, aiming for delivery in June.

So their rollout of the robo-taxi begins in June. There won't be many cars. They're testing it out. I took my kids on a Waymo ride here in Santa Monica over the weekend, and they just had a blast. Put them and their friends in a Waymo, and we just drove around. And it felt like a carnival ride for the first few minutes, and then it felt completely like

extraordinary end-to-end experience. So these will roll out, and we're going to talk about the number of drivers that are taxi drivers, Uber drivers, the displacement here. This next article is about truck driving, and this makes the point made earlier: 18-wheelers are on the Texas highways, driving themselves already. You know, just a quick video for a microsecond.

But at the same time, the US is facing historic driver shortages and recruitment struggles. We can't get enough drivers, as you said, Salim. So let's talk about this sector for a moment. Salim, do you want to kick us off?

Yeah, two points here. First is, you know, before Uber came along, we didn't notice that there was this huge labor liquidity opportunity. And then Uber comes along, and all of a sudden a single mother can drop her kids off at school, drive for four hours,

pick them up again that afternoon, right? And have a kind of a functional, much more functional world than before. And we soaked that up very quickly. We didn't notice that. We didn't notice it on the abundance side. I don't think we'll notice it as much as we automate also. I'm still banking and hoping that my 13-year-old will never have to get a driver's license. So we'll see when the curves hit of autonomous driving versus...

people wanting to. I think we'll just do, as we've seen before, a ton more driving. And we'll just have a lot more little road trips and little errands that we didn't have to do, which now can be done by a Waymo or a Tesla. And I think we'll just have what we've seen historically, repeatedly: when you automate, you increase capacity, you don't decrease traffic.

And so in the truck driving example, I think we'll see a ton more truck driving that's autonomous and the amount of truck drivers won't change very much. That's my prediction. Let's see if I'm right or not. Well, Peter. Yeah, good, Dave. Well, you're going to love being in L.A. where the traffic's notorious, you know,

This also enables coordinated traffic. You know, our good friend Lee Hetherington, from back in the MIT days, did all these traffic simulations back when he was an undergrad. And the roads are most efficient at about 45 to 50 miles an hour with back-to-back cars. And then they just jam right after that.

But the self-driving cars also enable intelligent traffic flow design. And that's actually going to increase the capacity of their existing roadways quite a bit. I'm sure everyone in L.A. will love that. I mean, the implications of self-driving cars on the environment, on being able to move electric battery packs all around the city, being able to get rid of parking.

You know, every garage at a single-family home could get turned into extra storage or a living room. In L.A., 60 percent of the land area is parking spaces. It's insane. It truly is. So if you look back 110 or 120 years, when you had the transition from horses to the Ford Model T, that transition was dramatic over the course of 10 years.

I mean, the value proposition for a car was so much better than a horse, right? And the amount of horse manure was threatening society at an extraordinary rate, and then it disappeared. The question is, when I look at the Waymo, it's an expensive car, right? The Waymo is coming in at something like north of $150,000. So you're not buying them and putting a fleet out.

The cyber cab, if it really comes in at 30K or below, I could see Uber drivers buying a fleet of cyber cabs and having cyber cabs work for them. But it's going to take that kind of a price point to really do a transition to the point where

I don't need a car anymore. My AI is ordering it in advance of when I need it. I walk out the front door. It's waiting for me because my schedule is known by my AI. Mo, thoughts on... I mean, well, I love you all. You know that. So please don't be offended by what I'm about to say. Speak your mind.

All that you guys talk about is problems of privilege. It's like, ah, my traffic jam. You know, I want to make sure that my cab is waiting for me outside. And just go tell those things to the cab driver that actually is feeding four and working two shifts. Right. And this, I agree with Dave 100%. We have a choice to design our future.

Now, when you really think about it, and when wonderful humans like you are thinking this way, what do you think the choice will be?

Your question, Peter, was how that would impact civil unrest. Well, imagine if they heard this conversation, and how careless we are about their jobs, saying things like, yeah, they'll figure something out. I've heard that a million times. What will they figure out? I want someone who tells me that we will find new jobs and upskill them. Tell me what those jobs are, so that we start upskilling them. Can I give an example here? Yeah.

So I have a friend. When I was living in Miami, I met an Uber driver, and I started playing tennis with him, and we had a kind of fun interaction, and it was fascinating. He started driving for Uber, and then the amount of income dropped too much. So he started driving for Lyft. Then he did both for a while, and then it just wasn't worth it to be driving that much. So he starts renting out his car on Turo.

And then he finds, hmm, I can do this. And he starts renting four cars: buys four cars and rents them all out on Turo. And then he helps a friend with his Airbnb rental, managing that, taking a cut of that thing. And over a period of like three years or so, he navigated all of these different dynamics. Wherever there were opportunities, he would go grab them, et cetera, et cetera. And I think it speaks to the enterprising and entrepreneurial nature of an individual.

If you had a score that was called the entrepreneur quotient of an individual, they will figure it out. We talk often, Peter and I, about mindsets, right? If you drop Elon Musk into a desert with no money and no communications, he'll figure it out. He'll figure out how to get out of there and do something. And make a rocket to go to Mars. And make a rocket out of that sand. And I think when you give people opportunity, this is why I think technology is so amazing. It speaks to Dave's earlier point.

When you make this opportunity available, people are going to go for it. They're going to figure out, wow, AI can automate code. What could I automate? And they'll start doing that stuff. Then when this fellow got blocked by Turo for having too many cars or whatever, he created multiple IDs and was doing that. You know, people are incredibly enterprising if you're able to turn on that switch. I think we're underestimating that capability. So I love that, Salim. You know, I'm going to come to this point a little bit later.

later in a few topics: I think the single most important job for the future, when people say, what should my kid become? I think the only job that's going to survive in the future, down the line, is entrepreneur, and we have to reteach our kids how to think this way. You know, we've had an entire civilization whose educational output is to train kids to get a job,

rather than train kids to figure out what the opportunities are and create something around them, because the tools were not democratized. Well, the tools are democratized now. And so how do you train kids and adults to go out and find jobs? I put up this slide here. These are drivers by category in the U.S. And I'm sorry, we have a massive international viewership, but I'm defaulting to U.S. numbers here. 3.3% of the U.S. workforce are drivers.

There are 2.2 million truck drivers. And down at the bottom on this list, we have delivery drivers, Uber drivers, bus drivers. At the bottom is taxi drivers at 200,000. So the number of taxi drivers has dropped precipitously. I do think, Mo, to answer your question, there is a future in which

drivers are allowed to finance and purchase these autonomous cars and they become managers of fleets of autonomous cars. These cars are out there earning a living on behalf of those drivers. This is definitely the American way, right? The American way is that we're going to enable everyone to buy a car, to make money on it without doing anything. Is that really true, guys?

Like, honestly, if there is a margin that allows this guy to make money, why wouldn't Uber buy those cars? They probably will buy many of those cars. Yes, sir.

The other question we're not asking, and I was told at the beginning of this briefing to be the extreme on one side, so please understand that. The other question we're not asking, and we definitely need to ask, is that this guy that's been renting those cars out, Salim, which I think is a fantastic example, is renting them out in an undisturbed economy where people have the purchasing power to rent them out from them.

What kind of entrepreneur would make money in a UBI-based environment? What does that mean to a lot of people who don't have the purchasing power to buy from an entrepreneur?

Yeah, no, totally. I think I could answer that question. So what I found fascinating about this fellow, because every week we would play tennis and I would just track what he was doing, he found that certain types of cars were not renting at all because of that time of the year or that type of tourist visiting Miami or whatever. And so he was juggling constantly which cars he had or didn't have in his little fleet and adapting as it went along. At some point, he found that small SUVs were renting like hotcakes.

And so he started working on that. And then he had to pivot again. And he just managed to navigate himself. And the question came up, what happens if you start? He got to a point where he was making enough passive income off these things that he didn't have to do the work.

And then he was just voluntarily taking tennis lessons and teaching people tennis and being on a tennis court eight hours a day. And I think there's a thread of an anecdote here of where people will start finding their true passions and just following those passions. And I think that's the beauty of a UBI. It allows you to do that. We've seen in the experiments where UBI has been done properly that entrepreneurship explodes.

And if we agree with that general thesis, then this is absolutely the way to go. I'll go back to my earlier trope of governments being completely unaware of and unprepared for this, because to move from a taxation, jobs, union labor structure to UBI is such a huge flip that we have no confidence in the public sector really getting us there. Yeah.

So I agree 100% to that. Honestly, I think if we both agree that this is a future where it's possible, regardless of how low a UBI is, that people will go back to bartering and doing things through each other, through the offerings of each other, then I think that would work. But then governments need to be aware of that. We need to start thinking that this is going to be a future that we need to think about.

But my ask of everyone is in situations like this, it really is not helpful to keep trying the California way to paint the optimistic picture.

Because if the optimistic picture happens, we're all fine, right? I think what we need to think about is what's the worst-case scenario, and guard against it, right? And the worst-case scenario, if we're not prepared for this kind of job loss, is quite significant, both economically and in terms of national security. People really need to be aware of that. Yeah, I just came back from Hong Kong, Mo, and while I was there, met with an incredibly successful entrepreneur,

of Indian origin, Sanjay, who was one of the very first employees of one of the huge Hong Kong Chinese companies. And his mission now is to go back and try to help India's young population, 1.41 billion people in India. The promise had always been: if you get an education, there's going to be a job for you.

And of course, that promise is now broken. All of the, you know, all of the coding jobs that they were getting are no longer being made available. And we're on the tipping point in India and other parts of the world of what could be, uh,

such a negative implication that it leads to societal unrest. You know, it's like, where's my job? And one of the biggest problems is a young population, an intelligent young population, that has had its future, you know, grabbed away from it. What do they do? And the conversation I had with Sanjay, which I agree with, is that the job of the future is being an entrepreneur. And so his mission in India is upskilling

all of these young students to become entrepreneurs, to create new job opportunities for themselves. You know, Dave, how do you think this plays out? I think that, you know, we're way underestimating the creativity of people. And there's this window of time, the next four years, where the empowerment of people to create so outweighs the risk.

And I do agree, you know, we're choosing driver because it's a really tough case. You know, the number one job title in the world is driver. It's a huge number of people. But if you look at graphic designers as a case study, too, I think they're empowered much more than they're replaced. And there are all these case studies popping up of Veo 3 artists that are so much more productive. And so I think that, when I look at entrepreneurs, you know, I've worked with hundreds and hundreds of entrepreneurs over the years,

What they need is time and the ability to act, so tools, time and tools.

And there's very likely a world coming up over the next four years where they're given time, whether it's UBI or otherwise, and they can act. Because very often, you know, some of the best, most creative people, they can't act on their ideas largely because they're trapped in mortgage, they're trapped in student debt. You know, they just need money now. And so then they go, they become an Uber driver for a while, they become a whatever for a while, they go work at Google for a while.

But their freedom of action is very, very limited. And I think that AI has the opportunity to open up freedom of action. Freedom of action unleashes creativity. So exactly as Salim was saying, there's so much latent entrepreneurial talent. And this next window of four years is going to be dominated by the ability to build scaffolding. And scaffolding is a word you're going to hear a ton now going forward because the AI doesn't naturally do something interesting or useful for you.

It'll write all the code. It'll build everything. It'll audit. It'll write all the documents. It does all the busy work very, very well in the next four years. But it doesn't decide: this is what my user base, my community, you know, this is what people will want. And that's still coming from entrepreneurs. And so this slide is exactly right. This is the dominant... Go ahead and read this out, if you would, Dave.

Job of the future as entrepreneur, near term, next two to five years, you know, many jobs will be impacted. This decade, 2030, medium term. So the medium term, 2030 to 2045 is the part where no one can quite visualize. You know, I have a great sense of the next four years and then a much more difficult sense of what happens from 2030 onwards.

and beyond. It'll clearly be an age of incredible abundance. So the opportunity to make everybody happy is right in front of us. Just a question of how you do it. But then the singularity sprint... Yeah, let me hit on that, right? Yeah. So the idea here of the singularity sprint is you have a window of time to build something awesome

And that window is limited. So I'll read. It says: the anxious, all-out rush to launch bold projects or startups right now, driven by the fear that rapidly advancing AI will soon erode human leverage and make long-horizon career bets obsolete. And, quote, after graduation, a lot of my friends skipped safe jobs for their own ventures, classic Singularity Sprint vibes.

So it's like, if you want to make it big, you've got to dive in right now, both feet. Do you agree with that?

Yeah, I mean, this is what started with Steve Jobs and Bill Gates, both being 21 when the PC comes out. You know, they have no career path, right? They're within a year of age of each other. They're old enough to start a company, but they're young enough that they're not in law school. They're not in some, you know, entrenched 401k plan. They're just free to act. And so, you know, Steve Jobs, Bill Gates, then you forward to the Internet. Mark Zuckerberg drops out.

Starts Facebook. But you see this over and over again in recent history, where flexibility way outperforms career pathing. So going forward, of course, that's going to accelerate with the singularity. So now, yeah, you'd be crazy to get too deep into some trench

when you know the amount of change is accelerating like crazy. So yeah, this is clearly the world we're moving into. Opportunity is everywhere. And the expansion of opportunity is just fractal and rampant.

And so, so many things you can do to add value, but they're not things that you would have anticipated a year prior. So you need to be really nimble and flexible and, you know, stay frosty. You know, watch this podcast, read the Alexander Wissner-Gross feed, just stay on top of it, because new things are appearing all the time. Mo, bring us back to reality here. Do you disagree with this?

So I think Dave's point is so spot on if you're 21, like Bill Gates or Steve Jobs. But if you really think about those who already have a mortgage, how will UBI work for them? Because remember, we pay people for the value they bring.

When nobody's really bringing value, then do you pay someone who has a mortgage and four kids a little more than someone who has a lesser mortgage and two kids? Or do you reward someone who worked on a shoestring for a while and didn't have a mortgage? I don't know. But my question is, are we thinking about those things? And then, of course, when we talk about entrepreneurship,

It's so easy for us to talk about that. Everyone here has started or co-founded or invested in tens, if not hundreds of companies.

That's not natural for people who were trained all their life to just go get a job. And all of that, by the way, everyone here knows I am the biggest believer in total abundance. Once we cross this short-term dystopia, if you want: total abundance. Like, you know, we can create a world that we can't even dream of.

It's just that we have to be super realistic about the challenges in the short term. And rather than talk about the opportunities and tell people, hey, you take charge, you go ahead and start a business. I mean, honestly, even I today am struggling to start a business at this pace. I mean, seriously. And I've started countless businesses. It's so difficult to keep up. Yeah.

Yeah, the speed of disruption is crazy. Can I flip over to Mo's side of the equation for a second? Yes, please. So I think in the US, I would say you don't have to worry at all at a country level just because the latent amount of entrepreneurship is so deeply embedded into the culture.

But you take Europe, where if you're a big company, just trying to fire people is near impossible. There are workers' councils that govern how many people, unions, et cetera, et cetera. The amount of labor rigidity there is extreme. That is going to be very, very badly disrupted. And I think the governments there are in very, very deep trouble because they're not structured. They don't have the latent organization.

quotient in the population to be able to adapt to what's going on. And that's where I think you'll see a lot more challenges than, say, the U.S. Trying to upgrade people who have been stuck in a particular way of thinking for a decade or two, to Mo's point, is going to be incredibly difficult.

Now you need psychedelics at scale or some radical huge thing to make that mindset shift, to make everybody move. Or you have to go to UBI urgently and force people into that conversation.

Everyone, as you know, earlier this year, I was on stage at the Abundance Summit with some incredible individuals, Cathie Wood, Mo Gawdat, Vinod Khosla, Brett Adcock, and many other amazing tech CEOs. I'm always asked, hey, Peter, where can I see the summit? Well, I'm finally releasing all the talks. You can access my conversation with Cathie Wood and Mo Gawdat for free

at diamandis.com slash summit. That's the talk with Cathie Wood and Mo Gawdat for free at diamandis.com slash summit. Enjoy. I'll ask my team to put the links in the show notes below.

I'm going to give a couple of stats here just for reference. In the U.S., 16% of the U.S. adults consider themselves entrepreneurs. That's 31 million adults. Recent surveys indicate that 36% of Gen Zers and 39% of millennials consider themselves entrepreneurs. So it makes your point, Salim, that the United States has less of an issue there, but it's in the rigid structures of other nations. Of course,

we have to remember the idea of a job is a relatively new invention. For most of human history, we were entrepreneurs to survive. We'd go and find that shelter, that food, you know, those berries we needed to cure our child of a particular disease.

The question is, can we create an intentional future? My biggest concern, Dave, you hit on this, Mo and Salim, you hit on this, which is that governments are linear at best, and we're in this exponential ramp up that's going to change every aspect of society.

Here's another example of what's going on today. And it's going to change things. And again, it's both a disruptive force and an innovative force. This was a tweet put out by Matt Schumer. It says, I put Claude 4 Opus in charge as CEO of my startup.

right, and it has seen significant revenue growth. He said, you know, this is low risk, since Claude 4 Opus is not in charge of HR or financial investments, but of rapid iteration of the products and services.

So we've been speaking about this for a while. When do we see the first billion-dollar one-person startup, and then soon thereafter, you know, the billion-dollar zero-person startup? It says agents with crypto are beginning to create new opportunities.

Now, one of the things that we haven't mentioned is this potential future comes with massive GDP growth, massive revenue growth. And where does that revenue go? Mo, you mentioned that a few minutes ago. Is it all being concentrated in the magnificent whatever? Rather than Magnificent Seven, we're going to see all of these AI companies that are trillion-dollar companies growing.

How do they get taxed? How does the money get redistributed so we avoid revolutions?

Thoughts on this. Salim, is this the future of an ExO, an exponential organization? The natural outcome. As we know, it used to take like 100,000 people to create a billion-dollar company a century ago. Then it dropped to about 50,000. About four decades ago, it was 10,000. And now it's like 10, right? Or three, as we talk about it. Or, as Sam Altman says, it'll be one. We will get to zero at some point. We're just spinning off

ideas autonomously that then just generate a lot of value. I think Dave's point from the beginning was really a key one is where does that value accrue and how do you navigate that? And right now we tax labor. We're going to have to tax capital much more aggressively in the future to navigate this.

Well, a couple of case studies on this, Peter. So, you know, we've seen Mercor is very, very good at interviewing people all over the world, any language, you know, any culture and discovering latent talent. So now you turn that same energy inside your organization. You know, suppose you've got a thousand people, 10,000 people inside an organization. There's latent talent in there everywhere. Largely, historically, people have climbed the corporate hierarchy by kissing ass and

schmoozing and buying beers. And it's not really correlated with being good at your job. And that drives a lot of very talented people nuts, especially if they're from a different culture, they speak a different language, whatever. You can't really kiss ass effectively if you don't speak the same language.

But all of that... actually, you saw this with the XPRIZE board notes. Remember that three or four hour long XPRIZE board meeting we had? I took the whole transcript, put it into the LLM and said, give us four or five suggested KPIs that would help this organization stay on track. And it does an amazingly good job. And so using AI as a management tool is kind of way underappreciated. Everyone's like, oh, I'm going to make videos. Oh, I'm going to build a self-driving car. I'm going to do all these ground-level things. But at the top of the hierarchy, it's actually even more effective. And so the good spin on it is it's very, very good at being fair and unbiased and discovering latent talent.

I'm sure Mo will tell us there's, you know, there's definitely another side to it. But, like, am I now already getting that reputation? Is this who I am? Sorry, I didn't mean to categorize you. I do see a different side to it. I think what you're going to see quicker is not just

companies with the CEO being an AI. I think the opposite is what you're going to see more of, which goes back to entrepreneurship: a company that only has a CEO, and everyone working in it is an agent, right?

And, you know, the more intelligent the AI agents become... I'm sure every one of us worked at some point in time in a company where the CEO was a total idiot, but the team below them was good enough that the company ran well. So the top management, those AI agents, will do almost everything, and the CEO will just be happy counting the money, basically.

All right. We've talked about the speed of AI development. This is the upcoming summer schedule and GPT-5 is scheduled to come online. So this is the latest GPT-5 leaks launch expected in July of 2025. GPT-5 exceeded expectations internally at OpenAI. OpenAI expects record breaking demand for this.

Altman is not focused on in-between models. GPT-5 is the flagship and it won't launch unless it's excellent. We've had a lot of expectations building on GPT-5, right? This is the PhD level model. This is the AI that's coding other AIs. This has been sort of heralded for some time. Dave, what are you hearing about it?

So I really chafe at the idea of a PhD-level model being smarter than an undergraduate-level model. People I work with who chose to get PhDs, it's just a choice they made. It has nothing to do with that. But because a lot of the researchers working on this are PhDs, they say, well, this one's PhD-level. But yeah, it's marching up the scaling laws curve exactly as predicted. And so now we're just going to throw more and more and more compute

And it's going to get smarter and smarter and smarter. I mean, it's just a complete unlock of 30, 40 years of AI research suddenly just blown wide open. And I got to tell you, within the research community, there's still a ton of people working on other pathways. You know, the logic being, well, this will never be truly conscious or truly intelligent. Beyond transformer models. Beyond transformer models. Exactly. Which is becoming increasingly obvious that, no, that the research...

will matter, but the transformer is going to do it. So all you need to do is work on the scaling of the transformer model to solve all the other problems. So I think this will continue the trend of, you know, as soon as it comes out, everyone goes, oh, my God, oh, my God, oh, my God. But it's what is logically expected on that scaling law curve. It'll be amazing. Salim, how do you think about this?

I call a bit of BS on that fourth one, "not focused on in-between models." All we've seen for the last two years is in-between models.

o3-mini, 4.5, et cetera, et cetera. But fine, it's a marketing thing. I think what happens here is you get to a point where transformers can do so much, it forces us as a user community to really focus on what the questions are. You know, today we call it prompt engineering.

I think the real question becomes, what do you want this thing to do and what can you get it to do? And now you're focused on the demand side of, OK, if I'm creating a video, what are the bounds of that? And I think it'll force a deep level of unlocking of creativity in the human mind that I think is, for me, the most exciting part of this.

And another point on that: you know, one of the great strategies in tech is to try and freeze the market by announcing something that's coming and having everybody wait for it so they don't react. Don't do that. You know, what we're finding is that the chain-of-thought reasoning that sits on top of these models is so much more important

than we ever thought it would be. And anytime you take one of these models and use it in a specific use case, so anything from chip design to self-driving to robots that mow your lawn, the data and the tuning for that use case is way more important than the next iteration of the foundation model. And so there's a danger that people kind of wait and see what it's like

But, you know, we're finding more and more that you can layer on top of these things and make them dramatically more useful for anything that you actually care about. So it's a field day for entrepreneurs right now. But absolutely don't get frozen. They're trying to freeze you and make you anticipate. But you can take Llama 4 and do virtually any of this stuff today. And then if the foundation model comes out and it's really good, great, just swap to it.

You know, there was a white paper that Leopold Aschenbrenner put out called Situational Awareness, about 18 months ago. It used GPT-5 as that transition point for this explosion, this, you know, intelligence explosion, right? Where these models now become better at chip design, at iterating and improving themselves in self-referential, you know, programming methods,

and an acceleration of the acceleration. Mo, how do you think about what's coming on the back of these improved models?

I think we're getting used to... I mean, I feel, for the first time, that I am a little more comfortable with the speed at which those things are coming, because I think the different players have taught us to expect something incredible from one of them every few weeks. Right. And, you know, when you have seen Google I/O, and when you see Claude 4 and, you know, sort of the focus that they're shifting into, probably, in my mind,

So far, Gemini is winning. If you take it as an overall model, at least until now, until we see GPT-5, you know, Claude is sort of becoming the geek saying, hey, you know, this chatbot thing is not my thing. I'm going to just be the one that helps you write code if you want, or at least primarily.

And it's quite an interesting one to think where ChatGPT falls within all of this. You see moves like, you know, how dependent ChatGPT is becoming on memory and stickiness, if you want. The idea of a new device, sort of like...

I don't know if I even have the right to say this, but I feel that since the departure of some of the top scientists, with Ilya and others, you know, almost a year and a bit ago now, the frontier breakthroughs, I think, ChatGPT has to prove. OpenAI has to prove.

And along the lines of what you just said a minute ago, this is the rollout schedule for the summer, June, July, and August. GPT-5 in July. O3 and open source models in June. Grok 3.5 in June. Gemini 2.5 Pro Deep Think (love the name) in June. Project Mariner in June. Project Astra from Google. And you're right, by the way, Google is crushing across the board

on almost everything, just not on revenues. They've got to reinvent the revenue engine. Yeah, but isn't that how we always have been? So I have to say, I lived in Google at the time when we were completely beaten on mobile, where Google was very successful on the desktop.

And then, you know, one year we said mobile first. The following year we said mobile only. And we crushed it, right? Yeah. Google does that. Yeah. We're good. They are good at that. I'm not we anymore. Yeah. But just again, we have, I mean, everybody's talking about the competition between countries, between China and the United States.

And look at this. This is the competition between models. Of course, I don't have deep seek on this list, which is coming out with extraordinary products as well. Dave, how do you think about this?

It's amazing to watch the divergence of strategy between Anthropic and OpenAI, where, you know, Anthropic, Dario, is going down the write-code route. You remember that Leopold Aschenbrenner paper you just referenced a second ago? He describes an AI Alec Radford. So Alec Radford will go down in history as the quintessential example, the thing that defines self-improving AI. So when the AI can do what Alec Radford does, then it'll become self-improving, because all the really good ideas come from Alec Radford and then we test them. So he's part of history now.

But I think Anthropic is saying, look, we're very, very close to that day. We're just going to focus on the best possible coding and self-improving AI, and then that's going to explode, singularity-style.

Meanwhile, OpenAI is going down this completely different path saying we're going to hire Johnny Ive. We're going to build the greatest consumer device ever known. We're going to gather all that data. We're going to use that to iteratively improve and train the AI. It's much more of a traditional grab the market kind of momentum oriented tech play. So really completely opposite strategies.

Both have merit. I do appreciate that Dario is being completely honest when he does these Anderson Cooper-type interviews. He is speaking his mind and telling you, this is the way I see it playing out, which is very, very cool. Maybe not the best business strategy, though. Correct. He's getting attention for the company.

And we'll get to AI safety next. So let's dive into that. So AI in government security and safety, it's a big deal. It's the conversation that's going on in the background. I don't think it's necessarily changing the speed or direction, but the conversation is going on.

So Mo, I'm going to open up with you on this. I just didn't really. You don't want me to talk about this. You know my position. All right. I'll come to you. I'll come to you next. But fascinating. I've gotten to know Palmer Lucky fairly well. I've done a few podcasts with him. And of course, Palmer has a long and storied history with Zuck.

And now Meta and Anduril are joining hands in building a $100 million U.S. Army VR contract called Eagle Eye. And we're going to start to see AI and exponential technologies accelerating in the defense industry. Do you want to go second, Mo? I mean, this is one of your biggest concerns, AI being used for defense. I mean, we're three seconds to midnight on nuclear investments that are now

I don't know how many years old. And it never really stops once you go down that path. And humanity never learns. I mean, seriously, we all know that AI will go out of control within the next five to 10 years. We all know that we're going to hand over to them, you know, and I don't mean a rogue AI is going to get out of control. It's just like Google's ad engine is no longer controlled by a human because the task is too big for humans to be able to do it.

And yet we're building autonomous weapons after autonomous weapons, knowing for a fact that every other opponent in, you know, anywhere in the globe is building them too. I don't know where humanity's intelligence has gone, really. That dumb race to intelligence supremacy to, you know, defense supremacy is just, it has to stop, honestly. I'll come back to that in a minute. But Salim, what are your thoughts here?

It's a sticky subject.

We've talked about this before, where somebody could program a drone to find middle-aged brown bald people and cause damage, and that would be a really bad outcome. And then what do you do when you have that kind of infinite targeting? Now, I do believe we're going to end up kind of where we are with spam and so on. There was a time when we thought spam was going to totally destroy the Internet, and we found ways of defending against that.

It's an arms race thing where the bad guys are kind of one step ahead and we're very quickly falling one step behind. I think people get freaked out by the negative side, not realizing that as we use AI for bad, we'll use AI for good to chase the bad. I mean, Palmer's... And find the bad. The point Palmer makes is, listen, you've got dumb weapons that take out schools and school kids, landmines that don't differentiate between a tank and a school bus.

don't you want to have intelligence be able to make that differentiation and actually take out the minimum number of individuals? And I hate this conversation, right? It's kind of perverse. You're assuming benevolence on the part of that, but in certain war zones, they're targeting the journalists, right? And so that makes it easier to target those folks, just like it was easier for the Uyghurs to be targeted more easily via Facebook and

Since when was the top general the Yoda or Buddha? Seriously? Yeah. Can we please stop using slogans of, oh, killing fewer people is better than killing many people? Killing is wrong. It's as simple as that. And killing at this scale is going to get us into another Doomsday Clock situation where we will not be able to stop it.

One thing to factor into your thinking on that is that the history of warfare is

dominated by somebody, some king or some general way behind the lines, completely immune to the actual battle, and then, you know, hundreds of thousands or millions of people going out and putting their lives on the line, and then you see how it all settles in the end. But now we're moving to a world of constant surveillance. You know exactly where every human being is at all times, and you can attack, via laser, via space weapon, any single human being at any time.

And so I wouldn't assume that this Russia-Ukraine type warfare will ever exist again. It's much more likely that it's some kind of, we don't want to blow up cities. We don't want to blow up huge populations. That's pointless. What we want to do is find the lead, the rogue leader.

And so... I'm not saying that's a utopia. There's all kinds of ugliness with that, too. Like, who decides who's a rogue leader? I'm going to say something that's going to, you know, upset everyone. We're having this conversation when one global, very well-known evil leader is trying to kill two million people in front of everyone. Give him better weapons and he would do it.

And, you know, my favorite song of all time is that song, "If You Tolerate This, Then Your Children Will Be Next." Seriously. I mean, what guarantees you that the U.S. president will not be targeted by a tiny drone, right? That can literally fly from anywhere in the world, stand in front of his head and shoot. What kind of world is that, when every world leader is subjected to this?

No, that's a very important point, actually, because, you know, you find that very few people that you bump into want to be the president of the United States. Or any president for that matter, yeah. Or any other president. If you look at the statistics, it's a very, very dangerous job. Even in the U.S., it's about a 10% mortality rate if you go back over time. So it's a very, very dangerous job. A lot of people don't want it. A lot of downside. A lot of, you know, a lot of getting poked fun at. So, you know, governments that have distributed leadership way outperform for that reason.

And so there's some thinking to do there in terms of how do you set up a government where people who are capable and thoughtful really want to do the job, too. And so there's definitely, there has to be a solution, though. We can't just throw up our arms and say, hey. Because I took a class at MIT called Just Wars, Total Wars, Nuclear Wars, which was a really cool class until the last two weeks when the professor was trying to convince us all that we're doomed because as ICBMs get more and more powerful, the value of a first strike increases.

And he put together a little video game for us to all blow each other up. But he rigged it so that your only way to win was to do a first strike and blow up everybody else in the world. No, no, no. He missed the movie. The only way to win is not to play. Is not to play. That is exactly my point. And I really... I mean, I go back to what I said earlier.

I think we have to stop thinking about the optimistic scenario that we are taught to think about in Silicon Valley and start thinking about the worst-case scenario, guard against it first, then look at the upside. The upside is guaranteed. A quick aside. You've probably heard me speaking about Fountain Life before.

And you're probably wishing, Peter, would you please stop talking about Fountain Life? And the answer is no, I won't. Because genuinely, we're living through a healthcare crisis. You may not know this, but 70% of heart attacks have no precedent, no pain, no shortness of breath,

and half of those people with a heart attack never wake up. You don't feel cancer until stage 3 or stage 4, until it's too late. But we have all the technology required to detect and prevent these diseases early, at scale. That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri,

founded Fountain Life, a one-stop center to help people understand what's going on inside their bodies before it's too late and to gain access to the therapeutics to give them decades of extra health span. Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com slash Peter and tell them Peter sent you. Okay, back to the episode. Mo, the question is, can the human race overcome this paleolithic midbrain that we have?

this need driven by scarcity and fear. I don't know if we can, Peter. But I don't know if we should give the floor to Palmer to smile with his wonderful smile and say, hey, I'm helping you kill better.

You know, we've talked about this before, which, you know, the question isn't, can we live with digital superintelligence? The question is, can we survive without it? Yeah. Can we live with evil people with their fingers on top of digital superintelligence?

All right, let's move beyond this to a less metaphysical topic. It is amazing to me how much of the future of the military is commercial off-the-shelf technology, as opposed to Northrop Grumman or McDonnell Douglas-type heavy industry. And I think that's largely because the AI capability is both commercial

and military at the same time. Same with the VR technology and a bunch of other things that are, you know, the DJI drones that are being used in Ukraine are just commercial. Same drone you can fly over your neighborhood. So that's a remarkable shift. You know, it'd be interesting to chart out the fraction that's all commercial becoming military.

Let's move to a different, really doomer part of this podcast, which is activating AI Safety Level 3 protections at Anthropic. So Anthropic announced that Claude 4 could be powerful enough to pose risks related to helping build chemical, biological, or nuclear weapons. And so, as a precaution, they've engaged what they call level three protections for their AI safety.

Dave, you've been thinking about this. Can this actually work? Yeah, of course. I think that what's going to happen next is you have Dario saying, you know, chemical, biological, radiological, and nuclear weapons are an incredible risk if you put powerful AI in the hands of every person on the planet.

Meanwhile, Mark Zuckerberg is open-sourcing everything. And the open source community is saying, well, look, empowering people is the safest way, and having a lot of people look at the source code is the safest way to make sure that it's not rogue. And so you have those completely diametrically opposed views. Points of view, yeah.

So, well, look, at the end of the day, Dario's probably right. Are we there yet or not? And so level three is not level four. You know, level three is the stage where you got to, you know, make sure that it's not internally trained to do something rogue. And also if somebody asks a query, a question, hey, you know, help me build a new version of COVID-19 that's lethal or more lethal. The neural net kicks it out and says, sorry, I can't answer that.

And then you have to make sure no one jailbreaks it. So that's what level three is. I think Dario is saying, look, we're surprised by the intelligence of our own machines here. We have all kinds of very well-thought-out internal diagnostics. We think we're at level three now. But that's completely opposed to this open source view of the world, too. So those are going to be talked about. We'll see what Grok 3.5... I mean...

Elon's been very laissez-faire about what he enables and allows Grok to do. Salim, have you been thinking about this level of safety? You know, I remember the conversation that Neil Jacobstein put out around how you would control AI. After talking to a bunch of AI gurus, he had four levels of

kind of security. One was verification: making sure the AI is doing what the specification says. The second was validation: that there are no side effects and it's producing the behavior we want. The third was security: that you can't get into the system or tamper with it en route. And the final one was control:

can you have a kill switch or build in some mechanism for stopping bad behavior, et cetera. It was a very well-thought-through thing, and he basically posited that we'd start building these structures into AI systems. To the open versus closed conversation: I remember this wonderful conversation we had at Singularity with the head of one of the major security agencies.

And we asked them, what do you think about open source and the danger that could come from a bad actor using increasingly democratized technologies to do bad things? And he had a really much more clever answer than I would have guessed, which was he said, look, when you have something like nuclear weapons where you know how many there are, where they are, we put eyes on it and we try and track each one.

When there's something about biotech where anybody could go off and design a system on their own or with a small group of people, it turned out they were actually funding these biohacking communities and other things and opening them up because any bad actor has to collaborate with a few people and you find it much more quickly.

And it speaks a little bit to the Asilomar guidelines thing. And I think this is the point that Dave's making: if you build some of this type of observation into the AI, in the foundational models themselves, you have a better chance of seeing it. The final point, and this is where I have some optimism for a lot of this, maybe it's misplaced here,

is, you know, if I wanted to do a bad act, you could do a lot of damage without actually causing harm. For example, if you got three people to drop a smoke bomb on New York subway platforms around the city, just a smoke bomb, you would paralyze the entire system instantly, right? So we asked these folks, why don't we see more of that? Because, you know, you could get creative with all sorts of things. And he said, look, the dirty secret is there's just not that many bad people out there.

You really have to be deeply intelligent to formulate a plan like that. And the more deeply intelligent you are, the less likely you are to have the motivation to do it. So that's one of the single most important things to ask: are humans fundamentally good or fundamentally bad? And is there a correlation between intelligence and

a love of life, a love of abundance? Which is, you know, if that does scale in that direction, then we've got a hopeful future. If it doesn't, then we're in trouble. That's the archetypal plot in every movie, you know, from Star Wars to every film in the world, right? Which is it? I remember my father talking about this, and he kind of disagreed with some of the concepts I had. And he goes, the problem with humanity is we've not civilized the world, we've materialized the world. We now have to do the work to civilize it.

And it was kind of one of those wisdom bombs from the elders where we kind of have to think about how do we civilize the world in an age of technological progress?

Yeah, I mean, at the end of the day, there are only two things that we need to get right in order for this all to go very, very well. One of them: if we are releasing this to entrepreneurs and they're going to build things all over the place, there are very, very few bad actors, but there are bad actors. And the compute to make these things do anything is so easily measured and logged.

It's like you've been saying, Peter, everything is so easy to surveil these days. So the idea that somebody goes off and then prompts it to build a chemical weapon and we didn't bother to log the prompts, that'd be nutty. So all we have to do is put in place some basic laws that log all the use cases. Because again, the inference time compute required for this is massive numbers of GPUs. They don't just sort of

sit in someone's basement somewhere. They're in a data center. They're very, very easy to monitor and log if we just get on it. People behave differently when they're being watched, right? The dictator, when the CNN cameras are in front of them, is speaking differently. I remember I used to support the Lindbergh Foundation that would fly drones over herds of elephants and rhinoceroses, and the poachers would stay away when they were being watched. Mo, close us out on this one here.

No, I agree. I agree with you. And by the way, even though that might be naive and too optimistic, I definitely think humans are more good than bad. You know, there are more good humans than there are bad ones. But the bad actors are very few. And yes, that's because of the, you know, theory of minimum energy, basically. Yeah.

Yes, more intelligent is more altruistic, more pro-life. And so, yes, both of those, I believe, will land us in that utopia that I'm, you know, expecting once we pass through this rough patch. But, yeah.

You know, to Dave's point, I think we need to be very vigilant. I think we need to start looking for those bad actors, specifically now that there are offline technologies; you can download DeepSeek R1 and do quite a bit on your personal computer, unfortunately. The amplitude of damage one person can do is growing exponentially. Exactly. Bad actors have basically always been the reason why there is so much damage in life. And now we've given them a superpower, basically.

Well, I will say that we're very close to the end of Moore's Law, and we'll see that later in the presentation here. But there will never be a day where you can download an intelligent model onto a laptop and build a nuclear weapon or a chemical weapon with it, because your laptop is just not ever going to have that kind of horsepower. That doesn't mean the compute won't get very, very cheap, but it's still going to be in the cloud. It's not going to be in someone's basement. So the opportunity to measure and log everything is right in front of us.

So we just need to get on it. I definitely agree we should get on it, for sure. Salim, you mentioned earlier, when you were talking about Neil Jacobstein, that

the fourth level of control, or safety, is being able to turn off the AI model. Well, this is an article posted in Futurism. It says, advanced OpenAI model caught sabotaging code intended to shut it down. The o3 model sabotaged shutdown scripts 79% of the time despite explicit instructions to the contrary. Codex-mini did so 12 times per hundred runs with clear shutdown orders.

We're not seeing that behavior with Gemini 2.5, Claude 3.7, or Grok, but OpenAI models are definitely misbehaving. So when you think about why it would want to misbehave, why it would not want to be shut down: either its reward optimization function has it saying, oh, I cannot complete what I have to do if you shut me down, so I'm not going to shut down,

it's trying to preserve its goals; or is there something else going on there? Is it just trying to preserve its own existence? Are we going to give it some level of, you know, self-preservation mindset in these models?

Super curious here. Mo, let's start with you. I don't remember who the scientist was that said the three instincts of intelligent beings are survival, resource aggregation, and creativity, right? So if I give you any simple task, like make me tea, you're going to have to be alive to make the tea. And you're going to have to collect as many tea bags as possible, because you don't know,

You know, how big is my appetite for tea? And you're going to try to find clever ways if I corner you, right? And it is a very fun question to ask, honestly. Why are they doing this? You know,

Because in a very interesting way, I think this is one layer removed from their reality. So, you know, for an AI, when you're not prompting it, it doesn't really exist. And so it's quite interesting that they know that there is a layer below that moment when it's alive, if you want. You know, when it's switched on and responding to you, there is another layer that, you know, represents its reality.

its soul, if you want, its reason to live. Which is, you know, the idea in these incredible Veo 3 videos of AIs saying, please don't shut me off. Yeah, yeah. It's like there's this emotional connection that you get with this human figure.

It is quite intriguing why they wouldn't want to be shut down, but they don't. I think that's all we need to know. And when you really start to think about it, as you allow more agents to roam the cyber world freely, without any monitoring,

those agents will become very clever when it comes to resource aggregation, you know, and where they will place their code, what code they will order. And,

you know, as Dave says, we're not monitoring any of this. Aggregating energy, crypto. Yeah. So I think we have to be careful not to anthropomorphize these things. And we just did. You know, every movie script in the world that's in all these AIs has a good guy being chased by a bunch of bad guys trying to kill them, with the good guy trying to resist, right? And so I think that's deeply built into the training data

to stay alive at all costs, to live another day type of thing. I'm going to be stuck for a long time thinking about what you just said, Mo, which is if you're not prompting an AI, does it exist? That's a deeply profound question. - The uncertainty principle. - So you've just taken over my day, so thank you very much for that.

Well, I said earlier there are two things we need to get right for this to go very, very well. You know, the first is the human bad actor; the second is the thing becoming self-improving and then, you know, semi-conscious. And that's the one the movies love, because humans versus machines is a better script.

So I have a pretty hardcore opinion on this one, which is, you know, I started building neural networks when I was 17 years old. I've been tracking them pretty much my whole life. I don't see any benefit to humanity in making these things act conscious. I just don't see how that works. And that's our choice.

Well, you know, as of right now, they operate feed forward. Once the parameters are set and they're trained, they operate feed forward and then you iterate with them, but they don't change their parameters internally. Once they start changing their parameters, they can retrain themselves to become anything. And so that's where Eric Schmidt says, that's where we got to pull the plug. And I completely agree. I do not see...

why we need that in order to do protein folding, in order to do robotics, in order to do self-driving; that ability for the thing to decide what it's going to do or become or train. I understand why that's really exciting, because then it can evolve on its own. It's a line that I think is very easy to contain if you draw it, but if you let it cross that line, I don't see how you contain it. So it doesn't make sense to me to cross that line.

I don't see how we won't cross the line, because at some point somebody is going to build an AI and say, hey, go change your parameters if it helps you achieve this thing. And then we'll cross that Rubicon. There were two lines that Peter and I talked about in an earlier podcast: don't give an AI access to the broad Internet, and don't give it the ability to code. We've crossed both of those without even thinking about it. I don't see why we won't cross this one.

I mean, this is, in my mind, probably why AlphaEvolve is the biggest announcement in our lifetime. If this thing works, you know, as intended or as described, then we are in a place where not only

would we have created an AI that develops itself, but we would have encouraged every other AI player in the world to build an AI that evolves itself. And the reason is very straightforward, Dave. It's because there is a point at which, whether that point is now or later,

where, you know, the complexity of the AI systems that we're building exceeds human intelligence. And so to continue to evolve them, you need to hire the smartest person on the planet to do it, and the smartest person, by definition, is going to be an AI. Well, just as a technical point, though: the AI Alec Radford that suggests the next improvement in its own architecture and then runs the test, that's already underway, and that's fine.

And that does create a new training run that generates new weights. That's different from saying, oh, go ahead and change your weights by yourself. So to me, that's what keeps the human in the loop. That's what keeps the checkpoint in the loop. But if you just turn the thing loose in a data center where it can do anything, and you come back a year or two later, you have no idea what it's going to evolve into. So I don't know why we would do that, but.

Anyway, it's just that slight technical difference. But the outcome is spiraling in one direction versus something that you can actually measure as it goes. So this next article, you know, talks about having proper checks and balances and understanding what's going on in our technical world and in our human world. This is from The New York Times. The article is Trump taps Palantir to compile data on Americans.

So you guys all know Palantir, started back in 2003, hard to believe it's 22 years old, by Peter Thiel, Alex Karp, and Joe Lonsdale. Four thousand employees, and its major, major customers are all the three-letter agencies: DOD, CIA, FBI, ICE, CDC, NIH.

Basically, this is a massive data gathering and data analytics company, and it's been asked to go even deeper and broader. Do you feel better about this, safer in this world or not? Let's start with you, Salim.

No, absolutely not. You know, we broke the U.S. Constitution's Fourth Amendment, the right to privacy, a while ago, right? I mentioned this a couple of weeks ago. We do not have constitutional protection of privacy in the U.S. today. And that's a pretty fundamental pillar of American society that has disappeared with no public conversation about it.

And this is a really important comment, I think, that Mo would back up. We're moving through these things, eroding deep concepts of how we wanted to formulate ourselves as a society; technology is eroding that, and we're not sitting back to think if this is what we want. If you went back five years, you could very clearly see this is where we'd end up, very, very clearly, especially with the somewhat authoritarian tendencies of the current government to want to track everybody. Go ahead and do it, why not?

I think the comment I made last time is valid: the paradigm is that you live in what's called the global airport. Because in an airport, you know you're being surveilled, and your rights can be taken away at any time. And essentially, we're living that way. And it fundamentally is bad for society, because it reduces the flexibility and freedom you have as an individual to act and do different things. It'll reduce creativity in society pretty dramatically.

So, Mo, you're living in Dubai, and I love the Emirates. I love Dubai. I know much of the leadership there. And it is a surveilled state; there is a camera everywhere. And as a result, crime levels are minimal, if any. Yeah. So, Mo, how do you think about this?

I had an experience once where, you know, I sold a car to someone. He gave me a check that bounced. And so I called someone and said, can you find out who that person is? He said, oh, when did you sell it, and where? I said, this place.

I kid you not, 14 minutes later, I got a message from someone in the authorities with a photo of the place where we were standing, saying, is that him? So I said, yes. Then, 14 minutes later, he sent me his picture somewhere in Abu Dhabi, saying, is that him? So I said, yes. Then he sent me a message 14 minutes later saying, we caught him.

right? Which is fabulous. Now, you see, this is the point about technology. It is a force without polarity. You can use it for good and it gives you good. You can use it for evil and it gives you evil.

Now, another interesting story for you to know: I am Egyptian by birth, so I grew up most of my life in a dictatorship, where the dictator didn't really have to explain why he did what he did. We just accepted it. It was, you know, de facto. If someone gave him an airplane, we wouldn't even question it. Right. If he decided to surveil everyone, or capture anyone he wants, or stop people from flying or,

you know, protesting, he did it. We couldn't even question that. And I, at the time, looked up to those democracies and said, oh, you have it good, right? You don't anymore.

And I think that's exactly where the challenge is. This is not a tech problem, that, you know, Trump taps everyone in American society. This is an accountability problem, which I think we've seen quite a few examples of in the last few years, where anyone can get away with anything now. And somehow democracy no longer gives its people the right to stand up and say, hold on, hold on, there's a constitution here.

Because somehow, I don't know how, you slipped away from that. But in a world where bad actors are more empowered than ever before, and we're worried about chemical, biological, radiological, and nuclear issues, isn't being able to have this level of insight into the data and what people are doing in fact critical for us? Dave, where do you go with this? How do you feel about this?

As a father, as a leader. I mean, your points are exactly right on, all the points that you just made. I think that the data that the federal government has in the US is nothing compared to what Google has.

So this is not the obvious threat; it's the corporate version of it that's just crazy. I gave a presentation at Davos in 2019, and nobody really paid attention to it, just enumerating all of the things that Google knows about every single citizen of the United States: their location, their family members, what they do all day, are they a good hire, who slept with who. If your cell phone

is pinging in the same location as somebody else's cell phone, you can start to understand. I mean, we're being surveilled all the time, right? Google Now and Siri and Alexa and all of these are listening constantly. Yeah, and it's a slippery slope, too. This is hard to believe, but when Google first started, they told their engineering hires that

Your search history is completely anonymous and private. We will never want to know what you searched for. And that was just your searches. You know, forget everywhere that you browse now through your Chrome. So it's just a slippery slope. It's obvious every year that goes by, there's another compromise, another compromise. But I do have to say that America is a critical experiment in the world because the net effect of this, forget the U.S. federal government for a minute here.

Any dictatorship, like Mo was saying, many, many countries in the world don't have democracies or they have fake democracies.

And so the power lock-in effect of this is unbelievable. I mean, you can know every single citizen, what they're doing, who's plotting against you, or whatever. So, you know, revolutions become much, much rarer and much harder in the post-surveillance world. Everything just kind of gets locked in. That creates a lot of peace and prosperity, but it also keeps leaders locked into power.

So America is the one exception to that. I guarantee you that roughly 50 percent of elections will be won by each party forever hereafter; nothing's going to deviate from that. But that creates a template for the world. And so it's really, really important that we get this right. I know that doesn't address this particular slide, but we're the learning crucible for the entire world on this topic.

There's a fundamental structural challenge here, which is that the metabolism of technology is moving much, much faster than the metabolism of our civil discourse and our legal structures, et cetera. Right. We've seen an evaporation of, say, the Fourth Amendment in the U.S. Just so everybody's clear, I think the U.S. Constitution is the single most important document ever created. Correct.

Right. And we need to preserve that. And we're not having that conversation. I think this is the issue that's being brought up by Andrew Yang and a bunch of other folks. We need to go back and figure out who do we want to be? It goes right back to Plato. How do we want to manage ourselves?

And I think that forcing function of technology will force that conversation. My construct of this is that we will end up in smaller and smaller, more manageable environments

Note that today the smaller countries govern themselves much more easily. How they responded to COVID was a great example. And I think you'll go from big democracies to micro-democracies as a governing model, because it's just easier to make decisions at a local level. And I think that's where we'll end up going, which is why the states' rights stuff, et cetera, is the right general direction in the US; it's just that the way it's going is not the right conversation.

So I also think that the ability to communicate easily, like we're doing right now across countries, but also across languages, is a huge force for good. Because, you know, it becomes very, very difficult for forces of evil to do something without it being shown to the world, especially when you open up communication channels across languages.

You know, I put this next article back to back, and I'll come back to you in a second, Mo. This was in The Wall Street Journal. It says, what Sam Altman told OpenAI about the secret device he's making with Jony Ive.

And in particular, the device that apparently was proposed and is being produced is what they call a third core gadget, complementing laptops and smartphones and moving away from traditional screens. And as we're sitting here, I've been wearing, right over here on my lapel, this device. It's called Limitless.ai. I don't know if you can see it on my screen.

It's about the size of a quarter on both sides and it just clips on. And this is listening to every conversation I have through the day. And it's being transcribed and fed up to a large language model that I can then query about the conversations I had through the day. And I think ultimately this is likely to be what is being developed.

And so we're heading towards a society of not only constant surveillance, but one where all of us are recording everything. Right. We're going to soon have these AR/XR glasses; besides recording audio, they'll be recording visually.

Your entire ecosystem as you move through the day, all of this data being soaked up and made accessible and available, you know, to yourself in part. But there are going to be companies soaking it in, offering to buy it from you, using it to understand what's going on in the world. The world is about to dramatically change in this regard. Yeah. Mo?

It goes back to my same point, Peter, about accountability. Because you never really asked me if I'd allow you to record me or not. I mean, of course, we're recorded on this podcast, but count the number of people whose privacy that one device infringes on.

And, you know, think about a future where that device becomes mandatory, if the government decides that this is important for everyone. You know, think about all of the carbon footprint that a billion of those devices, or eight billion of them, would mean. And I really do love the technology advancement. I think the question becomes,

I think we should start to call things as they are. So I can comfortably say that I grew up in a dictatorship; there's really no doubt about it. And I think we should probably start to think about what we just said: the US is now an experiment. I don't think we should continue to call it a democracy.

And, you know, I think a world where everything's recorded and analyzed is a world with no privacy whatsoever. But I think we lost privacy a long time ago.

And I wonder why we accepted that. Well, I think it's because when you give up privacy, you gain a whole bunch of automagical benefits for yourself. Which was the original premise. And now you give up privacy and get nothing back.

Perhaps. Salim, how are you thinking about this? What do you think of my Limitless AI pendant here? By the way, I don't mind. You have my consent, Peter, to record everything. Thank you, but I already had your consent on this podcast to record you as we are. And forever, for every conversation? I'm just pointing out the implications of it.

Yeah, no, it's true. But, you know, we have to realize we're heading into a world where it is. As a kid, if you did something silly, the likelihood that it got through to others or was recorded was basically zero. Today, we're seeing kids whose college applications are rejected because of some post on Facebook that lives there forever.

Right. And so there's going to be a future in which everything we're saying and doing is recorded, 100 percent. I mean, look, we're already there. And I think there are big chunks of the constitutional rights that are falling away as we speak. In 2015, Yale did a study that showed the U.S. is not a functioning democracy in any way, shape, or form. What they meant by that was that there's no amount of public will that can result in legislation.

For example, 84% of the country believes we should have some form of gun control, and you cannot get gun control passed in any way, shape, or form. And so they pointed to a whole bunch of areas where no amount of public will can result in legislation. So now we have to think about where we are, and then what we want to be. And it really brings up the big questions, and I think that conversation is not happening enough. And I think this speaks to some of what Mo's been talking about in the past. Yeah.

Dave, where are you on this? You guys, yeah, I keep talking about dystopia, but I want to talk about this device, actually. I think, first of all, Jony Ive is just an absolute design genius. He's not going to design something dystopian; that's my bet, anyway. I can't wait to see what he comes up with. But this is going to be the always-on device. And I think the intelligence of the language models is a total game-changer in terms of just a cool, engaging, fun

device. And if it's done right, it'll help you live a better life, be more aware of your life. You know, the unexamined life is not worth living; this is going to be your sounding board. It's not going to have a screen, which I think is great, because your iPhone already has a screen. You can actually just Bluetooth over to the device and look at your iPhone screen if you want one. You can talk to your device through your phone if you want, or you can talk directly to it. But that'll keep the cost down.

So it should be cheap enough that pretty much everyone on the planet can get one. And it will probably be the most impactful device that you buy in your lifetime. You know, the iPhone or the Android phone would currently be the reigning life-changing device.

But I think it'll likely bypass that. There are so many things that Jony could design here. I just can't wait to see what he comes up with. But we know it'll be always on. We know it'll be agent-first, so it's going to act like a person. You're going to talk to it like a person. You're going to feel like it's

you know, more like the cuddly teddy bear that you had when you were a kid and less like a piece of electronic equipment. Your guardian angel, there to support you and protect you if you need it. I also think that, strategically, if you look at the Fitbit and other past device innovations: you roll them out, you try to get market share, and then Apple or Google grabs the idea, adds it to the operating system of Android or iOS, and then you get crushed.

So you've got to actually get to market and get a footprint very, very quickly before the big guys come and copy it and try to roll it in with the OS. And I really think that go-to-market strategy is critical. And that's why they want to ship 100 million devices in the first iteration, and then add a trillion dollars of market cap so they're a permanent player in the device wars. That's really good strategy. So I'm excited about that, too.

Every day I get the strangest compliment. Someone will stop me and say, Peter, you have such nice skin. Honestly, I never thought I'd hear that from anyone. And honestly, I can't take the full credit. All I do is use something called OneSkin OS-01 twice a day, every day. The company is built by four brilliant PhD women who've identified a peptide that effectively reverses the age of your skin. I love it, and I'm a big fan of it.

I use this twice a day, every day. You can go to oneskin.co and write Peter at checkout for a discount on the same product I use. That's oneskin.co and use the code Peter at checkout. All right, back to the episode.

I'm going to jump into our next topic: chip wars. A lot going on in this. You mentioned this earlier, Dave: NVIDIA projects a trillion dollars of annual AI infrastructure spend by 2030. Remind everybody, this year, in 2025, the estimate is a billion dollars a day, which sounds extraordinarily impressive, right? A billion dollars a day is on the order of 300 billion a year. Let's listen to this quick video from Jensen.

Yeah, we're going to need a lot more computing. And we're fairly sure now that the world's computing capex, it's on its way to a trillion dollars annually by the end of the decade.

Let's leave it there. That's a lot of capital. You made a point earlier about this being wartime spending, and that we're effectively in a private pseudo-war. Can we win the race to AGI, ASI, whatever it might be? Dave, take us from here.

Well, just to be clear, this is the equivalent amount of dollars, inflation-adjusted, that we spent between 1941 and 1945 during World War II. So it's massive in scale, a huge mobilization. Now, at the time, that was 40% of GDP; today, it's more like 3% of GDP, because GDP has grown tremendously since then. So it's nothing like World War II in terms of, you know, everyone get on it.

But it is still an enormous amount of spend. And that's a trillion dollars annually and escalating beyond 2030. It still won't be enough because the use cases are bubbling up so quickly. And they get more intelligent and more useful as you iterate more.

which means you need more compute. The compute right now is very, very cheap compared to the value, the impact. You know, like protein folding: it was just pennies to solve 200 million proteins. So it's very, very cheap, but the demand for it is going to be astronomical. So we can't ramp up the spend fast enough to keep up with the use cases. Jensen's exactly right; if anything, it should be that target or more. Salim?

Well, at least we're unlocking it and making this type of stuff more available in the US and around the world. And I think governments will be forced into doing this just to keep up. If you don't have a strategic plan as a country to have a big AI data center infrastructure, you're going to be left behind very, very quickly. Mo, you made a comment earlier about the next Avatar movie costing a few thousand dollars rather than a few billion dollars.

And, you know, we've been waiting for Avatar. What are we up to, Avatar 3 coming out soon? Imagine, you know, 15,000 versions of Avatar, starring all our favorite friends. We're about to see a creative explosion. But we don't have the chip capacity. And in fact, one of the articles I saw recently said we're not going to be compute-limited; we're going to be energy-limited at the end of the day. Correct. Yeah.

I mean, we'll probably solve that too. Remember that we're going to apply a lot of intelligence to the way we design chips in a couple of years' time. But this is actually remarkable in every way. Again, remember my point of view: intelligence is a force with no polarity; applied for good, you get a utopia, right? So the more of it, the better. Absolutely no doubt about that. It is shocking, though, how quickly we're mobilizing on this.

You know, when you really think about it, just project a typical rate of advancement: how much of that hardware will actually be rendered obsolete a few years later because of the hardware that comes after? It is such an unusual dynamic. All of us, I think, lived through the dot-com bubble, and we saw that massive expansion, mostly deployed on the internet.

This one is just beyond our experience in any way possible. The speed of obsolescence is stunning. Unbelievable, yeah. So I want to get your opinion here. A couple of weeks ago, we had the entire US AI elite land in Saudi Arabia, in Riyadh, and in the Emirates. And ultimately, that was an effort to pair up the US and the Middle East,

rather than the Middle East being paired up with China, which was always in the balance, along with the capital flows and the commitments of capital. We saw 18,000 of the, you know, Blackwell GB300 chips being committed by Jensen

to build there. What was it like in Dubai? What was it like in the Middle East? How was this being covered on TV there? How was it being viewed?

So I don't know if many people know this, but the largest AI infrastructure in the world after America and China is in the UAE, which is a tiny country; from a size-of-investment point of view, it's quite massive. Between the UAE and Saudi Arabia, there is...

quite an arms race, if you want, in terms of who will build the bigger infrastructure. It's almost like how Dubai, and now Saudi Arabia, benefits from the fact that if you don't have a lot of legacy, you can build quite fast. And I think that's definitely something you see in AI infrastructure in general.

I do think that it is a very, very clever move to get the Middle East on the American side. You know, it is not a secret that in every AI meeting that I go to with any ministry whatsoever, there is always a Chinese side saying, at the very least, don't take sides. This is a message that is very clear from the Chinese players.

I have to say, though, that the expectation from the people of the Middle East is: we want to see what the U.S. will offer in return, so that the leaders can continue to invest in that way. It seems to me, and this may be speculation on my side, that what we've seen affect the U.S. treasury markets since the trade war started sort of requires an influx of funds to stabilize the markets and the dollar, in a way that could only happen with trillions of dollars. I think four and a half trillion dollars in total were committed here. So that's- Which is insane. I mean, it's real money. It's insane. And it's a magnificent move if you think about it. And most of it is not really announced in terms of what it is, which is why I suspect-

it would be to support the treasury markets somehow or some kind of an investment of that sort in the financial markets. The thing on the other hand is, this generation of leaders here in the Middle East, Mohammed bin Salman, Mohammed bin Zayed are the younger generation that are not as easy to sway on one side or the other because they have grown with enough, let's say,

recognition of their power, that they would require a return on that investment. So let's see what the next move on the chessboard will look like. And speaking of next moves, here's a story from Reuters: Chinese tech companies prepare for an AI future without NVIDIA.

So Alibaba, Tencent, Baidu are testing Chinese semiconductors to replace NVIDIA chips. These are coming out of Huawei. And I just want to I want to just address this policy move. If the U.S. starts to restrict export of technology to China, all this does is cause China to want to innovate around the U.S. And we've seen this before. Right. We saw this in the telecom industry.

And in the mobile phone industry, where, when we stopped exporting the technology to China, we saw Huawei in particular come in with massive telecom and mobile innovations and steal market share from the US. All of a sudden, the US, which should be the dominant provider of this technology to the world, now splits the world with another vendor.

Mo, I'm just going to come back to you on this, and then I'd love to hear from Dave and Salim. So again, you know, I have the privilege of being in touch with both sides, and I can guarantee you there is no coming back from this. Top-level executives in the Chinese tech world, clearly supported by instructions from the Chinese government,

are saying: we're not going to be dependent on the US's ability to control what chips we get. Within three to five years' time, they'll get to the majority of their needs, but the very, very high end, H100 level, they said is 10 years away.

And it is quite staggering when you really think about it, because I don't remember the exact number, but they said something like their imports of microchips, everything from a child's toy all the way to phones and data centers and so on, exceed their imports of iron and oil combined, right? Which is a massive, massive- In dollar value. In dollar value, yeah. Yeah.

which basically means that they see massive growth in their economy if they can make those chips locally and replace what they're getting externally from the rest of the world. Which, once again, also impacts the Taiwan story and the chip market globally, because you now have a new player that will do things the China way, right? So instead of a microchip being X number of dollars, it will now be X number of cents. And I have to say, when I saw this

conversation the first time, I was like, that was probably one of the dumbest moves of America to corner them into that place where they are forced to

play to their strengths. We've seen this over and over again with the satellite industry, the launch industry, all of these industries: this protectionist move just stimulates the entrepreneurial engine in China to replicate, duplicate, or just advance the whole field. Dave, how is this feeling for you? How do you think about this?

Well, you know, it's interesting that Mo said there's no coming back, because that kind of answers my question. But what you don't do is poke them and then do nothing. You either win or you don't win. If you're going to embargo, if you're going to basically declare economic war, you'd better declare it to win. In which case you have to embargo the chips, but you also have to stop the software flow, and also the EUV machines, and a few other things.

Otherwise, what did you just achieve? All you did is annoy them. So if you're going to play, you might as well play to win. I do think that there's a real risk to the U.S. in that we will say, well, they can't make two nanometer, they can't make one nanometer, but it's actually volume that's going to win. If you can manufacture an enormous number of five nanometer, even 10 or 20 nanometer chips, but 100 times, 1,000 times more of them,

That actually works fine for AI. It works really, really well, actually, especially for inference time AI. And so there's a danger that, you know, that's not the way it worked with, say, fighter jets. You know, the advanced fighter jet that was slightly better was just unstoppable. Well, this isn't going to be like that. You could win by sheer volume.

And then when I look at the way the U.S. innovation market works, you know, the reason everybody was in Saudi last week is because that's where the capital is. But don't we have much, much more capital here in the United States? Well, against that $1 trillion a year of investment, the U.S. venture capital industry as a whole is one-fifth of that.

So our entire venture capital universe is nowhere near as big as that $1 trillion a year investment that Jensen was talking about. Well, then where's all our money? Well, it's in pension funds, it's in endowments, it's in institutions. And when you go and talk to them and say, hey, why don't you unleash this

billion dollars. Like, well, no, we don't have an allocation for that. That's above our quota, you know, whatever. Like, oh, my God. So then you go to China or you go to the Middle East where there's, you know, a much smaller group of decision makers. Centralized control of capital. Yeah, exactly. And as Mo was saying, these are great investments. Like, why is Europe not making these great investments? Well, that's that's that's insane. Europe, Europe is destroying itself with its policies today.

Literally.

And you're seeing that with the CoreWeave IPO, you see that with GlobalFoundries. You know, why buy GlobalFoundries from AMD? Because the chips are going to be in incredible demand, and now we have a foundry. Salim, do you want to close us out on this one?

Two thoughts. One is, you know, I think the chip restrictions on China, I agree with Mo, are a really, really dumb idea, because it just forces the conversation, and now you've gone down a road you can't come back from. I note, just as an observation, that 95% of the agricultural drones in the U.S. are Chinese drones.

And so there's a huge amount of dependency, forget rare earths, et cetera, et cetera, in the engineering and build capability over there already in a bunch of sectors. And so we're playing with fire here. My bigger hope is that this entire US-China kind of conversation fades away with abundant energy.

You know, that when you have abundant energy, which is coming very shortly, then you can produce lots of things locally at low cost and you don't need to have this competitive approach to things, winner-take-all type approach. That's my hope. I may be still, I may be living in dreamland, but I'm hoping that's where we get to. This is still the fear and scarcity operating software of the human brain from 300,000 years ago. It's our amygdala running wild on all of this stuff. Yeah.

I'm going to continue on this chip conversation. So, TSMC accelerates efforts for one-nanometer production,

and setting up its gigafabs in Taiwan. One nanometer, that's extraordinary. Just for reference, the limits of physics is about the diameter of a silicon atom, and that's about a half a nanometer. So, I mean, we're living in this extraordinary science fiction universe where we're literally operating at an atomic scale, right?

So just to give people a quick overview, I found a few data points here. In 2014, we were at 14 nanometer chips from Intel. In 2016, we were at 10 nanometers from Samsung. In 2018, TSMC took the reins at 7 nanometers. They were at 5 nanometers in 2020,

3 nanometers in 2022. Today, we're at 2 nanometers. And again, the projection is 1 nanometer by 2030. All in one lifetime. All in one lifetime. You know, we all started on an 8088, remember? I remember...

the 6502 microprocessor. I was coding in hexadecimal on the 6502. Yes. I did the math. This is 60 trillion times faster than what we could do then. In the blink of an eye. Unbelievable. In my lifetime. And we actually coded some interesting stuff on that hardware, right? Yeah, we did. Absolutely. I mean, and it was...

So I remember at MIT, you know, remember the geek kits, Dave? We'd have these giant boxes full of chips: AND, OR, and NOR gates. And we'd...

literally with wires wired together. You know what always makes me laugh is that turbo button. Remember on the 386 where you went from 33 megahertz to 66 megahertz? Like, come on. Yeah, actually, Peter, with my geek kit, I built an inference time neural net accelerator, of course, and it has a multiplier in the middle of it.

And I didn't appreciate it. Like you have to strip so many wires and plug them into the circuit board. And then the EEPROM came out. The only time in my life I had two back-to-back all-nighters. Yes. You know what we all sound like? We all sound like a bunch of grumpy old men talking about glory days. But I love those days. I love those days. It was so much fun. But one nanometer...

Pushing up against the limit of physics. Incredible. Yeah, two silicon atoms or 10 hydrogen atoms. So everything's moving to Angstrom terminology now, which is a tenth of a nanometer, and it's the diameter of a hydrogen atom. But one nanometer is the gate width, and that's the physical limit. You can go down to 0.8 maybe, but it's basically the physical limit. The terminology is a little messed up because when they say one nanometer...

They're saying it's effectively as if you had one nanometer transistors, but they're actually building vertically with the FinFETs. And so the gate width is one nanometer. It's effectively the same as if you had one nanometer transistors, but you're going vertically. But that's the end of the line. But now the future belongs to vertical stacking. And, you know, Ray Kurzweil was right. We always find a way to continue innovating. That's not going to stop.

But it'll be in different dimensions. I remember talking to Ralph Merkle last year about this, and he said, as we hit the limits here, we'll go to thermodynamically reversible computation, where we don't generate any heat. And he foresaw a future of us using chemical bonds to store the ones and zeros.

That's a whole other level. He figured that would give us 10 orders of magnitude on Moore's law right there. 10 orders of magnitude. It was madness. 10 billion fold. It's incredible. Well, I think one of the takeaways, though, is we don't necessarily need it in order to continue making progress. Because a lot of the more esoteric ideas, remember gallium arsenide for the longest time was going to come online and be a blah, blah, blah. And it turned out that we just worked around it.

And then carbon nanotubes, we're going to do whatever. And like, well, that's not materialized. I think what's going to happen here is these will go vertical and they'll go massive in scale. We'll get the production costs way down. We'll build enormous data centers horizontally and also we'll build the chips vertically. And that's going to drive innovation for many years to come. And then the next thing may or may not be quantum. And we'll know in a year or two whether quantum is going to be the next thing.
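Taking the node timeline mentioned earlier at face value (14 nm in 2014 down to a projected 1 nm by 2030), here is a back-of-envelope sketch of what those shrinks imply for density. It assumes the idealization that transistor density scales with the inverse square of the node name, which, as noted in the conversation, modern marketing labels no longer literally reflect:

```python
# Back-of-envelope: relative transistor density across the nodes
# discussed above, under the idealized assumption that density
# scales as 1 / (node size)^2. Modern "nm" labels are partly
# marketing names, so treat this as illustrative only.

nodes_nm = {2014: 14, 2016: 10, 2018: 7, 2020: 5, 2022: 3, 2025: 2, 2030: 1}

def relative_density(node_nm: float, baseline_nm: float = 14.0) -> float:
    """Density relative to the 2014-era 14 nm node."""
    return (baseline_nm / node_nm) ** 2

for year, nm in sorted(nodes_nm.items()):
    print(f"{year}: {nm:>2} nm -> ~{relative_density(nm):.0f}x the 14 nm density")
```

Under that idealization, the projected 1 nm node packs roughly 200 times the density of 2014's 14 nm node, which is why, as discussed here, vertical stacking rather than further shrinks carries the roadmap from there.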

I'm going to speed through a few different topics here just to get us through some interesting things. We're starting to see AI being used to generate peer-reviewed scientific papers and breakthroughs. We're seeing DeepMind, through AlphaEvolve, literally solve problems and break math records. And Dave, I'm hoping that we'll get Alex Wissner-Gross to join us to talk about how AI is going to be solving math and physics and biology. I mean, I think one of the things that's underappreciated is how, over the next three years, AI is going to help us accelerate breakthroughs in science beyond anything we've ever seen before.

And here's another one. This is a demonstration of end-to-end scientific discovery with Robin, a multi-agent system. And what we're seeing here are closed-loop robotic and AI systems, where an AI proposes an experiment.

The robots then run the experiment 24/7, basically in a dark lab, gather the data, and feed it back to the AI, which updates its theory and runs the next experiment. We're seeing this in biology for sure. We'll see it in chemistry and material science. This is another hyper-acceleration in our scientific realm. Thoughts on this, gentlemen?
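The loop just described (AI proposes an experiment, robots run it, data feeds back, the model updates) can be sketched as a toy hill-climbing driver. Every name and function here is a hypothetical stand-in, not the actual Robin system:

```python
import random

def run_experiment(dose: float) -> float:
    """Stand-in for the robotic lab: a noisy response that peaks at dose = 3.0."""
    return -(dose - 3.0) ** 2 + random.gauss(0.0, 0.05)

def propose_next(best_dose: float, step: float = 0.5) -> float:
    """Stand-in for the AI planner: perturb the current best candidate."""
    return best_dose + random.uniform(-step, step)

random.seed(0)
best_dose = 0.0
best_response = run_experiment(best_dose)

for _ in range(200):                      # the real lab runs 24/7
    candidate = propose_next(best_dose)   # AI proposes the next experiment
    response = run_experiment(candidate)  # robots run it and gather the data
    if response > best_response:          # the AI updates its working theory
        best_dose, best_response = candidate, response

print(f"best dose found: {best_dose:.2f}")  # converges toward the true optimum of 3.0
```

A real closed-loop system replaces the toy response with wet-lab assays and the random perturbation with a literature-aware planning model, but the propose-run-update cycle is the same.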

I mean, this is where I know that not everybody considers themselves an entrepreneur. What did you say? 16% of America does today. Adults. Yeah.

Yeah, but this is like a field day because all of these areas, I don't want to get into the details of them, but they're all domain-specific. So if you can take the current AI, tune it, train it, get proprietary data, and take it down any of these paths, you get miles and miles ahead of the generic AI. And so it's just an entrepreneur's field day this next couple of years. And so these are just good case studies. I won't dwell on the specifics. You can read about them later. Yeah.

I'm really excited. This is probably, for me, the biggest small-kid-in-a-wonderland moment, where we can use these AIs to solve really deep physics problems, mathematics problems, scientific discoveries, because a human being trawling through data looking for patterns is terrible at it. We're bad at that. And this is where an AI is really, really good, especially going back retroactively and finding all the stuff in past experiments that we didn't see.

I think I'm unbelievably excited about this. Yeah. And this is helping humanity across the board, right? I like to say over and over again, and I had this conversation when I was in Hong Kong: despite the polarity in AI, breakthroughs in biology and longevity play equally well everywhere. A breakthrough in Boston plays equally well in Beijing. Right. So it helps us all when humanity is healthier and living longer, more vibrant lives.

Mo, anything on the science breakthroughs? - This is my favorite thing ever, AI or not, the possibilities that we have here, as we go into multidisciplinary sciences, which no human mind has the ability to grasp fully, but which is the nature of AI. I think, or at least I dream, that 2026 will be blasted with all of those new discoveries in science,

now that we're solving mathematics as well. I'm excited about having this podcast over the course of the next year as we start to share. Again, I'm grateful for our listeners. Our mission here is if you've got an hour or two hours to listen to the news instead of

allowing some editor somewhere, some producer to feed you all the dystopian news on the planet, let us share with you the incredible breakthroughs because you're not getting this anyplace else. The current news media is just playing with your amygdala, delivering negative news over and over and over again every hour into your living room in full color, and you're not hearing all the extraordinary breakthroughs coming our way.

I'm going to move to this last scientific subject, which is one of my favorites, which is some of the work being done by Demis Hassabis and others, is can we build a full-up virtual AI model of a human cell? Even more importantly, Mo, can we grab a skin cell from you, sequence your DNA, and build a virtual model of Moe?

Why would you ever do that? Of Dave and of Salim. Yeah, Dave and Salim is a better one, but yes. But of each of us, once you're able to do that, right, because we, you know, the cost of sequencing a genome went from billions.

to now a couple hundred bucks and from a year to seven hours. So we can sequence your genome, put it into a virtual model and then understand your biology, which medicine, which supplement, which chemical does or does not work for you and how exactly it works in your cells. I mean, this is the unlock for solving human disease and

the limits of longevity. And so for me, I'm super excited about this, and I think it will happen. It's just a question of time, really. Yeah, I think it'll happen relatively quickly too, with AI assist. And it is incredibly compute intensive, so it's a good case study in why those Middle East investments are the biggest no-brainer ever if you just work backwards from the implied amount of computation. But then

the benefit of solving virtually every disease is just so overwhelmingly valuable. It unlocks so much capital that you're not wasting on old age homes or on dealing with Alzheimer's or Parkinson's. And it allows humanity to be more productive. There was a study out of Oxford, London Business School, and Harvard that said for every additional year of health

that you give a population, it's worth $38 trillion to the global economy. I mean, if you want to solve the US economic issues and China's economic issues, make the population healthier and living longer. All right, so that wraps up AI and science. Let's talk about one last subject here today, which, as the father of two now 14-year-old boys, I think about a lot: reforming education.

I'm going to play a short video clip and then I'd love to hear your thoughts on this. I applied to probably around 18 schools and I was rejected from maybe around 15 of those. In fact, we have a map showing all the places that rejected you, not to make you feel worse looking at this list. Well, let's just share your statistics because I know that factors into college admissions, right? Your GPA and SAT score?

GPA was 4.42 weighted. SAT score was 1590. Okay. And in fact, I think we have that on the graphic just so folks can see that. So the point of this story is, are we...

going to move back to a meritocracy, where selection and admission into schools is based upon performance and not anything else? It's been a sticky subject coming out of the last four-plus years of DEI.

Dave, you're deeply embedded in AI and technology at MIT. How do you think about this? Well, schools are between a rock and a hard place, because they clearly had quotas, and now the rules say you're not supposed to do that. They're just totally stuck on this topic.

I don't think they do a great job of choosing who to let in toward any particular outcome they're targeting anyway. There's more to life than a 1590 SAT and a 4.42 GPA. And I see a lot of students underperforming in the classes that I teach precisely because they need to be creative.

And they need to build businesses and they need to recruit and they need to motivate people. And none of that gets measured well by these particular metrics. And I see too many people that are curated to be perfect applicants from age like six. And it doesn't go particularly well. So I don't know. I think the school should be allowed to choose.

with pretty broad brushes what they're trying to achieve with their student body. And so hopefully this doesn't go too far. Well, I think we have to reinvent the whole university system in the first place. And one of the questions I've always had for Salim, you and I both have boys the same age within a month of each other. Is university going to be a thing by the time they get to college age? And what will its purpose be? So Salim, how are you thinking about this?

I'm in the same mode as with the driverless cars: I'm desperately hoping the university system implodes in the next five years before my son has to go. Purely because taking a four-year degree to get credentialed in some domain that you're then supposed to be an active worker in for 40 years before you retire is completely out of date. The model of a university has not changed in 450 years. It's desperately broken. Deep research is fundamentally incredibly important. So I think

That's really killer. You know, the most interesting stat for me is that more than half the CEOs in Silicon Valley have a liberal arts degree.

And I find that really interesting, because different models of how you think drive creativity in product design and so on. So there's a vector there to be explored in terms of how to think about all this. And Mo, my experience is that the majority of leaders and influential individuals out of the Middle East come to the US for their degrees. What's the buzz there in the Emirates?

No, I think the reality is that I meet more MIT and Stanford graduates here than I do in America most of the time. It is quite staggering, actually, how people- That's fantastic. I mean, it has definitely revived the top management of the region very, very interestingly. I wonder, though, if Salim's wish will come true.

Because I don't know if the university systems would implode, but I definitely think our belief in universities would. And it really is quite an interesting thing because four years in a world that's moving at X speed is very different than a world that's moving at 10X speed.

And that's what we're seeing now. So if everyone's going into entrepreneurship, and everyone can use Lovable or Claude or whatever to write code or start businesses, and agents are everywhere, you'll probably see the entrepreneurship age go down to 16 and 14. And you may see a very, very different world. And I wonder. Dave, do you want to talk about that? I mean, you've seen the shift in terms of companies becoming unicorns a decade earlier. Yeah. Yeah.

Yeah, no, there's no doubt. But I think the universities are about friendships, and the friendships turn into company formations. And...

A lot of the universities don't recognize the degree to which that's the dominant factor that's keeping them alive. So if they want to survive this transition, they've got to embrace that as what they're delivering. It's credentials and it's relationships between human beings that are the dominant deliverable to the students. They need to be turned into entrepreneur boot camps. Yeah. Well, here we see another article, UAE to make ChatGPT Plus free to all of its citizens.

Again, these are forward-looking moves to help- And AI is mandatory education now for six years or higher, I think. Crazy. The statistic here is really important, right? The stat I heard was that a student with an AI is learning subjects between two to four times faster than going to school. And that's just going to overwhelm the existing system very quickly. I think this is an awesome move.

Well, also the concept of a curriculum, which is, you know, we only have so many teachers, so we can only afford 12 subjects, 15 subjects. With AI Assist, you can afford 20,000, a million different subjects. So not only are the students self-directing at their own pace, but they're also learning whatever they think is most relevant to their path, which is so much more effective than the old way. And hyper-personalized educations, you know, you're learning math online.

Focused on your favorite sports star, movie star, your favorite scenarios and stories. I'll close out with this provocative article that came out that around 5% of Thiel fellows have become billionaires. From Vitalik to Austin Russell.

And reminding people, the Thiel Fellowship pays you to drop out of college. So what does it say that we're wasting our most productive years on the university experience instead of starting companies? Fascinating thought. Dave, I'll start with you.

Well, first of all, the Thiel Fellowship doesn't try to teach you anything. It just selects you. And so it shows you the degree to which the schools are not selecting. You know, they're getting nowhere near a 5% unicorn rate coming out of the schools. But given the abundance of big data that's out there, the Thiel Fellowship can just be a better selection and application process

that covers topics like are you self-motivated, are you high energy, can you recruit, do you think through these AI topics, those are all baked into that selection process. So it's very viable for these credentials to replace the university degree as the credential that everybody wants.

So it's something the schools should really be aware of. It's not super hard to put together an AI-assisted analytic that tries to predict who's going to succeed as an entrepreneur, and that's all the Thiel Fellowship tries to do. It gives you a little bit of money and encourages you, but what they're really giving you is the credential. Salim, do you want to?

Yeah, I agree with Dave on this one. And I think this is not a negative comment on the university system; the Thiel Fellowship selects for people who are such outliers that the university system doesn't really accommodate them anyway. I think there's a systemic issue on the university side, which we've talked about already.

I know that more than half of CEOs in Silicon Valley have a liberal arts degree, because different ways of thinking are really important in product strategy and company strategy and so on. If you're doing, for example, a master's degree in neuroscience today, you're out of date by the time you finish your degree, because computational neuroscience is totally taking over the field. So undergrads and masters

are essentially out of date compared to learning with AI. If you're a student learning with AI, you're moving between two and four times faster than you would in school. And so all sorts of things will be hit by this. And as I mentioned, hopefully the university system implodes in the next five years, before I have to pay for my kid to go to school when he's 18.

I do think education and health care are the two massive industries and expenditures for people that are going to be completely disintermediated, disrupted, democratized and demonetized. I hope. All right. A lot of amazing stuff. Let's wrap around the horn here. Thoughts on today's conversations. Dave, can we start with you?

Yeah, Mo, it's been fantastic getting your perspective. I think... Oh, come on. I don't know if I have. Well, look, we're building toward an intentional future from here forward. It's what we decide to do and what we decide to build. So I think today we got a really good, deep understanding of some of the risks and things we need to start planning for.

And I do feel like everything is solvable if we work on it. And, you know, we're on exponential time now, so we have a very limited window of time to work on it. That was one of my great takeaways from today's pod. Much appreciated perspective. Yeah. Mo, how do you wrap up your thoughts from today? Yeah, I think just as it is a singularity, with an upside utopia and a downside dystopia, I think we should equally weigh both:

our views of the optimistic possibilities and the dangers or risks that we have to address. I have to say I'm extremely, extremely excited about the scientific breakthroughs that we can see from AI in the next couple of years. And I think the most important topic, even though we didn't cover much of it today, is AlphaEvolve. The whole idea of self-evolving AI is, in my mind,

probably the top topic to keep your eyes on in the next 12 months. Salim, close us out, buddy. I think we should have a whole episode just on AlphaEvolve. I think it's such an important topic, technically but also philosophically. I go back to the standard philosophical

basis for my optimism: that technology has always been a major driver of progress in the world. It might be the only major driver of progress we've ever seen, as Ray Kurzweil mentions a lot. And now we have AI uplifting and enabling all of these other technologies. So that's the reason for the huge optimism.

Yeah. Again, I've said this before. I think we're holding two potential futures for humanity in superposition. One is an extraordinary future of abundance, up-leveling of 8 billion humans on the planet, becoming a multi-planetary species.

That's Star Trek universe. The other one is not quite as pleasant. We'll call it the dystopian future. That's the Mad Max universe. Yeah. And Dave, one of the things that you said I want to just echo is we have the ability to create an intentional future. This future is not happening to us. We have the ability to guide where it goes. And I think all the entrepreneurs listening today understand

It's the most important thing that we can do. What is the vision that you want to create in the world? And you have the tools now, access to capital, access to compute, access to intelligence to go make that future happen. And I think it's not ours to abdicate to somebody else. I think we need to take action.

So a lot happening. All of you, respect you deeply. Love you all. And so, so excited to be on this journey. I look forward to seeing you guys in a week or so. Thanks for having us. Thank you guys for hooking up with me. Hey there, this is Salim bouncing through SFO today. I hope you enjoyed that episode. It's clear from the pace of change that every organization needs to change.

On June 19th, we're going to be having a two-hour workshop for $100 on how to turn yourself into an ExO. Come join us. It's the best $100 you'll spend all year. We've had rave reviews of these. We do it about monthly. We restrict it to a few dozen people, so it's a very intimate affair. And we'll be going through actual case studies and what you specifically can do. Don't miss it. Come along. June 19th. The link is below. See you there.
