
AI Doom vs Boom, EA Cult Returns, BBB Upside, US Steel and Golden Votes

2025/5/31

All-In with Chamath, Jason, Sacks & Friedberg

Topics
Jason Calacanis: I think AI could lead to mass unemployment, especially in occupations like truck driving and Uber driving. I worry that AI doomerism is not just a technical issue; there are deeper political and economic agendas behind it. We need to take the potential social impact of AI seriously and prepare for a possible wave of job losses. David Sacks: I think AI will actually create more jobs. AI raises the return on capital, so companies will invest more, which creates more jobs. I don't think AI will destroy employment the way people fear; on the contrary, it will drive economic growth and innovation. Chamath Palihapitiya: I think the current warnings about AI safety are closely tied to key fundraising moments for Anthropic. I see this as a very smart business strategy. These companies need to find an angle to attract investment, and exaggerating AI's risks is an effective way to do that. Moreover, the effective altruism movement is pushing for global AI governance, and behind that lie specific ideological and political agendas. David Friedberg: I think AI does carry real risks, such as potentially developing into a superintelligence beyond our control. But I think these concerns are overhyped. We can't focus on only one risk while ignoring others, such as China winning the AI race. Rather than over-regulating AI, I think we should focus on innovation and competition.


Transcript


All right, everybody, welcome back to the All In podcast, the number one podcast in the world. You got what you wanted, folks. The original quartet is here live from D.C. with a great shirt. Is that your haberdasher making that shirt, or is that a Tom Ford?

That white shirt is so crisp, so perfect. David Sacks. Are you talking about me? Your czar. Your czar. I'll tell you exactly what it is. I'll tell you what it is. You can tell me if it's right. Brioni. Yes, of course. Brioni. Brioni spread collar. Look at that. How many years have I spent being rich? When a man turns 50, the only thing he should wear is Brioni.

The stitching is... Looks very luxurious. That's how Chamath knew, right? Chamath, how'd you figure out the stitching? It's just how it lays with the collar. To be honest with you, it's the button catch. The Brioni has a very specific style of button catches. If you don't know what that means, it's because you're a fucking ignorant malcontent yourself. I'm looking it up right now. Right. Yeah, he's... I just asked ChatGPT. Rain Man David Sacks. We open source it to the fans. Love you, man. Queen of Quinoa.

All right, everybody, the All In Summit is going into its fourth year, September 7th through 9th. And the goal is, of course, to have the world's most important conversations. Go to allin.com slash yada, yada, yada to join us at the summit. All right. There's a lot on the docket, but there's kind of a very unique thing going on in the world, David. Everybody knows about AI doomerism. Basically, people who are concerned, rightfully so, that AI could have some significant impacts on the world.

Dario Amodei said he could see unemployment spike to 10 to 20% in the next couple of years. It's 4% now, as we've always talked about here. He told Axios that AI companies and government need to stop sugarcoating what's coming. He expects a mass elimination of jobs across tech, finance, legal, and consulting. Okay, that's a debate we've had here.

and entry-level workers will be hit the hardest. He wants lawmakers to take action and more CEOs to speak out. Polymarket thinks regulatory capture via this AI safety bill is very unlikely. The market "U.S. enacts AI safety bill in 2025" currently stands at a 13% chance. But, Sacks, you wanted to discuss this because it seems like there is...

more at work than just a couple of technologists with, I think we'd all agree, there are legitimate concerns about job destruction or job and employment displacement that could occur with AI. We all agree on that. We're seeing robo-taxis start to hit the streets. I don't think anybody believes that being a cab driver is going to exist as a job 10 years from now. So there seems to be something here about AI doomerism, but it's being taken to a

A different agenda, yeah? Well, first of all, let's just acknowledge that there are concerns and risks associated with AI. It is a profound and transformative technology. And there are legitimate concerns about where AI may lead. I mean, the future is unknown, and that can be kind of scary.

Now, that being said, I think that when somebody makes a pronouncement that says something like 50% of white-collar jobs are going to be lost within two years, that's a level of specificity that I think is just unknowable and is more associated with an attempt to grab headlines. And to be frank, if you go back and look at

at Anthropic's announcements or Dario's announcements, there is a pattern of trying to grab headlines by making the most sensationalist version of what could be a legitimate concern. If you go back three years ago, they created this concern that AI models could be used to create bioweapons, and they showed

what was supposedly a sample, I think, of Claude generating an output that could be used by a bioterrorist or something like that. And on the basis of that, it actually got a lot of play. And in the UK,

Rishi Sunak got very interested in this cause, and that led to the first AI safety summit at Bletchley Park. So that sort of concern really drove some of the initial AI safety concerns, but it turns out that that particular output was discredited. It wasn't true. I'm not saying that

AI couldn't be used or misused to maybe create a bioweapon one day, but it was not an imminent threat in the way that it was portrayed. There have been other examples of this. You know, obviously people are concerned about could the AI develop into a super intelligence that grows beyond our control? Could it lead to widespread job loss? I mean, these are legitimate things to worry about, but I think these concerns are being hyped up to a level that there's simply no evidence for. And the question is why?

And I think that there is an agenda here that people should be concerned about. So let's start with maybe, Friedberg, things that we all agree on here. There are millions of people who drive trucks and Ubers and Lyfts and DoorDashes. You would, I think, agree the majority of that work

In, say, five to 10 years, just to put a number on it, will be done by self-driving robots, cars, et cetera, trucks. Yeah, Dave? I think that might be the wrong way to look at it, or I wouldn't look at it that way, and maybe I'll just frame it a different way. Please. If I'm deploying capital, let's say I'm a CEO of a company, and I can now have software

that's written by AI, does that mean that I'm going to fire 80% of my software engineers? Basically, it means one software engineer can output, call it 20 to 50 times as much software as they previously could by using that software generation tool.

So the return on the invested capital, the money I'm spending to pay the salary of that software engineer, is now much, much higher. I'm getting much more out of that person because of the unlocking of productivity from the AI tool than I previously could. So when you have a higher ROI on deployed capital, do you deploy more capital or less capital?

Suddenly, you have this opportunity to make 20 times on your money versus two times on your money. If you have a chance to make 20 times on your money, you're going to deploy a lot more capital. And this is the story of technology going back to the first invention of the first technology of the caveman. When we have this ability to create leverage, humans have a tendency to do more and invest more, not less. And I think that's what's about to happen.

I think we see this across the spectrum. People assumed, oh, my gosh, software can now be written with one person, you can create a whole startup, you don't need to have venture capital anymore. In fact, what I think we're going to see is much more venture capital flowing into new tech startups, much more capital being deployed, because the return on the invested capital is so, so much higher because of AI. So generally speaking, I think that the premise that AI destroys jobs is wrong, because it doesn't take into account

the significantly higher return on invested capital, which means more capital is going to be deployed, which means actually far more jobs are going to be created, far more work is going to get done. And so I think that the counterbalancing effect is really hard to see without taking that zoomed-out perspective. To respond to Sacks's point,

I do think anytime you see a major change, socially, societally, there's a vacuum: how's the system going to operate in the future? And anytime there's a vacuum in the system, a bunch of people will rush in and say, I know how to fill that vacuum, I know what to do, because I am

smarter, more educated, more experienced, more knowledgeable, more moral, I have some superiority over everyone else. And therefore, I should be in a position to define how the new system should operate. And so there's a natural kind of power vacuum that emerges anytime there's a major transition like this. And

And there will be a scrambling and a fighting and a whole bunch of different representation. Typically, fear is a great way of getting into power. And people are going to try and create new control systems because of the transition that's underway. Okay, it's around the world. Yeah, I mean, so, Chamath, it's pretty clear, you know, Friedberg didn't answer this question specifically, so I'm going to give it to you again, you would agree that

jobs like driving things are going to go away. If we had to pick a number, somewhere between five and 10 years, the majority of those would go away. He's positioning, hey, a lot more jobs will be created because there'll be all this extra venture capital and opportunities, et cetera. But job displacement will be very real. And we're seeing, I think, job displacement. Now, you had a tweet recently, you know, you were talking about entry-level jobs and how that seems to be

going away in the white-collar space. So where do you land on job displacement? Friedberg's already kind of given the big picture here, but let's step back to, for people who are listening, who have relatives who drive Uber or a truck, or are graduating from college and want to go work at, you know, I don't know, the Magnificent Seven or in tech, and they're not hiring. And we know the reason they're not hiring is because they're leaning into AI. So let's talk about the job displacement in the medium term.

I'm going to ignore your question. Great. Why should you be any different than the other malcontents on this podcast? There's two people not wanting to answer the question about job displacement. Interesting trend. No, no, no. We'll go back to that. Let me start by just saying that it seems that these safety warnings

tend to be pretty coincidental with key fundraising moments in Anthropic's journey. So let's just start with that. And if you put that into an LLM and try to figure out if what I just said was true, it's interesting, but you'll find it's relatively accurate. I think that there is a very smart business strategy here. And I've said a version of this about the other companies at the foundational model layer that aren't

Meta and Google, because Meta and Google, frankly, sit on these money gushers where they just generate so much capital that they can fund these things to infinity. But if you're not them, so if you're OpenAI or if you're Anthropic, you have to find an angle. And I think the angles are slightly different for both, but I think what this suggests is that there's a pattern that exists. And I think that that explains why

some of the framing of what we see in the press, Jason, and why we get these exaggerated claims. Perfect. So there are people who are doing this for nefarious reasons, I guess, is where you're sort of getting at here. It's a way to pump up the market. No, it's not nefarious at all. It's smart. It's smart. If you fall for it, it's up to you. Yeah. Okay. Well, there's also an industrial complex, according to some folks, that are

backing this. If you've heard of effective altruism, that was like this movement of a bunch of, I don't know, I guess they consider themselves intellectuals, Sacks. And they were kind of backing a large swath

of organizations that I guess we would call in the industry astroturfing, or what do they call it when you make so many of these organizations that they're not real in politics and flooding the zone perhaps. So if you were to look at this article here, Nick, I think you have the AI existential risk

industrial complex graphic there. It seems like a group of people, according to this article, have backed, to the tune of $1.6 billion, a large number of organizations to scare the bejesus out of everybody and make YouTube videos, TikToks,

And they've made a map of it. There are some key takeaways from that article, where it says that it's an inflated ecosystem. There's a great deal of redundancy: same names, acronyms, logos with only minor changes, same extreme talking points, same group of people just with different titles, same funding source. There's a funding source called Open Philanthropy, which was funded by Dustin Moskovitz, who is one of the Facebook billionaires. Chamath, you worked with him, right? I mean, he was...

Wasn't he like Zuck's roommate at Harvard or something? And he's one of the first engineers who made a lot of money. So he's an EA, and he funded this group called Open Philanthropy, which then has become the feeder for essentially all of these other organizations, which are almost different fronts to basically the same underlying EA ideology. And what's interesting is that the guy who set this up for Dustin, Holden Karnofsky, who is a

major effective altruist that was doling out all the money, he's married to Dario's sister. And she's, I guess, associated with EA, and she was one of the co-founders of Anthropic. So these are not coincidences. I mean, the reality is there's a very specific

ideological and political agenda here. Now, what is that agenda? It's basically global AI governance, if you will. They want AI to be highly regulated, not just at the level of the nation-state, but, I'd say, internationally, supranationally. To what end?

Well, if you just do a quick search on global compute governance, it'll tell you what the key aspects are. So number one, they want regulation of computational resources. This includes access to GPUs. They want AI safety and security regulation. They want international, you could call them globalist, agreements.

And they want ethical and societal considerations or policy built into this. Now, what does that sound like? That sounds a lot to me like what the Biden administration was pursuing. Specifically, we had that Biden executive order on AI, which was 100 pages of burdensome regulation that was designed to promote AI safety, but had all these DEI requirements. So it led to woke AI. You remember when Google launched Black George Washington and so forth.

They had the Biden diffusion rule, which created this global licensing framework to sell GPUs all over the world. So extreme restrictions on proliferation of servers of computing power.

They created what's called the AI Safety Institute. And they, again, fostered these international AI summits. So if you actually look at what the Biden administration was tangibly doing in terms of policy, and you look at what EA's agenda is with respect to global compute governance, they were pushing hard on these fronts. And now if you look at the level of personnel, there are very, very powerful Biden staffers who now all work at Anthropic.

So probably the most powerful Biden staffer on AI over the past four years was a lawyer named Tarun Chhabra. And he now works at Anthropic for Dario. Elizabeth Kelly, who was the founding director of the AI Safety Institute in the government, now works at Anthropic. Like I mentioned, Dario's

sister is married to Holden Karnofsky, who doles out all the money to these EA organizations. So if you were to do something like create a network map, you would see very quickly that there are three key nodes here. There's the effective altruist movement, of which Sam Bankman-Fried is the most notable member, but of which I think Dustin Moskovitz is now the main funder. There's the Biden administration and the key staffers. And then you've got Anthropic. And it's a very tightly wound network.

Now, why does this matter? Well, let's get – yeah. Because – Also, the goals, I think, is – Yes. Well, the goal, like I said, is global compute governance. It's basically establishing national and then international regulations of AI. Now, here's the – But they would claim – let's just pause here for a minute. They would claim the reason they're doing it, and so we'll say if we believe this or not, but –

They are concerned about job destruction in the short term. They're also concerned, as science fiction as it is, that the AI, when we get to like a sort of generalized superintelligence, is going to kill humanity, that this is a non-zero chance. Elon has said this before. They've sort of taken it to almost like a certainty. We're going to have so many of these general intelligences. Isn't it odd that they only believe that when they're raising money?

Well, that's what I'm sort of getting at. I think they believe it all the time, but maybe the press releases are timed for the fundraisers. But yet they're building a really great product, right? Yeah, look, I mean... It is a great product. Claude kicks ass. I'm more interested in the political dimension of this. I'm not bashing a specific product or company. But look, I think that there is some non-zero risk of AI growing into a superintelligence that's beyond our control. They have a name for that. They call it X-risk, or existential risk.

I think it's very hard to put a percentage on that. I'm willing to acknowledge that is a risk. You know, I think about that all the time and I do think we should be concerned about it. But there are two problems, I think, with this approach. Number one is X-risk is not the only kind of risk. I would say that China winning the AI race is a huge risk. I don't really want to see a CCP AI running the world.

And if you hobble our own innovation, our own AI efforts in the name of stomping out every possibility of X risk, then you probably end up losing the AI race to China because they're not going to abide by those same regulations. So again, you can't optimize for solving only one risk while ignoring all the others. And I would say the risk of China winning the AI race is, you know, it might be like 30%, whereas I think X risk is probably a much lower percentage.

So there are other risks to worry about. And I do think that they are single-mindedly focused on

scaring people with some of these headlines around, first it was the bioweapons, then it was the super intelligence, now it's the job loss. And I think it's a tried and true tactic of people who want to give more power to the government to scare the population, right? Because if you can scare the population and make them fearful, then they will cry out for the government to solve the problem. And that's what I see here is that you've got this

elaborate network of front organizations, which are all motivated by this EA ideology. They're funded by a hardcore leftist. And by the way, I became aware of Dustin's politics because of the Chesa Boudin recall. I found out that he was a big funder of Chesa Boudin. Remember this? Dustin Moskovitz and Cari Tuna, his wife. Also, Reed Hastings just joined the board of

of Anthropic. Remember when he, back in 2016, tried to drive Peter Thiel off the board of Facebook for supporting Trump? So, you know, these are like committed...

leftists; they're Trump haters. But the point is that these are people who fundamentally believe in more government, in empowering government to the maximum extent. Now, my problem with that is I actually think that probably the single greatest dystopian risk associated with AI is the risk that government uses it to control all of us.

To me, you end up in some sort of Orwellian future where AI is controlled by the government. And out of all the risks we've talked about, that's the only one for which I've seen tangible evidence. So in other words, if you go back to last year when we had the whole woke AI, there was plenty of evidence that the people...

who were creating these products were infusing their left-wing or woke values into the product to the point where it was lying to all of us and it was rewriting history. And there was plenty of evidence that the Biden EO was trying to enshrine that idea. It was basically trying to require DEI be infused into AI models. And

It wanted to anoint two or three winners in this AI race. So I'm quite convinced that prior to Donald Trump winning the election, we were on a path of global compute governance where two or three big AI companies were going to be anointed as the winners. And the quid pro quo is that they were going to infuse those AI models with woke values. And there was plenty of evidence for that. You look at the policies, you look at the models. This was not a theoretical concern. This was real.

And I think the only reason why we've moved off of that trajectory is because of Trump's election. But we could very easily be moved back onto that trajectory. If you were to look at all three opinions here and put them together, they could all be true at the same time. You've got a number of people, some might call useful idiots, some might call just

you know, people with God complexes who believe they know how the world should operate. Effective altruism kind of falls into that. Oh, we can make a formula. That's their kind of idea: we can tell you where to put your money, rich people, in order to create the most good. And, you know, are these enlightened individuals with the best view of the world? They might be. Who knows? Maybe they're the smartest kids in the room, but they're kind of delusional. The second piece I'll do here is

I think you're absolutely correct, Chamath, that there are people who have economic interests who are then using those useful idiots and/or delusional people with God complexes to serve their need, which is to be one of the three winners. And then, Sacks, inherent to all of that is they have a political ideology. So why not use these

people with delusions of grandeur in order to secure the bag for their companies, for their investments, and secure their candidates into office so that they can block further people from getting H-100s because they literally want to- By the way, that's the part that's very smart about what they're doing because it's not like they're illiquid. They're full of liquidity in the sense that you're bringing in people that are very technically capable.

And you're setting up these funding rounds where a large portion goes right back out the door via secondaries. And so there's all these people that are making money having this worldview. And so to your point, Jason, it's going to cement that worldview, and then they are going to propagate it even more aggressively into the world. So I think the threshold question is, should you fear government overregulation, or should you fear autocomplete?

And I would say you should not be so afraid of the autocomplete right now. It may get so good that it's an AGI, but right now it's an exceptionally good autocomplete. Yeah, and I just think that, again, it's a tried and true tactic of people who want to

give immeasurably more power to the government: to try and make people afraid, and then stampede people into these policies. And it gives them power. Exactly. Now, why do I think this is important to talk about? On last week's show, I talked about the trip to the Middle East and how we started doing

these AI acceleration partnerships with the Gulf states who have a lot of resources, a lot of money, and they're intensely interested in AI. And the Biden administration was pushing them away. It basically said, you can't have the chips, you can't build data centers. And it was pushing them into the arms of China. The thing that I thought was so bizarre is that the various groups and organizations and former Biden staffers who wrote this policy have been agitating in Washington, and they've been trying to portray themselves as China hawks.

And I'm like, wait, this doesn't make any sense because this policy, again, there's basically two camps in this new Cold War. It's US versus China. You can pull the Gulf states into our orbit or you can drive them into China's orbit. So this to me just didn't make any sense. And what's happened is that frankly, you've got this EA ideology that's really motivating things, which is a desire to lock down compute, right? They're afraid of

proliferation. They're afraid of diffusion. That's really their motivation. And they're trying to rebrand themselves as China hawks because they know that in the Trump administration, that idea is just not going to get much purchase, right? And your position as czar is a level playing field. People compete, and the good guys, you know, the West, should be supported

to hit artificial general intelligence as fast as possible so the bad guys, China, don't get it first. That's a...

Well, open competition. I don't know if I would frame it around AGI specifically, but what I would say is that, look, I think our policy should be to win the AI race because the alternative is that China wins it, and that would be very bad for our economy and our military. How do you win the AI race? You got to out-innovate. You got to have innovation. That means we can't have over-regulation, red tape. We got to build out the most AI infrastructure, data centers, energy,

which includes our partners. And then third, I think it means AI diplomacy because we want to build out the biggest ecosystem. We know that biggest app store wins, biggest ecosystem wins, right? And the policies under the Biden administration were doing the opposite of all those things. But again, you have to go back to what was driving that. And it was not driven by this China hawk mentality. That is now a convenient, reasonable,

rebranding. It was driven by this EA ideology, this doomerism. And so this is why I'm talking about it is I want to expose it because I think a lot of people on the Republican side don't realize where the ideology is really coming from and who's funding it. They're obviously Trump haters and they need to be

Friedberg, when we look at... They do. They need to be loomered. I mean, you know. Friedberg, I want to come back around again because I respect your opinion on

you know, how close we are to turning certain corners, especially in science. So I understand, big picture, you believe that the opportunity will be there. Hey, we got people out of the fields in the agricultural revolution, we put them into factories in the industrial revolution, then we went to this information revolution. So your position is we will have a similar transition and it'll be okay.

Do you not believe that the speed, because we've talked about this privately and publicly on the pod, that the velocity at which these changes are occurring, you would agree, is faster than the industrial revolution, much faster than the information revolution? So let's one more time talk about job displacement. And I think the real concern here for a group of people who are buying into this ideology

is specifically unions, job displacement. This is something the EU cares about. This is something the Biden administration cares about. If truck drivers lose their jobs, just like we went to bat previously for coal miners, and there were only 75,000 or 150,000 in the country at the time, but it became the national dialogue. Oh my God, the coal miners.

How fast is this going to happen? One more time on drivers specifically. Okay, coders, you think there'll be more code to write, but driving, there's not going to be more driving to be done. So is this time different in terms of the velocity of the change and the job displacement in your mind, Friedberg?

The velocity is greater, but the benefit will come faster. So the benefit of the Industrial Revolution, which ultimately drove lower-priced products and broader availability of products through manufacturing, was one of the key outputs of that revolution, meaning that we created a consumer market that largely didn't exist prior. Remember, prior to the Industrial Revolution, if you wanted to buy a table or some clothes, they were handmade.

They were kind of artisanal. Suddenly, the Industrial Revolution unlocked the ability to mass produce things in factories. And that dropped the cost and the availability and the abundance of things that everyone wanted to have access to, but they otherwise wouldn't have been able to afford. So suddenly, everyone could go and buy blankets and clothes and canned food.

and all of these incredible things that started to come out of this industrial revolution that happened at the time. And I think that folks are underestimating and under-realizing the benefits at this stage of what's going to come out of the AI revolution, and how it's ultimately going to benefit people's availability of products, cost of goods, access to things. So the counterbalancing force, J-Cal, is deflationary, which is, let's assume that the cost of everything comes down by half,

That's a huge relief on people's need to work 60 hours a week. Suddenly, you only need to work 30 hours a week. And you can have the same lifestyle or perhaps even a better lifestyle than you have today. So the counter argument to your point, and I'll talk about the pace of change and specific jobs in a moment. But the counter argument to your point is that there's going to be this

cost reduction and abundance that doesn't exist today. Give an example. Let's give some examples where we could see it: automation in food prep. So we're seeing a lot of restaurants install robotic systems to make food. And people are like, oh, job loss, job loss.

But let me just give you the counter side. The counter side is that the cost of your food drops in half. So suddenly, you know, all the labor costs, it's built into making the stuff you want to pick up. Everyone's freaking out right now about inflation. Oh my god, it's $8 for a cup of coffee. It's $8 for a latte. This is crazy, crazy, crazy. What if that dropped down to two bucks? You're gonna be like, man, this is pretty awesome with good service and good experience and don't make it all dystopian. But suddenly, there's going to be this like incredible reduction or deflationary effect in the cost of food. And

And we're already starting to see automation play its way into the food system to bring inflation down. And that's going to be very powerful for people. Shout out to Eats, CloudKitchens, and Cafe X. We all took swings at the bat at that exact concept: that it could be done better, cheaper, faster. One of the amazing things about these vision action models that are now being deployed is you can rapidly learn using vision systems and then deploy automation systems in those sorts of environments where you have a lot of kind of repetitive tasks

that the system can be trained and installed in a matter of weeks. And historically, that would have been a whole startup that would have taken years to figure out how to get all these things together and custom program it, custom code it. So the flip side is like when Uber hit, those people were not drivers. Think about the jobs that all those people had prior to Uber coming to market. And then the reason they drove for Uber is they could make more money driving for Uber or now driving for Uber Eats or DoorDash and the flexibility. So their lifestyle got better. They had all of this more control in their life. Their incomes went up.

And so there's a series of things that you are correct, won't make sense in the future, from a kind of standard of work perspective. But the right way to think about it is opportunity gets created, new jobs emerge, new industry, new income, costs go down. And so I keep harping on this, that it's really hard today to be very prescriptive to Sachs's point about what exactly is around the corner. But it is an almost certainty that

that what is around the corner is more capital will be deployed. That means the economy grows. That means there's faster growth of new jobs, new opportunities for people to make more money, to be happier in the work that they do. And the flip side being things are going to get cheaper. So I mean, we're waxing philosophical here, but I think it's really key, because you can focus on the one side of the coin and miss the whole other side.

And that's what a lot of journalists and commentators and fearmongers do: they miss that other side. Got it. Well said, Friedberg. Well said. I think I've heard Satya turn this question around about job loss, saying, well, do you believe that GDP is going to grow by 10% a year?

Because what are we talking about here? In order to have the kind of disruption that you're talking about, where, I don't know, 10 to 20% of knowledge workers end up losing their jobs, AI is going to be such a profound force that it's going to have to create GDP growth like we've never seen before. That's right. So it's easier for people to say, oh, well, 20% of people are going to lose their jobs. But wait, we're talking about a world where the economy is growing 10% every year. Do you actually believe that's going to happen? That's more income for everyone. That's new jobs being created.

It's an inevitability. We've seen this in every revolution. You know, prior to the Industrial Revolution, 60% of Americans worked in agriculture. And when the tractor came around and factories came around, those folks got to get out of doing manual labor in the fields, where they were literally, you know, tilling the fields by hand. And they got to go work in a factory where they didn't have to do manual labor to move things. Yeah, they did things in the factory with their hands.

But it wasn't about grunt work in the field all day in the sun. And it became a better standard of living. It became new jobs. And today we think about- It became a five-day work week. It went from a seven-day, six or seven-day work week to five. 100 hours a week to 45, 50 hours a week. And now I think the next phase is we're going to end up in less than 30 hours a week with people making more money and having more abundance for every dollar that they earn with respect to what they can purchase. Yes.

and the lives they can live, that means more time with your family, more time with your friends, more time to explore interesting opportunities. So, you know, we've been through this conversation a number of times. I know I'm not being too prescriptive. No, it's important to bring it up, I think, and really unpack it because the fear is peaking now, Sachs. People are using this moment in time to scare people that, hey, the jobs are going to go away and they won't come back. But what we're seeing on the ground, Sachs, is

I'm seeing many more startups getting created and able to accomplish more tasks and hit a higher revenue per employee than they did in the last two cycles. So it used to be, you know, you try to get to a quarter million in revenue per employee, then 500. Now we're regularly seeing startups hit a million dollars in revenue per employee, something that was rarefied air previously, which then speaks to your point, Friedberg, that there'll be more abundance. There'll be more capital generated, more capital deployed. Yeah.

Because yes, more capital deployed for more opportunities, but you're going to need to be more resilient, I think. Yeah. I think it's actually very hard to completely eliminate a human job. The ones that you cited, and J-Cal, you keep citing the same ones, because I actually don't think there are that many that fit in this category: the drivers, and maybe level-one customer support, because those jobs are so monolithic. But when you think about even like what a salesperson does, right? It's like, yes, they spend

a lot of time with prospects, but they also spend time negotiating contracts and they spend time doing post-sale implementation and follow-up and they spend time learning the product and giving feedback. I mean, it's a multifaceted job and you can use AI to automate pieces of it,

But to eliminate the whole job is actually very hard. And so I just think this idea that, boom, 20% of the workforce is going to be unemployed in two years, I just don't think that it's going to work that way. But look, if there is widespread job disruption, then obviously the government's going to have to react and we're going to be in a very different societal order. But my point is, you want the government to start reacting now before this is actually happening? Yeah, we don't need to be precogs and predict it. Yeah. It's a total power grab. It's a total power grab to give

the government and these organizations more power before the risk is even manifested. And let me say this as well, with respect to all these regulations that were created, the 100-page Biden EO and the 200-page diffusion rule, none of these

regulations solve the X-risk problem. None of these things actually would prevent the most existential risks that we're talking about. They don't solve for alignment. They don't solve for the kill switch. Yeah, when someone actually figures out how to solve that problem, I'm all ears. Look, I'm not cavalier about these risks. I understand that they exist, but I'm not in favor of the fear-mongering. I'm not in favor of giving all this power to the government before we even know how to solve these problems. Chamath, you did

a tweet about entry-level jobs being toast. So I think there is a nuance here, and both parties could be correct. I think the job destruction is happening as we speak. I'll just give one example and then drop to you, Chamath. One job in startups that's not driving a car or super entry-level was people would hire consultants to do recruitment and to write job descriptions. Now, I was at a dinner last night talking to a bunch of founders here in Singapore, and I said,

I said, how many people have used AI to write a job description? Everybody's hand went up. I said, how many of you with that job description, was that job description better than you could have written or any consultant? They all said, yes, 100%, AI is better at that job. That was a job, a high-level HR recruitment job, or an aspect of it. So that was half the job, a third of the job. To your point, the chores are being automated. So I do think we're going to see entry-level jobs, Chamath.

The ones that get people into an organization, maybe they're going away. And was that your point of your tweet, which we'll pull up right here? If a GPT is a glorified autocomplete, how did we used to do glorified autocomplete in the past? It was with new grads. New grads were our autocomplete.

And to your point, the models are good enough that it effectively allows a person to rise in their career without the need of new grad grist for the mill, so to speak. So I think the reason why companies aren't hiring nearly as many new grads is that the folks that are already in a company can do more work with these tools. And I think that that's a very good thing. So you're generally going to see

OPEX as a percentage of revenue shrink naturally, and you're going to generally see revenue per employee go up naturally. But it's going to create a tough job market for new grads in the established organizations. And so what should new grads do? They should probably steep themselves in the tools and go to younger companies or start a company. I think that's the only...

solution for them. Bingo. The most important thing for whether there are jobs available for new grads or not is whether the economy is booming. So obviously in the wake of a financial crisis, the jobs dry up because everyone's cost cutting and those jobs are the first ones to get cut. But if the economy is booming, then there's going to be a lot more job creation. And so

Again, if AI is this driver and enabler of tremendous productivity, that's going to be good for economic growth. And I think that that will lead to more company formation, more company expansion at the same time that you're getting more productivity. Now, to give an example, one of the things I see a lot discussed online about these coding assistants is that they make junior programmers much better.

Because, you know, if you're already like a 10x programmer, very experienced, you already knew how to do everything. And you could argue that the people who benefit the most are the entry level coders who are willing to now embrace the new technology, and it makes them much more productive. So in other words, it's a huge leveler.

And it takes an entry-level coder and makes them 5x or 10x better. So look, this is an argument I see online. The point is just, I don't think we know how this cuts yet. I agree. And I just think there's like this, this doomerism is premature and it's not a coincidence that it's being funded and motivated by this hardcore ideological element. I'll tell you my hiring experience.

We have about 30 people at 8090. And the way that I found it to work the best is you have senior people act as mentors, and then you have an overwhelming corpus of young, very talented people who are AI native. And if you don't find that mix, what you have instead are L7s from Google and Amazon and Meta, who come to you with extremely high salary demands and stock demands,

And they just don't thrive. And part of why they don't thrive is that they push back on the tools and how you use them. They push back on all these things that the tools help you get to faster. This is why I think it's so important for young folks to just jump in with two feet and be AI native from the jump because you're much more hireable, frankly, to the emergent company.

And the bigger companies, you'll have a lot of these folks that see the writing on the wall, may not want to adapt as fast as otherwise. Another way, for example, that you can measure this is if you look inside your company,

on the productivity lift of some of these coding assistants for people as a distribution of age, what you'll see is the younger people leverage it way more and have way more productivity than older folks. And I'm not saying that as an ageist comment. I'm saying that it's an actual reflection of how people are reacting to these tools. What you're describing is a paradigm shift. It is a big leap. It's like when I went to college, when I took computer science,

it was object-oriented programming. It was like C++. It was compiled languages. It was gnarly. It was nasty work. And then you had these high-level abstracted languages. And I remember at Facebook, I would just get so annoyed because I was like, why is everybody using PHP and Python? This is like not even real. But I was one of these old Luddites who didn't understand that I just had to take the leap. And what it did was it grew the top of the funnel of the number of developers by 10x. And as a result,

What you had were all of these advancements for the internet. And I think what's happening right now is akin to the same thing, where you're going to grow the number of developers upstream by 10x. But in order to embrace that, you just have to jump in with two feet. And if you're very rigid in how you think the job should be done, technically, I think you're just going to get left behind. Just a little...

Interesting statistic there. Microsoft announced 6,000 job layoffs, about 3% of their workforce, while putting up record profits, while being in an incredible cash position. I think it's like total confirmation bias. It's like now every time there's a layoff announcement, people try to tie it to AI to feed this Doomer story.

I don't think that's an AI story. Well, actually, I think it... I don't think it's an AI story. I think it is because the people they're eliminating are management. And I think the management layer becomes less necessary in an AI world. It was entry-level employees. Now you're saying it's management. This is total confirmation bias. No, no. I think those are two areas that specifically get eliminated. Entry-level, it's too hard to give them the grunt work. And then for the managers who are old and have been there for 20 years... Hold on, let me finish. For those people, I think...

They are unnecessary in this new AI monitoring world. AI can't do management. What are you talking about? What is the AI agent that's doing management right now in companies?

This theory doesn't even make sense. Oh, no, it totally does. There are tools now that are telling you, these are the most productive people in the organization. Chamath just outlined who's shipping the most, et cetera, who's using the tools. And then people are saying, well, why do we have all these highly priced people who are not actually shipping code, who are L7s, et cetera. You're totally falling for some sort of narrative here. This makes no sense. I don't think I am. Yeah. Let me be very clear what I'm saying. What I am saying is,

AI natives are extremely productive. They use these tools. They're very facile with them. I think it's very reductive, but the older or more established in your career you are in technical roles, the harder and harder it is to embrace these tools in the same way. Now, how does it play out in terms of

I think that just these tools are good enough where the net new incremental task-oriented role that would typically go to a new grad, a lot of that can be defrayed by these models. That's what I'm saying very specifically. And I don't think that speaks to management. I agree with Sachs. It has nothing to do with management. But Sergey said, Freeberg, when...

He came to our F1 that management would be the first thing to go. I was talking to some entrepreneurs last night, again, here in Singapore, and they are taking all the GitHub and Jira cards and things that have been submitted, plus all the Slack messages in their organization, and they're putting them into an LLM and having it write management reports of who is the most productive in the organization. And in the new version of Windows, it's monitoring your entire desktop, Friedberg.
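What that kind of pipeline might look like can be sketched in a few lines. This is a hypothetical illustration, not what any of these companies actually run: the record shapes and the prompt wording are assumptions, and the assembled prompt would be sent to whatever LLM API you use.

```python
# Hypothetical sketch: tally activity from exported Jira cards and Slack
# messages, then build a prompt an LLM could turn into a management report.
from collections import Counter

def activity_counts(cards, messages):
    """Per-person counts of shipped cards and chat messages (illustrative)."""
    shipped = Counter(c["assignee"] for c in cards if c.get("status") == "done")
    chatted = Counter(m["author"] for m in messages)
    return shipped, chatted

def build_report_prompt(shipped, chatted):
    """Assemble a plain-text prompt summarizing the raw activity tallies."""
    people = sorted(set(shipped) | set(chatted))
    lines = [f"- {p}: {shipped[p]} cards shipped, {chatted[p]} messages"
             for p in people]
    return ("Given this activity summary, who are the most productive and "
            "most underrated people on the team?\n" + "\n".join(lines))

# Toy data standing in for real Jira/Slack exports.
cards = [{"assignee": "ana", "status": "done"},
         {"assignee": "ana", "status": "done"},
         {"assignee": "bo", "status": "open"}]
messages = [{"author": "bo"}, {"author": "ana"}]
prompt = build_report_prompt(*activity_counts(cards, messages))
# `prompt` would then go to an LLM; the model writes the narrative report.
```

The tallies themselves are mechanical; the leverage is in the last step, where the model turns raw counts into the kind of narrative judgment a manager would otherwise write.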

Management is going to know who in the organization is actually doing work, what work they're doing, and what the result of that work is through AI. That is the future of management. And you take out all bias, all loyalty, and the AI is going to do that. I couldn't disagree with you more, Sax, on that. But Friedberg, do you want to wrap this up on this topic? That wasn't my point. My point is that managers are not losing their jobs because AI is replacing them. I didn't say that

AI wouldn't be a valuable tool for managers to use. Sure, AI will be a great tool for managers, but we're not anywhere near the point where managerial jobs are being eliminated because they're getting replaced by AI agents. We're still at the chatbot stage of this.

Literally, Sergey said he took their internal Slack, went into like a dev conversation, said, who are the underrated people in this organization who deserve a raise? And it gave him the right answer. That doesn't allow you to cut 6,000 people. I think it's happening as we speak. It's just not over. You fell for this narrative. You grasped onto this Microsoft restructuring where they eliminated 6,000 roles and you're trying to attribute that to AI now.

I think it has to do with AI. I think management is looking at it saying, we are going to replace these positions with AI. We might as well get rid of them now. It is in flux. We'll see who's right in the coming months. Can I make another comment? Friedberg, wrap us up here so we can get on to the next topic. This is a great topic. I want to make one last point, which I think, and Sax, you may not appreciate this, so we can have a healthy argument about this.

I think in the same way that all of this jobs are going to get lost to AI fearmongering, there's a similar narrative that I think is a false narrative around there's a race in AI that's underway between nation states. And the reason I think it's false is if I asked you guys the question, who won the Industrial Revolution?

The Industrial Revolution benefited everyone around the world. There are factories and there's a continuous effort and continuous improvements in manufacturing processes worldwide. That is a continuation of that revolution. Similar if I asked who won the internet race.

There are businesses built out of the US, businesses built out of China, businesses built out of India and Europe that have all created value for shareholders, created value for consumers, changed the world, etc. And I think the same is going to happen in AI. I don't think that there's a finish line in AI. I think AI is a new paradigm of work, a new paradigm of productivity, a new paradigm of business, of the economy, of livelihoods, of pretty much everything

every interaction humans have with ourselves and the world around us will have in its substrate AI. And as a result, I think it's going to be this continuous process of improvement. So I'm not sure, look, there are different models. And you can look at the performance metrics and models, but you can get yourself spun up into a tizzy over which model is ahead of the others, which one's going to quote, get to the finish line first. But I think at the end of the day, the abundance and the economic prosperity that will arise

from the continuous performance improvements that come out of AI and AI development will benefit all nation-states, and actually could lead to a little bit less of a resource-constrained world, where today we're all fighting over limited resources and there are nation-state definitions around who has access to what, and perhaps more abundance, which means more peace

and less of this kind of resource-driven world. Sacks, your thoughts on the Kumbaya theory espoused by Friedberg? Yeah, exactly. I'll partially agree in the sense that I don't think the AI race is a finite game. It's an infinite game. I agree that there's no finish line.

But that doesn't mean there's not a race going on. So, for example, an arms race would be a classic example of a competition between countries to see who is stronger to basically amass power. And they might be neutralizing each other. The balance of power may stay in equilibrium, even though both sides feel the need to constantly uplevel their arms, their power. And so I think that to use the term that Mearsheimer used at the All-In Summit, we are in an iron cage.

The US and China are the two leading countries in the world economically, militarily, technologically. They both care about their survival. The best way to ensure your survival in a self-help world

is by being the most powerful. And so these are great powers who care a lot about the balance of power, and they will compete vigorously with each other to maintain the greatest balance of power between them. And high tech is a major dimension of that competition. And within high tech, AI is the most important field. So look, there is going to be an intense competition around AI. Now, the question is, how does that end up? I mean, it could end up in a tie.

It could end up in a situation where both countries benefit, maybe open source wins, maybe neither side gains a decisive advantage, but they're absolutely going to compete because neither one can afford to take the risk

that the other one will develop a decisive advantage. - Prisoner's dilemma. - Nuclear proliferation is a good analogy. I would argue nuclear deterrence led to a more peaceful world in the 20th century. I mean, is that fair to say, Sachs, that ultimately-- - Well, what happened with nuclear is that the actual underlying technology hit an asymptote. - Right. - It plateaued, right?

And so we ended up in a situation where, in the case of the United States versus the Soviet Union, where both sides had enough nukes to blow up the world many times over. And there wasn't really that much more to innovate. So, you know, the underlying technological competition had ended. The dynamic was more stable and they were able to reach an arms control framework to sort of control the arms race, right? I think AI is a little different. We're in a situation right now where the technology is changing very, very rapidly.

and it's potentially on some sort of exponential curve. And so therefore, being a year ahead, even six months ahead, could result in a major advantage. I think under those conditions, both sides are going to feel the need to compete very vigorously. I don't think they can sign up for an agreement to slow each other down. I just don't think... This is a system of productivity, right? It was not a system of productivity. It was not a system of economic growth. It was a system of literally destruction. And this is quite different. This is a system of making more with less.

Which unleashes benefits to everyone in a way that perhaps should be calming down the conflict and the tension between nations. You've got to admit that there is a potential dual use here. There's no question that the armies of the future are going to be drones and robots and they're going to be AI powered. And as long as that's the case, these countries are going to compete vigorously to have the best AI. And they're going to want their...

leaders or national champions or startups and so forth to win the race. What's the worst case, Sax, if China wins the AI race? What is the worst case scenario? Ask what it means first. That's literally what I'm asking. What would that scenario be? Would they invade America and dominate us forever? What does it mean to leave? What does it mean to win? To me, it would mean that they achieve a decisive advantage in AI such that we can't leapfrog them back.

And an example of this might be something like 5G, where Huawei somehow leapfrogged us, got to 5G first and disseminated it through the world. They weren't concerned about diffusion. They were interested in promulgating their technology throughout the world. So if the Chinese win AI, they will sell more products and services around the globe than the US. This is where we have to change our mindset towards diffusion. I would define winning as the whole world consolidates around the American tech stack. Right.

They use American hardware in data centers that, again, are fundamentally powered by American technology. And, you know, just look at market share, okay? If we have like 80 to 90% market share, that's winning.

If they have 80% market share, then we're in big trouble. So it's very simple. It means like- Yeah, but if the market grows by 10x, it doesn't matter, because the world will have, every individual in every country will now have more. They will have a more prosperous life. And as a result, it's not necessarily the framing about if we don't get there first, we are necessarily going to lose. I get that there's an edge case of conflict or what have you, but I do think that there's a net benefit where the whole world suddenly is in this more prosperous state

And this is a classic example of a dual use technology where there are both economic benefits and there are military benefits. Yes. GPS would come to mind in this example, right? Like my summary point is just that it's not all about a losing game with respect to this quote race with other nation states. But at the end of the day, yes, there is risk. But I do think that if the pace of improvement changes,

stays on track like it is right now, holy shit, I think we're in a pretty good place. That's just my point. Some positivity. Look, I hope that the AI race stays entirely positive and it's a healthy competition between nations and the competition spurs them on to develop more prosperity for their citizens. But as we talked about at the AI summit, there's two ways of looking at the world. There's kind of the economist way that Jeffrey Sachs was talking about and then there's the balance of power way, a realist way, which Mearsheimer was talking about. And

When economic prosperity and survival or balance of power come into conflict, it's the realist view of the world that it's the balance of power that gets privileged. And I just think that's the way that governments operate is that prosperity is incredibly important. We want economic success, but power is ultimately privileged over that. And this is why we're going to compete vigorously in high tech. That's why there is going to be an AI race.

Okay, perfect segue. We should talk a little bit about what was the topic of discussion. Yesterday, I had a lunch with a bunch of family offices and capital allocators, government folks here in Singapore. And they were talking about our discussion last week about the big, beautiful bill and the debt here in the United States. It's permeating everywhere. The two conversations at every stop I've made here are the big, beautiful bill and

and the balance sheet of the United States, as well as tariffs. So we need to maybe revisit our discussion last week. Chamath, you and Friedberg did an impromptu call with Ron Johnson over the weekend, which then spurred him going on 20 other podcasts to talk about this. Stephen Miller from the administration has been tweeting some corrections, or his perceived corrections, about the bill. And Sacks, I think you've also started tweeting this.

Where do we want to start? Maybe... Well, I think there are just a couple of facts that should be cleaned up because... Okay. So facts from the administration, their view of our discussion. Well, even though I was defending the bill last week, on the whole, I wasn't saying it was perfect. I was just saying it was better than the status quo. Yeah. You were clear about that. Yeah. But even I, in doing that, was conceding some points that I think were just factually wrong. And the big one was that I said I was disappointed that

the DOGE cuts weren't included in the Big Beautiful Bill. What Stephen Miller has pointed out

is that reconciliation bills can only deal with what's called mandatory spending. They can't deal with what's called discretionary spending. And since the Doge cuts apply to discretionary spending, they just can't be dealt with in a reconciliation bill. They have to be dealt with separately. There can be a separate rescission bill that comes up, but it can't be dealt with in this bill. And just to be very clear, look, if the Doge cuts don't happen through rescission, I'm going to be very disappointed in that. I really want the Doge cuts to happen, but it's just a fact.

that the DOGE cuts cannot happen in the big, beautiful bill. It's not that kind of bill. And I think it's therefore wrong to blame the big, beautiful bill for not containing DOGE cuts when the Senate rules don't allow that. You know, it all goes back to the Byrd rule. There are only specific things that can be dealt with through reconciliation, which is this 50-vote threshold,

and it has to be quote-unquote mandatory spending. Discretionary cuts are dealt with in annual appropriations bills that require 60 votes. Now look, this is kind of a crazy system. I don't know exactly how it evolved. I guess Robert Byrd is the one who came up with all this stuff, and maybe they need to change the system. But it's just wrong to blame the big, beautiful bill for not containing the Doge cuts. That's just a fact. So the other thing is that the BBB does actually cut spending. It's just not scored that way because when the bill...

removes the sunset provision from the 2017 tax cuts, the CBO ends up scoring that as effectively a spending increase. But tax rates are simply continuing at their current level. In other words, at this year's level. So if you used the current year as your baseline, okay, and then compared it to spending next year,

it would score as a cut in spending. So it's just not, it's not correct to say the bill increases spending. It does actually result in a mandatory spending cut, but it's not getting credit for that because we're continuing the tax rates at the current year's rates. Do you believe, Sachs, that this administration, which you are part of, in four years will have spent

Will have balanced the budget? Will it have reduced the deficit or the deficit continue to grow at two trillion a year? What is your belief? Because there's a lot of strategies going on here. My belief is that President Trump came into office inheriting

a terrible fiscal situation. I mean, basically that he created and that Biden created. They both put a trillion on the debt. That's just a big difference. It's a big difference to add to the deficit when you're in the emergency phase of COVID, okay? There's an emergency for that. Sure.

It's emergency spending. It was never supposed to be permanent. And then somehow Biden made it permanent. And he wanted a lot more. Remember Build Back Better? He wanted a lot more. So, you know, it's tough when you come into office with a, what is it, $2 trillion annual deficit. So to my original question... Now look, hold on. Would I like to see the deficit eliminated in one year? Yeah, absolutely. But there's just not the votes for that. Well, I asked you for four years. There's a one-vote margin here in the House.

And the Democrats aren't cooperating in any way. So I think that the administration is getting the most done that it can. This is a mandatory spending cut.

And I think the doge cuts will be dealt with hopefully through rescission in a subsequent bill. I'm asking you about four years from now. Will we be sitting here in four years? Will Trump have cut spending by the end of this term in another three and a half years? Will we be looking at a balanced budget potentially? Is that the goal of the administration? Or will we be at...

42, 44, 45 trillion dollars at the end of Trump's second term, David Sacks. Listen, if you want that level of specificity, you're going to have to get Scott Bessent on, okay? This is just not my area. I'm not going to pretend to have that level of detailed answers. But what I believe is that the Trump administration's policy is to spur growth.

I think that these tax policies will spur growth. I think that AI will also be a huge tailwind. It'll be a productivity boost. I think let's stop being doomers about it. We need that productivity boost. And I think that the net result of those things will be to improve the fiscal situation. Do I want more spending cuts? Yeah, but look,

We're getting more than was represented last week. Let's put it that way. Okay. Fair enough, Sax. Thank you for the cleanup there. Chamath, our bestie Elon, was on the Sunday shows and he said, hey, the bill can be big or it can be beautiful. It can't be both. He seems to be, I'll say, displeased or maybe not as optimistic about balancing the budget and getting spending under control.

But he still believes in Doge, obviously, and hopefully Doge continues. You seemed a little bit concerned last week, a week's past. You've heard some of Stephen Miller's opinions. Where do you net out seven days from our big, beautiful budget bill debate last week? I think Stephen's critique of how the media summarized the reaction to the bill is accurate.

And I think it's probably useful to double click into one thing that Sachs didn't mention, but that Stephen did. A lot of this pivots around the CBO, which is the Congressional Budget Office, and how they look at these bills. And there's a lot of issues with how they do it.

One specific case, which Sax just mentioned and Stephen talked about, is that they have these arcane rules about the way that they score things. And what they were assuming is that the tax rates would flip back to what they were before the first Trump tax cuts, which obviously would be higher than where they are today.
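That baseline assumption can be made concrete with a toy example. The numbers below are invented for illustration only; the mechanism is that under a current-law baseline the scorer assumes the 2017 rates sunset, so extending them registers as lost revenue, while against a current-policy baseline the same extension registers as zero change.

```python
# Illustrative only: hypothetical revenue levels, not actual CBO figures.
revenue_at_current_rates = 4.9  # $T/yr if the 2017 tax cuts are extended
revenue_if_cuts_expire = 5.4    # $T/yr the baseline assumes if they sunset

def scored_revenue_loss(baseline_revenue, actual_revenue):
    """Positive = the extension scores as a revenue loss vs. the baseline."""
    return baseline_revenue - actual_revenue

# Current-law baseline: rates snap back, so extending them "costs" $0.5T/yr.
current_law_score = scored_revenue_loss(revenue_if_cuts_expire,
                                        revenue_at_current_rates)
# Current-policy baseline: this year's rates are the reference, so no change.
current_policy_score = scored_revenue_loss(revenue_at_current_rates,
                                           revenue_at_current_rates)
```

Same policy, same dollars collected: the only thing that moves the score is which baseline you subtract from.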

What that would mean in their financial model is we were going to get all that money. Now, to maintain the tax cuts where we are, they would look at that and say, oh, hold on, that's a loss of revenue. Why are all of these things important? I downloaded the CBO model, went through it, and what I would say is, at best, it's spartan,

which means that I don't think a financial analyst or somebody that controls a lot of money will actually put a lot of stock in their model. I think what you'll have happen is people will build their own versions, bottoms up. Do you trust it, the CBO's version of this, or do you largely trust it? I don't think the CBO really knows what's going on, to be totally honest with you. Okay. I think that there are parts of what they do, which they're also opaque on. Nick, I sent you

a tweet from Goldman Sachs. So here's what Goldman put out. Now, the point is, when you build a model, what you're trying to do is net out all of these bars, okay? You're trying to add the positive bars and the negative bars, and you figure out what the total number is at the end of it. Now, in order to do that, when you see the bars on the far right, those are 2034 dollars. That's very different from 2025 dollars. The CBO doesn't disclose how they deal with that. They don't disclose the discount rate. So you can question what that is.

The CBO makes these assumptions that, as Steven pointed out, are very brittle with respect to the tax plan. That's not factored in here. So those are the issues with the way the CBO scores it. So you have to do it yourself. Now, Peter Navarro published an article, which I think is probably the most pivotal article about this whole topic. Peter Navarro of Tariff Fame. Yeah. Here, I think he nails it right in the bullseye, which is the bond market needs to make a decision

on one very critical assumption when they build their own model. Okay, so let's ignore the CBO's kind of brittle math and the Excel model they post on their website. People are going to do their own, because they're talking about managing their own money. But Navarro basically points to the critical thing, which is, listen, those CBO assumptions also include a fatal error, which is that they assume these very low levels of GDP. What you're probably going to see in Q2 is a really hot GDP print.

If I'm a betting man, which I am, I think the GDP print is going to come in above three, not quite four, but above three. And so what Peter is saying here is, hey, guys, like you're estimating 1.7% GDP. Why don't you assume 2.2? Or why don't you assume 2.7 or any number? Or really what he's saying is, why don't you build a sensitivity so that you can see the implications of that? And I think that that is a very important point. OK, so where do I net out a week later, Jason?
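The sensitivity build Chamath describes can be sketched in a few lines. Every input here is a hypothetical round number for illustration (a ~$29T starting GDP, receipts at the rough historical 17.5% share), not a CBO figure:

```python
# Toy sensitivity: cumulative 10-year federal receipts under different
# assumed GDP growth rates. All inputs are hypothetical round numbers.
BASE_GDP_T = 29.0        # starting GDP in trillions (assumed)
RECEIPTS_SHARE = 0.175   # receipts as a share of GDP (rough historical mean)
YEARS = 10

def cumulative_receipts(growth_rate: float) -> float:
    """Sum annual receipts over YEARS at a constant growth rate, in $T."""
    gdp, total = BASE_GDP_T, 0.0
    for _ in range(YEARS):
        gdp *= 1 + growth_rate
        total += gdp * RECEIPTS_SHARE
    return total

baseline = cumulative_receipts(0.017)  # the CBO-style 1.7% assumption
for g in (0.022, 0.027, 0.032):
    extra = cumulative_receipts(g) - baseline
    print(f"{g:.1%} growth: ~${extra:.1f}T more revenue than the 1.7% case")
```

The point of a table like this is not the specific dollar figures but the spread between rows: small changes in the growth assumption swing cumulative revenue by trillions over a decade.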

It's pretty much summarized in the tweet that I posted earlier today. So over the last week, as people have digested it, I think that there are small actors in this play and big actors. The biggest actor is obviously President Trump, but the second biggest actor is the long end of the bond market.

These are the central bankers, the long bond holders, and these macro hedge funds. Why? Because they will ultimately determine the United States' cost of capital. How expensive will it be to finance our deficits? Irrespective of whatever the number is, it could be a dollar or it could be a trillion dollars. That doesn't matter right now. The point is what is going to be our cost of capital? And what's happened-

over the last little while is that they've steepened the curve and they've made it more expensive for us to borrow money. That's just the fact. So how do we get in front of this? I think the most important thing, if you think about what Peter Navarro said is, this plan and the bill can work if we get the GDP right. Okay? So how do you get the GDP right?

And this is where I have one very narrow set of things that I think we need to improve. And the specific thing that I'll go back to is today, America is at a supply demand trade-off on the energy side. What does that mean? We literally consume every single bit of energy that we make. We don't have slack in the system. We are growing our energy demands on average about 3% a year.

So I think the most critical thing we need to do is to make sure the energy markets stay robust, meaning there's a lot of investment that people are making. On Tuesday, I announced a deal that I did building a one gigawatt data center in Arizona. This is a lot of money. This is little old me, but there are lots of people ripping in huge, huge, huge checks, hundreds of billions of dollars.

I think the sole focus has to be to make sure that the energy policy of America is robust and it keeps all the electrons online. If there's any contraction, I think it'll hit the GDP number because we won't have the energy we need. And that's where things start to get a little funky. So I think where I am is I think President Trump should get what he wants. I think the bill can work, narrowly address the energy provisions, and I think we live to fight another day. So Friedberg-

A cynical approach might be: we're working the refs here. The CBO is not taking GDP growth into account. This GDP forecast has a magical unicorn in it: AI and energy are going to spur this amazing growth. But the bond markets don't believe it either. So are we looking at just...

A GOP, a party (I'll put the administration aside) that is spending just as recklessly as the Democrats, and that wants to change the formula by which it's judged in the future, assuming there's magically going to be all this growth and growth solves all problems and whatever?

What we really need to do, to your point from, I think, two weeks ago, is recognize that it's just disgraceful to put up this much spending. We have to have austerity, we need to increase the discipline in the country, and both parties have to be part of that. I'm asking you from the cynical perspective, maybe to represent or steelman the other side here. We had a conversation with Senator Ron Johnson after we recorded the pod last week.

And he was very clear on a key point, which is that this bill addresses mandatory spending. Just to give you a sense, 70% of our federal budget is mandatory spending; 30% falls into that discretionary category. The mandatory spending is composed of the interest on the debt, which is now almost a trillion dollars a year, on its way to a trillion and a half; Medicare; Medicaid;

Social Security, and some other income security programs. And as Ron Johnson shared with us, over the years, more and more programs have been put into the mandatory spending category. And so you can get past the filibustering in the Senate to be able to get budget adjustments done. The key thing he's focused on and Rand Paul is focused on, and I've talked about is the spending level of our mandatory programs, the big, beautiful bill,

proposes a roughly $70 billion per year cut in Medicaid. Okay, and that sounds awful. How could you do that to people? In 2019, the year before COVID, Medicaid spending was $627 billion. In 2024 it was $914 billion. So the $70 billion cut gets you down to roughly $844 billion. You're still, call it, roughly 35% above where you were in 2019. So is that the right level?

And fundamentally, the opportunity here is to cut those mandatory programs, which I know sounds awful, to cut Social Security and cut Medicaid. But the reality is they're not being cut from a low level; they're being cut from a level that's 60-plus percent higher than it was in 2019. I'll give you another example, which is the SNAP program, the food stamp program. Again, $15 billion of the $120 billion a year that we spend on food stamps is being used to buy soda.

And a whole other chunk of that $120 billion is being used to buy other junk food. So they have proposed in this bill to cut SNAP down to $90 billion. And it was $60 billion in 2019. So it's still 50% above where it was in 2019. So the key point that's being made by Ron Johnson and others is that the spending on these mandatory programs, which account for nearly three quarters of our federal budget,

are still very elevated relative to where we were in 2019. And we are not going to get out of our deficit barring a massive increase in GDP without changes to the spending level. Now, I don't put the blame on the White House. This bill passed with one vote in the House, one vote.
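The back-of-the-envelope levels quoted above can be checked directly. A small sketch using the round figures as stated on the show (all in billions, not independently sourced here):

```python
# A quick check of the round numbers quoted on the show (all in $B;
# figures as stated, not independently sourced).
def pct_above(current: float, baseline: float) -> float:
    """How far `current` sits above `baseline`, in percent."""
    return (current / baseline - 1) * 100

medicaid_2019, medicaid_2024, medicaid_cut = 627, 914, 70
snap_2019, snap_target = 60, 90

medicaid_after = medicaid_2024 - medicaid_cut  # 914 - 70 = 844
print(f"Medicaid after cut: ${medicaid_after}B, "
      f"{pct_above(medicaid_after, medicaid_2019):.0f}% above 2019")
print(f"SNAP target: ${snap_target}B, "
      f"{pct_above(snap_target, snap_2019):.0f}% above 2019")
```

Running the quoted numbers puts the post-cut Medicaid level at roughly 35% above 2019 and the SNAP target at exactly 50% above 2019, which supports the core claim: even after the proposed cuts, both programs remain well above their pre-COVID baselines.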

And so a key point to note, and I've said this from day one, and every time I've gone to DC, and every time we've talked about Doge, I've said there's no way any of this stuff's going to change without legislative action from the Congress. And here we are seeing Congress, for whatever reason, you can listen to Ron Johnson, you can listen to Rand Paul, you can listen to others say, you know what, we can't cut that deep. It is going to be too harmful to our constituents. We need to keep the programs at their current levels

or make no changes at all, or only modest changes. And that's where we are. That's the reality. Now, I do think that Navarro did an excellent job in his op-ed, whatever criticism we may want to lay on Navarro for many other things. He pointed out that the CBO projection in 2017

for the next year's GDP growth was 1.8 to 2%, and it actually came in at 2.9%, a full point higher, because of the Tax Cuts and Jobs Act that was passed by the Trump administration in 2017. So the additional money that goes into investments because lower taxes are being paid fueled GDP growth. This is what some people call trickle-down economics.

people ridicule it, they say it doesn't work, it's not real. But in this particular instance, they cut taxes and the GDP grew much faster than was projected or estimated by the economists at the CBO. So the argument that's being made is that we are not capturing many of the upsides in the GDP numbers that are being projected. And I will be honest about this, I don't think anyone knows.

how much the GDP is going to grow. We don't know the economic benefit and effects of AI. We don't know the economic benefits and effects of the work that's being done to deregulate. Another key point, which is not talked about by Navarro or anywhere else, there's a broad effort to deregulate, standing up new energy systems, deregulate industry and pharma,

deregulate banking. Bessent talked about this in our interview with him. All of those deregulatory actions, theoretically, should drive more investment dollars. Because if you can get a biotech drug to market in five years instead of 10, you'll invest more in developing new biotech drugs. If you can stand up a new nuclear reactor in seven years instead of 30, you'll build more nuclear reactors; money will flow. If you can

get a new factory working because it's a lot easier and faster and cheaper to build the factory, you'll build more factories and production will go up. People were really taken, by the way, by your comment that you would shut up about the deficit if we had a really great energy policy. We were dumping a lot on top of it. I want to build on the point that both Chamath and Freeberg made about growth rates.

So there's a very important chart here from FRED. This is the Federal Reserve of St. Louis. This is federal receipts. So basically, it's federal tax revenue as a percent of GDP. And this goes all the way back to the 1930s, 1940s. So if you look in the post-World War II period, you can see, just eyeballing it, that there's a lot of variation around this. But the line is around 17.5% plus or minus 2%.

And the interesting thing is that this chart reflects radically different tax rates. So, for example, during some of these periods, we've had 90% top marginal tax rates. We've had 70% top marginal tax rates. So, yeah, under Jimmy Carter, the top marginal tax rate was, I think, 70%.

We've had tax rates under Reagan or Clinton in the 20s. So the point is that the tax rate that you have and what you actually collect as a percent of GDP don't correlate. The most important thing by far is just how the economy is doing. If you look at the top tick, it's around 2000 there. If you just mouse over it. 1999 to 2000. Yeah, we get federal receipts just under 20% of GDP.

And tax rates were quite low back then. The reason why is we had an economic boom. So look, the point is the most important thing in terms of tax revenue is having a good economy.

And this is why you don't just want to have very high tax rates because they clobber your economy. So this point that Navarro was making in that article, it actually makes sense. I mean, 1.7% is a pretty tepid growth assumption. We should be able to grow a lot faster. And if we have a favorable tax policy, you can grow a lot faster. Now, if you go to...

spending. Can you pull up the FRED chart on spending? What you see here is that, I mean, it's been kind of going up, but let's say that since the mid-1970s or so, federal net outlays as a percent of GDP, so basically spending, was around 20% of GDP. And then what happened is during COVID, it went crazy, went all the way up to 30%, and now it's back down to low 20s, but it's still not back down to 20. And what we need to do is grow

the economy. We have to grow GDP to the point where federal net outlays are back around 20%. If you could get tax revenue to the historical mean of around 17% or 17.5%, and you get spending to 20%, then you have a budget deficit of 3%, which is much more tolerable. And I think that's Bessent's target under his 3-3-3 plan, right? You get GDP growth back up to 3% and you get the budget deficit down to 3%. All right, Chamath, you had some charts you wanted to share? Yeah.
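The 3-3-3 arithmetic here is simple enough to sketch. The GDP figure below is an assumed round number for illustration, not a projection:

```python
# The 3-3-3 arithmetic: deficit share of GDP = outlays share - receipts share.
GDP_T = 29.0  # assumed current GDP in trillions, a round number

def deficit_share(outlays_pct: float, receipts_pct: float) -> float:
    """Deficit as a percent of GDP given spending and revenue shares."""
    return outlays_pct - receipts_pct

for outlays, receipts in [(23.0, 17.0), (20.0, 17.0), (20.0, 17.5)]:
    gap = deficit_share(outlays, receipts)
    print(f"outlays {outlays}% / receipts {receipts}% -> "
          f"deficit {gap:.1f}% of GDP (~${GDP_T * gap / 100:.2f}T)")
```

The design point is that the deficit is a spread between two shares of GDP: growing the denominator (GDP) pulls the outlays share down even if dollar spending is flat, which is the mechanism the growth argument relies on.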

Well, I think what's amazing is if you take last week and now again this week, we're all converging on the same thing. The path out of this is through GDP growth. And I just want everybody to understand where we are. And this is without judgment. This is just the facts. What this chart shows in gray is the total supply of power in the United States.

And the blue line is the utilization. So what you build for is a premium above the demand, right? You'd say, if there's one unit of demand, let's have 1.2 units of supply and we'll be okay. But as it turns out, historically in the United States, we've had these cycles where we didn't really know what the demand curve would look like. And so over the last number of years, we've stopped really building

supply in power. But what happened with things like AI and all of these other things is that the demand just continued to spike. And so what this chart shows is we are at a standstill sitting here today in 2025. On the margin, we're actually short power,

which is to say, sometimes there are brownouts, sometimes there's lack of power because we didn't add enough capacity. So that's where we are today. So then we talk about all of these new kinds of energy. And this is just meant to ground us in the facts. If you tried to turn on a project today, sitting here in May of 2025, here's what the timelines are. We all talk about SMRs, small modular reactors,

The reality is that if you get everything permitted and you believe the technology can be de-risked, you're still in a 2035 plus timeframe. You're a decade away. If you have an unplanned nat gas plant today, the fastest you could get that on is four years from now. If we tried to restart a mothballed nuclear reactor, of which there are only three we can restart,

That's a 2027 to 2030 timeframe. So let's give us the benefit of the doubt. That's two years away. If we needed a planned nat gas plant, there's already 24 gigawatts in the queue, which can't get turned on. So where does this end up? And this is where I think we need to strip away all the partisanship and understand what we're dealing with. We have ready supply of renewable and storage options today.

It's the fastest thing that you can turn on. It allows us to turn on supply to meet the demand and utilization. So I just think it's important to understand that we must not lose energy. We cannot lose the energy market because that is the critical driver of all the GDP. All right, Nippon Steel and the US Steel merger got cleared by President Trump. This was something that was being blocked by Biden, obviously, for national security reasons.

Nippon is going to acquire U.S. Steel for $14.9 billion. Biden blocked that, as we had discussed. On Friday, Trump cleared the deal to go through, calling it a partnership that will create 70,000 jobs in the US. And on Sunday, Trump called the deal an investment, saying it's a partial ownership, but it will be controlled by the USA. There seems to be a reframing of this deal: the United States is going to benefit from it, but it's not a sale. It's an investment.

Let's set some context. The United States is always on the wrong side of these deals. Okay, we've been on the wrong side for 20 years, meaning we show up when an asset is stranded or completely run into the ground. For example, we did the auto bailouts at the end of the great financial crisis. And when it wasn't a company but toxic assets, we set up something called TARP. What did we get? Not much in return. In this case, it's the opposite.

And I think that this strategy has worked for many other countries really well. So if you look at Brazil, companies like Embraer and Vale, which are really big Brazilian national champions, have a partnership, a pretty tight coupling with the Brazilian government. The Brazilians have a golden vote. If you look inside of the UK, there's a bunch of aerospace and defense companies, including Rolls-Royce, that have a very tight coupling with the UK government. They have a golden vote.

If you look in China, companies like ByteDance and CATL have a very tight coupling with the Chinese government, and the Chinese government has a golden vote. And so what are all of those deals? Those deals are about companies that are thriving and on the forward foot.

And so I think this is a really important example of things that we need to copy. I've said this before, but one part of China that I think we need to pay very close attention to is Hu Jintao in 2003 laid out a plan and he said, we are going to create 10 national champions in China in all the critical industries that are going to matter for the next 50 years.

including things like batteries and rare earths and AI. And they did it. But for those companies, it allowed them to thrive and crush it. And I think that we need to do that and compete with those folks on an equal playing field. So- In all industries or in very specific strategic ones? Because that would seem like corrupting capitalism and free markets, would be the steelman. There's 10 industries that matter. And you can probably- Give us some of them. Steel is one. Okay. I think the-

precursors for pharmaceuticals are absolutely critical. I think AI is absolutely critical. I think the upstream lithography, EUV, and chip-making capability, absolutely critical. I think batteries are absolutely critical. And I think rare earths and the specialty chemical supply chain, absolutely critical. If you have those five

you are in control of your own destiny, in the sense that you can keep your citizens healthy and you can make all the stuff for the future. So I think if the president is creating a more expansive idea beyond U.S. Steel with this idea of U.S. support, maybe there'll be preferred capital in the future for U.S. Steel. But if he creates a category-by-category thing across five or six of these critical areas of the future, I think it's super smart and we should do more of it.

Sacks, what do you think? Interventionism, putting your thumb on the scale, golden votes, a good idea for America in very narrow verticals, or let the free market decide? What are your thoughts on this golden vote, having a board seat, etc.?

Well, it depends what the free market, so to speak, produced. And the reality is, over the past 25 years, we exported a lot of this manufacturing capacity to China. And I don't think it was a free market, because they had all these advantages under the WTO that we talked about on a previous podcast. They were able to subsidize their national champions

while still remaining compliant with the WTO rules because supposedly they were a developing country. It was totally unfair. And what they would do is through these subsidies, they would allow these national champions to essentially dump their products in the global market and drive everyone else out of business. They became the low-cost producers. I think that, as the president just said recently, not every industry has to be treated as strategic.

Clothes and toys we don't necessarily have to reshore in the United States, but steel production is definitely strategic. Steel, aluminum, and I'd say the rare earths, we have to have that capacity. We cannot be completely dependent on China for our supply chain. So some of these industries have to be reshored. And if you need subsidies to do it, I think you do it for national security reasons, first and foremost. There are other industries where the private market works just fine,

And what we need to do to help those companies is simply not get in their way with unnecessary red tape and regulations. So I would say empower the free market when America is the winner. And then in other areas where they're necessary for national security, then you have to be willing to basically protect our industries. Freeberg, it seems like the great innovation here might also be the American public getting upside when we gave loans to...

Solyndra and Tesla and Fisker and a bunch of people for battery-powered energy under Obama. We just got paid back, in some cases, like by Elon. Other people defaulted, but we didn't get equity. What if, instead of getting our $500 million back on the loan from Elon, which he paid back early and with interest, we got half back and we got half in equity: RSUs, stock options, warrants, whatever?

This would be an incredible innovation. So what are your thoughts here? Because people look to this podcast as, hey, the free market podcast, but this does seem to be a notable exception here of maybe we should get involved and do these golden share votes, board seats, maybe more creative structures, in order to win faster. What are your thoughts, Freeberg?

I don't like it. I don't like the government in markets. Keep the government out of the markets. It creates a slippery slope. First of all, I think markets don't operate well if government's involved. It gets inefficient, and that hurts consumers. It hurts productivity. It hurts the economy. Second, I think it's a slippery slope. You do one thing now. Let me ask you a question, though. If government non-intervention results in all the steel production moving offshore, if it results in all the rare earth

processing and the rare earth magnet casting industries moving offshore. In fact, not just moving offshore, but moving to an adversarial nation such that they can just switch off our supply chain for pretty much every electric motor. Is that an outcome of the quote unquote free market that we should accept?

Well, then I think that's where the government can play a role in trade deals to manage that effect. So you can create incentives that'll drive onshore manufacturing by increasing the tariff or restricting trade with

foreign countries so that there isn't a cheaper alternative, which is obviously one of the plays that this Trump administration is trying to run. I'd rather have that mechanism than the government making actual market-based decisions and business decisions. You know how inefficiently government runs. You know how difficult it is to assume that that bureaucracy is actually ever going to act in anyone's best interest at all. They're just going to fuck it all up. So I'd rather keep the government entirely out of the market, create a trade incentive where the trade incentive basically will drive private

markets, private capital, to build that industry onshore here, because there isn't one and there's demand for it, because you've restricted access to the foreign market. That, I think, would be the best general solution, Sacks. And then I think it's a slippery slope, because then you could always rationalize something being strategic, something being a security interest of the United States. So then every industry suddenly gets government intervention and government involvement. And then the third thing is, I don't want the government making money

such that the Congress then says, hey, we've got more money, we've got more revenue, let's spend more money, because then they'll create a bunch of waste and nonsense that'll arise from having increased revenue. One aside, and I will say, one thing where I do think we do a poor job, to answer your question, J. Cal, is investing the retirement funds that we've mandated through Social Security. We should be taking the four and a half trillion dollars that our Social Security beneficiaries have had deducted from their paychecks over many, many years,

And those social security future retirees or current retirees are getting completely ripped off because their money is being loaned to the federal government. It's not being invested. It's been loaned to the government to spend money and run a deficit and ultimately inflate away the value of the dollar. We should have been investing those dollars in some of these strategic assets. So if ever there were to be shares or investment that the government does,

It should be done through strategic investing through the Social Security Retirement Program. Similar, by the way, to what's done in Australia, where these supers have created an extraordinary surplus of capital. Same in Norway, same in the Middle East countries. Incredible sovereign wealth funds that benefit the retirees and the population at large. That's where the dollars should be invested from. I do think the fundamental focus priority right now should be reforming Social Security while we still have the chance.

We have until 2032, when Social Security will be functionally bankrupt and everyone's going to get overtaxed, and kids are going to end up having to pay through inflation for the benefits of the retirees of the last generation. Seven years, right? We're on a seven-year shot clock to when Social Security is not funded. And by the way, about this opportunity to fix mandatory spending:

It was an opportunity to introduce some structural reform in Social Security. Another reason why I think that there is a degree of disgrace in this bill, particularly with how Congress has acted, not addressing what is becoming a critical issue, because everyone wants to get reelected in the next 12 to 18 months. They've got elections coming up. So everyone's scrambling to not mess with it, because you can't touch it. It's like, you know what, guys? This is bankrupt in seven years. It's going to cost us five, 10 times as much when we have to deal with it after everyone runs out of money.

Deal with it now. Fix the problem. And by the way, we should flip all that money, $4.5 trillion, into an investment account for the retirees, where they can own equities, and they can make investments in the markets, and they can participate in the upside of American industry and the GDP growth that's coming. Instead, they're getting paid 3.8% or 4.5% on average from treasuries that they own, which, by the way, now have a lower credit rating than they've ever had. You know, it's crazy.
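The compounding gap being pointed at here can be illustrated with a quick sketch. Both rates of return below are assumptions for the illustration, not forecasts, and this ignores contributions, withdrawals, and risk:

```python
# Purely illustrative compounding: $4.5T at a treasury-like yield vs. an
# assumed equity-like return. Both rates are assumptions, not forecasts,
# and this ignores contributions, withdrawals, and risk.
PRINCIPAL_T = 4.5  # trust-fund balance in trillions, as quoted above

def future_value(principal: float, rate: float, years: int) -> float:
    """Future value of a lump sum compounded annually."""
    return principal * (1 + rate) ** years

treasuries = future_value(PRINCIPAL_T, 0.04, 30)
equities = future_value(PRINCIPAL_T, 0.07, 30)
print(f"30 years at 4%: ~${treasuries:.1f}T; at 7%: ~${equities:.1f}T; "
      f"gap ~${equities - treasuries:.1f}T")
```

Even a few points of return difference, compounded over a working lifetime, produces a multi-trillion-dollar spread, which is the substance of the argument about where the retirement dollars sit.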

I'm in complete agreement with you. And I think it's a lack of leadership on Trump's part. If Trump is going to criticize Taylor Swift and Zelensky and Putin and everybody all day long on Truth Social, he can criticize Congress, the Democrats, and the Republicans on not cutting spending. I think he should speak up. I think he was elected to do that. It was a big part of the mandate. And he should tone down the tariff chaos and lean into the

intelligent immigration, you know, recruiting great talent to this country. And he should be pushing to make these bills control spending. That's just one person's belief. For the chairman dictator, Chamath Palihapitiya; your czar, David Sacks, in that crisp Brioni white shirt, very beautiful; and the Sultan of Science, deep in his Wali era.

I am the world's greatest moderator. And as Freedberg will tell you, executive producer for life here at the All-In podcast. We'll see you all next time. Bye-bye. Jason at All-In. Love you, boys. Bye-bye. Let your winners ride. Rain Man, David Sacks. And it said, we open-sourced it to the fans and they've just gone crazy with it. Love you, besties. I'm the queen of quinoa. Let your winners ride. Let your winners ride. Let your winners ride. Besties are gone. Go 13.

We should all just get a room and just have one big huge orgy because they're all just useless. It's like this like sexual tension, but they just need to release it now. You're a bee. What? You're a bee. We need to get merch. I'm going. What?