
Sam Altman, OpenAI and the Future of Artificial (General) Intelligence

2025/5/22

On with Kara Swisher

People
Casey Newton
Kara Swisher
Kara Swisher is a well-known media commentator and podcast host who focuses on in-depth analysis of technology and politics.
Karen Hao
Keach Hagey
Topics
Kara Swisher: As a tech journalist, I witnessed the enormous impact ChatGPT had on the tech world in November 2022, making OpenAI and its CEO, Sam Altman, widely known. I've known Sam Altman for a long time and have watched him grow, including in his work at OpenAI, where he and Elon discussed with me the need to found an organization like this to protect us from the tech giants, such as Google. I find Sam Altman an interesting person: he's charming, though at times a bit manipulative, and he reminds me of Steve Jobs. Karen's and Keach's books overlap in places, but on the whole they complement each other, exploring Sam Altman and OpenAI from different angles.


Transcript


You're coming live from a rave. In a bit of a rave situation. Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher and I'm Kara Swisher.

I've been reporting on tech for decades and few advances have made the kind of splash and had the potential long-term impact that ChatGPT did back in November 2022. It made a nonprofit called OpenAI and its CEO, Sam Altman, known around the world.

I met Sam actually when he was a teenager, when he had a company called Loopt, which didn't last. And I've watched him over the years grow as he went from company to company, including at OpenAI, where he and Elon told me about the need to have an organization like this to protect us against the tech giants themselves, Google and others, when AI came of age. And it turned out to be nothing of the sort.

These might have been the monsters we were scared of meeting in the first place. I do like Sam. He's very charming, and I can see why people think of him as manipulative, but he really is an interesting character, more like Steve Jobs than anyone else I've ever interviewed. And I wrote and spoke a lot about his sudden ouster, and just as sudden reinstatement, as the CEO of OpenAI in the fall of 2023.

And I should note that Vox Media, like a lot of media companies, has a licensing deal with OpenAI that gives OpenAI access to its IP. My guests today are two tech journalists who have each come out with their own very well-reported books about Sam and OpenAI, including what was happening behind the scenes during that crucial firing and rehiring, the wide impact of generative AI, and the potential for artificial general intelligence in the future.

Keach Hagey is a reporter at The Wall Street Journal. Her book is called The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Karen Hao writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series. Her book is called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Whether you're a fan of AI or fearful of it or somewhere in between, this is an important episode, so stay with us.


To remind you that 60% of sales on Amazon come from independent sellers, here's Tracy from Lilies of Charleston. Hi, y'all. We make barbecue sauce, hot sauce, and specialty popcorn. They get help from Amazon to grow their small business faster. They handle all our shipping and logistics, which is a big help. All on it up. Have a great day, Tracy. Hot stuff, Tracy. Ooh, honey. Shop small business on Amazon. Oh, yeah.

Karen and Keach, welcome. Thanks for coming on On. Thank you for having us, Kara. Thanks for having me. So you both have hefty books out this week about Sam Altman and OpenAI. Congratulations. There's some overlap, of course, but the two books are actually complementing each other very well, I thought. Let me read the titles. Keach, your book is The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Karen, yours is Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Each of you, I'd like

you to talk about your titles and reflect on the perspective of what you are trying to do. Keach, are you an optimist, a boomer, as they say, and Karen, are you a doomer?

And I guess there's the Zoomers, who are in the middle. But let's start with you, Keach. I am an optimist as a general person. I don't know if I'm totally optimistic about AI, but I've maybe fallen more on that side than the other side. But the title really is about how Sam presents himself to the world. So that's his brand. That is what he goes into every room with, you know, investors, acting the part of. Sure does. So that is sort of what he's projecting. And what about you, Karen?

So I wouldn't say that I'm a doomer; I think boomers and doomers are actually, ultimately, two sides of the same coin. And I exist sort of in a third space, which is the AI accountability space, that really recognizes that there is an extraordinary amount of power that's being concentrated within the companies that are developing AI.

And we need to hold those companies accountable. And so Empire of AI, that title is really a nod to what I see as a global system of power that these companies are entrenching, where we really need to understand them as new forms of empire in terms of really grappling with the sheer scope and scale of

of what they are now doing and what they say they will continue to do over time. So the launch of ChatGPT in November of 2022 is what most people consider to be the starting gun for the AI race we're in now, very similar to Netscape.

releasing its browser that really got people using the internet more than anything. It also made Sam Altman a household name. Now, neither of your books starts with that launch. Instead, you write in your prologues about the few days when Altman was fired from his role as CEO. I wrote quite a bit about that. And then reinstated after pushback from his executive team, the OpenAI staff, and a lot of lobbying by other tech leaders. It's known inside OpenAI as "the blip."

Talk about why you thought it was significant. Karen, you start with that, because it was a moment. A lot of these tech companies have their moments, but this was it for them. For me, it really exemplified this specific question that I try to explore throughout the book, which is: how do we govern AI? Because it is one of the most consequential technologies of our century.

This moment, in which a small handful of people got to decide the future of a company that has had such profound implications for the trajectory of AI development, really highlights whose controlling hands are

shaping the most consequential technology. And so one of the things that I advocate for in the book is we really need to be shifting towards more democratic AI governance. But starting with that anecdote, I really wanted to highlight where we are now, which is completely undemocratic. And everyone is sort of living in the whims and the thrash of people at the top that are deciding things behind closed doors. Keach?

Yeah, so I started with the blip. Certainly it's the most dramatic, and in some ways it made Sam Altman truly a household name. I mean, he was after ChatGPT, but I think these concentric circles of understanding really reached their maximum point that wild, wild day when he was fired. And I think it tells you something about Sam's mind and Sam's

strange love of eccentric structures that this happened to him. He had been spinning up these strange structures throughout his whole career. He did it at Y Combinator. You see a little bit of it at Loopt, where he tries to invent new forms of things that haven't existed before, and it really turned out to be his downfall.

Yeah, I mean, Loopt, for people who don't know, was where I met him actually, as a company that was kind of a failure. He was very young at the time. But when you say unusual structures, a lot of it had to do with the board losing members, right? I mean, it was a relatively balanced board. And then people dropped off who were sort of the more central figures versus...

the left and the right, if you want to put it in political terms. Absolutely. It was a board power struggle at some basic level, as well as concerns over safety; as I think both of us have documented, a combination of these things. And Sam has always sort of figured like,

I got this, right? I'm the ultimate networker. Everyone owes me favors. There's no situation that I cannot talk my way out of. And it was extraordinary to watch one that he couldn't, at least for five days. Except he did, right? So,

Talk about how it changed the trajectory of the company. How do you think it impacted OpenAI and Sam going forward? Keach, why don't you start, and then Karen? Oh, massively. We are still feeling the reverberations from the blip today. Explain that. So, I mean, I think let's start with Microsoft, right? It kind of looks like Microsoft rode in and was supporting OpenAI during the blip. And that's true. But they also lost confidence in

the solidity of OpenAI in the midst of that and began to make backup plans. For sure. Right? So we saw they hired Mustafa Suleyman. They started developing their own models. And even now, as they are negotiating with OpenAI,

for a new structure between OpenAI and Microsoft, the blip hangs over all of that. The concern that Microsoft has is that OpenAI might not be there in a few years. They don't want to spin up a bunch of data centers and cloud that they then can't monetize down the road. So it increased the risk profile of OpenAI greatly, and we're still feeling it today. Karen? It definitely, I would agree with Keach that it definitely showed a kind of weakness within OpenAI's foundations. But at the same time,

with the way that Sam has sort of always orchestrated himself, he used this crisis moment as a way to make himself even more powerful. And so he has been able to, even as he's losing the relationship with Microsoft, establish relationships with President Trump.

and start getting even bigger deals like the $500 billion Stargate project. And so it sort of accelerated the trajectory that OpenAI was already on. And Altman sort of effectively used this moment to kind of weed out some of the primary dissenters, the primary

stronghold of resistance against the general push for the company to go faster and faster and develop these larger and larger models. And now we're kind of seeing an even more extraordinary pace in that. Yeah, he certainly removed the brakes, the people who were the alleged brakes, right? And I have to say, I've never met a tech company that didn't have one of these hair-on-fire moments, whether it was Google or Netscape or even Microsoft. There was a lot of Sturm und Drang at the beginning of that company.

There's not a tech company I know that doesn't have something like this. It just was so valuable and so famous, right, at this point. Obviously, Sam plays a big role in both your books, and he is probably the central figure of the AI movement right now. Keach, your book goes deep into Sam's biography: his upbringing in St. Louis, college at Stanford, his time at Y Combinator.

What did you learn about Sam when you were speaking with both family and friends, and what surprised you, you know, and how did he become this power broker at such a young age? I saw some of it. You know, he's a charming person, obviously, and he's a good networker. Keach, why don't you start with that? Well, one of the things I was surprised to learn was that his childhood wasn't all that happy. I certainly went into the reporting figuring, oh, you know,

upper-middle-class family, good schools, everything's happy. And as I did more reporting, I realized that his parents had had strains in their marriage pretty much his whole life. They were separated by the time his father passed away in 2018. So I think that there's sort of like a drumbeat of anxiety beneath all that optimism that is an important thing to understand about him. And then

You know, he was a gay teenager in the Midwest, and that was a difficult thing. And I think he, in his early years, stepped up and became kind of a voice for the gay people at his school. And that taught him a little bit about leadership and about risk-taking. So you see, I think that one of the things Sam really wants to do is

be a leader even beyond, you know, of companies, right? Like, he has political ambitions and he sees himself as sort of a great figure of history. And I think both of those things were forged in St. Louis. Is there anything that you were like, huh, interesting? There's always a moment, right, of whoever it happens to be. Meeting Sergey Brin's parents was very insightful for me, for example. Yeah, I think getting to understand his mother and father's relationship, I think, was the thing that really helped. You know, his mother is...

very ambitious. And his father is kind of a do-gooder. And there are these quotes in the book where his mother really didn't respect what his father was doing sometimes. And I think that that tension kind of lies at the heart of him a little bit. So Karen, you interviewed around 260 people for your book, but Sam and OpenAI refused to talk to you. However, you did talk to his sister, Annie, who has launched a lot of accusations against Sam.

I'd like you to explain that because I did run down some of this stuff and it seems like the sister has some struggles, I would say.

Yeah, totally. I think in terms of why I think Annie sort of encapsulates so many of the themes in the book, one of the defining things is regardless of whether you side with Sam or side with Annie in terms of the clear struggle that these siblings have with one another...

It really highlights that, you know, any individual, especially ones that grow so powerful, are going to have some deep personal and professional baggage that they carry with them. And one of the things that I try to highlight in the book is we need better governance structures because we should not be resting so much power in the hands of individuals. This particular conflict that Sam had with his sister clearly weighed on him because it was the one thing that...

Right.

whims and pressures and capricious relationships and tensions. It could be just a very troubled sibling. Interestingly, a lot of people who are like this do have trouble. You know, I just did an interview with Barry Diller. His brother was a drug addict. I think it's the most troubling part of his life, from what I can get. And it's a shame. Keach, you write in your prologue about one moment in your interview with Sam where his altruistic mask slipped to reveal the fierce competitor beneath, and

Karen, you have a version of that when writing about the 2018 rift between Sam and Elon. You say the rift was the first major sign that OpenAI was not, in fact, an altruistic project, rather one of ego. I've always thought it was ego. I don't know. The first line of my book was it was capitalism after all. So both of you, do you think these are two sides of the same coin? Do you think that Sam is duplicitous, as critics have accused him of, telling people what they want to hear and then badmouthing them behind their backs?

I never believed the altruism to start with of anybody, not just Sam. So talk a little bit about this altruism versus ego. Keach first. So, yeah, that moment was when he was bragging about having beaten Google to the punch, right? Which is?

Very satisfying. Go ahead. Right. Sure. I mean, you know, Sam is a product of this like Silicon Valley Y Combinator world where being first and having things go up and to the right is everything. So and he's admitted that too, right? He can't like take away his desire to make that happen. I think you can see him...

doing a dance for Elon Musk in those early years. There was a moment where everyone was reading Nick Bostrom's book Superintelligence. Yes, I know. Trust me. In 2014. And that was just kind of like what the cool kids were talking about a little bit. Such a blowhard. And it was kind

Kind of fascinating that the way that you showed that you were serious about AGI even being a thing at all was saying that you were scared of it, right? That was kind of like a code and a way to recruit researchers, show that you took it seriously, because that was still sort of an unusual thing to even say out loud that you believed in.

So in one way, when he was kind of seducing Elon, it was that moment, right? Who's most scared of AI? Let's have a contest between us about who can think up the most apocalyptic situation. And we need to create something to counterbalance Google. Which works on Elon because he's steeped in science fiction. He's a science fiction buff, like an extraordinarily...

deeply read one. Yeah, this entire story is deeply shaped by science fiction. That was one of the things that was so fascinating to learn about in the research. So I think that that's the way to sort of understand the initial non-profit altruism thing a little bit, right? It was a vestige of a very specific cultural moment. I don't know if it was a plot, you know, that they were then going to like toss it off as soon as they had a chance to. But yeah,

you know, Karen's criticism is absolutely right that they did then toss it off and become a regular company for all intents and purposes.

Right. Karen? Yeah, I would completely agree with Keach that it was sort of this moment in time where Sam, I think, identified that he could use this as a way to recruit people. Sam is, you know, he's a once-in-a-generation talent when it comes to storytelling. He's really good at telling the right story. And he also...

Who does that remind you of? Steve Jobs. Yeah. I mean, you know, it reminds you of a lot of people in Silicon Valley because that is what Silicon Valley— Well, Steve Jobs was the original. —values. But absolutely, I think Sam worshipped Jobs. He did.

And so when he comes into a room and meets with someone, I really think the best way to understand what comes out of Sam's mouth is that it's more tightly correlated with what he believes the other person needs to hear than with what he himself actually believes.

And I think that is ultimately what OpenAI became a bit of a manifestation of: he knew he needed a really gripping story. And, to go back to beating Google to the punch, he knew he didn't have enough capital at that time to compete on salaries. So he needed that special element in addition to salaries. You know, I opened the book with...

We'll be back in a minute.


I want to talk a little bit about OpenAI's role in the AI race, because they are dominant. When Altman started OpenAI with Elon, its goal was to be more than a startup launching commercially. As we said, it was actually focused on research, especially researching artificial general intelligence technology.

That's been up and down over the years. And Altman, I think, probably always wanted to turn it into a for-profit company. Elon was probably the biggest believer, if I recall his attitude at the time. He really did think AI was going to kill us. Others, I don't know, maybe Sergey Brin did, and others. But Elon sued after he flounced out. To start with, he flounced out. He sold his house and then he was mad he sold it, essentially.

Earlier this month, OpenAI announced it would be restructuring again as a public benefit corporation, this time with a nonprofit parent overseeing the for-profit arm and being a major stakeholder. It's not really a big difference, because the for-profit and nonprofit are going to have the same board. And public benefit corporations are nonsensical to me. Elon says he's still suing. Keach and Karen, talk about the weird structure and why it's important for the OpenAI story. First you, Keach.

So this weird structure kind of seems like a black hole that OpenAI might never be able to climb its way out of. The idea of a nonprofit controlling a for-profit. They tried pretty hard, and they hit a wall, and they had to retreat in the last couple of weeks. So this is going to be challenging, because this is going to make it harder for them to raise money, and they need a whole lot of money. And it is important to note that when they were doing this last round of fundraising, they were telling all the investors: oh yeah, it's going to be cool, the for-profit is going to be in charge now. If that doesn't happen, you can have

your money back. And now that this has happened, it seems like, okay, the investors are going to give it anyway. But in the long term, this is going to make fundraising a lot harder for them. Right. They raised $40 billion, a lot of it from SoftBank, I think $30 billion from SoftBank. Harder for them in that people don't quite know what to make of this thing?

Well, if you're an investor and you don't have control of where your investment is going, you don't have a voice on the board, that's going to be pretty challenging. If the nonprofit can come along and veto whatever the for-profit is doing, that's essentially what happened when Sam got fired. It's going to make it a very wobbly investment, a very high-risk investment. Karen, why don't you talk a little bit about this? Yeah, I definitely think that there is a certain level of

retreat that OpenAI had to make in this regard. But I also think it sort of highlights the strategic mind of Altman, in that he is still able to make certain gains in the direction that he wants to go within the space that he has. So, yeah,

Originally, the for-profit was capped, and investors could only receive a certain amount of returns. And now, as a PBC, it's not capped. And that is actually a step in the direction that OpenAI does want to go. And one of the things that I hit upon, which Keach was also saying earlier, is that

Altman sort of is fond of these strange structures and has been throughout his career. Altman plays legal defense, whereas, you know, Musk plays legal offense. And so he kind of creates these really nested, complicated structures that just make it really hard for us as journalists to even scrutinize and report. What does it mean that it went from a capped profit to a PBC? And I think that's an important part of the story as well, because it is a tactic.

that he uses. Well, I think he wanted it to be a for-profit company. I think he got a lot of roadblocks, probably from state regulators, from federal regulators, Elon. The case wasn't going away. Why isn't the case going away for Elon, if this is what he says he wanted? What do you think is happening here? Yeah, I mean, you mean, like, the realpolitik? I mean, you know, Elon said, like, this changes nothing. Or Elon's lawyer said, this changes nothing, right? Yeah.

and they're still going to go ahead and they still object to the PBC. Oh, please. He just wants his money. If you step back a little bit, I mean, it is kind of extraordinary that Elon Musk lent his name and...

promised a billion dollars, gave a very small percentage of that, but basically lent his name, and his credibility is money, to get this thing off the ground, and then got nothing in exchange for that. We also have to point out that he is a competitor and that part of what's going on here is a fight between companies. And it's not just between xAI and OpenAI. You know, there's Anthropic,

and other folks there. And a lot of these fights that maybe look like they are philosophical are also commercial. Absolutely. So Altman would say he definitely needs the money to get computing power. Karen, you write in your book that this race isn't inherent to AI, or even generative AI, but that OpenAI started it. You write: not even in Silicon Valley did other companies and investors move until after ChatGPT started to funnel unqualified sums into scaling. That included Google and DeepMind, OpenAI's

original rival. It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman's singular drive, network, and fundraising talent, that created the ripe combination for this particular vision to emerge and take over. Which means they started this. And I've gotten lots of emails and texts from people who are like, it wasn't supposed to be a race, and now it is. So explain the importance of winning right now and how the model of cost and resources plays into that, Karen. Yeah.

Yeah, I kind of want to start at a little bit earlier point in time on that. So I started covering AI in 2018. And at that time, pre-OpenAI seizing the imagination of what AI could be, there were so many different, diverse ideas about

what AI was in the research world. There was so much fascinating research going into: could we build AI systems without data, or with less data? Could we build AI systems that were super compute-efficient? Could we build AI systems that reason, but without needing to train them on the entire internet?

All of that went away once OpenAI released GPT-3 and especially once OpenAI released ChatGPT because it appeared, I mean, now it's not clear anymore, but it appeared at the time that this was going to be an incredibly commercially lucrative path to producing more commercially viable and just really great products.

And so everyone kind of glommed onto this approach. And because of this race, all of the AI researchers that were working in the field started getting these million-dollar compensation packages from companies, and they shifted from independent academia into these corporate labs and started being financially conflicted in sort of the work that they were doing to develop this field and this technology. And so the reason why I say that

This kind of compute-intensive scaling laws paradigm of AI development is not inevitable. It's because there were so many other paths being explored that then got deleted, essentially. They got forgotten. Yeah, because they were deciding to spend the money, correct? Exactly. And now people are stuck or the industry is sort of stuck in this mind frame of,

The only thing that they can do to win this race is to just keep spending more money and just keep acquiring larger supercomputers than the other guy.

And that has sort of created this crazy downward spiral in terms of a race to the bottom for things like environmental impacts and social impacts and ultimately labor impacts. So, Keach, obviously size matters. What a surprising thing for a bunch of men. We've recently seen counterexamples to scaling, though. The success of DeepSeek from China, we don't have to—

I can't remember the exact number, and it's unclear exactly how much it cost. But they used fewer GPUs. In their scarcity, they found some efficiencies. Keach, what impact do you think the launch has had? Because despite DeepSeek, the push for bigger supercomputing data centers continues. Obviously, the day after the inauguration, Sam, standing next to President Trump, announced the $500 billion Stargate data center project.

Earlier this month, OpenAI announced OpenAI for Countries, to help other countries build AI infrastructure. Talk about this. Does it have to be this way? Did DeepSeek have an effect? You just saw this UAE deal; it looks like they're really competing for something like that. So talk a little bit about what it could mean. Sometimes it does feel like data colonialism. Other ways, it feels like an opportunity for some of these countries.

Well, the reason that Sam Altman is like the man for the AI moment is because the version of AI, this scaling laws version of AI, is something that requires so much money. And Sam is really great at raising money. So it's like he's a hammer and that's the nail, you know. And if he had different skills, you know, maybe that wouldn't be so important. But he is a master fundraiser. And so the version of AI that he is driving the world toward is one that requires giant piles of money.

And the DeepSeek moment, yes, there was a freakout, and the fundraising process became complicated in the weeks after that DeepSeek announcement suggested that, hey, there might be a much more efficient way to do this. But it passed, I would say. Meaning how so? Well, just that...

It's full speed ahead, pedal to the metal, let's build data centers all over the world. We'll really see when people start sending checks for this fundraising, not just promising the money, but when the actual money arrives, about how much confidence investors have. Is that something you're worried about? I think it's something we're keeping an eye on, yeah, definitely. Yeah.

So another area where we're seeing Sam shift his position is regulation, because that's part of this whole package, because it looks like Trump is pedal to the metal, like whatever they want. And obviously, that's why they were all standing there at the inauguration, because this is the moment. When he testified to the Senate Judiciary Committee two years ago, about six months after ChatGPT came out, he said that regulatory intervention by governments would be critical to mitigate the risks of AI models.

He was back on Capitol Hill earlier this month. I want you to listen to his answer to a question from Senator Ted Cruz about standards or regulations. I am nervous about standards being set too early. I'm totally fine with the position some of my colleagues took that standards, once the industry figures out what they should be, it's fine for them to be adopted by a government body and sort of made more official. But I believe the industry is moving quickly towards figuring out the right protocols and standards here. And we need the space to innovate and to move quickly.

Hmm. What a surprise. Interesting. So what does it say about that? Karen, why don't you, and then Keach, talk about this? I'd like you both to talk about it. Go ahead, Karen. Yeah. So I think when Altman testified two years ago, he called for regulation, but a very particular type of regulation. Don't throw me in that briar patch. Yeah. There were a lot of senators that were asking about current-day harms like copyright, impact on jobs, environmental issues, things like that.

And Altman very cleverly sort of orchestrated a complete shift of their attention towards future harms in talking about the ability of these models to potentially extricate themselves and go rogue on the web and that kind of danger. And so he was really calling for regulation in that bucket by saying the current models are – we don't need to worry about that now. Let's really like nail the regulation for models that don't exist yet. Right.

And so I actually think the most recent time that he testified, it's a little bit more – he's being a bit more clear now what his stance is, but it hasn't been a change in his stance.

It's just I think he's realizing the original talking points that he wheeled out are no longer as effective. And so he's testing out something new ultimately towards the same objective. Well, at the time, it was also Biden administration who was putting out that executive order. When he testified that time, I think I texted him, you're lying. And he goes, how do you know? I go, your mouth is moving.

But go ahead, Keach. So it's been fascinating to see how the cultural moment has changed with Trump returning to office, right? Suddenly now everything is about China. China, China, China. Yeah. You hear me. We got that. Yeah.

Right. So the entire justification for all this infrastructure investment, for us not regulating as much as it seemed like folks maybe wanted us to a couple years ago, is about competition with China. That fits into the Trump administration worldview, and it just happens to fit with what the economic incentives of OpenAI are. It's been quite an interesting pivot. What do you think about the pivot? I think it is...

troubling to pump up an enemy and use that enemy as justification for doing things. We'll be back in a minute.


So every episode we get a question from an outside expert. Here is yours. This is Casey Newton. I am the writer of the Platformer newsletter, the co-host of the Hard Fork podcast, and of course, Kara's former tenant and forever houseboy. Keech and Karen, I am a big fan of both of your books.

So when you consider the broader societal and ethical implications of OpenAI's work to build artificial intelligence, what do you think is the most significant unintended consequence of OpenAI's rapid development that you believe deserves more public and policy attention right now? Great question from Casey, my houseboy. Keech, you go first, then Karen. I think it's about economic concentration. You know, Sam has had a long history of being interested in things like UBI and being concerned about the impacts that AI is going to have on the average person. But if you actually look at what they're doing, they are concentrating economic power to a degree that we have never seen before. And I don't see any evidence of brakes on that really at all. Yeah, I would agree with you. Go ahead, Karen.

I think I would add one more layer, which is we are also seeing an enormous amount of political power concentration as well now. The way that Silicon Valley has aligned completely with the Trump administration and the Trump administration is now putting the full firepower of their authority behind, you know, things like allowing them to build data centers on federal lands using emergency presidential powers.

and helping them strike these deals in the UAE. That is a degree that we've also never seen before. And I do not think that what we've seen with Musk being able to go into the government and create Doge and kind of start decimating the government is actually inherently different from what these other AI companies are doing, which is they're really positioning themselves to gain as much economic and political leverage as possible and

Within the U.S. and around the world such that they will reach a point, and I personally think that they've already reached this point, which is why I call them empires, where they are able to act in their self-interest without any material consequence. Such as ecological problems or things like what you write a lot about. Yeah, exactly. Meaning they'll get to do whatever they want locally and globally. Exactly.

Because this administration is taking off every brake, essentially, and in fact is probably benefiting in some fashion that we're not aware of. So on May 15th, Sam tweeted, soon we'll have another low-key research preview to share with you all, which turned out to be OpenAI's new AI coding agent called Codex.

To be clear, low-key research preview is a way they described ChatGPT when it came out, and Altman is clearly trying to draw parallels. It was the same day that OpenAI announced that its flagship GPT-4.1 model would now be integrated into ChatGPT, but they've also had some issues with a recent update, as the chatbot was apparently too sycophantic, and they had to roll back some of those changes. How important are these new releases, Keech, and then Karen?

Well, I think it was interesting that we saw Fiji Simo, Instacart's CEO, recently be put in charge of applications. She's like the Sheryl Sandberg of this thing. That's how people are characterizing it. Right. And you see, you know, she has history with advertising and products. And so you can kind of see the kind of company that...

OpenAI is trying to become, it's pointed at becoming, right, like a big consumer tech company, which I don't think anyone could have possibly guessed, even when ChatGPT was launched. I certainly didn't see that coming. I thought it was going to live inside Microsoft, you know? Right. So, yeah, I think these most recent developments

are just steps on the path toward that. I mean, we'll see more with these devices. They've been talking to Jony Ive, we know. I think there's a very good chance they're going to become sort of an Apple-like consumer company in the future. Yeah, I think there's sort of two things happening here. One is that OpenAI is not really retaining its research edge anymore. You know, I talk with

Yeah.

So they're really trying to drum up the releases in that regard. But I think the second thing that it shows is also Altman's management style. He actually had an interview with Sequoia recently where he touched on it himself, where he was like, I don't want a lot of people to be in one room working on the same thing because then we get mired in bureaucracy where people are just debating each other and things.

There's lots of infighting and he is a very conflict-averse person. And so he was like, we just needed to be doing a lot of things all the time so that there's a few people each working on all of these different things and they're not going to be in conflict. And then we like ship, ship, ship, ship, ship.

And I think it's very telling that now Fiji is coming in to, I think, maybe create some kind of strategy behind it instead of just having chaos. Right, she's a very operational person. Exactly. I would say he is not. And obviously there have been a lot of dramatic departures, although each one of them then goes and starts their own company. They're like, this is for safety, and then they get billions of dollars, which is almost laughable in some way. But in that regard...

research does get left behind, because OpenAI was supposed to do the research and achieve artificial general intelligence going forward. Is that the main goal? And again, the definition of it changes depending on who you're talking to. I had a different definition from Dr. Fei-Fei Li the other day when I saw her versus someone else versus the people who debate on Twitter, who I try not to pay attention to. How do you, is that still the main goal?

Yes, and I think part of the point of the Fiji Simo hiring is so that Sam can delegate and he can sort of focus on his big goals, which are, of course, raising money and research. But being the guy who brought the world AGI, right, that's what he wants to be. He wants to be a man of history. So he wants to keep his eye on that ball. I think...

AGI is still narratively the goal because it is the most narratively persuasive and continues to have a lot of power in shaping public discourse and also continuing to rally people within OpenAI towards a common goal. Right, because it's not like you're just making a search engine that you can sell advertising against, which is kind of boring. Yes, exactly. But has OpenAI been focused on

research breakthroughs that would enable a so-called AGI in a while? I don't think so. I think they have largely shifted to a consumer product company, as Keech mentioned, and they are really starting to maximize their models on user engagement, which, you know, I don't think

models being maximized on user engagement is going to lead us to an AGI, no matter how they try to define it. And so I think there has long been a divergence at the company between what they publicly espouse and even what they espouse to employees and what the priorities of the

company are, but it is starting to diverge even more dramatically. Even more between the researchers. No, it's a consumer company. Who do each of you imagine they think their real competition is? Google?

Yeah. I mean, I've been covering Google for a really long time, and this is the first time that someone's really given Google a run for their money. Which isn't such a bad thing. Not just OpenAI, you know, all the chatbots, with them in the front. But yeah. I want to finish up talking about the future then. One thing Sam has done is he's definitely shapeshifted himself in order to get along with the Trump administration. He's not become

like Elon. They're definitely in the tank for those people, and I would not say he is, but obviously he went to the Middle East. I never get a sense of some of the more, you know, ridiculous, you know, what's the opposite of Trump derangement syndrome? Trump crush syndrome or something like that. I don't sense that from him in any way. I just feel like he feels like he has to work with them. There's no other choice. How do you look at that?

Yeah, he is in no way MAGA. But I thought it was really interesting that he did not support either candidate verbally during the campaign. So that was like months, right? Months even before the election, he just kind of sat tight. So he has advisors around him and they're reading the polling and all of that, right? He saw that he was going to have to work with this person. So they find their areas of overlap, and that's about building AI and infrastructure. What do you think, Karen? Yeah.

Yeah, he's a strategic guy and he's an opportunist. He is willing to align himself with the people that are going to get him where he needs to go. And he's willing to suspend maybe his own inner values for that. So I think that is ultimately what we saw. And was I a little bit surprised? I was, but then it made so much sense that I was like, oh, of course, that he would have done that. I think...

Altman is trying to basically reach escape velocity with the backing of the Trump administration. He's trying to get to a point where all of the infrastructure is already laid. You know, the first bricks are already placed on the ground and you just can't do anything about it anymore, even in the next administration if it shifts back to the Democrats.

I think he's just trying to move as quickly as possible such that it becomes very, very difficult to unwind. And I think that's, you know, it's not just opening eyes strategy or Altman strategy. That's been the story of Silicon Valley for a long, long time. Just move fast until you've

superseded the law and you've superseded other mechanisms to rein you in. Keech, you just mentioned something I want to come back to. You wrote about Sam's political ambitions to become governor of California, not this cycle, but maybe even president. He has denied these ambitions. How likely do you think they are? Well, that was a moment back in 2017 when he definitely did explore running for governor of California.

And he talked to people about, you know, the president thing. I don't think he was super serious about it, but this was something that was in the air. I think that he wants to be in the room where it happened, you know, and AI turned out to be the way for him to get there.

kind of more than anyone could have guessed. I thought it was really interesting that in the beginning of OpenAI, or early years, he had said publicly they had gone and tried to get the government to invest, and they had been turned down. He did, yeah. And I think he's always believed that AI is something the government should have been doing anyway, right? That that's the ideal model, and I guess if they're not going to do it, I'm going to go do it myself. So I think we're kind of seeing that moment

come to pass in some ways. A little bit what Karen was saying, that there is this sort of mirror image of China state-backed capitalism thing that I could very much see like emerging in the future. It's already sort of here. And I think he just wants to be there sort of at the center of organizing our society. Do you ever see a rapprochement with Musk?

Yeah, he's a pretty flexible guy. I think he would be willing to do it. I don't know. It's more Elon that I would question. Who do each of you think his biggest enemy is, then? Himself? Or something else?

I mean, at this moment, it is Elon without question. There's both like real emotional anger there on both sides as well as competition. So that fight is still very real. I think that absolutely is true for this moment. And I think what's interesting about

Altman's whole history is he gets a lot of detractors over time because what I realized was if you agree with Altman's vision for the future, he's the best asset ever because he's so persuasive at being able to get you the things that you need to

Right.

And so he has encountered many, many people along his career that have become, you know, enemy number one in that particular era. So it's Musk now, but certainly there will be more in the future. Last question. You've both dedicated years now to understanding the people, and this person particularly, behind a technology that is incredible and incredibly frightening to a lot of people.

I want to ask you both, what's your p(doom)? What is the percent chance you think AI goes wrong, that someone creates an existential catastrophe for humans? And let me give you the positive side: what is your greatest hope for it? So why don't we start with Karen and then Keech?

So I kind of hate p(doom)s. p(doom) is the worst-case scenario probability. Yeah. But, you know, I really think that we risk undermining democracy because we are allowing so few people to accumulate so much economic and political power, as I mentioned, and we are building these new forms of empire, which are the antithesis of democracy. Empire is based on hierarchy. It is based on the belief that

there are superior people that have the right, whether it's God-given or nature-given, to rule above inferior people. And democracy is based on a beautiful philosophical notion that people are created equal and that that's why we all have agency to actually shape our collective future.

And we are rapidly moving towards a world where most people do not feel that agency anymore and therefore are not actively participating in democracy. And that is part of what we're seeing with the erosion of democratic norms in our society right now. And I think the most optimistic version of the future is, I hope—

There have been so many people that are now activated in wanting to understand AI, wanting to grapple with AI, wanting to be part of that conversation, whether that's artists, whether that's educators, whether that's kids, that I really hope it starts to shift the trajectory that we're currently headed on, and people will mobilize and start reasserting themselves and re-enlivening democracy. Yeah.

So, yeah, I don't think there's a very large chance of existential risk from AI, but not zero, right? But the possible upside, if you want to ask the one possibly good scenario, is that when they hit the wall with this nonprofit conversion plan, it means that the nonprofit is going to remain in control. And

Sam has many times said that he envisions a future where all the people in the world are going to be able to vote on what AI looks like. And it was never clear to me how the nonprofit would do that.

And, you know, it's one of those things, oh, we haven't quite figured it out yet, but we're going to figure it out. But there is a desire to have some kind of democratic mechanism. And if they are not able to wiggle out of the nonprofit, maybe that is something that can emerge over time. You have no dooms? I mean, you know, single digit percentage chance of total annihilation from the robots.

Thank you both. They're both terrific books. Usually there are all these books that come out about certain companies. This is a company everyone should be paying attention to early, later, in the future and stuff like that. So it's really important to understand where they came from, and especially this particular figure, Sam Altman. Thank you so much. Thank you. Thank you, Kara. Thank you.

On with Kara Swisher is produced by Christian Castor-Russell, Kateri Yoakum, Dave Shaw, Megan Burney, Megan Cunane, and Kaylin Lynch. Nishat Kerwa is Vox Media's executive producer of podcasts. Special thanks to Maura Fox and Eamon Whalen. Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics.

If you're already following the show, you don't have Trump crush syndrome. If not, you have a P-Doom score of zero. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.
