
Brooke Talks AI With Ed Zitron

2025/1/29

On the Media

People
Brooke Gladstone
Ed Zitron
A podcast host and creator focused on the tech industry's influence and manipulation.
Topics
Brooke Gladstone: I've observed that the American AI industry has recently been hit hard by a new model from the Chinese company DeepSeek, which triggered turmoil in the U.S. stock market, with many tech giants' share prices falling sharply. DeepSeek's model is technically on par with those of the big American companies, but it costs far less and is far more accessible, posing an enormous challenge to the American AI industry.

Ed Zitron: I have long warned that the AI industry is a bubble, and DeepSeek's emergence confirms my view. American AI companies are over-reliant on expensive GPUs and massive data centers, which are not actually necessary for building large language models. DeepSeek built a model comparable to GPT-4o at far lower cost and with simpler methods, and open-sourced its research, demonstrating the inefficiency and lack of innovation of American AI companies. DeepSeek's V3 model costs far less than GPT-4o yet is just as competitive, shattering the myth that building large language models requires enormous capital. Benchmark tests for large language models are gamed and do not reflect real-world use cases. DeepSeek publicly released its models' source code and research details, so anyone can verify the results. There is no real competition among American AI companies; they are all doing similar things and relying on enormous capital. DeepSeek has shown that American AI companies lack genuine innovation and competition, and that their development model is inefficient and wasteful. American AI companies rely on inflated promises and massive funding rather than real technical innovation. Generative AI has already peaked; its potential for further development is limited. The "hallucination" problem of large language models cannot be fixed; their probabilistic nature limits their potential. The AI hype cycle was created by exaggeration from the media and corporate executives, not by the technology's actual capabilities.

Ed Zitron: American AI companies pour vast sums into unrealistic projects rather than solving real problems. Silicon Valley's growth-at-all-costs mindset blocks real innovation, leaving the AI field without practical applications. Generative AI like ChatGPT is not true "artificial intelligence," but a simple combination of existing technologies. Sam Altman's talk of AI safety is a marketing strategy, not concern about real risks. Generative AI's environmental impact is enormous, including energy consumption, water consumption, and electronic waste. Generative AI currently loses money. Generative AI like ChatGPT has not solved real problems; its popularity is the product of media hype. Most of OpenAI's revenue comes from paid subscriptions, not real-world applications. The AI bubble is about to burst, because existing models have made no substantive progress. OpenAI keeps releasing barely improved products and lacks real innovation. American AI companies imitate one another rather than solving real problems. They lack meaningful, mass-market, useful products, and their development model harms the environment and society. Trump's "Stargate Project" is bluster with nothing real behind it. DeepSeek threatens every AI company, because they are all building similar products. Silicon Valley companies blindly follow trends and waste resources. The metaverse's failure reflects Silicon Valley's growth-at-all-costs mindset and lack of a sustainable strategy. Tech companies' model of perpetual growth is unsustainable, and the market will revalue them. DeepSeek has proven that AI can be built more cheaply and efficiently, but it has not created a new business model or new capabilities. American AI companies chase scale and growth while neglecting sustainability and business models. Big tech companies maintain their monopolies by erecting technical barriers. The bursting of the AI bubble will cost large numbers of people their jobs and have deeper effects on the tech industry. The collapse of the myth of perpetual tech growth will force the market to revalue tech stocks and could trigger a crisis in the industry, with consequences similar to the 2008 financial crisis. Tech companies should abandon perpetual growth and pursue sustainability. The underlying technology of large language models is the Transformer model.




WNYC Studios is sponsored by Intuit TurboTax. Taxes was feeling stuck trying to squeeze in getting tax help but never having enough time. Now, Taxes is getting a TurboTax expert who does your taxes from start to finish. While they work on your taxes, you get real-time updates on their progress. And you get the most money back guaranteed.

Get an expert now on TurboTax.com. Only available with TurboTax Live Full Service. Real-time updates only in iOS mobile app. See guarantee details at TurboTax.com slash guarantees. On the Media is supported by Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy to see if you could save when you bundle your home and auto policies.

Try it at Progressive.com, Progressive Casualty Insurance Company and affiliates. Potential savings will vary, not available in all states. This is On the Media's Midweek Podcast, and I'm Brooke Gladstone. So, lately the stock market's been rocking and rolling, but this week the AI industry especially has felt the ground shake with the intrusion of a new kid on their block.

a Chinese model called DeepSeek, founded by a Chinese billionaire in 2023, which has broken through with technology that is comparable to the American behemoths. I'm talking about OpenAI, Google DeepMind, Anthropic, Meta's Llama AI, and Elon Musk's xAI. But ever so much cheaper to make and mind-bogglingly accessible to all.

Ed Zitron, host of the Better Offline podcast and writer of the newsletter Where's Your Ed At?, has been warning that the AI bubble has been primed to burst for some time now. Ed, thanks for joining us. My pleasure.

So on Monday, the stocks of many major tech firms took a steep dive, including Alphabet, Microsoft, and the AI chipmaker NVIDIA, which I guess lost almost $600 billion in value, reportedly the biggest one-day loss in U.S. history, though it's recovered a little.

It all happened when news broke about a new, relatively open Chinese AI model called DeepSeek R1. It was kind of like a horror movie jump scare for the AI industry. Why?

So it's important to know how little the AI bubble has been built on. If you look at these companies, Anthropic, OpenAI, and then the competitive ones from Amazon, Google, with Gemini, for example, there's not actually been much behind them. It's always been this idea that America has built these big, beautiful, large language models that require all of this money based on their narrative, but

Tons of GPUs, the most expensive GPUs, the biggest data centers, because the only way to do this was to just cram more training data into them. What DeepSeek did...

They found a way to build similar models to GPT-4o, the underpinning technology of ChatGPT, and o1, the reasoning model that OpenAI had built, and make them much, much, much cheaper. And then they published all the research as to how they did it and open-sourced the models, meaning that the source code of these models is there for anyone to basically adapt.

Now, a GPU, that's an electronic circuit that can perform mathematical calculations at high speed. Is this the chip that NVIDIA makes?

In a manner of speaking, so GPUs, graphics processing units, they're used for everything from hardcore video editing to playing video games and such like that. Well, computer games in the case of a computer. NVIDIA has a piece of software called CUDA. Now, all you need to know about that is GPUs traditionally were used for graphics rendering. What CUDA allowed, and it's taken them 15 years to do it or more, CUDA allowed you to build software that would run on the GPUs.
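To make the CUDA point concrete, here is a minimal sketch, assuming PyTorch and an NVIDIA card are available, of what "software that runs on the GPUs" means in practice: the same hardware built for rendering graphics doing plain matrix math instead.

```python
# Minimal sketch of general-purpose compute on a GPU via CUDA.
# Assumes PyTorch is installed; falls back to CPU if no NVIDIA card.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices -- the same arithmetic once used to
# render graphics, repurposed here for plain number-crunching.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b  # one matrix multiply, dispatched across thousands of GPU cores
print(device, c.shape)
```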

So what NVIDIA started doing was making these very, very, very powerful GPUs. And when I say a GPU in a computer, that's a card that goes inside. The GPUs that NVIDIA sells are these vast rack-mounted units. Like, they're huge, and they require incredible amounts of cooling. They run hot. And the way that generative AI, the technology under ChatGPT, runs is...

It runs on these giant GPUs, and it runs them hot. They are extremely computationally expensive and power-consuming, which is why you've heard so many bad stories about the environmental damage caused by these companies. Right. DeepSeek...

didn't have access to NVIDIA chips, or at least not unlimited access, partly because of the Biden sanctions and other things. So they had to find other ways to cut down on the cost. So the sanctions made it so that what could be sold to Chinese companies was limited. Now, before the sanctions came in, DeepSeek...

It grew out of a hedge fund called High-Flyer, which should also deeply embarrass the American tech industry: a random outgrowth of a hedge fund just lapped them. But nevertheless...

What they did was they had stockpiled these graphics processing units before the sanctions came in, but they'd also used ones that have less memory bandwidth. NVIDIA could only sell kind of handicapped ones to China. So they had a combination of these chips. So they found ways around both training the models, effectively feeding them data to teach them how to do things.

and also the inference, which is when you write a prompt into ChatGPT, it infers the meaning of that and then spits something out, a picture, a ream of text, and so on and so forth. But these constraints...

meant that DeepSeek kind of had to get creative. And they did. They got very creative. And just to be clear, all of this is in their papers. People are probably already working on recreating this. People are running the models themselves now. It's all for real. They not only are running them, they can also modify them. Exactly. And they can build things on top of them. Now, an important detail here is that one of the big hubbubs is that DeepSeek trained

their V3 model, competitive with GPT-4o, the technology underlying ChatGPT. And they trained it for about $5.5 million, versus GPT-4o, the latest model, which cost $100 million or more, according to Sam Altman. So they kind of proved you don't really need to buy the latest and greatest GPUs. In fact, you don't even need as many of them as you thought, because they only used 2,048 of them, as opposed to the hundreds of thousands that hyperscalers have.
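The $5.5 million figure is straightforward arithmetic; a back-of-the-envelope sketch using the numbers from DeepSeek's published V3 report, where the $2-per-GPU-hour rental rate is the report's own assumption:

```python
# Back-of-the-envelope: where DeepSeek's ~$5.5M training figure comes from.
# Figures per DeepSeek's V3 technical report; the $2/hour rental rate is
# that report's own assumption, not a measured cost.
gpu_hours = 2.788e6      # total H800 GPU-hours reported for V3 training
rate_per_hour = 2.00     # assumed rental price per GPU-hour, USD

print(f"${gpu_hours * rate_per_hour / 1e6:.2f}M")  # ~$5.58M, vs $100M+ cited for GPT-4o

# Spread across 2,048 GPUs running in parallel:
print(f"~{gpu_hours / 2048 / 24:.0f} days of wall-clock training")  # ~57 days
```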

The reason that they're building all of these data centers is because they need more GPUs and more space to put the GPUs and ways to cool them. And indeed, NVIDIA's latest Blackwell chips come in 3,000-pound server racks. It's truly, genuinely really cool to look at. It's just the result that kind of sucks. But didn't ChatGPT...

or the company that made it, OpenAI, just create, as you say, this new reasoning program a couple of months ago called o1? They say it can answer some of the most challenging math questions, and it seemed to have put the company once again at the top of the heap. Are we saying that DeepSeek can do similar things?

Yes. Now, when you say it can do these very complex things, this is another bit of kayfabe from this industry. What is kayfabe? Kayfabe is from wrestling, specifically: it's where you pretend something's real and serious when it isn't really. Uh-huh. So...

The benchmarking tests for large language models in general are extremely rigged. You can train the models to handle them. They're not solving actual use cases. When o1 came out, the media, who should be ashamed of themselves, fell over themselves to say how revolutionary this was without ever asking, "What does this actually do? What can I build with it?" And on top of that, o1 is unfathomably expensive.

When it came out, the large discussion was, wow, only OpenAI can do this. Only OpenAI is capable of making something like this. You kind of had similar things from Anthropic, other companies, but OpenAI, top of the pile. And that was why they were able to charge such an incredible amount and why they were able to raise $6 billion. Except now...

The sneaky Chinese, and I mean that sarcastically, this is just good engineering. They managed to come along and say, not only can we do this 30 times cheaper, and to be clear, that number is based on DeepSeek hosting it, so we don't know who's subsidizing that, but nevertheless...

Not only can they do it cheaper, but they can do it. And on top of that, they open-sourced the whole damn thing. So now anyone can build their own reasoning model using this, or they can reverse engineer it and build their own reasoning model that will run cheaper and on their servers, kind of removing the need to deal with OpenAI entirely. The developers that I have talked to

are extremely impressed with o1, but they're also extremely impressed with DeepSeek's R1. So they're kind of like, huh, why would I pay this insane amount of money when I could not? Eventually, you're going to find cloud companies in America that will run these models. And at that point...

Where's OpenAI's moat? And the answer is they don't have one, just like the rest of them. DeepSeek isn't entirely open because it didn't say how they trained their AI. Well, they didn't share their training data. They said how they trained it, and they were actually extremely detailed. But the question is, can we trust their numbers? We don't have to. They published the source code, the research behind everything.

and the last week has been crazy. You've seen so many people using it and the results speak for themselves. The model works really well. It is cheaper. There are versions of R1 that you can run on a MacBook.

This is potentially apocalyptic for OpenAI, because even if you don't trust DeepSeek, even if you say, I do not trust their hosted model, so the version that DeepSeek sells access to, I don't trust it, which is fair. We don't know where it's run, and we don't know who backs it. But you can self-host it. You can run it on a local machine, or you could run it using a GPU. These things can be done safely. You don't have to trust them. And

You can build your own. They explained how they trained it. They explained why it was cheaper in great detail. And I've spoken to multiple experts who all say the same thing, which is, uh-oh, OpenAI.
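What "you can self-host it" looks like in practice: a minimal sketch using Hugging Face's transformers library with one of the small distilled R1 checkpoints DeepSeek released. The checkpoint name below is theirs; a 1.5B-parameter model will run on an ordinary laptop, if slowly.

```python
# Sketch: running one of DeepSeek's open-weight R1 distillations locally.
# Assumes `pip install transformers torch`; downloads weights on first run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # DeepSeek's published checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "List the prime numbers between 10 and 30."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))  # no hosted API involved
```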

So now China and America are in an AI race, a hegemonic battle of generative AI. And it seems that this DeepSeek tech has upended our assumptions of how all this was going to go. Most of our assumptions, not your assumptions, because I've been reading you. You say that there was never really any competition among American AI companies.

Yep, that is the long and short of it. This situation should cause unfathomable shame throughout the tech media, but really within Silicon Valley.

Did all of them just sit around twiddling their thumbs? No. What happened was Anthropic and OpenAI have been in friendly competition. They've both been doing kind of similar things in different ways, but they're all backed by the hyperscalers. OpenAI, principally funded by Microsoft, running on Microsoft servers almost entirely until very recently, but paying discounted rates to Microsoft and still losing money. They lost $5 billion last year. They're probably on course to lose more this year. OpenAI? Yes. Okay.

And that's after $3.7 billion of revenue. Anthropic, I think they lost $2.7 billion last year. And you'd think, with all of that loss, they would be chomping at the bit to make their own efficient models, right? What if I told you they didn't have to? What if I told you that there was no real competition between them?
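Those two numbers line up with the "spent per dollar earned" figure Zitron cites later in the interview; a quick sanity check:

```python
# Sanity check: OpenAI's quoted 2024 revenue and loss imply the
# roughly-$2.35-spent-per-$1-earned figure mentioned later on.
revenue = 3.7e9   # revenue, per the interview
loss    = 5.0e9   # loss, per the interview

costs = revenue + loss  # total spending implied by the revenue and the loss
print(f"${costs / revenue:.2f} spent per $1 of revenue")  # ~$2.35
```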

Google, Amazon, they back Anthropic. They just put more money in. They all had these weird blank check policies and the venture capitalists backing the competition, whatever that was. There's nothing they could do because based on the narrative that was built, these companies needed all this money and all of these GPUs as that was the only way to build these large language models. So why would any of them ever build something more efficient?

When they could all keep doing the same thing, this big, nasty, lossy, wasteful thing. They were always going to get more money because this is the only way we can fight China when China builds the thing. The real damage that DeepSeek's done is they've proven that America doesn't really want to innovate. America doesn't compete. There is no AI arms race.

There is no real killer app to any of this. ChatGPT is only popular because AI is the new thing. ChatGPT has 200 million weekly users. People say that's a sign of something. Yeah, that's what happens when literally every news outlet all the time for two years has been saying that ChatGPT is the biggest thing without sitting down and saying, "What does this bloody thing do and why does it matter?" Oh, great, it helps me cheat at my college papers. It can hallucinate stuff.

Oh, great. And now on top of this, and really the biggest narrative shift here is that everything has been predicated on the fact that we had to spend this money, that there was no cheaper way of doing this at the scale they needed to. There was nothing we could do other than give Sam Altman more money and Dario Amodei more money. And that's the CEO of Anthropic. And all they had to do was just continue making egregious promises.

because they didn't think anyone would dare compete, anyone would dare bring the price down. And I don't think OpenAI believed anyone could do reasoning like this. I think it's good that this happened. I'm glad it happened because the jig is up.

Bloomberg reported last month that the big names in AI tech, OpenAI, Google, Anthropic, are struggling to build more advanced AI. You've been saying since 2024, early 2024, that generative AI had already peaked. Why did you think that then? And why do you think so now?

So the reasoning models, what they do, just to explain, is they break down a prompt. If you say, give me a list of all the state capitals that have the letter R in them, it will then, and there's a whole different technical thing I won't go into there, you can see it thinking, and I must be clear, these things aren't thinking, but

But it thinks through the steps and says, okay, what are the states in America? What states have this? And then it goes and checks its work. Nevertheless, these models are probabilistic. They're remarkably accurate in the sense that they would guess that if you say, I need a poem about Garfield with a gun, it would need to include Garfield and a gun, perhaps a kind of gun. And it would guess what the next word was.
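The decompose-then-verify pattern he's describing can be sketched in a few lines. Everything here is illustrative: ask_model is a hypothetical stand-in for a call to any language model, not a real API.

```python
# Illustrative sketch of a reasoning model's propose-then-check loop.
# `ask_model` is a hypothetical stand-in for an LLM call, not a real API.

def ask_model(prompt: str) -> list[str]:
    # Pretend the model guessed these capitals containing the letter "r".
    return ["Sacramento", "Denver", "Boston", "Hartford"]  # one wrong guess

def verify(candidates: list[str]) -> list[str]:
    # The "checks its work" step: re-test each guess against the rule.
    return [c for c in candidates if "r" in c.lower()]

print(verify(ask_model("List state capitals containing the letter R.")))
# Boston gets filtered out -- the check catches the bad guess, but only
# because this particular rule happens to be mechanically checkable.
```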

Now, to do this and train these models required a bunch of money and a bunch of GPUs, but also a bunch of data, scraping the entire internet, to be precise. Not just the Common Crawl, which is a large file of a bunch of internet sites, but basically anything they could find. There was a rumor that OpenAI was literally transcribing every video from YouTube and then using the text from that.

My husband has written a lot of books. He's on that list. My daughter has written a couple of books. She's on that list. I wrote a comic book and they use that too. Jesus, it's just disgusting. A minuscule speck in the universe of stuff that they used. I didn't feel particularly special, but I'm just saying I know how universal their use of this stuff was. But here's the crazy thing though. Even with all of your stuff and everything on the internet, to keep doing what they are doing,

They would need four times the available information of the entire internet, if not more. Because it takes that much. To train these models requires you just shoving it in there and then helping it understand things. But to get back to one thing, these models are probabilistic.

There is no fixing the hallucination problem. Hallucinations are when these models present information that's false as authoritatively true. I thought they'd been getting much better at catching that stuff. No, they haven't. The whole thing with the reasoning models is that by checking their work, they got slightly better. But I think people forget how amazing the human brain is. The mistakes we make are just fundamentally different. We don't make mistakes because each thing we're doing is a guess. We make mistakes because we're

falling apart constantly. We're all dying and our bodies are hell, at least in my case. But the thing I'm saying is these models were always kind of going to peter out because they'd run out of training data, but also there's only so much you can do with a probabilistic model. They don't have thoughts. They are probabilistic. They guess the next thing coming and they're pretty good at it.
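A toy version of "they guess the next thing coming," in plain Python with no model at all: a bigram table that picks the most frequent next word. It has no concept of what any word means, which is the point; a confident wrong pick is exactly the shape a hallucination takes.

```python
# Toy next-word guesser: counts which word follows which, nothing more.
from collections import Counter

corpus = ("garfield hates mondays . garfield loves lasagna . "
          "garfield has a gun .").split()

following: dict[str, Counter] = {}
for word, nxt in zip(corpus, corpus[1:]):
    following.setdefault(word, Counter())[nxt] += 1

def guess_next(word: str) -> str:
    # Highest count wins; the guesser has no idea what the words mean.
    return following[word].most_common(1)[0][0]

print(guess_next("garfield"))  # a statistically plausible guess, not knowledge
```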

But pretty good is actually nowhere near enough. And when you think of what makes a software boom, a software boom is usually based on mass consumer adoption.

and mass enterprise level adoption. Now the enterprise referring to big companies of like 10,000, 50,000, 100,000 people, but down to like a thousand. Nevertheless, financial services, healthcare, all of these industries, they have very low tolerance for mistakes. And if you make a mistake with your AI, well, I'm not sure if you remember what happened with Knight Capital. That was with an algorithm. They lost hundreds of millions of dollars and destroyed themselves because of one little mistake.

We don't even know how these things fully function, how they make their decisions. But we do know they make mistakes because they don't know anything. They do not have knowledge. They are not conscious. I get it. No, no, no, no. Not just not conscious.

They don't know anything. ChatGPT does not know. Even if you say, give me a list of every American state, and it gets it right every time. It's just pattern recognition. Yes, it is effectively saying, what is the most likely answer to this? It doesn't know what a state is. It doesn't know what America is. It doesn't know anything. It is just remarkably...

remarkably accurate probability, but remarkably accurate is nowhere near as accurate as we would need it to be. And as a result, there's only so much they could have done with it. You wrote, what if artificial intelligence isn't actually capable of doing much more than what we're seeing today? What if there's no clear timeline when it'll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media system

to take these people at their word. You said you believe that a large part of the AI boom is just hot air pumped through a combination of executive BSing and the media gladly imagining what AI could do rather than focus on what it's actually doing.

You said that AI in this country was just throwing a lot of money at the wall, seeing if some new idea would actually emerge. Are we talking about agents that can actually do things for you in a predictable way? Are we talking about God? What is the new idea that they were hoping would emerge from throwing money

billions at this project. So Silicon Valley has not really had a full depression. We may think the dot-com bubble was bad, but the dot-com bubble was they discovered e-commerce and then ran it in the worst way possible.

This is so different because if this had not had the hype cycle it had, we probably would have ended up with an American DeepSeek in five years. DeepSeek, the way it is trained, uses synthetic data, which makes it very good at things with actual answers, things like coding and math, but

The way Silicon Valley has classically worked is you give a bunch of money to some smart people and then money comes out the end. In the case of previous hype cycles that worked, like cloud computing and smartphones, there were very obvious places to go. Jim Covello over at Goldman Sachs famously said one of the responses to generative AI skepticism was to say, well, no one believed in the smartphone. Wrong. There were thousands of presentations that led fairly precisely to that.

So with the AI hype, it was a big media storm. And suddenly, Microsoft's entire decision-making behind this was they saw ChatGPT and went, god damn, we need that in Bing. Buy every GPU you can find. And I'm telling the truth. It is insane that multi-trillion-dollar market cap companies work like this. Nevertheless, all of these companies, they went, well, this is the next big thing. Throw a bunch of money at it. That's worked before. Buying more, doing more, growing everything always works. And I call it the Rot Economy, the growth-at-all-costs mindset.

Silicon Valley over the years has leaned towards just growth ideas. What will grow? What can we sell more of? Except they've chased out all the real innovators. But to your original question, they didn't know what they were going to do. They thought that ChatGPT would magically become profitable. When that didn't work, they went, well, what if we made it more powerful and bigger? We can get more funding that way. So they did that. And then they kept running up against the training data wall and the diminishing returns. They went,

Okay, agents. Agents sound good. Now, agents is an amazing marketing term. What it's meant to sound like is a thing that goes and does a thing for you. Just trying to get a plane reservation is a nightmare. If I could outsource that, I certainly would. But when you actually look at the products, like OpenAI's Operator, they suck. They're crap. They don't work.

They don't work, and even now the media is still like, "Well, theoretically this could work." They can't. Large language models are not built for distinct tasks. They don't do things. They are language models. If you are going to make an agent work, you have to find rules, but you also have to find rules for, effectively, the real world, which AI, and I mean real AI, not generative AI, which isn't even autonomous, has proven

is quite difficult. So give me an example of where we are right now with AI. You're saying it can't do anything. I mean, it can write term papers. This is actually an important distinction. AI as a term is one thing, and it has been around a while, decades actually. AI that you see in like a Waymo cab...

like an autonomous car, it works pretty well. Then you look at Tesla and you go, it works less well because...

The problem is not being able to drive along a road. It's the one edge case. Training for edge cases is so difficult. So like the very rare cases that nevertheless end up with someone dying. Now, AI is an umbrella term. And what you see in like a Waymo, for example, nothing to do with ChatGPT. ChatGPT is generative AI, transformer-based models. What Sam Altman and his ilk have done is attached...

a thing to the side of another thing and said it's the same, because they want the markets and journalists to associate them so that they don't have to build real stuff.

When you say they've attached a thing to another thing, give me some sort of concrete way to visualize what that means. So ChatGPT is not artificial intelligence. It is not intelligent. I guess it's artificial intelligence in that it pretends to be, but isn't. But the umbrella term of AI...

is old and has done useful things. AlphaFold, protein folding. These are things that actually exist and help with diseases. These are things that go through an enormous amount of data to find proteins that can enable us perhaps to develop medicines to combat diseases. Right. And that is not generative AI.

Just to be clear. And the versions of Siri before Apple Intelligence: not generative AI, a different kind of AI. Waymo cabs: AI. Algorithms: you could refer to them as AI. Those have been around a while.

Generative AI is separate, but what Sam Altman does is he goes and talks about artificial intelligence, and most people don't know a ton about tech, which is fine. But Altman has taken advantage of that, and particularly taken advantage of people in the media and the executives of public companies who do not know the difference between any of these things, and said, yeah, ChatGPT, that's all AI. Sam Altman went before Congress and said, we need

you to help us help you so that AI doesn't take over the world. Oh, it's so funny when he says that as well. They love talking about AI safety. You want to know what the actual AI safety story is? Boiling lakes, stealing from millions of people, running...

and burning energy, the massive energy requirements that are damaging our power grid. That is a safety issue. The safety issue Sam Altman's talking about is what if ChatGPT wakes up and does this? It's marketing. Cynical, despicable marketing from a carnival barker. Sam Altman is a liar and it's disgraceful how far he's gone.

Let's talk for a moment about the environmental impact. It eats a huge amount of energy. Apparently, according to the International Energy Agency, a request made through ChatGPT consumes 10 times the electricity of a Google search.

To cool all of those incredibly hot GPUs requires fresh water. That's in a world where a quarter of humanity already lacks access to clean water. It ends up creating a lot of toxic electronic waste. Its power is often generated by burning fossil fuel. The cost-benefit analysis here doesn't look that great. It isn't.
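Rough numbers behind that 10x claim, in the sketch below. The per-query figures are the commonly cited estimates (about 0.3 Wh for a search, about 2.9 Wh for a ChatGPT request); the query volume is a purely illustrative assumption, not a reported figure.

```python
# Scale of the IEA's "10x a Google search" claim, under stated assumptions.
search_wh  = 0.3   # Wh per Google search (commonly cited estimate)
chatgpt_wh = 2.9   # Wh per ChatGPT request (IEA-cited estimate)

queries_per_day = 1e9  # illustrative assumption only

extra_mwh_per_day = (chatgpt_wh - search_wh) * queries_per_day / 1e6
print(f"~{extra_mwh_per_day:,.0f} MWh of extra electricity per day")  # ~2,600 MWh
```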

You say it's incredibly unprofitable anyway.

They always say, well, it never is going to generate a profit at the beginning. Even Amazon was not making money when it started. They love that example. They love to bring up Uber as well. Now, Uber runs on labor abuse, but also Uber in their worst year lost around $6 billion. You want to know what year that was? It was 2020 when their business stopped working because no one went outside.

Now, they say, oh, Amazon wasn't profitable at first. They were building the infrastructure and they had an actual plan. They didn't just sit around being like, at some point, something's going to happen, so we need to shove money into it. With Uber, for example, yeah,

Kind of a dog of a company, horrible company. But nevertheless, you look at what Uber does, and you can at least explain to a person why they might use it. I need to go somewhere, or I need something brought to me using an app. That's a business with a thing. Even then, it required so much labor abuse and still does. OpenAI, by comparison, what is the killer app exactly? What is the thing that you need to do with OpenAI? What is the iPhone moment?

Back then, to get your voicemail, you actually had to call a number and hit a button. But you could suddenly look at your voicemail and scroll through it. You could skip the beginning. Text people in a natural way versus using the clunky buttons of a Nokia 3210. There were obvious things. The earliest days of Uber in San Francisco, it was really difficult to get a cab anywhere. So what I'm describing here are real problems being immediately solved. You'll notice...

People don't really have immediate problems that they're solving with ChatGPT, other than Sam Altman solving the problem of how he gets worth a few more billion.

Okay. Generative AI is incredibly unprofitable. $1 earned for every two and a quarter spent, something like that? Yeah, $2.35 from my last estimates. OpenAI's board last year said they needed even more capital than they had imagined. And the CEO, Sam Altman, recently said that they're losing money on their plan to make money, which is the ChatGPT

Pro plan? What is that? So this is where the funny stuff happens.

OpenAI's premium subscriptions make up about 73% of their revenue. The majority of their revenue does not come from people actually using their models to do stuff, which should tell you everything. Because if most of their money doesn't come from people using their very useful, allegedly, models, well, that means that they're either not charging enough or they're not that useful. I think Altman said he wasn't charging enough. He isn't charging enough, but...

Their premium subscriptions have limits to the amount that you can use them. Well, their $200-a-month ChatGPT Pro subscription allows you to use their models as much as you'd like, and they're still losing money on them. And the funny thing is, the biggest selling point is their reasoning models, o1 and o3. o3, by the way, is their new thing that is just throwing even more compute at the problem. It's yet to prove itself to actually be any different other than just...

slightly better at the benchmarks and also costing $1,000 for 10 minutes. It's insane. But the reason they're losing money is because the way they've built their models is incredibly inefficient. And now that DeepSeek's come along, it's not really obvious why anyone would pay for ChatGPT Pro at all. But the fact that they can't make money on a $200-a-month subscription, that's the kind of thing that you should get fired from a company for. They should boot you out the door. How does DeepSeek make money?

Well, that's the thing we don't know. But it's important to separate DeepSeek, the company, which is an outgrowth of a Chinese hedge fund. We don't know who subsidizes them. But anybody can use their program for free. They also released a consumer-focused app where anyone can use it for free. And that's 100% subsidized, and we do not know how. But their models, which are open source, can be installed by anyone. And you can build models like them.

At this point, one has to wonder how long it takes for someone to just release a cheaper ChatGPT+ that does most of the same things. So you described at the top that this bubble is going to burst, right? How do you know? Why is it inevitable? I feel it in my soul. Nothing is inevitable. However, these models are not getting better. They are getting around the same in different ways. Give me an example of that.

I mean, they're only getting better at benchmarks. Their actual ability to do new stuff? Nothing new has happened. Look at what happened with Operator, their so-called agent. Operator is OpenAI's. It doesn't work. It sucks. You can use it if you have the ChatGPT Pro subscription, for example. Have you ever tried to use it? Yeah, it doesn't work very well. It sometimes does not understand what it's looking at and just stops.

It's just really funny that this company is worth $150 billion. Every single time they release a product, there's no new capabilities. Operator by comparison for OpenAI is pretty exciting in the sense that it did something slightly different and also doesn't work. It's just this warmed up crap every time with them. I was just going to say, so what's the end game here?

The end game was Microsoft saw ChatGPT was big and went, damn, we've got to make sure that's in Bing because it seems really smart.

Google released Gemini because Microsoft invested in OpenAI, and they wanted to do the same thing. Meta added a large language model. They created an open-source one themselves, Llama. And they did that because everyone else was doing GPT and transformer-based models and generative AI. Everyone just does the same thing in this industry. No one was thinking, can this actually do it? They all

tell the same lies. Sundar Pichai went up at Google I/O, the developer conference, and told this story about how an agent would help you return your shoes. Took a few minutes. He went through this thing talking about how it would autonomously go into your email, get you the return thing, just hand it to you. It'd be amazing. And then ended it by saying, this is totally theoretical.

They are making stuff up, have put a lot of money into it, and now they don't know what to do. All the king's horses and all the king's men don't seem to be able to get the valley to spit out one meaningful, mass-market, useful product that actually changes the world other than damaging our power grid, stealing from millions of people, and boiling lakes.

Last week, Trump announced a $500 billion AI infrastructure plan called the Stargate Project, along with the CEOs of OpenAI, Oracle, and SoftBank. And though it was announced at the White House, it's privately funded. President Trump still kind of wants to put his brand on it. Is this about him wanting to tap into that sweet, sweet masculine energy?

No, it's just Trump doing what Trump does, which is getting other people to do stuff and then taking credit. There is no public money in it. And in fact, there's another little wrinkle. So it's not $500 billion. It's up to $500 billion. Another weird detail as well: OpenAI has pledged to give $19 billion, according to The Information, which they plan to raise through an equity sale

and debt. Their largest round they've raised is $6 billion. Their company loses $5 billion a year. What is going on? Why do I have to talk about this nonsense with a straight face? It's all a performance. They're all tap dancing. And then DeepSeek came along and did this and freaked them out so much, because the whole thing's hollow. It's an actual glass onion.

After the announcement of the Stargate project, Elon Musk took to social media to criticize the project. After all, he has his own AI empire. I wonder, does DeepSeek threaten all AI tech companies equally?

Every single one, because they're all building the same thing. There's very little difference between GPT-4o, OpenAI's model, and Anthropic's Claude 3.5 Sonnet, and Google Gemini. There are various versions of them, but they're all kind of the same. So...

DeepSeek's V3 model is the one that's competitive with all those, and it's like 50 times cheaper. It's so much cheaper, and now it's open source, so everyone can build that. Now, Elon Musk's situation is even weirder. He just bought 100,000 GPUs with Dell. Very bizarre. Partnership with Michael Dell there. Dell makes them? Yeah.

Dell makes the server architecture. The GPUs go inside. So Dell had the data center. But nevertheless, Grok and xAI, Grok being the chatbot that is attached to Twitter, it's not actually really obvious what any of that is meant to do. Kind of similar to how the AI additions to Facebook and WhatsApp and

Instagram don't really make any sense either. But it's actually good to bring that up because everyone is directionlessly following this. They're like, we're all doing large language models, right? We're all going to do the same thing, right? Just like they did with the metaverse. Now, Google did stay out of the metaverse, by the way. Microsoft bought bloody Activision and wrote metaverse on the side. Right?

Mark Zuckerberg lost like $45 billion on the metaverse. And putting aside the hyperscalers, there were like hundreds of startups that raised billions of dollars for the metaverse because everyone's just following each other. The valley is despicable right now. It's full of people that build things because everyone else is building something. They don't build things to solve real problems. And I think this is actually a larger economic problem too. Why is AI in everything?

No one bloody knows. The metaverse was Zuckerberg's effort to create some sort of multimedia space where people could live or something, right?

He was claiming it would be the next internet, but really it was just a bucket of nonsense. It was just a bunch of stuff that he did because he needed a new growth market. The metaverse was actually a symptom of the larger problem, the rot economy I talk about, which is everything must grow forever. And tech companies are usually good at it, except...

they've run out of growth markets. They've run out of big ideas. So the reason you keep seeing the tech industry jumping from thing to thing, that when as a regular person, you look at them and go, that seems kind of stupid, or this doesn't seem very useful. What's happening is

is that they don't have any idea what they're doing and they need something new to grow. Because if at any point the market says, wait, you're not going to grow forever? Well, what happened to NVIDIA happens? NVIDIA has become one of the biggest stocks. It has like some ridiculous multi-hundred percent growth in the last year. It's crazy. The market is throwing a hissy fit because guess what? The only thing that grows forever is cancer.

Okay. What about the people who say, just give this time? It's going to happen. You make a great case that if this is not a competitive atmosphere here in the U.S., if something does happen, it'll probably be somewhere else.

Yeah, but what might happen elsewhere doesn't mean that they're going to find a multi-trillion dollar market out of this. DeepSeek has proven that this can be done cheaper and more efficiently. They've not proven there's a new business model. They've not made any new features.

And there's an argument right now, a very annoying argument, which says, well, if the price comes down, that means that more things will happen because more people will use it. So Jevons paradox was quoted by Satya Nadella. And Jevons paradox says, well, as the price of a resource comes down, the use of it increases. That's not what's going to happen here. No one has been not using generative AI because it was too expensive. In fact, these companies have burned the

billions of dollars doing so. A third of venture funding in 2024 went into AI. These companies have not been poor. And now we're in this weird situation where we might have to accept, oh, I don't know. Maybe this isn't a multi-trillion dollar business. Had they treated it as a smaller one, had they said, this might be like a $50 billion industry, they would have gone about it a completely different way. They never would have put billions of dollars into GPUs. They might have put a few billion and then focused up

Kind of like how DeepSeek went, "We only have so many and they only do so much, so we will do more with them." No, American startups became fat and happy. But even when you put that aside, there was never a business model with this. Come on! It's just so dull! Give me real innovation, not this warmed-up nonsense.
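The Jevons-paradox argument he's pushing back on has a simple breakeven condition worth making explicit: if the price of inference falls by a factor k, total spending on it only grows if usage grows by more than k. A sketch with illustrative numbers:

```python
# Breakeven behind the Jevons-paradox claim: price falls k-fold, so
# industry revenue only grows if usage grows MORE than k-fold.
price_drop   = 30   # DeepSeek's claimed cost advantage (roughly 30x cheaper)
usage_growth = 5    # hypothetical increase in usage at the lower price

print(f"spend changes by {usage_growth / price_drop:.2f}x")  # 0.17x -- it shrinks
# Usage would have to grow more than 30x just to keep revenue flat.
```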

And you say that here, big tech, it just clones and monopolizes. Yes. And what they wanted, I believe, with this was to create the large language model monopoly by creating the mystique that said, okay, these models have to cost this much. The only way they can run is when we have the endless money glitch. And the only way we can build them is with the biggest GPUs you've got. That myth

allowed them to tap dance as long as they wanted while also excluding others. Because others, how would you possibly build a model like GPT-4o? You don't have all the GPUs and all the money. Except now, maybe that might not be the case. And now, I don't know. People aren't feeling so good out there. So...

If this really is all early signs of the AI bubble bursting, what are the human ramifications of this industry collapsing? So there will be tens of thousands of people laid off, just like happened in 2022 and 2023 from major tech companies. On top of that, I'm not scared of the bubble bursting. I'm scared of what happens afterwards.

Once this falls apart, the markets are going to realize that the tech industry doesn't really have any growth markets left. The reason that tech companies have had such incredible valuations is because they've always been able to give this

feeling of eternal growth, that software can always magic up more money. It's why the monopolies are so powerful. They've had just endless cash to throw at endless things. Like, Google owns the marketplace where you buy ads, the marketplace where you sell them, and the way you host the ads themselves. They've never really had to work particularly hard other than throw money at the problem and exclude other people.

Once it becomes obvious that they can't come up with the next thing that will give them 10% to 15% to 20% revenue growth year over year, the markets are going to rethink how you value tech stocks. And when that happens, you're going to see them get pulverized. I don't think any company's shutting down. I think Meta is dead in less than 10 years just because they're a bad company.

But right now, the markets believe that tech companies can grow forever and punish the ones that don't. There are multiple tech companies that just lose money. But because they grow revenue, that's fine. What happens if the growth story dies, like a 2008 housing crash, but specifically for tech? And I fear it. I hope I'm wrong on that one. The human cost will be horrible.

Even outside of the depression that might be experienced in the tech world, there are so many pension funds out there that may have investments. You're painting a picture of, you know, the housing crash of 2008. I actually am. I wrote a piece about that fairly recently. It was at Sherwood, and it was about how OpenAI is the next Bear Stearns. And if you look back at that time,

And you mentioned the people that might say I'm wrong, that I should just wait, that these things are working out.

You could see stories like from David Leonhardt over at The New York Times and others talking about how there's not going to be a housing crash and maybe even a crash would be good, but there's nothing to worry about. There were people at that time talking about how there was nothing to worry about, and they were doing so in detail and using similar rhetoric: just wait and see. Things are always good, right? What goes up never comes down. That's the phrase, right? And it's horrifying because

It doesn't have to be like this. These companies could be sustainable. They could have modest 2-3% growth. Google as a company basically prints money. They could just run a stable good Google that people like and make billions and billions and billions of dollars. They would be fine. But no, they must grow. And we are the users and our planet too, our economy too. We are the victims of the rot economy. And they are the victors.

These men are so rich. Sam Altman, if OpenAI collapses, he'll be fine. He's a multi-billionaire with a $5 million car. These people, they're doing this because they know they will be fine, that they'll probably walk into another cushy job. It's sickening and it's cynical. Well, that's the end. I just have a couple more informational questions. Of course. One is...

You talked about the transformer-based models. I still didn't get what that was. It's just the underlying technology behind every single large language model, in the same way, like, servers underpin cloud computing. Mm-hmm. And my other question is, there's a $5 million car? Yes, he has, like, a Koenigsegg Regera. Nasty little man. Ed, thank you very much. It's been such a pleasure. I loved it. I'll come back whenever.

Ed Zitron is host of the Better Offline podcast and writer of the newsletter, Where's Your Ed At? Thanks for listening to On The Media's Midweek podcast. Check out the big show on Friday, where we'll be talking about lots of things, including posing that perennial question about what the president just did. Is that legal? You can find the show, of course, right here. And it posts around dinnertime on Friday. Bye.

If it's time for you to say goodbye to your car, truck, boat, motorcycle, or RV, consider donating it to WNYC. We'll turn the proceeds from the sale of your vehicle or watercraft into the in-depth news and programming that keeps our community informed. Donating is easy, the pickup is free, and you'll get a tax deduction. Learn more at WNYC.org slash car.

NYC Now delivers breaking news, top headlines, and in-depth coverage from WNYC and Gothamist every morning, midday, and evening. By sponsoring our programming, you'll reach a community of passionate listeners in an uncluttered audio experience. Visit sponsorship.wnyc.org to learn more.