People
Ben Brooks
NLW
A well-known podcast host and analyst focused on cryptocurrency and macroeconomic analysis.
Zeynep Tufekci
Topics
NLW: The release of DeepSeek is a major turning point for the AI field, sparking wide discussion about its implications for American companies, markets, the AI industry, and geopolitics. I believe that in the world ahead, open source will become a new kind of soft power, and the United States should lead in this arena rather than cede it to China. However, China's "openness" may be a tactic, because it is not truly free; it is constrained to the facts the Chinese Communist Party wants people to believe. If models are going to shape how people perceive truth in the future, that is a genuine cause for concern.

Zeynep Tufekci: The arrival of DeepSeek shocked the American tech industry and exposed the inadequacy of the U.S. approach to AI safety and regulation. I argue that America's strategy of trying to block the spread of AI technology through measures like chip export restrictions cannot work. Instead, government and industry should prepare for the changes AI will bring, for example by hardening cybersecurity and addressing the inequality AI may worsen. The interests of large multinational companies do not represent everyone's interests; we need to think more broadly about AI's impact on society.

Ben Brooks: DeepSeek's open-source model is powerful, but it also carries problems of political propaganda and censorship. Some lawmakers have proposed banning all AI technology imports and exports with China, which would stifle open science and technology and could foster global reliance on Chinese technology. I believe DeepSeek's open-source nature means its models can be modified and improved to remove censorship, which is a good thing. The United States should treat open technology as a vehicle for spreading its influence rather than as a vector for communist propaganda, and should ensure that powerful models remain openly available.

Deep Dive

Shownotes Transcript


Today on the AI Daily Brief, open source as the new soft power. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Hello, friends. As you can probably tell, I am recording this on Friday, still in the midst of this flu. But for this week's Long Reads episode, when I was searching around the web, it's very clear that the launch of DeepSeek is still the most resonant thing that's happened in AI in a very long time.

There were numerous op-eds trying to make sense of what DeepSeek really meant for American companies, for American markets, for the AI industry, and for geopolitics. We're going to start by reading a set of excerpts from columnist Zeynep Tufekci called The Dangerous AI Nonsense That Trump and Biden Fell For. You can probably tell the position she's coming from just from the title. I'm going to turn it over to the AI version of myself, which today at least will be much better than the non-AI version of myself, and then I'll be back to introduce the second piece.

China's tech industry recently gave the U.S. tech industry, and along with it the stock market, a rude shock when a startup called DeepSeek unveiled an artificial intelligence model that performs on par with America's best, but that may have been developed at a small fraction of the cost and despite trade restrictions on AI chips. Since then, there have been a lot of frantic attempts to figure out how DeepSeek did it and whether it was all above board. Those are not the most important questions, and the excessive focus on them is an example of precisely how we got caught off guard in the first place.

The real lesson of DeepSeek is that America's approach to AI safety and regulations, the concerns espoused by both the Biden and Trump administrations, as well as by many AI companies, was largely nonsense. It was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphics chips.

That was a self-serving fiction, foisted on out-of-touch leaders by an industry that wanted the government to kneecap its competitors. Instead of a futile effort to keep this genie bottled up, the government and the industry should be preparing our society for the sweeping changes that are soon to come. The misguided focus on containment is a belated echo of the nuclear age, when the United States and others limited the spread of atomic bombs by restricting access to enriched uranium, by keeping an eye on what certain scientists were doing, and by sending inspectors into labs and military bases.

Those measures, backed up by the occasional show of force, had a clear effect. The world hasn't blown up, yet. The Trump administration is operating under the same faulty logic. Just one day into his new term, President Trump and OpenAI's chief executive Sam Altman, fresh off his $1 million pledge to Trump's inaugural fund, announced a vast computing infrastructure venture. Called Stargate, it is billed as a multi-hundred billion dollar bid to retain U.S. advantage in the fast-growing industry.

DeepSeek chose the very next day as the moment to publish a paper letting the world in on its great coup. The company says it spent little of what OpenAI and others spent because it was able to optimize its software and train its model more efficiently. Advances like that have allowed many other technologies to become cheaper and more widely available. Still, not everyone believes that account, especially given questions about China's respect for intellectual property rights and trade restrictions. Could the company have amassed a forbidden stash of Nvidia chips? Maybe.

Could the cost of developing the model have been higher than was disclosed? Some estimates suggest so. OpenAI says that DeepSeek may have stolen some of its work, but whatever DeepSeek did, it and others can keep doing it.

Already, many AI companies are building on DeepSeek's model. Individuals are downloading it or querying it for only a tiny fraction of what OpenAI charges. Within the industry, there's a popular trope that the real turning point will be the development of AGI, or Artificial General Intelligence, when AI reaches human-level intelligence and potentially becomes autonomous. The implication, then, is that what's happening now is just a kind of warm-up, which no one needs to worry too much about. That's a convenient falsehood.
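
To make "querying it for a tiny fraction of what OpenAI charges" concrete, here is a minimal sketch of calling the hosted model. The piece itself gives no technical details; the OpenAI-compatible endpoint at api.deepseek.com and the deepseek-reasoner model name are assumptions drawn from DeepSeek's public documentation at the time, not claims from the column.

```python
# A minimal sketch of querying DeepSeek's hosted model via its
# OpenAI-compatible API. The base URL and model name are assumptions
# from public docs, not details from the op-ed being quoted.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical env var
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Explain mixture-of-experts in two sentences."}],
)
print(response.choices[0].message.content)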

We have reached the other AGI turning point: artificial good enough intelligence. AI that is fast, cheap, scalable, and useful for a wide range of purposes. And we need to engage with what's happening now. Many observers have described this as a Sputnik moment. That's incorrect. America can't reestablish its dominance over the most advanced AI because the technology, the data, and the expertise that created it are already distributed all around the world. The best way this country can position itself for the new age is to prepare for its impact.

If the inevitable proliferation of AI endangers our cybersecurity, for example, instead of just regulating exports, it's time to harden our networked infrastructure, which will also protect it against the ever-present threat of hacking by random agents or hostile governments. And instead of fantasizing about how some future rogue AI could attack us, it's time to start thinking clearly about how corporations and governments could use the AI that's available right now to entrench their dominance, erode our rights, worsen inequality. As the technology continues to expand, who will be left behind? What rights will be threatened? Which institutions will need to be rebuilt, and how? And what can we do so that this powerful technology, with so much potential for good, can benefit the public?

It is time, too, to admit that the interests of a few large multinational companies aren't good proxies for the interests of the people facing such a monumental transformation. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.

Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.

Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI Quarterly Pulse Survey.

Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles. They're not just talking about AI, they're leading the charge with practical solutions and real-world applications.

For instance, over half of the organizations surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US.

The second piece from former Stability AI head of public policy Ben Brooks is called If China Shares AI, the U.S. Can't Afford to Lock It Out. Once again, I'm going to turn it over to the AI version of myself, and then we'll be back to discuss. When the Chinese firm DeepSeek launched its latest AI model, shocking policymakers and bruising the stock market, it exposed a paradox. Freely available software that parrots the Chinese Communist Party was made freely modifiable.

DeepSeek's open-source models rival those from closed-source U.S. labs and power a chatbot that is currently the most downloaded app globally. But they promote the one-China policy, flatter Xi Jinping, and avoid talk of Uyghur genocide. The chairman of the House Select Committee on the Chinese Communist Party, arguing the new model is controlled by Beijing and openly erases the party's history of atrocities, called for stronger export controls...

His colleagues quickly obliged. Within 48 hours, Senator Josh Hawley had introduced a bill to prohibit the import and export of any AI technology to or from China, with penalties of 20 years' imprisonment. The bill would ban research projects, "activities directed toward fuller scientific knowledge," with Chinese colleges or universities. And the broad definition of AI technology would capture not just chips but also data, research, software, and the distinctive settings or parameters that determine a model's performance.

These proposals are the most aggressive AI reforms contemplated by any policymaker of either party anywhere in the U.S. President Trump has blasted the Biden administration for imposing onerous and unnecessary government control over AI. Yet a ban on the import and export of intangible technology like models, digital files that can be shared on the internet, would eclipse anything proposed by his predecessor, the European Union, or the Republicans' bête noire, California.

Bizarrely, the bill appears to prohibit simply downloading Chinese technology or intellectual property, barring U.S. developers from even studying models like DeepSeek's, let alone learning from them. And since no one can reasonably prevent widely available software, data, or research findings from winding up in China, it would put the brakes on open science and open technology from the U.S. too. If they survive Congress and a robust First Amendment challenge, these ideas would smother open innovation and foster a global reliance on Chinese technology in the process.

While Chinese developers are required by law to ensure their models adhere to the CCP's core socialist values, it's precisely because DeepSeek open-sourced their R1 model that researchers and developers can do something about it.

Anyone can download and run the model independently of DeepSeek, probe it for undesirable behaviors and modify it to improve performance, or unwind censorship. Within a few days, developers had shared over 500 variations of the model online, earning five times as many downloads as the original.
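
To make "download and run the model independently" concrete, here is a minimal sketch of loading one of the openly released R1 distilled checkpoints locally. The Hugging Face transformers API and the deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B model ID are assumptions drawn from public model listings, not details from the op-ed itself.

```python
# A minimal sketch of running an open R1 distilled checkpoint locally,
# independent of DeepSeek's own servers. Model ID and library usage are
# assumptions from public listings, not from the piece being quoted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt with the model's built-in chat template,
# then generate and decode only the newly produced tokens.
messages = [{"role": "user", "content": "Summarize the history of Tiananmen Square."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are local, the same workflow supports the probing and modification the piece describes: a developer can inspect outputs on sensitive prompts, fine-tune the checkpoint, and redistribute the variant.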

The AI search engine Perplexity has tweaked and deployed its own version of R1, which can summarize the Tiananmen Square massacre and explain Taiwanese independence without the Orwellian admonitions of the original model. By comparison, the largest models released by OpenAI, Anthropic, or Google are closed. Their distinctive parameters are withheld, accessible only through a paywalled interface like ChatGPT.

In general, users and developers must take what they're given, accepting the limitations imposed by a handful of big tech firms, including to the chagrin of Republican policymakers, their political and cultural biases.

Asked about DeepSeek, artificial intelligence and crypto czar David Sacks argues that predominantly closed-source U.S. firms dropped the ball by focusing on content moderation instead of competition. They wasted a lot of time on things like DEI and woke AI, he said. The models were basically producing things like black George Washington. All models embed the values and design choices of the labs that develop them, as well as biases in their training data and vulnerabilities in their architecture. But it's difficult to reconcile these contradictions.

Policymakers fret about Beijing free-riding on U.S. industry and exporting ideology in open-source models, while blaming U.S. firms for censoring their black-box models. These comments reflect a wider tension in Washington over whether open technology is a boon for transparency and competition in AI, or a windfall for America's adversaries.

As Hawley wrote to Meta following the release of its open Llama model, centralized AI models can be more effectively updated and controlled to prevent and respond to abuse. Yet because DeepSeek released its model openly, developers can look under the hood, modify its behaviors, and substitute alternative values. That's a good thing, since their R1 model appears to be equally capable, more efficient, cheaper to run, and free to use compared to leading U.S. models.

With these advantages, DeepSeek's models could become the default engines that power the next generation of AI applications. That means there's a real possibility that China-regulated and censored models might determine the search results or social media feeds of billions of people around the globe.

But open technology isn't guaranteed. Like OpenAI before it, DeepSeek could choose to keep its future models behind a paywall, succumbing perhaps to commercial pressure or a regulatory crackdown in Beijing. If the world develops a reliance on closed-source Chinese models accessed through a paywall, we will inherit their warped behaviors too. That is why the world needs a steady supply of competitive open-source alternatives that can be inspected, modified, and localized.

The U.S. is an indispensable part of that supply chain, both directly through our own open models, like Meta's Llama, and indirectly through chips, research, data, and capital.

There are risks. Open technology can be misused, and upstream firms may have little control over downstream development. But if the U.S. government turns off the tap, it will promote a global reliance on Chinese technology from labs regulated by the Chinese Communist Party. It will erode the influence of U.S. firms abroad and displace U.S.-trained, U.S.-aligned, and U.S.-regulated models from the world's AI systems. Instead of pulling up the drawbridge, the U.S. should commit to ensuring that powerful models remain openly available.

Ubiquity is a matter of national security. Retreating behind paywalls will leave a vacuum filled by strategic adversaries. Washington should treat open technology not as a vector for Chinese Communist Party propaganda, but as a vessel to transmit U.S. influence abroad, molding the global ecosystem around U.S. industry. If DeepSeek is China's open-source Sputnik moment, we need a legislative environment that supports, not criminalizes, an American open-source moon landing.

All right, just briefly, what's interesting to me here is that we are very clearly going through a huge global realignment. There's no way to look at the first couple weeks of the Trump administration and not see that that's what we're in the midst of. In that transition, there are going to be wildly competing interests and very different visions of the future that are going to be competing to shape the global order. And part of the confusion that you can see in Ben Brooks' piece has to do with the fact that traditional political affiliations won't always line up with the policies that people are looking for.

Still, I think that the underlying theme of these pieces, and many others that have been written since the launch of DeepSeek, comes down to a single salient idea: that in the future we are headed into, open source is soft power. I think the concern that many have is that the natural place for the US to play, which is open and free, is being taken up by China. But really, their "open" is a trick, because it's not truly open. It's open, but only with the facts that the CCP wants you to believe.

To the extent that we believe that models are going to shape truth in the coming decades, this is actually a meaningful area of concern. Anyways, guys, lots to chew on there. Apologies for not having a longer analysis, but we'll be back next week. Until then, appreciate you listening as always. Peace.