People
Brad Zelnick
Dario Amodei
Elon Musk
Entrepreneur and innovator, guided by long-termism, driving revolutions in space exploration, electric vehicles, and renewable energy.
Ethan Mollick
Jensen Huang
CEO and co-founder who led NVIDIA from its founding to its position as the global leader in accelerated computing.
Manus
Marc Benioff
Perplexity
Prakapa
Robin Washington
Tariq Amin
Topics
Dario Amodei: I believe the development of AI technology will accelerate job losses, with young workers hit hardest. I predict that AI-related layoffs could push unemployment to between 10% and 20%. While AI may deliver breakthroughs in areas like healthcare and drive economic growth, left unaddressed it could leave large numbers of people without work. I believe governments and AI companies need to stop sugarcoating AI's potential downsides and actively pursue solutions, such as a 3% token tax on AI companies to fund retraining for displaced workers and the broader economic transition. By raising the urgency of this issue, I hope to push all parties to work together on the employment challenges AI will bring.

Host: I personally think AI will have a profound impact on the job market: it will change how we get tasks done and even replace certain roles. While new jobs may emerge over the long run, the transition could be very painful, with many people facing unemployment and skill obsolescence. I therefore agree with Dario that we need to take this problem seriously and actively explore responses, such as reskilling programs, support for entrepreneurship and innovation, and adjustments to social safety nets to fit the AI era.

Deep Dive

Chapters
Anthropic CEO Dario Amodei warns of significant AI-related job displacement, potentially leading to unemployment rates between 10% and 20%. He urges governments and AI companies to address this issue urgently, proposing a 3% token tax on AI companies as a possible solution. While long-term optimism remains, the short-to-medium-term transition will likely involve substantial job displacement.
  • AI-related job displacement could cause unemployment to spike to 10-20%
  • Amodei proposes a 3% token tax on AI companies to mitigate job losses
  • The transition to an AI-driven economy will involve significant job displacement

Shownotes Transcript


Today on the AI Daily Brief, the latest in the global AI arms race, and before that in the headlines, a dire prediction from an AI entrepreneur. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Thanks to today's sponsors, Blitzy.com, Super Intelligent, and Plumb. And to get an ad-free version of the show, go to patreon.com slash AI Daily Brief. Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in around five minutes.

Earlier this week, we discussed a recent jobs report showing that entry-level tech hiring is not happening at nearly the same rate it was just a couple of years ago. A lot of the conversation was around how much of that was coming from AI. Well, Anthropic CEO Dario Amodei has waded into that conversation with some fairly dire warnings.

Speaking with Axios, Amodei accelerated his timeline on AI job displacement and warned that young workers would be the first hit. He forecast that AI-related downsizing could see unemployment spike to between 10% and 20%.

His belief is that we could end up in a world where, to use his words, cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don't have jobs. Now, Amodei has been warning for years that this was coming, but his concern is only growing more urgent. He said that governments and AI companies need to stop sugarcoating what's coming and told Axios that lawmakers generally don't get it, CEOs are afraid to speak up, and workers don't believe AI is coming for their job until it happens to them.

One proposal Amodei floated for how AI companies could take responsibility around this issue is a 3% token tax that would be levied on AI companies and redistributed in some way.

Still, it's very clear from listening to Dario that the main point is not any one particular solution he's promoting, but simply increasing the urgency of the conversation.

I continue to be optimistic in the long term. On the one hand, I've said frequently on this show that I think one of the most comforting lies we tell ourselves is this notion that AI isn't going to take our jobs, but someone using AI will. I think AI is coming for our jobs in the sense that it's coming for basically every task we do now. But if you've listened to me talk about, for example, the Microsoft idea of agent bosses, it's not that I think everyone's not going to have a job. I think that the jobs are going to look really different.

At the same time, that is a long-term view. And I think in the transition, there will be a lot of displacement. And I tend to agree with Dario that we need to be having conversations around what that displacement is going to look like, what our remediations are in the short and medium term, and how we transition people and companies to a different future. Now, speaking of companies and AI, as we'll discuss later with NVIDIA, it is earnings season, of course, and upgraded forecasts from Salesforce suggest that their AI products are beginning to pay off.

During this week's earnings, the company boosted revenue projections by around $500 million for the year. Salesforce also disclosed that their AgentForce product now has 4,000 paid deals signed, up from 3,000 in February. AI revenue is up 120% year-on-year. Overall revenue growth came in at 7.6%, but Salesforce projects growth ticking up to between 8% and 9% this year as AI leads a return to strength. Alongside growing sales, the company is also starting to see efficiency gains from their internal use of their AI products.

CFO Robin Washington told analysts, "We've reduced some of our hiring needs." She said the firm has been able to redeploy 500 customer service workers to other roles, saving $50 million.

Salesforce is also hiring fewer engineers due to productivity gains from AI. Washington commented, "We view these as assistants, but they are going to allow us to have to hire less and hopefully make our existing folks more productive." Executives also discussed the $8 billion acquisition of Informatica, which will add a data management layer to the company's suite to facilitate agentic workflows. Now, that Informatica news was actually one of the stories that got shunted to the side based on some travel, and so I wanted to come back to it.

This is the largest acquisition for Salesforce since they spent $27.7 billion buying Slack in 2020. It's an $8 billion deal to acquire Informatica, which is a data management company that offers data integration and governance solutions. This deal has been a long time in the making, with the Wall Street Journal first reporting talks in April of last year.

Now, this is very clearly about beefing up Salesforce's agentic offering. One of the bottlenecks for enterprise agent deployments is ensuring access to proprietary data. Brad Zelnick of Deutsche Bank wrote in a research note, without proper governance and the ability to manage and contextualize data, which Informatica excels at, the utility of what Salesforce AI generates is likely limited. One of the things that we hear is a challenge all the time with agents in general, but AgentForce and Salesforce in particular,

is anytime you have data that's locked in a closed ecosystem or an agent that can only access a certain portion of enterprise data, you tend to run up against real limits of what agents can do. Informatica is definitely about helping Salesforce break out of the constraints of their current data cloud and give them access to information outside of the Salesforce ecosystem for those enterprises. I actually thought that this long tweet thread from Prakapa on Twitter was pretty dead on.

She writes,

Over 50% of Informatica's revenue comes from data integration. So why is metadata and governance stealing the spotlight? Because this isn't really a data integration story. This is a metadata story. It's one that goes beyond Salesforce and Informatica. It's about how enterprise platforms are quietly re-architecting themselves for AI.

Just look at what Marc Benioff said. Enterprise-grade AI demands data transparency, deep contextual understanding, and rigorous governance. Translation: AI isn't just about models or LLMs, it's about metadata infrastructure.

Salesforce isn't alone. A few weeks ago, ServiceNow acquired data.world, another metadata platform. Why are business application giants suddenly buying metadata companies? My take: we're hitting the limits of the agent hype cycle. Yes, AI agents look great in demos, but in production, they fail without context, without trust, without governance. Metadata makes agents work.

The new AI game plan for enterprise software: keep data in your ecosystem, keep agents in your ecosystem, win the CIO, and buy and build metadata infrastructure to hold it all together. But here's the paradox. Enterprise data is only getting more fragmented: more tools, more agents, more data silos. Heterogeneity isn't going away. AI is accelerating it.

That's why I believe that in a fragmented world of data, compute, and agents, the metadata and governance layer must be the most unified part of the stack: open, neutral, interoperable, a control plane for the enterprise. Like I said, I think that is pretty spot on and explains a lot about this acquisition. Lastly today in the headlines, one cool feature update. Perplexity seems to want to go way beyond AI search. The company has released a new tool for generating reports, spreadsheets, dashboards, and more.

Collectively known as Perplexity Labs, the features are available to paying subscribers. And in an announcement blog post, Perplexity wrote, Perplexity Labs can help you complete a variety of work and personal projects. Labs is designed to invest more time, 10 minutes or longer, and leverage additional tools to accomplish tasks, such as advanced file generation and mini-app creation.

The idea is to leverage Perplexity's agentic features to go beyond deep research into producing actual deliverables. Rather than just spitting out a raw report, they want to use a combination of coding and image creation tools to create something that's a little more final and polished.

Now, interestingly, you're also seeing some of the other general agent companies start to home in on specific use cases and build UIs around them. I noticed, for example, that almost at the same time, Manus announced Manus Slides. They write, "Manus creates stunning structured presentations instantly. With a single prompt, Manus generates entire slide decks tailored to your needs."

Whether you're presenting in a boardroom, a classroom, or online, Manus ensures your message lands. Pretty interesting, then, to see these deep research type tools move into the production of the end documents as well. Let me know if you've had a chance to play around with Perplexity Labs or Manus Slides yet. For now though, that's going to do it for today's AI Daily Brief Headlines Edition. Up next, the main episode. Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.

If you don't know exactly what that means yet, do not worry, we're going to explain, and it's awesome. Blitzy is used alongside your favorite coding copilot as your batch software development platform for the enterprise, and it's meant for those who are seeking dramatic development acceleration on large-scale codebases. Traditional copilots help developers with line-by-line completions and snippets,

But Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch build millions of lines of high-quality code for large-scale software projects. So whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises with dramatic velocity improvements.

To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest quality code to the enterprise in batch. Projects that would normally require dozens of developers working for months can then be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever.

If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press get started and start using the product right away. Today's episode is brought to you by Super Intelligent, and more specifically, Super's Agent Readiness Audits.

If you've been listening for a while, you have probably heard me talk about this, but basically the idea of the agent readiness audit is that it's a system we've created to help you benchmark and map opportunities in your organization where agents could specifically help you solve problems or create new opportunities, in a way that, again, is completely customized to you. When you do one of these audits, what you're going to do is a voice-based agent interview where we work with some number of your leadership and employees

to map what's going on inside the organization and to figure out where you are in your agent journey. That's going to produce an agent readiness score that comes with a deep set of explanations, strengths, weaknesses, key findings, and of course a set of very specific recommendations, which we can then help you fulfill by finding the right partners.

So if you are looking for a way to jumpstart your agent strategy, send us an email at agent@bsuper.ai and let's get you plugged into the agentic era. Today's episode is brought to you by Plumb. If you build agentic workflows for clients or colleagues, you need to check out Plumb. Plumb is the only AI-native workflow builder on the market designed specifically for automation consultants, with all the features you need to create, deploy, manage, and monetize complex automations.

Features like one-click updates that reach all your subscribers, user-level variables for personalization, and the ability to protect your prompts and workflow IP. Make your life easier, your clients happier, and your business thrive with Plumb. Sign up today at useplumb.com/NLW. That's Plumb with a B, forward slash NLW. Welcome back to the AI Daily Brief. One of the recurring themes on this show is, of course, the geopolitical competition around AI. AI is

very clearly not just a technology movement. It is also not just a business consideration. It is right at the heart of geopolitical competition, obviously with China and the US at the center of that. Today, we kind of have a grab bag of stories that all relate to that in some way, and we're kicking it off with comments from NVIDIA CEO Jensen Huang. Now, it is earnings season, and so this is always a time of the quarter when we have a little bit more commentary from big leaders.

And Jensen recently has been using every chance that he's gotten to beat this drum of China's increased competition. Speaking with Bloomberg, he said, the Chinese competitors have evolved. Like everybody else, they are doubling, quadrupling capabilities every year, and the volume is increasing substantially. Now, one of the companies that Huang is referring to is Huawei, who are testing a chip that's roughly equivalent to NVIDIA's previous generation flagship, the H100.

Mobile company Xiaomi also recently announced their first proprietary chip in eight years, which was manufactured using second-generation 3nm architecture. This would be the first time a Chinese firm has mass-manufactured a 3nm chip, which even if that doesn't mean anything to you, the key point is that it achieves parity with the technology that's used to create NVIDIA's leading chips.

Still, rather than heed Jensen's warning that export controls were a failure, the new administration is extending controls in a different direction. Bloomberg reports that the Commerce Department has written to firms that provide chip design software to halt supply to China. And from Commerce Department spokespeople, it sounds like this is part of just a broader review of how every part of this ecosystem interacts with China in some way.

Now, one of the big areas of dispute whenever Jensen Huang talks about China is whether he truly believes the export controls are pointless or is merely speaking out of self-interest. China was one of the company's largest markets even with chip controls in place. In their earnings report on Wednesday night, NVIDIA disclosed that the ban on H20 units will cost the company $8 billion in revenue during Q2. That's around a 15% hit on their revenue projection of $45 billion.

It's also up from when the ban was first announced in April, when Nvidia believed that they would only lose out on $5.5 billion in revenue for the quarter. On the earnings call, Jensen said, China is one of the world's largest AI markets and a springboard to global success, with half of the world's AI researchers based there.

The platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to us. The H20 export ban ended our Hopper data center business in China. We cannot reduce Hopper further to comply. Taiwanese tech publication DigiTimes reported that both NVIDIA and AMD now have new down-rated chips in the manufacturing pipeline to comply with adjusted export controls.

Nvidia's chips will be based on the Blackwell architecture and will be named the B20, and Reuters reported that the new chip will be available at around a third of the cost of the H20. DigiTimes added that both companies are expected to begin sale of these chips into China from July.

Still, one of the reasons to think that this isn't just self-interest is that Nvidia is doing quite well from the rollout of their new Blackwell chips everywhere outside of China. Sales for the first quarter were up 69% compared to the previous year and beat expectations. Huang's statement said, global demand for Nvidia's AI infrastructure is incredibly strong. Now, the stock was up about 3% following earnings, so Wall Street clearly isn't too concerned about the loss of the Chinese market as long as the rest of the world is still buying.

Still, Huang restated his position on export controls for investors during the earnings call, commenting, The question is not whether China will have AI. It already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chipmakers from U.S. competition only strengthens them abroad and weakens America's position.

Now, part of the reason that the whole discussion of China has increased over the last six months is, of course, DeepSeek. When DeepSeek came out with models that could compare to the top American models that were theoretically trained at a fraction of the cost, and when they then released those reasoning models into a free public application that got tons and tons of consumer downloads,

Everyone sat up and took notice. Well, that company has now released an updated version of their R1 reasoning model. The model was posted to Hugging Face on Wednesday, along with an announcement from the company on WeChat. The announcement said that this was a minor upgrade, but didn't provide a description of the changes or significant technical notes. Model cards and benchmarks were added on Thursday, with DeepSeek claiming model performance approaching that of the leading closed reasoning models...

OpenAI's o3 and Google's Gemini 2.5 Pro. It's a big step up from the original version of R1, but there is a question of whether it's a significant improvement on the previous open source leader, QwQ. Importantly, for those who thought that Chinese labs were going to start pulling ahead and actually being state-of-the-art, this would suggest instead that we're still in a paradigm where those Chinese labs are capable of keeping up with the US leaders on a few months' lag, but are not yet able to pull ahead.

Overall, in the four months since R1 was first released, app downloads have fallen by about 75% from their peak in February. As of April, the latest month for which I could find numbers, DeepSeek seemed to have around 96 million monthly active users, roughly triple the January figure. Around a third of those are in China, with significant user bases in India and Indonesia as well. The latest upgrade is the top trending model on Hugging Face, but it's not significantly ahead of Mistral's new developer model or Google's Gemma small model.

Rather than a handoff from US to Chinese labs, then, the DeepSeek moment in retrospect seems more like a wake-up call. As I mentioned before, it was the first reasoning model available for free, which allowed it to wow casual AI users. Since then, Anthropic, Google, and OpenAI have all made reasoning models available in their free tiers.

And when it comes to the policy response, it's even more confused. Professor Ethan Mollick had interesting comments on this. He said,

If so, wouldn't the other national models catch up a few months later? I don't actually think many people in policy believe in a takeoff scenario, for what it's worth. And if you believe that the competition over AI is motivated by a sincere belief in takeoff, the lack of any other policies that would suggest preparation for rapid increases in AI ability to superhuman levels is somewhat confusing.

Now, the other area of the world that is key in the geostrategic story of AI is the Middle East, which sits in the middle between the US and China both geographically and otherwise. One psychodrama story out of that region, Elon Musk's feud with Sam Altman, continues to fester, with the Wall Street Journal reporting that Musk tried to derail a deal to build a gigantic data center in the UAE if xAI wasn't included.

Last week, OpenAI led a consortium of U.S. tech companies to partner with UAE firm G42 on a one-gigawatt supercluster in Abu Dhabi known as Stargate UAE. Sources told the Journal that on a group call with G42 officials, Musk warned that their plan had no chance of President Trump signing off on it unless xAI was included in the deal. Just before the president's tour of Gulf states, Musk learned that Altman would be on the trip and that a UAE deal was in the works.

White House sources said he became angry about it, complained that the administration wasn't treating all AI companies fairly, and invited himself to join the trip. Musk ultimately appeared alongside the president in Saudi Arabia with Altman, but didn't continue on to the UAE. Still, after Musk's complaints, Trump and other administration officials reviewed the deal terms and decided to move forward.

White House sources said that Musk was opposed to a deal that would seem to benefit Altman, but he appears not to have had as much sway over decision-making as he had thought. These behind-the-scenes details from the Journal seem to give some additional context on Musk's recent decision to step back from politics and his exit from the administration.

Reporting suggests that Musk had fallen out of favor with power brokers. Reuters stated his departure was quick and unceremonious. He did not have a formal conversation with Trump before announcing his exit, according to a source with knowledge of the matter who added that his departure was decided at a senior staff level. Still beyond the Elon story, the Wall Street Journal report also included previously unknown details about the OpenAI-led deal with the UAE.

They wrote that UAE officials had been lobbying the administration for months to gain access to a ton of AI chips and were willing to spend heavily to get them. G42 has reportedly agreed to pay all costs for the data center's construction, as well as pledging to fund a similar-sized project in the U.S. The initial 1 gigawatt of AI computing power is just the start of the planned 5 gigawatts to be installed in total at the site.

The plan is to make the facility available to host infrastructure for various U.S. companies, and xAI is viewed as a likely candidate for future sites at the sprawling data center hub. They're on the shortlist of companies with conditional approval to buy some of the 500,000 chips annually permitted to be exported to the UAE.

And one more from the Gulf region. Saudi Arabia's new state-owned AI company Humain is set to launch a $10 billion venture fund to pair with its aggressive data center strategy. The Financial Times reports that the fund will invest in startups across the U.S., Europe, and Asia.

Tariq Amin, Humain's CEO, said the firm was already in talks with American companies including OpenAI, Andreessen Horowitz, and xAI. He said Humain was also looking for a U.S. tech group to become an equity partner in the company's ambitious data center business. Amin declined to name specific potential partners but said, "We're in discussions with all of them. Some of them, which you will hear about very soon, are massive names in the data center segment." Humain has already inked deals worth $23 billion with U.S. tech companies including NVIDIA, AMD, Amazon, and Qualcomm.

They aim to establish 1.9 gigawatts of data center capacity by 2030 and add another 6.6 gigawatts in the following four years. Amin estimates the build-out will cost around $77 billion at current prices. He said, The world is hungry for capacity. There are two paths you could take. You take it slow, and we are definitely not taking it slow, or you go fast.

Whoever reaches the finish line first, I think, is going to secure a good chunk of the market share. Humain's stated goal is to have 7% of global compute by 2030. The $10 billion venture fund will also immediately put Humain in the mix as one of the larger funds in the space. You might remember back in April, Andreessen Horowitz was reported to be raising a $20 billion AI investment fund that would be one of the largest in the firm's history. So again, all in all, yet another indication of how the Gulf states are positioning and leveraging significant capital to become a player in the global AI sphere.

For now, though, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always. And until next time, peace.