People
Satya Nadella
In the role for nearly a decade, he has successfully transformed the company through innovation and collaboration and driven substantial growth in its value.
Podcast Host
Podcast host focused on English learning and finance topics; organized an English-learning camp and has explored the relationship between Bitcoin and the US dollar in depth.
Topics
Satya Nadella: My definition of AGI is different from other people's. I believe the true test of AGI is not benchmarks or some arbitrary level of task replacement, but global economic growth. Growth rates in developed economies are currently flattening, and for AGI to have a major impact, it needs to deliver unprecedented growth. The long-term beneficiaries of AGI will be the businesses that use AI to raise productivity, not the tech companies providing the AI. Cognitive labor is not static; it constantly evolves. Today's cognitive labor can be automated, but the process of automation creates new cognitive labor. Even if AGI can complete higher-order cognitive tasks, still-higher-order new tasks will emerge. Human rationality is bounded; AGI can act as a cognitive amplifier, but it will not eliminate the human role entirely. I personally have already started using around 10 AI agents to handle small tasks; in the future we will need new interfaces to manage large numbers of AI agents, rather than a simple chat interface. The rapid development of AI will bring social and legal challenges that need to be handled carefully. AI deployment needs to account for social stability and the value of labor, not focus only on returns to capital. Legal frameworks need to adapt to AI's development in order to ensure effective deployment, and deployment requires resolving questions of legal liability; responsibility cannot simply be pushed onto the AI. Compute capacity built over the next few years may end up in oversupply, but that will ultimately lower prices and enable more applications. Competition in AI is not only about training models but also about deploying and applying them. AI infrastructure buildout needs to balance supply and demand rather than expand blindly. The AI market will not be winner-take-all; enterprise customers will choose multiple suppliers. Open-source models will act as a check on closed-source models, similar to Linux's effect on Windows. AI competition is not just about technology but also about business-model innovation.

Podcast host: Growth rates in developed economies are flattening; for AGI to have a significant impact, it must deliver growth far beyond anything seen before. The long-term beneficiaries of AGI will be the businesses that use AI to raise productivity, not the tech companies providing it. Early applications of AI agents may lead to large layoffs, but they will ultimately drive corporate transformation and innovation. Companies that merely cut costs with AI will not necessarily lead the future; the truly successful ones will put the savings toward innovation and growth. Companies should use AI's cost savings for transformation and innovation, not just stock buybacks. In the future, the model of human-machine collaboration will be each person managing a team of agents, enabling far more efficient work. AI will not fully replace human cognitive labor; the way we work will shift, and that takes time and adaptation. AI is an augmentation tool, and it requires fundamentally redesigning existing workflows. AI deployment will not happen overnight, and society already has the capacity to handle the potential risks.

Deep Dive

Chapters
Microsoft CEO Satya Nadella's recent interview with Dwarkesh Patel challenges the conventional understanding of Artificial General Intelligence (AGI). Nadella argues that AGI's impact should be measured by real economic growth, not benchmarks, and that the definition of cognitive labor is constantly evolving. He believes that while current cognitive tasks can be automated, new ones will emerge.
  • Nadella defines AGI's success by global economic growth, not benchmarks.
  • He challenges the static definition of cognitive labor, arguing it constantly evolves.
  • Nadella highlights the concept of "bounded rationality" in human decision-making, suggesting AI can act as a cognitive amplifier.

Shownotes Transcript

Translations:
Chinese

Today on the AI Daily Brief, why Microsoft CEO Satya Nadella thinks differently about AGI. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. ♪

Hello, friends. We definitely have something a little bit different today. Microsoft CEO Satya Nadella just gave a really interesting interview about a wide-ranging set of topics. And given his prominence in the AI space, it has caused a ton of chatter. So what we're going to do today is dig into that interview, give the highlights, talk about the discussion around it. And in fact, that will be the main episode. We'll be back with our normal format of headlines and main episode tomorrow. But for now, let's listen to what he has to say.

Welcome back to the AI Daily Brief. It's not often that we have another podcast as the context for an entire main episode, but this one I think you'll see absolutely deserves it. For those of you who don't know, the Dwarkesh podcast has emerged over the last couple of years as the new thinking person's interview podcast.

It's in the long-format interview style of a Rogan or a Lex Fridman, but with extremely high-level, extremely intelligent guests, and with Dwarkesh Patel, the host, having done a ton of research. Now, Dwarkesh is really young. He's only a couple of years out of school, but he's built a name for himself by just doing way more background research

and frankly, just giving way more of a crap than other podcast hosts do. He's also very, very smart and can engage with people on a really intellectual level. Anyways, there's a lot to love about the Dwarkesh podcast. It's a great example of why my main shows are not interviews. It's a totally different type of format that really can be done excellently. Earlier this week, Dwarkesh posted, "Kids, don't underestimate the power of a cold email," sharing the background story of how he nabbed an interview with Microsoft CEO Satya Nadella.

Now, Nadella is obviously one of the key figures in the AI space by virtue of his leadership of Microsoft, by virtue of Microsoft's partnership with OpenAI. And this could have been a really basic, banal, say the same things you've always said kind of interview. And boy, was it not.

Media outlets have been reporting on this because Satya gets into everything from AGI to a bubble in CapEx and lots more. So what we're going to do today is actually dig entirely into the highlights from this interview. Of course, I highly recommend that you check out the whole thing. I will include a link to it in the show notes. Now, part of the context for this, and the reason it seems like they agreed, other than Satya just being a fan of Dwarkesh, is that there were a couple of big announcements from Microsoft,

specifically around some quantum computing advances, and that created the news context or reason for the interview. But really, over the course of the one hour, it was a much more big-picture, trend-type conversation than just trying to push those two announcements. Now, easily the comments that are getting the most attention in online discourse are around AGI. For Satya, the test of AGI will not be found in benchmarks or some arbitrary level of task replacement, but is instead about global growth.

This deserves some background for those of you who aren't familiar with growth statistics. Growth rates in the developed world are trending at around 2%, or basically zero on an inflation-adjusted basis. And so the idea that we would have to see 10% growth from AGI means a total shift in how the global pattern has gone.

He drew the comparison explicitly between the AI revolution and the industrial revolution, noting once again that the real marker of success is not about hitting some random benchmark, but about making a paradigm-shifting impact on the global economy. In that vein, he argued that the real beneficiaries of AI in the long run will not be the tech companies providing it, but instead all the firms that harness AI to drive massive productivity gains.

Towards the end of the podcast, the conversation once again circled back to AGI, with Dwarkesh trying to nail down whether Nadella believes that AGI will be achievable. Putting his own terms on it, Dwarkesh defined AGI as, quote, a thing which automates all cognitive labor, like anything anybody can do on a computer. Nadella responded, this is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?

Nadella claimed that today's cognitive labor could be automated, but flagged there would be new cognitive labor that arises from the process. He continued, That's why I make this distinction, at least in my head. Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email. That level of agent, Nadella proposed, would free up the person for a higher level of cognitive labor, but not eliminate their role entirely.

Dwarkesh then supposed that AGI could get to that second level of doing the higher-order cognitive tasks. Nadella claimed that there would be a third layer. He drew on the Herbert Simon point that humans exist in a state of bounded rationality.

That is, no person has the bandwidth to fully optimize their experience. Instead, they settle for an adequate outcome. Satya supposed, if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.

Now, let me take a pause here. This is something I've talked a lot about on this show and on other people's podcasts as well. My strong sense of what the pattern is going to be, hold aside AGI, is just this: as agents get more prolific and more able to do more of the things that we do today, I think that inevitably there will be a phase one where companies cut a huge amount of human labor, because the productivity gains that they're going for, quote unquote, are just in the form of doing the same stuff they do now, but cheaper.

Part of why enterprises are so attracted to agents is that their ROI is so implicit. They work, they do tasks that humans do now for cheaper. We know how markets react. We know that Wall Street thinks in terms of quarters, not years and decades. And so, of course, companies that are able to have the same output with half the number of inputs from a cost perspective are going to be rewarded, at least in the short term. However, I also believe that those companies will not be the ones who write the story of the future.

If you do save half of your budget to get the same output, there is no one deterministic way in which you have to apply those savings. Sure, you could just buy back stock, as some will do and play the financial engineering game, but others will reinvest those savings in transforming what they actually do, innovating and offering new types of products, offering a higher level of service that wasn't possible before.

And this, I believe, is where you'll start to see a different relationship between human and agents in the hybrid workforce. This is when I think you'll start to see companies experiment with what it looks like to make everyone a manager of a team of agents. To use Satya's terminology, the cognitive labor for everyone will shift into a more managerial type of experience where everyone has access to a small workforce or, frankly, army of agents to do whatever they had been doing before but at a much higher scale.

Obviously, this is not going to be an easy transition. It requires a totally different mindset, a totally different skill set, and there's going to be fits and starts and challenges along the way. But the point is that I simply do not believe that the replacement of all of what is currently cognitive labor or knowledge work will mean that there is no cognitive labor or knowledge work left. The only question is how long the transition takes and how fast companies can expand their field of view to have big enough ambitions to take advantage of the new capacity that people have when they're paired with teams of agents.

Now, bringing it back to this interview, the notion that AI is an amplification tool was threaded throughout the conversation. And indeed, this question of agents was discussed throughout the conversation. Satya views AI enhancement as similar to the shift brought about by personal computers and communications throughout the 1980s and 1990s. Basically, he pointed out that these advances didn't simply update existing workflows, but required fundamental redesigns of workflow. He said that is what needs to happen with AI being introduced into knowledge work.

When we think about all these agents, the fundamental thing is there's new work and workflow, a la what I was just saying. By way of example, he discussed preparing for this podcast appearance. He used Copilot to gather the documents he would need to familiarize himself with, generate a summary, and even put it in a podcast format. He then shared those resources with his team.

Summing up, he said, so the new workflow for me is I think with AI and work with my colleagues. Another comparison that he made was the lean process being introduced in manufacturing. The approach was pioneered by Toyota in the post-war period and involved running factories with a minimum stock of parts and only finishing enough goods to fulfill orders, an approach that became known as just-in-time manufacturing, focused on cutting out all waste and bottlenecks within the process. Satya said that's what's going to come to knowledge work. This is like lean for knowledge work.

Adding some heft to my idea that everyone's going to have armies of agents running around, Satya explained that he now has around 10 agents working on small tasks for him, things like sorting email and drafting responses. And so he's shifted. Instead of a bloated inbox waiting for him every morning, he now has a list of agentic tasks waiting for approval.

Nadella commented, I feel like there's a new inbox that's going to get created, which is my millions of agents that I'm working with, which will have to invoke some expectations for me, notifications for me, ask for instructions. And it definitely feels like this is a problem that Microsoft is aiming to solve. Satya continued, What I'm thinking is that there's a new scaffolding, which is the agent manager. It's not just a chat interface. I need a smarter thing than a chat interface to manage all the agents and their dialogue. That's why I think of this Copilot as the UI for AI as a big deal. Each of us is going to have it.

So basically think of it as there is knowledge work and there's a knowledge worker. The knowledge work may be done by many, many agents, but you still have a knowledge worker who is dealing with all the knowledge workers. That's the interface that one has to build.

We could have a whole separate conversation, and maybe this is a podcast that I'll do at some point about what I see, or frankly, what we're seeing across Superintelligent as the new infrastructure for agents and for the hybrid workforce that they implicate that needs to be built. But this sort of agent management tool is part of what will help people transition from thinking about themselves as the end worker who does things to the manager who coordinates things.

Now, as big and different as this vision for the future is, Nadella also pointed out that we are all vastly underestimating, effectively, the human friction that's going to be a part of this transformation. For example, he supposed that AI could automate something like 60% of global work. He said, I think that in order to have a stable social structure and democracies function, you can't just have a return on capital and no return on labor. We can talk about it, but that 60% has to be revalued.

He gave the example of care and nursing work as something that could become highly valued, adding, Ultimately, if we don't have a return on labor and there's meaning in work and dignity in work and all of that, that's another rate limiter to any of these things being deployed.

The other big rate limiter that Nadella brought up was the legal system. We're talking about all the compute infrastructure, but how does the legal infrastructure evolve to deal with this? The entire world is constructed with things like humans owning property, having rights, and being liable. If humans are going to delegate more authority to these things,

then how does that structure evolve? Until that really gets resolved, I don't think just talking about the tech capability is going to happen. He actually connected the dots between the legal challenge and the alignment challenge. He said, you cannot deploy these intelligences until and unless there's someone indemnifying it as a human.

This AI takeoff problem may be a real problem, but before it is a real problem, the real problem will be in the courts. No society is going to allow for some human to say, AI did that. And indeed, this was another one of the big themes of the conversation, that AI deployment doesn't happen in a vacuum. Basically, Satya fundamentally doesn't believe advanced AI will suddenly come into existence and proliferate unchecked.

He pointed out that society has already developed software capable of producing extreme harm, but guardrails are developed to mitigate the risks. "We don't just write software and then just let it go," he said. "You have software and then you monitor it. You monitor it for cyber attacks, you monitor it for fault injections and what have you."

Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001.

Centralized security workflows complete questionnaires up to 5x faster and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms.

agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Hey, listeners, want to supercharge your business with AI?

In our fast-paced world, having a solid AI plan can make all the difference. Enabling organizations to create new value, grow, and stay ahead of the competition is what it's all about. KPMG is here to help you create an AI strategy that really works. Don't wait, now's the time to get ahead.

Check out real stories from KPMG of how AI is driving success with its clients at kpmg.us slash AI. Again, that's www.kpmg.us slash AI. Now, back to the show. All right, so we had this whole big section and parts of this conversation that were about AGI and agents in the workforce, the newly hybrid workforce.

What sorts of non-technological barriers will slow down the rate of change? Maybe the other most discussed part of the interview, though, was around capital expenditure and the risk that data centers will be overbuilt. And interestingly, and this is what got people really chattering, Nadella acknowledged that the infrastructure will be overbuilt, just like it was during the dot-com era and even going right back to the construction of railways. He commented...

I'm so excited to be a leaser because I build a lot, I lease a lot. I'm thrilled that I'm going to be leasing a lot of capacity in 2027, 2028, because I look at the builds and I'm saying, this is fantastic. The only thing that's going to happen with all the compute builds is the prices are going to come down.

He brought in his experience during the cloud computing build-out as the industry transitioned away from on-premise servers and said, once we started putting servers in the cloud, suddenly people started consuming more because they could buy it cheaper and it was elastic. And they could buy it as a meter versus a license and it completely expanded.

With the DeepSeek news cycle still front of mind, Nadella made the point that training a model is only half the battle. He said, It's not about building compute. It's about building compute that can actually help me not only train the next big model, but also serve the next big model. Until you do those two things, you're not going to be able to really be in a position to take advantage of your investment. So that's kind of where it's not just a race to building a model, it's a race to creating a commodity that is getting used in the world. You have to have a complete thought, not just one thing you're thinking about.

Now, with all of this, when Dwarkesh suggested it might be more logical for Microsoft to increase their CapEx to $800 billion rather than $80 billion if they believe the opportunity is so vast, Nadella said, "...the classic supply side is, hey, let me build it and they'll come. That's an argument, and after we've all done that, we've taken enough risk to go do it. But at some point, the supply and demand have to map. That's why I'm tracking both sides of it. You can go off the rails completely when you're hyping yourself with the supply side versus really understanding how to translate that into real value for customers."

The interpretations for this were fairly dramatic. Prakash, the Adapai account, writes, Satya is out. TLDR, Microsoft doesn't believe in AGI, wary of overinvestment, OpenAI partnership is over. Adapai calls out, very negative on CapEx spend from Microsoft.

Lyn Alden writes,

I think the way that people interpreted this CapEx conversation is interesting. I'm not so sure it's as dramatic as people are making it out, but a lot of people had a similar reaction to this.

Ultimately, another big conclusion was Nadella's belief that AI won't be a winner-take-all market. He recalled comments during the cloud build-out that AWS had an insurmountable lead, rendering Microsoft Azure a pointless endeavor. Nadella said, "Consumer markets can sometimes be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you've got to be one of the multiple suppliers." He extended this thought to the market for models.

His belief seems to be that open source will act as a governor on closed-source models, much like Linux successfully checked Windows during the 90s and 2000s. Regarding the commoditization of AI and the lack of moats, Nadella commented, at scale, nothing is commodity. He pointed out that every tech company in the world was capable of racking servers, meaning that cloud storage was definitionally a commodity, but that the real moat was in hyperscaling. In other words, spinning up a cloud services business was trivial, but serving cloud storage globally at high speeds with minimal downtime was an era-defining challenge.

He seems to think a similar paradigm will play out in AI, stating, in the model layer, models need ultimately to run on some hyperscale compute. So that nexus is going to be there forever. It's not just the model. The model needs state. That means it needs storage. It needs regular compute for running these agents in the agent environments. Ultimately, he made it clear that the AI race in his mind is not nearly as simple as it's being perceived. You have to not only get the tech trend right, you also have to get where the value is going to be created with that trend. These business model shifts are probably tougher than even the tech trends.

I think part of what makes this moment so interesting is that the deeper you get into the AI space, the more convinced you are that we're still underestimating the magnitude of the potential disruption. I think Satya is correct to point out that there are going to be non-technology-based rate limiters, societal barriers, legal barriers, human inertia, in other words, that slow down what the technology could do. But we are still talking about a fundamental shift where the work in a decade really just does not look like the work today.

It's almost impossible for most people to actually understand or grok just how big of a difference that's going to be. And that's why these types of conversations are so valuable. They help us understand at least the thoughts of some of the leaders in the space who are pushing things forward. Anyway, great conversation, highly worth the time to go check it out. For now, that is going to do it for today's AI Daily Brief. Until next time, peace.