People
E
Ethan Mollick
No available information on Ethan Mollick.
N
NLW
A well-known podcast host and analyst focused on cryptocurrency and macroeconomic analysis.
Topics
Ethan Mollick: I believe AI has enormous potential to improve work efficiency, and many employees have already begun using AI tools at work. However, companies have not yet fully captured this potential, because improving individual efficiency alone is not enough to improve overall organizational performance. To truly realize AI's value, companies need organizational innovation, rethinking incentives and workflows. Leadership is critical: leaders need to articulate an AI-driven future and guide employees in adapting to new ways of working. Companies should also encourage employees to participate in innovation and build AI benchmarks to measure how well AI is being applied. We need to build feedback loops between leaders, the lab, and employees that let us learn faster than our competitors and rethink fundamental assumptions about how work gets done.

NLW: I agree with Ethan Mollick that AI can improve work efficiency and that organizations need to change to fully capture AI's potential. However, I think his essay is somewhat dated because it does not fully account for the emergence of AI agents. Agents have fundamentally changed how organizations think about AI, and many organizations are actively exploring how to use agents to reshape their businesses. I also think employee upskilling still matters, but the focus should be on agent management rather than prompt engineering. Overall, I believe AI is developing even faster than Ethan Mollick describes, and organizations need to move quickly to stay competitive. The leadership challenge lies in deciding whether efficiency gains become layoffs or organizational growth, and that decision sets the direction of the AI strategy.

Deep Dive

Chapters
This chapter explores the gap between individual and organizational AI adoption. While many employees report significant productivity gains from using AI, companies haven't seen the same level of overall improvement. This discrepancy is due to a lack of organizational innovation in adapting to AI. To overcome this, organizations need to harness the power of Leadership, Lab, and Crowd.
  • AI boosts individual work performance, but doesn't automatically translate to organizational gains.
  • Companies are not capturing the full potential of AI due to lack of organizational innovation.
  • Organizational change requires rethinking incentives, processes, and the nature of work.

Shownotes Transcript

Today on the AI Daily Brief, how to make AI work at work. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Thanks to today's sponsors, KPMG, Blitzy.com, and Super Intelligent. And to get an ad-free version of the show, go to patreon.com slash ai daily brief.

Hello, friends. Happy Memorial Day weekend. It being a weekend, of course, we are doing a long read. And this week, we are back with the one and only Professor Ethan Mollick, who has published a new post called Making AI Work: Leadership, Lab, and Crowd. Now, this was a really interesting one for me. On the one hand, there's nothing in here that I disagree with. I think the advice is good, and I think organizations would do well to follow it.

And yet I find myself feeling like it was written for and about another time, a simpler time, frankly, in AI than one that I feel is actually kind of past. Let's get into it, though. Read some excerpts from this, and then we'll talk about it. As you can tell from my slightly haggard voice, this is actually me reading it. This is not AI.

Ethan writes, "Companies are approaching AI transformation with incomplete information. I think four key facts explain what's really happening with AI adoption. One, AI boosts work performance. How do we know? For one thing, workers certainly think it does. A representative study of knowledge workers in Denmark found that users thought that AI halved their working time for 41% of the tasks they do at work. And a more recent survey of Americans found that workers said using AI tripled their productivity.

Two, a large percentage of people are using AI at work. That Danish study from a year ago found that 65% of marketers, 64% of journalists, and 30% of lawyers, among others, had used AI at work. The study of American workers found over 30% had used AI at work in December 2024, a number which grew to 40% in April 2025. And of course, this may be an undercount in a world where ChatGPT is the fourth most visited website on the planet.

Now, editor's note here, this is NLW cutting in. We've also seen things like the KPMG Pulse Survey, which saw a massive jump in daily AI usage between Q4 and Q1, from 22% to 58%, in those organizations. And again, that's daily AI usage of things like co-pilots. So there's clearly a ubiquity to this stuff that's happening and emerging really fast.

Now back to Ethan again. Number three, there are more transformational gains available with today's AI systems than most currently realize. Deep research reports do many hours of analytical work in a few minutes. Agents are just starting to appear that can do real work. And increasingly smart systems can produce really high quality outcomes. Number four, these gains are not being captured by companies. Companies are typically reporting small to moderate gains from AI so far. And there is no major impact on wages or hours worked as of the end of 2024.

How do we reconcile the first three points with the final one? The answer is that AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors.

who develop generalized approaches that address the issues of many companies at once. That won't work here, at least for a while. Nobody has special information about how to best use AI at your company or a playbook for how to integrate it into your organization. Even the major AI companies release models without knowing how they can best be used. We're all figuring this out together, so if you want to gain an advantage, you are going to have to figure it out faster than everyone else. And to do that, you will need to harness the efforts of Leadership, Lab, and Crowd, the three keys to AI transformation.

Leadership. Ultimately, AI starts as a leadership problem, where leaders recognize that AI presents urgent challenges and opportunities. More leaders are starting to recognize the need to address AI. You can see this in the viral memos from the CEO of Shopify and the CEO of Duolingo. But urgency alone isn't enough. These messages do a good job signaling the why now, but stop short of painting the crucial, vivid picture. What does the AI-powered future actually look and feel like for your organization?

Workers are not motivated to change by leadership statements about performance gains or bottom lines. They want clear and vivid images of what the future actually looks like. What will work be like in the future? Will efficiency gains be translated into layoffs or will they be used to grow the organization? How will workers be rewarded or punished for how they use AI? You don't have to know the answer with certainty, but you should have a goal that you're working towards that you're willing to share. Workers are waiting for guidance, and the nature of that guidance will impact how the crowd adopts and uses AI.

An overall vision is not enough, however, because leaders need to start to anticipate how work will change in a world of AI. While AI is not currently a replacement for most human jobs, it does replace specific tasks within those jobs. Ethan then uses the examples of deep research changing legal research, AI changing how programming works, and big coming changes to marketing with things like Google's new Veo 3 model, which of course actually has the ability to have people talk in those ads.

Ethan continues, yet the ability to make a short video clip or code faster or get research on demand does not equal performance gains. To do that will require decisions about where leadership and the lab should work together to build and test new workflows that integrate AIs and humans. It also means fundamentally rethinking why you are doing particular tasks. Companies used to pay tens of thousands of dollars for a single research report. Now they can generate hundreds of those for free. What does that allow your analysts and managers to do? If hundreds of reports aren't useful, then what was the point of research reports?

I'm increasingly seeing organizations start to experiment with radical new approaches to work in response to AI. For example, dispersing software engineering teams, removing them from a central IT function, and instead having them work in cross-functional teams with subject matter experts and marketing experts. Together, these groups can vibe work and independently build projects in days that would have taken months in coordination across departments. And this is just one possible future for work. Leaders need to describe the future they want, but they also don't have to generate every idea for innovation on their own.

Instead, they can turn to the crowd and the lab. The crowd. Both innovation and performance improvements happen in the crowd, the employees who figure out how to use AI to help get their work done. As there is no instruction manual for AI, learning to use AI well is a process of discovery that benefits experienced workers. People with a strong understanding of their job can easily assess when the AI is useful for their work through trial and error in a way that outsiders cannot.

Experienced AI users can then share their workflows and AI use in ways that benefit everyone. Enticed by this vision, companies have increasingly been giving employees direct access to AI chatbots and some basic training in hopes of seeing the crowd innovate.

Most run into the same problem, finding that the use of official AI chatbots maxes out at 20% or so of workers, and that reported productivity gains are small. Yet over 40% of workers admit using AI at work, and they're privately reporting large performance gains. This discrepancy points to two critical dynamics. Many workers are hiding their AI use, while others remain unsure of how to effectively apply AI to their tasks, despite initial training. These are problems that can be solved by leadership and the lab.

Solving the problem of hidden AI or secret cyborgs is a leadership problem. Ethan then talks a lot about what we talk about all the time here on this show, which is basically all the good reasons that employees who are not trying to skirt around the rules are keeping their AI use secret: they want to be able to get the benefits, but they don't want to be punished. They don't want to have those tools removed from them. They don't want to have their work viewed as less legitimate.

Ethan continues, leadership can help. Instead of vague talks on AI ethics or terrifying blanket policies, provide clear areas where experimentation of any kind is permitted and be biased towards allowing people to use AI where it is ethically and legally possible. Even with proper vision and incentives, there will still be substantial numbers of workers who aren't inclined to explore AI and just want clear use cases and products. This is where the lab comes in.

The lab. As important as decentralized innovation is, there is also a role for more centralized efforts to figure out how to use AI in your organization. Unlike a lot of research organizations, the lab is ambidextrous, engaging in both exploration for the future, which in AI may just be months away, and exploitation, releasing a steady stream of new products and methods. Thus, the lab needs to consist of subject matter experts and a mix of technologists and non-technologists.

Fortunately, the crowd provides the researchers, as those enthusiasts who figure out how to use AI and proudly share it with the company are often perfect members of the lab. Their job will completely or mostly be about AI. You need them to focus on building, not analysis or abstract strategy. He then gives a set of things that the people in the lab might build. For example, he writes, take prompts and solutions from the crowd and distribute them widely very quickly. Build AI benchmarks for your organization. And this one, I think, is incredibly important, and we'll come back to it in a moment.

But he also says, go beyond benchmarks to build stuff that doesn't work yet. What would it look like if you used AI agents to do all the work for key business processes? Build it and see where it fails. Then when a new model comes out, plug it into what you build and see if it's any better. If the rate of advancement continues, this gives you the opportunity to get a first glance at where things are headed and to actually have a deployable prototype at the first moment AI models improve past critical thresholds. Lastly, he writes, build provocations. Many people haven't truly engaged with AI's potential.

Demos and visceral experiences that jolt people into understanding how AI could transform your organizations or even make them a little uncomfortable have immense value in sparking curiosity and overcoming inertia. Show what seems impossible today but might be commonplace tomorrow. His conclusion, re-examining the organization. The truth is that even this framework might not be enough. Our organizations, from their structures to their processes to their goals, were all built around human intelligence because that's all we had.

AI alters this fundamental fact. We can now get intelligence of a sort on demand, which requires us to think more deeply about the nature of work.

When research that once took weeks now takes minutes, the bottleneck isn't the research anymore. It's figuring out what research to do. When code can be written quickly, the limitation isn't programming speed. It's understanding what to build. When content can be generated instantly, the constraint isn't production. It's knowing what will actually matter to people. And the pace of change isn't slowing. Every few months or weeks or days, we see new capabilities that force us to rethink what's possible.

The models are getting better at complex reasoning, at working with data, at understanding context. They're starting to be able to plan and act on their own. Each advance means organizations need to adapt faster, experiment more, and think bigger about what AI means for their future. The challenge isn't implementing AI as much as it is transforming how work gets done. And that transformation needs to happen while the technology itself keeps evolving. The key is treating AI adoption as an organizational learning challenge, not merely a technical one.

Successful companies are building feedback loops between leadership, lab, and crowd that let them learn faster than their competitors. They're rethinking fundamental assumptions about how work gets done. And critically, they're not outsourcing or ignoring this challenge. The time to begin isn't when everything becomes clear. It's now while everything is still messy and uncertain.

The advantage goes to those willing to learn fastest.

KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients at www.kpmg.us slash AI. Again, that's www.kpmg.us slash AI.

Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context,

which if you don't know exactly what that means yet, do not worry, we're going to explain and it's awesome. So Blitzy is used alongside your favorite coding co-pilot as your batch software development platform for the enterprise. And it's meant for those who are seeking dramatic development acceleration on large-scale code bases. Traditional co-pilots help developers with line-by-line completions and snippets,

But Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch build millions of lines of high-quality code for large-scale software projects. So then whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises dramatic velocity improvement.

To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest quality code to the enterprise in batch. Projects that would normally require dozens of developers working for months can now be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever.

If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press get started and start using the product right away.

Today's episode is brought to you by Super Intelligent and more specifically, Super's Agent Readiness Audits. If you've been listening for a while, you have probably heard me talk about this. But basically, the idea of the Agent Readiness Audit is that this is a system that we've created to help you benchmark and map opportunities for your business.

in your organizations where agents could specifically help you solve your problems, create new opportunities in a way that, again, is completely customized to you. When you do one of these audits, what you're going to do is a voice-based agent interview where we work with some number of your leadership and employees.

to map what's going on inside the organization and to figure out where you are in your agent journey. That's going to produce an agent readiness score that comes with a deep set of explanations, strengths, weaknesses, key findings, and of course, a set of very specific recommendations that we can then help you find the right partners to actually fulfill.

So if you are looking for a way to jumpstart your agent strategy, send us an email at agent at besuper.ai and let's get you plugged into the agentic era.

Okay, so there is a ton in this piece that I very much agree with. And Ethan and I, I think, probably have a pretty similar experience when it comes to how we're judging this. He is talking to a huge number of enterprises across probably a lot of different sizes and industries. And at this point, Superintelligent talks to about 20 different companies a day between all of us. So we are also getting a similarly diverse earful about how this is all happening and playing out.

So what are some of the things I agree most strongly on? First of all, it cannot be overstated how much there truly is a leadership challenge here. The line that I think is by far the most important: will efficiency gains be translated into layoffs, or will they be used to grow the organization? This is both the central question that employees need answered to fully commit to whatever AI strategy you want them to commit to, and the most critical question for shaping what your AI strategy is actually going to look like.

Are you in the camp of efficiency AI, where all you care about is doing the same with less? Or are you in the camp of opportunity AI, where you're thinking about growth and all the different things that you could do that were never possible before? As I have said before, I am very sure that many organizations will by default opt for efficiency AI and even for a short time be rewarded by Wall Street and short-term investors who like the fact that their costs are going down.

I also think it is equally inevitable that those organizations will be mopped all over the map by organizations who instead go the opportunity route and understand that there is a much bigger change here than just writing more marketing copy,

or paying less on your legal bills or customer service. The organizations who think beyond simple efficiency are going to absolutely wipe the floor with those who do not. I also think that broadly, Ethan's articulation of the need for a combination of leadership, bottom-up, and top-down centralized initiatives is correct. You really do need all these pieces working together in concert.

I think one small piece, which is actually incredibly valuable, is his notion of building AI benchmarks for your organization. As you'll see in just a minute, I disagree with some of his assertions around how little room there is for outside help here. But I think that he's absolutely right that each organization will to some extent be unique and differentiated. And the best way to keep track of how AI is working for you is to create your own benchmarks, even if it's just as a complement to other management systems.
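Since the benchmark idea is concrete enough to sketch, here is a minimal toy illustration of what an organization-specific AI benchmark harness could look like. Everything here is hypothetical: the `run_model` callable stands in for whatever model API your organization uses, and the task and grader are placeholder examples, not anything from Ethan's post.

```python
# Toy sketch of an internal AI benchmark harness (all names hypothetical).
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkTask:
    name: str                       # e.g. "summarize_policy"
    prompt: str                     # the org-specific task given to the model
    grade: Callable[[str], float]   # maps model output to a score in [0, 1]

def evaluate(run_model: Callable[[str], str],
             tasks: list[BenchmarkTask]) -> dict[str, float]:
    """Run every org-specific task through one model and collect scores."""
    return {task.name: task.grade(run_model(task.prompt)) for task in tasks}

# Placeholder task with a trivial keyword-based grader; real graders would
# encode whatever "good output" means for your organization.
tasks = [
    BenchmarkTask(
        name="summarize_policy",
        prompt="Summarize our leave policy in two sentences.",
        grade=lambda out: 1.0 if "leave" in out.lower() else 0.0,
    ),
]

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call, so the harness runs offline.
    return "Employees accrue paid leave monthly; requests go through HR."

print(evaluate(fake_model, tasks))  # {'summarize_policy': 1.0}
```

The point of a harness like this is less the code than the habit: when a new model ships, you plug it in as a different `run_model` and compare the score dictionaries, which is the same plug-it-in-and-see pattern Ethan describes for the lab.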

So where do I start to diverge from this piece? I do have a small one with the line, nobody has special information about how to best use AI at your company or a playbook for how to integrate it into your organization. This is a quibble because what Ethan is saying here is that organizations cannot simply outsource what is going to be a massive broad-based structural transformation to external partners. And with that, I agree.

The specific ways in which AI will impact your company are going to be really distinct to you, and they're going to involve just a lot of decisions: where you prioritize, which areas of efficiency are most important, how you reinvest the gains from those efficiencies.

What new opportunities you want to seize, what areas you want to drive into, even more basic stuff like governance and policies: these are ultimately decisions that need to be made internally. However, there are lots and lots of folks now who are getting better at helping provide exactly the sort of playbook that he's saying doesn't exist. The way that I would reframe this is that if you're looking for a playbook that has the answers, Ethan is right.

If, on the other hand, you're looking for a playbook that helps you ask the right questions, there's a lot of that out there. Now, of course, you're welcome or even encouraged to take my opinion on this with a grain of salt, given that I have a product in the agent readiness audit that is exactly this. But still, I think he's slightly overstating the case for emphasis, trying to orient people towards the core idea that they have to take responsibility for these decisions themselves. Still, the broader issue that I have is, again, not one with the piece per se.

It's that this feels like a 2024 essay and we're now living in a 2025 world. And if you're looking for a simple way to understand what has changed, it is, at the risk of being predictable, agents. Agents have fundamentally shifted the conception, in my experience, of most organizations when it comes to how they think about AI. So here's what agents are shifting.

For two years, we've had this bottoms-up sort of discourse where organizations have been thinking about this sort of question of how to capture the efficiency gains being won by individuals who are using co-pilot style tools. But inherently, this is an incredibly limited view of what AI is going to do. An individual being much more efficient in their job is powerful.

and it's going to be a part of the landscape for a little while here. However, if you are a regular listener to this show, you'll know that my base case is that effectively all of the work that we do now will be done by agents in the future. Our job will be to coordinate, to orchestrate, to manage those agents, to set them on specific tasks, to figure out how to get the most out of them, and to do things that were previously completely impossible. I believe in a way that is being understated in this piece,

that the leadership of many organizations is starting to grok this. I think there has been a radical snapback from thinking about AI in bottoms-up terms to top-down terms. One of the ways that I've seen this manifest

is that while for much of 2024, we saw the sort of employee upskilling that had previously been the domain of sidelined departments like L&D move into the mainstream and become a key part of the AI conversation, the second that agents emerged on the scene in any sort of plausible way, it was kicked right back down to a secondary priority as leadership tried to figure out, on a more fundamental and core level, how agents were going to remake the nature of their organization.

So much so, in fact, that I think there's an overcorrection in many organizations and people are underappreciating the value of also thinking about employee upskilling. I also think that employee upskilling is going to take a different slant than what it is now. It's going to be less about prompting and more about agent management, but that's a conversation for another time. The point is that I actually think that there is a seismic shift happening right now and that relying on statistics from 2024, even though we're only in May of 2025, is just woefully out of date.

I think that we have had an inflection point over the last six months that demarcates before and after in a way that is hard to overstate. I think that right now, this is just a sense. It's not necessarily yet embodied in numbers, but I think you're going to see it fast.

For example, this line: most find that the use of official AI chatbots maxes out at 20% or so of workers. That's just not true anymore. Again, I mentioned the KPMG survey, which found a jump from 22% to 58% in daily use. It's just shifting much more quickly than this would make it seem.

Now, stylistically, I assume that Ethan is writing not just to educate, but also to influence and is making decisions around how he wants to frame things in order to make organizations feel both a sense of urgency, but also like they are empowered to do something.

I have a different tack. I think if your organization is still in this mode of nudging down the line and just sort of squinting around to see what uses your employees are finding for AI, you are not just behind, you are dangerously behind. Now, danger is subjective. If every organization is in that same spot, then fine. We all evolve at the same time. Enterprise inertia was always going to be the big constraint, not technology.

The problem is that some organizations are not moving that slowly. In fact, the organizations that we see and that come to our door every day are trying to move extremely fast and embrace big seismic shifts. They are not slow-walking. They are not talking about pilots. They are talking about the complete reorganization of how they work. And that, I believe, is where the mindset needs to be. Just to wrap up.

I think this is a great piece. I think that any organization that read this, embraced it, and operated on the basis of this would be in the top quartile of performers. I'm just saying it's going even faster than Ethan is making it seem here. And I see that accelerating, not slowing down. With that exciting and/or ominous note, depending on your perspective, I will leave you to the rest of your Memorial Day weekend if you're in the U.S. Appreciate you listening or watching as always. And until next time, peace.