Topics
Host: In 2024, generative AI went from an exciting experiment to a necessity for enterprise growth. Many employees use AI secretly at work, which creates a challenge for companies because new efficiencies and processes cannot be disseminated effectively. Leadership is critical: leaders need to encourage employees to use AI, set clear rules and guardrails, and articulate how AI will help employees rather than replace them. High-performing organizations typically establish a dedicated AI body that reports directly to senior leadership and sets out both rules for AI use and a vision for where AI is going. In 2024, most enterprises were still in the experimentation phase of AI adoption and had not yet achieved significant return on investment (ROI); many were stuck in stalled AI pilot projects. Enterprises urgently need an AI enablement ecosystem to support the adoption of rapidly changing AI technology. AI is an ongoing transformation, and enterprises must continually adapt to new AI technologies and processes. While external consulting can help with AI strategy, enterprises ultimately need to own their AI capabilities themselves. Enterprises are growing more confident in their internal development capabilities, but over the long run, specialized third-party software providers will retain the advantage. Large language models (LLMs) themselves are not a long-term competitive moat; what matters is how AI is integrated and how thoughtful the surrounding systems are. In 2024, many enterprises focused on building the necessary AI infrastructure, including internal development capabilities, enablement ecosystems, and data readiness. In 2025, AI agents will become the focus, and enterprises need to prepare for them. Buy-in from departments such as legal, compliance, and security is critical to successful AI adoption. Enterprises need to move quickly to stay ahead in the AI race, keeping an eye on both their own development and their competitors while focusing on empowering their own teams and making decisions efficiently. A slowdown in LLM progress is no reason for enterprises to slow their AI strategies. Today, AI is mainly used to replace existing workflows, but in the future it will drive innovation. AI agents will be an important direction for future AI applications, and enterprises should actively experiment with and explore them. Enterprises need a culture and mindset that adapts to change in order to succeed in the AI era. They should view AI as a technology that creates opportunity, not merely one that improves efficiency, and focus on how to use AI to innovate and create greater value.

Deep Dive

Key Insights

Why were employees using AI secretly in the workplace in 2024?

Employees were using AI secretly because they feared being told they couldn't continue using it, preferring the efficiency and benefits of AI over traditional methods.

What challenges did 'secret cyborgs' present for companies?

Secret cyborgs hindered the dissemination of new efficiencies and processes, preventing leaders from understanding organizational progress and making strategic decisions effectively.

Why does leadership matter in AI adoption within enterprises?

Leadership sets the tone for AI use, encourages experimentation, and articulates a vision that includes employees, helping to alleviate fears of job displacement.

What were the characteristics of organizations that excelled in AI adoption in 2024?

High-performing organizations had dedicated AI bodies, C-level leadership involvement, and a clear vision for how AI would transform the organization while including current employees.

Why did 2024 not become the year of ROI for most enterprises in AI?

2024 remained a year of experimentation and iteration, with most organizations still figuring out how to derive value from AI tools through trial and error.

What is 'pilot purgatory' in the context of enterprise AI?

Pilot purgatory refers to the phenomenon where AI pilots show promise but fail to scale, leaving enterprises stuck in a cycle of starting but not completing AI projects.

Why is there a need for an enablement ecosystem in enterprise AI?

Enterprises require new systems to understand, suggest, track, and scale AI experiments, as current systems are inadequate for the rapid pace of AI innovation.

What trend did enterprises show in building vs. buying AI software in 2024?

Enterprises shifted from buying 80% of their software in 2023 to building 47% in 2024, reflecting growing confidence and a desire to create custom solutions for their unique needs.

Why are there no significant moats in AI models in 2024?

The rapid advancement of smaller, efficient models has leveled the playing field, making it more about integration and systems than specific technology choices.

What infrastructure changes did enterprises focus on in 2024?

Enterprises prioritized building AI capabilities, improving enablement ecosystems, and enhancing data readiness to maximize the value of generative AI tools.

What is the significance of 'agents' in enterprise AI for 2025?

Agents will revolutionize how enterprises operate by enabling employees to manage virtual teams, leading to new levels of efficiency and innovation in various functions.

How does buy-in from various departments help in AI adoption?

Buy-in from legal, compliance, and security teams ensures that AI initiatives address potential challenges early, fostering internal advocacy and smoother implementation.

Why should enterprises not slow down their AI strategies despite LLM progress plateaus?

Even if AI capabilities plateau, it would still take a decade to fully integrate AI into workflows. Enterprises should use this time to catch up and prepare for future advancements.

What is the difference between efficiency tech and opportunity tech in AI?

Efficiency tech focuses on doing the same with less, while opportunity tech enables enterprises to innovate and create new possibilities, fundamentally transforming their operations.

Chapters
Many employees are secretly using AI tools at work without disclosing it to their employers. This creates challenges for companies in terms of process dissemination, organizational learning, and strategic decision-making. Leadership plays a critical role in addressing this issue by creating a supportive environment for AI adoption.
  • 75% of knowledge workers used AI, but 78% didn't discuss it at work
  • Employees don't want to go back to old processes after using AI
  • Leadership must encourage AI use and provide clear guidelines

Shownotes Transcript


To close this year out, I'm sharing 17 reflections on the state of enterprise AI. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.

Hello, friends. Back with another end of year episode. One of the things that I do most is talk to big companies about their AI journey. That happens, obviously, in the context of this podcast, where a lot of you listeners are thinking about AI inside your companies. But of course, it's also what we are building our Superintelligent business around, helping companies figure out how to adopt AI more effectively and more quickly.

These observations are in no particular order. They're not ranked or anything like that. They're simply things that stand out as I think back on the year that was when it comes to AI in the enterprise. I'll be interested to hear how many of these resonate with you and how you're thinking about them heading into the new year.

We kick off with a big theme from the beginning of the year, which is secret cyborgs. This was Ethan Mollick's term for a phenomenon where employees would be using AI at work, but not telling anyone about it. LinkedIn and Microsoft did a survey early in the year where they found that 75% of knowledge workers were using AI, but 78% of them weren't discussing it at work.

There were a variety of reasons behind that. And the one that resonated most with me was that they simply didn't want to be told that they weren't allowed to do that anymore. That basically, once you start using AI instead of your old non-AI enabled process, you simply don't want to go back.

Now, this, of course, presents a really big challenge for companies. With legions of secret cyborgs, there's no good mechanism for disseminating new efficiencies and processes across the organization. People don't benefit from what their colleagues are learning. There's no way for leaders to get a good picture of what's going on in their organization and make better strategic decisions. I think it's likely that this phenomenon got a little bit better over the course of the year, but it leads me directly into my second reflection,

Leadership matters. And I mean, bigly. Leadership matters in a number of ways when it comes to AI. First of all, the employees that are using AI need to know that it's not only okay, but encouraged. To the extent that there need to be ground rules and guardrails, that needs to be clearly articulated with an eye to helping them rather than hindering them.

Leadership matters in the sense that leaders need to be seen using AI to move recalcitrant employees off the bench. And in general, leaders need to set a tone that paints a vision for how the company is going to be in the future with AI. One of the big concerns that employees have is will the use of AI undermine their job in some meaningful way, either because what they do is seen as illegitimate or because they'll be viewed as replaceable. The only way to help fight against that is for leaders to articulate a vision that includes those employees using AI in the future.

One of the things we found over and over again with Superintelligent was that there was a very clear profile of organizations who were doing AI well. First, they had a body that was specifically dedicated to AI, and in particular, knowing how AI interacted with each different business unit or department and providing a coordinating function that allowed those different needs to flow together in some meaningful way.

On top of that, that body in the best performing organizations had direct C-level leadership. Which C-level didn't exactly matter. What mattered is that this was a priority from the very top of the organization. Finally, the highest performing organizations had started to try to articulate not only rules of the road for using AI, but a vision for how AI would transform the organization in a way that included the people that were there now.

One thing that leaders didn't get right about 2024, though: I think there was this nice idea that 2023 was going to be the year of experimentation and 2024 would be the year of ROI. This was, in the vast majority of cases, absolutely not how it played out. 2024 was still, in most cases, a fumbling, bumbling, experiment, iterate, try, fail, try again thing.

All with the knowledge and the belief that there was clearly something here. I think what makes AI different than some previous technologies we've had where the ROI wasn't immediately apparent is that it is so clear that AI is going to change how we do work.

In many cases, I think we're feeling like when we're not getting a ton of value out of AI, it's probably a usage issue with us, not with the tools themselves. I've seen lots and lots of examples of this, where even within the context of a tool, it just takes a ton of experimentation to figure out which ways of using it actually drive value.

Regardless, the vast majority of organizations did not get to figuring out ROI this year. And if you are among them, I don't think that should put a damper on your future experimentation, even if you'd like to start being able to better measure ROI in the year to come. Related, pilot purgatory absolutely exists right now. Big enterprises are littered with pilots that have started, shown promise, and then not exactly gone anywhere.

In fact, many organizations we see are so deep in this pilot purgatory that they're actually trying to create new systems for supporting AI adoption because what they have is clearly not working. Which brings me to my next point. There is a desperate need for an enablement ecosystem. Organizations right now are simply not equipped to adopt a technology that changes at the speed of AI at anywhere near the speed and scale that's required.

Organizations need new ways of understanding how people are working right now, suggesting new AI alternatives, tracking experiments and pilots, analyzing the results, and scaling what works. It's no less than a total overhaul in change management, performance management, learning and development. All of it is going to contribute to a fundamentally different system that hopefully starts to get organizations capable of integrating change at the speed that AI is creating it.

And something that's important, which we will come back to again later in this list, is that boy, is this not a one-time change. AI is an ongoing transformation. From here on out, there will always be a more technologically enabled, better process for how people are doing things than the one they're currently doing. The limitations and the barriers will be human, not technological. And so designing systems for better integrating new processes is going to be absolutely mission critical.

What's more, the people who are designing those systems and who are implementing those systems need to be internal as well as external. Consultants are absolutely crushing it in this market. They're making more money than just about anyone else except maybe NVIDIA. And that's okay. Consultants can be a really powerful part of any strategy. Superintelligent is effectively consulting as software. But still, the capabilities ultimately have to resolve back home.

The reality is that the way that AI is going to drive value and the use cases that will actually remake the business are so unique to each organization that you simply can't outsource this process, at least not entirely.

The good news is that organizations are getting a little bit more confident. One of the statistics that I think best tells this story: Menlo, the venture firm, has done an enterprise AI study for each of the past two years, and in 2023, found that enterprises bought 80% of their software from third parties versus just building 20% in-house. This year, in 2024, that number had shifted wildly. 53% of software was bought externally, while 47% was built internally.

Now, I do not believe that this is likely to stay the case forever. I think it's going to boomerang.

What I think it reflects, though, is a recognition and a growing confidence among enterprises and a realization as they dig into AI that there are certain applications that would be really good for them based on their particular vertical or what they do that aren't available in the market yet. Now, why I think it will boomerang is that ultimately third-party software providers who are entirely specialized and focused on a single issue tend to build better software in the long run than the internal hacks of a company whose job is something else entirely.

But I think the fact that enterprises are taking the time to actually go do the reps, put in the work, and build stuff that seems useful for them is still going to pay off hugely, even if they end up not using the software that they built now forever. Like I said, I think it shows a confident shift. And I do think that organizations who have that build capacity are likely to perform better by being closer to the locus of change when it comes to the Gen AI software itself.

A big lesson in general for AI this year that I think has implications for enterprises is that there really don't appear to be moats in models. Coming into the year, GPT-4 was well ahead of the field. This year, however, everyone caught up and built GPT-4 class models. Much of the battleground this year was in fact how much power model developers could wring out of smaller models that could be run on smaller devices or more cheaply. Now it is entirely possible that we will see other big leaps where models do become for a time a moat again.

We're now, of course, entering the reasoning era with models like O1 that are trying different approaches to scaling. And like I said, that could, for a time again, create a major differentiation between the state of the art and everyone else.

However, in the long run, it seems pretty clear that if you look over the course of a number of years, which particular stack you decide to invest in, as long as it's one of the credible ones, shouldn't make the big difference. It's going to be much more about how you integrate AI and the thoughtful systems you put around it than the particular technology choices you make in the short term. Kind of related to all of this, one of the key themes that you see coming out over and over again is that 2024 was really a year of enterprises building out the necessary infrastructure.

In some cases, that might have meant these build capabilities. In other cases, for organizations that were really thinking ahead, it was things like trying to be better about enablement ecosystems. And another one that we saw is companies taking data much more seriously, trying to think about data readiness and making sure that their data was ready to go to really get full value out of all these Gen AI tools.

Now, this is a really interesting one because the very nature of data readiness could change as different capabilities of LLMs evolve. But I think more broadly, it shows the maturation of enterprise AI.

Today's episode is brought to you by Plum. Want to use AI to automate your work but don't know where to start? Plum lets you create AI workflows by simply describing what you want. No coding or API keys required. Imagine typing out, AI, analyze my Zoom meetings and send me your insights in Notion and watching it come to life before your eyes.

Whether you're an operations leader, marketer, or even a non-technical founder, Plum gives you the power of AI without the technical hassle. Get instant access to top models like GPT-4o, Claude Sonnet 3.5, Assembly AI, and many more. Don't let technology hold you back. Check out UsePlumb, that's Plum with a b, for early access to the future of workflow automation. Today's episode is also brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.

Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.

Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw.

If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Shifting gears slightly, one thing that we observed over and over again at Superintelligent is that buy-in solves problems. There are a lot of challenges that Gen AI introduces into the enterprise.

There are legal issues, compliance issues, new security risk issues. And the companies that were best at facing these were the ones who were involving the people in departments who were responsible for those challenges right up front. What we found over and over again is that there's so much excitement, yes, trepidation too, but so much excitement, genuine excitement around the potential of AI, that there are a lot of people who are willing to be internal advocates and fighters who are trying to make things work.

Whereas in the past, different departments like legal or compliance or security may have viewed themselves as roadblocks, when it comes to Gen AI, they're just trying to make things work right. That idea of getting buy-in across a wide ecosystem of stakeholders is something that I think could be much more broadly adopted, to the benefit of a lot of different organizations. Speaking of moving quickly, if 2024 showed anything, it is that there is no such thing as fast enough. In fact, one thing that we saw over and over again is that the companies who were farthest ahead were inevitably those who still felt the farthest behind.

And that's because Gen AI is this sort of iceberg where you scratch the surface and you just realize that there's so much more. You find one prompt and it unlocks this world of new possibilities, and it is so clear that we are just barely nudging into a new era. Organizations ultimately have to benchmark themselves in two ways.

First, yes, unfortunately, they do have to benchmark themselves against their competitors. The race is on to adopt AI and the organizations that do so the most effectively and the most quickly are going to have an advantage going forward. There is simply no denying that. But at the same time, they have to be benchmarking against themselves as well. When it comes to what is fast or fast enough, the best organizations that we saw were, yes, aware of what their competitors were doing, but were mostly focused on how they could be the best for themselves.

The more energy they spent on empowering their teams, building new enablement systems, and figuring out good ways to make quick decisions, rather than dwelling on what other people were doing, the more they outperformed. The simple reality, as I've said before, is that the way that Gen AI will impact any one particular organization is going to be so unique and distinct that it really does require a ton of internal exploration and experimentation. And a focus on doing that briskly and with intention seems to be where the best organizations are trending.

Related to that, this so-called slowdown in the progress of LLMs absolutely does not mean you should slow down your AI strategy in your organization.

One of the big things that we've been talking about over the last quarter is, of course, the fact that the pre-training method of scaling LLMs seems to have reached some plateaus. Simply throwing more data and more compute at the problem is starting to hit some real limits. This is what has generated all sorts of new explorations and new scaling methodologies like test time compute, which is a part of the O1 reasoning model. And I think there could be a temptation for organizations to say, great, if this means that AI is going to slow down, that means that maybe we don't have to have such urgency.

The reality, however, is that if AI stopped right now, it would still probably take a decade to fully integrate and understand all the ways that it could impact how we work. To the extent that there is any breather in capabilities changes, it represents not a chance to slow down, but maybe just a little bit of a chance to catch up. And of course, the reality is with so many vertical applications coming online, the slowdown really isn't going to feel like much of a slowdown at all.

Moving into some bigger ideas for going forward, we have still in 2024 very much been in the one-to-one replacement era of AI. And this is totally natural. It makes sense when we're presented with a new technology to see how it would make the things that we do currently better, faster, or cheaper.

And AI certainly does a lot of that really well. We can produce more content for marketing and we can do it faster. We can summarize meetings much more efficiently. But I think it's important to remember that when the history books are written, it is almost certainly the case that the winners of the AI transformation will be those not who one-to-one replaced all of their functions with AI, although that'll be a part of it, but instead who used AI to fundamentally innovate what they do.

I'm going to come back to that in my last bullet as I have another frame of reference for it. But 2024 was definitely more a replacement era of AI than an innovate era of AI. And I think we'll start to see that shift a little bit in 2025. Part of the reason for that is that the agents are coming. Agents will definitely initially be viewed as a one-to-one replacement technology: your customer service bots doing the work of your current customer service agents.

However, where agents get really interesting is when you start to think about what it means if every one of your employees right now were a manager who had an army of 10 or even 100 employees to do what they do, but amazingly well in ways that you couldn't even imagine now. 2025 is the year when agent experiments will actually start to happen, when viable, functional, vertical, and horizontal agents will come online, when function-by-function agents start to become normalized.

And because of that, even if you thought you might have been just starting to get out of the pilot era of AI assistants, guess what? When it comes to agents, we are all pilots again. There is going to be no way to figure out how these things work best for our organizations without simply trying them. And that is going to involve lots and lots of pilots, lots and lots of experiments. And this brings me to my last two points. Mindset matters.

More than anything else, when I think about what advice I would give to leadership, it is to lead towards a culture of change. A culture where change is embraced as the new normal, where AI is not seen as a one-time transformation, but as the beginning of a new way of doing business that is constantly updating and evolving, where people are empowered to grow constantly, thinking in new ways about how to do their job, and hopefully having a better experience because of it.

It is so tempting for companies to try to neatly sequence things. 2023 was the year of experimentation. 2024 was the year of ROI. 2025 was the year of scaling. But it's just not going to be like that. There is always something that's going to be piloting. There's always something that's going to be being analyzed for ROI. There's always something that's going to be scaling. And then before you know it, something new will come along and go through that process all again.

Change is the new normal. And it's been this way for some time, but what Gen AI does is it extends the breadth of that change to everyone and everything and every process. And it speeds it up dramatically in a way that we simply can't ignore anymore. We need to build organizations that are fundamentally designed to be able to change. And that's going to start with culture and mindset.

Lastly, if there's one thing that I hope enterprises take away from any conversation with me, it's this idea that AI is opportunity tech, not just efficiency tech. And this, of course, goes back to that idea of one-to-one replacement versus innovation.

It is so tempting and will be very rewarded by markets to view AI strictly as an efficiency technology. I get to do the same with fewer inputs than I use now. I get to save money and produce the same number of widgets. Short-term markets, like I said, reward that type of thing. They reward cost-cutting. But the organizations that win the AI transformation will not be those who view it as a technology for doing the same with less.

The organizations that win this transformation will be those who view it fundamentally as an opportunity creation technology, where they can do more with the same or much, much more with a little more.

Instead of thinking about customer service agents as a one-to-one replacement for the employees you have now, what would it look like to create the greatest customer service that's ever existed, where agents were available 24-7, were unbelievably good at really simple issues, and also really good at routing difficult issues to the exact right person to solve it in a way that made the experience better than anything that was possible before?

What if marketing wasn't the lowest common denominator boring social trend following, and instead was marketing departments building software and games and experiences for their people because they're all using Cursor or Devin and have become coders without being coders? These things are all within our grasp, and the organizations and enterprises that win are going to be the ones who seize those opportunities.

All right, that will do it for these 17 reflections on enterprise AI. I'm sure as soon as I press stop recording, I will think of six others. But just a small look at what I've seen this year. I think if you take a step back, it is actually quite remarkable how fast enterprises have gotten it together to start adopting this technology. In fact, frankly, a lot of the tool developers aren't very good at supporting them right now because they didn't expect them to get it this fast. This is pushing all of us, in other words, to new heights. And I'm very excited to be a part of it with you. Until next time, peace.