People
Chelsea Troy
Nathaniel Whittemore (NLW)
Tim O'Reilly
Topics
@Nathaniel Whittemore: I think the growth in AI capabilities, and in the discussion around them, has made the question of AI's impact on employment increasingly prominent. Many companies will treat AI first and foremost as a cost-cutting measure, content to do the same with less. But I believe the companies that treat AI as an opportunity, that think structurally and fundamentally about how to redesign what they do in order to offer things that were previously impossible, will win: ultimately, growth beats efficiency. We should not underestimate our agency to orient this technology toward a certain end. The claim that AI is meant to serve people gets complicated once we introduce agents that really will do a huge percentage of the tasks people perform today. I think the companies trying to replace that type of work believe they are doing something that will be net positive for people. AI should not be seen as a one-to-one replacement for existing behaviors but as an unlock for new opportunities. What really matters, I think, is getting organizations and enterprises to articulate how they want to reinvest the gains from all the new efficiency and productivity that AI brings. Companies can choose to move people into new areas where they create value, or to cast them off, and we as customers, consumers, media, and investors get to reward or punish those choices in the open market. I think Silicon Valley's view on this is far more nuanced than this piece alone would suggest.

@Tim O'Reilly: Many Silicon Valley investors and entrepreneurs seem to view putting people out of work as a massive opportunity, an idea that is anathema to me. AI first should not mean reducing humans to a cost to be eliminated; it should mean using AI to augment human capabilities, to solve problems that were previously unsolvable, and to make our machine systems more attuned to human needs. Yet AI first has now come to mean using AI to replace people. Rather than using technology to replace workers, we should augment them so they can do things that were previously impossible. Companies that use AI simply to cut costs and replace workers will be out-competed by companies that use it to expand their capabilities. With the aid of AI, we can translate everything into many languages, making our knowledge and our products accessible and affordable in parts of the world we could not serve before; an AI-generated translation is better than no translation. If we simply try to implement what we have done before, using AI to do it faster and more cheaply, we may see some cost savings, but we will utterly fail to surprise and delight our customers. We have to re-envision what we do and ask ourselves how we might do it with AI if we were coming fresh to the problem with this new toolkit. The long arc of user interfaces is bringing computers closer and closer to the way humans communicate with each other. What we really want to do is use AI to make our customers' interaction with our content richer and more natural, in short, more human. I like to see us prototyping the interaction with AI before thinking about what kind of web or mobile interface to wrap around it. AI-native does not mean AI-only; the art of modern development is orchestrating these systems to complement one another. The problems of integrating AI into our businesses, our lives, and our society are indeed complicated, but that does not mean embracing a cult of economic efficiency that reduces humans to a cost to be eliminated. It means doing more: using AI to augment human capabilities, to solve problems that were previously impossible, in ways that were previously unthinkable, and in ways that make our machine systems more attuned to the humans they serve.

@Chelsea Troy: Large language models have not wholesale wiped out programming jobs so much as they have called us to a more advanced, more contextually aware, and more communally oriented skill set that, frankly, we were already being called to anyway.

Chapters
This chapter explores Tim O'Reilly's perspective on the 'AI First' approach, challenging the notion that it primarily focuses on replacing human workers. O'Reilly argues it should be about augmenting human capabilities and creating new opportunities.
  • Tim O'Reilly criticizes the interpretation of 'AI First' as solely focused on job displacement.
  • O'Reilly advocates for AI augmentation to enhance human capabilities and solve previously impossible problems.
  • O'Reilly uses the example of AI-powered translation at O'Reilly Media to illustrate how AI expands reach and accessibility.

Shownotes Transcript


Today on the AI Daily Brief, does AI first mean replacing people? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Thanks to today's sponsors, Blitzy.com, Vanta, and Super Intelligent. And to get an ad-free version of the show, go to patreon.com slash ai daily brief.

Welcome back to the AI Daily Brief. Today is a long reads episode of the show, and we will be reading a piece by Tim O'Reilly about this concept of AI first. Specifically, we're going to be asking whether AI first means necessarily replacing people. This question of AI job displacement has been swirling around the ether a lot more recently. Part of this is just because of increased capabilities. Part of it's because when people talk about this stuff, it prompts more people to talk about this stuff.

Silicon Valley stalwart and media entrepreneur Tim O'Reilly apparently doesn't like what some Silicon Valley investors and entrepreneurs mean when they say AI first. On LinkedIn this week, he wrote: Many Silicon Valley investors and entrepreneurs seem to view putting people out of work as a massive opportunity.

That idea is anathema to me. It's also wrong, both morally and practically. The problems of integrating AI into our businesses, our lives, and our society are indeed complicated. But whether you call it AI-native or AI-first, it does not mean embracing the cult of economic efficiency that reduces humans to a cost to be eliminated. No, it means doing more, using humans augmented with AI to solve problems that were previously impossible, in ways that were previously unthinkable, and in ways that make our machine systems more attuned to the humans they are meant to serve.

So let's read Tim's piece on O'Reilly Radar, and then we'll come back and have a bit of a discussion. You guys have spoken loud and clear that you officially hate the ElevenLabs reads. I will not guarantee that I will never do them again, but I'm trying to make enough time to be able to do this in the more boutique, organic, artisanal, human-reading kind of way. Tim's piece is called AI First Puts Humans First, and in it he writes: I was alarmed and dismayed to learn that in the press, AI first has now come to mean using AI to replace people.

Many Silicon Valley investors and entrepreneurs even seem to view putting people out of work as a massive opportunity. The idea is anathema to me. It's also wrong, both morally and practically. The whole thrust of my 2017 book, What's the Future and Why It's Up to Us, was that rather than using technology to replace workers, we can augment them so that they can do things that were previously impossible. It's not as though there aren't still untold problems to solve, new products and experiences to create, and ways to make the world better, not worse.

Every company is facing this choice today. Those that use AI simply to reduce costs and replace workers will be out-competed by those that use it to expand their capabilities. So, for example, at O'Reilly, we have primarily offered our content in English, with only the most popular titles translated into the most commercially viable languages. But now, with the aid of AI, we can translate everything into dozens of languages, making our knowledge and our products accessible and affordable in parts of the world that we just couldn't serve before.

These AI-only translations are not as good as those that are edited and curated by humans, but an AI-generated translation is better than no translation.
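To make the workflow concrete, here is a minimal sketch of what a batch translation pipeline along these lines might look like. This is not O'Reilly's actual system; the machine_translate function is a hypothetical stand-in for whatever translation model is used, and the review flag mirrors the human-edited versus AI-only distinction Tim describes.

```python
# Hypothetical sketch of an AI-first translation pipeline: machine drafts
# for every language, with human editing reserved for a priority subset.

TARGET_LANGUAGES = ["es", "de", "ja", "pt", "hi", "pl"]  # dozens in practice
HUMAN_REVIEW_LANGUAGES = {"es", "de", "ja"}  # the commercially viable subset

def machine_translate(text: str, lang: str) -> str:
    """Placeholder for a real machine-translation or LLM call (assumed)."""
    return f"[{lang}] {text}"

def translate_catalog(titles: list[str]) -> list[dict]:
    jobs = []
    for title in titles:
        for lang in TARGET_LANGUAGES:
            jobs.append({
                "title": title,
                "lang": lang,
                "draft": machine_translate(title, lang),
                # AI-only drafts ship as-is; popular languages get editors.
                "needs_human_review": lang in HUMAN_REVIEW_LANGUAGES,
            })
    return jobs

for job in translate_catalog(["Example Title"]):
    print(job["lang"], "human review:", job["needs_human_review"])
```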

Our customers who don't speak English are delighted to have access to technical learning in their own language. As another example, we've built quizzes, summaries, audio, and other AI-generated content, not to mention AI-enabled search and answers using new workflows that involve our editors, instructional designers, authors, and trainers in shaping the generation and the evaluation of these AI-generated products. Not only that, we pay royalties to authors on these derivative products. But these things are not really yet what I call AI-native. What do I mean by that?

I've been around a lot of user interface transitions. From the CRT screen to the GUI, from the GUI to the web, from the web on desktops and laptops to mobile devices. We all remember the strategic conversations about mobile first. Many companies were late to the party in realizing that consumer expectations had shifted and that if you didn't have an app or web interface that worked well on mobile phones, you'd lose your customers. They lost out to companies that quickly embraced the new paradigm. Mobile first meant prioritizing user experiences for a small device and scaling up to larger screens.

At first, companies simply tried to downsize their existing systems — remember Windows Mobile — or somehow shoehorn their desktop interfaces onto a small touchscreen. That didn't work. The winners were companies like Apple that created systems and interfaces that treated the mobile device as a primary means of user interaction. We have to do the same with AI. When we simply try to implement what we've done before, using AI to do it more quickly and cost-efficiently, we might see some cost savings, but we will utterly fail to surprise and delight our customers.

Instead, we have to re-envision what we do, to ask ourselves how we might do it with AI if we were coming fresh to the problem with this new toolkit. Chatbots like ChatGPT and Claude have completely reset user expectations. The long arc of user interfaces to computers is bringing them closer and closer to the way humans communicate with each other. We went from having to speak computer to having them understand human language.

In some ways, we had started doing this with keyword search. We'd put in human words and get back documents that the algorithm thought were the most related to what we were looking for, but it was still a limited pidgin. Now, though, we can talk to a search engine or chatbot in a much fuller way, not just in natural language, but with the right preservation of context in a multi-step conversation, or with a range of questions that goes well beyond traditional search. For example, in searching the O'Reilly platform's books, videos, and live online courses, we might ask something like,

What are the differences between Camille Fournier's book, The Manager's Path, and Addy Osmani's Leading Effective Engineering Teams? Or, what are the most popular books, courses, and live trainings on the O'Reilly platform about software engineering soft skills? Followed by the clarification, what I really want is something that will help me prepare for my next job interview.
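As a rough illustration of what preserving context across a multi-step conversation means mechanically, here is a hedged sketch: each follow-up question is answered with all prior turns attached, so a clarification like the interview-prep one above can narrow an earlier query. The ask_llm function is a placeholder, not a real O'Reilly or vendor API.

```python
# Minimal sketch of multi-turn, context-preserving search.
# ask_llm() stands in for any chat-completion call; it is assumed, not real.

def ask_llm(messages: list[dict]) -> str:
    """Placeholder for a chat model that sees the full conversation."""
    return f"(answer conditioned on {len(messages)} prior messages)"

class ConversationalSearch:
    def __init__(self) -> None:
        self.history: list[dict] = []

    def ask(self, question: str) -> str:
        # Every turn carries the whole history, so a follow-up like
        # "what I really want is interview prep" refines the earlier query.
        self.history.append({"role": "user", "content": question})
        answer = ask_llm(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer

search = ConversationalSearch()
search.ask("Most popular resources on software engineering soft skills?")
print(search.ask("What I really want is help preparing for a job interview."))
```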

Or consider verifiable skills, one of the major features that corporate learning offices demand of platforms like ours. In the old days, certifications and assessments mostly relied on multiple-choice questions, which we all know are a weak way to assess skills, and which users aren't that fond of. Now, with AI, we might ask AI to assess a programmer's skills and suggest opportunities for improvement based on their code repository or other proof of work. Or an AI can watch a user's progress through a coding assignment in a course and notice not just what the user got wrong, but what parts they flew through and which ones took longer because they needed to do research or ask questions of their AI mentor.
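To sketch the shape of that idea, an AI-native assessment might combine repository artifacts with behavioral signals gathered during the course itself, treating speed and mentor questions as evidence rather than relying on right-or-wrong answers alone. Everything below is hypothetical scaffolding, not a description of a real assessment product.

```python
# Hypothetical sketch: skills assessment from proof of work and progress
# signals, rather than from multiple-choice questions.

from dataclasses import dataclass, field

@dataclass
class ExerciseTrace:
    exercise: str
    seconds_spent: float
    mentor_questions: int  # questions the learner asked their AI mentor

@dataclass
class LearnerProfile:
    repo_files: list[str]  # proof of work, e.g. a code repository listing
    traces: list[ExerciseTrace] = field(default_factory=list)

def assess(profile: LearnerProfile) -> dict:
    # Exercises the learner flew through suggest fluency; slow ones with
    # many mentor questions suggest topics worth reinforcing.
    strengths = [t.exercise for t in profile.traces
                 if t.seconds_spent < 120 and t.mentor_questions == 0]
    review = [t.exercise for t in profile.traces
              if t.seconds_spent > 600 or t.mentor_questions > 3]
    return {"strengths": strengths, "suggested_review": review}
```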

An AI-native assessment methodology not only does more, it does it seamlessly, as part of a far superior user experience. We haven't rolled out all these new features, but these are the kinds of AI-native things we are trying to do, things that were completely impossible before. We have a still largely unexplored toolbox that is filled daily with new, powerful tools. As you can see, what we're really trying to do is use AI to make the interaction of our customers with our content richer and more natural. In short, more human. One mistake that we've been trying to avoid is what might be called putting new wine in old bottles.

That is, there's a real temptation for those of us with years of experience designing for the web and mobile to start with a mock-up of a web application interface with a window where the AI interaction takes place. This is where I think AI-first really is the right term. I like to see us prototyping the interaction with AI before thinking about what kind of web or mobile interface to wrap around it. When you test out actual AI-first interactions, they may give you completely different ideas about what the right interface to wrap around them might look like.
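One cheap way to practice this, sketched below under obvious assumptions: prototype the conversation itself in a bare terminal loop, with no interface at all, and only afterwards decide what web or mobile wrapper the emerging interaction patterns deserve. The respond function is a placeholder for whatever model is under test.

```python
# Hypothetical AI-first prototyping loop: exercise the interaction in a
# plain REPL before committing to any web or mobile interface.

def respond(history: list[str], user_input: str) -> str:
    """Placeholder for the model call being prototyped (assumed)."""
    return f"(response to: {user_input!r})"

def repl() -> None:
    history: list[str] = []
    while True:
        user_input = input("> ")
        if user_input in {"quit", "exit"}:
            break
        reply = respond(history, user_input)
        history += [user_input, reply]
        print(reply)

if __name__ == "__main__":
    repl()
```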

There's another mistake to avoid, which is to expect AI to be able to do magic and not think deeply enough about all the hard work of evaluation, creation of guardrails, interface design, cloud deployment, security, and more.

AI native does not mean AI only. Every AI application is a hybrid application. I've been very taken with Philip Carter's post, LLMs Are Weird Computers, which makes the point that we're now programming with two fundamentally different types of computers: one that can write poetry but struggles with basic arithmetic, and another that calculates flawlessly but can't interact easily with humans in our own native languages. The art of modern development is orchestrating these systems to complement one another.
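Carter's two-computers framing maps naturally onto the familiar tool-routing pattern: let the language model handle language, and hand arithmetic to the machine that calculates flawlessly. Here is a minimal, hedged sketch; the llm_plan function is a stub for a real model call, and its toy string parsing stands in for genuine language understanding.

```python
# Minimal sketch of orchestrating the two "computers": a language model
# for natural language, a deterministic evaluator for arithmetic.

import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str) -> float:
    """The computer that calculates flawlessly: safe arithmetic evaluation."""
    def ev(node: ast.AST) -> float:
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("arithmetic only")
    return ev(ast.parse(expr, mode="eval").body)

def llm_plan(question: str) -> str:
    """Stub for the computer that writes poetry but fumbles arithmetic;
    a real system would ask the model to extract the expression."""
    return question.split("is")[-1].strip(" ?")

def answer(question: str) -> str:
    expression = llm_plan(question)  # language -> structured request
    result = calc(expression)        # deterministic computation
    return f"{expression} = {result}"

print(answer("What is 3 * (2 + 5)?"))
```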

This was a major theme of our recent AI Codecon, Coding with AI. The lineup of expert practitioners explained how they are bringing AI into their workflow in innovative ways to accelerate, not replace, their productivity and their creativity. And speaker after speaker reminded us of what each of us still needs to bring to the table.

Chelsea Troy put it beautifully, saying, Large language models have not wholesale wiped out programming jobs so much as they have called us to a more advanced, more contextually aware, and more communally oriented skill set that we frankly were already being called to anyway. On relatively simple problems, we can get away with outsourcing some of our judgment. As the problems become more complicated, we can't.

The problems of integrating AI into our businesses, our lives, and our society are indeed complicated. But whether you call it AI-native or AI-first, it does not mean embracing the cult of economic efficiency that reduces humans to a cost to be eliminated. It means doing more, using humans augmented with AI to solve problems that were previously impossible in ways that were previously unthinkable, and in ways that make our machine systems more attuned to the humans they are meant to serve. As Chelsea said, we are called to integrate AI into a more advanced, more contextually aware, and more communally oriented sensibility.

AI first puts humans first. Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context, which, if you don't know exactly what that means yet, do not worry, we're going to explain, and it's awesome. So Blitzy is used alongside your favorite coding copilot as your batch software development platform for the enterprise, and it's meant for those who are seeking dramatic development acceleration on large-scale codebases. Traditional copilots help developers with line-by-line completions and snippets,

but Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch-build millions of lines of high-quality code for large-scale software projects. So then, whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises a dramatic velocity improvement.

To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest-quality code to the enterprise, in batch. Projects that would normally require dozens of developers working for months can now be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever.

If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press get started and start using the product right away. Today's episode is brought to you by Vanta.

Vanta is a trust management platform that helps businesses automate security and compliance, enabling them to demonstrate strong security practices and scale. In today's business landscape, businesses can't just claim security, they have to prove it.

Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, and GDPR is how businesses can demonstrate strong security practices. And we see how much this matters every time we connect enterprises with agent services providers at Superintelligent. Many of these compliance frameworks are simply not negotiable for enterprises.

The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easier and faster by automating compliance across 35+ frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC whitepaper found that Vanta customers achieved $535,000 per year in benefits, and the platform pays for itself in just three months.

The proof is in the numbers. More than 10,000 global companies trust Vanta, including Atlassian, Quora, and more. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

Today's episode is brought to you by Super Intelligent and, more specifically, Super's Agent Readiness Audits. If you've been listening for a while, you have probably heard me talk about this, but basically the idea of the Agent Readiness Audit is that this is a system that we've created to help you benchmark and map opportunities

in your organizations where agents could specifically help you solve your problems, create new opportunities in a way that, again, is completely customized to you. When you do one of these audits, what you're going to do is a voice-based agent interview where we work with some number of your leadership and employees

to map what's going on inside the organization and to figure out where you are in your agent journey. That's going to produce an agent readiness score that comes with a deep set of explanations, strengths, weaknesses, key findings, and, of course, a set of very specific recommendations, which we can then help you fulfill by finding the right partners.

So if you are looking for a way to jumpstart your agent strategy, send us an email at agent at besuper.ai and let's get you plugged into the agentic era. All right.

Back to NLW here. So if you've been a regular listener, you will have heard me talk about similar themes, right? The conception that I often come back to is efficiency AI versus opportunity AI. I've said before, and I'm sure I will continue to beat this drum, that I think that there will be a very natural temptation for many companies to treat AI strictly as a cost-cutting measure first, basically to be content with doing the same with less.

I think Wall Street will even reward this behavior in the short run. I also think that those companies will get wildly out-competed by companies who instead view AI as opportunity, who think structurally and fundamentally about how to redesign what they do to offer things that were previously not possible. Better products, better services, more products, more services. Ultimately, growth beats efficiency. And I think that will at some point jog us from the efficiency-first or efficiency-only kind of mindset when it comes to enterprise AI into this broader, more comprehensive vision.

Now, I also think that the faster we get through that efficiency phase, the less risk of wild societal disruption there is. And so I think it's good that Tim and others are having this conversation. I also think it's important to remember that we get to plant our flags about our vision for the world we want to inhabit. Yes, technology has its own momentum and inertia, but that doesn't mean that we don't get a stake and a say in where we're driving this train. I think we underestimate our agency to orient this technology towards a certain end.

Where I think some of this language is going to have some trouble is that I think that the nuance of what Tim is trying to say about AI being meant to serve people gets complicated when we introduce agents that really will be doing a huge percentage of the tasks that people do right now.

A big portion of the work that anyone has to do, at least from a knowledge work perspective, is likely to be done by agents in the future. And I think that not only is that okay, that's probably good. I think the companies that are trying to replace that type of work are trying to do something that they believe will be net positive for people.

And it does go beyond just the rote tasks that we don't like. Think about vibe coding. Vibe coding isn't just replacing the bad tasks that coders don't like, although part of the benefit of AI for coding is that. It's also allowing non-software engineers to create with code in a way that wasn't possible before. Now, I would argue that this all falls under this sort of broader framework that Tim is trying to articulate of thinking of AI not as one-to-one replacement for existing behaviors, but as unlocking new opportunities.

But I guess my point is, I don't think that a priori, any of us should cling too strongly to the exact set of tasks that make up our work today. Those just really are going to change. I don't think there's any way around it. To me, I think that the real big thing here is in getting organizations and enterprises to articulate their vision for how they want to reinvest the gains that they get from all of their new efficiencies and productivity that comes from AI.

Are organizations going to do stock buybacks and pay dividends? Or are they going to reinvest? Are they going to move people whose jobs, or at least the collection of tasks that previously made up that job, are now being done by agents to different areas with new opportunity to create value? Or are they just going to cast them off into the sunset? These are decisions that companies get to make, and we as customers and consumers and media and investors get to reward or punish in the open market.

I think the more of a movement that we push towards this vision of opportunity, the more likely it is that companies start there rather than being dragged there by force over time. One optimistic thing, while I have no doubt that Tim has had conversations that have disturbed him, I think that the perspective of Silicon Valley on this is a lot more nuanced than you might think from this piece.

In setting up his argument, he referenced the piece by Ed Newton-Rex, who, to his great credit, is putting his money where his mouth is and trying to build what is, by his estimation, a fairer, more just version of AI, but whose main point of leverage is being loud, and specifically loudly antagonistic toward Silicon Valley's approach to AI, as sort of part of the business model. Again, this doesn't mean you should discount that opinion. It's just worth having the context.

If you look at where some of this other idea of being enthusiastic about job displacement might come from, at least from a sensibility standpoint, there is a big discussion like this one on a Y Combinator podcast from last year around how vertical AI agents could be bigger than SaaS, because all of a sudden they're competing for labor budgets, not just software budgets. And while that reality has indeed captured venture capitalists' imaginations, I don't think that they are all sitting there hoping that everyone gets fired.

I think that most of them believe that there's going to be a natural transition process. But in the meantime, there is going to be big shifts of exactly the type that they tend to make their money from.

Even some of the most egregious examples of Silicon Valley seeming to be really callous are a little bit less clear than they seem at first. For example, you might remember those "Stop Hiring Humans" billboards, which, to be clear, I really did not like. Tongue-in-cheek or clickbaity or not, that's a company that's making a choice to get eyeballs at the cost of inflaming some very tense cultural conversations, basically socializing the losses as a negative externality for all of us,

while they have the benefit of more attention. But to be fair to them, if you read their post from last year explaining the campaign, they really don't even see themselves as trying to get people to replace humans. As I said, all of this is not at all to be Pollyannish about the fact that there are some people who are both intentionally and structurally making it their incentive for AI to replace people. The whole premise of me reading this is that I think Tim's piece is an important one and good food for thought.

But I stubbornly perhaps remain optimistic about how all this plays out. And I'm glad you're here to keep having the conversation. For now, that's going to do it for today's AI Daily Brief. Until next time, peace.
