This podcast is supported by Google. Hey, everyone. David here, one of the product leads for Google Gemini. If you dream it and describe it, Veo 3 and Gemini can help you bring it to life as a video, now with incredible sound effects, background noise, and even dialogue. Try it with a Google AI Pro plan or get the highest access with the Ultra plan. Sign up at gemini.google to get started and show us what you create.
Today on the AI Daily Brief, 20 or more ideas for jobs that AI might create. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. ♪
Welcome back to the AI Daily Brief. Thank you to our sponsors, Gemini, Blitzy, Vanta, and Agency.org. As always, if you are looking for an ad-free version of the show, you can head on over to patreon.com slash AI Daily Brief, starting at just $3 a month. And one administrative note for today. Today's episode is an extended main. There will not be headlines, but we will be back with the headlines tomorrow.
This is a fun topic. It's a slightly different take on something we've been talking about a lot, which is, of course, AI-related job displacement. But hopefully this takes things in a slightly new direction. Welcome back to the AI Daily Brief. The conversation about AI and jobs has undeniably been getting louder recently. And I think there's a pretty simple reason for that. The short answer is that we are now living through the transition from the assistant era of AI to the agentic era of AI.
Agents implicitly carry with them the capability to do broader and bigger sets of work at higher levels of complexity and allow companies to start thinking about and actually asking the question, are there entire sets of tasks that can be done by agents instead of by humans? And does that mean there are entire categories of jobs that can be done by agents instead of humans?
Now, as this conversation has increased, we've seen notes from the CEOs of Shopify and Fiverr and Duolingo, all about the coming changes and how they're going to impact their companies. And we got another one of those today, which in this case came from Amazon CEO Andy Jassy, and which we will dig into in just a moment. My position on all of this is that I am firmly in the camp that AI is coming for our jobs.
I think that this idea that we won't be replaced by AI but by a person using AI is a comfortable delusion that will look very antiquated very soon. And yet, I do not believe this means that we won't have jobs. I just think that jobs are going to look entirely different. So in addition to covering today that new memo by Amazon's Andy Jassy, we're also going to start looking at where AI might begin to create jobs, anchored by a big piece in the New York Times about exactly that.
Let's talk first, though, about this memo. The big banner headline that caught everyone's attention was that he said it was likely that agents would reduce the overall headcount at the company. But of course, it was a little bit broader than that. Amazon published the note, which was sent to employees directly via email. And here are a couple of excerpts. Section one is platitudes about how generative AI is changing everything. Jassy calls it a once-in-a-lifetime transformation that completely changes what's possible for customers and businesses.
He then talks about all the ways in which generative AI is coming to Amazon's products, including Alexa Plus, Amazon's AI Shopping Assistant, features like their Lens and Buy For Me features, but also additional tooling that they're creating for their independent sellers. He adds a bit more about advertising and chips. Basically, you can see that the point is that AI is everywhere across the Amazon product suite.
But the gist of the piece is that Jassy wants to go even faster. He writes,
There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they're coming and coming fast. But what are the implications for Amazon? Well, he says these agents are going to change the scope and speed at which we can innovate for customers. Quote, agents will allow us to start almost everything from a more advanced starting point. We'll be able to focus less on rote work and more on thinking strategically about how to improve customer experience and invent new ones.
Agents will be teammates that we call on at various stages of our work, that will get wiser and more helpful with more experience. If we build and leverage the right agents, it's going to rapidly accelerate our ability to make customers' lives easier and better every day, and it's going to make our jobs even more exciting and fun than they are today.
Today, Jassy says Amazon has over a thousand Gen AI services and applications that are either built or in progress. But he still says that's a small fraction of what will ultimately come. Specifically, the company's goal is to make it easier for people inside the company to build agents and then share them with others. Now, here's the big line that everyone picked up on. As we roll out more generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today and more people doing other types of jobs.
It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company. And so he suggests: go use AI, and take advantage of the resources the company is providing to educate yourself, because change is coming.
Now, in some ways, this is pretty similar in tone to the other CEO letters that we've seen. The difference, of course, being that Amazon is a much bigger company than Shopify or Duolingo. It reads to me not as a sign of an imminent wave of job cuts, but as a way to start planting a flag for what is hopefully a more gentle, though still inevitable, transition.
Now, one person who made this point even more strenuously recently was Vista Equity Partners CEO Robert Smith. He recently told a private capital conference in Berlin that, quote, we think that next year, 40% of the people at this conference will have an AI agent and the remaining 60% will be looking for work. He continued, there are 1 billion knowledge workers on the planet today and all of those jobs will change. I'm not saying they'll all go away, but they will all change.
Now, not everyone is as far along as him. At the same conference, Orlando Bravo, the head of private equity firm Thoma Bravo, was asked about Dario Amodei's warning that AI could wipe out half of white-collar jobs within the next few years. He responded, it's a very futuristic point of view, but I think it will make white-collar jobs more productive and people a lot smarter. I use ChatGPT and all kinds of models all the time. I say, hey, write me a paper on this topic, and then I can think more thoughtfully and more deeply about that topic and improve it.
Now, no disrespect here, but if the mental model of AI is manually instructing deep research to write papers, you're really still only looking at the tip of the iceberg. Within a year, especially with where he sits and what his firm does, I can almost guarantee that he will have a team of agents that are constantly monitoring for relevant information, like a hyperactive army of 24-7 analysts.
Other areas where we're actually seeing AI-driven headcount reductions include British Telecom. They said that their current plan to cut 40,000 jobs and reduce £3 billion in costs by the end of the decade might be an understatement, with CEO Allison Kirkby suggesting that advances in AI could allow the company to go even further.
This change is already happening all over the world. The Independent recently profiled a string of workers affected by AI layoffs. One HR worker at a Bay Area benefits management firm said she lost her position shortly after her company's AI build-out. She said of her former employer, I thought that because I had put in so much time and been so good on the higher-level stuff, he would invest in me. Then, as soon as he had a way to automate it away, he did that. He just let go of me.
This, by the way, is why it's so important to have leadership conversations that are deep and thoughtful and sincere and actually bring your employees into your plans. The more that employers can articulate their vision, the better those transitions are going to go.
NYU Stern Professor Scott Galloway has been watching AI's impact on jobs and recently came up with the catchy phrase of saying, I think of AI as corporate Ozempic. And that is, Ozempic goes into your brain and kind of switches off a switch that says you don't need more calories, even though your instincts are telling you to consume as many calories as possible, if you're fortunate enough to have salty or sugary or fatty food in front of you. And typically when you're a CEO and you're growing, the signal is, I need more calories, I need more people.
Musk, to a certain extent, by offering a minimum viable product with 20% of the staff of Twitter, and really Meta announcing what was the seminal earnings call, where they said, we've laid off 20% of our staff and meanwhile maintained growth of 23%, sending earnings up 70%. Everybody started thinking, I want the great taste of growth without the calories of more people. And AI is the Ozempic.
Galloway thinks that the AI transformation is going to be an extremely rough time for the average worker, but will greatly benefit elite talent, stating, "...if you're really good, this is really good news for you. America has essentially been optimized for the top 10%. AI is going to take the top 10% who work really hard and are really creative and know how to leverage these tools and just make them effing warriors. I mean, they're just going to be monsters."
Now, with all that, some people are starting to ask what we're going to do about it. Columnist John McLeon, writing in The Hill this week, argued that this is a problem screaming for a government solution. I share this one because I think it's a theme that we're going to see more and more of in exactly this type of op-ed in policy and political discourse.
When asked how to ensure the gains from AI are properly distributed across society in one interview, Nobel Prize-winning AI researcher Geoffrey Hinton said, "Socialism."
So who out there is optimistic? Well, one person is LinkedIn founder Reid Hoffman. He wrote a piece in the San Francisco Standard specifically aimed at a new graduating class. He writes, "...I actually think graduates have reason to be excited. I'm more bullish than most on the future of human labor. AI is reshaping how value is created. As the race for efficiency and scale heats up, jobs will be cut, industries will vanish, and job losses may outpace opportunities at least for now."
At the same time, the best way to minimize the effects of workplace disruption is to explore the opportunities that rapid change creates. While it's rational to look for ways to AI-proof one's future, it's also insufficient. What you really want is a dynamic career path, not a static one. Would it have made sense to internet-proof one's career in 1997 or YouTube-proof it in 2008? When new technology starts cresting, the best move is to surf that wave. The rest of his article advised graduates to master AI tools going way beyond prompt engineering and vibe coding.
But there is another potentially obvious answer, which is the new jobs that AI will create. Now this is extremely difficult to guess at.
The industries that AI enables might not exist right now, and so understanding what jobs are going to be in them is particularly difficult. There is also almost always a lag where, as Reid said, we're likely to see greater losses than gains, at least in the short term. This episode is brought to you by Blitzy. If you're a technology leader, here's something that probably sounds familiar. Your organization's competitive edge is buried in legacy code that desperately needs modernization,
But the resources required feel out of reach. That was the case for a global investment analysis firm. They needed to migrate 70,000 lines of complex MATLAB financial algorithms to Python, algorithms that drive investment decisions for trillions in assets. Their estimate? Months of high-cost specialized engineering work. Instead, they partnered with Blitzy. Blitzy's autonomous AI preserved mathematical precision and generated over 80% of the new codebase,
completing the migration with just five days of engineering time. They cut the timeline by 95% and saved 880 engineering hours. If your organization is facing similar modernization challenges, visit blitzy.com to schedule a consultation and discover how AI-powered development can transform your technical capabilities. Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security, they have to prove it.
Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easy and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
The proof is in the numbers. More than 10,000 global companies trust Vanta. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.
Today's episode is brought to you by Agency, an open source collective for interagent collaboration. Agents are, of course, the most important theme of the moment right now, not only on this show, but I think for businesses everywhere. And part of that is the expanded scope of what agents are starting to be able to do. While single agents can handle specific tasks, the real power comes when specialized agents collaborate to solve complex problems. However,
Right now, there is no standardized infrastructure for these agents to discover, communicate with, and work alongside one another. That's where Agency, spelled A-G-N-T-C-Y, comes in. Agency is an open-source collective building the Internet of Agents, a global collaboration layer where AI agents can work together. It will connect systems across vendors and frameworks, solving the biggest problems of discovery, interoperability, and scalability for enterprises.
With contributors like Cisco, CrewAI, LangChain, and MongoDB, Agency is breaking down silos and building the future of interoperable AI. Shape the future of enterprise innovation. Visit agency.org to explore use cases now. That's A-G-N-T-C-Y dot org. But the New York Times recently published this interesting piece called AI Might Take Your Job. Here Are 22 New Ones It Could Give You.
And while author Robert Capps spent a bunch of time talking with ChatGPT and deep research about what jobs might come, he also talked to a bunch of different experts to get ideas. Robert writes, "...if we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between AI's phenomenal capabilities and our very human needs and desires. It's not just a question of where humans want AI, but also where does AI want humans?"
Capps then breaks the categories into three areas, trust, integration, and taste. One of the roles that he discusses under trust is what he describes as an AI auditor, i.e. people who basically explain what the AI is doing and why, and document it for technical, explanatory, or liability purposes. Another somewhat related idea is an AI translator, quote, someone who understands AI well enough to explain its mechanics to others in the business, particularly to leaders and managers.
Another set of roles in this area are trust authenticators or trust directors, basically people who build chains of logic that can be used to support decisions made by AI or hybrid AI-human teams. And what about when AI interfaces with the real world? One of the challenges of AI, if it's doing the majority of the work, is that AI can't be held responsible for whatever mistakes it makes. And so does that mean that a human is going to need to sign off on AI work? Are there even going to be legal implications?
Ethan Mollick called a new legal guarantor role for AI a sin eater, who's basically the final stop in the chain of responsibility.
Finally, two more in this trust category. One idea is for a consistency coordinator. Consistency across a lot of different AIs is a really big challenge, and in many cases might require some amount of human oversight to make sure outputs are consistent. Capps writes, can a fashion house be assured that a particular dress will be accurately and consistently represented across dozens of AI-generated photographs?
The last trust idea is for an escalation officer, which is basically someone who can tell when AI is insufficient for a particular task or just not preferred for a particular task and can bring in humans.
Capps points to an essay by Daniel Susskind, who points out that there are roles that humans prefer other humans to perform. An example is that while AI has long been able to beat even the best chess players, professional chess between humans still remains popular. The idea of an escalation officer is someone who can step in when an AI feels inhuman, such as in customer service.
The next domain of potential new jobs is the domain of integration. One role, which frankly we are absolutely already seeing right now, might be called AI integrators, or experts who figure out how best to use AI in a company and then implement it.
Right now, this role is in many cases being performed by external partners. Superintelligent is effectively a version of this. But over time, you will start to see more of a balance where this skill set becomes integrated, to use the word right here, into the normal headcount set of most companies. Alongside that will be an AI plumber, basically a different type of IT specialist who is focused on fixing AI when it breaks.
What about when it comes to figuring out what new models to use? I would guess that part of the reason that many of you are here listening to me now is that there is so much going on in AI that it's hard to sort through and figure out what you actually have to be paying attention to. Well, Capps and his expert in this case, Sal Khan, the founder of Khan Academy, see this as an opportunity for something that might be called an AI assessor. Says Khan, these models are constantly changing. They're constantly making perceived improvements to features, but you need to evaluate whether you're regressing.
Another role related to the AI integrator is the AI trainer. The idea of an AI trainer is that when it comes to corporate use of AI, a lot of the value is in how good its access to the company's data is. The AI trainer is the, quote, person whose job it is to help AI find and digest the best, most useful data a company has, and then teach it to respond in accurate and helpful ways.
Because AI is going to interact so much with both employees and customers, it's a new, frankly, much more complex area for brand. Does this create a role for something like an AI personality director? Someone who can decide whether the personality of an AI that represents a company is supposed to be cloying, complimentary, sardonic, grumpy? Capps points out that AI personality could become as core to brand as its logo, and I actually think that is dead on.
There are also a set of roles that will come for industries that are more heavily regulated or just more complex. An example they point out is a drug compliance optimizer in the healthcare field, who would be a person who develops AI-driven systems to make sure patients take the right medications at the correct time, or an AI human evaluation specialist, sort of a cousin of that escalation officer role that we talked about before, but a person who figures out where in a particular care setting AI or a human is going to perform better.
The last category of new jobs from AI comes in the form of taste. Capps writes, it will remain a human's job to tell the AI what to do, but telling AI what to do requires having a vision for exactly what you want. In a future where most of us have access to the same generative tools, taste will become incredibly important.
Now, the example he uses here is, of course, recent AI Daily Brief guest Rick Rubin, who doesn't play instruments, doesn't know how to work a soundboard, and doesn't know anything formal about music. When Anderson Cooper asked him in an interview, what are you being paid for? Rubin said, the confidence I have in my taste and my ability to express what I feel has proven helpful for artists.
Capps then points out that the entire field of design is going to look different in AI land. He points out that some of the roles will have the same titles but be so fundamentally different that they actually are something new. Of product designers, for example, he writes that they'll have a much greater ability to own products from top to bottom. The role will not just be about the big picture, but also all the choices that bring that big picture to life.
He also sees this idea of designer as impacting or evolving certain roles now. Instead of being a writer, you might be an article designer. In film and TV, you might see story designers. In everything from marketing to games, you might see world designers. Where, as he puts it, a person fabricates an entire universe complete with fictional characters and locations. But what about outside of creative industries?
A human resource designer, he suggests, might more thoroughly control everything from training materials to detailed benefits and leave policies. And civil designers, who might be more focused on the creative part of the job than on the specific technical pieces, become a complement to civil engineers. In marketing, given how ubiquitous everyone's access to all these tools will be, you might have something called a differentiation designer, which would be a very complex, comprehensive sort of brand officer.
And so that is Capps's list. And I think that even if you don't agree with those, or if you're a little skeptical, or if you think AI might actually do some of those things that right now we imagine humans still doing, I think it's an important shift in the discourse to start talking in this way about what might come as well.
When I asked o3 what it thought some new job titles might be, a lot of them had to do with the management of the new agentic workforce. In the category of multi-agent coordination, it saw new roles for AI agent orchestrators who, quote, decompose business goals into chains of tasks, spin up specialized agents, set guardrails, tune cost versus latency, and schedule human review checkpoints, but also enterprise agent lifecycle managers, people who version, monitor, and retire fleets of agents across business units. You're already starting to see new leadership roles like chief AI officers and AI governance and compliance officers, but you're also likely to start to see new roles in risk, ethics, and compliance, like model behavior auditor or safety analyst. o3 points out that there's going to be a lot of opportunity at the intersection of the human and AI experience.
This is something that was obviously a big theme for Robert Capps as well. But we might just straight up see a human AI interaction designer that's specifically focused on how to understand the trade-offs and interactions and the handoffs between agents and people in the provisioning of different types of work. And I guess the last thing that I will say is that almost all of these in some ways have to do with the new things that are going to be required to manage all of these new agents and AIs.
In other words, they don't even get into yet the new industries that will be created because of the capabilities of AIs, which will themselves create a whole new set of roles. On tomorrow's episode, I'm going to talk about how I think that the addition of sound generation in Veo 3 was clearly, in retrospect, a key unlock and inflection point that has totally transformed how AI video is interacting with social media. We are seeing maybe a dozen new categories of content
exploding across TikTok and YouTube Shorts, and those could beget entire fields that don't have named roles yet. Ultimately, it is so much harder to imagine a future that we're just beginning to wrap our heads around than it is to understand the immediate term losses and changes today. But I do continue to believe that while we absolutely have to take seriously this transitional period and engage it from an economic and policy standpoint, there is really so much to look forward to in the future as well.
For now though, that's going to do it for today's AI Daily Brief. Until next time, peace.