Today on the AI Daily Brief, the Doctor Strange theory of AI agent work. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello, friends. It is the weekend, meaning we are in a long reads episode. However, today we're doing something a little bit different. First of all, when I looked around for op-eds that got me fired up this week, I really just didn't find anything. I think people might be too busy playing with these new models to be spending much time writing. Secondly, it's also just been a weird week logistically. It started in Mexico and ended in the Adirondack Mountains.
And so it's as good a time as any to do something a little bit different. For this week's long reads episode, I'm actually going to be riffing off one of my own pieces. This is something I shared on Twitter and LinkedIn about a week ago, and which I find myself coming back to, at least conceptually, pretty frequently right now. So I am just going to riff my way through this. The TLDR of this idea is that I am more and more convinced that we are underestimating rather than overestimating just how profound a disruption agents are going to be.
I think that most people, even who are really considering agents, are thinking about them as one-to-one replacements for existing work rather than ushering in totally new and as yet unforeseen opportunities to reconceptualize work on a more structural level.
Now, the reference point for this, like I said, it's called the Doctor Strange theory, is Benedict Cumberbatch's Doctor Strange character in Avengers: Infinity War and Endgame. If for some reason you haven't seen those movies yet, and you would still like to, skip ahead a little bit. This is officially your spoiler alert, years later.
Long story short, though, in Avengers: Infinity War, the entire galaxy is facing the threat of Thanos. Thanos has gone around capturing the Infinity Stones, each of which has a different power, and put them together in a gauntlet that will allow him to snap his fingers and remove 50% of all life across the universe. He's doing this as a big Malthusian reset, believing himself to be benevolent and to be giving the universe its best chance for long-term survival. The Avengers are, of course, fighting him tooth and nail.
They do not buy that it's better that half of the people they know are suddenly extinguished from the Earth, and every other planet as well, but things aren't looking good. Indeed, at one point, mystical arts practitioner Dr. Stephen Strange looks into different possible futures across the multiverse to understand in how many different scenarios the Avengers would beat Thanos. Ultimately, Strange views 14,000,605 possible futures, and out of all of those, the Avengers only win in one.
That sets up the next two movies' worth of conflict. And for our purposes, it sets up the different ways that I think agent work might happen in the future. So what the hell am I talking about? Is this just ridiculous clickbait?
Like I said, right now, I think people are imagining agents in a couple of very basic and specific ways. I think we imagine agents replacing single tasks or workflows. I think people who are really convinced of the power of agents are then also imagining those workflows and single tasks orchestrated together to replace groups of work.
potentially even up to and including replacing specific jobs that exist now, especially if you consider jobs as just collections of different tasks and workflows. And I think even many of the most bullish people on agents just imagine this sort of one-to-one replacement. Certainly, when we talk to enterprises about how excited they are about agents, part of the calculus is the ROI determination, and the simplicity of that ROI case: if an agent works, it does existing work for less cost than the human equivalent.
Now, I should also say as a caveat here that that does not mean that every enterprise out there is just looking forward to firing everyone it has on staff. There is going to be a full range of different ways that companies take advantage of and reinvest those savings. And I've spoken before about how I think that the companies that actually win this transition will be those who invest their savings not in things like stock buybacks, but instead in a totally differentiated ability to service customers, develop new products, and so on. But ultimately, that is a topic for a different conversation.
The point is still that I think most people, even who are very bullish about agents, are thinking about them as one-to-one replacements. Person did a thing before, agent does a thing now. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.
Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and improve security in real time.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agent-based platforms.
agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year: an agent readiness and opportunity audit. Over the course of a couple of quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at besuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Now, for those who are zooming a little farther out, some are imagining multi-agent systems. So instead of just a social media message writer, they're imagining a whole agentic marketing department.
An agent that writes for Twitter slash X, an agent that creates images for Instagram, maybe a campaign organizer that coordinates that output together. And yet, I still think even this is underselling how it's actually going to work. The question to ask, I think, is not: could an agent do this work at a similar or even better level than the human who does it now?
I think a really opportune question to ask is, what if we had 100 agents doing this work? Or 1,000? Or 10,000?
So let's bring this into the realm of examples. What if, instead of having one agent writing tweets or X posts, you had 100 agents writing tweets? Maybe 20 of those 100 were explicitly trained on your company's brand voice but had a different emphasis, a different tone, a different style for different audiences. Among the other 80 tweet-writing agents, maybe some would be modeled after your competitors' brand voices. Still others are modeled off of leading brands in other industries.
And then, maybe just for funsies, some are designed to mimic famous writers. Now let's imagine that the campaign-planning agent we mentioned before has given these tweet-writing agents a particular goal. The 100 writer agents all go off writing their best tweets to achieve that goal, each in the particular voice and style it's been trained to execute, and share them into a system where they are then forwarded on to another group of agents whose job is to analyze tweets in the context of different audiences.
Maybe those analyzer agents have different audiences that they're particularly attuned to. Maybe they have the context of other conversations that are trending across the platform. Those analyzer agents then go off and run a wargame-style testing scenario where they try all of the writer agents' tweets against a bunch of different audience scenarios, ultimately producing stack-ranked results with analysis that gets sent back to the campaign planner, who makes the final suggestion. And here you see we're basically doing the Doctor Strange, zoom-into-the-multiversal-future-and-see-what-happens-in-every-scenario vision of agent work.
It's enabled by the fact that the cost of intelligence has basically fallen to zero, so you can run all of these experiments almost incidentally. Then, of course, whatever suggestion the analyzers make for the tweet is paired with an Instagram post created in the same way, email copy created the same way, a full campaign action plan created the same way, and all of this comes together as an overall proposal.
The human in the loop looks at maybe the top three options presented, makes the final call, and approves and authorizes the various outputs, probably, by the way, with software that's purpose-built for helping people interact with and manage these new armies of agents. Now let's imagine this type of behavior across a huge array of business functions, like all of them. And also remember, I'm using this 100 number as a simplification; I'm not even really sure what the upper bound on the number of agents would be.
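For the technically inclined, the fan-out, wargame, and stack-rank loop described above can be sketched in a few lines. This is purely illustrative: the names here (draft_tweet, pseudo_score, run_campaign) are hypothetical, and the deterministic stubs stand in for what would actually be LLM calls in a real system.

```python
import hashlib

def draft_tweet(persona: str, goal: str) -> str:
    """Stub 'writer agent': produce one candidate tweet in a given brand voice."""
    return f"[{persona}] {goal}"

def pseudo_score(audience: str, draft: str) -> float:
    """Stub 'analyzer agent': a deterministic stand-in for wargaming one draft
    against one audience, returning a score in [0, 1)."""
    digest = hashlib.sha256(f"{audience}|{draft}".encode()).digest()
    return digest[0] / 256

def run_campaign(goal: str, personas: list[str],
                 audiences: list[str], top_k: int = 3) -> list[str]:
    """Fan out to N writer agents, score every draft against every audience,
    and return the stack-ranked top_k options for the human in the loop."""
    drafts = [draft_tweet(p, goal) for p in personas]  # 100 writers, one goal
    def total(draft: str) -> float:
        # Wargame the draft across all audience scenarios and sum the results.
        return sum(pseudo_score(a, draft) for a in audiences)
    return sorted(drafts, key=total, reverse=True)[:top_k]

# 100 writer personas, 3 target audiences, top 3 options surfaced for approval.
options = run_campaign(
    "Announce our new product",
    personas=[f"voice-{i}" for i in range(100)],
    audiences=["developers", "executives", "press"],
)
```

The point of the sketch is the shape, not the stubs: drafting is cheap and parallel, evaluation is just another agent, and the human only ever sees the top of the ranking.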
Whatever the answer is, I'm fairly sure it's not going to be technologically or cost constrained. The point is that the future we're heading into is not one where the work humans do now is simply divided up between humans and agents. It's one where agents are available at massive scale, specialized for an infinite variety of highly specific purposes, and orchestrated together with new types of tools, platforms, and agents that are themselves specifically designed for that orchestration. That means work will look totally different than it does now.
What does it mean that we're going to have the opportunity to have hundreds or thousands of agent employees where before we might have had just a single human? I don't know. I don't think anyone knows. But I think that's the type of question that we need to be asking if we really want to try to understand the future. Of course, this could go an entirely separate way.
It could be that all this wargaming is ultimately less effective than just finding people with good taste. The one thing I am sure of is that, no matter how hyped agents feel right now, we are wildly underestimating just how different work will look in five years compared to today. Anyways, friends, that is my Doctor Strange theory of AI agent work. Hope this was a fun one for your weekend. Appreciate you listening, as always. And until next time, peace.