Today, we're looking at a prediction of what the company of the future in the world of AGI might look like. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello, friends. Welcome back to a Long Reads episode of the AI Daily Brief. Today, we're looking at a great essay that I had somehow missed from Dwarkesh of the Dwarkesh Podcast from back in January. The post is called "What Fully Automated Firms Will Look Like," and the description is: everyone is sleeping on the collective advantages AI will have, which have nothing to do with raw IQ. They can be copied, distilled, merged, scaled, and evolved in ways humans simply can't. What I like about this piece, just to kick it off, is that it's enormously difficult for people to really imagine and extrapolate out to exponential futures, and this is a piece that tries to do that in the context of business. And while I am sure that when you hear me, or in this case my ElevenLabs clone, read this, you will not find yourself nodding along in agreement with everything, because so much is speculation. You'll probably have some different thoughts about how things might evolve.
But at least it creates a context for conversation, and maybe some ways to work backwards from there to here, to understand how we might want to evolve things in the here and now. So with that, I'm going to turn this over to my AI voice avatar once again from ElevenLabs, and then I'll come back and discuss it.
Even people who expect human-level AI soon are still seriously underestimating how different the world will look when we have it. Most people are anchoring on how smart they expect individual models to be, i.e., they're asking themselves, what would the world be like if everyone had a very smart assistant who could work 24/7? Everyone is sleeping on the collective advantages AIs will have, which have nothing to do with raw IQ, but rather with the fact that they are digital. They can be copied, distilled, merged, synchronized,
scaled, and evolved in ways humans simply can't. What would a fully automated company look like, with all the workers and all the managers as AIs? I claim that such AI firms will grow, coordinate, improve, and be selected for at unprecedented speed. This essay is not a prediction of what GPT-5 will be doing, nor about emulations of existing humans. Rather, I'm trying to imagine what the world will look like once we actually have AGIs, the descendants of LLMs that have gotten so good that they can do basically anything any human can do. Currently, firms are extremely bottlenecked in hiring and training talent. But if your talent is an AI, you can copy it a stupid number of times. What if Google had a million AI software engineers? Not untrained, amorphous workers, but the AGI equivalents of Jeff Dean and Noam Shazeer, with all their skills, judgment, and tacit knowledge intact. This ability to turn capital into compute,
and compute into equivalents of your top talent is a fundamental transformation. Since you can amortize the training cost across thousands of copies, you could sensibly give these AIs ever deeper expertise. PhDs in every relevant field, decades of business case studies, intimate knowledge of every system and code base the company relies on. The power of copying extends beyond individuals to entire teams.
Small, previously successful teams, think PayPal Mafia, early SpaceX, the Traitorous Eight, can be replicated to tackle a thousand different projects simultaneously. It's not just about replicating star individuals, but entire configurations of complementary skills that are known to work well together. The unit of replication becomes whatever collection of talent has proven most effective. Copying will transform management even more radically than labor. It will enable a level of micromanagement that makes founder mode look quaint.
Human Sundar simply doesn't have the bandwidth to directly oversee 200,000 employees, hundreds of products, and millions of customers. But AI Sundar's bandwidth is capped only by the number of TPUs you give him to run on. All of Google's 30,000 middle managers can be replaced with AI Sundar copies. Copies of AI Sundar can craft every product strategy, review every pull request, answer every customer service message, and handle all negotiations, everything flowing from a single coherent vision.
There is no principal-agent problem wherein employees are optimizing for something other than Google's bottom line, or simply lack the judgment needed to decide what matters most. A company of Google's scale can run much more as the product of a single mind, the articulation of one thesis, than is possible now.

Section: Merge.
Think about how limited a CEO's knowledge is today. How much does Sundar Pichai really know about what's happening across Google's vast empire? He gets filtered reports and dashboards, attends key meetings, and reads strategic summaries. But he can't possibly absorb the full context of every product launch, every customer interaction, every technical decision made across hundreds of teams. His mental model of Google is necessarily incomplete.
Now imagine Mega Sundar, the central AI that will direct our future AI firm. Just as Tesla's full self-driving model can learn from the driving records of millions of drivers, Mega Sundar might learn from everything seen by the distilled Sundars. Every customer conversation, every engineering decision, every market response. Unlike Tesla's FSD, this doesn't have to be a naive process of gradient updating and averaging.
Mega Sundar will absorb knowledge far more efficiently, through explicit summaries, shared latent representations, or even surgical modification of the weights to encode specific insights. The boundary between different AI instances starts to blur. Mega Sundar will constantly be spawning specialized distilled copies and reabsorbing what they've learned on their own. Models will communicate directly through latent representations, similar to how the hundreds of different layers in a neural network like GPT-4 already interact.
So, approximately no miscommunication, ever again. The relationship between Mega Sundar and its specialized copies will mirror what we're already seeing with techniques like speculative decoding, where a smaller model makes initial predictions that a larger model verifies and refines. Merging will be a step change in how organizations can accumulate and apply knowledge.
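To make the speculative decoding pattern concrete, here is a minimal Python sketch. Both model functions are toy lookup tables invented for illustration, not real LLMs; the point is only the core loop, where a cheap draft model guesses several tokens ahead and an expensive verifier keeps the agreeing prefix and overrides the first mistake.

```python
# Toy speculative decoding: a cheap draft model proposes a run of tokens,
# and an expensive verifier keeps the longest agreeing prefix.
# Both "models" are hypothetical lookup tables, purely for illustration.

def draft_model(context: list[str]) -> str:
    guesses = {"the": "quick", "quick": "brown", "brown": "fox", "fox": "jumps"}
    return guesses.get(context[-1], "<eos>")

def verifier_model(context: list[str]) -> str:
    truth = {"the": "quick", "quick": "brown", "brown": "fox", "fox": "sleeps"}
    return truth.get(context[-1], "<eos>")

def speculative_decode(context: list[str], draft_len: int = 4) -> list[str]:
    # 1. The draft model cheaply speculates `draft_len` tokens ahead.
    speculated = list(context)
    for _ in range(draft_len):
        speculated.append(draft_model(speculated))
    # 2. The verifier checks each speculated token in order, accepting
    #    matches and substituting its own token at the first disagreement.
    accepted = list(context)
    for tok in speculated[len(context):]:
        expected = verifier_model(accepted)
        if tok == expected:
            accepted.append(tok)
        else:
            accepted.append(expected)  # verifier overrides; stop here
            break
    return accepted

print(speculative_decode(["the"]))
# ['the', 'quick', 'brown', 'fox', 'sleeps']
```

The relationship is the one the essay describes: most of the work happens in the small model, while the large model's compute is spent only on verification and correction.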
Humanity's great advantage has been social learning, our ability to pass knowledge across generations and build upon it. But human social learning has a terrible handicap. Biological brains don't allow information to be copy-pasted, so you need to spend years, and in many cases decades, teaching people what they need to know in order to do their job. Look at how top achievers in field after field are getting older and older, maybe because it takes longer to reach the frontier of accumulated knowledge.
Or consider how clustering talent in cities and top firms produces such outsized benefits simply because it enables slightly better knowledge flow between smart people. Future AI firms will accelerate this cultural evolution through two key advantages, massive population size and perfect knowledge transfer. With millions of AGIs, automated firms get so many more opportunities to produce innovations and improvements, whether from lucky mistakes, deliberate experiments, de novo inventions, or some combination.
As Joseph Henrich explains in The WEIRDest People in the World: "Cumulative cultural evolution, including innovation, is fundamentally a social and cultural process that turns societies into collective brains. Human societies vary in their innovativeness, due in large part to the differences in the fluidity with which information diffuses through a population of engaged minds and across generations."
Historical data going back thousands of years suggest that population size is the key input for how fast your society comes up with more ideas. AI firms will have population sizes that are orders of magnitude larger than today's biggest companies. And each AI will be able to perfectly mind meld with every other, from the bottom to the top of the org chart. AI firms will look from the outside like a unified intelligence that can instantly propagate ideas across the organization, preserving their full fidelity and context.
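As a rough illustration of the population-size point, here is a toy "collective brain" model in Python. The hit rate, fidelity values, and update rule are invented assumptions, not Henrich's actual model; it only shows how population size and transmission fidelity compound into very different stocks of accumulated ideas.

```python
# Toy model: the stock of ideas grows with population size (more minds
# making discoveries) and transmission fidelity (fewer ideas lost between
# generations). All parameters are invented for illustration.

def idea_stock(population: int, fidelity: float, generations: int = 50,
               hit_rate: float = 1e-4) -> float:
    stock = 0.0
    for _ in range(generations):
        new_ideas = population * hit_rate       # expected discoveries
        stock = (stock + new_ideas) * fidelity  # lossy transmission
    return stock

for pop, fid in [(10_000, 0.90), (10_000, 0.99), (10_000_000, 1.00)]:
    print(f"pop={pop:>10,}  fidelity={fid:.2f}  ideas ~ {idea_stock(pop, fid):,.0f}")
```

With lossy transmission, the idea stock plateaus no matter how long you wait; with a huge population and perfect copying, it grows without bound, which is the regime the essay claims AI firms will occupy.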
Every bit of tacit knowledge from millions of copies gets perfectly preserved, shared, and given due consideration.

Today's episode is brought to you by Superintelligent, and more specifically, Super's Agent Readiness Audits. If you've been listening for a while, you have probably heard me talk about this, but basically, the idea of the Agent Readiness Audit is that this is a system that we've created to help you benchmark and map opportunities in your organizations where agents could specifically help you solve your problems and create new opportunities, in a way that, again, is completely customized to you. When you do one of these audits, what you're going to do is a voice-based agent interview, where we work with some number of your leadership and employees to map what's going on inside the organization and to figure out where you are in your agent journey. That's going to produce an agent readiness score that comes with a deep set of explanations: strengths, weaknesses, key findings, and of course, a set of very specific recommendations, which we then have the ability to help you go find the right partners to actually fulfill. So if you are looking for a way to jumpstart your agent strategy, send us an email at agent@besuper.ai, and let's get you plugged into the agentic era.
Today's episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context, which, if you don't know exactly what that means yet, do not worry, we're going to explain, and it's awesome. Blitzy is used alongside your favorite coding copilot as your batch software development platform for the enterprise, and it's meant for those who are seeking dramatic development acceleration on large-scale codebases. Traditional copilots help developers with line-by-line completions and snippets, but Blitzy works ahead of the IDE, first documenting your entire codebase, then deploying more than 3,000 coordinated AI agents working in parallel to batch-build millions of lines of high-quality code for large-scale software projects. So whether it's codebase refactors, modernizations, or bulk development of your product roadmap, the whole idea of Blitzy is to provide enterprises dramatic velocity improvement. To put it in simpler terms, for every line of code eventually provided to the human engineering team, Blitzy will have written it hundreds of times, validating the output with different agents to get the highest-quality code to the enterprise in batch. Projects that would normally require dozens of developers working for months can now be completed with a fraction of the team in weeks, empowering organizations to dramatically shorten development cycles and bring products to market faster than ever. If your enterprise is looking to accelerate software development, whether it's large-scale modernization, refactoring, or just increasing the rate of your SDLC, contact Blitzy at blitzy.com, that's B-L-I-T-Z-Y dot com, to book a custom demo, or just press "get started" and start using the product right away.

Section: Scale.

The cost to have an AI take a given role will become just the amount of compute the AI consumes. This will change our understanding of which roles are scarce.
Future AI firms won't be constrained by what's scarce or abundant in human skill distributions. They can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool! Once you've got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent, it's just compute.
So what becomes expensive in this world? Roles which justify massive amounts of test-time compute. The CEO function is perhaps the clearest example.
Would it be worth it for Google to spend $100 billion annually on inference compute for Mega Sundar? Sure. Just consider what this buys you. Millions of subjective hours of strategic planning, Monte Carlo simulations of different five-year trajectories, deep analysis of every line of code and technical system, and exhaustive scenario planning. Imagine Mega Sundar contemplating, how would the FTC respond if we acquired eBay to challenge Amazon? Let me simulate the next three years of market dynamics.
Ah, I see the likely outcome. I have five minutes of data center time left. Let me evaluate 1,000 alternative strategies. The more valuable the decisions, the more compute you'll want to throw at them. A single strategic insight from Mega Sundar could be worth billions. An overlooked risk could cost tens of billions. However many billions Google should optimally spend on inference for Mega Sundar, it's certainly more than one.
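To see why it could be rational to pour billions of dollars of inference into one decision-maker, here is a hedged Python sketch of buying better decisions with test-time compute: sample candidate strategies, spend Monte Carlo rollouts estimating each one's value, and pick the winner. Every function and number here is a hypothetical stand-in.

```python
import random

# Toy model: more compute (candidates x rollouts) buys a better strategy.
# Latent qualities, payoff noise, and budgets are invented for illustration.

def simulate_outcome(strategy_quality: float, rng: random.Random) -> float:
    # One noisy Monte Carlo rollout of a strategy's payoff.
    return rng.gauss(strategy_quality, 5.0)

def pick_strategy(n_candidates: int, rollouts_each: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    # True quality of each candidate (hidden from the decision-maker).
    qualities = [rng.gauss(0.0, 1.0) for _ in range(n_candidates)]
    # Estimate each candidate by averaging rollouts, then pick the max.
    estimates = [
        sum(simulate_outcome(q, rng) for _ in range(rollouts_each)) / rollouts_each
        for q in qualities
    ]
    return qualities[estimates.index(max(estimates))]

for n, r in [(10, 3), (100, 30), (1000, 300)]:
    print(f"{n:>5} candidates x {r:>3} rollouts -> picked true quality "
          f"{pick_strategy(n, r):+.2f}")
```

More candidates raise the ceiling on the best available strategy, and more rollouts make the estimates reliable enough to actually find it; if one decision is worth billions, the compute bill clears easily.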
Section: Distillation.

What might distilled copies of AI Sundar or AI Jeff be like? Obviously, it makes sense for them to be highly specialized, especially when you can amortize the cost of that domain-specific knowledge across all copies. You can give each distilled data center operator a deep technical understanding of every component in the cluster, for example. I suspect you'll see a lot of specialization in function, tacit knowledge, and complex skills, because they seem expensive to sustain in terms of parameter count.
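For the technically inclined, here is roughly what that amortization looks like in code: a minimal PyTorch sketch of knowledge distillation, in which a small specialist student is trained to match a large generalist teacher's output distribution via the standard KL-divergence recipe. The model sizes and random inputs are tiny stand-ins, nothing resembling production systems.

```python
import torch
import torch.nn.functional as F

# Minimal knowledge-distillation sketch: the student learns to imitate the
# teacher's softened output distribution on (stand-in) domain data.

vocab, hidden = 1000, 64
teacher = torch.nn.Sequential(                 # big generalist (frozen)
    torch.nn.Linear(hidden, 4 * hidden),
    torch.nn.ReLU(),
    torch.nn.Linear(4 * hidden, vocab),
)
student = torch.nn.Linear(hidden, vocab)       # small specialist
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(200):
    x = torch.randn(32, hidden)                # stand-in domain activations
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_logp = F.log_softmax(student(x) / temperature, dim=-1)
    # KL(teacher || student): pull the student toward the teacher's judgments.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Train once, then stamp out copies: the expensive part, the teacher and the distillation run itself, is paid one time, while every deployed copy runs at the student's price.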
But I think the different models might share a lot more factual knowledge than you might expect. It's true that plumber-GPT doesn't need to know much about the Standard Model in physics, nor does physicist-GPT need to know why the drain is leaking. But the cost of storing raw information is so unbelievably cheap, and it's only decreasing, that Llama-7B already knows more about the Standard Model and leaky drains than any non-expert. If human-level intelligence is more than 1 trillion parameters, is it so much of an imposition to keep around what will, at the limit, be much less than 7 billion parameters to have most known facts right in your model? Another helpful data point here is that good and featured WikiText is less than 5 megabytes. I don't see why all future models, except the esoteric ones, the digital equivalent of tardigrades, wouldn't at least have WikiText down.
Section: Evolve.

The most profound difference between AI firms and human firms will be their evolvability. As Gwern Branwen observes: why do we not see exceptional corporations clone themselves and take over all market segments? Why don't corporations evolve such that all corporations or businesses are now the hyper-efficient descendants of a single ur-corporation from 50 years ago, all other corporations having gone extinct in bankruptcy or been acquired? Why is it so hard for corporations to keep their culture intact and retain their youthful lean efficiency? Or, if avoiding aging is impossible, why not copy themselves or otherwise reproduce to create new corporations like themselves? His answer: "Corporations certainly undergo selection for kinds of fitness and do vary a lot. The problem seems to be that corporations cannot replicate themselves. Corporations are made of people, not interchangeable, easily copied widgets or strands of DNA. The corporation may not even be able to replicate itself over time, leading to scleroticism and aging." The scale of difference between currently existing human firms and fully automated firms will be like the gulf in complexity between prokaryotes and eukaryotes. Prokaryotes, like bacteria, are not only remarkably simple, but have barely changed over their 3-billion-year history, whereas eukaryotes rapidly scaled up in complexity
and gave rise to all the other astonishing organisms with trillions of cells working together in tight-knit coordination. This evolvability is also the key difference between AI and human firms. As Gwern points out, human firms simply cannot replicate themselves effectively. They're made of people, not code that can be copied. They can't clone their culture, their institutional knowledge, or their operational excellence.
AI firms can. If you think human Elon is especially gifted at creating hardware companies, you simply can't spin up 100 Elons, have them each take on a different vertical, and give them each $100 million in seed money. As much of a micromanager as Elon might be, he's still limited by his single human form. But AI Elon can have copies of himself design the batteries, be the car mechanic at the dealership, and so on.
And if Elon isn't the best person for the job, the person who is can also be replicated to create the template for a new descendant organization.

Section: Takeover.
So then the question becomes, if you can create Mr. Meeseeks for any task you need, why would you ever pay some markup for another firm when you can just replicate them internally instead? Why would there even be other firms? Will the first firm that figures out how to automate everything just form a conglomerate that takes over the entire economy?
Ronald Coase's theory of the firm tells us that companies exist to reduce transaction costs so that you don't have to go rehire all your employees and rent a new office every morning on the free market. His theory states that the lower the intra-firm transaction costs, the larger the firms will grow.
500 years ago, it was practically impossible to coordinate knowledge work across thousands of people and dozens of offices, so you didn't get very big firms. Now you can spin up an arbitrarily large Slack channel or HR database, so firms can get much bigger. AI firms will lower transaction costs even further relative to human firms; it's hard to beat shooting lossless latent representations to an exact copy of yourself for communication efficiency. So firms probably will become much larger than they are now.
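To see the Coasean logic in miniature, here is a toy Python model. The cost curves are pure assumptions, a power-law coordination overhead inside the firm versus a flat market transaction cost; the only point is that cheaper internal coordination implies dramatically larger equilibrium firms.

```python
# Toy Coasean model: a firm keeps growing while adding a worker internally
# is cheaper than buying the equivalent coordination on the market.
# The power-law overhead and all constants are invented for illustration.

def optimal_firm_size(coordination_cost: float, market_cost: float = 1.0,
                      max_size: int = 10**8) -> int:
    size = 1
    while size < max_size:
        marginal_internal = coordination_cost * (size ** 0.25)
        if marginal_internal >= market_cost:
            break
        size += max(1, size // 10)  # grow in ~10% increments
    return size

# Cheaper internal coordination (e.g., lossless latent-space communication
# between copies) -> much larger equilibrium firms.
for c in [0.10, 0.03, 0.01]:
    print(f"coordination cost {c:.2f} -> firm size ~{optimal_firm_size(c):,}")
```

Dropping the per-worker coordination cost by an order of magnitude multiplies the equilibrium firm size by several orders of magnitude in this toy, which is the directional claim the essay is making.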
But it's not inevitable that this ends with one gigafirm which consumes the entire economy. As Gwern explains in his essay, any internal planning system needs to be grounded in some kind of outer loss function, a ground-truth measure of success. In a market economy, this comes from profits and losses. Internal planning can be much more efficient than market competition in the short run, but it needs to be constrained by some slower but unbiased outer feedback loop.
A company that grows too large risks having its internal optimization diverge from market realities. That said, the balance may shift as AI systems improve. As corporations become more software-like, with perfect replication of successful components and faster feedback loops, we may see much larger and more efficient firms than were previously possible. The market continues to serve as the grounding outer loop.
How does the firm convert trillions of tokens of data from customers, markets, news, etc. every day into future plans, new products, and the like?
Does the board make all the decisions Politburo-style, and use $10 billion of inference to run Monte Carlo tree search on different one-year plans? Or do you run some kind of evolutionary process on different departments, giving them more capital and compute-labor based on their performance? These are all what we would today call culture. Markets facilitate an evolutionary process which selects not only goods and services, but the institutions that are best at turning the world into valuable goods and services.
I think this will continue.
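For concreteness, here is a toy Python sketch of that evolutionary option floated above: departments receive compute in proportion to noisy performance, and the best performer's "weights" are periodically cloned over the worst's. All the dynamics are invented for illustration.

```python
import random

# Toy evolutionary allocation: compute follows performance, and the best
# department's "weights" overwrite the worst's each round.

def run_evolution(n_departments: int = 8, rounds: int = 20, seed: int = 0):
    rng = random.Random(seed)
    skill = [rng.uniform(0.5, 1.5) for _ in range(n_departments)]
    compute = [1.0] * n_departments
    for _ in range(rounds):
        # Noisy performance signal: skill x compute, plus luck.
        perf = [s * c * rng.uniform(0.8, 1.2) for s, c in zip(skill, compute)]
        total = sum(perf)
        # Next round's compute budget is proportional to performance.
        compute = [n_departments * p / total for p in perf]
        # Clone the best department over the worst -- the AI-firm move:
        # copying is free, so selection acts directly on whole teams.
        best, worst = perf.index(max(perf)), perf.index(min(perf))
        skill[worst] = skill[best]
    return skill, compute

skill, compute = run_evolution()
print([round(s, 2) for s in skill])    # skills converge toward the best
print([round(c, 2) for c in compute])  # compute concentrates on winners
```

The selection loop that markets run slowly on whole companies gets run quickly, internally, on copyable departments, which is the essay's core point about evolvability.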
All right, back to real NLW here. One of the big questions with all of this is going to be human friction. Human friction will persist, and it will change how this AI evolution plays out. For example, a huge amount of the evolution articulated in that post might be slowed down by regulation. I've said before, and I continue to be of the opinion, that we'll probably see government incentives to keep people employed, which will make people think differently about how they design organizations.
A second human friction will be values prioritization. Once again, this scenario exists in a pure capitalist model where efficiency, cost of provision, and all these sorts of things are the only values that matter. But that's not the world that we live in. Even the purest capitalist systems have other types of values from other value systems that interact and compete, and I think we'll see even more of that in the future.
And finally, there's the human friction of dealing with and servicing human customers. Ultimately, even a fully AI company is going to have to deal with real people, with all their foibles and irregularities, and I think that will impact how this all shakes out. And yet, I still think there are some really interesting things here that we can work backwards from.
For example, this idea that we're going to be managing teams of tens of thousands of agents or digital employees is something I've talked about a lot. And I do think that parts of almost all of these notions of AIs that can be copied, distilled, merged, scaled, and evolved will come to bear even if humans are still involved.
Overall, it's a super interesting post. I really appreciate Dwarkesh sharing it. If you haven't yet, check out the Dwarkesh Podcast; it's basically the thinking person's Joe Rogan, with a much deeper research bent and a lot of focus on AI. For now, though, that is going to do it for today's AI Daily Brief. I appreciate you listening, as always. And until next time, peace.