Today on the AI Daily Brief, the US government knows that AGI is coming. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello, friends. Welcome back to another AI Daily Brief. Quick note, I'm traveling today, so we are doing a little bit of a different episode. This will be audio only. It will not include the headlines. But tomorrow we will be back with our normal episodes. And there is a lot of interesting stuff going on out there. We might even have to do a bit of an extended headlines episode tomorrow. But for now, let's dig into this very interesting article that is starting to get a lot of chatter and discussion.
Today we're discussing a really interesting podcast and article from Ezra Klein that first appeared in the New York Times. It's called The Government Knows AGI is Coming, and it's getting a huge amount of attention all across the internet. Now for background context on this, in 2023, after ChatGPT launched, there was a lot of U.S. government engagement with this new area. We had Sam Altman testifying before Congress.
We had the Biden executive order. And of course, that's to say nothing of all the government efforts around the world where there were safety summits and all the like.
Now, in 2024, things really fell off in that conversation. And it's hard to see that as anything other than attributable to the U.S. presidential election. Behind the scenes throughout 2024, there was a bit of a battle between the e/accs, the accelerationists, on the one hand, and the AI safety folks on the other, with the specific battle being around the California legislation that would have forced a whole new set of rules on companies operating in that state.
Now, ultimately, that legislation passed the state legislature, but was vetoed by California Governor Gavin Newsom. And so all of this is to say that lurking behind the scenes, there are clearly some high-level policy discussions happening, but they have absolutely not been front and center. Even over in the Trump administration, where David Sacks was declared the AI and crypto czar, so far, much more of government policy, or at least executive action, has been focused on the crypto side of that office rather than the AI side.
All of this is background for this piece from Ezra Klein, where he interviewed Ben Buchanan. Buchanan was the special advisor for AI in the Biden White House. The setup from Klein was, quote, For the last couple of months, I've had this strange experience. Person after person from artificial intelligence labs, from government, has been coming to me saying, it's really about to happen. We're about to get artificial general intelligence.
Klein argued that there has been a profound shift, in fact. The people who very recently believed that AGI was still 5 to 15 years away have suddenly updated their timelines. Most of those folks now think that AGI will arrive within the next two to three years, and almost certainly within Donald Trump's second term in office. Klein added, "...they believe it because of the products they're releasing right now and what they're seeing inside the places they work. And I think they're right."
With that background, Klein's discussion with Buchanan explored what it would mean for the government, for individuals, and for society more broadly. Klein wrote,
We don't know how labor markets will respond. We don't know which country is going to get there first. We don't know what it will mean for war. We don't know what it will mean for peace. And while there is so much going on in the world to cover, I do think there's a good chance that, when we look back on this era in human history, AI will have been the thing that matters.
To take a big step back here, part of why this is worth covering on this show is that one of the beats that I keep track of is not just technology advancement, but the meta-level societal shifts that go with it. If you spend any time in media right now, you can feel the resonance of this piece from Klein. It is absolutely starting to filter its way more deeply into discussions where AI isn't necessarily a daily conversation, in a way that could have some meaningful implications.
So digging into the interview a little bit more deeply, one big theme of it was just reinforcing this idea that AGI is coming and that it represents a serious change that needs to be prepared for. Buchanan pointed out that whereas some people have dismissed it as just corporate hype, that is, in his estimation, decidedly not the case. One of the big issues he pointed out is the disparity in understanding of just how fast AI is getting better between people who have maybe just used ChatGPT a little bit and those who are actually using cutting-edge tools like coding agents or deep research.
Now, coming at this from the government perspective, Buchanan related this to previous big power struggles. He commented, I do think there are profound economic, military, and intelligence capabilities that would be downstream of getting to AGI or transformative AI. And I do think it's fundamental for U.S. national security that we continue to lead in AI.
He referenced the classic speech from President Kennedy regarding space travel, where Kennedy said that we do these things not because they're easy, but because they are hard. However, he pulled out another section of the speech where Kennedy had said, for space science, like nuclear science and technology, has no conscience of its own. Whether it will become a force for good or ill depends on man. Only if the United States occupies a position of preeminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war.
Buchanan believes this sentiment applies just as strongly to AI, though he did caveat it, adding, quote, which is not to say we don't want to work with the Chinese.
So far, all of this is pretty standard. This is just opinion, maybe an informed opinion, but still an opinion nonetheless. But then he did get into some of his examples that have helped shape his perspective on this. For example, Buchanan described a recent DARPA project to investigate AI's capability to carry out cyber attacks on rival networks.
He commented, I would not want to live in a world in which China has that capability on offense, defense, and cyber, and the United States does not. And I think that's true in a bunch of different domains that are core to national security competition. Buchanan pointed out that when it comes to U.S. government involvement, part of the challenge is that AI has been developed largely in the private sector. If you look at previous technology leaps, the internet, nuclear, microprocessors, radar, and many more, the Defense Department had a fundamental role in driving development, or at least funding development,
which Buchanan noted gave the U.S. government a capacity to shape where the technology goes that by default we don't have in AI.

Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.
Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

There is a massive shift taking place right now from using AI to help you do your work
to deploying AI agents to just do your work for you. Of course, in that shift, there is a ton of complication. First of all, of these seemingly thousands of agents out there, which are actually ready for primetime? Which can do what they promise? And beyond even that, which of these agents will actually fit in my workflows? What can integrate with the way that we do business right now? These are the questions at the heart of the Superintelligent agent readiness audit.
We've built a voice agent that can scale across your entire team, mapping your processes, better understanding your business, figuring out where you are with AI and agents right now in order to provide recommendations that actually fit you and your company.
Our proprietary agent consulting engine and agent capabilities knowledge base will leave you with action plans, recommendations, and specific follow-ups that will help you make your next steps into the world of a new agentic workforce. To learn more about Super's agent readiness audit, email agent at bsuper.ai or just email me directly, nlw at bsuper.ai, and let's get you set up with the most disruptive technology of our lifetimes.

Another big theme beyond national security was economic disruption.
Buchanan's view of the disruption to come is that it will be chaotic, unevenly distributed, and rapid. The core insight, one that will be quite familiar to everyone who's listening here but would be perhaps surprising to the layperson, is that intelligence is about to be commoditized and that high-agency individuals and teams will be able to achieve things that were never possible before.
The flip side, though, is that there are lots of people who are going to have trouble keeping up. Buchanan said that while transitioning to an AGI economy consisting of dynamic firms and robust competition isn't necessarily a bad thing, he added, where I imagine you and I agree, and maybe Vice President Vance as well agrees, is we need to make sure that individual workers and classes of workers are protected in that transition. I think we should be honest. That's going to be very hard. We've never done that well.
The conversation touched on what needs to be done as we transition to that economy. There was a discussion of the merits of economic support like UBI, the trade-offs between open source and closed source, and government adoption of the technology. As an aside, something that I will be coming back to on this show is that more and more my base case is that we are not going to see UBI in the way that we think about it now. I think it's going to be politically untenable, and I think there are going to be too many economic challenges posed by it.
I think instead it's much more likely that we see government incentives for corporations to not fire people. There could be a negative expression of that as well, where companies get fined for firing people. But A, I think that feels more draconian, and B, I think in this case you can probably move markets more with a carrot than with a stick.
I think if there are incentives for not firing people, it helps increase the incentive to think about how you use these new gains from AI technology, not just as a cost-saving technology, but as a way to out-compete your peers. As you've heard me rant before, I think that's where we get into positive territory when it comes to AI economic disruption. So I actually think that that's going to become a more important conversation over the next, call it, three to five years.
To the extent there was any one key theme throughout the piece, it's that AGI is coming and the conversation needs to not be about whether it's coming or not, but what to do about it.
Like I mentioned, part of the reason that this was worth covering is that the interview appears to be extremely resonant. New York Times reporter Kevin Roose posted about it. Now, Roose historically doesn't shy away from dealing with emerging tech in a credulous way, so for him to frame the issue like this was a little bit surprising. When asked what penalty he was referring to, he commented further.
This underscores one of the fundamental issues with the discourse around AGI, and frankly, the discourse that we've had since journalists and tech had their big breakup a little more than a half decade ago. Because big tech has become the enemy, quote-unquote serious journalists are meant to have a default position of standing in opposition to, or at least disbelieving, tech's claims.
Even beyond that, though, journalists, and particularly headline writers and editors, find themselves in a tricky position, where at some point, if they write yet another article about how amazing some new advance is, it feels either tired, because people have read that before, or just unbelievable, because how could there possibly be this much change in such a compressed period of time? In that context, it becomes more clickable to churn out a piece claiming that AI is overhyped, and so there's a temptation to do that even if it's not actually the case.
I've pointed out before that we tend to see this every summer as the output of the labs slows down a little bit and everyone settles into their slower summer season.
Honestly, if you need a good example of how hard it is to move this discussion from theoretical to actionable, look at what happened when Professor Ethan Mollick tweeted that he wished more people were taking seriously the possibility that AGI is coming in the near future. He wrote, you don't have to buy it yourself, but leaders and policymakers need to consider the possibility it's true. He added, to be clear, no one knows whether AGI is possible, let alone near term. It may not be. But quite a few serious experts, including many of the leaders in the space, seem to think it's imminent.
It seems a mistake to assume they must be wrong and to not consider the possibility. The replies are absolutely full of people saying that the relatively modest progress of GPT-4.5 was proof that AGI wasn't coming anytime soon. And these are folks who are paying the most attention.
I think this is a great example of where we have completely lost the sauce because of our infatuation with the term. When you get up close and personal with the changed capabilities of AI, it is well-nigh impossible to convince yourself that it isn't going to fundamentally disrupt how work happens. In fact, as I mentioned in my Dr. Strange piece from a couple of days ago, I find that the closer I get to it, the more I'm convinced that we are underestimating how it's going to happen.
Even those who are embracing the reality of agents replacing, one-to-one, a huge portion of the work that currently happens are simultaneously not thinking about what happens when you could hire a thousand agents to do a job that one person previously did. And of course, I don't want to get hung up on that one example. The point is just that, structurally, agents and AI are going to represent a radical shift.
The question, of course, is does it matter whether we call that AGI or not? And I think that the obvious answer is from a policy standpoint, no, it doesn't matter even one bit. And if you need evidence of this, I'd ask you, did it matter when the Turing test was passed? Of course it didn't. New capabilities came online. People started to use them. People started to build on top of them. The world started to adjust and moved on, and it didn't care at all about whether someone officially declared that that test had been beaten.
This is exactly how it's going to happen with AGI. You're not going to wake up one morning to the New York Times declaring that AGI has happened. Instead, you're going to watch every single type of productive knowledge work around you shift in ways that are fundamental, incalculable, and that would have been unimaginable two years ago.
People are going to wake up and discover that their jobs have completely changed out from under them, that the way that they have done things for a decade is completely out of date, that new competitive forces are totally upending the market they're participating in. And whether people decide to call that AGI or Lady Gaga's newest single, it doesn't matter. What matters is the impact, and the impact is coming fast and furious.
I'm glad that the discussion is resonating. I hope the conversation gets picked up more broadly. But for now, that is going to do it for today's AI Daily Brief. Appreciate you listening as always. And until next time, peace.