
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

2025/6/12

Your Undivided Attention

Topics
Aza Raskin: I think AI development has two extreme end states: one in which a handful of states and companies hold power, creating a dystopia; the other in which power is distributed and leads to chaos. So we need to find a narrow path where power is matched with responsibility. Sam and I come from different backgrounds, but we share a view of where AI development is headed. Daniel Barcay: I think AI development could lead to centralized control or runaway chaos, but there is a narrow path in which technological power is matched with responsibility. Sam Hammond: I think technology shapes and alters the nature of our institutions. If another technological transition occurs, we should assume by default that institutional transitions of a similar magnitude will follow. We need to consolidate power within nation states while maintaining respect for freedom, the rule of law, and equality before the law. The challenge is how to stay stable while absorbing shocks that simultaneously strengthen both state and societal power.




Hey, everyone. Welcome to Your Undivided Attention. This is Aza Raskin. And this is Daniel Barcay. So today's guest is Sam Hammond. He's the Chief Economist of the Foundation for American Innovation. And I'm very excited to have this conversation with Sam, in part because we just come from different backgrounds. We have different worldviews, sort of take different stances about the world. And yet on the biggest thing,

We seem to agree. And so we really wanted to have this be a conversation about, well, how AI is going to go and how it can go well.

The recap is AI companies and global superpowers are in a race to develop ever more powerful models, moving faster and faster without guardrails. And of course, this dynamic is unstable. Putin has said whoever wins AI wins the world. Elon Musk says AI is probably the way World War III starts. We've just passed the threshold of the latest Anthropic models starting to have expert-level virology skills.

And there are really two end states that we've talked about on the podcast, and I think Sam sees two. And that's either we end up in a dystopia where a handful of states and companies get...

previously unimaginable amount of wealth and power, or that power is distributed to everyone and that world ends in increasingly uncontrollable chaos, which will then make the move to dystopia even more likely. And so there is a narrow path to getting it right, where power is matched with responsibility at every scale, but right now we aren't on that path. And so today's episode is really about how we might get on that path.

So, Sam, thank you so much for coming on Your Undivided Attention. Thank you for having me. Now, Sam, as you can probably hear, I'm just getting over a little cold, but I was really looking forward to this conversation, so I wasn't going to let that stop me. So Aza just talked about this in the intro, but you come to the AI conversation from a different perspective than a lot of our guests. Like a lot of the people that we have on come primarily from AI safety or harm reduction. And here at CHT, that's our priority as well.

But you have what some might call an innovation-first approach to AI development. And you've described yourself as a former accelerationist, a techno-optimist in the Marc Andreessen vein. But you also talked about updating your worldview because of the fragility of our institutions. Can you just tell our listeners a little bit about where you're coming from and the top line of how you think about technology and AI?

Sure. So I've always thought of myself as maybe a techno-realist more than optimist per se. So I got into this area as a young kid interested in philosophy of mind, cognitive science, evolutionary biology, debating, you know, is the mind a computer? And coming to the conclusion that by some description it is at a pretty young age.

and trying to reverse engineer that. And intellectually, I think one of my earlier sort of philosophical transitions was being very much a hardcore libertarian and coming to realize via an understanding of this history that institutions like property rights, the rule of law, religious freedom, these things are actually kind of new constructions and are not natural. They're part of recent world

Western history. And they don't necessarily result from a weak state, right? They resulted from, in some cases, the strengthening of the state out of the feudal era with early technological growth that favored the consolidation of militaries, the early ability to collect taxes, formal bureaucracies. These things were driven by the printing press, by other technological currents. So this way in which technology

shapes and alters the nature of our institutions became very apparent to me. At the same time, I was also very interested in political philosophy. And a lot of my interest in innovation and technology came from understanding the history of the Industrial Revolution and how out of equilibrium we are. Most inventions that we use on a daily basis, the biggest-impact things, were

invented in a span of less than 100 years. And you can think of that as the first singularity, right? Deirdre McCloskey has this, what she calls the hockey stick curve of history, where you look at GDP growth over time, and for most of human history, it's basically zero. And then sometime around the late 1700s or the 1800s, it goes vertical. And we're on that vertical curve, and everything we owe to our civilization is a result of that stupendous economic growth.

And so that begs the question, could this happen again? Is there a further inflection point in the near future? And so this first singularity you talked about, the industrial singularity, like we're living in a world of just pure industrialization now, right? And it's so different from what we had before. And you're saying that moving forward into this other world, it could be as different as it was between industrial society and pre-industrial society, right?

Talk a little bit about how you think of that transition, because you also come at this from this mixture of this could be this beautiful transition, but also this could be quite a chaotic transition.

If there is going to be another technological transition, we should, I think, by default, assume that there will be similar institutional transitions of similar magnitude. And no one could have foreseen circa, you know, 1611 with the first publication of the King James Bible, that in a few hundred years, we'd have the first railroads and telegraph networks. So I just fully expect that if we do get to AGI, and I think we're quite near, that we'll have a similar transition, but probably one that's much more compressed in time.

And that will challenge all our assumptions of, you know, effective governance, well, right size institutions, and all the dressings of modern nation states.

So, you know, I first learned of your work, Sam, and I think we first met at a little conference in Berkeley last year. And you were giving a talk, and actually, we borrowed a little bit of your talk for when Tristan gave his TED Talk on the narrow path. And I think you borrowed some of the things on surveillance and other bits from our AI Dilemma talk. It's sort of nice to see the reciprocity there.

And actually, I'm going to ask you in a second to sort of recapitulate a five, 10 minute version of that talk. I think there's so many really great points in there. But I think we should start with this sort of thought experiment that you give in that talk of you invent a new technology and it uncovers a new class of responsibilities and then society has to respond and you give that as x-ray glasses. And so I'd love for you to just give that example.

Yeah, so the intuition here, by the way, comes from looking at the ways in which even small technical changes can lead to very large qualitative outcomes. Like the birth control pill drove qualitative changes to society. And so, you know, in this talk I give, I open up with a thought experiment. You know, imagine one day we woke up and just, like,

Like manna from heaven, we had x-ray style glasses that we could put on and see through walls, see through clothing, everything you could do with x-ray glasses. There's really three canonical ways society could respond.

There is the cultural evolution path, which is, you know, we all adapt to a world of post-privacy norms. We get used to nudism. Then there's the adaptation mitigation path. Well, can I slow that down just a little bit? Just because if you're listening to this, so if you invent x-ray glasses and everyone can all of a sudden do what? They can see through walls, see through clothing. A bunch of parts of the world,

parts of our society that we sort of depend on, that we've gotten used to being opaque, suddenly become transparent and things break, right? So anyway, keep going. Right, then there's the adaptation-mitigation path, right? So we could, you know, retrofit our homes with copper wiring or things that could block out the x-rays. We could wear leaded underwear. We could take a variety of mitigations.

And then there's the regulation and enforcement path, which is maybe government uses its monopoly on force to pull all the x-ray glasses, say we're the only ones allowed to use the x-ray glasses. And probably society would have some mixture of all three of these things. But what wouldn't happen is no change.

Right. So classic collective action problem. So what I love about this example is that on one hand, you know, if everyone gets the x-ray glasses, you're thrust into this kind of chaos where all these people are doing things they shouldn't, understanding when buildings are unlocked, when people aren't home. It can cause chaos in society. On the other hand, if only the government has the x-ray glasses, then you're entering this kind of dystopia, right? Where it

It's sort of corruption, state power overreach. Or in the third case that you're saying, where we're just adapting to all of this. It's like throw out all of the social norms. We have to invent a new society from scratch. And it feels like we don't want any of the three of these things, right? We want to find a narrow path where we don't have to worry about everyone wreaking havoc. We don't have to worry about the government sort of having all the power. And we don't have to worry about all of our social norms that we built our whole society onto unraveling.

And we here at CHT are committed to finding that narrow path between all three of these bad outcomes. We may have some differences of opinion about how we get there, which we can discuss. But I wanted to give you like a chance to set the stakes for this conversation. Like, what are the different pitfalls of us doing this wrong? And why should people care that we get this transition correct?

Yeah, you know, connecting this back to my libertarian evolution, part of it was understanding the Industrial Revolution as sort of this package deal, right? And the economist Tyler Cowen has an old essay called The Libertarian Paradox, where he points out that, you know, it was sort of libertarian ideas around laissez-faire markets, capitalism that spawned the Industrial Revolution and kicked off this tremendous phase of growth.

But by the same token, it set off a series of dynamics and new technologies, capabilities, new kinds of negative externalities that necessitated the growth of first bureaucracies to regulate things like public health and safety, and then welfare states to facilitate compensation for people who lost their job through no fault of their own. And so there's always going to be these trade-offs. And so that concept of the narrow path comes from Daron Acemoglu, the Nobel Prize-winning economist now, who has a book called The Narrow Corridor,

where it is this history of the transition into modernity and how following the English Civil War and the wars of religion, there was a realization that we need to consolidate power within nation states while also maintaining respect for freedom of religion, for rule of law, for equality under the law.

and striking a balance between the power of state and the power of society. And so the challenge is like, almost like in terms of like differential calculus or something like that, how do we stay on this stable path and deal with the shocks that are both simultaneously strengthening the power of the state and the power of society, right? Because AI is not merely enabling, you know, security agencies to be able to do more bulk data collection and things like that. But it's also an aggregate empowering individuals to have the power of a CIA or Mossad agent, right?

And what does that mean in aggregate as society, just by dint of there being way more computation available to everyone else, starts to overwhelm the capabilities of the state? So many people assume that you can just have a change to technology and then it won't change society nearly as deeply as you think. And I think that's right. But I think we believe we can do this better and worse. And I hear people kind of throw up their hands and say like, oh, it's just inevitable change.

we're going to have to change everything and that's okay. Whereas I kind of worry about this, right? I think we can do radically better or radically worse at these transitions. And I'm worried that if you just say it's a package deal, then we're not factoring in our own agency to make this go different. You know, there are hinge points in history where human agency matters a lot. But you can't just, you know, to be a good surfer, you need to know when to catch the wave.

and it's a necessary but not sufficient condition that you know how to surf. And there are better and worse surfers. But if the wave is not cresting, then you're not going to do anything. And so there are these big tidal forces in history. And then there are ways in which things really are package deals because of the way they alter the kind of coordinating mechanisms we have in society. We had a gala last year for our 10th anniversary where we had Kevin Roberts, the president of the Heritage Foundation, a very conservative organization, speak.

And Dwarkesh Patel of the Dwarkesh Podcast interviewed him and asked him for his takes on superintelligence, which I thought was fun. And he said, if we have superintelligence, we might have 10% GDP growth or greater, but we'd also potentially rapidly go down a post-human branch of the evolutionary tree.

And Kevin Roberts was like, oh, I'm a conservative. I love GDP growth. I'm all for that 10% GDP growth, but I'm also a Christian conservative. I don't want to become post-human. And to point out that it's a package deal is not to deny our agency, but just to make us reflect on the ways in which you can't have one without the other in some cases. And if we are going to go into this future with clarity, then we need to be realistic about the ways in which these things are bundled together.

Okay, so I hear that, which is to say, like, technology always changes the landscape on which the game is played. So the game is going to change. And you can't help that. But which game you choose to play on top of that game board is still up for grabs. And the initial conditions matter a lot. Yeah.

But I do want to see if I can get you to tease out a little bit of the counterfactual of just imagining what an F-grade might have looked like for the Industrial Revolution. And the reason why I want you to paint that out is, I would argue right now, we have very little sort of state power intervention into trying to put guardrails on AI.

And I'm curious how that would have looked if we're in the same place now that we were in the Industrial Revolution. What would that have looked like? Yeah, I mean, we could have had a nuclear holocaust, right? We could have had, you know, a march through Europe of the Third Reich or of the Soviets taking over the world. And you can get situations where you get lock-in in a less than ideal equilibrium. You know, I think it is kind of miraculous that we haven't blown ourselves up so far. You know, obviously, the Industrial Revolution was like,

massive boon for human living standards, well-being and knowledge creation and understanding. And I think it was worth it, even with all the calamity that we had to pass through. By the same token, you know, the printing press arguably, you know, precipitated the wars of religion with like much inferior technology. And yeah, I think we wouldn't deny that, like, you know, I'm glad that we have the written word and books and academic publishing and all these things.

Former colleague of mine, Eli Dourado, has a blog post called On the Collapse of Complex Societies, reviewing some of the literature on how complex civilizations collapse. And one of the recurring themes is that you often will have these sort of technological trends that are moving quicker than institutions can adapt. And partly one of the reasons institutions fail to adapt is because there's an incumbent that is forestalling the eventually necessary adaptation process.

And I see this playing out with debates around artificial intelligence. And, you know, it's like we're going to have to give some things up, right? And maybe one of those things is like,

our understanding of intellectual property. It may be, you know, we may want to have restrictions on the level and degree of surveillance, but, you know, at least from my vantage point, it seems like, and I'm not saying this is good or bad, just like some kind of surveillance state probably is going to be inevitable in our future. And the question is, what are the guardrails and what are the limitations on that? And how is it actually governed?

And so there's, I think, co-equal risks in trying to steer the narrow corridor in a way that's not really progressing anywhere, that's not really taking the developmental trajectory of a society seriously, that's actually in a weird way trying to hold on to some aspect of the ancien régime.

Actually, I think that point you just brought up on sort of total ubiquitous technological surveillance is a thing I don't hear talked about enough. Just that without AI, total ubiquitous surveillance is impossible. But with AI, it's inevitable. Already, AI is enabling things like Wi-Fi routers and 5G to see through walls. And certainly with the next generation of cell phones, 6G.

Companies like Nokia and Ericsson are talking about, as a feature, network as a sensor. That is, because it's in the terahertz range, the network can tell your heart rate, your facial gestures, micro-expressions, where your hands are. And that means everywhere human beings are in cities,

everything is known. And how do you possibly fight an enemy when you have no secrets? And that just seems like a thing that we're not talking about enough. And that sort of gets into this next question of the sticks, right? Like, right now in Congress, you know, someone is trying to sneak a provision in that says states cannot make their own rules

around AI that it can only happen at the federal level. So it means there can be no sort of like rule-based innovation that's not for the entirety of the United States. And so getting to the narrow path actually and like what we should be doing

As a society, it's right here. It's right now. And I really want to get you to talk about, because you've said you're not a fan of preemptive regulation. What should we be doing in your mind? How do we get onto a narrow path? So I think there's different buckets of things that seem obviously good. One to start with is going back to this initial conditions point.

I look at the experience of the Arab Spring, where weaker states actually failed effectively because of information technology, Facebook.

And they were much less adapted to a world of suddenly ubiquitous ability to coordinate and mobilize and critique government actors and expose corruption or hypocrisy. Now, coming out of that, China, other countries saw what was going on. It was like, damn, we need to get control over the information ecosystem.

And in a sense, China is now well adapted to a world of ubiquitous open source AI models and all kinds of powerful information technology because they control the pipes.

So the question is, like, from these initial conditions, if China or the West get to very powerful intelligence first, is there a kind of winner-take-all dynamic where one of them pulls ahead in the same way the U.S. pulled ahead of the Soviet Union in terms of GDP and technological capacity, potentially exporting technology that, you know, enables weaker states to surveil their citizens in a way that doesn't respect human rights and civil liberties? So I think that it's

Point A is it's incumbent, if we care about Western liberal democratic values, for the West to maintain and grow its lead in AI and to export its technologies around the world. And in some cases, export tools that will be used for surveilling, but that have embedded within them privacy- and civil-liberties-enhancing principles and values.

And you add on to that the idea that it's not just about surveillance, the idea that technology in general, but especially AI, may radically change the game theory of centralized versus decentralized states. The fact that capitalist democratic states ended up out-competing in the 20th century

might have been an artifact of the technological environment of industrialization. But now AI might give an advantage to highly centralized governments. And to your point, I want to live in a world where we maintain, you know, human rights, democratic values, some of these things. But like, we have to figure out how that works within an AI world.

Yeah, absolutely. Absolutely. But I think that that's going to be an ongoing kind of learning by doing in many cases. And the question is, who's doing that learning by doing? And so, you know, my zeroth-order, like, policy recommendation is always, you know,

do whatever it takes to ensure that the US and the broader West maintain their AI advantage in hardware, in the models themselves, in energy and the inputs that go into these models, and then to proactively engage the rest of the world for adoption purposes. Yeah, so let's double click on that a bit. Like you have called for a Manhattan Project for AI to try to do some of this stuff. Yeah.

Tell us a little bit about what you think should happen in order to maintain that competitive advantage or to make sure that AI strengthens our society. Yeah, to be clear, the piece I wrote was the Manhattan Project for AI Safety. Yeah, I might have co-opted it a little bit for the conversation. Yeah.

I've been critical of this idea that the federal government should have some secret black site, five gigawatt data center and build AGI in a lab. I think that would be very dangerous and actually in some ways decelerationist because if we just let the companies proceed, they're going to move much faster than the Department of Defense. So there's going to be a component of this in a national standard setting.

A component of this is fixing our own internal problems around energy permitting, data center infrastructure, and then controlling the export of our most advanced hardware. So like the NVIDIA chips. China has a trillion-yuan, $138 billion state VC that is their Stargate project in a sense. It's doing this big push to build data centers for their leading tech companies. And right now, the best chips in the world are export controlled.

And so there's this cat and mouse game going on and how we allocate global compute. And so I think that's a very important vector for maintaining the aggregate amount of compute that is in the jurisdiction of Western countries or our close allies. I think one of the...

challenges in this conversation, like the crux that I often think slips by, and I'm curious how you'll react, is: AI as a technology is very different than every other technology. Because with other technologies, like, if you need to build a more powerful airplane, that means you need to understand more about how airplanes work. If you want to build a taller skyscraper, you need to understand more about, like, the foundations of building. But with AI,

You don't actually need to know more about how the internals of AI works to build a bigger, faster, more powerful, more intelligent AI. And that means there's, I think there's a confusion when we say we need to beat China at

there's a smuggling in of, well, that means that whatever we're building, we can control. But actually what we've seen is that the more powerful the models, the less able we are to control them. And so shouldn't the race be towards strengthening a society versus racing for a technology that we don't yet know how to either individually or cybernetically control? I guess I kind of question the premise, um,

I think as these models have gotten more powerful, in some senses they've gotten easier to align. I think we're rapidly moving into a world of reinforcement learning post-training, and I think that's going to open up a whole host of other problems. But where we stand today, in some sense, the biggest, most powerful LLMs are vastly more aligned and controllable than the ones that we had two or three years ago. Although we're also seeing increasing rates of deception; o3 deceives a lot more than previous models.

And I don't think any of those things are insurmountable. So a lot of these AI safety debates sort of end up conflating a variety of different concerns people have. One of them is the classic alignment problem. How do we control very powerful superintelligence? The things I've been talking about are more on like, suppose we have very controllable superintelligence. The world still looks very different.

And I think we're going to move into a world where there's not just like one singleton AI that takes over, but one where there's just a diffusion of very powerful capabilities in many people's hands and powerful AI agents all doing more or less what you ask them. And, you know, there will be probably cases of like, you know, quote unquote rogue AIs that like,

shut down the Colonial Pipeline or do a ransomware attack or something like that. But I think, just using basic access control techniques, I don't think we need to have a full mechanistic, interpretable understanding of how these models work to know, almost at the level of physics,

that they have the behavior that we desire. We'd still have entered into this very new world. And most of the biggest problems are still unresolved because obviously people have very different interests, right? You know, an ex-boyfriend could sic a malicious AI bot that is fully aligned in the sense that it does

exactly what he asks on his ex-girlfriend and then have that thing autonomously replicate itself and constantly be terrorizing her. Those things are not an alignment failure on the part of the AI; it's the humans who are not aligned. Fair enough. If we step back a bit, what you're talking about is there are two big problems. Alignment is typically a question of, is the AI doing what you're asking it? Right?

And then there's the question about fragility of our institutions. So forget about the alignment problem for a second. Let's assume you're right. I question that a little bit. I think it's going to be much harder to make sure that these things are aligned, but never mind. Let's keep that on the side for a second.

You talk a lot about the fragility of our institutions, and I really want you to go more deeply into that because I'm worried that our institutions are not going to be able to keep up with this and that we're going to enter, even with aligned AI, a period of a very chaotic transition that is quite avoidable, in my opinion. And I think some of the reading I've done from yours is we're on the same page here, is that we need to really watch out to make sure that deploying this recklessly across our society doesn't create a whole bunch of chaos that we wish we had never done.

So can you walk us through a little bit about your view on institutional fragility with AI and a little bit on what we can do to avoid it? Sure. So, you know, in big picture, what I see is this differential arms race between public sector and private sector AI diffusion, where AI is diffusing much more rapidly into the private sector than the public sector. And so I think there's a need for more accelerated adoption in the public sector. That's point number one.

But that doesn't really go quite far enough, right? Because if you think about the different vectors in which AI is going to cause, not from misuse, but just from

valid use at unprecedented scale, like essentially institutional denial of service attacks, where you can imagine if we all had AI lawyers in our pocket, we all can just be suing each other constantly. The courts are going to get overwhelmed unless they adopt AI judges in some form. Because I don't foresee our court system, which technologically, they still use human stenographers, adopting AI at that pace, I think there is a world where there's a kind of displacement effect.

Where just in the same way that Uber and Lyft and modern, you know, ride hailing technology displaced licensed taxi commissions, right? Where, you know, maybe we will have like the Uber but for drug approvals or the Uber but for adjudicating contracts and commercial disputes. And when you say Uber, do you mean that there's some startup somewhere that says, aha, I'm going to solve this.

There's going to be a new, it's like court, but without all the vowels. Um,

And then what was part of the state moves into something that is private, and that private thing is now subject to all of the VC incentives. Is that what you're saying? Yeah, it might not be one company. It might be something that's done competitively or bottom-up. You could easily imagine a world where a lot of things that today require formal institutional processes and bureaucracies get sort of pushed out to the edge of using kinds of raw intelligence.

And that may not come from any one company, but it will look like a very different way of doing business. Can you tell us which institutions you think are the most vulnerable for disruption around AI? And what are the kinds of disruptions you're expecting to see? First of all, you look at where there's going to be the most rapid progress and what institutions we already have to govern those processes. And then what is their willingness to actually adapt and co-evolve?

Look to where there's likely to be very rapid AI progress. Dario Amodei in his Machines of Loving Grace essay talks about the potential for, in the very near term, AI scientists that could perform basic R&D in biology and other areas autonomously in parallel with thousands of other AI scientists. That could, in theory, lead to a speed-up of scientific discovery collapsing what used to take a century into a decade or less. If

We have institutions like the FDA that are in charge of approving drugs, and they do that through a three-phase clinical trial process where you have to get human volunteers or patients divided into treatment and control groups. And it's a very long, drawn-out process. If you stretch this out long enough, you could even imagine us one day having human models in silico that could completely characterize the effect of a drug on a particular patient's

disease without ever needing human trials and have that validated against humans to prove that it's accurate. The barrier is not that the people at the FDA don't see this coming, but that these things are written into law.

And, you know, the question of how fast can the FDA adapt is fundamentally the question of how fast can Congress write new laws and how forward-looking are they? And what makes it so hard is it's not just the FDA, it's not just drugs, it's not just science, it's going to be everything, everywhere, all at once. And so this is what gravitates me towards just seeing us not quite getting this all right and for things to shift off into sort of private sector solutions. Okay, but help me see the balance there because...

In your blog post, 95 Theses on AI, one of the things you say is, periods of rapid technological change tend to be accompanied by utopian political and religious movements that usually end badly. You know, when you're saying, okay, we're going to revolutionize the FDA and we're just going to let this play out and we're not prepared for the change that's going to come, that's what comes to mind for me is that this feels like

a utopian movement saying we can just allow it to run roughshod across our institutions and that that will usually end badly. So help me, what's the balance here? I agree that we don't want to get stuck

with our existing institutions dragging this into the dirt. But at the same time, I don't want some sloppy thinking about this will all end well lead us into this sort of religious belief that this is going to go well and not have us think hard about how we roll this out. Yeah, I certainly don't have a religious belief that it'll go well. I think what characterizes utopian movements is believing in some end state or knowing how the story ends and trying to move us closer to the end of the story. And I don't know how the story ends. And I don't think anyone really knows how the story ends.

I think what we can know are sort of general principles for complex adaptive systems. There's no one in charge of the thing we call America. And when I look at the things that are barriers to AI, you know, you mentioned earlier the AI moratorium that's been proposed. I think that's built on this faulty assumption that the things slowing down AI are AI-specific laws.

When actually the things that are going to slow down AI diffusion are all the laws that deal with everything else, right? The laws in healthcare or finance or education. And so I think if I could wave my magic wand and do two things at once, I'd, you know, A, have in some sense more rigorous oversight over AI labs, more AI-specific research,

safety rules and standards for the development of powerful forms of AGI. At the same time as I'm essentially doing a jubilee on all the regulations that currently exist in most sectors. Not because we want a world that's totally deregulated, but because those regulations are starting to lose their direction of fit. What I am hearing you say is we're going to need new paradigm institutions. It actually was reminding me of a moment I was at the Insight Forum last

which is that moment in history where for the first time Congress called all of the CEOs, from IBM and OpenAI and Google, there's Elon Musk and Mark Zuckerberg, to come to the Capitol to answer the question like, what's about to happen? How can this go well? It is a funny thing for me to be sitting there across the table from $6 trillion of wealth.

But after that event, I ended up going for a long walk in D.C. and I ended up somehow at the Jefferson Memorial. And on the southeast portico, I saw a quote of his that I'd never seen before. And it said, I'm not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind.

As that becomes more developed, more enlightened, new truths discovered, and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors. End.

That just really hit me, which is I'm sitting in the Capitol and we're basically having a debate where we're not even really talking to each other. Like this is such an old institution. There are many ways of updating our institutions using the new technology so that it can scale with AI. And so I'd just love for you to like get specific. You were starting to about some of the other ways that we might fundamentally upgrade our institutions. Yeah.

Yeah, so the fact that right now in Congress they're debating the big beautiful bill, this big tax bill, over a thousand pages. Why don't members of Congress who often have 24 or 48 hours to read these things have AI tools where they can just plop the bill in and ask, what does this do for my state? Does this have any poison pills? Are there any provisions in this law that say one thing but could be used to do something else?
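As a rough illustration of the kind of bill-triage tool being described, here is a minimal sketch using the OpenAI Python SDK. It is only a sketch under stated assumptions: the model name, file path, state, and questions are placeholders, and the naive truncation stands in for the chunking or retrieval a real tool would need for a thousand-page bill.

```python
# Illustrative sketch of a "plop the bill in and ask questions" assistant.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

with open("big_beautiful_bill.txt") as f:      # hypothetical local copy of the bill text
    bill_text = f.read()[:200_000]             # naive truncation; a real tool would chunk/retrieve

questions = [
    "What does this bill do for my state?",    # placeholder questions from the conversation
    "Are there any poison pills?",
    "Which provisions say one thing but could be used to do something else?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",                        # placeholder model name
        messages=[
            {"role": "system", "content": "You are a nonpartisan legislative analyst. Cite section numbers."},
            {"role": "user", "content": f"{q}\n\nBILL TEXT:\n{bill_text}"},
        ],
    )
    print(f"Q: {q}\nA: {resp.choices[0].message.content}\n")
```

A production version for a congressional office would also need retrieval over the full bill, provenance for every claim, and compliance with the chamber's IT guidelines, which is exactly the modernization gap discussed next.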

And you could imagine, you know, this would be incredibly useful, not only in itself, but because Congress is notoriously short-staffed. You know, this is just one area, and I've done a little bit of work on this and pushing Congress to modernize its tech stack and actually begin embracing these tools. Because as it stands today, most congressional offices that I talk to use ChatGPT on a regular basis, but in violation of their own guidelines. Sure, right. And, you know, you see this up and down in the federal agencies as well. This goes back to my FDA point is like,

It's not enough to give FDA officials an AI co-pilot. We're going to need fundamental process reform.

And I think a lot of these more scalable mechanisms are going to look something similar to Twitter's transition from having a trust and safety team to having community notes, where they went from something that was, you can think of the elect that were deciding what posts violate the rules or not, to something that was bottom-up. We can critique Elon Musk's broader interventions and his own trustworthiness, but the community notes algorithm is incredibly important.

and inventive in actually aligning incentives so that groups of people that tend to disagree, if they agree on a particular note, that note gets amplified. And so are there other community-notes-like solutions for the things that government does? And then are there areas of government that just genuinely obsolesce?
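To make that "bridging" incentive concrete, here is a minimal toy sketch in Python. The data, names, and scoring rule are invented for illustration; the production Community Notes system uses matrix factorization over rater-note data, not this heuristic. The idea it demonstrates is the one described above: a note only scores highly when raters who usually disagree with each other both find it helpful.

```python
# Toy sketch of bridging-based scoring: amplify notes rated helpful by raters who normally disagree.
from itertools import combinations

# Hypothetical rating data: rater -> {note_id: 1 (helpful) or 0 (not helpful)}
ratings = {
    "alice": {"n1": 1, "n2": 0, "n3": 1},
    "bob":   {"n1": 1, "n2": 1, "n3": 0},
    "carol": {"n1": 1, "n2": 0, "n3": 0},
}

def disagreement(r1, r2):
    """Fraction of co-rated notes on which two raters disagree."""
    shared = set(ratings[r1]) & set(ratings[r2])
    if not shared:
        return 0.0
    return sum(ratings[r1][n] != ratings[r2][n] for n in shared) / len(shared)

def bridging_score(note):
    """Sum of pairwise disagreement across rater pairs who BOTH rated this note helpful.
    A high score means people who usually disagree agree that this note is helpful."""
    helpful = [r for r in ratings if ratings[r].get(note) == 1]
    return sum(disagreement(a, b) for a, b in combinations(helpful, 2))

for note in ["n1", "n2", "n3"]:
    print(note, round(bridging_score(note), 2))
```

Running it, only "n1", the note that every rater found helpful despite their usual disagreements, gets a meaningfully high score; notes endorsed by only one "side" score zero, which is the incentive alignment being praised in the conversation.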

Right. And I think this is where there's going to be the biggest tooth pulling exercise, because there are certain aspects of things that governments do that are technologically contingent. Will we need a National Highway Traffic Safety Administration if all the cars are autonomous and we don't have a single traffic death? And, you know, will it just wither away or will it like metastasize into some other beast? And, you know, I think that this is going to be one of the biggest fights.

I have to admit, I'm both hopeful, like I love the sort of pro-democracy tech angle, especially using LLMs to figure out ways of supercharging our governance and not using this sort of 19th century, 20th century system to try to, you know, really govern, but rather get inside and change some of this stuff. But also, I'm kind of worried about these short timelines introducing a technology into the heart of some of our most important facets of government today.

where we still don't really know how it works, if it's ready. Do you have any ideas on how you think those transitions should go? Over what timeframes? When is the tech ready to integrate? I mean, when the rubber meets the road, do you have any specific recommendations?

Yeah, I'm going to say things that will sound a little contradictory. On the one hand, I think that it's important to open up the ability for folks within government to experiment. And right now, the way the rules are written for IT procurement, for instance, is all around compliance and minimizing risk. And it's this very risk-averse culture.

But that risk aversion has come from sort of codifying processes that worked in the past. And the analogy I sometimes give is when you're designing, say, a park or the quad for a university, you could lay down the sidewalks that you think are the right sidewalks, or you could just leave the field barren and let people choose the path they walk on. And when you start to see a path forming, that's where you build the sidewalk.

And I think there's going to be analogous things with the use of AI where because it's a general purpose, we don't know all the ways people could use it productively. And so we need to have pilot programs and sort of a more permissive ability for individuals within government and within corporations and other large institutions to experiment without needing permission and see what works. And then only later do you start codifying things.

At the same time, we've also seen in government when they do these sort of, especially mega projects or big pushes for adoption, that it's really important that you not be too early. You know, when you're even a few years too early to a technology where everyone sort of sees where the ball is going, you can get locked into something inferior. And so the way things have historically worked is that the U.S. government

has been a fast follower of the corporate sector. And so I think we're going to need to see something similar in this era where the hyperscalers of our current day are like the Carnegies and the Rockefellers and so forth from the earlier era. And they need to bring their learnings into government and make the government also hyperscale. What it seems like you might be advocating for here is treat the government a little bit more like a corporation. We've just seen with Doge and Elon some version of, you know,

a whole bunch of young 20-year-olds rushing into the government. And actually, when we last talked at the Curve conference, you were very optimistic. I'm curious now that we've sort of seen it. And you were like, we may be living in the best possible world. Are we living in the best possible world? How has that gone? Yeah, ex-ante, I thought and still think that if there was going to be this narrow corridor path, it would take something like Doge. Something that was...

detached from all these political constraints and public choice problems that would hold back more dramatic reform. And as Doge has played out, it's been obviously a huge mixed bag, right? And that's probably because they're not a singular thing. Doge is in part a tool to enact the president's agenda, per se. And the reason they went after USAID as their first target was because the president signed an executive order putting a pause on all foreign aid.

And so it wasn't that Elon or the Doge kids had it out for foreign aid. It's because they were tasked with using information technology as the conduit to reestablish executive control over the bureaucracy. And that just happened to be the way that played out. At the same time, and this has not been nearly as reported on, behind the scenes, there's a lot of genuine modernization going on. You know, I have a friend at Health and Human Services who's now the CIO.

And HHS is a sprawling agency. They have, I think, 17 or 19 other sub-CIOs. One of his jobs right now is the fact that no one in government can share a file with someone else in HHS because they're all using different file systems. And it's this mundane fragmentation that has accumulated over time that I think Doge should be trying to solve.

Because at some point, we are going to have very powerful AI bureaucrats, for lack of a better word, tools and agents that could replace hundreds of thousands of full-time employees within the government. And we need some of that infrastructure in place. And there's just some basic firmware-level government reforms that are needed. And Doge is addressing them while also being a bull in a china shop.

So it seems like we're stuck between this adaptation regime that you were talking about, like how do we make the US government resilient and adopting AI? But then also you're worried that this may just be sufficient to cause these institutions to collapse totally. This seems like we're back in paradox territory. Can you talk a little bit about that? And do you think that adaptation is going to work?

I've mentioned the innovator's dilemma before. The example of like, you know, would the taxi commissions build their own Uber and Lyft? By default, they don't. The question is, can you do the impossible and defy the innovator's dilemma? And...

Public institutions like the federal government, one of the big disadvantages they have over private institutions is private institutions, private companies are constantly being born and then dying. And there's this constant rejuvenation process. And we only have one federal government. That being said, the U.S. government has undergone kinds of what you could call like refoundings or reboots, whether that was, you know, Lincoln or FDR, or you could say the Great Society was a partial one. We've gone through these sort of constitutional resets in the past.

And I see the Trump administration more broadly as trying to facilitate another one of these constitutional resets. Now, it's bundled up with all kinds of other political commitments around trade, around immigration, things that I don't necessarily agree with. And what it comes down to is like, is the bureaucracy this headless beast that just keeps on going on, you know, business as usual on autopilot? Or do you have some source of agency within government that can actually begin reorganizing it and preparing it for major change? And this gets back to my

earlier point about we can't be utopian and know what the end state is, but we can apply general principles for complex adaptive systems. And one of those is rapid feedback loops, experimentation, fail-safe testing. And we just at the moment completely lack the infrastructure to do that. And so there's some precursory work that needs to be done.

I would love it. Just like I think you believe there needs to be a Manhattan Project for AI safety. I think we need an Apollo mission for massively upgrading society's defenses because VCs generally are not going to put money there. So the market isn't really going to get there until it's a little too late. And we've seen examples of this in cybersecurity, where as our infrastructure digitized,

There just was no strong incentive for private corporations to massively invest in their defenses. And it's just left America's cyber capacities deeply vulnerable. And so I think we're going to see something similar now, unless we can do a kind of large-scale Apollo mission, which is not to say some big centralized thing, but we certainly need enough resources to accelerate our defenses. Yeah.

I keep coming back to your 95 Theses on AI because there's so many gems in there.

But there's one that I really love, which was, you know, building a unified superintelligence is an ideological goal, not a fait accompli. And there's something in here that really resonates with me. You'll hear from people about, we're building AGI, we're about to build superintelligence, this is this goal. And you'll hear people talk about this like it's not even a goal, like it's a foregone conclusion, like it's just the tech path in front of us. But, you know,

Aza and I are quite aligned that how we build this technology is very much up to us and whether we're racing to one goal or another is a choice. Can you talk a little bit about why you say that it's an ideological goal and how do you see that? Machine learning and deep learning are general purpose technologies, and we could use them to construct better weather forecasts or to solve protein folding.

But this idea that we need to have a single, coherent, unified system with agency, sentience, and so forth, that is vastly superior to human intellect in every possible way, doesn't seem necessary to me. It's not like there's a big market demand for that. I definitely see the case that there's a market demand for human-level agents that do routine office work and stuff like that. So I do worry, and this gets into the ideological undercurrent in Silicon Valley,

that there's a strong kind of messianic, almost, milieu where we are going to bring on the sky god. And, um, I don't think we know if that's inevitable or not. It does seem clear that if something like that were to happen,

It's not this big structural thing. Certainly China is not racing to build an AI sky god. They're racing to build automated factories. They're much more pragmatic and practical. It's going to come down to the CEOs of a handful of companies with a kind of glint in their eye. I think it's such an important point. Jaron Lanier, for example, talks about we should be building AI like tools and not like creatures.

And personally, I think it's a real choice that we have, and it's not some foregone conclusion. We can build a more tool-like future with AI and not just build the sky god. It would certainly be safer. And in order to not build such a thing or to deploy them safely will require human beings doing perhaps the hardest thing, which is solving multipolar traps, learning how to coordinate them.

where often our behavior is bound by the fear of me losing to you, my company losing to your company, my country losing to your country.

But the fear of all of us losing has to become greater than that paranoic fear of me losing to you. And that to me is, like, the calling card of how to walk the narrow path or the narrow corridor: solving the ability to coordinate at scale while still maintaining, like, honest rivalry. Yeah, 100%. There are few enough actors in the world that will be able to build those systems in the near term that they should, at least in theory, be able to coordinate in the same way we have coordinated over time on

nuclear weapon proliferation, biological weaponization, chemical weapons, even now, you know, gain-of-function research, right? And in many ways, the stuff that is going on in these AI labs is a kind of gain-of-function research. Right. For listeners that don't know, you know, gain-of-function research in biology is where scientists deliberately train

into a biological organism an undesirable characteristic, for example, the ability to jump species or the ability to become more infectious. And then you try to study it and figure out what makes that happen. In theory, so you can prevent that from happening, right? But that's where the hubris comes in. You're being very Promethean. You're giving an organism the ability to do something that you don't want it to do. And then you're assuming that you can control it.

The one saving grace is that AI models don't get into your respiratory tract. Right, right. They just get into your economy and get into your politics. And then get into your mind. Well, I just want to say, Sam, it's been such a pleasure having you come on the podcast. We really are in this sliding closing window of

what AI is before it becomes fully entangled with our GDP and we can't make changes. We're in that final period of choice. And even though we come from very different ideological stances, it seems like there's a lot that we've agreed on, and also some things that we haven't.

But I'm just very grateful to get to have this conversation and get it out to a wide group. So thank you so much for coming on Your Undivided Attention. Thank you. It was a lot of fun. Yeah, thanks, Sam. This was great. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future.

Our senior producer is Julia Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudeikin. Original music by Ryan and Hayes Holliday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and so much more at humanetech.com.

And if you like the podcast, we would be grateful if you could rate it on Apple Podcasts. It helps others find the show. And if you made it all the way here, thank you for your undivided attention.