If you can't even put together a Manhattan Project for this, right? If you can't even put together all of our greatest scientists working on this for a generation, you're not going to make it. Like, you're not even trying. This is a thing that has made discourse around AI so bad. If someone comes to you and tells you, this is the definitive difference between humans and animals, between ChatGPT and pre-ChatGPT AI, between ChatGPT and humans, you should be like, well, they're a crank.
This is a scientific problem where we put a lot of efforts in and we failed. Why do you think your intuition, just because it feels like it, is like the correct one? We want to understand intelligence, we want to understand coordination, we want to understand alignment. These are obviously problems that are extremely relevant to our species' long-term happiness, but they're also very dangerous. The goal is just to buy more time.
Right now there is a willy-nilly race. There is almost no thought given to safety. If one does not address these problems, then you lose. And currently we're losing.
So Tufa Labs is a new AI research lab I'm starting in Zurich. It is funded by PASS Ventures, which is involved in AI as well. We are a Swiss version of DeepSeek: a small group of people, very, very motivated, very hardworking, and we try to do some AI research, starting with LLMs and o1-style models. What we're looking for now is a chief scientist and also research engineers. You can check out positions at tufalabs.ai.
Okay guys, so welcome to MLST. It's so good to have you both here. It's great to be back. It's been a while. Indeed. So in November, you guys published a piece called The Compendium. It's a monster. It's about 115 pages long. I really enjoyed reading it. Maybe Connor first, can you give us the elevator pitch of that? Very simply, we just want to make the full case of why we think extinction from superintelligence is very likely,
why we think this happens, why we think it's still happening and what can be done about it. Yeah, okay. So you were saying basically in the first section, so you're talking about the state of AI today and you said that recent AI advances are due more to scale. You're saying that safety is being sidelined. So you said, you know, OpenAI, DeepMind, Anthropic, they're pouring resources into building more powerful models. They're openly acknowledging the risks while they continue to go, you know, full speed ahead. I think it
still holds true. The advances that we're seeing in AI are not from understanding different aspects of intelligence. It's not making intelligence into a more measurable, objective property, it's not factoring it into natural concepts and so on and so forth. It's still mostly black-box approaches
and just iterating on algorithms until we find one that works and then pouring in more compute. I still think that the risks are openly acknowledged and I still think they're racing for it. It seems pretty transparent from my point of view. Like the world in which we learned more about intelligence and started to understand more about intelligence and how it works is very different from what is currently happening, basically.
But Connor, why is it the case that intelligence can just be a black box, that we can just continue to train and train? I mean, you made the comment in the compendium that this type of technology is not designed, it's grown. Why is that the case? Why is it so easy, if you like, to mechanise intelligence?
Truth is, I think this is one of the great mysteries of our universe. Like, I think this is genuinely an unsolved question. I think it's genuinely surprising that if you have a bunch of particles and you shake them around in a bit of a soup, eventually a human pops out. Like, I think this is, like, genuinely weird and, like, not something that would be obvious. Even if we had a chimp, right? If you tell me I take a chimp...
I don't know if you've seen a chimp, they're kind of terrible, you know, and you give it a bit of bigger brain, now they like start making art and nuclear bombs. This is not obvious, right? Like it's not obvious that you just like take a bunch of neurons and you put a bunch more neuron soup together and eventually you get something that can like invent mathematics.
This is a fact about reality though. So like, for me, this scaling and neural networks working isn't something that was predicted by some deep underlying theory of physics. It's just something we observe happens and we don't really know why. Some people take this as, therefore it won't keep happening,
or therefore it won't matter. But I think of it kind of the other way around: the fact that we don't know why this happens doesn't allow us to make any predictions of when it will stop or
how it will go forward. But one thing that I was taken aback by is, in a way, this argument diminishes how clever we are as biomachines, right? You know, we self-replicate when we have children. There are no power grids, there's no, you know, kind of orchestration network. We just have sex, and nine months later a child pops out.
It just seems incredibly efficient, right? Because you also made the comment in the compendium that there are no physical laws that prevent this type of intelligence emerging. But I think of humans as being just in a different category.
You can think of it that way. I think it definitely makes sense. Humans were optimized by a different process for a different thing. I think that's a fair way of seeing things. I just think it's kind of irrelevant. What we see from this, what I take from this, is that weird things are possible and predicting what is possible is very hard.
Like I think it's just very, very hard to predict what is or what isn't possible. It's definitely true that humans are extremely clever in many of the ways they're designed. In some ways they're extremely stupid, you know? The point is that we... the fundamental thing is we don't understand intelligence. We don't really know what it is, how to build it, how to structure it. We don't have a science. We're pre-scientific. It's kind of like how, you know, alchemists thought about chemistry. Like alchemists could look at
acids and metals and organic compounds and be like, well, some shit's going on here and it's obviously important. I think we're in a similar circumstance with intelligence. I'm not saying we can't figure it out. I just think we currently haven't. So I'm just trying to rationalize this, because a lot of people will hear you saying we need to basically make AGI illegal. That seems like, not hyperbolic, but for the average person, a very strong thing to say.
What's the justification for that? What you just said is not true. The average person actually completely agrees with this. I think this is completely commensurate. We've done polling on this. But for ML researchers it was 10%? Yes. I think the fact that you still get 10% estimates of extinction risk, that you still get many people talking about extinction risks, is like...
a good sign that you can go against incentives. Geoffrey Hinton left Google to talk about extinction risks. So, you know, incentives are not an iron law that confines you and prevents you from being an actual human with agency and free will. Isn't there this notion, though, that even if AIs could self-replicate, they would need to have factories, they would need to have access to materials.
It feels like there's this argument that intelligence is all you need. And it's almost an appeal to omniscience that these things, if only they are smart enough, they can do all of these things. But surely in the real world, you need to be moving molecules around. You need to be working with materials and so on. It seems like in the physical world, it's a really difficult thing to do to build this kind of intelligence.
Yes, absolutely. Building things is hard. And the fact that humans can build microchips and chimps cannot is both downstream of us having thumbs and of us having higher intelligence. These things are very closely related. But when we're thinking about building an AGI, we're not talking about we're building a thing on a rock in outer space.
We're talking about a thing that's deeply integrated into the economy. These systems are being integrated as hard as possible, as fast as possible, put on the internet, given all human knowledge, giving access to tools, given human supporters, human helpers, and the wider economy. The economy is much smarter than any individual human. The economy can build machines that I can't build, that no individual could build. I don't think any individual person can build a lithography machine, but the economy can. Humanity can build lithography machines.
And I think it's useful to think about, like in practice, when we see AGI or ASI systems emerge, that they will be part of these systems and they will use these systems as part of their extended cognition.
You know, a powerful AGI system will be able to do anything a corporation can do and probably will, at least for a little while. You know, it will probably purchase machines. It will probably do research. It will probably pay humans to do things. Like, these are useful levers. Will it do that indefinitely? Probably not. You know, if there is something that's much smarter than the entire economy and it gets like,
a couple years to develop new technology, I don't know what technology it would invent. Maybe it would involve paying humans, maybe it wouldn't. Interesting. I mean, one of the comments that you made about intelligence, Gabe, was that I think you define it in quite simple terms. It's just the ability to solve problems. Why is that a good way to think about intelligence? I'll be mean. I think a nice reason for why it's a good way to define intelligence is that
it's quite under-specified. You can pick whatever classes of problems that you want.
If you want to pick formal problems, you can get into the computability theory type of things. If you talk about problems of different time and space resources, you get into complexity theory. If you think about more logical descriptions or grammar-based descriptions, you get into the Chomsky hierarchy. It's vague enough. It lets you define the types of problems that are of interest to you.
For instance, we can just take a couple of benchmarks like ARC-AGI and say that this is the definition of intelligence. So it's like a very practical definition, not in the sense that it's very grounded, just in the sense that you can use it however you want. So that's why I like this definition. Yeah.
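A minimal sketch of what that kind of practical, benchmark-style definition could look like in code. This is just an illustration with a made-up solver and toy task set, not anything from the actual ARC-AGI harness or from the Compendium:

```python
# A minimal sketch (not the real ARC-AGI harness): "intelligence" here is just
# the fraction of a chosen task set that a solver gets right. Swap in a
# different task set and you have swapped in a different definition.
from typing import Any, Callable, Iterable, Tuple

def score_solver(solver: Callable[[Any], Any],
                 tasks: Iterable[Tuple[Any, Any]]) -> float:
    """Return the fraction of (input, expected_output) pairs the solver gets right."""
    tasks = list(tasks)
    if not tasks:
        return 0.0
    return sum(1 for x, y in tasks if solver(x) == y) / len(tasks)

# Hypothetical toy usage: here the chosen "problem class" is integer addition.
toy_tasks = [((1, 1), 2), ((2, 3), 5), ((10, 4), 14)]
print(score_solver(lambda xy: xy[0] + xy[1], toy_tasks))  # prints 1.0
```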
You made this very good argument in your paper that there are always people who say there's something missing, right? If only we had intentionality or I quite like when you were just touching on the Chomsky hierarchy and so on. You know, Chomsky said that humans are different. And there was this Prometheus moment where we had some weird genetic mutation and we could now do the merge operation in our brains and we became Turing machines.
And that's the reason why we dominated the world, right? That we have this ability that we didn't have before. And that seems to go against this notion of a continuum, right? Because, I mean, Connor, you were just saying that look at chimp brains, look at our brains, the brains get a little bit bigger. Now all of a sudden we're dominating the world. But it's more interesting from my perspective to think that we are GIs.
Chimps are not GIs and if you think about it, ASIs will also be GIs. They'll be Turing machines. We'll be able to communicate with them.
It seems to me that it would be different with us and ASIs than it would be with us and chimps. Maybe. I mean, it depends on what you mean by different, right? Like, I don't know if you've ever actually observed chimps for like an extended period of time or read a book about chimps. We have a lot in common with chimps. Chimps have politics, you know, and not just like, oh, this guy is higher ranked than this guy. They scheme. They make plans about we're going to ambush this guy so we make this guy the leader and then he will repay me.
Like this is the complexity that chimps have. I really recommend reading actual books by actual primatologists about chimps. Just them telling stories about chimps. Because I think like people... Like one of my favorite genres of like philosophy, which Chomsky was a great example of, is philosophy of mind that could be completely disproven if they had met one animal in their entire life. It's like animals are way smarter than I think a lot of people realize. It's like...
Dogs can plan, they can pattern-recognize, they can remember things, they can understand identity. They have theory of mind. You know, it's not perfect, right? Like sure, it's much more primitive than humans, but obviously on a spectrum, you know. Chimps, orangutans and so on are on a spectrum. Even rats have a lot of sophisticated behaviors. Like, I'll look at a rat do a thing and I'll be like, he's just like me, for real, you know? Like, I understand what he did there, like he had an emotion there, like look, he's angry. I think especially with mammals,
This isn't just noise. Like, there is kind of like a steady increase of a thing.
I think a lot of people have tried very, very, very hard to use all of their big-brain IQ points to come up with something that will cleanly, beautifully separate humans from everything else. And I think the story of science, you know, ever since Darwin and beyond, is that there is no clean separation. There is no clean cut. There is no Turing machine versus not-Turing machine. All the evidence points to the opposite. It's just like, you just do more things. You have more patterns. You have more...
you have more general cognition, you have more compute, you have more things, you have better algorithms, but it's all on the spectrum. I agree with so much of what you've just said. I think it's possible for both things to be correct. There was a great book by Max Bennett, A Brief History of Intelligence, and he was saying that mammals have the same prediction apparatus as we do. But maybe there's something to what Chomsky is saying, that we do have a capability that is a discontinuity, and there is a continuum of the other capabilities that you're talking about.
It goes to your point that we can galaxy brain ourselves and we can point out specific instances of capabilities that we have that animals don't have, that ASIs might have. But your premise is that this is just a pointless discussion. I think there is a point to the discussion where you must take a step back.
So people make the mistake many times being like, oh, the difference between animals and humans is tool use. And then animals use tools. Okay, well, it was not tool use, it's language. And then animals use language. So you keep, you know, postponing what is the actual thing. So on one end, you could be like, well, there's no such thing.
On the other hand, you can still be like, well, I predict that the next generation of humans might be able to build nukes, and I still predict that the next generation of chimps won't. And I predict that no other animal species will. So it's like we have an internal model, we have an internal understanding that there is a difference. So what we must recognize is that we don't have a good understanding of what that difference is. Is it the last difference? We don't know. We do not have a good understanding of this. We can have hints, we can have
intuitions, we can have various things. I will happily share mine right after. But I think the first thing to do is to actually take a step back and be like, well, when scientists tried to define this difference, they failed. And I think it's a very important step in science, which is to realize that we failed to solve the problem. For instance, P versus NP is a very salient problem where we recognize that we have failed so far. And whenever someone comes and says, I have a solution to P versus NP, our first reaction is, well, you're a crank.
And I think we should have, not necessarily to that extreme, but a similar attitude with intelligence. If someone comes to you and tells you this is the definitive difference between humans and animals, between ChatGPT and pre-ChatGPT AI, between ChatGPT and humans, you should be like, well, they're a crank.
This is a scientific problem where we put a lot of efforts in and we failed. Why do you think your intuition, just because it feels like it, is like the correct one? Personally, I think this is a thing that has made discourse around AI so bad, is that basically anyone feels like they have the correct intuition. And I think having the humility to take a step back and be like, I have my guess, but they are my guesses, is like...
quite hard. The other thing is that there are discontinuities and continuums. Let's say at some point you get far enough that you discover, I don't know, oral tradition or writing or something that is higher bandwidth. If you don't get there within a generation or within a tribe level of complexity, well, you never develop it.
And if you get there, then you can use this new bandwidth, this new type of system to do more. Once you have writing, you can have traditions that persist way past the degradation of oral transmission. When you have a clear language with a clear grammar, even before writing, just more systematized grammar, you can have oral transmission that lasts for longer. I'm not saying it was specifically writing or oral transmission. I'm literally just using those as examples.
But what I mean is that you can also have discontinuities that emerge from continuums. You're better at language, and through this and some chance, you get to a place where you develop some oral tradition that lets you do much more, possibly, you know, proto-religions that were impossible for previous species. And this lets you do more and more and more.
And so I think there's also this type of stuff that is easy to forget. It's like, discontinuities emerge from continuums, qualitative differences can emerge from mere quantitative differences. Ultimately it's all physics and quantum waves, this type of stuff. I think for ASI there are similar considerations. You know, if you're a human, you must learn something. Like, just learning one fact as a human, you must do spaced repetition, it's still not that good and so on and so forth. Teaching everyone one fact
is basically impossible, whereas for AIs, doing a fine-tune of all the deployed instances is trivial. And so with ASI you expect that you will have these types of discontinuities from continuums. You can realize some dynamics, but even as you realize them, you should keep in mind and be very focused on what you do not understand. You should make clear what you're confused about and so on and so forth.
I just don't know what they are, and we should not be pseudo-scientific and use pseudo-technical terms to talk about those extra things, because right now our theories only support saying vague things like this. We do not have a strong enough science of intelligence to be that precise.
I kind of agree with you. First of all, you were talking about the nuance of the dynamics of collective systems and systems thinking and emergence. So a lot of people, certainly in the olden days when we thought about intelligence, they thought all intelligence happens inside the brain. And even if we needed to have
some kind of emergent properties that it would only emerge from a baseline, a basis of capability inside the human brain. And that's not necessarily the case. So we might say you need to have Turing machine brains, but maybe you don't because you can build systems of lower brains or systems of LLMs that are Turing complete in some way so that the whole thing becomes a wash.
The problem though is that we have this legibility problem. We need to have precision in regulation, right? And if we're saying all of these things are extra, we can't use the word agency, we can't use the word consciousness, then how do we have the precision and the legibility to actually talk about the things that we need to regulate? When I think about
doing hard things. I think it's very important to separate between hardness, how much effort it takes to do something, and whether something's possible. For me, these are different things. So, most of the arguments of the kind of shape that you're talking about are something like, well, we don't know exactly what, you know, the intelligence thing is, therefore it's impossible to regulate. This I disagree with.
The argument of "we don't know exactly what the thing is, so therefore it's hard to regulate" - that I agree with. I'm sure most people have heard about Big Tobacco, especially in the 50s and the 60s and their massive campaign against the emerging scientific consensus that smoking caused cancer, which of course is bad for business. There's a fun thing where to this day we still don't actually know which chemical exactly in tobacco smoke causes cancer.
This is still not 100% clear. We have many good guesses and so on, but it's not 100% settled. So back in the 60s, the tobacco companies said, "Well, of course, if smoking caused cancer, we want to regulate it. But let's first find out what the chemical is, and then we'll figure it out." And this is the same thing that's happening here. Sure, of course.
We should stop AGI once we've figured out what it is. So you come back to me once you figure that out. And this is just an isolated demand for rigor. This is a completely disproportionate demand for rigor. It's like, sure, it would be great if we figured out the proper metrology of intelligence, if we figured out the proper science of intelligence and, like, boom, here's the thing.
If you do this thing, all problems solved? Would be great, but this is not the world we live in. You have to say, what is the smallest circle I can draw where I'm guaranteed
It's in there. So the more we understand intelligence, the more we can shrink the circle. But if we don't understand intelligence, that circle has to be quite large. I mean, this reminds me of that expression, it's like pornography, right? I know it when I see it. And you're saying it's like intelligence, it's dangerous intelligence, I know it when I see it. So it feels like an appeal to our intuitions. "I know it when I see it"
is talking about something that is grounded in reality. And so in regulation, you can talk about things that are grounded in reality. For instance, you can say, let's ban all projects that have the explicit aim of building AGI or ASI. This is not a thing that you can measure, but this is a thing that a judge can decide on. This is why we have a judiciary system and why not everything is done in laws through the legislative systems. Like,
For instance, we can mention fraud, we can mention deception, and so on and so forth. They're not fully grounded definitions, but there's still a judge that can come and be like, yep, that's it.
So first, that's like one thing that you can keep track of in reality, which is just the intent of people. We have ways to talk about the intent of people in the law. It's normal; laws are for people. So if we could not talk about this, we'd be pretty screwed. And we can leverage this. So for instance, we can talk about compute and be like, yes, let's ban training runs
beyond a certain threshold of compute and so on and so forth, but just compute by itself is not ASI. You can have ASIs under thresholds of compute. If you just limit compute, what you're doing is funging the resources that would have gone into compute into algorithms or into more theory and so on and so forth. So you must actually go wide. You must look at what are the different precursors to superintelligence. So it can be like, I'll ban large amounts of compute,
or ban this type of open-source models. We can ban automated AI R&D, so just AI systems improving themselves without any human in the loop. We can ban systems that require less than 10 days of effort to jailbreak. We can ban fully online systems
where there's literally no kill switch, no gate, nothing like this. And there are many things that we can do. And that's what it looks like to be in a pre-scientific field. It does come back to this pornography thing again a little bit though. So you're saying that even professional ML researchers, many of them, in fact,
almost half of them think that there is a very significant risk of existential harm, and average people out there also think there is a significant risk of existential harm. But the average person, I mean, how are they basing that assessment? I think, I mean, it depends on the individual of course, but I think there's a very common intuition: if there's something and it's smarter than you and you don't control it, that's bad. And it turns out that's a really good intuition, all things equal.
No, you shouldn't build something smarter than you that you cannot control. The burden of evidence should be on why is that a good idea? Why should we give you this extraordinarily unusual license to take this completely unprecedented amount of risks with civilian lives that did not consent to be part of this? Why should we give you a license like this? It shouldn't be, you prove to me that I shouldn't do this.
The thing I'm wrestling with, it sounds objective when we use these percentage numbers, but Sayash Kapoor put a piece out about this in his AI Snake Oil blog, and he was saying that there's subjective probabilities. Now with the smoking thing, I'm not sure whether at the time there was evidence, there was hard evidence that it was killing people, giving people cancer. Maybe there wasn't, but isn't it a little bit more speculative now that we think that there is a 10% risk, but how is that actionable?
And think about it: if I move in next to your house where your kids live, and I go into my garage and I start making napalm or start brewing, you know, home-made, you know, pipe bombs. What is the probability that I will blow you up? Well, you don't know. That's just your subjective probability. You're just being subjective about whether or not I'm going to blow up your children. You don't have an objective probability of how likely my pipe bombs are to blow up your children. This is how criminal law works. This is how risk assessment works.
There are no objective probabilities of how likely you are to have an accident, how likely this will happen or that will happen. We just use the best models and judgments that we have.
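One way to make that concrete: a subjective probability still plugs into ordinary expected-loss reasoning, which is all a regulator or a court ever has. A toy sketch with made-up numbers, not figures from the Compendium:

```python
# Toy expected-loss sketch with made-up numbers (not from the Compendium):
# a subjective probability becomes actionable once you attach it to the stakes.
def expected_loss(p_event: float, cost_if_event: float) -> float:
    """Expected cost = subjective probability of the event times its cost."""
    return p_event * cost_if_event

# Hypothetical figures: a 10% chance of an outcome costing 8 billion lives
# still carries an expected cost of 800 million lives.
print(expected_loss(0.10, 8_000_000_000))  # 800000000.0
```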
But this comes back to the legibility thing, right? So, you know, if I threaten you with a gun, it's intelligible that I'm threatening your life. If I slowly poison a lake over the course of 50 years and there are all of these second-order effects, it's illegible, right? And I feel like it's the same way with AGI, because you're kind of saying basically that there's a catastrophe. Like if we don't act now, there's going to be a catastrophe. But all of the other people out there, they're saying, well, you know, wait a minute, guys. I
I'm building these amazing applications with these LLMs and I'm doing all these cool things and there's economic growth and it's capitalism and this is an amazing thing. How do you explain to those people that this is actually very dangerous? I think there's a very nice thing here, which is we're not in that world. We're in the world where the people building the systems, like the ones at the frontier, like Demis, Sam or Dario, have actually said that there are extinction risks.
They actually signed a statement in public saying there are extinction risks from AI and it should be a global priority alongside pandemics and nuclear war. I don't understand why we'd grasp with a hypothetical that is grounded in nothing given that this is not the world we live in. We have Geoffrey Hinton who has left Google specifically to talk about these risks. We have Yoshua Bengio who talks about those risks. So there are a few people who disagree.
The point is not that this is unanimous, but the point is that we're not in the world where people working on this are just completely denying everything. You mentioned Dario Amodei. I mean, he put a piece out recently called Machines of Loving Grace. And that felt like a bit of a U-turn because he was much more upfront about
the risk from AI earlier on. And I can't help but get the feeling that now they've got so much investment, you know, you kind of have to say the right thing. And on the other side of the spectrum, you've got people like Andreessen, and he put out this techno-optimist manifesto.
And in your compendium, Connor, you actually, you wrote some profiles about all of the different, you know, reasons why people behave the way they do. Maybe, maybe like bring in those, those two manifesto pieces, but also how does it fit into these profiles? On the individual level or like close to the individual level, there's like,
I think it's sometimes useful to think about like five major ideologies that are currently like involved in the AI race in particular. The kind of like oldest that is like pushing for this is what we call the utopists. These are people who, for various reasons, think that transhuman utopia is both desirable and achievable. And in fact, they should be the one to do it.
So this underlies a lot of your Darios and your Demises and your effective altruists and stuff like this. Another view of this, which is the Andreessen view, which seems similar to this, but I think it's kind of different, is the accelerationist view. What they're saying is, more growth, more gooder. More technology, more gooderer.
So we just make more technology and put it there and then good. Accelerationists don't even try to be king of the world. They're just like, we just build all the technology and open source it and then good.
There's other factors involved in this as well. Another big one is big tech. For them, AI is just another form of power, another form of, you know, gain. Nothing particularly out of the usual there. And of course, there's also just a bunch of opportunists, you know, who are just kind of in it because it's hype, because, you know, crypto is not the thing anymore, so they have to go somewhere. And like, this is the thing.
Okay, and essentially all of these folks believe that whoever gets to AGI first will win? Depends, right? Like everyone thinks this to different degrees, right? Like a lot of the accelerationists believe that the person who has to get to it first is nobody. You have to open source it. And the utopists think, well, if I get AGI, everything good. Big tech,
I don't think really thinks that way at all. It's just like power, power, power, power, power. And the opportunists, I think, don't think at all. I think it's just like whatever makes them money. I don't think they believe things. People like Andreessen, even sat here the other day, he said, you know, my benchmark of progress is growth.
Undeniably, over the last 100 years or so, there's been at least a correlation with growth and increased standards of living and whatnot. So it's a reasonable inference, even if possibly an incorrect one. What would you say to those people?
Many things to say. The first one is just fertility has been crashing for the last few decades, even though we still had growth. So I think the model is pretty bad. There are many things that have worsened, even though we still had growth. I don't think this is an actual intellectual position. I think this is much more driven by ideology.
I'm interested in labour market disruptions as well. I know that's a bit more of a near-term risk, but I think it was Buckminster Fuller and quite a few people, as you've just been alluding to, said that when we have this technology, we won't need to work very much anymore. But now we're in like
an even weirder stage, right? Because we have the prospect of the value of human labor going to zero because AGI will be able to do everything. And then we've got this notion of, well, do we need universal basic income? And are we heading towards a form of zombie capitalism, right? Where people don't have anything to do and people have no purpose in life. So I think there are like three main outcomes that different people will expect. I think one is extinction risks. This is the one that
Connor and I expect is like the main line on our current course of action. Of course we do things because we think it can be changed but it's like right now it's the main line. The other one is we have somewhat controllable, somewhat superhuman intelligence and the last one is the one that you just mentioned which is post-scarcity. And the true thing is like our science of economics is not good enough to manage this. Basically the two proposals that I often hear are one like
communism, like people being like, yep, the state will just dictate what everyone has and the state manages everything and so on and so forth. The other one is UBI, so universal basic income. UBI in practice means, you know, the AI companies produce everything and you tax them, I don't know, like 90%, and you redistribute this as income to people, and then people have some market dynamics
where we have strong enough regulation that AI companies do not just strangle people. And like, have you lived on Earth in the last few decades? The last few decades are the decades where big tech has not been regulated at all. With social media, they just could do whatever the fuck they wanted. Newspapers are like super regulated, radio super regulated, TV super regulated; social media, much more powerful, algorithm-mediated, no regulation whatsoever. With AI, no regulation whatsoever, voluntary commitments.
So do you think that in five years you'll have the political power, when they have the entire economic power, to be like, well, now we'll tax you and we'll make you give money to everyone? Like, bro.
How do you have a good economy when you have a few companies that have overwhelming power and you do not have leverage through your population? Or rather, the population has no leverage over you and you just get lobbied by them and you've already failed for like 15 years. It's like... I mean, Yanis Varoufakis used the term techno-feudalism, you know, where the new capital is GPUs and we have this insane concentration of power. The weird thing to me is that a lot of accelerationists, they kind of...
accept that this is the future yet they still welcome it. Why is that? Ideologies are like a way of thinking where you just want a simple solution and you're trying to justify why this simple solution works. You're not looking at the world and being like, from the current world, what is my portfolio of solutions that I must,
you know, approach and refine and so on and so forth. It's like backward thinking. You start from the solution and then try to justify it as much as possible. I think there are other reasons. The biggest one is just a big misunderstanding of humanist and Western values. The West has triumphed because it built like fucking strong institutions.
The reason why you can have nice competition that is productive and so on and so forth is because we built an entire apparatus that was meant for this. That's the entire point of liberal humanism, the 19th century's doctrines and so on and so forth. If you don't have rules, you don't have competition. You get the state of nature, you get street fights, you get Somalia. These are not nice places. People prefer watching MMA, and people prefer the West to do business.
So you need a very strong rule of law, you need very strong institutions that will enforce contracts, that will own the negative externalities and make sure that you can compete, that you will not, you know, cripple your opponents, that you will not inflict harms on everyone else, and that the rules will be such that when you compete, you do something beautiful, you do something productive and things like this.
This understanding, I think, has been eroded by massive FUD campaigns and massive lobbying campaigns from VCs, techno-capitalist, techno-optimistic VCs, where they just FUD everything and they just finance anyone that will say those things. As for the spearheads, the spokespeople, I don't think they matter much. I think they're mostly driven by power and ideology. But for all the people following them, I think those people just forgot what made
humanism great, what made Western institutions great, what made our, you know, constitutions last for 250 years. It was that we actually managed to strike a pretty good balance between, you know, fierce competition and a strong frame that leverages this.
Yeah, I mean, just to play devil's advocate a little bit. So the libertarians, they also want to have laws, particularly things around the protection of their private property and many others. But they would say, and I understand that the power lens is a great lens to understand things, but a lot of libertarians don't have power. And they say, well, you know, the natural world is distributed. It's robust.
And open source is a good thing because we'll build AIs that can counterbalance each other and the system will be better overall because we'll have this kind of freedom of enterprise and private capital and so on. What's your response to that?
Nature is not a beautiful balance. It's red in tooth and claw. Nature is full of parasites and violence and cannibalism. And like, it's terrible. Being a wild animal is terrible. The idea that nature, like existing in nature, is like this like nice, beautiful, everything is good, everything is careful, is such a privileged, like first world problem view of reality. It's like growing up in like a sheltered suburb
which is an incredibly complex system, which is optimized to give you as much comfort as possible. I mean, like, wow, I don't, I mean, I'm basically a wild animal. Look at me, like, wow. Like, see, the world's easy. Why is everyone complaining? Like, many of these people come from a very privileged background, right? Many of these libertarians are not hardscrabble, you know, people born in third-world countries who, like, fought their way up or whatever. You know, some of them are, and, like, respect to them, right? But, like, many of them are suburban white kids.
Let's be very clear here. And this is not a coincidence. This is not a coincidence because the truth is that if you talk to many people who've been in bad regimes, who've been in failed states or whatever, they care a lot about laws.
They care a lot about order. Like, way more than Westerners or Libertarians would. Often many of these cultures will have extremely, like, too far, maybe, rules about law and order and obedience and authority and whatever. And this is also not a coincidence. It's because chaos is terrible.
Chaos is truly, truly, truly awful. I think this is the fundamental difference between me and these libertarians, is that deep down, I think chaos is bad. Like, the state of nature is entropy, chaos, and death. Everything that is not that is an exception that we must fight for, something that we must build, and that is very, very hard. Like, there's a saying that there's no such thing as chaotic good. I think this is true.
I think there's three alignments. There's lawful good, lawful evil, and chaotic. Those are the only three that exist. If you're chaotic, if chaos exists, if there's no rules, you can't be good. It's too chaotic. It's too whatever, right? Systems will just decohere. If you have order, you can have evil.
Or you can have good. You know, you can, like, order can be evil, obviously. But you can't have non-ordered good. As I like to say, it's like, you know, libertarians are like house cats, completely dependent on a system they neither understand nor appreciate. And like, you know, sorry to be harsh here, but like, this is like truly what I think. It's just like, for me, the humanist values, the institutions, the West are an incredible anomaly. This is not the default state of the universe, and we can lose it, and then that's it.
I mean, so I think you're advocating here for ethical humanism. And they might, I guess, argue that that's quite anthropocentric, because what you're saying here is that we shouldn't fall back on the natural order of things. You're saying that we have some kind of privileged status
And we have created systems of governance and institutions and so on, which have served us very well. And if we just let the system rip, if we trust the void god of entropy, as you so beautifully articulated, Connor, then everything's going to go to hell in a handbasket.
I think this is a pretty core observation. Yeah, we think that what's important is for humans, and to some extent sentient beings, to flourish, to erase suffering, or at least forced suffering. We think that everyone deserves to pursue their happiness and to grant them the ability to actually reach it. I was at a conference and I had a talk, a conversation with like a guy from San Francisco. It was like batshit crazy.
And the guy was like in a very bad social circle, where at the end of the conversation, he was like, oh yeah, I do not care for children. And even the children close to me, I don't feel more affection toward them than to a counterfactual non-sentient being. And I think that to the extent that we privilege the happiness of existing humans and postpone AGI, then that's bad, because we're like,
preventing trillions of non-human happy beings from existing. And so it's like, fuck the preferences of existing humans and also fuck them because they have no power so they don't matter. And after that, there's a San Franciscan girl that comes to me and is like, "Oh, what did you talk about?" I was like, "I was talking about how I dislike anti-humanist ideologies in SF."
Then she was like, oh, humanism. Is this the thing where people want to live in human bodies forever? And I think this is basically the understanding of a lot of those people of humanism. They don't understand human values. They literally do not have an understanding of human values. I would not call this anthropocentric. This is stupid. There is no one else. Like there's only us and a couple of other sentient beings. So either you care for the people and the beings that are here, or you care for like
non-existent things. So Connor, you said in the compendium basically we've got this problem that the foxes are guarding the hen house, and what I mean by that is, I mean, Dario, he went on record and he said, oh yeah, you know, we can self-regulate. We've got all of these levels, these threat levels, you know, at threat level three we do this, and OpenAI have similar things. That seems like a good thing, right?
Well, I mean, so I've recently been talking to the top petrol engineers at ExxonMobil, and they have assured me that their climate change readiness preparedness framework system will ensure that their new oil drilling well will totally not be a problem. Oh, well, they know a lot more about oil than I do. I mean, they're the petrol engineers, right? So, like, they must know a lot more about this than I do. They probably know more about climate change than I do. Like, you know, I'm no
climate engineer, like, I don't know. So, so what's the problem, Tim?
Yeah, but the thing is, on the surface, I agree that they might suddenly change tack. So OpenAI, next month, they might say, "Wait a minute, boys, we're going to start charging $4,000 a month for this, and now normal people can't afford this technology anymore." Because right now, looking at what they're doing, they do seem to red team. When you use ChatGPT, it seems aligned to some extent. It won't do illegal stuff. And it seems like they are genuinely making an effort.
We could say that ExxonMobil is making an effort to stop climate change. And like there are genuine arguments to be made here. They do spend probably millions of dollars on climate change prevention. And they finance scientists and like they reduce their emissions in like various ways and whatever, right? So like should we therefore be like, all right, climate change is not a problem anymore. ExxonMobil is on the case. Why would you a priori even listen to this? Why would you even entertain this?
Like, where is the cynicism here, right? Like, where is the journalism here, right? Like, where, like, if someone who's like, hello, my job is to lie to you, and you're like, well, this seems like a trustworthy guy, I should listen, I should hear him out. You should be like, no, I'd like, why would I listen to this guy?
Right now, there is a framing of alignment that has been done by the people racing, which is basically PR washing. It's like, alignment means my users like my products, which is completely unrelated to how AI alignment was coined,
which was aligning AGI or superhuman intelligences to human values such that when they act out in the world it's with the best interest of people. This thing has been a PR campaign for the last 10 years basically
And right now when people ask this type of questions, it shows that they won. It goes even deeper. OpenAI did a 1984 by talking about the OG alignment problem as super alignment. No, it's just alignment. That's literally what it meant. Having nice chatbots was not alignment.
a big sub-problem of alignment is tomorrow you have a super powerful system that has more power than the rest of the world combined. There is now a global dictator. What do you have this global dictator do? And right now we don't have any answer to that one. Our best answers are like Western Constitution with possibly a few bells and whistles. But even then, if we're like this is now the world government forever, we'd be like super afraid.
because we just do not have a good answer to that question. This is like the general alignment problem. How do you make a complex system that is powerful aligned with human values? And this has so many sub-problems that we haven't solved yet.
Studying complex systems, studying systems more powerful than humans, which ties back into the discussion of intelligence and the continuum. Thinking about how you can make human values more objective, thinking about governance, public choice theory, game theory, sociology, psychology. It's like many, many fields that we
It's not even that we haven't solved them, it's just we haven't scienced them yet. We do not have standard definitions, objective measures, systematic indicators of progress for any of those fields. And so when you talk about we've made chat models that people like, I'm like, but this is not solving any of the problems I care about.
This is not solving any of the worrying shit. If tomorrow you have a system that is more powerful than the rest of the world, what is the type of stuff that you want it to do? We don't feel like we know meaningfully more about morals, public choice theory, governance, institutions, and any of those pretty core topics than five years ago when GPT-3 started or 10 years ago when we had the boom of deep learning. And so if tomorrow Anthropic, OpenAI, DeepMind start making progress on all of those questions,
They're like, "Whoa, those are tractable solutions. Let's integrate them to our governance structure." And we start feeling more confident about the deployment of more and more powerful systems. They're like, "Wow, this will be real progress."
And we will see it in real life. We will feel more confident when there are more powerful systems happening. And the thing is, I know many alignment folks that work in the large labs, and they're very bright, they're brilliant people, but there is a kind of safety washing, where we've been gaslit into thinking that friendly chatbots equals alignment. In the compendium, you basically said that it's really hard.
It's a moral and philosophical challenge as well as just a technical challenge. The current approaches are not on track. There's no automatic fixes. It's a high stakes gamble. And you proposed that we need to have a kind of Manhattan project to fix this problem.
What it would mean to solve alignment for superintelligence would mean taking all of the problems that people solve in their day-to-day lives and you're going to solve all of them with software and there will be no bugs and produce a world that is worth living in.
And I'm like, holy shit. If you can't even put together a Manhattan Project for this, right? If you can't even put together all of our greatest scientists working on this for a generation, you're not going to make it. Like, you're not even trying. We want to understand intelligence. We want to understand coordination. We want to understand alignment. These are obviously problems that are extremely relevant to our species' long-term happiness and, you know, having a good outcome. But they're also very dangerous. To address those problems, to solve those problems,
you need to have an institutional solution. You need to have an institution, a group, a project, whatever, that can possess the knowledge to build an unaligned ASI and then not do that. This is the important property that you need. If we're talking about how do we get to a good future with aligned ASI, on the path to getting to an aligned ASI, way before that, you get to a point where you can build unaligned ASI. And you have to then not do that.
Currently, our civilization does not have the ability to not do this. This is why we die. Could we build an institution that is cut off from the rest of the world, has super high cybersecurity, is extremely cautious, is staffed by our greatest geniuses and philosophers and so on, that takes decades or even centuries to work on this problem? Could that work? Maybe.
Like that's the kind of thing where I'm like, all right, you know, that could work. This seems like the minimum thing you would try. I want to get to this point of why alignment is so difficult, right? So naively, and we need to get to the complexity here, naively, you might just think, I've read Isaac Asimov and, you know, he was painting this picture of how we can use these rules to basically guard the behavior of AI robots. Do this, don't do that. Why would that not work in practice?
I think basically dumb ideas are ideas that refuse to engage with the complexity of the world. Here, if you do engage with the complexity, you're like, what is a set of rules where I would feel confident if a super powerful system had those rules? I think it's a very good way to think about the problem. If tomorrow we have a world government, what is the constitution? What is the set of laws that we want this world government to have?
I don't think we have a good answer to that. If we had a good answer to this, I would be much more optimistic about solving alignment now.
Not in general, I think we can solve it in general, although we have a tight deadline, but specifically now, with the current deadline. So if we had this answer, let's say tomorrow we have a super powerful system. What is the set of rules where, if we're guaranteed that the system follows those rules, we feel good about it? The only constraint being that we understand the rules. You cannot just ask it to do whatever is good. You have to actually understand the rules. So take the rules of the laws of robotics: do not harm people,
like directly physically harm people, we can understand that one. That's one of the problems. It's like this is just genuinely hard. The second one is that actually the first problem is trivial. You can just say do nothing, boom, solved. But this is not enough. One of the big things
is that you want to avoid bad things from happening. You still want a system that can deal with the other risks that are here. So for instance, someone else building misaligned ASI is like the most common one, but you could also say World War III. You could also say bioweapons. Your system must not do nothing. It must deeply interact with human society in a way that will radically change its future. That's why we want AI.
ASI. What is a set of rules for a system that is so powerful that it can radically alter the course of humanity? What is a set of rules where you're confident that the system following those rules ends us in a nice place, not with someone else building a misaligned ASI, so your system cannot just do nothing or be super limited or just do cancer research and so on and so forth. Once you have those rules in a way that is
reliably legible to us, you know, it's not using words like agency where we all disagree on the definition, but it's more like directly physically harm people to this level. So it's not an objective physical thing, but it's still
very legible, we can assume that the system will be able to evaluate it. Once we have those rules, then there's the technical problem of have the system follow those rules. The good thing about rules is legibility. I mean, this harks back to symbolic AI, right? There were folks who tried to design AI systems using hard-coded rules of knowledge.
It was great because it was intelligible, right? But the problem is it's brittle, right? How do you define a chair? Just like the blind men and the elephant where all of the men have different perspectives of a bigger whole truth. So the world is a very gnarly complex place. And it turns out that there is no consistency with this.
So then if we want to have a framework of governance with any precision, we start galaxy-braining ourselves. We say, well, maybe we should look at the intention, the consequence, the function, the dynamics, the behavior. In all of these different things, we're trying to make surface contact with reality and they all have different trade-offs. This is why I said, if you have a system that has this level of granularity, where you're allowed to use such concepts and your rules invoke such concepts,
I'm still hopeful. If we had a constitution that uses concepts up to that level of non-objectivity and we're confident that this constitution, if it becomes a constitution of the world government, leads to a good future, I will be much, much more hopeful for alignment. It doesn't mean that technical alignment is a nothing burger.
because, for instance, you know, the chair example or the direct harm example, you can always go for edge cases. And so if you do not protect yourself against edge cases, you can get adversarial attacks and so on and so forth. So you will want some technical AI safety that says, well, I only look at the mode of the distribution; if I get into edge cases, I will avoid combining
many edge cases or I will be much more conservative and this type of stuff. But I think this is a much more tractable problem. If we have the correct rules, now your job is to make an AI follow them reliably and not get trolled.
I'm much more optimistic. Yes, but of course they still need friction with reality. Something that came to my mind is I guess there are different forms of alignment. I mean, similar to in the real world, you can control people through motivation. So a carrot and a stick. You can place guardrails. So you can say if you go out of bounds, then something bad happens or even just guards in general. I can put someone in prison and say you're not physically allowed to move out of these bounds.
Do those ring true to you? I mean, how many approaches are there to aligning these systems? AGI alignment is just a special case of general alignment. It's like, how do you have a complex system and do what you want, right? The original meaning of the word alignment, as it was originally used, is often what would now be called intent alignment.
is you build a system that wants to do the right thing. There are other forms of alignment. I think a common confusion that people have around alignment is that they look at GPT and they're like, "Well, look, it understands human values." This is mostly irrelevant. It's not completely irrelevant, but understanding is not the same as wanting. We don't exactly know how a future AGI system will make decisions. How will an ASI's decision-making system work?
I don't know. I think it will have something in common with current systems, but many things will be very different. And how will you causally interact with that system or modify or shape that system in a way where it consistently produces outcomes that you would endorse morally? I think this is the question, right? There are degradations of this, right? Such as, for example, corrigibility.
Corrigibility is this idea of building systems that allow themselves to be modified or shut down. There's stuff like control, where a system will do exactly as told even if it's bad for you. These are simpler versions of alignment, because alignment is, to a certain degree, the more enlightened one.
And as for how exactly to achieve these things, it depends on the level of abstraction we're talking about. I don't think an AGI system is going to be one coherent blob on one specific computer chip. The same way that I don't think
thinking of a human as one specific blob of neurons in one specific brain is necessarily the best way to predict what they will do. If you want to get humans not to do certain things, or to do other things, you usually have to think at higher levels of abstraction. You have to think about laws, you have to think about groups, norms, culture, you know, physical threats, policing, stuff like this. And I think which of these methods work, and which ones are even
options is super context dependent. Like when we talk about how we enforce norms on humans, we use different concepts than when we talk about enforcing invariants on computer programs, right? Because we have different tools and we're trying to achieve different things. And I think we don't know what the right abstractions are to talk about AGI or ASI in general. Like a lot of the concepts we use to talk about humans, you know, when we talk about policing or we talk about incentives, God forbid, or we talk about
threats or whatever, right? How many of these will generalize to ASI systems? I think it's unclear. I think we don't know. And there's no economics or, you know, microeconomics of superintelligence. There's no public choice theory of superintelligence, etc. I think it could be developed, but we don't have it currently. I will say we don't even have it at the human level. So take alignment, which is having a
system act in a way that is aligned with human values. We don't have a reliable way to do this. So you mentioned a carrot and a stick. This doesn't get people to act in a way that is aligned with human values. This is closer to control and still a limited form of control. You know, principal agent problems, incentive problems, misaligned incentives problems, perverse incentives, all this type of stuff. So this is not even a solved problem. So there is alignment. You have a degradation which is corrigibility, which is having systems that
can be corrected. This is basically one of the great things about Western constitutions: they can be changed. They plan for referendums, they plan for laws that can be changed. This is what they're about. Whereas if you think about a lot of historical religions or ideologies, they're like
set in stone. So by construction they don't have the corrigibility property. They're meant to be fixed forever. So usually we don't go for alignment because we know we don't have it. So in practice the things that work are not like that. The things that work are more corrigible. Even when we pass laws, we do not really expect that they will be aligned with human values. We expect more
trial and experiments. So instead, if you think about it, the main focus of the constitution is not even on corrigibility, it's on what I usually call boundedness. So just make sure your system doesn't have too much power. This is the entire spirit behind like democracy, republics, checks and balances,
having many different representatives, decentralization, having the town guy, the mayor, departments, councils, regional councils, many ministers, and so on and so forth. It's much more a form of boundedness, where we make sure that each system is limited enough so that none can take too much power, than actual corrigibility, and much less alignment.
We talked about markets. The way we do markets is with strict boundedness. We're not aligning people with the common good. We're just creating conditions, very harsh conditions, such that when people compete there, we get some outcomes that are aligned with the good. But this takes a lot of effort and sometimes this fails.
And right now we do not have a great theory of either of those. Take the example of growth and the fertility crash: even though we still have growth, fertility is crashing. So what gives? Another example is big tech. You know, we did boundedness, we were like, we'll split up power. We have antitrust laws, we have all this type of stuff.
And yet for the past 15 years, you have big tech that can just do whatever they want with all the crazy new tech, social media, social media feeds, AI, and not be regulated in any way whatsoever. This is both a failure of boundedness and of corrigibility. We cannot correct them. They do not let themselves be corrected. When you start talking about regulation, they lobby hard, they FUD. This is like anti-corrigibility.
So if we had a system that was resilient to like all of those failure modes, at the human level we could start thinking about alignment and I assume that if we had this we would have a grounded enough understanding of alignment that we can start talking about super intelligence alignment. So it's not just making sure that current companies don't fuck us over, it's making sure that if you had a company that had a billion times its current power, it still works and doesn't fuck us over.
It's like we're so far from this. And when someone is like, "Oh, alignment is just an engineering problem and we're making tractable progress on it," I'm like, "Where is the tractable progress?" If you have solutions to these types of problems, tell us and our institutions will be much better. Even smaller problems like corrigibility, boundedness and so on and so forth have direct applications. It's just, we suck at them. We don't have a level of alignment refined enough that our institutions actually work and are aligned. We don't have a level of corrigibility, much less alignment, refined enough that when our institutions veer off, we can reliably course-correct.
We don't have a level of boundedness such that you don't get one of the richest men on earth and one of the most politically powerful men on earth working together and just colluding. One thing I want to get to with the power thing, though, is that it becomes so abstract. You know, this is the way with a lot of companies and governments: they start using such abstract language, like power, safety and so on, and it doesn't mean anything. And if we were to regulate against power, we would have to use proxies along the lines that you suggested. You know, how many ministers do I have, or how many people do I have?
And as soon as you start using proxies, then you have the Goodhart's law problem. This is why alignment is hard. It's like we're just not good at this. If you ask me, hey, Gabe, how do you design a constitution that does not fall prey to Goodhart's law? I'm like, hey, man, you know, people say I'm smart, but I'm just me. I'm not at that level.
And I think this is what I mean when I say like owning where you are. It's like collectively, we're just not that good. We don't know. The best thing to do, from my point of view, given that we don't know, is one, stay alive, and two, start experimenting, build your knowledge and iterate. We suck at this. We should iterate much more. One of the typical things about governance is that no one wants to iterate.
You know, all ideologies are like "just do my thing": remove all regulations, kill all capitalists, kick out all immigrants and so on and so forth. Like you don't even need this, you can experiment. France, since like 15 to 20 years ago, has a thing in its constitution where you can experiment. It has enshrined in its constitution the right to test a law locally and, if it works out well, you know, extend it. From a point of view, if you think about like what 21st century governance would look like
embracing the complexity of things, embracing our current state of knowledge, it will be much more things like this. It will be things like where you experiment in a bounded way, you see what happens, you draw lessons from that, you try to take those different problems, ground them, build standard definitions that are more objective than what we have, build objective indicators, reliable measures of progress, scientific methods that can deal with a reality that is more complex than just physical systems or
reproducible systems like simulations and things like this. And this is a whole scientific program that should be undertaken. There's also this problem of legibility to the demos. During COVID, do you remember, it was quite an interesting time that we had all of these scientists coming on the TV and they were giving us very high resolution information about the COVID pandemic and so on.
And you know, the problem with high resolution information is it's inconsistent and it's illegible. People didn't understand it; they simply couldn't make sense of it. And that's why in a democratic system we have this kind of hierarchy, right? We have the experts at the top who are at a high resolution, maybe, maybe not. - It's more technocracy than democracy.
Well, you appreciate the point that it's beyond the cognitive horizon of most people to actually understand; these people are busy, they're working, they've got stuff to do. So, you know, maybe we should have an elite group of experts who understand things at a higher resolution and they should institute governance and so on. What do you think? The people at the top are not supposed to be experts.
They are supposed to be representatives, and as representatives, when there are things that are about a domain of expertise, they are not supposed to take sides. They are supposed to take the consensus, and if there is no consensus, they are supposed to take a portfolio of approaches that takes into account the different beliefs of the different experts, which is very different from, I don't know, having a scientist as minister of science who just pushes his own opinions.
So I think this is a very technocratic view of democracy, which is not the one that has worked in the West historically. My point, though, is that you are describing this epically inscrutable alignment challenge. And I'm guessing that only a small group of people would be able to wrestle with that. Is that fair?
So I think this is a very good question. I think it's very important. I think, for instance, this is a part where I disagree a lot with people like Eliezer Yudkowsky. For instance, Eliezer talks a lot about making people smarter and things like this.
I personally do not think this is the bottleneck. I think the biggest bottlenecks, and I will name three, are traumas, compulsions and trolls. Traumas are things that people want to avoid. For instance, to work on the important problems, you have to do things that you dislike. For instance, talking to people. A lot of researchers or engineers hate talking to people, and this is causally why they discard governance and policy. A lot of the things that are necessary to solve alignment involve talking to people.
And you'll need to do experiments with real world people, with real world groups. You'll need to talk with people who disagree with you, and you will have to not assume immediately that they're stupid. And in practice I have found this to be a much bigger bottleneck than "Oh, I would have loved for this 130 IQ person to have like 60 extra IQ points or whatever." So that's one, traumas. The other one is the opposite, compulsions.
So you have a lot of smart people who are extremely compulsive. We had people who came to work on safety, they had a specific approach in mind, we did some of that, we were like "Well the approach didn't work, let's change" and then the person feels bad because they just wanted to work on this approach, not necessarily on the best one. This is what I mean by compulsion, it's like the opposite of a trauma. A trauma pushes you away, a compulsion pulls you forward.
And the last one is trolls. Ideologies are like great trolls. Like modern ideologies that want to simplify everything will always tell you that a huge chunk of your humane values are like wrong and that you should discard them. Same for cults and things like this. It's things that are not even your compulsions. You're not even drawn to them. It's just they fuck you up.
And so in practice, I found that the strongest bottlenecks are people with trauma, people with compulsion and people getting trolled. And I think the solution to this is not higher IQ. With people that are smart enough to reliably follow rules of various complexity, which I think includes at least, like, two to five percent of the population, I think for those people, you can create
more advanced scientific methods, more advanced constitutions that they can reason about and through those methods they can solve those problems. I think the bottleneck when you design such a system is how to avoid people being traumatized and guided by their trauma, guided by their compulsion and trolled. This is for instance a big thing about objectivity. When we introduce objectivity in science, it's not that objective things are the only things that matter.
There are many, many other things that matter. It's just that when you have something that is objective, it's much harder to get trolled. You don't have bickering about definitions, it's just objective. When you have something that you can touch, someone cannot lie to you, you can just touch it yourself. A big part of the scientific method is not about saying that interpersonal dynamics, interactive dynamics, and so on and so forth do not exist. It's about finding methods where when we follow those methods, we do not get trolled.
And I think this is a much bigger bottleneck to solving those problems than, I don't know, higher IQ or having a hundred experts gather together and so on and so forth. I think this is a wrong view of science, which is an artifact of our current system where the best that we can do is do a sandbox, you know, for startups and labs and...
you know, give them enough money, both in the public sphere, you know, with research labs, and in private industry with like VCs, and hope that there are some who win and do epic shit without any theory for a while. I mean, maybe we should move on to your proposals a little bit, but you guys are basically proposing that we need to have this grassroots movement.
It seems, I mean just being honest, it seems ambitious, right? Because it's a collective action problem and there's this legibility problem and we need to build strong governance and institutions along the lines that you've been proposing. This seems intractable. Hard or intractable? Well, you tell me again. Well, I think it's hard. I don't think it's intractable. I mean, otherwise I wouldn't be doing this. Like, from my point of view, saving the world was never going to be easy.
And if you thought it was, welcome to the real world. You know, the real world is actually hard. We have to actually wrestle with the fact that the real world is complex, the real world is actually hard, and a solution that just gets around that is not an actual solution. And all these ideologies that are simplifying the world into some simple thing: all you need to do is X, all you need to do is more growth, all you need is open source everything, all you need to do is communism, whatever, right? All of these are fake and wrong. Like all of them. And the reason
is because the world is actually complex. It's not infinitely complex, you know? If it was infinitely complex we wouldn't even have physics, right? Like everything would just... there would be no people, right? It's in fact not infinitely complex. We have in fact built many great things, you know? We are in this, you know, slightly underheated warehouse right now, but, you know, at least it's mostly dry. We live in an industrial society, right? Like we have, you know,
medicine, we have mostly quite safe streets. We live in London and, despite the reputation, it's mostly fine. And these things are not a given. It's the same thing as we were talking about before, about how chaos is the default. I think we could have had the same conversation a billion years ago, where we're just like, "Atoms, why do you think that coherent cell walls are possible?"
And that seems really hard. Like, look at all these forces in the universe. And you're saying there's supposed to be a cell wall that can hold these things at bay? That seems unlikely. It seems intractable. And my answer would have been, I don't know, like, why are we even talking here? We're molecules. In the modern world, the way I see things is there are actually tractable ways to make progress. One of the things is just, like, we are not currently in World War III.
We have, you know, at least 2 to 5% of the population. In my opinion, honestly, way more people than that are more than smart enough to understand these problems. Like I think the percentage is actually higher than this, personally. I think there is a difference between understanding the problem and making progress on the problem. That's very fair. So 2 to 5% who can make progress on the problem, which is a lot. Like there's a lot of people, right? And in terms of understanding the problem, I think most people can understand the problem to the degree necessary for democracy to function.
The whole point of democracy is kind of what Gabe was talking about here is that democracy is a system where not everyone has to be able to solve the problem. That's the beauty of the system. If 50% can at least understand the problem and 2% can solve it, that's okay. You still solve the problem. And that's really great. There is nothing physically stopping us from just
stopping, right? Just don't build AGI, right? Like governments tomorrow physically could just go to all the people and say, knock it off. This is now illegal. Now, whether it will actually happen, that's a very different question. I'm sure Gabe has a lot to say on like, what are the next proximal steps one would take towards that? But I want to make it very clear that there are worlds where it is actually not tractable,
where there are concrete physical barriers. Like, this cannot be done. For example, worlds in which ASI already exists.
I think there's some small probability that somewhere on a server a proto-ASI has already been spun up and we just don't know it yet. I think it could be the case, like a 1% or 2% chance. If that is the world we currently live in, it's too late. How do we win? And when I say win, I don't necessarily mean a yes or no binary. There might be gradations of winning, but how do we win?
I think the biggest shot that we have at winning is to leverage our current institutions, basically. For instance, if we're at war, if World War III on AI had already started, I would be actually pessimistic about the prospects of winning. I think it will go from like tractable but hard to...
quite intractable. It's still physically possible, it's just I don't see what I could do. So what I mean is specifically we should leverage the fact that we're at peace right now. There is some trust between states, even though different people are trying to speedrun losing that trust in many different ways.
You want to win, to win you need power, so you try to accumulate power. The problem is that this is a negative sum game. If everyone does this, what happens is that everyone fights, competes, you have possibly one winner left still alive at the end, and possibly with the resources that are left, which can be trust, money, alive people and whatever, they might enact the win. But it's not even guaranteed that they have enough to enact their personal win.
So I find this theory of change really bad. The reason why I say it's negative sum is that zero-sum games are just redistributions. If you have to sink resources into redistributing, this becomes a negative-sum game. So it's actually a negative-sum game. That's basically how I see DeepMind, how I see OpenAI, how I see Anthropic, how I see XAI. All of them are competing for being the one who builds AGI. You can give them good intent or
or not, it doesn't matter. What matters is that their current theory of change, their current plan is race so you're the first. And this is an extremely negative sum game. Another type of theory of change is if you want something, you can realize that you're human. It likely means that others want the same thing. And so instead of trying to get
the power to accumulate to you, you try to get power to accumulate to everyone who wants the same thing. Which in the case of winning, meaning surviving and not dying from ASI, captures pretty much fucking everyone. And so given that you have everyone that wants this, you know, you have a massive stream, you have the truth behind you and so on and so forth, and you should think of all the ways you can leverage this. I talked to a bunch of people in AI safety and many of them tell me something like:
"Oh, but we cannot do this Gabe, the institutions are broken." Okay, what do you mean the institutions are broken? They tell me "Oh, the politicians don't understand anything about extinction risks, they're so fucking dumb, they don't know anything about AI, blah blah blah, womp womp womp." I'm like "But have you talked to them? Have you taught them?" And the answer is usually "No."
And I'm like, well, it's not the institution that is broken, it's you. The institution is predicated on the experts going to the representatives, explaining the thing to the representatives and making the case to the representatives. That's what a republican democracy is. It's not a technocracy, it's not the experts making the decision. It's not...
just you have a few people making the decision, it's not just a representative regime, it's a representative democracy. So you still must have the people who know come to the representative and give them the information.
It's not because you're a representative that you just know the science. At some point, someone must tell you the information. The arbitrage to be made is so big that with Control.ai, the non-profit, over December and January, and I think the beginning of February too, they just cold mailed all MPs and Lords.
I forgot what the response rate was, but at the end we got 60 meetings with them over a couple of months. And just at that point when we tell someone else in AI safety we got 60 meetings with members of parliament, they're like, "Wow, how did you do this? We didn't know you were super networkers," and so on and so forth. It's like, "Bruh, we just cold mailed everyone."
There's no network, no warm intro and so on, like the coldest mail possible. And then we just made the case for extinction risk to the best of our understanding for like 30 minutes to an hour, depending on the meeting. And 20 of them, so like a third, a 33% hit rate, supported a statement along the lines of:
Extinction risks are talked about by experts like Geoffrey Hinton, Yoshua Bengio, Sam Altman and so on and so forth, and there should be binding regulation in the UK to curtail them. This is like 20 MPs, whereas a lot of people would prefer playing House of Cards type of shit, you know, machinations, shooting each other and so on and so forth, just to get one MP to promise that possibly in a few years they might say something for them.
I think that right now people are way too cynical for no reason, and they have not even tried. I think Gabe is even underplaying this. This is only one example of a thing that, you know, me, Gabe, Control AI have seen over and over again. When we proposed this project, which is to get
elected leaders to sign a statement that includes the word extinction. Like, it's very clear. We talked to, you know, seasoned lobbyists, PR firms, you know, like real experts, and they all unanimously told us the same thing: this is impossible.
It cannot be done. Maybe if you take out the word extinction, maybe then you can get a couple of people. But even then you're going to have to network, you're going to have to pay to play, you're going to have to, like, whatever. Otherwise it's impossible. Don't even try, you're wasting your time. And this was just complete bullshit. It's not like it was perfect; there are many of these things that did turn out to be true, but many of them were not. And no one had done the obvious thing of just
actually doing the republican, democratic thing of going to our representatives and actually making the case to the best of our abilities.
Okay, but what is the pushback? Why do they not want to use the word extinction? It depends on the person. Again, the hit rate for something not optimized at all, without giving them anything in exchange, like a training session on AI, just making the case and that's it, was 33%. If you think about any sales process, you never have a conversion of 33%; that's already ludicrously high. From my point of view, the main reason why it's not higher is just that we haven't optimized much yet.
We just did the dumbest shit possible, which is just go to the politicians, cold mail them, schedule a meeting with the ones who respond and just make the honest case. Treat them as fellow human beings who can think and make decisions by themselves. And the case was not perfect. Many things were not perfect. Sometimes they're like, let me check with my whip, and they don't come back to you, and so on and so forth,
many different things. But still, we think this should be scaled up, scaled up in many ways. We should go for draft bills instead of just statements. A bunch of them were interested in actual draft bills, to see what the regulation would look like. I think this should be scaled even within the UK, not only legislators, but also, you know, executive people, intellectuals, academics, journalists. Like this just requires a lot of effort to go
to everyone and just explain the situation to them to the best of our ability. And obviously we think this should be done beyond the UK: the US, the EU, other relevant jurisdictions and polities in general. And right now, the main thing that makes me optimistic is that whenever I hear from someone who does this, they always give me super positive results.
Not only in AI, like in general. We do have functioning institutions. People are well educated, especially representatives and so on and so forth. We have the best education that we've had since the beginning of the history of humanity. Any high schooler knows more than any scientist a thousand years ago. And so I think that to the extent that your plan and your theory of change does not leverage this and it's just a negative sum game, yes, you'll be fucked and we'll all be fucked.
But to the extent that we leverage our actual great institutions, what made, you know, the modern world work, what led to the great things that we in the West benefit from and so on and so forth...
If we leverage those institutions, we'll get great results. And right now, the main thing that we want to do is basically extend this proof of concept, make it legible, and help other people and other organizations who want to do similar things do the same.
I assume there will be bottlenecks, but I don't think the bottlenecks will be intractable. I think the bottlenecks will look more like we need more people to participate. But there's gradations of winning. So around 30%, that's a lot. And that's talking about institutions and elected representatives and so on. There's also the matter of just normal people, and that may even be higher, but
What does it mean when you scale that number up? What barriers will there be? Will you find, when you get to 50%, 60%, 70%, that you'll have adversaries, forces that push against you? I think it's the other way around. I think the adversaries are the default. There are already adversaries, there are already many adversaries. They already exist, they have adversed, they have spent the last 15 years adversing.
You know, alignment doesn't mean alignment anymore. Voluntary commitments are the peak of what states have in mind. The international summits have been interrupted and the latest one has been moved from like the AI Safety Summit to the AI Action Summit. They are already like racing as hard as they can and so on and so forth. I think that
what we see is like the worst dynamics that we can see. It's like when no one fights back. It's like when no one actually tries to use the good methods. It's like when even the good guys are using the bad methods. Anthropic is racing.
And Anthropic has the support of the Effective Altruists. Holden Karnofsky, the former CEO of Open Philanthropy, the main Effective Altruist organization, joined Anthropic in January. Effective Altruists are a key example of this. Instead of building grassroots support and making the case to everyone, they instead did entryism. They tried to enter various organizations and take power from there, which is straight out of the typical Trotskyist handbook. If we actually try to leverage
humanist methods, leverage our institutions and so on and so forth. We have made so much more progress technologically and education-wise that we're now at a point where there's so much that can be done. Imagine, I don't know, one of the founding fathers, or if you want to go outside of the Western tradition, imagine the Buddha, you know, having access to the Internet right now. All of them would be like, what the fuck?
We had in mind projects that would span generations, but we can actually solve it all now. The Buddha wouldn't talk about reincarnation; the founding fathers wouldn't talk about, you know, doing things over many generations and not hoping to actually solve the problem, and things like this. They'd be like: holy shit, we can do it, let's go, let's go. I think the main thing to do is to leverage our institutions, leverage the good values that we have, leverage the education that people do have, leverage our current information and communication technologies.
And I think if we actually do this, and if we actually have people go talk to their representatives, go talk to their local intellectual elites, whether it is academics, journalists, experts, whatever, I think we can actually get like meaningful change. I think there are many obstacles on the way that, you know, have the shape of like traumas, compulsions, trolls, trolls being the biggest ones. For instance, you know, a lot of the nice people don't want to say the true things. They've been taught that saying the true things, I don't know, it's like,
cringe or not strategic. A lot of the good people have been taught to lie.
And so I think there are a lot of habits to unlearn. Our institutions are pretty powerful. We can pass laws. We do have constitutions. We do have countries. We do have nation states. If tomorrow we just want to ban AGI, we can just like block all of the AGI companies like from one day to the next. Yeah, but I mean, we started off by saying the collective action problem is not intractable, possibly, but very hard.
And we have these race dynamics because the winner takes all. And then we spoke about the spectrum of winning, right? It's not a binary, but you need to at least saturate a threshold. You need to find some consensus in order to make this work. And also, we live in a world where there is just insane amounts of misinformation, disinformation, and so on.
So it just seems like there are so many countervailing forces that you're fighting against here. I don't think they matter that much. I think that despite the misinformation, the disinformation, despite all of this, we still have polls and surveys where people are really for
basically any type of regulation against AI. So it's like despite all of this, we're still there. This is what I mean when I say the adversaries have adversed for like the last 15 years. And despite this, they're still only at that point. Despite this, people still see things. This is why I think that if we leverage our institutions, it only gets better from now on. I think the way it gets worse is if we do nothing, the race dynamics, you know,
just mean the US forces every other country to militarize, basically, and at some point the tensions are so high, and we don't have any international treaty to alleviate this through diplomacy, economic sanctions and whatever, that it starts becoming military. At that point you get either skirmishes or attacks, which I think is already getting too late, and possibly World War III. At that point we're truly fucked. But right now, that's not the world I see.
In the current world, there is still a lot that can be done despite all the dynamics that you've mentioned. And the experiments we ran came after all of this, not before. So the more we do it, the easier it gets, not harder. The adversaries have already adversed. That's why the situation is so bad. They have done this for the last 15 years. Now, if we fight back, I expect it only gets better. It's not really a collective action problem. We talk about dynamics like the prisoner's dilemma, where you seemingly need everyone to do the thing at once.
But you can easily transform a prisoner's dilemma situation into one where people can unilaterally do things. I think there are a bunch of things that can be done here. The first one is for each country to have an internal plan to deal with things.
Like if the US tomorrow lets one of its companies build existentially risky technology, whether it is weapons of mass destruction or just unaligned ASI, I think each country should have an actual contingency in case they see this happening, with unambiguous proof. You don't need coordination for having an internal plan. Another thing is things like kill switches. You want to have a kill switch on all of your clusters.
You know, we talk about runaway AI and we're like, we can just turn it off. Well, implement the mechanisms to turn it off. Like each country should have, you know, the president press a red button and all the clusters are shut down for like 15 minutes until we can see what's up. And this should be tested, you know, every three to six months. If the clusters are not shut down, if the APIs are still running when this happens, then you know the switch doesn't actually work. And this is a thing you can do unilaterally. It's just you having more control over what happens within your own territory.
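As a sketch of what that unilateral drill could look like in its most naive form; every cluster name, interval, and duration here is an illustrative assumption, not a description of any real infrastructure:

```python
# Toy sketch of a national compute kill-switch drill. All cluster names and
# numbers are made-up placeholders, not a description of any real system.
import time

REGISTERED_CLUSTERS = ["cluster-north", "cluster-south", "cluster-west"]

def shut_down(cluster: str) -> None:
    print(f"[drill] {cluster}: jobs suspended, power gated")

def restore(cluster: str) -> None:
    print(f"[drill] {cluster}: restored")

def run_kill_switch_drill(pause_minutes: int = 15) -> None:
    """The 'red button': every registered cluster goes dark, then comes back."""
    for cluster in REGISTERED_CLUSTERS:
        shut_down(cluster)
    time.sleep(pause_minutes * 60)  # a real drill would verify, not just wait
    for cluster in REGISTERED_CLUSTERS:
        restore(cluster)

# The suggestion above: rehearse this every three to six months, and treat any
# cluster or API that keeps serving during the pause as a failed drill.
```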
And finally, for the elephant in the room, which is the international treaty. The nice thing with international treaties is that you can put whatever you want into them. So of course, as an individual country, you will not want to just sign it and bind yourself unilaterally. You know what you can do? You can put in the treaty: this treaty only takes effect when you have 35% of GDP and 35% of world population represented. GDP, so that you can handle the competitive pressure, so that you can still play against the others,
and world population so that you have legitimacy. And this is a thing you can do. Coincidentally, I'm writing a brief and policy paper featuring those. But I think you don't need to go for collective action problems. That's the beauty of technology in general. It's like in a world where we have never thought about game theory, it's hard to come up with such a treaty.
You know, one that does not take effect until you have critical thresholds and things like this, because you're not even thinking in those terms; you just do the naive treaty and it fails. This is what I mean when I say we can do better than the founding fathers, that we can do better than the Enlightenment humanists and things like this. We have all of this understanding. We're in a strictly more scientific, more knowledgeable, better situation technology-wise than they were.
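A minimal sketch of that conditional entry-into-force clause; the 35% figures come from the example above, while the country data and the helper function are hypothetical placeholders:

```python
# Minimal sketch of a treaty that only binds anyone once signatories jointly
# cover 35% of world GDP and 35% of world population (the example figures
# above; the country numbers below are made-up placeholders).
WORLD_GDP = 100.0            # arbitrary units
WORLD_POPULATION = 8_000.0   # millions

signatories = {
    # name: (gdp, population in millions)
    "CountryA": (26.0, 340.0),
    "CountryB": (18.0, 1_400.0),
    "CountryC": (4.5, 67.0),
}

def treaty_in_force(signed, gdp_share=0.35, pop_share=0.35) -> bool:
    gdp = sum(g for g, _ in signed.values())
    pop = sum(p for _, p in signed.values())
    return gdp >= gdp_share * WORLD_GDP and pop >= pop_share * WORLD_POPULATION

print(treaty_in_force(signatories))  # False: signing early binds no one until
                                     # both thresholds are actually crossed
```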
This is why I'm truly convinced that with their shit, they managed to build the current world, the current modern world. With our shit and their shit combined, imagine what we can do. This is what I mean. In practice, this is what it means to have access to technology. All countries can just meet every month over a Zoom call and talk about AI. This was not tractable a hundred years ago.
You could not ask representatives of all countries to meet every month. We can now. There are many things like this in coordination that are now possible, that I think we should do, and that I think are not collective action problems. If you don't want to join the worldwide call, just don't; you know, enjoy not having the confidential information, sure. I think we should do it. I think to the extent that we do more of it, things will get better. And this is true at all levels.
It's true at the country level, at the world level, at the citizen level, at the local expert, local elite level. I think it's just we should play that game more because whenever we have played it in the past, we have won more. And even today, when we play it, we get good things.
And coincidentally, when you play the opposite game of just gaining more power, things go more to shit. It's not a coincidence. I think a lot of people do feel that way. I think a lot of people do understand that the reason things go well for everyone is when we
act in a way that makes things go well for everyone. I think everyone has a native understanding that if you try to act in a way that makes things shit for everyone else, like getting power for yourself to trample over others because you don't agree with them, it makes
things shit for others, and everyone understands that when others do this, it makes things shit for you. I think those are very, very intuitive things for people in the 21st century. A thousand years ago you needed religions, and I think now we're at a point where we have truly understood those dynamics.
We're like well past humanism. Humanism needed a lot of morals to substitute for understanding. Now we have this actual understanding. We understand why when people are being selfish and there is no frame for that selfishness, things go worse. We could even write formal models of it.
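One toy way to write such a formal model down, with payoff numbers that are purely illustrative assumptions: two labs either cooperate over a fixed prize or race for it, and the resources sunk into racing are what makes the game negative-sum.

```python
# Toy model of the negative-sum race discussed earlier. Two labs either
# cooperate (split a fixed prize) or race (burn resources to grab it).
# All payoff numbers are illustrative assumptions, not measurements.
PRIZE = 10.0      # total value at stake
RACE_COST = 4.0   # resources each racer sinks into competing

def expected_payoffs(a_races: bool, b_races: bool) -> tuple[float, float]:
    if not a_races and not b_races:
        return PRIZE / 2, PRIZE / 2                      # split, nothing wasted
    if a_races and b_races:
        # each wins with probability 1/2, and both burn the race cost
        return PRIZE / 2 - RACE_COST, PRIZE / 2 - RACE_COST
    winner_payoff = PRIZE - RACE_COST                    # sole racer takes all
    return (winner_payoff, 0.0) if a_races else (0.0, winner_payoff)

for a in (False, True):
    for b in (False, True):
        pa, pb = expected_payoffs(a, b)
        print(f"A races={a}, B races={b}: ({pa}, {pb}), total {pa + pb}")
# Total value: 10.0 if both cooperate, 6.0 if one races, 2.0 if both race.
# The redistribution itself eats value, which is the negative-sum point,
# even though racing is still each lab's individually tempting move.
```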
We have economics and game theory. I think that armed with this understanding, yeah, just play the winning game. Don't try to galaxy-brain yourself, don't get trolled, don't get into the FUD, just play the winning game. And the more we play the winning game, the more we win. Is 10 people enough? I don't think so. Do we need a billion people? I don't think so. I think the magic number might just be between 10,000 and a million.
And it's both a lot, but it's also very little. Like imagine us and Control AI, that's like 10 people. Imagine a thousand, ten thousand, a hundred thousand Control AIs, you know, all contacting MPs, all contacting experts, all having just a 33% success rate, without any coordination, without anyone else, without optimization. Imagine if they all do this.
It is, but it's still a very complex collective action problem. And you guys are advocating to ban AI development, basically, which is going to have a lot of pushback. It already has, and not all AI development. But also, I might say that there are many parts of your proposal which a lot of people would get on board with. I mean, you mentioned the kill switch, the firewall.
That's a very good idea. I mean, we should firewall every single country, every single region. When bad stuff happens, we need to cut it off. If you pitch it at that level, you could make a lot of your proposals go through, because no one's going to say that's a bad idea. But anyway, just to you, Connor, maybe just final thoughts. You want to make AI illegal. I want to make ASI illegal. ASI. Unregulated, untrustworthy, non-consensual ASI. So you wouldn't ban large language models?
Not necessarily. If they are precursors to ASI, yes. But they could be. I think we should not have an open-source GPT-5, because that's way too close for comfort. The fog of war is way too big for me to be comfortable having a GPT-5 system be open sourced. Will this happen? Probably. And I think that's very bad. I think this is a sign of us not doing the obvious things. Like, I think this is actually an important question that people, the audience, you at home, should actually think about:
Why no kill switch? Why is there no kill switch? Why do companies constantly get hacked? Why do we constantly lose all of our personal information and nothing fucking happens? What the fuck is going on? Like actually though, like we know how to build secure systems. Like this is a thing that can be done. Why is it not happening? And like why would you expect it to be any different when we're dealing with systems that are way harder to build, right? And the answer is just like, no, I don't expect it to.
And like, 20 years ago, we should have already been building way more secure internet protocols and way more secure hardware and way more secure, you know, firewalls and stuff, and emergency switches and stuff. If we could go back, and if I was in charge, I would have built a much more secure internet than the one we're on right now. Like how is it that we still click on links and then some JavaScript overflows Chrome and then the whole thing is pwned forever? Like what the fuck is going on? Like who the fuck designed this shit?
I think these are actually important and like we don't have time to go all the way into it, but it's important to think about why did this happen? To get back to your important question, okay, there's going to be pushback against, you know, ASI being banned. And like, yes, I think a lot of this pushback is from an extreme minority, a minority that has an extremely vested interest, which is, you know, the tech companies who want to build ASI and are okay with killing everyone.
There are people who are okay with murdering other people. I don't care about that. We vote them out and we say, "Nope, you cannot murder people, in fact. It is in fact illegal for you to do so. And if you try, we throw you in jail." That's what democracy is, right? Let me just challenge that a tiny bit. I mean, on Twitter, maybe I'm in a bubble, but it seems like everybody loves AI. I think you're in a bubble. I think there's no sane person on Twitter, including myself. Go to any median person
in like any middle income or high income country and ask them what they think about transhumanism. Like, what do you think is the hit rate of them saying, yeah, it seems like a good idea, we should accelerate towards that? Like how many, like actually, do you actually expect that fucking any of them will say, yeah, that sounds like a good idea?
It's an extremely small minority of like extremely weird people. Which to be clear, I think it should be legal to be weird. I think weird people are great, right? I'm pretty weird, right? I think it should be legal to be weird and like have weird beliefs as long as you don't harm other people. I'm okay with some people, some philosophers, you know, talking about long-term futurist counterfactual ASI, you know, uploads.
whatever, right? You know, knock yourselves out, have fun. I hope you're having a good time, right? What is not okay is if they then do things which actually threaten human lives, which is what's happening right now. And this is the thing we need to focus on: there are actually people who are actually threatening human lives with their actions. And there actually has to be democratic oversight of this, because people have a stake in their own lives and their own continued existence and the amount of threat
their society is exposed to. People have a stake in this. And we have done polling. The numbers are overwhelming and bipartisan. You know, there are some people who are like, a 20% chance I die? Who cares? Let's go. There are some people who are like this, right? And like, I think they should get a vote too.
Like, I do think they should be allowed to vote for this position, but they're going to get outvoted. And they know this, which is why they are doing lobbying and blackmail and threats and, you know, propaganda and FUD, etc., because they can't win a fair fight. Going back to what you were saying earlier, what Gabe was talking about as well, it's like, why do we think this is tractable at all?
There's a very, very important thing that we have going for us. And there are worlds in which this is not true, and if this wasn't true, I wouldn't feel optimistic. And the fact is that this is actually true: people actually don't want ASI to kill them. There are worlds in which this is different. There are very strange worlds where, for some reason, most people are like Silicon Valley e/acc people, like 70% of the world population. If that was the case...
Alright, fuck, we're actually fucked, right? But have we actually asked? Like, sometimes I feel like I'm living in the Truman Show or something, right? Where people tell me, oh, don't go over there, there's nothing there.
And I'm like, have you checked? "Like, well, no, I haven't checked, but don't go there, there's nothing there. There's nothing there for you, bro. Bro, don't even look, it doesn't matter." I'm like, oh, I'm gonna go check. "Oh no, no, you're wasting your time. Actually, strategically, you can't talk to politicians. No, you just can't, you just can't." And then I go over there, I talk to the politicians, and I'm like, wow, okay, actually there's a bunch of stuff here. And this has happened to me so often in my life. I feel so gaslit by all these
techno-libertarian people who I grew up with. Where I'm just like, well, no, I actually talked to normal people. I actually did polling. I actually did focus groups. I actually talked to politicians. And their model of the world is factually wrong. And this is what we must leverage. This is the thing that still gives me hope that this is tractable. If I had checked and the techno-libertarians had been right, I don't know. That would be a very different world than the one we live in.
And this is a conversation we had. I pushed back on this for actual years. Gabe is one of the people who got me to look through the Truman Show and actually go and ask. I remember when I first met Gabe, I was often telling him, "No, Gabe, we can't talk to politicians. No, it's impossible. They won't listen." He's like, "Have you tried?" And I'm like, "Well, no, I haven't." And then back and forth and back and forth and back and forth.
Eventually I did talk to politicians. And yeah, some of them are idiots, sure, of course. But it turns out many of them are actually not. Or at least they care. And they're trying to figure things out, and they need help. And then I look at San Francisco, I look at the technologists, and I'm like, "You fucking lied to me." Connor, Gabe, thank you so much for joining us today. It's been great. Yeah, it was a pleasure. Pleasure.