
Genesis: artificial intelligence, hope, and the human spirit

2025/1/30

LSE: Public lectures and events

Craig Mundie: I co-authored the book Genesis: Artificial Intelligence, Hope, and the Human Spirit. The book aims to put more emphasis on the positive side of artificial intelligence and to propose an architecture for controlling AI, meeting the challenge with sober optimism. We argue that AI is the ultimate polymath, exceeding human capability in both breadth and depth, and that it therefore needs to be governed so that risk and reward are balanced. At present, governments and companies applying AI tend to focus only on incremental improvements to existing processes, overlooking the possibility of reimagining their operations from scratch. We need an AI system that can provide uniform and transparent adjudication of all AI uses in order to address the trust problems AI raises. To control AI we need a scalable architecture and trust built through transparency, which requires active collaboration between AI companies and regulators. To establish broad trust in AI, I propose a wiki-based architecture that lets every society keep its own rules, with adjudication carried out by an AI, without requiring prior consensus. Humans cannot anticipate or understand everything an AI will do, so the key to controlling AI is building a system that can handle unanticipated situations rather than trying to write rules that cover every case. The adjudication system I propose tries to protect human rights by learning the implicit moral and ethical norms of human societies, which is fundamentally different from existing approaches. AI can transform fields such as education and healthcare, but only if those fields are reimagined rather than merely improved incrementally. AI's development will pass through three stages: a tool stage, a coexistence stage, and a co-evolution stage, and humanity needs to make wise choices early in that development to secure a good future. The key to establishing global AI rules is not seeking prior consensus but building an architecture that can apply rules across different contexts.

Mairéad Pratschke: I believe AI literacy is essential for the public; only by understanding the basic principles of AI can people take part in the discussion about its direction. We need to be wary of technology companies' control over AI and ensure that its development benefits all of humanity rather than deepening social inequality. Current government and institutional measures on AI applications are often superficial and fail to address fundamental problems. We need to rethink what fields like education and healthcare should look like in the AI era in order to realize AI's potential. We also need to pay attention to the increasingly prominent agentic capabilities of AI systems and the risks they bring.


Welcome to the LSE events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Welcome everyone and those online to LSE for this hybrid event. I'm Martin Anthony. I'm the director of the LSE Data Science Institute here at LSE. I'm delighted to see so many of you here and I know there are hundreds more online. Welcome.

And I'm absolutely delighted to welcome Craig Mundie and Mairéad Pratschke for this evening's event. Craig is the president of Mundie & Associates, an advisor to Microsoft, the Cleveland Clinic, and early-stage companies involved in AI, biotech and fusion energy. He has served presidents Clinton, Bush and Obama on multiple advisory councils. He's currently co-chair of a track two dialogue with China on artificial intelligence.

Mairéad is Professor and Chair in Digital Education at the University of Manchester, a Research Fellow and Advisory Board Member at the USA's National AI Institute, and, I'm delighted to say, a visiting professor at LSE's Data Science Institute. Today, Craig and Mairéad will discuss Craig's new book, Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with Henry Kissinger and Eric Schmidt.

As the discussion will no doubt reveal, this fascinating book addresses the many challenges and opportunities that AI presents to society and to us as humans. It does so in a profoundly serious way, drawing on the huge experiences of its authors, bringing in perspectives from an impressive range of thinking, from science, geopolitics and history.

It's an extremely thought-provoking and balanced work, seriously addressing the many challenges presented by AI, but also acknowledging and envisaging many benefits. To quote from the book, "While some may view this moment as humanity's final act, we perceive instead a new beginning. With sober optimism, may we meet its genesis."

The event is being recorded and hopefully will be made available as a podcast subject to no technical difficulties.

As usual, at the end, there'll be ample opportunity for you to put your questions to our speakers. For our online audience, you can use the Q&A facility to submit questions. For those here tonight in the theatre, I'll let you know when we open the floor for questions at the end. I'll try to ensure there's a range of questions from both our online participants and those here in person. And I'm absolutely delighted to hand over to Craig and Mairéad.

Good evening everybody. All right, thank you for having us Martin, thank you. It's really nice to be back in the DSI, quickly becoming my second home and lovely to meet Craig and have an opportunity to talk about this book.

And also a little bit about the book that some are saying this is the sequel to, the first book that Kissinger also co-wrote, The Age of AI. So maybe before we jump into this book, it would be good to just begin by placing this in context, maybe share a little bit about how you got to be involved in Genesis. Well, Henry and I, our personal and business history goes back to 1998, and that's

somewhat ironically, given today, the thing that brought us together: two actions governments were taking at the time related to technology. One was that the U.S. had some years earlier decided that it should control the use of encryption as a munition, as it had been since World War II, and wanted to ban its use by commercial companies and in exported products.

So for Microsoft, at the sort of early days of the emerging internet and building on email and other things, encryption was important to us, but that wasn't really considered in making a decision to do that. So I'd spent five years working with the government in the United States around that problem. In the end, the person who helped me solve it was John Hamre, who went on to run CSIS.

which is a think tank that Henry had founded and was a trustee of until he died. And the second thing, there were three things happening at that time. The second thing that was happening was this was the beginning of the sort of emergence of cyber security issues. And I was in the position where I was taking the lead on that at Microsoft. It was nascent. But I gave a talk about it at CSIS where I met Henry. And we developed a quick friendship and

He started to advise me on sort of diplomacy issues, and I coached him on technology, which he was largely illiterate about at the time. But he was prescient in understanding that the emergence of computing was causing profound changes, and he relished the opportunity to continue to have that talk. The third thing that happened at that time was China made a decision to outlaw Microsoft on national security grounds.

So it sounds a lot like the discussions you see today where the US has decided we should outlaw TikTok on national security grounds. And Bill Gates asked me to go there and see if I could figure out how to keep that from happening. It turned out the lawyers at Microsoft in their normal practice hired some consultants to guide them on that, and they hired Henry Kissinger. And so Henry and I found out that we were both asked to work on the same problem.

And so we did. And ultimately we did solve that problem for not only China, US, but ultimately for a whole array of countries. And that benefited everyone. So Henry and I developed a friendship and were brought together multiple times during that. And in the late years, well actually it kind of gets into how did Eric come to this. Eric and I also intersected in a couple of ways. One was

Eric was one of the early people at Sun Microsystems and that coincided with a company I started which was making small supercomputers. And when you connected them with a network it was the beginning of essentially client-server technical computing. And the two companies collaborated and I came to know Eric and Vinod Khosla and Bill Joy and others through that interaction. Later Eric went on to become the CEO of Google

But along that way, Henry had just thought it was interesting to start to introduce these kinds of technological issues to the Bilderberg Group, which has met once a year since 1947. Henry had been there pretty much every year since the early 1950s. And so he took me to one of these meetings, which in itself was interesting for me, but also interesting in that I was really the first technology executive to join those discussions. And

Later on, that turns out to be important, because of two things. As other companies followed in the footsteps of Microsoft in becoming big and technologically important on a global basis, you inevitably get drawn into these geopolitical questions. In fact, we coined the term techno-geopolitics. Now, that was about 20 years ago, and it was kind of novel at the time. And of course, now it's a front and center consideration for pretty much all countries.

Eric and I decided about seven years ago that the Bilderberg Group, which he had also started to come to, should educate people on AI. And so we arranged a briefing. You know, Demis Hassabis, Sam Altman, and other notable people came to give it. And Henry sat in on that briefing and was immediately struck by

what he thought was the profound effects of having a machine that was going to be smarter than humans. There was no precedent for that. It seemed to him obvious that the implications of that were myriad. And he was 95 at that time, and he ended up becoming a student of this topic. He spent a couple years really learning and thinking deeply about it, and of course,

Eric and I were two people that had a long friendship with him at that time, and so we all talked regularly about it. In the last years of Henry's life, he and I would talk regularly every other Friday for a couple hours about whatever we thought were interesting issues, and this began to permeate those questions. As he got more educated, he collaborated with Eric and Dan Huttenlocher from MIT on that first book, The Age of AI, and in the end, I did some editing on that book for Henry,

And I said to him that it was great because it sort of chronicled some of the risks and the challenges that people had, but it didn't talk that much about the benefits. And meanwhile, I had been involved for some years by that time with OpenAI as well as my historical relationships with Microsoft.

It led me to devote a lot of my time to thinking about this question of how in the world are we going to control these AIs? And so all these things came together and a decision, a suggestion really by Henry in one Friday afternoon call, he said, well, some of these ideas are interesting and they might provide direction. We should write a book together.

So we said, all right, let's do that. And then Eric joined us in that conversation. So in the end, the three of us authored this book with kind of a goal of two things. One, to put more emphasis on the upside because we thought that the popular coverage of this had over-indexed on the downside risks.

And as was cited in the quote a minute ago, we thought we could do this with sober optimism because we at least could see a path to an architecture to control these machines in the long term. And that's how the book came together. All right. Well, that's a very comprehensive introduction. Thank you. It's interesting because I compared the two books, and the first thing that struck me when I read this book was that it felt like

Kissinger's life as a statesman who had been so deeply involved in the atomic age and thinking about that arms race, the Cold War, so much informed this. And it's interesting to hear you saying that you wanted this to be more positive than that first book, for example, or maybe more positive than the narrative that you were hearing at the time. Because for me, what came through from this book was very much...

that need to contain, that of course we know from containment as in nuclear weapons that Kissinger would have

was a leader on and talked about a lot, but also the need to align, and we'll get to this later I think in this talk, about aligning the development of AI with human values to make sure that it doesn't run rampant and go off and do terrible scary things that we all hear about on headline news. But for me what really came through in the book was this balance between what was clearly his voice, someone who had spent his whole career working in geopolitics

dealing with those issues. And then what was also

Yeah, definitely a positive side. Some would say more techno-optimist side, balanced by the voice of the statesman. So this is why I was interested to hear how you went through that whole process and wrote it together. But I'm keen to dig into the actual content as well, so maybe we can go to that in questions if anybody is curious about that. So let's just start at the very beginning, because the book really starts by framing...

AI, as you say, in a very positive light as a new age of discovery. I'm hoping you've all read the book. I assume everyone has studied the book, but if not, I'll give you the highlights. So it's a new age of discovery and to cut it short, the idea is that we are now going beyond our physical bodies and the capabilities, I suppose, of our human brains and that AI is going to allow us to do this thing that some people refer to as intelligence augmentation. That's a phrase you've probably heard quite a bit.

So the idea then is this quite optimistic idea that we can move beyond our physical selves into this artificial extension of ourselves, but of course to do that, that raises all of the questions that we're hearing about these days in terms of control and safety and agentic AI and geopolitics, and everyone's been watching DeepSeek for the last two weeks, et cetera, et cetera. So one thing that struck me in that opening part of the book was that you compared, you plural, compared this early discovery

and you framed it by talking about Shackleton in the very early pages and that really struck me partly because I'm Irish so of course I was paying attention to Shackleton and I'm also a historian by training and I thought I'm very interested in these stories of discovery but I noticed the one

line that was in there, which was that Shackleton is considered a hero not because he went almost as far as he did and then had other journeys, but that he decided at a really critical point when he was, what, 90 something kilometers short of his goal to turn back. And he decided to turn back because he decided there was too much risk at that point and he didn't want to jeopardize the lives of his people.

So that, I thought, was quite a provocative way to start it, what becomes quite a positive book. And yet you lay it out there at the very beginning. How are we going to make that judgment? How are we going to know how to balance human values, whose values, my values, your values, and other groups' values? How are we going to balance and contain

this intelligence and enable ourselves to augment our intelligence but also not risk getting to that moment that Shackleton turned back from. Yeah, I mean the Shackleton example, and the other thing that we talk about in the early parts of the book is polymaths. Yes, the enlightenment, yes. And the importance that polymaths have played in human history and how we

sort of made breakthroughs first by the individual action of some and then when we could aggregate them, you know, we made other bigger breakthroughs. And the reason it's important to think about that is these machines, in some sense, are the ultimate polymath. And they're not only polymathic in the sense of understanding what humans understand, but we expect and already see that they're becoming super intelligent in each of these domains. And so you've got

something that exceeds human capability on both axes, you know, how many domains and how smart in each. And that's a very important difference. To come back to the Shackleton example and the need to govern these things, you know, our belief is that this system ultimately represents the ultimate dual-use technology.

And you're not making a localized, sort of readily evaluated decision. Shackleton had to make the choice, but he knew what he was trading off, in the sense, "Oh, I can't get this many people still living. You know, I've got that far to go. I see what the trajectory is, and I don't like the outcome." This gets a lot harder. One, because of the breadth, and two, because it's sort of hard to know what you're trading off. One of the reasons that in the book we try to highlight the upside is

Anybody who has to make a risk-reward trade-off has to understand what is the reward relative to the risk. And if you only think about the downside, then you're essentially making a risk trade-off without understanding what you're giving up. And so because we think this ultimately changes everything about everything, you're giving up a lot if you decide to say, I think I'm going to turn back.

And the book is very explicit, saying, at this point in time, for other reasons, in a sense: if you thought that there were actually six Shackleton-like expeditions, and that they were all trying to get to the South Pole, and there was some real advantage to getting to the South Pole first, would it have changed his calculus?

He wasn't in a race with anybody. He was in a race with endurance to get there. And so this is a much more complicated calculus. And in fact, the origin of the architecture that we propose in the book and was sort of my own work for the last few years

actually started in discussions I had in the early years after I started working with OpenAI. You know, Sam asked me to help them think about these longer-term policy and geopolitical issues. I wasn't going to help them on the technical aspects, but the...

the charter of OpenAI... and this was maybe 60 people at the time, and they were still hunting. They had no clear strategy to move ahead. They were exploring ways to proceed. But it was clear that the people that had joined the company really believed in the importance of their sort of founding goal, which was to build ultimately an artificial general intelligence that would benefit all of humanity.

And of course you say that, everybody could say it, but then you immediately get to this question that says, "Well, what does that mean?" And you say, "Well, then we really wanted to do good for humanity." You say, "Okay, well, who gets to define good?" And you end up in this sort of endless spiral of argumentation on, "Well, who gets to set these objectives? Who makes these choices?"

One day when I was visiting, I kept saying, "Look, I see a lot of tactical engagement in the company, even at that stage, around how to deal with some of the overt risks." But to me, having sort of fought these wars in the previous generational technology shift,

around computing and personal computing, I said, you know, you're ultimately going to face these bigger, longer-term challenges, that the strategy has to be bigger than applying some tactical fixes to these things. And I said, what? And he said, well, you know, we're not there yet. Of course, we didn't even have a clear path forward. When was that roughly?

That was about seven years ago. So maybe OpenAI was two or three years old. And you know, at that time they were working on robotics, game playing, and other things, all looking for clues as to how to move in a direction that would produce this general intelligence. And in what turned out to be serendipitous, but for me especially important,

He said, "Well, look, there's one person who really likes thinking about these longer-term things in this area, and it's Dario Amadei." He was still at OpenAI. He was tasked with humoring me every time I visited, and we would sit down and talk for a while about this question. Interestingly, after only a few of these sessions, we came to an agreement that neither of us

when we thought about the long term, could think of a way to control these AIs other than by an AI.

So was he thinking about constitutional AI back then? Well, I think this was the genesis part of the plan. Yeah, that's what I was just wondering. You know, of why he went, then he left and formed Anthropic. And Anthropic's differentiation was in fact to put the Constitution in the AI. Exactly. And he and I did not talk about this before or even since, but my view was that sparked a direction for him

that manifested as the constitution in the Anthropic work. Meanwhile, I'd gone away and kept thinking about this question of how would you do that and what's the big architecture of this? And then when they launched Anthropic, I looked at it and I had two feelings about it. One feeling that I had was it's great because to some significant degree, the constitution that they embedded

proved that the AI could be taught or forced to follow some set of rules. On the other hand, from where I came from in tech-dom, it also seemed obvious that the world would not, in one great leap, decide that that constitution was everybody's constitution. And I think you've seen some of that in what's happened since they launched it.

So for me, it gave reinforcement both to the validity of the idea that an AI could, in fact, become an adjudication system. But it also then made me more convinced that the complete architecture of trust, as I now call it, had to encompass many more dimensions that hadn't been considered and frankly can't be considered if you think it's

done one company at a time on an ad hoc basis. And so if you want broad trust in this and you want to believe that there's going to be ultimately a plethora of players who want to build AIs of different sorts in different places,

And yet you want to have some more globalized concept of trust. You had to do something more. And that's what I spent my few years working on and now prototyping. And it was that idea that gave us

if you will, the comfort that we could be soberly optimistic because we at least could see a path to control. And control is different than containment. For sure. But anyway, that's how we got there. If you look at where we are now...

That's the vision, right? To have trust architecture. But I think probably most people in the room today and maybe those watching will be very cognizant, especially in the last two weeks or so, everything that's been on the news, of the fact that I think we are seeing really, really quite clearly that each model, each company,

Various ideologies are really readily apparent. You know, we knew it before: Elon Musk's Grok acts in a different way from Claude, which acts in a different way from a different LLM; that's been no secret to anybody. And of course, with all of the media coverage of DeepSeek in the last few weeks, there has been a lot of talk about that. And that's a whole other topic we can get into or not. But it feels like we're not

at that stage that you're talking about, in terms of that trust architecture vision, and instead in the book you float... We're not even close. We're not even close. I mean, we're not, frankly, we're at the kind of messy... nobody's really... No, we're not trying. And in fact, I think what's really interesting about the book is that you, you know, you float this vision of

as you say, that the polymath, the AI is the ultimate polymath. It's collective intelligence. It's the mixture of experts in every expertise, right? In terms of the actual, you know, the architecture of the LLM, but also in terms of our own narrow domains of expertise. And that all sounds so promising, and yet...

I find myself looking at some of the gap between what we talk about in terms of what can happen with AI in the book, for example, the idea of rule by reason is one of the things that you talk about in terms of the political use of AI.

that we could, for example, dispense with the emotional decision-making that leads to some terrible situations. I'm sure none of you can imagine any of those on the news lately at all. But, you know, sometimes emotional decision-making is not the best for us, and the book really stresses this idea that we could, with AI, we could perhaps dispense with our human frailties, if you will, right? Our intellectual frailties and...

have this rule by reason. That's one lovely idea. Another one is, you know, this idea of abundance, economic prosperity for everybody, which I'm sure, again, we're all very keen to have.

But I'm curious to know what your thoughts are on the steps towards those two in particular, because we're here at LSE, we're at the London School of Economics and Political Science, and we're not necessarily focusing on the philosophical side, we're focusing more on the maybe on the ground policy side. And what I see happening, for example, UK context, last week, those of you who follow UK news will have seen, or maybe you didn't, it gets hidden in the news a bit, but that the government,

planned and announced the release of a new set of AI tools for the civil service, which they have kind of wittingly nicknamed AI Humphrey in honor of Yes, Minister, right? So lots of people will get that joke and find the naming hilarious for that reason. But anyway, that's a whole other issue.

But it almost felt like when I looked at that and I look at some of the challenges that we are facing in terms of the future of work, everyone's worried about de-skilling, un-skilling, how are we gonna make the transition to this utopia where we all have endless prosperity?

And the actual moves I'm seeing governments making and the policy decisions I'm seeing governments making, they're not that lofty at all. They're very mundane in that sense. We're going to make the bureaucracy, we're going to rationalize and centralize bureaucratic work. We're going to help the civil servants deal with their data overload. These are all great ideas, not to quibble with them. But they don't come close to solving war.

or creating prosperity. So I'm wondering what do you think of, because I love the vision, but I look around me and I look at what's happening on the news and I look at the debates that are happening and I think,

People on the street are worried about the day-to-day. They're worried about their jobs. They're worried about war. They're worried about violence. They're worried about control of AI systems. So what's the step? A lot of those all came from humans. We have lots of issues, us humans. Yes, humans aren't doing so well, really. We're not doing great, but that's the whole point. We never have, right? So let me tease apart a few things in this question. So, one:

Each of the examples that you cite, and arguably almost any of them that are of interest, all now reduce back to this question that says, but how can I trust it? So health care, I mean, it's already been demonstrated for the most part, even in clinical trials, that with no special training, the AIs of the day are better diagnosticians in medical cases than almost any doctor.

That's not even argued anymore. When that first study was done, the doc said, "Oh yeah, but you gotta be empathetic." So then they ran a trial comparing the empathy of the machine in the mind of the patient compared to the doc. Guess who won? Not the doc. - Yeah. Human bias against machines is really interesting. - But why did they win? Well, it turned out that the patient said, "Well, you know, it has time for me. It's patient."

I can ask it dumb questions and it patiently gives me answers. If I want to be educated, it educates me. And incidentally, in education, they're finding the same. Correct. With tutors, students are reporting the exact same thing. I can ask this tutor a question at 3 in the morning, they're not going to get tired, they're not going to get sick of my questions, and I don't have to feel silly. Right. So it's that part. And so, in a sense, these are just examples of...

of where the AI in many of these dimensions that people historically said, oh, only humans can do that kind of thing. I think it's being shown that that's not the case. A few months back, I looked at some statistics for the early apps being deployed on the OpenAI platform for money. And one of the top two apps was a companionship application. That, of all the things people were trying to do,

the ones who were actually buying and using these things were people who were lonely. And so in a sense they found more comfort in talking about their troubles with the machine, again kind of because of this sense of empathy, but also because the thing has an unlimited range of, you know, problems or issues or ideas that you can talk about. So in a sense, even in those things, the polymathic nature, you know, kind of shines through.

When you look at what governments are doing, I contend they're only doing the same thing that almost all companies are doing, which, and it's very natural for established organizations, is they look at a new tool and they say, okay, great tool. How do I make incremental improvements to everything I'm currently doing? I want to make it better, faster, cheaper, more productive, whatever it is.

But this doesn't ask the fundamental question that says, "Look, there's never been a tool like this." So if you were a startup as opposed to an existing company and you had a clean sheet of paper, how would you do it? And the answer, the coaching that I give to a lot of CEOs is it takes a different mindset and a different set of talents in the people to basically start with a clean sheet of paper

and start with a presumption that an AI-centric implementation of whatever the thing is you're trying to do will produce a qualitatively different answer than any amount of incremental tweaking of how you've historically done it. And that's why ultimately healthcare, education, you know, etc., all will have to be reimagined in the face of this.

But I'm also very pragmatic, as are Eric and Henry, and we recognize you can't take everybody from here to this redefined world in one step. But even to take incremental steps, you actually have to get people comfortable to start moving. As it comes to this question of governing the AIs,

Hi, I'm interrupting this event to tell you about another awesome LSE podcast that we think you'd enjoy. LSE IQ asks social scientists and other experts to answer one intelligent question. Like, why do people believe in conspiracy theories? Or, can we afford the super rich? Come check us out. Just search for LSE IQ wherever you get your podcasts. Now, back to the event.

My belief was that we had to find a way to use an AI to provide uniform and transparent adjudication of all the uses that any AI would be asked to perform. When there were only one or two, people might have said, "Okay, I don't know." I look at the DeepSeek thing and say, "I don't care whether it's better, faster, cheaper, or anything else." What I think is interesting is that

It arrived in the United States in the same kind of weeks that they were trying to ban and kick out TikTok. It had none of the controls that they've been demanding from Microsoft, Google, DeepMind, Anthropic, and others. And in fact, the only thing it exhibits was behaviors that the Chinese government demanded that it watch for.

But you know what's really interesting about that? If you watched what was happening on TikTok when they first... but of course it was banned for what, 24 hours or something, and then Trump said he was going to not ban it. In the US, not just the US, there were a lot of reports of influencers and creators on TikTok in the US who were going to move to RedNote, the Chinese kind of...

the Chinese version of it. And I thought, God, that's really interesting. Because in education, we worry so much about data privacy, students' mental health, all the stuff that you've just alluded to. Like we know that, for example, AI is a great diagnostician. No problem there. We know that AI doctors are better than humans. We don't want to admit it yet, but the proof is coming in. We know that they can be more empathetic. But I think the part that we get really uncomfortable with is, exactly as you said, we're starting to realize that

that boundary that we've drawn between human and AI,

it's not gonna last. It's disappearing. And I've always kind of wondered about this human skills, AI skills in neat little buckets and thought, well, as AI is getting better, it's actually able to do a lot of the things that we say are human things to do. But when you talk about things like the social apps, I'm thinking about stuff like replica or character AI or things like that. One of the fears I think that we have again, kind of in society in terms of policy and application and how we use these tools is,

What happens when you have companies that are creating affective AI, social AI companies like Hume, for example, or the ones that are working on listening-while-speaking models, getting better and better at voice mode, and that thing that you've just alluded to where the AI is quite personable

causes us to lower our defenses and our boundaries, and then maybe we are interacting with AIs in a way that we maybe shouldn't be quite yet, because as you're saying, the trust architecture doesn't exist yet. Yeah, but that's my point, is that you can take any one of these things, and if you push

hard enough on it, I can tell you, you bump into the trust question. Exactly. Except, do you read that in the papers any place? No. You read it in a different way, I think. Yeah, but any place you get close to it, it's only framed as, oh, you know, when that thing showed up here, it didn't have it. Or, you know, Anthropic did this, but OpenAI didn't do that. Yeah. And so...

For me, all of these things, including the DeepSeek thing, were just a reinforcement of the idea that humanity, as it's organized itself around things like, first, let's say religions, and then the current crop of sovereign states, in each case they end up creating some rules. And those rules are very diverse.

Some of them have some very deep common underpinnings, but some not. And what I know from my own work in the previous generation is I can't put a product in a country and expect that my product gets to ignore their rules. You know, I tell little stories about early days at Microsoft. I ended up

forming and running a geopolitical analysis group at Microsoft as a common service to all the product groups. And why? Here's a case in point. The early versions of Windows, those of you that are old enough to remember, you said you had to set your time zone. And to set your time zone, you clicked on the clock and it popped up a little map of the world.

and you would point to the map and say, "This is where I am." And it would say, "Okay, I know what time zone you are in. I'll change your clock." So we shipped that around the world, and to make it convenient, we had a color-coded map. And then we got calls from different countries saying that we're not going to be allowed to ship Windows in their country. And we would say, "Well, why not?" And they said, "Well, you know, we looked at your time zone map and there's this little island right there.

And we think we own that island, but you colored it the wrong color. Now it turned out, if you called the guy whose color it was colored in, he said, "We're really glad it's colored our color, no problem." And so little by little, we realized that the operating system had to be chameleon-like. In the context in which it operated, it actually had to present itself in different ways.

And those little lessons and the generalization of that in that platform informs the way I think about how do you get this thing started. That anything that says to me that I have to ask any two parties to agree in advance on some consensus view of a rule, a rule set, is a non-starter. And especially if any of those parties are governments. And so what that means is you have to embark on this

with something that allows, I'll say the AI, or the adjudication function at least, to have this same chameleon-like property. The good news is that task, I believe, and we've now done enough work to believe it, is extremely manageable given the capacity of these AIs. It turns out that the union of all the rule sets humans have ever created at every level of granularity, except for one, which I'll comment on,

is a tiny, tiny part of what we teach these machines as we educate them. And so there's no capacity issue in doing it. But when you try to operate within this union of all the rules, it becomes a very high dimensional problem. Humans can't do this.

I mean, which judge do you know that knows every rule for every society and every country all the time and how it changes? Just practically you can't do it. But in terms of this kind of reasoning is actually really easily done by these polymathic machines. And so this creates the basis of saying, look,

I don't have to require anybody to change anything that they're currently living by in order to have a system where the architecture is common. It's sort of like having a common operating system that runs in 192 countries who all have different laws. This is sort of the next level of that, but recognizing in this case you have to factor it out

of any one of the implementations, because otherwise the complexity of verification is too hard. The other reason you have to think carefully about this, and Kissinger and I had many discussions about what was similar and not similar between the origins of the nuclear nonproliferation activities and this thing, was the realization that you can't have a positive solution that requires any of the actors to give up what they consider to be their strategic competition. And that's another thing that you just cannot get them to do a priori. Can I cut you off? Sure. Because I feel like this is a slightly different direction from what comes through in the book and we've taken a different path. So

You're talking about the trust architecture that will solve the problem of people having squabbles about the values and AI in different places because we'll decide on our own AI essentially with this trust architecture. You'll want to have an application AI that you can control. The question is...

Is it ultimately to your advantage to agree to some common architecture of control? So here's my question, because in the book, and maybe we shouldn't worry about the book, maybe we should move on from the book. Because in the book, one thing that really jumped out to me was this idea that is floated about corporations, especially, basically starting to act like nation states.

And in the book it's framed that potentially corporations could accrue the kind of power that current nation states have, and that they could form alliances similar to nation states. And while in the context of a trust architecture it seems not so bad,

if you read it in terms of corporations taking on the powers of nation states, it starts to sound a little bit scary. So I'm wondering, how do you get from... I think there are two different things in the book. Okay, okay. And I can't remember exactly where everything is, but the idea, this is sort of,

raised as a potential issue that you have to think about how to deal with. And it's coming up now because this is the issue now. Sure it is. But look, in my view, in my history,

This has been coming up for a long time. AI is just the latest instance of it. I just had a meeting at the House of Lords to talk about these issues, and I think some of them agreed that trying to go forward and create rules about these things in the future, without more active collaboration between the companies developing the technology and the people trying to regulate or legislate, seems like an obvious option, you know, a direction that has to be taken.

That said, the last part of the book is more akin to what I'm describing here, which we call the strategy. - Yes. - All right? And the strategy, well, you know, there's way too much detail to have put into the book about implementations and trade-offs and how to do it, but the strategy said: first you have to have a, well, I'm adding adjectives for emphasis, you have to have a scalable architecture for, excuse me, implementing these controls. But that by itself is not sufficient. Once you have that, let's say a company comes up with it, or an academic effort came up with it, or a multilateral group decided to do it. Yeah. It doesn't matter what its origin is. You know, if that can be done transparently, then that turns out to be a big step in the direction

of allowing people to collectively give their trust proxy to this system. - And do you think we're in a state geopolitically right now where that's a potential outcome, where maybe the EU,

I don't know, ask me next week. - NAFTA countries, next week, okay. Once DeepSeek has calmed down and people aren't freaking out that there's an arms race that's just gone up a lot, for example. - Or perhaps after people think about, wow, how did that thing get released and became the most popular download in a matter of days, when it actually didn't have most of the things we've been sitting here telling everybody we were worried about. - Yeah, exactly. - So there's always this big gap between the public and the elite.

That's always been true. And what we say we want to protect people from and then what they go and do on their own anyway. I know, that's my point. That's the part that I find quite interesting. You know, in academia, we worry a lot about protecting students and then they walk around and don't really seem to worry.

We worry about some of those data privacy issues, but we worry for them. People used to talk to us about advertising-related businesses. We kind of joked, look, most of these people would sell their grandmother to get something free on the internet. I remember last year I was in Spain and in Madrid they had to stop this because WorldCoin had offered people 50 euros to scan their irises.

And incidentally, WorldCoin's just been renamed because, of course, you probably know Donald Trump is all big on crypto. He's released his meme coins, and Melania's got hers in the last week as well, and they're out there floating crypto everywhere. So a year ago, this happened in Spain, and people were lined up around

the block getting their irises scanned for WorldCoin and I thought, oh my god, for 50 euros. They just don't have the worries you do. They don't. And maybe this is the issue is that the more you learn about this, the more worried you get because this is what I've noticed too about studying AI. The reality is these people in the aggregate make trade-offs in their lives too. Yes, definitely. And so in some sense you say, hey, they expect...

the government to be working at the end of the day to protect their interests. Yeah. All right. But there's something about AI leaders always being the ones who are most afraid. I mean, it was Geoff Hinton a couple of years ago. AI, well, he's described as one of the godfathers. So leaders, godfathers, whatever name. Yeah, but he's flipped a bit now. Yeah, it seems like...

Well, I was a little cynical at the time because I thought this is great that he's speaking out, but I also noticed that it was when he had resigned from Google. So that's a little bit cynical. It shouldn't be like that. But people speak out at different times and then they come back and they change it a little bit. And we've seen that in the last week with Altman and Amodei talking and Satya Nadella weighing in. And, you know, this is all a week after everybody's there at Trump's inauguration waiting to, you know, basically...

As David McWilliams, if anyone follows the Irish economist David McWilliams, he describes it as lining up to kiss the ring. You know, and he talked about this idea of kingmakers and this idea of, you know, the technology CEOs and the politicians just being too close together. So I guess on the one hand, yes, we have... It's funny, in the previous Trump administration, he was firing all the scientists in the... Exactly. So it flips. So it turns out Trump has changed too.

Trump is taking some advice differently, definitely, I've noticed. But coming back to, like, Yoshua Bengio and these guys and Hinton, in some ways, again, there's a resemblance to what happened with the people that did the work on nuclear weapons. They had a big motivation to compete to create these things, and they thought, hey, this is kind of an existential deal, and so they did it.

In a sense, everybody knew at a conceptual or mathematical level what the effect would be. But many of the people who developed them, only when they saw one explode, even in a test environment, kind of had an epiphany of wow, this is a qualitatively different kind of thing. And many of them started to lobby for saying, wow, maybe we shouldn't have done this. And so I think it's a natural

reaction when you realize you were sort of present at the birth of something that represents a qualitative shift. My belief is that both the business people and the academics who for many years have been on this pursuit, you know, are starting to say, okay, it's really happening. And now I can, you know, it's not so ephemeral in terms of what the impact might be.

But I think, you know, and I had the occasion to talk to Bengio a few months ago, and I was actually in some sense happy, because in the book we say the idea that you can stop: that train has left the station. It's gone. Because unless you could guarantee 100% that everybody in the world from this point forward would all stop on the same day and never do anything,

all you've done is cede your future to whoever keeps going. And so stopping is not an option. And so once you accept that, then you have to turn your attention not to worrying about stopping, but worrying about continuing and how to do that best. And so my view is those people are now moving in a direction

I'll say similar to the thinking that I've been pursuing and promoting here with the book, which is, well, we better get serious about trying to figure out how we come together and figure out how to align these things. The happy words have been safety and alignment. Then you say, okay,

safe by what measure? And you get, you know, an arm-wrestling contest. And you say, aligned with whose values? And you get into, you know, an arm-wrestling contest. So leave it to them? Who decides? Well, in my proposal I say, look, I'm going to do this by saying every society gets to keep their current rules. Say, who even decides that?

Each society can decide who they want to proxy the input of their rule set. What if they don't want anything to do with it at all? Okay, then they're out of the club. No, no, this is a super important point. Or if they don't have AI centers and they feel like, for example, if you're a small country that can't afford to have all the data centers because... No, you don't need any data centers, okay? I mean, the way to think about this, I mean, in abstract terms, has anybody here ever seen Wikipedia? All right? Well, just think of a curated wiki

where I give a page to every governmental, institutional thing on the planet. It's your page, all right? Yeah.

Here's the rules for what you can put, not what you put on the page, but the things you should put on the page. The type of things we're interested in. Give us all your rules. Give us all your regulations. How about inputting all your history of jurisprudence? And is this the doxa in the book? No, it's not. This is essentially...

The already written part. Okay. This is the discrete rules that the world operates on. What if you have an oral culture? I'll come to that. These are real questions that come up. They are. Now, it turns out, I'll come back to that question. Sorry, carry on. You're on a train. So,

So now I've got this curated wiki, curated in the sense that your society, I don't care whether you're a church, a country, or a company for that matter, or an individual. I say, okay, stick them in there. Put the rules in. No problem. The union of all that stuff is a tiny fragment of what these AIs can ingest. And

The only problem is, and in fact when we started doing this with a lot of the legal codes of the world, guess what the engineers told me? They said, "Hey, we already read them all. They're already in there. You can ask questions about it." And then a smart engineer said, "The problem is the rules have no special significance compared to everything else." As a result, even though it knows them all, it can even help you write a new one if you want.

It has no bearing on its behavior. So whether you want to do it, I'll say, Dario's way, which is constitutional, what was constitutional doing? It was taking some subset of rules, in this case fairly abstracted, and it was designing the machine to give special significance to those rules. My contention is you can do that by factoring that out of any one AI.

and saying it's better to have a consolidated place where all the rules can be recorded, including the ones that they haven't been trying to find and deal with. But I don't require anybody who's going to put rules in for their institution, they don't have to consult with anybody else. They don't have to agree. I don't care whether these rules are diametrically opposed.

Because I'm going to trust at the start that the AI can essentially look at that high-dimensional space. Because, again, importantly to think about, the AI in this case, once it's ingested, I'll call this funky Wikipedia thing of the rules, it doesn't use it as a library. That's the difference. Okay.

When the AI learns things... No, of course not. It's not just pulling. Well, you said of course not, but most people don't understand this well. And I wanted to get from what you were talking about with rules-based systems to where the whole learning comes in, because that's an important part of the book too, isn't it? It's that these machines learn. They learn. The Claude example is a great one because Anthropic talks a lot about constitutional AI

They don't talk... they talk a little bit. I listened to, I'm going to sound like a maniac, I listened to a five-hour podcast a few weeks ago, the Lex Fridman podcast. He had Dario Amodei, and he talked for about two or three hours, and then they had a Scottish woman, I believe, who is a philosopher by training, really interesting, and she talked about her experience and how she is

not training Claude, but how she is basically, how Claude is emerging and what their relationship is like and what the behavior is like. But it was a really, really interesting philosophically grounded discussion. It was beyond here are the rules and it was more about and now the system is creating something different. And wow, it's doing something crazy. I'm just happy to start the process by collecting the world's rules. Or at least

the rules from those who decide that it's in their interest to have a collective way of

normalizing the way that we manage these AIs. And does agentic AI fit in there as well? Sure, it doesn't matter. Okay, so all of the systems will continue to fall. I'm just saying all of it, the platform AI, all the application layers on top of that platform AI, all of these things essentially just have to get bolted or have the adjudicator bolted on the side. It all sounds so clean.

And orderly. Actually, it is. Because if it wasn't, it couldn't be scaled up or people would claim that it would be too expensive. So happily, we've done enough work in prototyping this to believe at least that the cost of taking any of the existing AIs, at least as far as we know them today, and

Bolting it together with this adjudication system is a very small amount of incremental work.
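To make the "bolt it on the side" idea concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical: the RulesWiki store, the RulePage structure, the adjudicate stand-in, and the guarded_ai_call wrapper are assumptions for illustration, not the architecture being prototyped or any deployed system. It only shows the shape of the design: each institution contributes its own page of rules independently, without consulting anyone else, and every proposed AI output passes through a separate adjudication step before it is returned.

```python
# Illustrative sketch only; all names and structures here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RulePage:
    """One 'wiki page' per society or institution: its own rules, entered independently."""
    owner: str                                   # e.g. a country, church, company, or individual
    rules: list[str] = field(default_factory=list)

class RulesWiki:
    """A curated collection of rule pages; contributors never need to agree with each other."""
    def __init__(self) -> None:
        self.pages: dict[str, RulePage] = {}

    def add_page(self, owner: str, rules: list[str]) -> None:
        self.pages[owner] = RulePage(owner, rules)

def adjudicate(action: str, context_owner: str, wiki: RulesWiki) -> bool:
    """Stand-in for the AI adjudicator: decide whether a proposed action is permissible
    under the rules that apply in this context. A real system would use a model grounded
    in the wiki; here we just do a trivial keyword check as a placeholder."""
    page = wiki.pages.get(context_owner)
    if page is None:
        return True                              # no registered rules for this context
    return not any(forbidden in action for forbidden in page.rules)

def guarded_ai_call(prompt: str, context_owner: str, wiki: RulesWiki) -> str:
    """'Bolt the adjudicator on the side' of any AI: propose, check, then act."""
    proposed_action = f"answer: {prompt}"        # placeholder for the platform AI's output
    if adjudicate(proposed_action, context_owner, wiki):
        return proposed_action
    return "Request declined by the adjudication layer."

# Example usage with a hypothetical jurisdiction and rule:
wiki = RulesWiki()
wiki.add_page("Exampleland", ["disclose personal data"])
print(guarded_ai_call("summarise this public report", "Exampleland", wiki))
```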

And in fact, that work level is going down because the AIs themselves are happy to write the code to hook them up. Exactly. And so in a sense, all these, you know, worries, is it too burdensome, you know, heavyweight? No, actually, it's architecturally quite clean. Sure. I mean, the AIs are teaching each other and the agent AIs are training each other. No, no, they're not teaching each other. None of that here. Maybe that was a very bad phrase, but you know what I mean? They are acting...

Sometimes on their own. I don't want to say which ones, okay, so I can't name exactly what I'm thinking about. But I've, you know... You're kind of getting ahead of the problem. Am I? Yeah, okay. Because, I mean, I have read studies about, for example, agentic systems where the worry is... It was just all the things you read about today. Yeah. No one has ever attempted this yet.

- Okay, so this is all a hypothetical future, 'cause the study I'm thinking of was alerting us to the potential danger of, for example, AI systems no longer, they all use natural human language now, but going back to speaking in code and humans being out of the loop in the way that we've kind of comforted ourselves that we're in right now, and that that is one of the potential dangers. I mean, you mentioned-- - But every example you're citing,

I contend is motivation to do what I'm suggesting. Sure, absolutely. Because, you know, asymptotically, the idea that humans are going to examine these things: A, impossible, B, not scalable. So you fail on either architectural and/or implementation grounds. Once you start to say, okay, I have rules, then you can implement them. You asked about the doxa. So I struggled for a while because, if you remember the origin here, Dario and I both

viewed at the time that the great gift of these machines was their polymathic capability, but that's also the thing that guarantees that humans can neither anticipate nor perhaps even understand what these things will do. So the idea that a human can write a rule to prevent a thing they can't imagine and don't understand, to prevent it from happening, is nonsensical.

That's the danger though, right there. That's the worry that people have, I think, isn't it? Well, it's the worry, but that's because no one's tried to think, how do I deal with that case? So I thought about this a lot. And as I wanted to do this, I got some people together to work with me on it on a volunteer basis. And one of them was a lady who was at that time doing software development, but it turns out her formal education was as a cultural anthropologist. Oh, great.

And I was talking to her about this problem. I said, look, you know, at the end of the day, I need some grounding for these things. It turned out, in parallel, I was working with another company, actually here in London, and they were doing an AI-based thing that depended on the AI doing predictions that depended on consistent application of the laws of chemistry. And lo and behold, they also were discovering the same problem.

You could ask the AI about laws of chemistry that were well accepted, not disputed, and it would tell you everything you want to know about them. But occasionally it would ignore them, because the laws of chemistry didn't have any special standing either. And so now you say, wow, okay, these problems, if you look at them, are all the same. They wanted to ground the model that they were using in the physics laws, in the chemical process laws.

I wanted to basically say, hey, I want to ground this thing in all the laws that exist. And then I ultimately have to backstop all that for when the AI goes into uncharted territory. And so I said, you know, I've heard that there is a lot of commonality in human societies at some very basic level of moral and ethical reasoning and behavior. And I said, but I don't quite understand all that. And she said, oh, yeah.

The cultural anthropology people call that the doxa. And I said, "Doxa? Never heard that word. Tell me about the doxa." And it turned out the doxa had exactly two properties that I loved and were the answer to my ultimate grounding question, which is the doxa is not innate. You're not born with it. Therefore, you have to learn it. So if you can learn it and you can figure out how it's learned, then the AI can learn it too.

So it's your cultural values? It's your cultural system? Let me finish. The second... no. The answer is no. Okay. Because it's common in all cultures and societies. And the second property is that they can't write it down. The advantage... he wants us to take questions or something.

Well, let's finish the thought. This is important, final thought on this. This is like the key part of the book, the doxa section. We don't want to cut him off at this point. It's a key part because it does speak to this rhetorical question that everybody's been asking, which is whose, you know. Whose values? So the thing I love about the fact that it's learned and can't be written down

is that I don't have to have anybody argue over whether the description of it is right. And, you know, you can chuckle, but it turns out that's actually very important. Where does it come from, then, if it's not written down? How do you know what to put in? Well, it's learned, all right? And so we've actually done a bunch of experimentation around this. So it turns out it's essentially like folkloric.

And so we said, "Okay, well how does that happen?" So we did one experiment that was fascinating. Which was, he said, "Okay, if I could..." I mean, the thought experiment is, "Okay, if I could take a sensor pack, you know, that was like humans, and I could lay it in the crib next to babies, then, you know, however the baby learns, it would see the same thing, hear the same thing, do the same thing, it'll learn them too. Is there a shortcut?" So then we started asking ourselves,

I wonder if there are any artifacts that humans use to aid their folkloric transmission of these things. And some bright person said children's fables, the moral of the story. And so we did a little experiment where we took a bunch of children's fables from the UK and the US and we basically trained the AI on them. I'd love to see that.

And then we used it to say, okay, now govern this little toy AI with respect to questions that relate to moral reasoning. And it basically seemed that, in fact, it not only understood each story, but that it had integrated them into a model that dealt with more complex issues.

Moral and ethical reasoning. So back to almost universal values of some sort. Right. And it was never written down expressly. Almost all societies have these little stories that they tell their kids and that they guide them by as they grow up.

So that's our lesson for this evening. And that's your lesson for tonight. Read your childhood stories, your folklore. No, the big problem is you might not read them because you already know them. You already know them. What you ought to do is think about them. Yeah. All right. But actually society ultimately overrides a lot of this stuff. Yeah.

by telling you what your culture should do or believe. And that's a whole other conversation for another day because I'd love to talk more about cultural anthropology and folklore and all of that. It's a brilliant place to be. I feel like I'm interrupting. Sorry.
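
For readers who want to make the fable experiment concrete, here is a minimal sketch in Python. It is an illustration only, not the experiment described above: it assumes a prompted model rather than whatever training setup was actually used, and the fables, the prompt wording, and the call_model stand-in are all hypothetical.

from dataclasses import dataclass

@dataclass
class Fable:
    title: str
    moral: str

# Illustrative corpus; the experiment described above used a larger set of UK and US fables.
FABLES = [
    Fable("The Boy Who Cried Wolf", "Liars are not believed even when they tell the truth."),
    Fable("The Tortoise and the Hare", "Steady perseverance beats overconfidence."),
    Fable("The Goose That Laid the Golden Eggs", "Greed destroys the source of good things."),
]

def adjudication_prompt(proposed_action: str) -> str:
    # Frame the proposed action against the morals distilled from the fables.
    morals = "\n".join(f"- {f.moral} ({f.title})" for f in FABLES)
    return (
        "You govern another AI. Judge the proposed action against these morals,\n"
        "drawn from children's fables:\n"
        f"{morals}\n\n"
        f"Proposed action: {proposed_action}\n"
        "Answer ALLOW or DENY, then explain which morals informed the decision."
    )

def call_model(prompt: str) -> str:
    # Placeholder: substitute a call to whatever language model you actually use.
    return "DENY - raising a false alarm to get attention echoes The Boy Who Cried Wolf."

def govern(proposed_action: str) -> bool:
    # Returns True only if the governor allows the toy AI to proceed.
    verdict = call_model(adjudication_prompt(proposed_action))
    print(verdict)
    return verdict.strip().upper().startswith("ALLOW")

if __name__ == "__main__":
    govern("Send a false emergency alert so users pay attention to our product update.")

The point of the sketch is only the shape of the idea: the morals are never enumerated as rules; they sit in the material the governor has absorbed, and the governor is asked to reason from them in context.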

You guys could do a five hour podcast and I'd just... Well, we're only just scraping the surface of it. There's so much in the book to talk about. Thank you so much. We do promise our participants some opportunity for questions. So I'll take some questions

here in the room. And please, raise your hand if you have a question, wait till the microphone comes to you, and when it does, please identify yourself, you know, are you a student or an alumna, whatever, and ask your question. And please keep it fairly brief and make sure it is a question. You know what he means. There's loads of hands up. Okay, so where are we? The one behind you. Over there?

How does AI protect the value of human rights if the trend of AI and algorithms does not count human engagement? Instead it is used to safeguard the interests of the elite. Okay, interesting question. I didn't catch it all. You're going to have to say it again. Did you want to repeat your question? Rather than have me summarize it? How does AI protect the value of human rights if the trend of AI and algorithms does not count human engagement? Instead...

Does not count what? It does not count human engagement in its data set, but instead is used to safeguard the interests of the elite. Okay. You want to take that one? That's for Craig. Well, I don't know, I guess the last discussion about the doxa is probably the closest short answer I can give you to how, at least in the adjudication system that I'm proposing. Remember, nothing that I'm suggesting here has been implemented yet.

So the behaviors, what companies are doing and what governments are doing, are not guided by any of this now. So I'm in fact trying to introduce in some way these very basic human values, ethics, morals, et cetera, where they in fact aren't written down, aren't uniformly represented, and in fact are difficult in some cases,

using classical ideas of training, to get represented or factored into the behavior of these machines. So, at least as a quick answer, I think that's what I'm trying to do, but it's a radically different way of attempting it than anything that's been tried so far that I know of. Hi. I'm a living fossil from the early days of the Internet, and when we started, we thought the Internet would be the great equalizer.

And then came social networks and proved us completely wrong. And the question is, will AI make us even less right as a result? Less right? In the Internet being the great equalizer. Will it even worsen the case, since the companies that control social networks and global platforms are also controlling the AI, which will amplify their power and reduce the power of normal people, or like 99.9% of human society?

Can I start that one? I mean, I think this is about the choices we make now. You know, and this is why when I look at the conversation that's going out there and, you know, people that I talk to in academia in particular,

I feel like AI literacy has to be the number one thing, because you can't have conversations like this, and you can't talk about how to steer a machine in the right direction, if you don't understand the basics of how it works. We have to get past this kind of AI-as-magic and actually start to understand it in order to do that. There's been so much talk again in the last few weeks about the promise of open source, and people often do compare it.

Open source is a promise in the way that, in education, we sometimes think about open educational resources as the wonderful thing, because they are, because they're safe and they're contained. But open-source AI, I'm not sure you can compare it in the same way.

But for me, I just wanted to say that, I mean, I think that's why literacy for all of us is so critical. So we're able to ask those difficult questions, so we can push back on things like, you know, do you want CEOs of tech companies to decide your political policy for you? Or do you want to decide that yourself? Do you want there to be an alliance of techno-states? Or do you quite like your cultural norms, your doxa? You know, I think we're at a very important moment

right now where we have to make those choices. Without a doubt. I'm optimistic, and relative to this equality question, Kissinger, 25 years ago, looked at the arrival of computing and the internet at a personal level, and his first comparison was, he said, "I think this is the most important thing since the invention of the printing press." The printing press was the triggering technological insertion

that ultimately resulted in a complete transformation of European society. It brought the Renaissance, it brought you the Enlightenment, it brought you the Treaty of Westphalia that got you the nation states we're all living in today. And then he got to this AI question and he said, "Well, this is even bigger." But his concern, even 25 years ago,

was that compared to the printing press time, he said, you know, that took a couple of hundred years. This seems like it's going to happen a lot faster. That was a prescient comment in my view, because he was forecasting what we're now observing, which is that our institutions have not really been able to adjust to the rate of change that the general adoption of these technologies has led to. So you could say there's a gap there. Now bring on the AI.

And Henry devoted the last five years of his hundred-year life to this exact question, which is: how can we help society start to think through the even greater ramifications of this, which are going to get diffused even more quickly? And therefore you kind of have to start with the assumption that, institutionally, we're not prepared for this change.

You know, I think that's really what we have to be talking about. On the other hand, the democratization of many things that have been in the hands of government and are largely failing on a global basis, like education, like health care, can be transformed in dramatic ways by the arrival of these technologies. I was at Oxford yesterday, and somebody asked me a question about education, and I said my personal belief is

that the education system has to be radically transformed, because for every child, from an early age now, there's absolutely no, I'll say, technological or cost reason that they shouldn't have a personal Socratic tutor to take them anywhere they want, at any rate they want, in learning. Now, that leaves open a set of questions about socialization and other things that we could address in other ways. Now, somebody from Oxford said, we still do it that way here.

And I said, yeah, but the problem is it's not scalable. That's why we ended up with the education machinery that we have had. But there's no reason to think that just tuning up the current education system by a little more use of AI will do anything close to the benefits of rethinking education in an AI-centric environment.

And for me, every one of these problems falls to radical change that comes from the insertion of AI into all of these questions.

Can I just add one thing as somebody who works in digital education? These are things that we hear about a lot, and we also get a lot of pushback when people hear things like AI is going to transform education. It gets everybody's back up pretty quickly. In fact, that's one of the triggers, I would say, where I am. But that's a trust question.

It's a trust question, but actually this is why, in the course of our conversation, I was very keen to get to the nitty-gritty of the policy and the implementation part. Because, yes, I agree with you that now, for example, kids can have a Socratic tutor, but how do we get to the point where we have a system where that's actually the norm? And I think, yes, the Peace of Westphalia, 1648, ended 30 years of wars of religion.

That's the part we can't paper over. The printing press did transform European society, but there was a lot of struggle and a lot of strife and a lot of death and a lot of suffering and a lot of destruction. I'm a historian by training. So we can't just paper over the transformation and just say we're going to transform. Because what I've realized from my study of history and cultural history in particular is that

We know what we have to do, and this is why I'm so interested in AI these days. We know what we have to do and we know what we can do, but we have not figured out how, and we're not answering that question. I think for everyone who's interested, again, that's why I think literacy and policy and action is the key. Kissinger was trained as a historian and a philosopher. Yes, I know. I was so impressed by how much he learned at 95. It's incredible. But what's interesting is, as he pointed to the printing press, he said it was a trigger.

Yeah, exactly. And in all of his experience, you know, computing itself he viewed as a trigger. Yeah. And AI is essentially the new and biggest trigger. So we're at the start. And so what that means is the change is going to come. Yeah, 100%. I agree with that. And it's only a question of the degree of chaos, you know, that ensues.

So we agree there'll be chaos at least. My own experience is in all these things, when you want to change things or see a need for change, there's only two ways. Great leadership or crisis. There's nothing else.

So you just ask yourself about that great leadership thing now. Now there's a lot of hands. Everyone's volunteering to be a great leader. I should take a question from our online audience. Ian Sheridan, who's an LSE alumnus and a lawyer, says, "With AI, China places importance on state control, the EU on regulation, and the US on free markets." You may or may not agree with that characterisation, but his question is, "How can these three actors develop common global rules?"

The key answer is that in my proposal there's nothing common about the rules. Super important point. What's common is the architecture for enforcing rules in context. That's a qualitatively different deal. I agree with the implied problem in the question: seeking commonality or consensus at the outset is a lost cause.

And the only thing that changes rules over time is people who show a new way, and people say, I like that better than this. And then that adoption basically drives the change. And most of the technology you all live with and use every day, that's how it all happened. There was no a priori agreement, particularly in the software domain, as to the rules by which these things would all work, operate, interoperate, et cetera.
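
The "common architecture, not common rules" idea can be made concrete with a small sketch. This is an illustration, not the proposal's actual design: every name here (Jurisdiction, Adjudicator, the sample rules) is hypothetical, and in the adjudication system being described the rules would be learned and interpreted by an AI in context rather than hand-written as predicates.

from dataclasses import dataclass, field
from typing import Callable

# A rule takes a proposed action and returns True if the action violates it.
Rule = Callable[[str], bool]

@dataclass
class Jurisdiction:
    name: str
    rules: dict[str, Rule] = field(default_factory=dict)

class Adjudicator:
    # Common enforcement layer; the rules themselves stay local to each jurisdiction.
    def __init__(self) -> None:
        self.jurisdictions: dict[str, Jurisdiction] = {}

    def register(self, jurisdiction: Jurisdiction) -> None:
        self.jurisdictions[jurisdiction.name] = jurisdiction

    def adjudicate(self, context: str, action: str) -> tuple[bool, list[str]]:
        # Apply only the rules of the jurisdiction in which the action takes place.
        local = self.jurisdictions[context]
        violated = [name for name, rule in local.rules.items() if rule(action)]
        return (len(violated) == 0, violated)

# Deliberately different, hypothetical rule sets: nothing common about the rules.
eu = Jurisdiction("EU", {"label-synthetic-media": lambda a: "unlabelled deepfake" in a})
us = Jurisdiction("US", {"no-impersonating-banks": lambda a: "impersonate a bank" in a})

adjudicator = Adjudicator()
adjudicator.register(eu)
adjudicator.register(us)

print(adjudicator.adjudicate("EU", "publish an unlabelled deepfake advert"))  # (False, ['label-synthetic-media'])
print(adjudicator.adjudicate("US", "publish an unlabelled deepfake advert"))  # (True, [])

The same action gets different verdicts in different contexts; what the actors would have to agree on is only the shared interface, much as agreeing on a file format lets different platforms exchange pictures.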

I think that's partly what people are worried about. Social media, for example, is one that people compare to a lot, right? And they say, this is where we went off the rails. Fine, but I'll stand by my statement. I get you. If you want to put a picture in all the social media things,

How's that possible? Because we decided to have JPEG files. Can I address the lawyer's question just to say one thing, which is literally a direct quote from the book? Because I've been thinking about this. I'm Irish, so I'm an EU citizen. I live in the UK. I grew up in North America as well. So I kind of look at every side of this all the time. In the book, one of the quotes that I pulled out was you collectively saying we can't expect unity.

But here's the quote: "What some see as an anchor to steady ourselves in the storm, others see as a leash holding us back." So it's almost like we can't even get to the point where we decide what rules we're going to agree on, because we haven't even decided, as a culture, as a world, whether or not rules are a good thing. You know, the EU wants rules, wants regulations, but, for example, Ireland, a country in the EU, is quite torn right now because a lot of its business is in the US. Yeah.

It's tricky. I don't have the answer. I teach at the DSI here at the LSE. So given this historical perspective, in the age of the cannon and the age of gunpowder, castles stopped being built. Which institutions do you believe will survive in this age of chaos that you foresee?

after the onset of AI, and also who do you believe is best positioned to provide leadership in terms of whatever education or just giving us a way forwards? Is it the leaders of AI companies who are full of engineers and physicists or is it other people within society?

Can I say one thing and then give it to Craig? Because one of the things, again, I keep coming back to the book because this is where I started with tonight, but there's a suggestion in the book that just like power might go from nation states to corporate enterprises, there's also a suggestion in the book which I found really interesting, which was that like now or historically, pre-modern I believe was the reference, you know, people had religious allegiance. So you ask which institution will last, will it be government, religion, etc., the big ones, right?

And the suggestion in the book is that people will transition potentially from quasi-religious allegiances and loyalties to AI tribes, I think is the phrase. I have to admit, for me, that

kind of got my heart racing a little bit, because I follow it, and I was thinking about people like the effective altruist movement, and the squabbling that's already happening in the AI space, if you watch those debates.

That's not an answer as to whether or not an institution will survive, but if that's the future of religion, I don't know what to say about it. Education, I worry about education as an institution. I'll be honest, that's why I personally am talking about education so much. I think we really need to be paying attention to the transformation that's coming. As for government and geopolitics, I think that you are definitely the person to answer that question, probably all three. It's hard to handicap this, one country or one

activity at a time. What I think is likely to happen is that, long term, there will be some kind of adjustment, not strictly due to the AI, but largely as a result of what the AI contributes in so many areas. Think, sort of optimistically, about what happens with the acceleration that comes in just discovery and engineering

with the emergence of this. We're not talking about small improvements. Dario published a paper in October, which I also tend to agree with, where he said he believed that the improvement that would come with the arrival of AI would be more like a step function of a 20x improvement, not a 20% improvement. He said, so how should people think about that?

Paraphrasing, he says: the current course and speed of humanity is pretty good, but everything that we would do on that course and speed this century, once the AI arrives in the next three or four years, will probably be done by 2035. That's a fundamental change in opportunity and economy. We can become a space-faring species. We can redesign ourselves.

I mean, it's not like tweaks to the world we live in and who we are. And these are the things that indirectly answer your question, not in any predictable way. But what does it mean when we decide, I mean, you can look at people like Bezos, who more on a philanthropic basis says his hope is that we end up with a trillion humans.

He says, "But it's pretty clear, I don't know how to make them all fit on this rock." So let's go build colonies in space. Let's become explorers. And my thinking is that, and I think his thinking is that, with the acceleration in our engineering and science capabilities that will come from this, that's not science fiction anymore. But if you're going to put more people in space than live here, then should we

tweak them up to be better, you know, for long-term living in space? Because we all got naturally selected for living down here. These questions are going to be real questions that humanity has to come to grips with. And so many of the questions, like which countries survive, I don't know. You know, when there are more humans in space than are down here in the next hundred years, I don't know what that means. And

somewhere along the line, you have to realize the decisions we're making now, while the AI is evolving... In the book, you know, we say that, unlike anything we've ever done before, this thing goes through three stages, not one. Everything humans have built historically, we used as a tool to amplify either our physical or our intellectual capabilities. This is the first thing where tool is not the end state. It's the beginning state. And

In the book we say the second stage is coexistence. Coexistence sort of begins when humanity sort of agrees or acknowledges that what we've actually birthed is a new non-biological species that's intelligent and smarter than humans. So, you know, today we have a lot of people running around doing synthetic biology and, yep, they're actually making new species, all right? They're not super intelligent, at least now, and they're biological.

Here, we're doing the other thing, which is we're making them super intelligent. They're just not biological. When you get into that, you're suddenly in a situation where you have to say, wow, we're in this partnership now with this thing that can help us do things that were historically unimaginable, but that is on its own independent path. We ultimately have to rationalize and reconcile the coexistence between these two species.

And we say the third stage comes once humans have basically chosen their path of evolution. Then we both enter a co-evolution state, and it's very difficult to know where that ends. But in broad terms, you know, you can say there are only three possible outcomes. One outcome is that one really gets ahead of the other and just chooses to ignore it and advance,

leaving the other behind. I mean, some people say, oh, it won't ignore it, it'll just make us pets or destroy us or whatever. But then of course they look at human behavior relative to less intelligent species on Earth and say, I hope they don't act like we do. But then you say, okay, well, let's say you don't want that outcome. Well, maybe you have, like, the other end of the spectrum,

where we've made choices and it's evolving and we decide, wow, there's a long-term symbiotic relationship and we just happily go forward into outer space or whatever it is. And you say, okay, that could be interesting. And then you might say, wow, what if one plus one equals three?

And we decide that we've evolved by design, the human species. It's evolved, and we decide, hey, we should hybridize into a thing that's not just carbon or not just silicon, but some combination of these things. In my view, totally feasible from an engineering point of view.

And so these things have all been the realm of real science fiction writing for many years. And humanity is now going to actually face, as it has in other cases, the fact that these could become realities. And the choices we make in the next few years, while this thing is evolving and emerging, will cast the die for what humanity will largely move toward. And then that puts you into these longer-term cases.

Who knows how long this will be, 100 years, 200 years, 300 years. We watched the whole Renaissance and everything else. That took 300 years. This might happen faster. And the take-home, if you haven't read the book, is: be nice to your AI. Because seriously, if they learn that you're a bit of a jerk, they learn that. Yeah, okay.

Some soberly optimistic notes to end on there. Thank you so much, Craig and Mairéad. It's been a fascinating discussion. I'm sorry we didn't get to everybody's questions, but there'll be time afterwards for further discussion. We're inviting you all to a drinks reception just outside. Craig will be there to sign his book as well. So please, thank you all for coming. I want to thank the LSE events team and the DSI events team, particularly Ellie Finlay. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favorite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE event soon.