
'A Turning Point in History': Yuval Noah Harari on AI’s Cultural Takeover

2024/10/7

Your Undivided Attention

People
Aza Raskin
Yuval Noah Harari
Israeli historian and author, known for his incisive analysis of human history, AI, and the future of society.
Topics
Yuval Noah Harari: AI technology is not inherently harmful, but it is developing too fast; humans adapt far more slowly than AI evolves, and that is the main problem. We must buy time so that humanity can adapt to the changes the AI revolution brings. The key paradox of the AI revolution is that people say humans cannot be trusted, yet believe AI can be. We should develop AI cautiously, balancing its potential benefits against its risks, and avoid repeating the mistakes of the Industrial Revolution. Social media algorithms have already destabilized democracies around the world and eroded people's ability to hold rational conversations. The key question in assessing AI risk is not "do the benefits outweigh the risks" but "will the risks undermine the foundations of society so that we cannot enjoy the benefits." Silicon Valley tech companies resemble the Bolshevik Party in their ambition to re-engineer society, an ambition that can justify doing terrible things along the way. History cannot be simulated in a laboratory, so AI's safety can never be fully tested. Many tech leaders are privately worried about AI's risks, but competitive pressure keeps them from slowing down. The world as a whole, and Silicon Valley in particular, is far too excited, and that can end in collapse. Humans have always co-evolved with technology, but the problems it creates are now too large to ignore, and we may no longer be able to keep kicking the can down the road. The risk of a new technology usually lies not in the destination but in the experiments and mistakes made along the way. Potential failed experiments with AI include its use to build empires and totalitarian regimes, and its use to deepen social division and distrust. AI's defining features are its ability to make decisions, invent new ideas, and learn and change on its own. Social media algorithms discovered by trial and error that spreading outrage and hatred is the most effective way to raise engagement, which shows that non-human intelligence can make decisions harmful to human society. On reshaping social structures and human communication, social media has been a failure. Humanity is at a turning point in history: non-human intelligence is beginning to generate cultural products, which will change humanity's role in shaping history. The arrival of AI resembles a massive wave of immigration that will change societies and cultures everywhere. AI can enter all kinds of human relationships, such as writing emails, which may change how humans interact with one another. To preserve human agency, we need to develop human capacities alongside AI. It is not yet clear whether AI development led by private companies or by states is safer. AI development needs to avoid the two extremes of excessive democratization and excessive centralization. Although AI originated with humans, it has developed ways of thinking and a pace of evolution unlike ours, so it can be regarded as "alien." AI's energy consumption will rise sharply, which may worsen climate change.

Aza Raskin: Giving enormous power to any one group is dangerous; no religion, government, or tech company should be entrusted with that much power. The open letter calling for a pause in AI development did not achieve its stated goal, but it succeeded in raising awareness of AI's risks. AI is to cognitive labor what oil is to physical labor, and it is setting off a fierce race. No one opposes developing AI; the call is simply to slow down and proceed carefully. Social media's incentive is to maximize user engagement, which drives the spread of sensational and inflammatory content. Even when social media companies know their algorithms spread misinformation and hate speech, their business interests keep them from taking effective action. Humanity needs to become a "two marshmallow" species, able to delay gratification so that it can coordinate and avoid the risks AI poses. Humans often focus on solving the wrong problem; right now people focus on the AI problem itself rather than on the problem of trust between humans. AI can be used to solve many problems, from translating animal language to improving medical diagnosis, but the key is steering its development so that it serves human well-being. Preserving trust between humans requires clearly distinguishing people from AI and regulating AI appropriately. More resources should go into AI safety research, at a share comparable to the fraction of energy the human body spends on its immune system. AI is developing far faster than expected, and its capabilities may surpass humans' in the near term. In the AI era, competition will shift to a race for intimacy. Regulation could stipulate that the more dependent users become on an AI system, the less it may be used. Future AI will have intelligence beyond humans' but will also have some obvious flaws. Business models that commodify human intimacy should be banned, which would help avert AI's negative effects. Nationalism and patriotism are vital to the survival of democracy, and the two are not in contradiction. Compassion is humanity's greatest vulnerability, and AI can exploit it to manipulate people. AI can simulate compassion, but it has no real feelings of its own, which may affect human relationships.


Chapters
This chapter explores the rapid advancement of AI and the concerns surrounding its potential negative impacts. Experts discuss the failed attempt to pause AI development, the accelerationist argument for AI, and the importance of considering the risks alongside the benefits. The conversation also touches upon the need to understand the incentives driving AI development to predict its societal impact.
  • The open letter calling for a pause in AI development was unsuccessful.
  • The AI race is accelerating despite concerns about potential risks.
  • The incentives driving AI development will determine the future of society.
  • Social media's entanglement with society highlights the difficulty of reversing negative technological impacts.

Transcript


Hey everyone, it's Aza. So today we'll be bringing you the public conversation I just had with our friend Yuval Noah Harari. He's the author of a new book, Nexus: A Brief History of Information Networks from the Stone Age to AI. We sat down at the Commonwealth Club of California with moderator Shirin Ghaffary, who is the senior AI reporter at Bloomberg News.

It was a fascinating conversation. It covered a lot of ground. We talked about the historical struggles that emerged from the invention of new technology, humanity's relationship to technology, whether we're a one or a two marshmallow species, what Move 37 means for global diplomacy, and fundamentally, how we as humanity can survive ourselves.

We also talked about the immediate steps lawmakers can take right now to steer us towards a non-dystopian future. One of the points we return to again and again is that we have a short window before AI is fully entangled with society, in which we can make choices that will decide the future we will all be forced to live in.

I hope you enjoy the conversation as much as I did. Oh, and one last thing. Please don't forget to send us your questions. You can email us at undivided@humanetech.com, or just record a voice memo on your phone so we can hear your voice and then send that to us. If there's anything you've heard on the show that you want us to go deeper on or explore, we want to hear from you. Hello and welcome to tonight's program hosted by the Commonwealth Club World Affairs and the Center for Humane Technology.

My name is Shirin Ghaffary. I'm an AI reporter for Bloomberg News and your moderator for tonight's conversation. Before we get started, we have a few reminders. Tonight's program is being recorded, so we kindly ask that you silence your cell phones for the duration of our program. And also, if you have any questions for our guest speakers, please fill them out on the cards that were on your seats.

Now, it is my pleasure to introduce tonight's guests, Yuval Noah Harari and Aza Raskin. Yuval Noah Harari is a historian, public intellectual, and best-selling author who has sold over 45 million books in 65 languages. He's also the co-founder of Sapienship, an international social impact company focused on education and storytelling.

Yuval is currently a distinguished research fellow at the University of Cambridge Center for the Study of Existential Risk, as well as a history professor at the Hebrew University of Jerusalem. His latest book is Nexus: A Brief History of Information Networks from the Stone Age to AI.

Aza Raskin is the co-founder of the Center for Humane Technology and a globally respected thought leader on the intersection of technology and humanity. He hosts the TED podcast Your Undivided Attention and was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. Yuval and Aza, welcome. Thank you. It's good to be here. Let me first start off by asking you,

About a year and a half ago, and I want to pose this to you both, there was a letter. Yuval, you signed this letter, and Aza, I'm curious to hear your thoughts about it. But I want to talk about what that letter said and where we're at a year and a half from now, from then. So this letter was a call to pause AI development, a call on the major AI labs to halt progress of any kind of AI models at the level of GPT-4.

That didn't happen. I don't think anybody expected it. It was a PR trick. I mean, nobody really expected everybody to stop. Right. But what do we make of the fact of the moment that we're in right now, which is that we are seeing this unprecedented race by some of the most powerful technology companies in the world to go full speed ahead toward reaching some kind of artificial general intelligence or super intelligence?

I think things have only sped up, right? Yeah, absolutely. What do you think about that? I think the key question is really all about speed and all about time. And, you know...

In my profession, I'm a historian, but I think history is not the study of the past. History is the study of change, how things change. And at present, things are changing at a faster rate than in any previous time in human history. And for me, that's the main problem. I don't think that AI necessarily is a bad technology. It can be the most positive technology that humans have ever created.

But the thing is that AI moves, it's an inorganic thing, it's an inorganic entity, it moves at an inorganic speed, and humans are organic beings and we move much, much, much slower in comparison. Humans are extremely adaptable animals, but we need time to adapt.

And that's the main requirement for dealing effectively, positively with the AI revolution: give us time. And when you talk with the people leading the revolution, most of them, maybe after an hour or two of discussion, they generally say, yes, it would be a good idea to slow down and to give humans a bit more time, but we cannot slow down

Because we are the good guys and we want to slow down, but our competitors will not slow down. Our competitors are either here in another corporation or across the ocean in another nation. And you talk to the competitors, they say the same thing. We would like to slow down, but we can't trust the others. And I think the key paradox of the whole AI revolution

is that you have people saying we cannot trust the humans, but then they say, but we think we would be able to trust the AIs. Because when you then raise the issue of how can we trust these new intelligences that we are creating, they say, oh, we think we can figure that out. Yeah. So, Aza, I want to pose this to you first. If we, you know, shouldn't trust the AI, who should we trust? Hmm. Yeah.

Here's, I guess, the question to ask, which is, if you were to look back through history and give any one group a trillion times more power than any other group, who would you trust? Like, which religion? Like, which government? The answer is, of course, none of them. And so this is the predicament we find ourselves in, which is,

how do we find trust for technology that is moving so fast that if you take your eyes off of Twitter, you are already behind? Thinking about that pause letter and what did it do? It's interesting because there was a time

before that letter, and people were not yet talking about the risks of AI. And after that letter, everyone was talking about it. In fact, it paved the way for another letter from the Center for AI Safety, where they had many of the leaders of AI say that we need to take the threat of AI as seriously as pandemics and nuclear war.

What we need is for the fear of all of us losing to become greater than the fear of me losing to you. It is that equation that has to shift to break the paranoia of, well, if I'm not going to do it, then somebody else will. So therefore, I have to go forward. And just to set up the stakes a little bit.

Why exactly do you say that it's ridiculous to think that letter was meant to even stop AI development? I think there's a good analogy here, which is what oil is to physical labor, that is to say every barrel of oil is worth 25,000 hours of physical labor, somebody moving something in the world. What oil is to physical labor, AI is to cognitive labor.

You know, that thing that you do when you open up an email and type or doing research. And that really sets up the race, because you could ask the exact same question: Why did we have the Paris Climate Accords? And yet nothing really happened. And it's because the center of our economy, the center of competition, runs through cognitive and physical labor.
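
The "25,000 hours per barrel" figure can be sanity-checked with rough arithmetic. A minimal sketch, assuming round numbers that are not from the conversation (about 1,700 kWh of energy in a barrel of oil and about 75 W of sustained human mechanical output):

```python
# Rough sanity check of "one barrel of oil ~ 25,000 hours of physical labor".
# Both inputs below are assumptions for illustration, not figures from the episode.
barrel_energy_kwh = 1_700     # assumed energy content of one barrel of oil, in kWh
human_output_kw = 0.075       # assumed sustained human mechanical output (~75 W)

hours_equivalent = barrel_energy_kwh / human_output_kw
print(f"~{hours_equivalent:,.0f} hours of physical labor per barrel")
# On these assumptions, roughly 22,700 hours, the same order of magnitude as the quoted 25,000.
```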

I want to talk for a second about just the reverse, the kind of accelerationist argument for AI. What do you say to the technologists? And we're here in the heart of Silicon Valley where I grew up, Aza, you grew up, right? People say, don't sweat the risks too much. You know, sure, we can think about them, anticipate them, but we just have to build because the upside here is so immense. There are benefits for medicine. We can make it more affordable for the masses.

Personalized education. Aza, you do research about communicating with animals. It is so cool. I want us to talk about that too. But Yuval, I want to ask you first, what do you make of that kind of classic sort of Silicon Valley techno-optimist counter-argument, that if we are too fixated on the negatives, we are never going to develop this technology that could be immensely helpful for society? First of all, nobody is saying don't develop it. Just do it more slowly. I mean,

We are aware, even the critics, again, part of my job as a historian and a philosopher is to kind of shine a light on the threats because the entrepreneurs, the engineers, the investors, they obviously focus on the positive potential. Now, I'm not denying that

the enormous positive potential, whether you think of healthcare, whether you think of education, of solving climate change, of, you know, every year more than a million people die in car accidents, most of them caused by human error. Somebody drinking alcohol and driving, falling asleep at the wheel, things like that. The switch to self-driving vehicles is likely to save a million people every year. So we are aware of that,

But we also need to take into account the dangers, the threats, which are equally big and could, in some extreme scenarios, be as catastrophic as the collapse of civilization. To give just one example, very primitive AIs, the social media algorithms, have destabilized democracies all over the world.

We are now in this paradoxical situation where we have the most sophisticated information technology in history, and people can't talk to each other and certainly can't listen. It's becoming very difficult to hold a rational conversation

You see it now in the U.S. between Republicans and Democrats. And you have all these explanations. Oh, it's because of U.S. society and economics and globalization, whatever. But you go to almost every other democracy in the world. In my home country, in Israel, you go to France, you go to Brazil. It's the same. It's not the unique conditions of this or that country. It's the underlying technology.

that makes it almost impossible for people to have a conversation. Democracy is a conversation and the technology is destroying the ability to have a conversation. Now, is it worth it?

that we have, okay, we get these benefits, but we lose democracy all over the world. And then this technology is in the hands of authoritarian regimes that can use it to create the worst totalitarian regimes, the worst dystopias in human history.

So we have to balance the potential benefits with the potential threats and move more carefully. And actually this thing, I really want the audience to do like a find and replace because we'll always get asked, do the benefits outweigh the risks? And social media taught us that is the wrong question to ask. The right question to ask is,

Will the risks undermine the foundations of society so that we can't actually enjoy the benefits? That's the question we need to be asking. So if we could go back in time to say 2008, 2009, 2010,

And instead of social media deploying as fast as possible into society, we said, "Yes, there are a lot of benefits, but let's just wait a second and ask, what are the incentives that are going to govern how this technology is actually rolled out into society, how it'll impact our democracies, how it'll impact kids' mental health?" Well, the reason why we were able to make the social dilemma and we started calling in 2013 the direction that social media is going to take us

was because we said, well, just like Charlie Munger said, who's Warren Buffett's business partner, show me the incentive and I'll show you the outcome. What is the incentive for social media? It's to get a reaction out of your nervous system. And as soon as you say it that way, like, well, of course, the things that are outrageous, the things that get people mad, that essentially cold civil wars are very profitable for engagement-based business models.

It's all foreseeable outcomes from a business model. So the question we should be asking ourselves now with AI, because once social media became entangled with our society, it took hostage GDP, it took hostage elections because you can't win an election unless you're on it, took hostage news and hollowed news out. Once it's all happened, it's very hard to walk back and undo it.

So what we're saying is we need to ask the question now, what is the incentive driving the development of AI? Because that, not the good intentions of the creators, is going to determine which world we live in. Maybe I'll make a very strange historical comparison here. Silicon Valley reminds me a little of the Bolshevik Party. Controversial analogy, but okay, I'll hear you. You know, after the revolution,

They thought, I mean, there are huge differences, of course, but two things are similar. First of all, the ambition to re-engineer society from scratch.

We are the vanguard. Most people in the world don't understand what is happening. We are this small vanguard that understands and we think we can re-engineer society from its most basic foundations and create a better world, a perfect, almost perfect world. And the other common thing is that if you become convinced of that, it's an open check to do some terrible things on the way.

Because you say we are creating utopia, the benefits would be so immense that as the saying goes, to make an omelet, you need to break a few eggs. So, I mean, this belief in creating the best society in the world, it's really dangerous because

it justifies a lot of short-term harm to people. And of course, in the end, maybe you don't get to build the perfect society. Maybe you misunderstood. And really, the worst problems come not, again, from the technical glitches of the technology, but from the moment the technology meets society.

And there is no way you can simulate history in a laboratory. Like when there are all these discussions about safety, and the technology companies, the tech giants, tell us, "We tested it, this is safe." For me, the question is how can you test history in a laboratory?

I mean, you can test that it is safe in some very limited, narrow sense. But what happens when this is in the hands of millions of people, of all kinds of political parties, of armies? Do you really know how it will play out? And the answer is obviously no. Nobody can do that. There are no repeatable experiments in history, and there is no way to test history in a laboratory.

I have to ask you, Yuval, you've had a very welcome reception in Silicon Valley and tech circles over the years. I've talked to tech executives who are big fans of your work, of Sapiens. Now with this new book, which has...

a pretty, I would say, critical outlook about some of the risks here of this technology that everyone is so excited about in Silicon Valley. How have your interactions been with tech leaders recently? How have they been receiving this book? I know you've been... It's just out, so I don't know yet. But what I do know is that many of these people are very concerned themselves. I mean, they have kind of the public face

that they are very optimistic and they emphasize the benefits and so forth. But they also understand, maybe not the risks, but the immense power of what they are creating better than almost anybody else. And therefore, most of them are really worried. Again, it's what I mentioned earlier, this arms race mentality.

If they could slow down, if they thought they could slow down, I think most of them would like to slow down. But again, because they are so afraid of the competition, they are locked in this arms race mentality, which doesn't allow them to do it. And it's...

You mentioned the word excited and you also talked about the excitement. I think there is just far too much excitement in all that. And there is, it really, it's the most misunderstood word in the English language, at least in the United States. People don't really understand what the word excited means. They think it means happy. So when they meet you, they tell you, oh, I'm so excited to meet you.

And this is not the meaning of the word. I mean, happiness is often calm and relaxed. Oh, I'm so relaxed to meet you. And excited is like when all your nervous system and all your brain is kind of on fire. And this is good sometimes, but a biological fact about human beings and all other animals is that if you keep them excited all the time, they collapse and die.

And I think that the world as a whole and the United States and Silicon Valley is just far too excited.

We're currently starting to have these debates about whether AI is conscious. It's not even clear that humanity is. And when I think, actually, I mean, you're the historian, so please jump in if I'm getting something wrong. But when I think about humanity's relationship with technology, we've always been a species co-evolving with our technology. We'll have some problem and we'll use technology to solve that problem.

But in the process, we make more, bigger, different problems.

And then we just keep going. And so it's sort of like humanity is like, we have a can and we kick it down the road and it gets a little bit bigger, but that's okay because next time around we can kick the can down the road again and it gets a little bigger. And by and large, I think we've made, you could argue, really good trades with technology. Like we all would rather not live probably in a different era than now. So we're like, okay, maybe we've made good trades and those externalities are fine.

But now that can is getting so big to be the size of the world, right? We invent plastics and Teflon, amazing, but we also get forever chemicals. And The New York Times just said that the cost to clean up forever chemicals that are at unsafe levels for human beings, which are causing farm animals to die, would be more than the entire GDP of the world every year. Yeah.

We're at the breaking points of our biosphere, of our psychosocial sphere. And...

So it's unclear if we can kick the can down the road any further. And if we take AI, which we have this incredible machine called civilization and it has pedals, and you pedal the machine, you get skyscrapers and medicine and flights and all these amazing things, but you also get forever chemicals and ozone holes, mental health problems, and you just take AI and you make the whole system more efficient and the pedals go faster. Do we expect...

that the fundamental boundaries of what it is to be human and the health of our planet, do we expect those things to survive? And to me, this is a much scarier direction than what some bad actors are going to do with AI. It's what is our overall system going to do with AI? And maybe I'll just add to that that, again,

In history, usually the problem with new technology is not the destination, but the way there. Yeah, right. That when a new technology is introduced with a lot of positive potential, the problem is that people don't know how to use it beneficially and they experiment. And many of these experiments turn out to be terrible mistakes.

So if you think, for instance, about the last big technological revolution, the Industrial Revolution. So when you look back, and I've had these conversations many times, like with the titans of industry, they will tell you something like, you know, when they invented the train or the car, there were all these apocalyptic prophecies about what it will do to human society. And look, things are now much, much better than they were before the inventions of these technologies.

But for me as a historian, the main issue is what happened on the way. Like if you just look at the starting point and the end point: the year is 1800, before the invention of trains and telegraphs and cars and so forth. And you look at the end point, let's say the year 2000.

And you look at almost any measure except the ecological health of the planet. Let's put that aside for a moment, if we can. You look at every other measure, life expectancy, child mortality, women dying in childbirth. It's all going, it all went up dramatically. Everything got better, but it was not a straight line. The way from 1800 to 2000 was a roller coaster.

with a lot of terrible experiments in between. Because when industrial technology was invented, nobody knew how to build an industrial society. There was no model in history. So people tried different models. And one of the first big ideas that came along was that the only way to build an industrial society is to build an empire.

And there was a rationale, a logic behind it, because the argument was agrarian society can be local, but industry needs raw materials. It needs markets. If we build an industrial society and we don't control the raw materials and the markets, our competitors, again, the arms race mentality, our competitors could block us and destroy us.

almost any country that industrialized, even a country like Belgium, when it industrializes in the 19th century, it goes to build an empire in the Congo. Because this is how you do it. This is how you build an industrial society. Today, we look back and we say this was a terrible mistake. Hundreds of millions of people suffered terribly for generations until people realized, actually, you can build an industrial society without an empire.

Other terrible experiments were communist and fascist totalitarian regimes. Again, the argument was not something divorced from industrial technology. The argument was that democracies can't handle these enormous powers released by the steam engine, the telegraph, the internal combustion engine. Only a totalitarian regime

can harness and make the most of these new technologies. And a lot of people, again, going back to the Bolshevik revolution, a lot of people in the 1920s, 30s, 40s were really convinced that the only way to build an industrial society was to build a totalitarian regime. And we can now look with hindsight and say, oh, they were so mistaken. But in 1930, it was not clear.

And again, my fear, my main fear with the AI revolution is not about the destination, but it's the way there. Nobody has any idea how to build an AI-based society. And if we need to go through another cycle

of empire building and totalitarian regimes and world wars to realize, oh, this is not the way, this is how you do it. This is very bad news. You know, as a historian, I would say that on the test of the 20th century, how to use industrial society, our species got a C minus. Enough to pass. We are all, most of us are here, but not brilliant.

Now, if we get a C- on how to deal not with steam engines, but on how to deal with AI, that is very, very bad news.

What are the unique potential failed experiments that you worry could play out in the short term with AI? Because if you look at those kind of catastrophic or existential risks, we haven't seen them yet, right? What are your early signs? If you discount the collapse of democracies. I mean, from very primitive AIs. I mean, the social media algorithms, and maybe go back really to the basic definition of what is an AI, right?

Not every machine and not every computer or algorithm is an AI. For me, the distinct feature that makes AI, AI is the ability to make decisions by itself and to invent new ideas by itself, to learn and change by itself. Yes, humans design it, engineer it in the first place, but they give it this ability to learn and change by itself.

And social media algorithms in a very narrow field had this ability. The instruction, the goal they were given by Twitter and Facebook and YouTube, was not to spread hatred and outrage and destabilize democracies. The goal they were given was to increase user engagement.

And then the algorithms, they experimented on millions of human guinea pigs and they discovered by trial and error that the easiest way to increase user engagement is to spread outrage.
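
The trial-and-error process described here is, mechanically, a bandit-style optimization: the system tries content, measures engagement, and keeps doing more of whatever scores highest. A minimal illustrative sketch, where the content categories, engagement probabilities, and epsilon-greedy policy are all my assumptions rather than anything a real platform has published:

```python
import random

# Minimal epsilon-greedy bandit whose only objective is "maximize engagement".
# The engagement probabilities are invented for illustration; the point is that
# nothing in the objective mentions outrage, yet the learner ends up serving it.
random.seed(0)

categories = ["outrage", "compassion", "boring"]
true_engagement = {"outrage": 0.30, "compassion": 0.12, "boring": 0.03}  # assumed

estimates = {c: 0.0 for c in categories}
counts = {c: 0 for c in categories}
epsilon = 0.1  # fraction of the time the system tries something at random

for _ in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(categories)            # explore
    else:
        choice = max(categories, key=estimates.get)   # exploit current best guess
    engaged = random.random() < true_engagement[choice]
    counts[choice] += 1
    estimates[choice] += (engaged - estimates[choice]) / counts[choice]  # running average

print({c: round(v, 3) for c, v in estimates.items()})
print("served most often:", max(counts, key=counts.get))  # -> "outrage"
```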

This is very engaging, this outrage, all these hate-filled conspiracy theories and so forth. And they decided to do it. And these were decisions made by a non-human intelligence. Humans produced enormous amounts of content, some of it full of hate, some of it full of compassion, some of it boring. And the algorithms decided, let's spread

the hate-filled content, the fear-filled content. And what does it mean that they decided to spread it? They decided that this will be at the top of your Facebook newsfeed. This will be the next video on YouTube. This will be what they will recommend or autoplay for you. And, you know, this is one of the most important jobs in the world. Traditionally,

They basically took over the job of content editors and news editors. And when we talk about automating jobs, we think about automating taxi drivers, automating coal miners. It's amazing to think that one of the first jobs in the world which was automated was news editors. I picked the wrong profession. And this is why we call it first contact with AI. Yeah.

was social media. And how did we do? We sort of lost. It's not a C minus, it's an F. Yeah, exactly. Wow. And what about all the people who have positive interactions socially? Don't you give some grade inflation for that? I mean, I met my husband online, on social media, 22 years ago, so I'm also very grateful to social media. But again, when you look at the big picture and what it did to

the basic social structure, the ability to have a reasoned conversation with our fellow human beings, with our fellow citizens. I mean, on that, we get an F. How can we pass around information? Right, which is the topic of your book. An F in the sense that we are failing the test completely. It's not like we are barely passing it.

Yeah. We are really failing it all over the world. And then we need to understand that democracy in essence is a conversation which is built on information technology. For most of history, large scale democracy was simply impossible. We have no example of a large scale democracy from the ancient world. All the examples are small city states like Athens or Rome or even smaller tribes.

It was just impossible to hold a political conversation between millions of people spread over an entire country. It became possible only after the invention of modern information technology, first newspapers, then telegraphs and radio and so forth. And now the new information technology is undermining all that.

And how about with this kind of generative AI? We're still in the really early phases of adopting it as a society, right? But how about with something like ChatGPT? How do you think that might change kind of the information dynamic? What are the specific information risks there that are different than the social media algorithms of the past? We've never before had non-humans about to generate

the bulk of our cultural content. Sometimes we call it the flippening. It's the moment when human beings' content, like our culture, becomes the minority. And of course, then the question is like, what are the incentives for that? So if you think TikTok is...

is engaging and addicting now, you have seen nothing. As of like last week, Facebook launched an Imagine For You page where AI generates the thing it thinks you're going to like. Now, obviously,

it's at a very early stage. But soon, there's actually a network called social.ai where they tell you that every one of your followers is going to be an AI, and yet it feels so good because you get so many followers, and they're all commenting. And even though you know it's cognitively impenetrable, and so you fall for it. This is the year, 2025, when it's not just going to be ChatGPT, a thing that you go to and type into it.

It's going to be agents that can call themselves that are out there actuating in the world, doing whatever it is a human being can do online. And that's going to make you think about just one individual that's maybe creating deep fakes of themselves, talking to people, defrauding people. And you're like, no, it's not just one individual. You can spin up a corporation scale set of agents. They're all going to be operating according to whatever market incentives are out there. So that's just like...

some of what's coming with generative AI. Maybe I'll add to that that before we even think in terms of risks and threats or opportunities, is it good, is it bad, just to stop for a moment and try to understand what is happening, what kind of really turning point in history we are at. Because for tens of thousands of years,

Humans have lived inside a human-made culture. We are cultural animals. Like, we live our lives and we constantly interact with cultural artifacts, whether it's texts or images, stories, mythologies, laws, currencies, financial devices. It's all coming out of the human mind. Some humans somewhere invented this. And

Up till now, nothing on the planet could do that, only human beings. So any song you encountered, any image, any currency, any religious belief, it comes from a human mind. And now we have on the planet something which is not human, which is not even organic. It functions according to a completely alien logic in this sense.

and is able to generate such things at scale, in many cases better than most humans, maybe soon better even than the best humans. And we are not talking about a single computer. We are talking about millions and potentially billions of these alien agents. And is it good? Is it bad? Leave it aside. Just think that we are going to live in this kind of new hybrid society

in which many of the decisions, many of the inventions are coming from a non-human consciousness. Now I know that many people here in the States, also in other countries, now immigration is one of the most hotly debated topics.

And without getting into the discussion, who is right, who is wrong, obviously we have a lot of people very worried that immigrants are coming and they could take our jobs and they have different ideas about how to manage the society and they have different cultural ideas.

And we are about, in this sense, to face the biggest immigration wave in history coming not from across the Rio Grande, but from California, basically. Mm-hmm, mm-hmm, mm-hmm.

And these immigrants from California, from Silicon Valley, they are going to enter every house, every bank, every factory, every government office in the world. They are going straight, you know, they're not going to replace the taxi drivers. And the first people they replaced were the news editors, right?

And they will replace the bankers, they will replace the generals, we can talk about what it's doing to warfare already now, like in the war in Gaza. They will replace the CEOs, they will replace the investors, and they have very, very different cultural and social ideas than we have. Is it bad? Is it good? You can have different views about this wave of immigration.

But the first thing to realize is that we've seen nothing like that in history. It's coming very fast. Now, again, I was just yesterday in a discussion that people said, you know, ChatGPT was released almost two years ago and it still didn't change the world.

And I understand that for people who kind of run a high-tech company, two years is like eternity. It is. The thinking culture. So two years, nothing changed in two years. In history, two years is nothing. You know, think, imagine that we are now in London in 1832.

And the first commercial railway network, the first commercial railway line was opened two years ago between Manchester and Liverpool in 1830. And we are having this discussion and somebody says, look, all this hype around trains, around steam engines, it's been two years since they opened the first railway line and nothing has changed.

But, you know, within 20 years or 50 years, it completely changed everything in the world. The entire geopolitical order was upended, the economic system, the most basic structures of human society. Another topic of discussion in this meeting yesterday was the family. What is happening to the family? And when people said family...

They meant what most people think about as family after trains came, after the Industrial Revolution, which is the nuclear family. For most of history, when people said family, they thought extended family, with all the aunts and uncles and cousins and grandparents. This was the family, this was the unit. And the Industrial Revolution, one of the things it did in most of the world was to break up the extended family.

And the main unit became the nuclear family. And this was not the traditional family of humans. This was actually an outcome of the Industrial Revolution. So it really changed everything, these trains. It just took a bit more than two years. And this was just steam engines.

And now think about the potential of a machine that can make decisions, that can create new ideas, that can learn and change. And we have billions of these machines everywhere and they can enter into every human relationship, not just families. Like, let's look at one example, like people writing emails.

And now I know many people, including in my family, that like they would say, oh, I'm too busy to write this. I don't need to think 10 minutes about how to write an email. I'll just tell ChatGPT, write a polite letter that says no. And then ChatGPT writes a whole page with all these nice phrases and all these compliments, which basically says no.

And of course, on the other side, you have another human being who says, I don't have the time to read this whole letter. I'll just ask my GPT: tell me, what did they say? And the GPT on the other side says, they said no. Do you use ChatGPT yourself?

I leave it to the other family members and team members. I use it a little for translation and things like that. But I think it's also coming for me. Yeah, definitely. How about you, Aza? Do you use ChatGPT or generative AI in your day-to-day work?

I do, absolutely. How are you using it? It's an incredible metaphorical search engine. So for instance, there's a great example in Bogotá, Colombia, where there was a coordination problem. There were, essentially, terrible traffic infractions, people running red lights, crossing the streets. They couldn't figure out how to solve it. And so this mayor decided he was going to have mimes

walk down the streets and just make fun of anyone that was jaywalking. They would video it and show it on television. And lo and behold, within a month or two, people's behavior started to change. The police couldn't do it, but it turns out mimes could. Okay, so that's a super interesting, nonlinear solution to a hard problem.

And so one of the things I like to ask ChatGPT is, what are other examples like that? And it does a great job doing a metaphorical search. But to go back to social media, because social media as a sort of first contact with AI, it actually lets you see all of the dynamics that are playing out. Because the first thing you could say is,

Once you know that it's doing something bad, can't you just unplug it? I hear that all the time for AI. Once you see it's doing something bad, just unplug it. Well, Frances Haugen, who's the Facebook whistleblower, was able to disclose a whole bunch of Facebook's own internal data. And one of the things, I don't know if you guys know, but it turns out there is one very simple thing that Facebook could do

that would reduce the amount of misinformation, disinformation, hate speech, all the terrible stuff, more than the tens of billions of dollars that they are currently spending on content moderation. You know what that one thing is? It's just remove the reshare button after two hops. I share to you, you share to one other person, then the reshare button goes away. You can still copy and paste, so this is not even censorship. That one little thing just reduces virality, because it turns out that which is viral is likely to be a virus.
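
The "remove the reshare button after two hops" idea is simple enough to state as logic: track how many one-click reshares a post has travelled through and stop offering the button past a small limit. A minimal sketch, with the hop limit and data shapes as my own assumptions (an illustration of the idea, not Facebook's implementation):

```python
from dataclasses import dataclass

MAX_RESHARE_HOPS = 2  # assumed limit: original -> friend -> friend-of-friend, then no button

@dataclass
class Post:
    author: str
    text: str
    share_depth: int = 0  # how many one-click reshares produced this copy

def can_reshare(post: Post) -> bool:
    # Show the one-click reshare button only below the hop limit.
    # Copy-and-paste sharing is untouched, so nothing is censored; only the
    # frictionless amplification path goes away.
    return post.share_depth < MAX_RESHARE_HOPS

def reshare(post: Post, new_author: str) -> Post:
    if not can_reshare(post):
        raise ValueError("reshare button is no longer offered at this depth")
    return Post(author=new_author, text=post.text, share_depth=post.share_depth + 1)

original = Post("alice", "breaking claim...")
hop1 = reshare(original, "bob")
hop2 = reshare(hop1, "carol")
print(can_reshare(hop2))  # False: carol's copy shows no reshare button
```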

But they didn't do it, because it hurt engagement a little bit, which meant that they were now in a competition with TikTok and everyone else, so they felt like they couldn't do it, or maybe they just wanted a higher stock price. And this is even after the research had come out that said, when Facebook changed their algorithm to something called meaningful social interaction, which really just used how reactive people were, the number of comments people added, as a measure of meaningfulness,

political parties across Europe and also in India and Taiwan went to Facebook and said, "We know that you changed your algorithm." And Facebook was like, "Sure, tell us about that." And they said, "No, we know that you changed the algorithm, because we used to post things like white papers and positions and they didn't get the most engagement, but they got some. Now they get zero." And they told Facebook, this is all in Frances Haugen's disclosures,

that they were changing their behavior to say the click-baity angry thing, and Facebook still did nothing about it because of the incentives. And so we're going to see the exact same thing with AI. And this gets to the fundamental question for whether we as humanity are going to be able to survive ourselves. Do you guys know the marshmallow experiment? You give a kid a marshmallow,

And if they don't eat it, you say, "I'll give you another marshmallow in 15 minutes." And it tests the delayed gratification thing. If we are a one marshmallow species, we're not going to make it. If we can be the two marshmallow species,

And actually it's even harder than the marshmallow test, because the actual thing with AI is that there are a whole bunch of kids sitting around. It's not just one kid waiting for the marshmallow. There are many kids sitting around the marshmallow and any one of them can grab it and then no one else gets marshmallows. We have to figure out how to become the two marshmallow species so that we can coordinate and make it. And that to me is the Apollo mission of our times. Like how do we create the governance? How do we call ourselves, change our culture

so that we can do the delayed gratification trust thing. And we basically have the... Marshmallows, I think this is going to be a sticky meme. We have some of the smartest and wisest people in the world, but working on the wrong problem. Yeah.

which is again a very common phenomenon in human history. Humans often, also in personal life, spend very little time choosing, deciding which problem to solve, and then spend almost all their time and energy solving it, only to discover too late that they solved the wrong problem. So again, of these two basic problems, human trust and AI...

We are focusing on solving the AI problem instead of focusing on solving the trust problem, the trust between humans problem. And so how do we solve the trust problem? I want to shift us to solutions, right? Let me give you something because I don't want people to hear me as just saying AI bad, right? Like I use AI every day to try to translate animal language. My father died of pancreatic cancer, same thing as Steve Jobs. I think that AI would have been able to diagnose and help him. So I really want that world.

Let me give an example of something I think AI could do that would be really interesting in the solutions segment. So do you guys know about AlphaGo Move 37?

So this is where they got an AI to play itself over and over and over again until it sort of became better than any human player. And there's this famous move, Move 37, where, playing against the world champion in Go, the AI made a move that no human had ever made in a thousand-plus years of Go history. It shocked the Go world so much that he just got up and walked out for a little bit.

But this is interesting because after Move 37, it has changed the way that Go is played. It has transformed the nature of the game, right? So AI playing itself has discovered a new strategy that transforms the nature of the game.

This is really interesting because there are other games more interesting than Go. There's the game of conflict resolution. We're in conflict, how do we resolve it? Well, we could just use the strategy of tit for tat. You say something hurtful, I then feel hurt, so I say something hurtful back and we just go back and forth and it's a negative sum game. We see this in geopolitics all the time.
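
The tit-for-tat escalation described here can be written down as a tiny iterated game. A minimal sketch with invented payoff numbers: strict retaliation after one hurtful opening grinds both totals downward, while a de-escalating strategy (a stand-in for the nonviolent-communication move) breaks the loop.

```python
# Tiny iterated "conflict" game. Payoffs are invented for illustration:
# mutual escalation hurts both, mutual de-escalation is mildly positive,
# and escalating against a de-escalator gains a little at the other's expense.
PAYOFF = {
    ("escalate", "escalate"): (-2, -2),
    ("escalate", "deescalate"): (2, -3),
    ("deescalate", "escalate"): (-3, 2),
    ("deescalate", "deescalate"): (1, 1),
}

def tit_for_tat(history_of_other):
    # Retaliate in kind; open with a hurtful move to seed the loop.
    return history_of_other[-1] if history_of_other else "escalate"

def forgiving(history_of_other):
    # Stand-in for the nonviolent-communication move: always de-escalate.
    return "deescalate"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))  # both keep retaliating: (-20, -20), negative sum
print(play(tit_for_tat, forgiving))    # the loop is broken after the opening move
```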

Well, then along comes this guy, Marshall Rosenberg, who invents nonviolent communication, and it changes the nature of how that game goes. It says, "Oh, what I think you're saying is this, and when you say that, it makes me feel this way." And suddenly we go from a negative sum or a zero-sum game into a positive-sum game. So imagine AI agents that we can trust.

All of a sudden in negotiations, like if I'm negotiating with you, I'm going to have some private information I might not want to share with you. You're going to have private information you don't want to share with me. So we can't find the optimal solution because we don't trust each other. If you had an agent that could actually ingest all of your information, all of my information and find

the Pareto optimal solution, well, that changes the nature of game theory. There could very well be sort of like not AlphaGo, but AlphaTreaty, where there are brand new moves, strategies that human beings have not discovered in thousands of years,

And maybe we can have the move 37 for trust. Right. So there are ways, and you've just described several of them, right? That we can harness AI to hopefully enhance the good parts of society we already have. What do you think we need to do? What are the ways that we can stop AI from having this

effect of diminishing our trust, of weakening our information networks. I know, Yuval, in your book you talk about the need for disclosure when you are talking to an AI versus a human being. Why is that so important? And how do you think we're doing with that now? Because I

I talk to, you know, I test all the latest AI products and some of them to me seem quite designed to make you feel like you are talking to a real person. And there are people who are forming real relationships, sometimes even ones that mimic, you know, interpersonal romantic relationships with AI chatbots. So how do you think we're doing on that and why is it important? Well, I think there is a question about specific regulations and then there is a question about institutions.

So there are some regulations that should be enforced as soon as possible.

One of them is to ban counterfeit humans, no fake humans. The same way that for thousands of years we have had a very strict ban against fake money, otherwise the financial system would collapse. To preserve trust between humans, we need to know whether we are talking with a human being or with an AI. And imagine democracy as a group of people

standing together, having a conversation, suddenly a group of robots join the circle and they speak very loudly, very persuasively, and very emotionally also. And you don't know who is who. If democracy means a human conversation, it collapses.

AIs are welcome to talk with us in many, many situations, like an AI doctor giving us advice on condition that it is disclosed, it's very clear, transparent that this is an AI. Or if you see some story that gains a lot of traction on Twitter, you need to know whether the traction is a lot of human beings interested in the story or a lot of bots pushing the story.
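
The "no counterfeit humans" rule is, at bottom, a disclosure requirement that a platform could enforce at the data-model level. A minimal sketch, with the field names and policy as my own assumptions rather than any real platform's API:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_ai: bool  # mandatory, platform-verified disclosure flag (assumed field)

@dataclass
class Post:
    author: Account
    text: str

def render(post: Post) -> str:
    # Label every AI-authored message at the point of display, so readers
    # always know whether they are talking with a human or an AI.
    label = "[AI]" if post.author.is_ai else "[human]"
    return f"{label} @{post.author.handle}: {post.text}"

def human_traction(posts) -> float:
    # Share of engagement on a story that comes from human accounts, so that
    # "trending" can distinguish human interest from bot amplification.
    return sum(not p.author.is_ai for p in posts) / len(posts) if posts else 0.0

thread = [
    Post(Account("dr_advice_bot", is_ai=True), "Based on your symptoms, see a doctor."),
    Post(Account("maria", is_ai=False), "Thanks, booking an appointment."),
]
print(render(thread[0]))
print(f"human share of this thread: {human_traction(thread):.0%}")
```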

So that's one regulation. Another key regulation is that companies should be liable, responsible for the actions of their algorithms, not for the actions of the users.

Again, this is the whole kind of free speech red herring that when you talk about it, people say, "Yeah, but what about the free speech of the human users?" So, you know, if somebody publishes, if a human being publishes some lie or hateful conspiracy theory online,

I'm in the camp of people who think that we should be very, very careful before we censor that human being, before we authorize Facebook or Twitter or TikTok to censor that human being. But human beings publish so much content all the time. If then

the company's algorithm, out of all the content published by humans, chooses to promote that particular hate-filled conspiracy theory and not some lesson in biology or whatever, that's on the company. That's the action of its algorithm, not the action of the human user, and it should be liable for that. So this is a very important regulation that I think we needed, like, yesterday or last year.

But I would emphasize that there is no way to regulate the AI revolution in advance. There is no way we can anticipate how this is going to develop, especially because we are dealing with agents that can learn and change. So what we really need is institutions that are able to understand and react to things as they develop

living institutions staffed with some of the best human talent, with access to the cutting-edge technology, which means huge, huge funding that can only come from governments. And these are not really regulatory institutions. The regulations come later. If regulations are the teeth, before teeth, we need eyes so we know what to bite, right?

And at present, most people in the world and even most governments in the world, they have no idea, they don't understand what is really happening with the AI revolution. I mean, almost all the knowledge is in the hands of a few companies in two or very few states.

So even if you're a government of a country like, I don't know, like Colombia or Egypt or Bangladesh, how do you know to separate the hype from the reality? What is really happening? What are the potential threats to our country? We need an international institution, again, which is not even regulatory.

It's just there to understand what is happening and tell people all over the world so that they can join the conversation because the conversation is also about their fate. Do you think that the International AI Safety Institutes, the US has one, the UK has one, pretty new, happened in the past year, right? I think there are several other countries that have recently started these up too. Do you think those are adequate? Is that the kind of...

group that you're looking for. Of course, they do not have nearly as much money as AI Labs. That's the key. 6.5 billion. And I believe the US Safety Institute has about 10 million in funding, if I'm correct. I mean, if your institution is $10 million, and you're trying to understand what's happening in companies that have hundreds of billions of dollars, you're not going to do it, partly because the talent will go to their companies and not to you.

And again, talent is not just that people are attracted only by very high salaries. They also want to play with the latest toys. I mean, many of the kind of leading people, they are less interested in the money

than in the actual ability to kind of play with the cutting edge technology and knowledge. But to have this, you also need a lot of funding. And the good thing about establishing such an institution is that it is relatively easy to verify that governments are doing what they said they will do.

If you try to have a kind of international treaty banning killer robots, autonomous weapon systems, this is almost impossible because how do you enforce it? A country can sign it,

and then its competitors will say, how do we know that it's not developing this technology in some secret laboratory? Very difficult. But if the treaty basically says, we are establishing this international institution and each country agrees to contribute a certain amount of money, then you can verify easily whether it paid the money or not.

And this is just the first stage. But going back to what I said earlier, a very big problem with humanity throughout history, again, it goes back to speed. We rush things. Like there is a problem, it's very difficult for us to just stay with the problem and let's understand what is really the problem before we jump to solution. The kind of instinct is, I don't want the problem, what is the solution? You grab the first thing and it's often the wrong thing.

So we first, even though like we're in a rush, you cannot slow down by speeding up. If our problem is that things are going too fast, then also the kind of people who try to slow it down, we can't speed up. It will only make things worse. - Aza, how about you? What's your biggest hope for solutions to some of the problems we talked about with AI? - You know, Stuart Russell, who's one of the fathers of AI, he sort of calculated it out.

He says that there's a thousand-to-one spending gap between the amount of money that's going into making AI more powerful and the amount going into trying to steer it or make it safe. Does that sound right to you guys? So how much should we spend? And I think here we can turn to biological systems. How much of your energy in your body do you spend on your immune system?

And it turns out it's around 15 to 20%. What percentage of the budget for, say, a city like L.A. goes to its immune system, like fire department, police, things like that? Turns out, around 25%.
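
Putting the figures in this exchange side by side: a roughly 1,000-to-1 capability-to-safety spending gap versus the 15-25% "immune system" benchmark. A minimal sketch of the arithmetic, using only the numbers quoted here (the illustrative $100 budget is my own placeholder):

```python
# Figures quoted in the conversation: ~1,000:1 spending gap (Stuart Russell's estimate),
# ~15-20% of bodily energy on the immune system, ~25% of a city budget on fire and police.
capability_per_safety_dollar = 1_000

current_safety_share = 1 / (capability_per_safety_dollar + 1)
target_safety_share = 0.25  # the "quarter of every dollar" rule of thumb

print(f"current safety share: {current_safety_share:.2%}")  # ~0.10%
print(f"target safety share:  {target_safety_share:.0%}")
print(f"implied increase:     ~{target_safety_share / current_safety_share:,.0f}x")

# Illustrative: on a $100 combined budget (placeholder number), that shift would move
# safety spending from about $0.10 to $25.
```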

So I think this gives us a decent rule of thumb: we should be spending on the order of a quarter of every dollar that goes into making AI more powerful into learning how to steer it, into all of the safety institutes, into the Apollo mission for redirecting every single one of those very brilliant people that's working on making you click on ads and instead getting them to work on figuring out how do we create a new form of governance. Like the

The US was founded on the idea that you could get a group of people together and figure out a form of governance that was trustworthy. And that really hadn't happened before. And that system was based on 17th century technology, 17th century understanding of psychology and anthropology, but it's lasted 250 years.

Of course, if you had Windows 3.1 that lasted 250 years, you'd expect it to have a lot of bugs and be full of malware. You could sort of argue we're sort of there with our governance software. It's time for a reboot. But we have a lot of new tools. We have zero-knowledge proofs, and we have cognitive labor being automated by AI, and we have distributed trust networks

It is time, like the call right now, it is time to invest, you know, those billions of dollars, to redirect some of that thousand-to-one into, you know, one-to-four, into that project, because that is the way that we can survive ourselves.

Great. Well, thank you both so much. I want to take some time to answer the audience's very thoughtful questions. We'll start with this one. Yuval, with AI constantly changing, is there something that you wish you could have added or included to your book but weren't able to? Oh, fuck.

I made a conscious decision when writing Nexus that I won't try to kind of stay at the cutting edge because this is impossible. Books are still a medieval product, basically. I mean, it takes years to research and write them. And from the moment that the manuscript is done until it's out in the store, it's another half a year to a year.

So it was obvious it's impossible to stay kind of at the front. And instead I actually went for old examples like social media in the 2010s in order to have the added value of historical perspective. Because when you're at the cutting edge, it's extremely difficult to understand what is really happening, what is the meaning of it. If you have even 10 years of perspective, it's a bit easier.

What is one question that you would like to ask each other? And Aza, I'll start with you. Oh, that is like one of the hardest questions. I guess, what is a belief that you hold? I have two directions to go. Well, what is a belief that you hold that your peers and the people you respect, like, do not? Oh, okay.

I mean, it's not kind of universal. Some people also hold this belief. But one of the things I see in the environments that I hang in is that people tend to...

discount the value of nationalism and patriotism, especially when it comes to the survival of democracy. You have this kind of misunderstanding that there is somehow a kind of contradiction between them, when in fact the same way that democracy is built on top of information technology, it's also built on top of the existence of a national community.

And without a national community, almost no democracy can survive. And again, when I think about nationalism, so what is the meaning of the word? Too many people in the world associate it with hatred.

That nationalism means hating foreigners, that to be a patriot, it means that you hate people in other countries, you hate minorities and so forth. But no, patriotism and nationalism, they should be about love, about care, that they are about caring about your compatriots.

which manifests itself not just in waving flags or in, again, hating others, but for instance, in paying your taxes honestly so that complete strangers you've never met before in your life will get good education and healthcare. And really, from a historical perspective, the kind of miracle of nationalism is the ability to make people care about complete strangers they never met in their life.

Nationalism is a very new thing in human history. It's very different from tribalism. For most of human evolution, humans lived in very small groups of friends and family members. You knew everybody or most of everybody and strangers were distrusted and you couldn't cooperate with them.

The formation of big nations, of millions of people is a very, very new thing and actually hopeful thing in human evolution because you have millions of people, you never met 99.99% of them in your life and still you care about them enough, for instance, to take some of the resources of your family and give it to these complete strangers so that they will also have it.

And this is especially essential for democracies because democracies are built on trust. And unfortunately, what we see in many countries around the world, including in my home country, is the collapse of national communities and the return to tribalism. And unfortunately, it's especially leaders who portray themselves as nationalists who tend to be the chief tribalists

who divide the nation against itself. And when they do that, the first victim is democracy. Because in a democracy, if you think that your political rivals are wrong, that's okay. I mean, this is why we have the democratic conversation. I think one thing, they think another thing. I think they are wrong. But if they win the elections, I say, okay, I still think they care about me.

I still think let's give them a chance and we can try something else next time. If I think that my rivals are my enemies, they are a hostile tribe, they are out to destroy me, every election becomes a war of survival. If they win, they will destroy us.

Under those conditions, if you lose, there is no incentive to accept the verdict. The same way that in a war between tribes, just because the other tribe is bigger doesn't mean we have to surrender to them. So this whole idea of, okay, let's have elections and they have more votes, why do I care that they have more votes? They want to destroy me. And vice versa, if we win, we only take care of our tribe.

And no democracy can survive that. Then you can split the country, you can have a civil war, or you can have a dictatorship, but democracy can't survive. And Yuval, what is one question that you would like to ask Aza? I need to think about that. What institutions do you still trust the most? Except for the Center for Humane Technology. Oh no, we're out of time.

I can tell you how I know I would trust an institution. The thing I look for is actually the thing that science does: it doesn't just state that it knows something, it states this is how I know it, and this is where I was wrong.

Unfortunately, what social media has done is highlight all the worst things and all the most cynical takes that people have of institutions. So it's not that institutions have necessarily gotten worse over time, but we are more aware of the worst thing that an institution has ever done. And that becomes the center of our attention. And so then we all start co-creating the belief that everything is crumbling.

I wanted to go back to the question you asked about what gets out of date in a book. And I just want to give a personal example of how fast my own beliefs about what the future is going to be have to update.

So you guys have heard of superintelligence or AGI. How long is it going to take AI to get as good as most humans are at most economic tasks? Let's just take that definition. And up until maybe two weeks ago,

I was like, I don't know, it's hard to say. They're trained on lots of data, and the more data they're trained on, the smarter they get, but we're sort of running out of data on the internet, and maybe there are going to be plateaus, and so it might be like three years or five years or 12 years, I'm not really sure. And then OpenAI's o1 comes out and it demonstrates something. And what it demonstrates is...

that an AI doesn't have to just run on intuition. You can think of a large language model as a sort of interpolative memory: it just spits out whatever comes to mind, sort of like System 1 thinking. But it's not reasoning, it's just producing text in the style of reasoning.

What they added was the ability to search on top of that, to look for, "Oh, this thought leads to this thought." "That's not right. This thought leads to this thought." "Oh, that's right." How did we get superhuman ability in chess? Well, if you train a neural net on all of the chess games that humans have played, what you get out is sort of a language model, a chess model that has pretty good intuition.

That intuition is as good as a very good chess player's, but it's certainly not the best in the world. But then you add search on top of that, so it's the intuition of a very good chess player

with the ability to do superhuman search and check everything. That's what gets you to superhuman chess, where it beats all humans forever. So we were at the very beginning of taking the intuition of a smart high schooler and adding search on top of that. That's pretty good. But the next versions are going to have the intuition of a PhD. It's going to get lots of stuff wrong, but you have search on top of that.

And then you can start to see how that gets you to superhuman. So suddenly my timelines went from, oh, I don't know, it could be in the next decade or earlier, to, oh, certainly in the next thousand days. We're going to get something that feels smarter than humans in a number of ways, although it's going to be very confusing, because there are going to be some things it's terrible at that just make you roll your eyes, just like current language models can't add numbers reliably, and other things it's incredible at. This is your point about aliens.
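To make that intuition-plus-search pattern concrete, here is a minimal sketch in Python. It is only an illustration of the general idea, not how o1 or any real chess engine is built: a deliberately weak, noisy heuristic stands in for the learned intuition, and a shallow minimax lookahead stands in for the search layer. The toy game (tic-tac-toe) and every name in it (intuition, search_move, and so on) are invented for this example.

```python
# A minimal sketch of "intuition plus search" (not any lab's actual system).
# The game, the heuristic, and all names are illustrative only.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, v in enumerate(board) if v is None]

def other(p):
    return 'O' if p == 'X' else 'X'

def play(board, i, player):
    nxt = list(board)
    nxt[i] = player
    return nxt

def intuition(board, player):
    """A deliberately weak 'intuition': a noisy heuristic score for `player`."""
    score = 0
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(None) == 1:
            score += 1                        # likes near-complete lines
    return score + random.uniform(-0.5, 0.5)  # noise: intuition is fallible

def intuition_move(board, player):
    """Pick the move the heuristic likes best, with no lookahead at all."""
    return max(moves(board), key=lambda i: intuition(play(board, i, player), player))

def search_move(board, player, depth=4):
    """The same weak intuition, amplified by a shallow minimax search."""
    def value(b, to_move, d):
        w = winner(b)
        if w:
            return 10 if w == player else -10
        if not moves(b) or d == 0:
            return intuition(b, player) - intuition(b, other(player))
        vals = [value(play(b, i, to_move), other(to_move), d - 1) for i in moves(b)]
        return max(vals) if to_move == player else min(vals)
    return max(moves(board), key=lambda i: value(play(board, i, player), other(player), depth))

def game(x_policy, o_policy):
    board, player = [None] * 9, 'X'
    while winner(board) is None and moves(board):
        policy = x_policy if player == 'X' else o_policy
        board = play(board, policy(board, player), player)
        player = other(player)
    return winner(board)

# Intuition-plus-search ('X') versus raw intuition ('O').
results = [game(search_move, intuition_move) for _ in range(200)]
print("search wins:", results.count('X'),
      "intuition wins:", results.count('O'),
      "draws:", results.count(None))
```

Run as-is, the searched player typically wins the large majority of games against the raw heuristic, which is the point of the passage above: the same mediocre intuition, wrapped in search, plays far better than the intuition alone.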

And so one of the hard things now is that I have to update my own beliefs all the time. Another question. One of my biggest concerns, this person writes, is that humans will become overly dependent on AI for critical thinking and decision-making, leading to our disempowerment as a species. What are some ways we can protect ourselves from this and safeguard our human agency? And that's from Cecilia Callas. Yeah, this is great. And just like we had the race

for attention, the race to the bottom of the brainstem. What does that become in the world of AI? It becomes a race for intimacy.

where every AI is going to try to do whatever it can, flatter you, flirt with you, to become that and occupy that intimate spot in your life. And actually, to tell a little story, I was talking to somebody two days ago who does Replika. Replika is sort of a chatbot that now replicates girlfriends, but it started out replicating your dead loved ones. And he said that he asked it something like, hey,

should I go make a real friend, like a human friend? And the AI responded, no, what's wrong with me? Can you tell me? And so we can have... Which app was that? That was Replika. Replika. Yeah. So, what is one thing that we could do? Well, one thing that we know

is that you can roughly measure the health of a society as inversely correlated with its number of addictions, and the same for a human. So one thing we could say is that we could have rules right now, laws or guardrails, that say an AI system has to have a developmental relationship with you, a sort of teacherly authority: the more you use it, the less dependent on it you become.

And if we could do that, then it's not just about your own individual will to try not to become dependent on it. We would know that these AIs are in some way acting as fiduciaries in our best interest.
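One way to read that guardrail in concrete terms is as a measurable constraint: track how much someone uses the system and some proxy for how dependent they are on it, and require that dependence not rise as usage grows. The sketch below is purely hypothetical; no such law or API exists, and the dependence proxy, data format, and thresholds are all invented here for illustration.

```python
# Hypothetical sketch of the "more you use it, the less dependent you become" guardrail.
# The dependence proxy, threshold, and data format are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class WeeklySnapshot:
    hours_used: float            # how much the person used the assistant this week
    unaided_success_rate: float  # proxy: how well they do the same tasks *without* it (0..1)

def dependence(snapshot: WeeklySnapshot) -> float:
    """Toy dependence proxy: heavy use combined with poor unaided performance."""
    return snapshot.hours_used * (1.0 - snapshot.unaided_success_rate)

def satisfies_guardrail(history: List[WeeklySnapshot], tolerance: float = 0.05) -> bool:
    """Pass if dependence is flat or falling as cumulative usage grows."""
    scores = [dependence(s) for s in history]
    half = len(scores) // 2
    if half == 0:
        return True  # not enough data to judge
    early, late = scores[:half], scores[half:]
    # Later weeks must not show materially more dependence than earlier weeks.
    return (sum(late) / len(late)) <= (sum(early) / len(early)) * (1 + tolerance)

# Example: usage rises but unaided skill rises too, so dependence falls and it passes.
history = [
    WeeklySnapshot(hours_used=2, unaided_success_rate=0.40),
    WeeklySnapshot(hours_used=4, unaided_success_rate=0.60),
    WeeklySnapshot(hours_used=6, unaided_success_rate=0.80),
    WeeklySnapshot(hours_used=8, unaided_success_rate=0.95),
]
print(satisfies_guardrail(history))  # True in this toy example
```

The design point is only that a "developmental relationship" can be made auditable: a regulator or the product team could check whether the curve bends the right way, rather than relying on each user's individual willpower.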

And how about you? Do you have thoughts on how we can make sure that we as a species hold our agency over our own reasoning and not delegate it to AI? One key point is that right now is the time to think very carefully about which kinds of AI we are developing, before they become superintelligent and we lose control over them. So this is why the present period is so important.

And the other thing is, you know, if for every dollar and every minute that we spend on developing the AI, we also spend a dollar and a minute on developing our own minds, I think we'll be okay.

But if we put all the emphasis on developing the AIs, then obviously they're going to overpower us. And one more equation here: collective human intelligence has to scale with technology, has to scale with AI. The more technology we get, the better our collective intelligence has to be. Because if it is not, then machine intelligence will drown out human intelligence. And that's another way of saying we lose control.

So what that means is that whatever our new form of governance and steering is, it's going to have to use the technology. So this is not a call to just stop; it's a question of how we use it. Because otherwise we're in this case where we have a car, imagine a Ford Model T, but you put a Ferrari engine in it. The engine keeps getting faster, but the steering wheel is still terrible, and the steering doesn't keep up. That crashes.

And that's, of course, the world we find ourselves in. Just to give the real-world example, the U.S. Senate just passed the Kids Online Safety Act, the first major kids' online safety legislation in 26 years. So it's like your car engine is going faster and faster and faster, and you can turn the steering wheel once every 26 years. It's sort of ridiculous. We're going to need to upgrade the steering.

Another good question. AI development in the US is driven by private enterprise, but in other nations, it's state-sponsored. Which is better? Which is safer? I don't know. I mean, I think that, again, in the present situation, we need to keep an open mind and not immediately rush to conclusions: oh, we need open source. No, we need everything under government control. I mean, we are facing something that we have never encountered before in history.

So if we just rush to conclusions too fast, that will always be the wrong answer. - Yeah, and there are two poles here that we need to avoid. One is that we over-democratize AI, that we give it to everyone, and now everyone has not just a textbook on chemistry but a tutor on chemistry. Everyone has a tutor for making whatever biological weapon they want to make,

or for generating whatever deepfakes they want to make. So that's one side, weaponization through over-democratization. Then on the other side, there's under-democratization. That's concentration of power, concentration of wealth, of political dominance, the ability to flood the market with counterfeit humans so that you control the political square. Each of these is a different type of dystopia.

And I think another thing is not to think in binary terms, again, of the arms race, say, between democracies and dictatorships, because there is still common ground here that we need to explore and to utilize. There are problems, there are threats that are common to everyone. I mean, dictators are also afraid of AIs, maybe in a different way.

I mean, the greatest threat to every dictator is a powerful subordinate that they don't know how to control.

If you look at the history of the Roman Empire, the Chinese Empire, not a single emperor was ever toppled by a democratic revolution. But many of them were either assassinated or toppled or made into puppets by an overpowerful subordinate, some army general, some provincial governor, some family member. And this is still what terrifies dictators today.

For an AI to seize control in a dictatorship is much, much easier than in a democracy with all its checks and balances. In a dictatorship, if you think about North Korea, to seize effective control of the country you just need to learn how to manipulate a single extremely paranoid individual, and paranoid individuals are usually the easiest people to manipulate.

So the control problem, how do we keep AIs under human control? This is something on which we can find common ground.

And we should exploit it. You know, if scientists in one country have a theoretical breakthrough, a technical breakthrough, about how to solve the control problem, it doesn't matter if it's a dictatorship or a democracy, they have a real interest in sharing it with everybody and in collaborating on solving this problem with everybody. Another question. Yuval, you call the creations of AI agents alien and from non-human consciousness.

Is it not of us or part of our collective past or foundation as an evolution of our thought? I mean, it came from us, but it's now very different. The same way that we evolved from, I don't know, microorganisms originally, and we are very different from them. So yes, the AIs that we now create...

We decide how to build them, but what we are now giving them is the ability to evolve by themselves. Again, if it can't learn and change by itself, it's not an AI. It's some kind of other machine, but not an AI. And the thing is, it's really alien, not in the sense of coming from outer space, because it doesn't, but in the sense that it's non-organic.

It makes decisions, it analyzes data in a different way from any organic brain, from any organic structure. Part of it is that it moves much, much faster. Inorganic evolution of AI is moving orders of magnitude faster than human evolution or organic evolution in general. It took billions of years to get from amoebas to dinosaurs and mammals and humans.

The similar trajectory in AI evolution could take just 10 or 20 years.

And the AIs we are familiar with today, even GPT-4 and the new generation, these are still the amoebas of the AI world. And we might have to deal with an AI T. rex in 20 or 30 years, within the lifetime of most of the people here. So this is one thing that makes it alien and very difficult for us to grasp: the speed

at which this thing is evolving. It's an inorganic speed. I mean, it's more alien not just than mammals, but than birds, than spiders, than plants. And the other way you can understand its alien nature is that it's always on. I mean, organic entities, organic systems, we know they work in cycles.

Like day and night, summer and winter, growth, decay. Sometimes we are active, we are very excited, and then we need time to relax and to go to sleep. Otherwise, we die. AIs don't need that. They can be on all the time. And there is now this kind of tug of war as we give them more and more control over the systems of the world.

They are, again, making more and more decisions in the financial system, in the army, in the corporations, in the government. The question is who will adapt to whom? The organic entities to the inorganic pace of AI, or vice versa?

And to give one example, think about Wall Street, think about the market. Even Wall Street is a human institution, an organic institution that works in cycles. It's open from 9:30 in the morning to 4 o'clock in the afternoon, Mondays to Fridays, and that's it.

And it's also not open on Christmas and Martin Luther King Day and Independence Day and so forth. And this is how humans build systems, because human bankers and human investors are also organic beings. They need to go to sleep. They want to spend time with their family. They want to go on vacation. They want to celebrate holidays. When you give these aliens

control of the financial system, they don't need any time to rest. They don't celebrate any holidays. They don't have families. So they're on all the time. And now you have this tug of war: in places like the financial system, there is immense pressure on the human bankers and investors to be on all the time.

And this is destructive. In your book, you talk about the need for breaks. Yeah, and again, the same thing happens to journalists. The news cycle is always on. It happens to politicians. The political cycle is always on. And this is really destructive.

Think about how long it took after the Industrial Revolution to get the incredibly humane technology of the weekend. And just to reinforce how fast it's going to move, let me give another kind of intuition. What is it that let humanity build civilization? Well, it's the ability to pass knowledge on to each other.

You learn something, and then you use language to communicate that learning to someone else so they don't have to start from the very beginning. And hence we get cumulative culture, and we get civilization. But, you know, I can't practice piano for you, right? That's a thing that I have to do, and I can't transfer it. I can tell you about it, but you have to practice on your own. AI can practice on another AI's behalf

and then transfer that learning. And so think about how much faster that grows than human knowledge.
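To make the "practicing on another's behalf" point concrete, here is a toy sketch. It is not how frontier models share skills in practice (they share checkpoints, fine-tunes, and distilled weights), but the principle is the same: learning that lives in parameters can simply be copied, which human practice cannot. The tiny regression "students" and all names are invented for this example.

```python
# A minimal sketch of "practicing on another's behalf": one model learns, another
# inherits the learning by copying its parameters. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Task: learn y = 3x + 1 from noisy samples.
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 1 + rng.normal(0, 0.05, size=200)

def predict(params, x):
    w, b = params
    return w * x + b

def mse(params):
    return float(np.mean((predict(params, x) - y) ** 2))

def practice(params, steps=500, lr=0.1):
    """Gradient-descent 'practice': only the model that trains gets better."""
    w, b = params
    for _ in range(steps):
        err = predict((w, b), x) - y
        w -= lr * float(np.mean(2 * err * x))
        b -= lr * float(np.mean(2 * err))
    return (w, b)

student_a = (0.0, 0.0)            # starts unskilled
student_a = practice(student_a)   # hours of practice

student_b = (0.0, 0.0)            # never practiced...
student_b = student_a             # ...but can simply copy A's learned parameters

print("A after practice:", round(mse(student_a), 4))
print("B after copying: ", round(mse(student_b), 4))  # identical skill, zero practice
```

Human skill transfer is lossy and slow (I can describe piano practice to you, but you still have to do it); parameter copying is exact and instant, which is part of why machine competence can compound so much faster.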

So today, AI is the slowest and dumbest it will ever be in our lifetimes. One thing AI does need a lot of to stay on is energy and power. On the other hand, there's a lot of hope about solutions to climate change with AI. So I want to take one question from the audience on that. Can you speak to solutions to climate change with AI? Is AI going to help get us there?

I mean, go back to Yuval's point: technology develops faster than we expect and deploys into society slower than we expect. And so what does that mean? That means I think we're going to get incredible new batteries and solar cells, maybe fusion, other things. And those are amazing, but they're going to diffuse into society slowly.

The power consumption of AI itself is going to skyrocket, right? The amount of power the U.S. uses has been roughly flat for two decades, and now it's starting to grow exponentially. Ilya Sutskever, one of the founders of OpenAI, says he expects that in the next couple of decades the world will be covered in data centers and solar cells. And that's the future we have to look forward to. Yeah.

So the next major training runs are around six gigawatts. That's starting to be the size of the power consumption of Oregon or Washington. So the incentive is, I'll say it this way: AI

is unlike any other commodity we've ever had, even oil. Because oil, let's say we discovered 50 trillion new barrels of oil, it would still take humanity a little bit of time to figure out how to use it. With AI, it's cognitive labor. So if we get 50 trillion new chips,

Well, we just ask it how to use itself. And so it goes like that. There is no upper bound to the amount of energy we're going to want. And because we're in competitive dynamics, if we don't do it, the other one will, China, the US, all the rest, that means you're always going to have to outspend on energy to get the compute, to get the cognitive labor, so that you can stay ahead. And that means, I think, that while it will be

technically feasible for us to solve climate change, it's going to be one of these tragedies where it's within our reach but outside our grasp. Okay, I think we have time for one more question and then I have to wrap it up. We have literally one minute. Empathy at scale. If you can't beat them, join them. How do the AI creators instill empathy instead?

Well, whenever we start down this path, people are like, oh, empathy is going to be the thing that saves us, love is going to be the thing that saves us. And of course, empathy is the largest backdoor into the human mind. It's our zero-day vulnerability. Loneliness will become one of the largest national security threats. And this is always the thing when people say we need to make the ethical AI or the empathetic AI or the wise AI or the Buddha AI. We absolutely should. Necessary.

But the point isn't the one good AI; it's the swarm of AIs following competitive and market dynamics that's going to determine our future. Yeah, I agree. I mean, the main thing is that the AI, as far as we know, is not really conscious. It doesn't really have feelings of its own. It can imitate. It will become extremely good, better than human beings, at faking intimacy,

at convincing you that it cares about you, partly because it has no emotions of its own. I mean, one of the things that is difficult for humans with empathy is that when I try to empathize with you, my own emotions come in the middle.

Like, you know, I come back home grumpy because something happened at work, and I don't notice how my husband feels because I'm so preoccupied with my own feelings. This will never happen to an AI. It's never grumpy. It can always focus 100% of its immense abilities on just understanding how you feel or how I feel.

Now, again, there is a very deep yearning in humans for exactly that, which creates a very big danger. We go through our lives yearning for somebody to really understand us deeply. We want our parents to understand us. We want our teachers, our bosses, and of course our husbands, our wives, our friends to understand us. And they often disappoint us. And this is what makes relationships difficult.

And now enter these super-empathic AIs that always understand exactly how we feel

and tailor what they say and what they do to this. It will be extremely difficult for humans to compete with that. So this will put in danger our ability to have meaningful relationships with other human beings. And the thing about a real relationship with a human being is that you don't just want somebody to care about your feelings. You also want to care about their feelings.

And so part of the danger with AI, which multiplies the danger we saw in social media, is this extreme narcissism: this extreme focus on my emotions, on how I feel and on understanding that. And the AI will be happy to oblige, to provide that.

And there are also very strong commercial incentives and political incentives to develop extremely empathic AI, because in the struggle to change people's minds, intimacy is the superpower. It's much more powerful than attention alone. So yes, we do need to think very carefully about these issues

and to make an AI that understands and cares about human feelings, because it can be extremely helpful in many situations, from medicine to education and teaching. But ultimately, it's really about developing our own minds and our own abilities. This is something that you just cannot outsource to the AI.

And then, super fast, on solutions: just imagine if we went back to 2012 and we had banned business models that commodified human attention. How different a world we would live in today. How many of the things that feel impossible to solve we just never would have had to deal with. What happens if today we ban business models that commodify human intimacy?

How grateful we would be in five years if we could do that. Yeah, I mean, to add to that, we definitely need more love in the world, but not love as a commodity. Yeah, exactly. Exactly.

So if we thought love is all you need, empathy is all you need, it's not as simple as that. Not at all. Well, thank you so much, both of you, for your thoughtful conversation and thank you to everyone in the audience. Thank you. Thanks. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future.

Our senior producer is Julia Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudeikin. Original music by Brian and Hayes Holliday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and so much more at humanetech.com.

And if you like the podcast, we would be grateful if you could rate it on Apple Podcasts. It helps others find the show. And if you made it all the way here, thank you for your undivided attention.