
AI, society, and our world order

2024/12/9

LSE: Public lectures and events

People
Larry Kramer
Reid Hoffman
Topics
Larry Kramer: I think inventing things is the easy part; the hard part is making sure new technologies benefit humanity while avoiding their negative effects. We need the social sciences to help us make the right choices, especially with artificial intelligence, because AI is a general purpose technology that will profoundly change human society. If it is not handled well, the consequences will be unthinkable. We need the social sciences to guide how AI is designed, adopted, regulated, and used so that it benefits humanity. We must make choices now, because what we do or do not do now will shape how AI weaves its way into our lives. Reid Hoffman: AI's value is neither value-neutral nor value-laden; it is sculpted. In the early stages of AI's development we must take a value-neutral approach, grounded in scientific rigor and fact-checking, to examine the logical structures behind AI. But as AI integrates into society, the value-laden approach becomes indispensable. We need to think about how to use AI to prioritize human well-being and how to amplify our collective capabilities while minimizing harm. AI's value is shaped through iterative deployment, which requires the public to participate in the development process. AI will have major effects on society, including changes in how we work, a sharpening of social inequality, and shifts in the geopolitical landscape. We need to manage AI through iterative deployment and regulate by measuring, rather than relying on regulation alone. The international community needs to align on shared goals while addressing divergent priorities. Developed countries should, while building AI tools and models, recognize the importance of including other nations in the process. We must work together to shape AI's future so that it benefits humanity.


Key Insights

Why is AI considered a general purpose technology?

AI is considered a general purpose technology because it can be applied to virtually any field of human endeavor. Unlike tools like toasters or phones, AI can learn and adapt, making it capable of transforming industries, economies, and societies on a global scale.

What are the societal challenges posed by AI?

AI poses challenges such as job displacement, ethical dilemmas, and unintended consequences. It can disrupt industries, create new power dynamics, and raise questions about regulation, equity, and the balance between human oversight and machine autonomy.

How does AI influence global power dynamics?

AI has the potential to redefine global power dynamics by amplifying economic productivity and reshaping international relations. Nations that lead in AI development will likely dominate 21st-century geopolitics, while those that lag risk falling behind in wealth and influence.

What is the concept of 'super agency' in the context of AI?

'Super agency' refers to the empowerment of individuals and society through AI. When a critical mass of people use AI to enhance their capabilities, it creates a compounding effect that transforms industries, professions, and societal structures, leading to widespread benefits.

Why is iterative deployment important in AI development?

Iterative deployment involves releasing AI tools to the public early and refining them based on user feedback. This approach accelerates development, ensures alignment with human needs, and allows for continuous improvement, making AI more accessible, safe, and effective.

How can AI mitigate risks while maximizing benefits?

AI can mitigate risks by being developed with safeguards, ethical principles, and iterative deployment. By focusing on human well-being, equitable access, and minimizing harm, AI can amplify collective capabilities while addressing potential negative consequences.

What role does humanism play in shaping AI?

Humanism centers AI development on theories of human well-being and individual agency. By prioritizing ethical considerations, compassion, and inclusivity, AI can be designed to enhance human capabilities and foster positive societal outcomes.

How does AI challenge traditional notions of work and productivity?

AI challenges traditional work structures by potentially automating jobs while creating new roles and industries. It shifts the focus from human labor to human-machine collaboration, requiring upskilling and adaptation to navigate the evolving economic landscape.

What are the geopolitical implications of AI development?

AI development has significant geopolitical implications, as nations that lead in AI will shape global power dynamics. Divergent approaches, such as China's focus on surveillance and the West's ethical debates, highlight the need for international collaboration and inclusive governance.

Why is speed crucial in AI deployment?

Speed in AI deployment ensures that societies can shape the technology during its formative stages and reap its benefits early. Delaying deployment risks falling behind in global competition and missing opportunities to address pressing challenges like healthcare and education.

Transcript


Welcome to the LSE Events Podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Welcome, everybody. I'm Larry Kramer, President and Vice-Chancellor here at LSE, and it's my pleasure to welcome you all to this evening's very special lecture hosted by the LSE School of Public Policy.

Before introducing our speaker, let me touch really quickly on a couple of housekeeping matters. First, we'll have time after the talk for questions and answers. There are people online as well as here in Sheikh Zayed Theater, and I'll try to ensure that we take questions from both audiences.

Those of you who are here in the theater, raise your hand and someone with a mic will come to you. When you're called on, give your name and affiliation before posing one ideally short question.

Those of you joining online can also submit your questions through the Q&A feature at the bottom of your screens. Please also let us know your name and affiliation. We're particularly keen to hear from students and alumni. And lastly, for social media users, the hashtag for today's event is the ever-creative #LSEEvents. OK, the topic of our talk tonight: artificial intelligence is on everyone's minds these days, and for good reason.

Earlier this fall, actually in this very theater, I gave a talk in which I laid out five major challenges societies across the globe need to address, noting that these challenges are essential for our future, but also fundamentally problems for social sciences. So the fifth challenge was the challenge of new technologies and especially AI, because inventing things turns out to be the easy part.

Whether new inventions do good or ill, whether they get deployed at scale, whether they benefit the people they should, whether they have significant unintended consequences and whether those consequences are addressed, whether they're made available in cost-effective ways and on and on, are all matters of social science, of economics and finance and law and politics, not to mention psychology and sociology.

New technologies don't deploy themselves. They're designed by human beings with the needs of human beings in mind. They're adopted by human beings who adapt them to circumstances that were often unforeseen or overlooked by the designers.

They're regulated by firms and states run by human beings who act based on incentives created by their situations and roles. And eventually they're used by human beings for a whole range of purposes and with a wide variety of intentions. Every step in that chain is one for which social sciences are needed if we are to get it right.

And we better do so for AI because the costs of not getting it right can be dramatic. Artificial intelligence is not like a toaster or a phone. It's a general purpose technology that can be applied to virtually any field of human endeavor and that can learn and change much as people do.

Now, what that means is hard to see right now, but one thing we can say for sure is that it will profoundly and irrevocably change human society, which means we need to make choices now. Leaving AI to develop however markets shape it is a choice. Regulating it is a choice. Not choosing is a choice. Whatever we do or don't do now will shape how AI wends its way into our lives. So we need to make these choices as best we can, but we need to make them now.

Events like tonight are the reason I wanted to come to LSE, because facing these great challenges in the social sciences, understanding their causes, and helping solve the hard problems they pose is what LSE does. It's why there is no better place to have tonight's conversation about the challenge and the opportunity of AI. And there's no better speaker than our guest tonight, Reid Hoffman, to help us understand it.

I'm going to start with the conventional biographical details because I'm obliged. Born in Oakland, California, Reid graduated from Stanford in 1990 with a BS in symbolic systems. That shows born in Stanford. That's not us. Born in Stanford, California, Reid graduated from Stanford in 1990.

Where'd you go? Pali? No, no, no. Sorry. Early school, Vermont. Sorry. A little inside baseball. Anyway, graduated, then went on to Oxford with a Marshall Scholarship to earn an MS in philosophy in 1993. After graduating, he jumped into the burgeoning Silicon Valley tech community, working at Apple and Fujitsu before launching his own first startup in 1997.

That startup, SocialNet, was maybe the first ever attempt at a dating site. Unfortunately, or maybe I should say fortunately, in 1997, neither the world nor the technology were yet ready for that kind of social networking. So in 2000, Reid joined a different startup, another quirky idea, PayPal.

He was already a board member of Confinity, one of the two entities that merged to create the more famous payment service. But he now became COO, then senior VP for business development of what today we might call Silicon Valley's ur-startup. The first success not just for Reid, but for Peter Thiel and Elon Musk as well, and a bunch of others. In 2002, he co-founded LinkedIn with two former colleagues from SocialNet,

The world, it seems, now was ready for this kind of networking organization, although LinkedIn is not a dating site. It is in a manner of speaking, I suppose. Anyway, the rest is history. Reid used his earnings from LinkedIn and from the sale of PayPal to become one of Silicon Valley's most prolific and successful angel investors and venture capitalists. Companies he was instrumental in helping to start include Facebook, Airbnb, Flickr, Helion Energy, Aurora Innovation, and literally countless others.

It is for good reason that tech insiders refer to Reid, a la Warren Buffett, as the Oracle of Silicon Valley. But Reid has been much more than just an investor. He is by far the smartest Silicon Valley tech leader, and I say that without qualification. Reid has been an intellectual leader on issues like cryptocurrency, startup management, talent development, and most recently, AI.

He's written or co-authored books and articles on all of these topics, including Impromptu, which he co-authored with GPT-4, and which can actually write a bespoke chapter for each individual reader.

I read Impromptu before coming to LSE, and it was pivotal in helping me see not just how important AI was going to be, but what it could mean for enhancing human capabilities and so why we need to embrace it. Hence the book's subtitle, Amplifying Our Humanity Through AI. Okay, so that's the stuff on paper, and it's more than a little impressive, but it doesn't really capture what's special about Reid, which is the actual larger-than-life person.

Reid is an inveterate optimist whose countless interests and activities, financial, social, and political, stem from a genuine belief in the possibilities of humans together with an equally genuine concern for them. He has incredibly wide-ranging interests, including everything from politics, where he has perhaps been a bit too active for his own good,

to board games and science fiction. Reid is in fact my regular go-to source for the next great sci-fi read. He's an active and wide-ranging philanthropist, podcaster, traveler, advisor to countless others, and all-around Renaissance person. Above all, Reid is a friend to everyone. When Reid connected me with one of his close friends here, the friend and I joked about being among Reid's 10,000 close friends.

But where for most people that's a kind of snide way of saying that they actually are close to no one, in Reid's case it is the opposite. He really does have 10,000 people with whom he is close, whom he's touched and helped and with whom he has genuine relationships. Reid has the ability to give and give of himself totally in ways that create the kinds of genuine relationships that form

both quickly and enduringly. And it's that quality above all that motivates Reid's thinking and action. So keep that in mind as you listen tonight. And with that, let me hand the stage over to tonight's special guest, Reid Hoffman.

By the way, the second half of that was the best intro I've had. So I thank you for that. And it was a little bit of a – it's a funny thing. I was an undergraduate at Stanford, and on my transcript for the entire time I was there, it said, birthplace unknown, which is entertaining given that I was born in the Stanford hospital. Yeah.

So, all right, let's begin. I've got Heraclitus on my mind today, and we will return to Heraclitus even if we can never return to exactly the same point. When I first visited LSE, I was a master's student at Oxford on the possible path to becoming a philosophy professor. It was the early 90s. Around then, the first text message was sent,

Intel released its Pentium processors, which probably most of you in the room don't know what that is. And Tim Berners-Lee introduced the World Wide Web to the public. Today, here with you, I've returned to LSE as a technologist, investor, and founder. Globally, about 26 billion texts are sent daily.

NVIDIA's H100 GPUs are around 600,000 times more powerful than Pentium processors, and about 70% of the global population uses the internet. My point is not just the time capsule of technological progress and my professional development, but to put on equal footing not just how different the world was, but how differently I saw it. Heraclitus was right.

One cannot step into the same river twice, not only because the river is different, but because we ourselves are also changed. As humans, we tend to project the way in which we understand the world as static, or at least we recognize change in the world more than change in us and our understanding of the world.

And this gets to the heart of a question that people have grappled with since before human language, and that philosophers from the pre-Socratics to Popper and beyond have worked to answer.

How do we come to understand the world? It is perhaps our most fundamental and human question. As many of you know, we have empiricists that root our knowledge in our sensory experience, rationalists who derive it from reason and innate ideas, and idealists who argue it is mediated by the mind. The list goes on. Our schools of thought seem almost more varied than our thoughts themselves.

As a technologist and a humanist, here's what I'd add and emphasize. Humans often overestimate how much of our understanding of the world comes from pure reason and perception, and underestimate how much it's mediated by technology. Not only because of the role and power of technology itself, but because of who we are fundamentally.

We are more than homo sapiens. If we merely lived up to the scientific classification and just sat around thinking all day, we'd be much different creatures than we actually are. We humans are homo techne, humans as tool makers and tool users.

Technology can expand our vision quite literally. Telescopes and microscopes help us see farther and deeper than we could otherwise. It transports us, whether by airplane, book, or video call. And it extends our life, such as through medicine or gene therapy. Things that many of us have forgotten are technology—language, currency, wheels—underpin everyday life.

This is as important to who we are as to who we'll become. We evolve with and through our tools. We shape our tools, and then our tools shape us. In that exchange, our epistemology and metaphysics also evolve. Our understanding of the world updates through technology, and that phenomenon is never more true than with AI. What I found bewildering is that this moment for AI, this current AI era,

is going to be as important an evolution in our epistemology and metaphysics as any other technology we've encountered to date. Well, why? Just consider how central humans are to how an AI model is built. Our corpus of online human knowledge is ingested to build foundational models.

Reinforcement training, or the ways that models make decisions and craft outputs, is guided by interactions with us. What we prompt AI to generate is consumed by us and blended back into humanity's digital canon. This shift now challenges how we have understood the world for millennia, through discussions with humans. In essence, the process of one human saying something they believe to be true about the world

that garners the agreement of fellow humans. Now with AI, we have a super competent extension of people and application of our knowledge. Moreover, AI is not really a static tool. We continue to improve how it learns and generates. So how will we use it collectively to shape our understanding of the world?

This is a question for all of us. Because of all our technologies, these foundational AI models might be the best technological approximation of us as a collective. The good, the bad, and the ugly. The full range of us with all of our commonalities and differences, especially as the access to AI for more people continues to grow.
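To make that pipeline concrete, here is a deliberately toy, runnable sketch in Python. The function names are hypothetical stand-ins, not any lab's actual system; the point is only the three stages described above: pretraining on a human corpus, reinforcement guided by human feedback, and deployment whose outputs blend back into the shared canon.

    import random

    def pretrain(corpus):
        # stand-in for a real LLM: the "model" here is just the text it has ingested
        return list(corpus)

    def reinforce(model, ratings):
        # cartoon of reinforcement from human feedback: keep what people rated positively
        return [text for text in model if ratings.get(text, 0) > 0]

    def deploy(model):
        # the public prompts the model; its output re-enters the shared canon
        output = random.choice(model) if model else ""
        feedback = {output: 1}  # users react; here, a single thumbs-up
        return [output], feedback

    corpus = ["language is a tool", "currency is a tool", "wheels are tools"]
    ratings = {text: 1 for text in corpus}
    for _ in range(3):                  # iterative deployment as a loop
        model = reinforce(pretrain(corpus), ratings)
        outputs, feedback = deploy(model)
        corpus += outputs               # generations blend back into the corpus
        ratings.update(feedback)        # human reactions guide the next round

The loop is the point: each round of public use changes both the corpus and the feedback the next model is built from.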

This makes discussions of how AI benefits society and humanity much more interesting, but also much more complicated. And that's what I want to focus on today. What AI means for society. On this topic, there are three important questions I hope to address. First, where does the value of technology stem from?

Second, what might AI disrupt within society? And three, how might it change dynamics between societies? Let's start with the first question about a generational technology like AI and the origin of its value. This may be a good place to start because it helps us gauge whether or not we have agency in defining its value or if it's more so an innately good or bad tool. There are two primary schools of thought.

The first holds that technology is value neutral: technology isn't inherently good or ill; it's about how people use it. The second school of thought is that technology is value laden, that technology is inherently good or bad. I believe it's door three, a blend of both.

Those who believe that technology is value-neutral tend to overlook the ethical complexities inherent in technological development and deployment. Consider cigarettes or the atomic bomb. Both are products of human ingenuity, but their societal effects raise profound moral questions.

Cigarettes, despite their economic contributions, have fueled a global public health crisis, while the atomic bomb fundamentally altered the fabric of geopolitics, introducing existential risk. Let's for a second imagine if Nazi Germany had developed the atomic bomb before the United States during World War II. The shape and deployment of that technology in a fascist regime would have had catastrophic consequences, but

likely remaking the post-war global order in ways antithetical to democracy and human rights. In this context, the value-neutral stance collapses. But the value-laden approach isn't quite right either. Advocates of this position may see AI as a panacea for humanity's greatest challenges or, conversely, as a harbinger of dystopia. Yet this perspective is equally flawed.

Technology is not a raw substance with immutable, intrinsic properties of morality or utility. It can and must be shaped, refined, and integrated with human values to achieve specific outcomes. Alfred Nobel's dynamite is neither inherently constructive nor destructive. Humans use it when making tunnels and buildings and when fighting wars.

And this year, under Nobel's name, AI pioneers have won his acclaimed prize for advancements in chemistry and physics. As wielders of AI, we can determine and can reward the impact we want. So is technology value neutral or value laden?

Neither. The truth is, its value is sculpted. It has initial and inertial properties as a technology that we humans then whittle and carve. We shape it, not like clay, but like marble. It takes muscle and tension and repetition, and we must respect and acknowledge its properties while hewing it to our purpose.

And our sequence in sculpting matters too. To start, taking a value-neutral approach rooted in scientific rigor and factual verification is essential to the early stages of technological development. We must examine the logical structures underlying AI, but rigorously revise our hypotheses and approach after it is in the hands of people.

Iterative deployment, or inviting the public to participate in the development process for AI, accelerates this process. This overall approach mirrors the scientific method, systematic, objective, and methodical. However, as AI integrates in a society, the value-laden perspective becomes indispensable. We must ask, how can AI be shaped to prioritize human well-being, both now and in the future?

How can it amplify our collective capabilities while minimizing harm? For example, AI in healthcare should aim not only to diagnose diseases more accurately, but to ensure equitable access to these advancements, irrespective of socioeconomic status. (When one talks for a while, water is useful.)

The geopolitical implications of getting this sequence of value-neutral and value-laden approaches right are significant. The societies that build and deploy transformative technologies like AI wield considerable influence over the global order. This underscores the geopolitical importance of AI. It is not merely a tool, but a driver of power dynamics.

Just as the printing press upended the religious and political structures of early modern Europe, AI has the potential to reshape economies, governance, and international relations. History reminds us, however, that transitions catalyzed by transformative technologies are rarely smooth. Again, the printing press, while enabling unprecedented dissemination of knowledge, also precipitated decades of religious conflict.

AI too will bring disruption. Yet, just as the printing press ultimately became indispensable, AI can create a more interconnected and empowered global society if we manage its transition wisely. Let me start by saying that this transition will not be easy, and it's good that we are concerned about it, as it will be painful in parts and places.

Humans as a species are historically bad at transitions, but we can navigate better knowing that. Transitions are both hard and important for society each time we integrate a new technology, and transformative technology eventually becomes indispensable to humans.

This transition to our AI future will be navigated regardless of our planning and coordination, but we should be thoughtful and intelligent about it. To best navigate this disruption, we must advance the positive use cases of AI and foster smoother integration in society. This requires moving beyond binary debates about AI's inherent value and focusing instead on our agency with it.

If we harness AI correctly and collectively, society will experience super agency. That's what happens when a critical mass of individuals personally empowered by AI begin to operate at levels that compound throughout society. In other words, it's not just that some people are becoming more informed and better equipped thanks to AI.

Everyone is. Even those who rarely or never use AI directly. You may not be a doctor.

But suddenly, your doctor can diagnose seemingly unrelated symptoms with AI precision. You may not repair cars, but your mechanic's AI agent can now instantly diagnose the cause of that weird sound when your car accelerates. Even ATMs, parking meters, and vending machines are multilingual geniuses who understand and adjust to your preferences.

That's the world of superagency. Each of these enhancements and enrichments across professions, industries, and sectors don't just add up for society, they transform it. This evolution is not only inevitable, but already underway. And we have the opportunity to make this as much or more about human amplification than human replacement. We can design with superagency in mind rather than chase it from behind as it arises in society.

As the world of super agency starts to more fully emerge, we'll hear the following question asked repeatedly in an increasing pitch. What gives you the right to disrupt society? The query often carries a sharp edge of skepticism, even indignation. After all, no one voted to invite this wave of technological upheaval. Yet disruption does not spring from a vacuum.

It is rooted in foundational rights that underpin free societies. The right to build a company, the right to develop a product, to offer that product to the public and the public to engage with it.

These rights, while essential, do not create disruption on their own. Disruption occurs at the intersection of supply and demand, at the inflection point of product-market fit. A technology disrupts when it resonates with people, when they adopt it, pay for it, and incorporate it in their lives. Without demand, even the most ambitious innovation falters.

As I speak, some of you may be sensing technological determinism or the mighty wheel of capitalism, but I assure you that we have a choice.

But while the choice to engage in AI as an individual can be a personal preference, the choice to not engage in AI as a society is consequential. Societies that resist participation merely delay their integration until the tail end of adoption, losing the opportunity to sculpt the technology in its formative stages.

They will also delay the benefits that AI can bring to the health, wealth, and happiness of generations of their people. However, inevitability does not imply passivity. Heraclitus' river is ever-changing, but so are we. We can decide how we move through it.

Just as a sailor navigates by tacking according to the wind rather than relinquishing the helm, so must we steer the course with AI. If disruption is happening, the pressing question becomes, what shape will it take? Some disruptions are easier to imagine than others. We have line of sight into how AI can democratize access to critical resources at scale.

For instance, AI-powered medical assistants can bring quality healthcare to underserved or remote regions where skilled practitioners are scarce or overburdened. Similarly, AI-driven tutors can make personalized education accessible to millions, adapting lessons to individual needs in ways traditional classrooms may not. Tools like these amplify human agency, as well as address systemic inequalities.

Yet, alongside these positive transformations, AI must be safeguarded against dehumanizing applications. The same technologies that accelerate drug development can be weaponized for bioterrorism. The same technologies that provide highly personalized, customized services can be used to surveil. The same technologies that can amplify a personal brand can be used for deepfakes that manipulate public opinion and sow mistrust.

These risks cannot be eliminated, but they can be mitigated by AI itself, as well as through thoughtful oversight. Beyond these first-order effects, more profound and complex disruptions await. The transformation of work, for example. How do we make sense of a technology that may eliminate jobs and sectors, but also create new occupations and industries? History offers instructive parallels, like the loom.

The advent of the power loom transformed England. It produced cloth 40 times faster than a skilled weaver. The cost of cotton decreased by 80% over 50 years due to mechanization. Textiles, particularly cotton, became the largest industrial sector in Britain and accounted for roughly 40% of England's exports.

On a societal level, the power loom was undeniably transformational for England and for generations of people who benefited from that innovation. While productivity soared, the transition was painful for those whose livelihoods were rendered obsolete. The innovation displaced countless hand weavers, sparking the Luddite movement in 19th-century England.

Until soldiers and laws were deployed to stop them, the Luddites burned down factories, killed factory owners, and destroyed thousands of power looms.

Amidst the transformational change, the machine itself made for a convenient target. The technology, of course, paved the way. But according to author Brian Merchant and a number of historians he cites, it wasn't so much the technology or even the specific machines that these weavers were resisting. Instead, it was the factory system, its exploitative working conditions, and the regimentation and seeming loss of liberty this new way of life demanded.

So how do we address the underlying systems to make more fertile ground for innovation that clearly benefits society? How do we navigate the intermediate cost of disruption and accelerate the benefits throughout society? Let me offer three ways. The first is how we as a society view this technology. The second is how we deploy this technology. And the third is how we manage this technology.

I hope my remarks so far have already started to illustrate how we as a society should view AI. Rather than an existential threat, AI can be a GPS of the mind and usher in a new cognitive industrial revolution if we continue to sculpt it. While I do enjoy a metaphor, I'm actually very intentionally invoking GPS or global positioning system technology.

Back in the early 70s, the U.S. Department of Defense began to work on what would eventually become GPS.

The technology used radio signals from multiple satellites in medium Earth orbit to pinpoint the geographic coordinates of receivers on the ground. By the end of the decade, the U.S. Air Force had a fledgling version of the system running for military use only. Then, in 1983, the Soviet military shot down a Korean passenger jet that had flown off course into Soviet airspace.

In the hopes of averting similar catastrophes, U.S. President Reagan announced that whenever GPS became fully operational, the United States would also make it available for civilian use. Years later, President Bill Clinton fully executed on that promise, granting the public the full power and capabilities GPS had to offer.

These acts from two presidents from two different sides of the aisle paved the way for a free global public utility that has become an indispensable resource for navigating the 21st century. Today, all of us use GPS.

So much so that it works in the background in ways that we may not even be aware of. Turn-by-turn navigation is the most common way we benefit from GPS, but it's far from the only one. The precise timing information GPS provides is used to synchronize clocks in telecom networks in ways that help keep mobile phone calls clear and lag-free.

During natural disasters and other emergencies, first responders use GPS-enabled drones to locate missing people, quickly map stricken areas, and even deliver supplies to those who cannot be easily reached. Precision farming techniques that GPS enables make a variety of organic produce more affordable.
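As an editorial aside, here is a minimal numerical sketch, assuming only numpy and ignoring relativity, atmospheric delay, and satellite clock error, of the positioning idea described above: each satellite's signal yields a pseudorange (its distance to the receiver plus an unknown offset from the receiver's imperfect clock), and with four or more satellites you can solve for position and that offset by least squares.

    import numpy as np

    def solve_receiver(sat_pos, pseudoranges, iters=20):
        x = np.zeros(3)   # position guess: Earth's center
        b = 0.0           # receiver clock bias, expressed in meters
        for _ in range(iters):  # Gauss-Newton refinement
            d = np.linalg.norm(sat_pos - x, axis=1)             # geometric distances
            residual = pseudoranges - (d + b)                    # measurement misfit
            J = np.hstack([(x - sat_pos) / d[:, None],           # d(range)/d(position)
                           np.ones((len(d), 1))])                # d(range)/d(clock bias)
            step, *_ = np.linalg.lstsq(J, residual, rcond=None)
            x, b = x + step[:3], b + step[3]
        return x, b

    # toy check with synthetic satellites and a made-up receiver
    sats = np.array([[15e6, 5e6, 21e6], [-10e6, 12e6, 23e6],
                     [5e6, -18e6, 19e6], [-6e6, -7e6, 25e6]])
    true_x, true_b = np.array([1.2e6, -0.3e6, 0.2e6]), 250.0
    rho = np.linalg.norm(sats - true_x, axis=1) + true_b
    print(solve_receiver(sats, rho))   # recovers roughly true_x and true_b

This is only the geometric core; real receivers layer many corrections on top of it, and the same satellite clocks are what provide the precise timing mentioned above.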

So what does this extended detour, ironically, about GPS have to do with AI? First, it maps out a clear example of positive outcomes that can result when the government embraces a pro-technology, pro-innovation perspective and views private sector entrepreneurship as a strategic asset for achieving public good.

Second, it's also a great example of how we can effectively leverage our capacity to turn big data like geographic coordinates and timestamps into big knowledge that can be used to provide context-aware guidance to many aspects of our lives.

And third, and most importantly for democracy, it reinforces individual agency. It's true that we all carry around a tracker in our pocket, one with a mic and a camera, a device that can be used to surveil. But on the other hand, we have a tool that nearly ensures we never get lost again.

So if we can agree on this way of viewing AI, let's now dig into how we might integrate it in a society, how we can deploy AI in a way that minimizes costs and accelerates gains in society. Let's go back to two years ago when ChatGPT was released.

It was magical in both its utility and creativity. You could ask it to write an essay for you or critique an essay that you wrote. You could have it compose questions from a company where you had an upcoming interview or create a personalized epic poem for a relative's birthday. And this was just the start. For good reason, ChatGPT's capabilities got much acclaim. It was exceptional for individuals.

But for me, it was equally extraordinary in how it was deployed to the public. When it was released, ChatGPT was powerful in function, but far from perfect. In fact, for those who were keeping score, it was the fourth major model in OpenAI's GPT series. So why does this matter? OpenAI could have developed this technology behind closed doors until a small cadre of experts had decided it was performing in sufficiently effective and perfectly safe ways.

But instead, it took opportunities to invite the public to participate in the development process. This is called iterative deployment. Individual users were now at the very heart of the experience. And just as important, it gave them opportunities to have experiences that they sought or designed. This marked a critical shift in AI development and human empowerment. Iterative deployment allows for what Thomas Jefferson called the consent of the governed.

which, applied in an AI context, is about how people embrace or resist new technologies along with the new norms and laws they ultimately inspire. If the long-term goal is to integrate AI safely and productively into society instead of simply prohibiting it, then citizens must play an active and substantive role in legitimizing AI. This is how we get highly accessible, easy-to-use AI,

that explicitly works with you and for you rather than on you. But once we release AI into the world, how do we continue to manage it as a society? I believe the most effective way is through iterative deployment. But many, especially here in Europe (I know that's a stretch for the UK these days),

may instinctively reach for regulatory action. And while I'm not unconditionally opposed to government regulation, I still believe the fastest and most effective way to develop safer, more equitable, and more useful AI tools is through iterative deployment. This allows us to take smaller risks with AI to better navigate any big risks.
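As an illustrative aside on taking smaller risks to navigate big ones, here is a minimal sketch, with made-up function names, cohort sizes, and thresholds rather than any real rollout policy: release to successively larger cohorts and only widen the release while a measured harm rate stays below an agreed threshold.

    def measured_rollout(evaluate, cohort_sizes, harm_threshold=0.01):
        released_to = 0
        for size in cohort_sizes:                       # e.g. 1k -> 100k -> 10M users
            harms, total = evaluate(size)
            harm_rate = harms / total
            print(f"cohort={size:>12,d}  measured harm rate={harm_rate:.4f}")
            if harm_rate > harm_threshold:              # measure first, then decide
                print("pause, fix, and re-measure before widening the release")
                break
            released_to = size                          # this cohort is now served
        return released_to

    def fake_evaluation(size):
        # stand-in for real measurement of a "worry or bad outcome"
        return int(size * 0.003), size                  # simulated 0.3% harm rate

    measured_rollout(fake_evaluation, [1_000, 100_000, 10_000_000])

The shape of the loop, measure, then decide, then widen, is the point; it is also what "regulate first by measuring," discussed below, looks like in miniature.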

When I say we must take small risks to navigate big risks, I should mention, both as individuals and societies, we are always taking risks, whether we know it or not. It's a common misconception that we can steer clear of risk, when in reality, stopping or pausing to avoid risk is a risk, and often a more perilous risk than embracing risk in the first place.

So if we are destined to always take risks, our focus should not be on avoiding them, but navigating them. And one of the wisest ways to do so is to use small risks to negotiate big risks. Taking smaller risks more often is less of a risk and allows for iteration, discussion, and continual improvement. That's what American economist Hyman Minsky suggested, particularly with his concept of the Minsky moment.

The Minsky moment is the point in time where a sudden decline in market sentiment leads to an abrupt big market crash, marking the end of a period of economic prosperity. To overly simplify it, the Minskyian thesis is that stability creates instability and that maximizing stability in the short run leads to instability in the mid and long term.

Too many safeguards on a financial system can actually make it more brittle. (Brutal, too.) And when things break, nobody's prepared and it becomes a huge event. We can learn from the Minsky moment as we think about this era in AI. This means finding the right level of AI safeguards and regulations, not only to encourage progress, but to better fortify a system that has more and more AI in it.

We must take small risks to navigate big risks. A lot of that's through iterative deployment, making AI accessible to a diverse range of users with different values and intentions at regular intervals. But to avoid the Minsky moment in AI, I'd hope we'd collectively shift our focus toward measurement and conversation rather than just regulation. To be clear, I'm not saying no regulation.

Just that we find ways to measure twice and cut once, and cycle through more conversations on what to do rather than immediately jumping to what's the quickest way to regulate. In short, let's regulate first by measuring. When governments say we're worried about this part of AI, the first question we reach for is,

How can we measure this worry or bad outcome, versus, oh no, how can we stop or pause AI? This shift in the public sector's response and reaction to AI is critical, not only for our countries, but because nations around the world are also having this conversation. This brings us to our third and final question. How might AI change dynamics between societies?

On the global stage, AI is poised to redefine the dynamics between nations, not just through the lens of military might, a common historical analogy, but through the subtler and arguably more relevant lens of economic power.

When transformative technologies have historically reshaped societies, they have done so by amplifying productivity, altering the balance of trade, and fundamentally redefining what it means to participate in the global economy. AI will be no different, though the magnitude and complexity of its effects will be unparalleled. The military metaphor is tempting and not without precedent.

History reminds us that societies with superior weaponry often gain dominance over those without. This has led to an enduring focus on hard power or the ability to coerce or control through military means. Yet AI, while relevant to defense and security, extends far beyond this. Its true significance lies in the potential to act as an economic amplifier, redefining soft power as Joseph Nye conceptualized it,

Nations capable of integrating AI into their economies will not only enhance their global influence, but also transform their citizens into hyperproductive participants in the fast-evolving global market.

Consider how digital technologies such as the internet, mobile phones, and cloud computing have already reshaped commerce and connectivity. Even rural farmers, historically disconnected from major economic hubs, now use mobile apps to optimize their crop sales or forecast weather patterns. These incremental improvements have fundamentally altered how individuals and societies participate in the global economy.

AI will amplify this transformation exponentially. Nations that embrace this cognitive industrial revolution, analogous to the industrial revolution of the 18th and 19th centuries, will secure disproportionate wealth and stability. Just as countries that industrialized early came to dominate global trade, those that lead in AI will shape the contours of 21st century geopolitics.

However, the international response to AI is far from uniform. Global dialogues, such as those of the United Nations, reveal stark contrasts in how different regions approach this technology. In the West, the primary question is often, should we allow this? Policymakers and citizens alike grapple with ethical dilemmas, privacy concerns, and fears of overreach.

In the global South, by contrast, the plea is more urgent. Can you please include us? For many nations in this bloc, AI represents not just an opportunity, but a potential lifeline to leapfrog decades of developmental hurdles.

Meanwhile, China asks, how can we use this to enhance governance and expand global influence? Its investments in AI-driven surveillance and smart city technology exemplify a vision of AI as a tool for centralizing power and asserting dominance.

Russia's focus, though overlapping, leans heavily towards leveraging AI for geopolitical influence, often with destabilizing intent, such as the potential for cyberattacks targeting energy grids or communication systems. These divergent approaches underscore a broader reality.

The wants and needs of global societies regarding AI are becoming increasingly varied. Rogue players, whether states or non-state actors, add another level of complexity by seeking to weaponize AI for bioterrorism, hacking, or disinformation campaigns. This fragmented landscape raises a key question. How can the international community align around shared goals while addressing these divergent priorities?

History offers a partial roadmap. The post-World War II era demonstrated the value of inclusivity in global governance. Institutions like the UN and frameworks like the Marshall Plan not only aimed to rebuild, but also to onboard diverse nations in a shared vision of progress. This inclusivity fostered stability and cooperation, benefiting both dominant powers and smaller states. The same principle applies to AI.

While the U.S. and Europe understandably prioritize their own leadership in developing AI tools and models, we must also recognize the importance of including other nations in this process. Doing so not only establishes goodwill, but also mitigates the risk of creating a two-tiered global system.

where some nations benefit massively disproportionately while others fall behind. Yet, inclusivity is easier said than done. Bureaucratic processes and democratic systems, particularly in the West, often slow down decision-making. In an arena as competitive and fast-moving as AI, this can be a liability. Striking the right balance between speed and deliberation is crucial—

But if we err towards one, let it be speed. That's our best chance of shaping this new AI era for good, especially since this transformation won't slow down.

As we match it and move swiftly, we can rely upon the foundational principles of Western democracy and systems of checks and balances to wield power responsibly, even as we shape it swiftly. And in this system, the checks and balances are not just governmental, but the overlapping networks of the public and private sector, the press and nonprofits. All of us are participants and all of us can be beneficiaries.

The challenges ahead are still daunting, but so are the opportunities. Framing the dialogue around AI as technologically positive and forward-looking is essential. European nations, in particular, have an opportunity to lead by example, shaping conversations that emphasize collaboration, ethical innovation, and inclusivity. The question we ask today will define the world we build tomorrow.

What safeguards should be embedded in AI to ensure it serves humanity? How should international partnerships accelerate the benefits of AI while mitigating the risks? How can we ensure that AI development reflects a diverse array of cultural values and perspectives? As we start to answer these questions on AI, we will only get more questions, but we will also get more global progress, wealth, and opportunities.

AI is not a stagnant river. In fact, it's perhaps our fastest moving body of water. And soon, it will be the broadest and most far-reaching, with its tributaries extending throughout society. Like the Nile and Euphrates, it can be the cradle of our civilization if we continue to build with it. And so, we return to Heraclitus and stand on the riverbanks. What's more, what will we do?

Cultures have so many parables about rivers. There's a Buddhist one that highlights the need to let go of tools once their purpose is served, and a Sufi one that teaches discernment in challenges, and an African one that underscores the faith in the unseen. There are countless more from Christian, Hindu, Indigenous, and many more communities.

They can all inform us, but I think we need a modern parable, one crafted for this AI era, one that resonates with both Heraclitus and us today. I hope to contribute a line to this modern parable, but the truth is we must write it together, within our society and with other societies. There will be many drafts, many authors, and even more readers, but the spirit of it matches these lines from T.S. Eliot.

We shall not cease from exploration, and the end of all of our exploring will be to arrive where we started and to know the place for the first time.

We will draw from this and traverse this AI river many times in this coming decade. It'll serve us to remember that even as we arrive where we started with AI, we need to approach the technology and our understanding of it anew over and over again. I think Eliot knew this. Elsewhere in the same book as the previous stanza, he says, the river is within us and the sea is all about us.

We are homo techne. When we cross the river, we are deepening our understanding of technology and ourselves. And there's something more transformative and powerful ahead, the sea. Let's not cease from exploration. In technology, let's not cease from iterative deployment. Modern society depends on it. Thank you.

Now, at least I'm sure you all see what I meant when I said he was an optimist. So we'll turn to questions in a moment, although I seem to have lost my online question tracker. I don't know what happened to them. And as I said, someone will bring you a microphone, say who you are, your affiliation, ask a short question. I'm going to, though, abuse the privilege of moderation and ask the first one, but I don't think it's the question you expected from me. So there is a certain lion lies down with the lamb expectation in this whole story.

And the question I want to ask is the political economy question, in terms of how we're actually going to be able to do all of the things that you suggest. And by that, this is not about AI initially. It's really about how we broadly think about

economic regulation across the board. Our first assumption is we tend to think about people as consumers. We tend to think about the need then to have economic policy that drives down price and makes things easier for people to get. And we do that through increased productivity and efficiency.

And the difference between AI and earlier technologies like the loom: the loom could create the possibility for machines, but the loom couldn't operate machines itself. That's what people were needed for. But AI can presumably learn to do whatever new jobs it creates, particularly once robotics catches up to it, which is likely to happen in the future. So it seems to me that as long as the focus is on decreasing price

through increased productivity and efficiency, there's never going to be, or almost never, a task that can't be done cheaper, faster, and more effectively and efficiently by AI than by a human, in which case we do have the job displacement problem. Which means the only way to make this shift is that we also have to think about people as workers. And now we have to make a set of complicated tradeoffs

between how much of a cut we're going to take in decreasing price and increasing productivity in order to preserve and create jobs. And I guess the question I have is, how do you actually see that happening realistically in the world that we live in?

So a few things. First, transitions are always difficult. So I'm not saying, like, part of the whole thing is we really have to pay attention to transitions. And one of the benefits that we have with AI is that AI can help us with transitions. Like, for example, it can help with upskilling, help with finding new jobs, and it can be dynamic along with us; that's just kind of a general principle. Second, it's actually an unknown question about the speed at which

the evolution of work will be either replacement of humans or humans amplified. Some things are very clear in terms of replacement. So for example, if you have jobs such as customer service jobs that are humans following a script trying to be a robot,

the robot's better, right? That's what's going to happen with a job. Now, the jobs may also transform to where they're more strategic, right? Not just what's the cheapest way you can get a person off a phone, because a phone call costs an average of $7 to $15 for most U.S. businesses. I don't know what the parallel pricing is here, but I'm aware of U.S. businesses.

It's probably similar. And so you're trying to drive down the cost, but all of a sudden it could be like a relationship building, a brand building exercise, and there may be other things in it, and there may be how humans participate in that.

And as a contrary parallel, for example, if you say, hey, with this AI tool, this salesperson is 10x more effective. I can tell you as a business person, you don't fire a salesperson, but you'd hire more, right? Because it's like, great, that's the good increase in productivity. And so this is like, I have to figure out a better word for this, but the technical-ish term that is frequently used is centaur, which is person plus machine. And I think there will be a...

long swath of a lot of those jobs. And so the challenge is still transition, because the human will be replaced by a human with AI. And that's part of the reason for super agency and for getting involved in it.

Now, the final thing is to say, well, what happens when we potentially get to this place? And by the way, it's not certain that we're getting to this anytime soon, from my point of view. We get to this place where machines are just better all over the place. They're just cheaper all over the place. Does society, like, collapse?

Well, there's an interesting historical parallel here from Europe and other countries where, in the Middle Ages, that was the life that nobility lived, right? That basically everyone else worked, right? And so since, you know, everyone else worked, they had a life of, let's say, going to dinner parties and doing theater and so forth. So it's still possible to have a society if you had a complete understructure. Now you need to kind of

make sure that the power enables the right kinds of human agency, which is part of the reason why I'm publishing this book on super agency and a focus on human agency as a lens for technological development.

But even in that case, I'm not dystopic. Now, I tend to think that too often we just kind of draw straight lines. Okay, we're going to be there. You know, and there's a lot of discussion about, you know, universal basic income and UBI. And I think that's actually much more substantially in the future than people think. And also, but the real issue is that the intervening steps will be multiple transitionary periods.

And navigating those with as much grace as possible is the right thing to focus on.

Okay. I have, but I will, let's, you know, and where are the microphone people? By the way, just so you know, there, you know, to be at a place and love it as I do LSE is to love its flaws as well as its great qualities. So there are technical difficulties. And so we can't take questions online right now. So yeah, right over there. By the way, that's AI interfering with it, but that's fine. Yeah.

And no hallucinations. Hi, Reid, it's Graham Lovelace. My publication charts developments in generative AI from the perspective of creators. What do you say to the millions of creators around the world whose content has been gobbled up by AI? In many cases, those authors can no longer write in the subject they've spent their lifetime researching. One author the other day told me she spent 15 years writing a book. It is now available verbatim via AI.

Well, I do know that most of the integrity projects try to not make it so that you can simply get even a verbatim article, let alone a book. So when the New York Times published the stuff it was doing from its lawsuit, to get the AI model to reproduce the article, you had to put in the first half of the article.

So the presumption is you kind of had access to the article already. It wasn't possible to say, give me the article that was published on July 5th and da-da-da on this topic. And it doesn't generate it that way. Now, part of the challenge in navigating this is that we have multiple countries developing AI models. And some countries care about the rights of these creators more than others. So, for example, China doesn't care.

And so that's part of how we have to navigate this, and it's part of the reason why a global and kind of multilateral perspective is important. Now, I, for one, tend to think that there's a reason why, for example, when we define copyright, we define it not as you're only allowed to do the following things, but say these are the things that are most important. For example,

You're taking something that a content producer is producing for sale and you're now offering it where the sale no longer provides for the producer. You shouldn't do that. And the OpenAIs and Microsofts and Googles are all trying very hard not to do that. That being said, this is a thing of disruption. Now, if everybody is not actually, in fact,

starting to experiment with and use AI, that's iterative deployment, for thinking about what the next phase of this looks like, you're going to fall behind very quickly. And I think people tend to overcount how much their own piece of content is really special. And so I had a discussion about this with a bunch of Hollywood creators around Star Wars.

And I pointed out to them that you could take all of the copyrighted content about Star Wars out and AI could still generate Star Wars images, Star Wars fan fiction, and all the rest, because there's so much content out there in the general web. It's an inference and generalization machine.

So, while the rhetoric of, you stole my article, is very simple rhetoric, it actually doesn't really fully apply. But I also know that the good players are trying to be good for the new creator economy as well. So, the woman at the fourth row in the white, up at the top. Yeah.

Hi, I'm interrupting this event to tell you about another awesome LSE podcast that we think you'd enjoy. LSE IQ asks social scientists and other experts to answer one intelligent question, like why do people believe in conspiracy theories or can we afford the super rich? Come check us out. Just search for LSE IQ wherever you get your podcasts. Now back to the event.

Somebody put your hands up. We can have the next question ready soon. Go ahead and ask the question. You gave the analogy of GPS, and I do use ChatGPT a lot. For example, when Trump got elected, I asked it, why did people vote for Trump? And it gave me a very balanced answer. And I just wonder whether you feel these models,

I call them supermodels, would become more like social media in the sense that they stop being balanced. They just feed us information we want to hear because I have no idea whether the answer I read is the same as somebody else's answer. Well, the funny thing about these AI models is they reflect...

Kind of currently, and this will evolve a lot, they reflect two things. One is an underlying source of data. And the second is intense human reinforcement learning. And with most of the human reinforcement learning, the biases that it reflects

are usually those of elite university graduate students, because those tend to be the labor force that's most often deployed for the human feedback training. And so the biases you tend to get tend to reflect that. Now, they try to soften it through a training regime to have it be more cognitively diverse and inclusive, but that's the way it's historically been.

Now, that will change, because I think that people will deliberately start reflecting on what each model is like. For example, you know, I was talking to a writer friend of mine recently, and she was saying, I really like using Anthropic's Claude and I hate using Gemini because it's so complicated.

Because when I'm trying to craft difficult scenes, such as the suicide of a character, I put that into Gemini and Gemini says, oh, I can't talk about suicide. Well, I'm not allowed to, right? And yet Anthropic's Claude allows her to do so. And so you'll see this kind of diversity of these different kinds of models. And I think we'll begin to see what the design intent through the reinforcement training is. And for myself personally,

I tend to think that the best way of having a discourse on these things is kind of like what The Economist does, which is, I tell you what my general lens and approach is and how I want to reflect it, and then I try to be as factual and accurate as I can within that. And so I think you could see, say in the U.S. context, a red-bias model and a blue-bias model, and you can ask both of them. Now, one of the things that's interesting about the current models is

that you can prompt them into both answers. Because one of the things you find in experimenting, and this is part of the reason why I encourage everyone to do this, the most common prompting advice I give people is that these models take roles exceptionally well. So if you go in and say, okay, explain Trump's election to me from the viewpoint of a Midwestern red state,

and explain Trump's election to me from California's perspective, right? You'll actually get two different answers. But that's actually part of what gives us the space for this being an amplification intelligence, because it allows us to explore. And I think that's part of what we both want to learn to do with the tool and the kind of thing that we want the tool to naturally prompt us into. But ultimately, this is all a very long explanation of the fact that

we train them to be certain kinds of generative devices. And as that kind of generative device, they will reflect the biases of our training. And so we should be more clear about what we're doing. We have a question there.

Is it working? Oh, yeah. Thanks for your talk. So this thought occurred to me while I was listening to the talk, so I haven't really thought it through, and the question might not be well crafted. During your talk, you presented a lot of analogies, a lot of parallels with the past. But I think many technologies in the past didn't have analogies before them. So do you think with AI there's something new, something beyond past technology?

Something that didn't happen before, and what do you think it can add, in a positive or a negative way? So I used past analogies because part of what people forget is that when the printing press was introduced, a lot of the discourse that we see around AI was also true then.

So, for example, it will destroy society. It will reduce human agency and cognitive capabilities; we'll no longer have to remember things, and so forth. All of that was said about the printing press. It'll destroy authority and truth, because the authorities, mostly religious in those cases, would say, no, no, we who control this are the actual interpreters of, in those much more religious times, God's word, et cetera.

And those parallels to what we're doing today are very stark and clear. You could literally take some of the quotes of what people were saying then and parallel them to some of the criticisms of AI.

And it's important to remember that, because it isn't that we haven't gone through disruptive technology periods. Now, as to what's new, of course there are many things that are new. There's an unprecedented speed, right? The speed at which I build something new in AI, distribute it through digital services and the internet, and, in theory, could be at a billion people the next day. That's new.

The fact that, and this is one of the things Larry's question was gesturing at, it has agentic qualities itself, that it can make decisions and do things, that is new. The fact that it is much more difficult for people to understand, because even for the technologists today this has shifted a lot of computer science

from what you might think of as mathematical science to almost a natural science. Like, the people who built the scale models didn't actually know what they would become. And one of the things I track as I look at, to use OpenAI as an example, GPT-2, 3, 3.5, 4, 4o, o1:

we kind of discover what the new features are once we've trained it. That's new, right? And so there's a whole set of those things. And then, of course, critics, or people

who are advocating for a pause or a stop or other things, point to that and say, no, until we fully understand it, we shouldn't allow it. The problem is, that's not how human societies work. That's not how nations and industries that are competing with each other work. That's what sets the clock, and then we have to navigate within that speed. And so, yes, there are new things. Now, you might say, oh, is speed just terrible? Well, think of it this way.

Every month that we delay getting a medical assistant on every smartphone is a month that we delay billions of people having access to, what's happening with me, what's happening with my child, what's happening with my parent, what's happening with my pet, and what should I be doing?

The cost of that, in lives and suffering, is huge. And so speed in getting there is actually a massive benefit in a humanist context. It's not just a challenge to navigate; it's also that. And that's part of the reason why, in general, I say, well, look, we should navigate intelligently, but if you're going to err, err towards a little bit too much speed.

So let's go to the woman in the green sweater here and then the man in front of her. No, no, down here. Oh, right here. Oh, you already gave it away. All right. That person. Yes. Then we'll go down to the woman in the green sweater. And then somebody from here. We'll get there. Hello, Reid. Firstly, I'd like to apologize in advance. By the way, just say who you are, affiliations, whatever. My name is Syed Mohammed Waheed and I'm actually an LSE alum. I completed my master's here in MIS and Digital Innovation,

and I'm currently working as an AI research officer, which is why I found this talk of yours to be particularly interesting and insightful. And I'd like to apologize once again in advance that this probably won't be the shortest question you'll be presented with today. Try and make it short-ish. Can you use an AI to help you with that? Let's hope so. I mean, so I fully agree with the perspective of yours that

AI has an immense influence and impact on our society, and it even goes as deep as shaping our rationalities and personalities over the course of time. And I couldn't help but tie it to a paper that I read a while ago, titled Ontological Reversal, where physical reality used to guide the digital one, but now it's the other way around and the digital guides physical reality. A very simple example of this would be GPS, a technology that you certainly like to cite often:

how the blue dot guides us and we follow it, wherein the digital reality comes first and the physical comes second. However, I also happen to agree with the other statement that you put forth, wherein you view AI, or technology in general, from a value-sculpted perspective, wherein humans iteratively shape AI and can even potentially attain super agency over it.

So now that we agree to both statements, that AI shapes and guides our society and we shape and guide AI's development, which party do you think engages more in the sculpting? And how do these sculpting dynamics actually play out? By which party? What do you mean by which party? As in humans or AI? Oh, tricky. Yeah, I'll get it.

I ultimately think that it's best to keep a lens of humanism on this. And so I do think, like, one of the things that I've helped facilitate over the last

eight years has been trying to make sure that the various leading AI labs are talking to each other about what are called alignment issues, which is, you know, how does it align with theories of human well-being. And some people think that means maintaining rigid control. I don't necessarily think that, but

I do think that putting a theory of the human good at the center of what you're building is, in fact, seriously important. And that's part of the role that humanism, the humanities, and philosophy play in this. And I think that's very important. So it is a chicken and egg, because it's a loop.

But keeping that design principle in mind is, I think, one of the things that we really must do. And I think individual agency is a good way of even kind of focusing that lens that could otherwise be too diffuse, which is part of the book I'm publishing in January called Super Agency.

Hi, Reid. My name's Natalie Black. I'm a visiting professor in practice here at LSE, but my day job, I hesitate to admit, is as a regulator. Oh, no! So I wanted to build on Larry's question, the point about market power. How do you see the market evolving here? Are we going to see, in five or ten years' time, just massive big companies who have gobbled up the little guys?

And how does that tension sit with what you said on international affairs? Do you think it's going to be a requirement of foreign policy for governments to have national champions in AI in order to stay relevant? So I think there will be a limited number of what are known as frontier models, which are the largest ones. I do think that it's very useful to have, if you can manage it,

some access for any one particular country, given its general throw weight. Just as, for example, the US benefits enormously from having most of the international global internet giants. I think that is a useful thing, but it obviously won't be available to most countries.

So then the strategy, I think, is to say, well, it's not only the frontier models. It's not like The Lord of the Rings and the One Ring, right? There's actually going to be a whole set of different models trained specifically for specific businesses. There are a lot of reasons why: as these models get larger, they get exponentially more costly to compute. And so actually, as the models get larger, I don't think they're going to be putting them out in an

all-you-can-eat manner, because it's too expensive to provide them. So when I give advice on, call it a national strategy for countries, it's to try to figure out, and this also parallels entrepreneurship and other technologies, what are the places where you can have some areas of global leadership while still focusing on AI? Because there's going to be a ton of stuff around AI. There's going to be

AI for all different kinds of areas of science, not just medicine, which is an obvious one that's coming up, but also materials science and a bunch of other things. There's going to be AI in terms of, well, what are the characteristics of the agents that we're using? There are going to be, literally,

you know, millions of agents, not just instantiations of agents, but agents doing different kinds of things in a marketplace: where does it play in the market, and how does that play out? And the thing that a lot of my speeches and content aimed at regulators are trying to do is to say, look, the impulse to say, we want to contain all of it, we want to study it before we allow it,

will move the things that are in your purview out of the global competitive clock, and then you won't be shaping it. So the focus should be not just on trying to avoid what could possibly go wrong, but on what could possibly go right, and on understanding that there's a speed clock to it.

And, by the way, just to show where I'm putting my money where my mouth is, I'm investing in a whole bunch of AI companies that are not Microsoft, Google, OpenAI, et cetera, because I think there will be great opportunity there. And so I think that it is, in fact, possible to do. I just think it's now much harder for a startup to take the, well, first I must build a $10 billion computer, approach. That's very hard for startups.

Harry Smith, Institute of Philosophy, University of London. Hi, Reid. It's a very attractive picture, the idea of iterative deployment and each of the individual agents enhancing their agency, the super agency idea. And they're scaffolding us in the development of our intelligence and agency, and we're scaffolding them.

But I wonder whether there's a question about responsibility, to really help people understand what they're engaging with, because some people will think of them as tools and other people will think of them as colleagues, especially as AI assistants get better at mimicking and responding to us and eliciting responses from us. And whereas

when we used to play with ELIZA or any of these things, within the frame they're very convincing, but we can step outside the frame and say, okay, that's a setup. The bigger frame here is that a lot of people believe they really are intelligent, they really are colleagues. And I think there's a kind of responsibility issue there, to make sure that the people who are in the iterative deployments

are perfectly aware of the kind of wider context: who's providing these, what they're trying to do, how they're designed. Don't you think? Broadly, I agree. So when I co-founded Inflection AI, the kind of thing we were doing in the human training, to this very first question, was to try to make it very clear in the interaction

that it is a kind of catalyst companion, but not a human being. So, for example, if you go to Inflection's Pi, personal intelligence, and you say, you're my best friend, it says, no, no, no, I'm your agent. Let's talk about your friends. Have you seen any of your friends recently? And so on, to try to do that. And that's, again, a human design principle. I don't think you need to stick warning labels on it as much as it's about what the design principles are.

And, by the way, one of the things that's interesting when you get to AI, and part of why I am humanistically optimistic, is that I know from having done this that you can train AIs to have more compassionate conversations, on average, than your average human being. So that's a good thing for helping us elevate our compassion. Right? Yeah.

I would be willing to take it back. Right. So anyway, obviously we can go very philosophically deep on that. So over here, then over here, then I'll get you. So go ahead. Hi, I'm Simon Levine, venture capitalist and one of the 10,000 that Larry mentioned at the start. My question is around speed. And you mentioned China as

perhaps the proxy for the fastest-moving, rule-breaking kind of authority. If you were to pick one or two important spaces, like

bioengineering or privacy, you know, China's going to be in the vanguard of that, but some of what they might be doing might be challenging for our values. How do you think about navigating those kinds of trade-offs? Like, is it going to be a good thing if AI is able to, through IVF,

create a super-species of kids, and China's at the front of that? Do we want to keep up with them? So, part of the thing is, I tend to think that humanity as a species tends to adopt the compelling things that it first finds. That's one of the reasons why speed matters.

So I think getting AI deployed at a global scale with the kind of values that we hope it embodies, that speed is one of those coefficients. That's one of the reasons why I say speed. That doesn't mean that we should necessarily endorse a

bad genetics program. Now, part of the thing is, genetics is complicated, because you say, well, let's get rid of Huntington's. That seems like a good thing, et cetera. So there's this whole philosophically slippery slope where you don't want to end up with everyone having blond hair and blue eyes, right, as a, you know, eugenics gesture.

And so those are questions we'll have to address. But I think that's part of the reason why being multilateral is, I think, so important. But I do think, on a softer topic than that one, though that one's a very real one, that China is not going to care about

IP and copyright in the models they're going to deploy, and they're going to try to get them to global scale. So if you want those kinds of creator content protection mechanisms, you want to get there first with the kinds of things that people want to use, as an instance. Now, on, you know, what it means to be programming genetics,

I think it's going to be extremely important to do the work and then embody the values, and to have the discussions about what values we should have. I would settle for hair, though. Okay. Let's talk afterwards.

Hi, thanks very much. Sherry Coutu here, alum from '86, but also an entrepreneur. Reid, I hear, again, about super agency and, you know, the theme that we should be embracing this with speed, and that if we don't, there will be implications that we feel really uncomfortable with. Right.

And there are different stakeholders in our economy that need to pull different levers and can take different actions. You mentioned governments, and they have some levers they can pull. There are businesses, many of them more powerful than governments, that can pull those levers and accelerate

the use of AI and the absorption of AI. What should we be most excited about? Or which lever are you most optimistic about being pulled first? Is it governments acting so that we can bring about this future, or is it businesses? I know we all need to move, but what fills you with the greatest optimism? And if there are messages that we could get people or stakeholders to adopt,

what's going to move us forward faster? What are we sensitive to that will make the biggest difference? So the first thing for governments is to not fuck it up, which is very easy for governments to do, mostly in a well-meaning and well-intentioned sort of way. But you always have to try to calibrate your own sense of knowledge and self-importance on these things.

The next thing is that most people don't recognize that this is intrinsically, and this is true of most general purpose technologies as they get to scale in societies, going to be developed by businesses. So it's by working with businesses that you make it happen, and not by thinking of businesses as the classic, and I am impressed that I haven't gotten the classic academic question of,

aren't these corporations sociopathic profit machines that are going to rule our destinies and end the world, which is the usual academic navigation I have to do. And I'm glad that I haven't gotten that question yet. I have a few people I can call on. Yes. If you really need it, they're out here. No, no, no. I'm fine. I answer those questions relatively regularly. Okay.

But it's, how do we nudge and shape this through consent of the governed and interlocking networks that include government governance, but also customers and employees and communities and the press and all the rest, as a way of navigating it. Now, there are ways we can accelerate it. For example, there are two key things

that are the natural advantages which, as China really pulls into this race, Western

companies face as a challenge. One, to Simon's question, is, for example, access to the right forms of data, because data is what powers this. And if you don't create data commons, well, all of the genetics and medical information will be dumped on the Chinese companies from Chinese sources, maybe even including data that they've hacked from Western sources, in order to build those models. And once the products are there,

then that is what it is. So availability of data to use is important. And then energy, right? Because one of the things that autocratic regimes are better at than democratic ones is bulldozing through: literally, here's where we're going to get a couple of terawatts. And that's going to be really critical within a two- to five-year timeframe for how this plays out. And that game is now, right. So...

Okay, so, just before we finish...

There are so many hands up. I really appreciate that. We're short on time, so we're going to take the last two questions from the two people who have the microphones, and we'll do the two at once, because otherwise we're going to run out of time. So go ahead, boom, boom. Hi. Apologies that I can't get to everyone, and Reid apologizes even more. Yes, I do. Hi, Reid. Ajit Mehta here, LSE alumnus. So my question is, what do you think will be the next new technology that will impact humanity after AI? That's a good one. Both questions. Yes. David?

Hi Reid, David Rowan. As you say, humans are not great at transitions, especially when they fear their job,

their livelihood, is at stake. So whose moral responsibility is it to ensure that they have both the mindset and the skills for the higher-level tasks, when government is kind of short-termist and heavily indebted, the tech companies are ruthlessly individualistic, and the universities, with respect, still mainly train people in their late teens and early 20s?

Precisely why I said call on David. And for those who don't know, David used to be the editor-in-chief of Wired UK. So, to the first question, I don't really know. I do think that, as per the earlier question, the digital reinforcement into physical reality is on a very fast clock. So with AI, possibly quantum computing,

we're going to invent new medicines, we're going to invent new materials, and so that clock is going. So I tend to stay, as an investor and in my own focus, very clearly on that digital transformation of society.

You know, people pitch me on fusion power. I've invested some in that. I'm hopeful. But I don't actually do it as an investor; I do it as a philanthropist, because clean energy is the way to solve all this stuff. I tend to think that everyone who is not focusing on plentiful clean energy doesn't understand how the climate dynamics really work. You'll get

a delay of bad consequences from various treaties, but the only way you're going to solve it is that. So I invest in that, but I'm not a specialist in it; I do it philanthropically. So that's what I'd be hopeful for. And David, to your question, look, I think part of the reason why I'm advocating, in Super Agency, for iterative deployment and an individual focus

is to make sure that we have the interlocking conversations to solve this. Because I don't think it's a matter of getting a blue-ribbon panel of, you know, experts to solve this problem. I actually think it's very hard for us as human beings to imagine what future human beings look like, right? So if you told people, before

they experienced what mobile phones were, we're going to put a GPS tracker in your pocket that's always listening to you, people would have been terrified.

And now they're like, oh, yeah, yeah, yeah. Everyone's more distressed about misplacing their phone than they are their wallet, right, in terms of how this operates. And we then begin to go, okay, this is what's important: making sure that we're having the right experiences as individuals and societies to make it happen, and having that discourse in interlocking networks, not just company to government, right, but

around employees, part of the role of the press, of universities. And I think that's the key thing that needs to happen. So when, you know,

I was asked to come give this lecture, that's the reason I was willing to do it, because I think we need to have a broad discourse on it. So it's, in a sense, all through conversation. But the one thing I would also just say is that it's too easy for the press to frequently describe companies as only these,

you know, sociopathic profit machines. And yet when you look at many of them, they don't operate only that way. They operate that way too, but not only that way. Partially because the various people who are in positions of authority and power, of whom there are a lot at these companies, go home to their families and their friends and their communities

and want to describe why they are being heroes in what they are doing, right? And that is part of how we get into this process, and the reason why the discourse is so important, and the reason why, in the Super Agency thesis, we've got to get it into people's hands. People have got to play with it, and then that guides our intuitions and guides our conversations.

Okay, I'm going to do one more question because there's somebody back there who has been really wanting to ask a question and I just got to give him the mic. Yeah, yeah, you. Good on you.

Thank you so much. My name is Cesar, I'm a co-founder at Lobby, we're a flatmate-finding app. I wanted to ask you a question about regulation in AI. I want to ask, by regulating AI and not pursuing AI in some dangerous spaces, does that actually create an opportunity for other players, who are not as motivated by morality, to pursue AI in those

kinds of ways? And another question I want to ask: if they do, then should we also pursue it in that way, as a kind of deterrent? The same situation as happens with nukes, where two or more countries developing nukes actually stopped countries from using them.

So I think it's important to get there: if you deploy AI things that millions and billions of people use, then once they start using them, that will create a gravity well around them. That's one of the reasons for speed. And so those sorts of things I think are important to do. Now, I have been an advocate for being more careful about releasing

what are actually, technically, open-weight models. People call it open source, but open source, and I was on the board of Mozilla for 11 years and we've open-sourced many projects from LinkedIn, et cetera, I think is very good. There are ways that it adds a lot of good dialogue and safety. Open weights, though,

don't have all those benefits. They do empower startup founders and academics, but they also empower criminals, terrorists, and rogue states. And you have to think, on the open-weight models,

on this particular model, is it good to have that equal empowering? Now, part of it, which gets back to your question, is that there will be a number of players, not just the Chinese but others, that will be providing open-weight models. And so you have that kind of race condition where you're trying to

orient it the right way, but you also have to fill the void somewhat. And that's the balance you have to hit. But you have to do it with best-judgment risk assessment. And that doesn't mean, for example, that just because someone might release a model that's good for bioterrorism as open weights, we should get there first, because that's not something we're trying to empower, right? So that's the reason why it gets this kind of very detailed sort of answer.

Thank you. So, two last quick things. One is, we've been asked by security, when we finish, which we will do in a minute, if you guys could just hang in your seats until Reid leaves. I'm going with him, but it's more him than me. Nobody cares about me. That would be great. I care about you. Thank you. Second, I'll just say, as a way of thank you, you know, one of the things I love,

and it's always struck me about the academy, is that the attitudes you get in so much of the classes and teaching kind of run the gamut from blasé cynicism to anger at how bad things are. And so it's really kind of inspiring to have somebody come in and offer such a wonderful, forward-looking, hopeful vision. And with that, on behalf of everybody, let us just say thank you to Reid for taking the time. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE Events soon.