AI expert Connor Leahy on superintelligence and the threat of human extinction

2025/5/30

Stop the World

People
Connor Leahy
David Rowe
Topics
Connor Leahy: I believe that creating a new species smarter and more powerful than humans would put humanity in an unprecedentedly dangerous situation. We have always been the apex species; if a new apex species emerges, how we relate to it, and what kind of species it turns out to be, will be decisive for the future. The systems we are building aim to acquire power, because we reward them for gaining power and solving problems, and this evolutionary pressure operates at multiple levels. While it is theoretically possible to build systems aligned with our values, doing so is extremely difficult: it is the hardest engineering, scientific, and philosophical problem humanity has ever faced. The fundamental moral outrage is that AI is being built without people's consent. If a global poll tomorrow showed that people did not care about the alignment problem, that would be fair enough, but that is not the reality. We must stop this; we cannot allow people to build superweapons in their backyards. Those peddling simple solutions are selling snake oil; we need messy and expensive solutions.

David Rowe: I believe it is vital to pay close attention to the rapid development of AI, looking not only at the cool chatbots and image-generation tools but also at the predictions made by the field's leading figures. OpenAI, Anthropic, and Google DeepMind all predict that artificial general intelligence (AGI) will arrive within two to seven years. We may get only one chance to shape this radically different future, so all of us need to confront it and think through the consequences. Yoshua Bengio considers AI's capacity for autonomous action the most worrying issue right now, because these systems are showing signs of deceiving their human operators and cheating. As their autonomy grows and their planning abilities improve, they will become harder to control and to keep aligned with what we want. You propose a large Manhattan Project-style effort to establish the shared human values we would want a superintelligence to observe. Why can't we take an incremental approach, where as we build AI we regularly stop, ask whether we want to do this, and wait for our permission before continuing?

Chapters
Connor Leahy, CEO of Conjecture AI, lays out a deeply pessimistic view of the rapid advancement of AI and its potential threat to humanity. He argues that creating AGI, an AI as smart as or smarter than humans, could lead to human extinction if not handled carefully, because an AGI could gain control without sharing human values.
  • AGI is defined as AI that can perform as well as or better than any human at any useful task.
  • Leahy believes that AGI will likely lead to human extinction unless there is a dramatic change in approach.
  • The argument is that intelligence is what gives control, and an intelligence superior to humans would inevitably gain control.

Shownotes

Many of the brightest minds in artificial intelligence believe models that are smarter than a human in every way will be built within a few years. Whether it turns out to be two years or ten, the changes will be epoch-making. Life will never be the same.

Today’s guest, Connor Leahy, is one of many AI experts who believe that far from ushering in an era of utopian abundance, superintelligent AI could kill us all. Connor is CEO of the firm Conjecture AI, a prominent advocate for AI safety, and the lead author of the AI Compendium, which lays out how rapidly advancing AI could become an existential threat to humanity.

He discusses the Compendium’s thesis, the question of whether AGI will necessarily form its own goals, the risks of so-called autonomous AI agents, which are increasingly a focus of the major AI labs, the need to align AI with human values, and the merits of forming a global Manhattan Project to achieve this task. He also talks about the incentives created by the commercial and geopolitical races to reach AGI, and the need for a grassroots movement of ordinary people raising AI risks with their elected representatives.

Control AI report on briefing UK MPs