
Yuval Noah Harari on trust, the dangers of AI, power, and revolutions

2025/6/4

Possible

People
Aria Finger
Reid Hoffman
Yuval Noah Harari
Israeli historian and author, known for his incisive analysis of human history, AI, and the future of society.
Topics
Yuval Noah Harari: I think the potential of artificial intelligence is more significant than that of writing, because it could lead to the rise of a new species that might even replace Homo sapiens. Of course, this also brings a series of challenges. Many people worry that AI will cause mass unemployment and that people will need to keep learning new skills to adapt. More importantly, AI could be used for manipulation and control, leading to a resurgence of totalitarianism. We therefore need to build self-correcting mechanisms to ensure that AI develops in line with human interests. Still, I remain hopeful about humanity's future. I believe humans are capable of building a better society, and AI can be a tool for achieving that goal. Reid Hoffman: I agree that AI has enormous potential, but it also carries risks. I think all technology is inherently centralizing, but we can choose to make it decentralizing. The industrial revolution provided a potential foundation for democracy and a middle-class society because it effectively enabled widespread education, wealth, and prosperity. We need to learn from history and set guidelines to ensure that AI's development advances human well-being. Using technology, and AI in particular, to speed up our self-correcting mechanisms is a key strategy. Aria Finger: I believe technology itself is neutral; what matters is who uses it and whether it is used to amplify or diminish our humanity. We need to make sure AI development serves humanity, and not the other way around.


Chapters
Yuval Noah Harari discusses the importance of building trust in AI and the need for a trustworthy philosophy as the foundation for a good society and benevolent AIs. He argues that a cynical, power-hungry worldview hinders the creation of trustworthy AI.
  • Untrustworthy AI tycoons cannot produce trustworthy AI.
  • A trustworthy philosophy is crucial for a good society and benevolent AIs.
  • Observing oneself and acknowledging others' interest in truth are key to building trust.

Shownotes

What will it take to create AI that is as trustworthy as, if not more trustworthy than, humans?

This week, Reid and Aria sit down with Yuval Noah Harari, historian, philosopher, and best-selling author of several books including Nexus, Sapiens, and Homo Deus. When it comes to their outlook on AI, Yuval, Reid, and Aria agree on the importance of building both human trust in AI and AI that is genuinely truth-seeking, but they differ on how achievable that is.

Together, they dig into their diverging opinions on the outcomes of the AI revolution, global cooperation, and how AI will learn from humans. They also discuss the differences between intelligence and consciousness, and whether conscious AI is a goal worth pursuing. 

Yuval turns to history to ground his warnings about AI. Even though he is cautious about technology, he is critical of cynicism. Yuval shares his philosophy of human compassion as a guiding principle that can help us steer away from collapse and, ultimately, build a better AI future.

For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/

Topics:

3:38 - Hellos and intros

3:58 - Questions for the Buddha

5:48 - Yuval’s relationship with technology

8:57 - Technologies that help humans share stories and myths

10:37 - Is AI the most significant invention since writing?

13:02 - How AI will transform society

20:12 - Guidance for a successful AI revolution

24:24 - Using AI to support humanity's self-correcting mechanisms

26:13 - Midroll

26:45 - How to build self-correcting mechanisms for a better future

31:28 - Humans as parents of AI

36:33 - What political leaders need to do to create a positive AI future

39:11 - Artificial intelligence vs. artificial consciousness

42:35 - AI as a tool for rebuilding trust

44:50 - Rapid-fire Questions

Select mentions:

History of the Franks by Gregory of Tours

Heartstopper

Possible is an award-winning podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. Hosted by Reid Hoffman and Aria Finger, each episode features an interview with an ambitious builder or deep thinker on a topic, from art to geopolitics and from healthcare to education. Each episode seeks to enhance and advance our discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.