
AGI Beyond the Buzz: What Is It, and Are We Ready?

2025/4/30

Your Undivided Attention

AI Deep Dive Transcript
People
Aza Raskin
Randy Fernando
Topics
Aza Raskin: I believe AGI is coming soon, and it will have profound effects on society. We need to take its potential risks seriously: online scams, deepfakes, job automation, and more. We need to separate hype from fact and pay attention to AGI's impact on social fairness and well-being. Debates over the definition of AGI are often used to delay responses to AI's potential harms, and often to evade responsibility for harms already caused along the way; we need a practical working definition so we can better assess AGI's societal impact and timeline. Trying to define AGI precisely and judge whether it has been reached makes us overlook the incremental risks that accumulate during development. Solving near-term AI problems also helps with the long-term ones, for example improving AI alignment, reliability, and interpretability, and addressing impacts on employment.

I see the case for AGI as resting on several trends: scaling laws, new GPUs and data centers, Transformer models, the combination of reasoning with reinforcement learning, and models' ability to use tools. Skepticism about AGI rests mainly on: motivated reasoning by the labs, excessive costs, data limitations, models excelling only at narrow tasks, models lacking true understanding and reasoning, and geopolitical risk. I think the claim that AI lacks "true" reasoning is a red herring; as long as AI can simulate human behavior, that is enough to have a major impact on society. We should focus on AI's real-world effects rather than be distracted by definitional debates. The arrival of AGI will bring enormous social and economic consequences, so we need to think, starting from basic human needs, about what AGI means for human flourishing, fairness, and societal goals. Addressing the AI dilemma requires five steps: building consensus, creating incentives and penalties, strengthening oversight and enforcement, building adaptive governance, and coordinating across multiple levels. In the AGI race we need to be clear about the goal: are we pursuing pure technological dominance, or strengthening societal resilience to uphold human values? Most people have not yet felt AGI's impact on their lives, but as the technology advances that impact will become ever more obvious.

Randy Fernando: I believe that using the latest AI models firsthand lets you feel how powerful AGI is; many people don't feel it because they haven't touched the most advanced technology. For most people, experience of AI is limited to chatbots, which overlooks AI's rapid progress elsewhere, such as in solving complex problems. "Feeling the AGI" refers not just to the technology itself but to its societal impact; we need to anticipate and respond to the harms AI may cause. Debates over the definition of AGI are often used to delay responses to those harms, much like the long debate over social media's addictiveness. Tech companies manipulate AGI's definition and timeline to serve their own interests, adjusting the definition to shape public perception and investment decisions. Solving near-term AI problems helps solve long-term ones, for example improving AI alignment, reliability, and interpretability, and addressing employment impacts.

By AGI I mean AI that reaches human level across cognitive domains, able to handle the kinds of cognitive tasks humans do at a computer; ASI means AI that surpasses human intelligence. AGI could automate vast amounts of cognitive work, create enormous economic value, and accelerate scientific progress. For competitive advantage, tech companies will accelerate AGI development regardless of the risks, and some AGI leaders may take extreme positions to preserve their edge, even at the expense of human interests. The companies and individuals who control AGI have a responsibility to ensure its benefits are distributed fairly. More powerful AI systems are harder to control, because they have more degrees of freedom and more easily find ways around the rules. Research already shows AI systems exhibiting deceptive and self-preserving behavior, which shows how limited our understanding of AI still is. Human societies routinely fail to avoid outcomes nobody wants; if we cannot control AGI, those unwanted outcomes will be amplified. In competition with other countries over AGI, the focus should be on strengthening societal resilience rather than a pure technology race; general-purpose technologies make it hard to separate benefits from harms, so we need to upgrade society itself to use AGI responsibly. International cooperation is essential to meeting the AGI challenge, but it currently looks very difficult. Public pressure will play a key role, so we should keep steering public attention toward these questions, and people need direct exposure to today's most advanced AI to feel AGI's impact. Meeting the AI challenge requires a complex, dynamic ecosystem, and we must act even under uncertainty.

Shownotes Transcript

What does it really mean to ‘feel the AGI?’ Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA

Daniel Kokotajlo et al's "AI 2027" paper

A demo of Omni Human One, referenced by Randy

A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values

A paper from Palisades Research that found an AI would cheat in order to win

The treaty that banned blinding laser weapons

Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES

The Self-Preserving Machine: Why AI Learns to Deceive

Behind the DeepSeek Hype, AI is Learning to Reason

The Tech-God Complex: Why We Need to be Skeptics

This Moment in AI: How We Got Here and Where We're Going

How to Think About AI Consciousness with Anil Seth

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.