Good Robot #1: The Magic Intelligence in the Sky

2025/3/12

Unexplainable

People
Eliezer Yudkowsky
Julia Longoria
Kelsey Piper
Noam Hassenfeld
Sam Altman
Leads OpenAI's pursuit of AGI and superintelligence, redefining the path of AI development and driving the commercialization and application of AI technology.
Topics
Noam Hassenfeld: We're launching a series of episodes exploring the stories behind artificial intelligence, the beliefs people hold about it, and how those stories are shaping AI's future. We'll look at AI's potential risks, such as a superintelligent AI given a simple goal that ends in catastrophe, perhaps even destroying the entire galaxy.

Julia Longoria: Thought experiments from the rationalist community, like the "paperclip maximizer," vividly capture the classic fear about artificial intelligence: how to control an AI more capable than we are. Rationalists worry that as we build ever more powerful AI systems, we may lose control of them, with catastrophic consequences. The threat of AI doom is everywhere, from billionaire Elon Musk to the United Nations, all treating AI as an existential risk. This series explores how people came to believe in AI doom, and whether we should be afraid.

Kelsey Piper: At 15, I found the rationalist community through the fan fiction "Harry Potter and the Methods of Rationality" and started thinking about artificial intelligence. Eliezer Yudkowsky argues that we will build AI smarter than humans, that it will change the world, but that getting it right is extremely hard and things are very likely to go wrong.

Eliezer Yudkowsky: The world is badly mishandling the problem of machine superintelligence; if anyone builds a superintelligent AI under the current regime, everyone dies.

Sam Altman: Superintelligent AI is coming, and we need to think about how to deploy it, govern it, and make it safe so that it benefits humanity.

Shownotes

Before AI became a mainstream obsession, one thinker sounded the alarm about its catastrophic potential. So why are so many billionaires and tech leaders worried about… paper clips?

This is the first episode of our new four-part series about the stories shaping the future of AI.

Good Robot was made in partnership with Vox’s Future Perfect team. Episodes will be released on Wednesdays and Saturdays over the next two weeks.

For show transcripts, go to vox.com/unxtranscripts

For more, go to vox.com/unexplainable

And please email us! [email protected]

We read every email.

Support Unexplainable by becoming a Vox Member today: vox.com/members

Learn more about your ad choices. Visit podcastchoices.com/adchoices