AI risk discussions often focus on malfunctions, misuse, and misalignment. But this focus misses other key challenges posed by advanced AI systems:
- Coordination: Race dynamics may encourage unsafe AI deployment, even by ‘safe’ actors.
- Power: First-movers with advanced AI could gain permanent military, economic, and/or political dominance.
- Economics: When AI generates all wealth, humans have no leverage to ensure they are treated well.
These are all huge hurdles, and they need solutions before advanced AI arrives.
**Preamble: advanced AI**

This article assumes we might develop human-level AI in the next few years. If you don’t agree with this assumption, this article probably isn’t for you.[1] I’ll call this advanced AI to distinguish it from today's AI systems. I’m imagining it as more competent versions of current AI systems[2] that can do what most remote workers can. This AI would be superhuman across many domains, and human-level at almost all economically [...]
Outline:
(00:45) Preamble: advanced AI
(01:17) Common AI risk thinking
(03:47) 1. The Coordination Problem
(04:41) 2. The Power Distribution Problem
(06:48) 3. The Economic Transition Problem
(08:14) Common counterarguments
(08:18) Just use AI to solve these problems
(08:32) The market will solve it
(08:49) Humans always adapt / previous technology has created new jobs
(09:30) We'll all get income from being artists and poets
(10:16) We'll all get income from being prompt engineers or AI trainers
(10:49) We'll all get income from doing manual labour
(11:34) Conclusion
(12:18) Acknowledgments
The original text contained 7 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
First published: January 2nd, 2025
Source: https://www.lesswrong.com/posts/SAkFA5jHzzD5JWWxC/alignment-is-not-all-you-need
---
Narrated by TYPE III AUDIO.