“Knocking Down My AI Optimist Strawman” by tailcalled
11:16
2025/2/11
LessWrong (30+ Karma)
AI Chapters
Is AI Progress Leading to Human Supremacy?
Are We Worried About AI Misalignment?
The Root Problem: AGI Development Without Human Help?
Can We Control AI by Asking It to Do Good Things?
Is Inference-Time Scaling Changing the Game?
What If Current Safety Methods Fail?
Are AI X-Risk Worries Dangerous?
The Danger of Alignment Theorists
Are Theorists Justifying Violence?