“Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI” by Kaj_Sotala
35:51
2025/4/17
LessWrong (Curated & Popular)
Chapters
Introduction
Reasoning Failures: What Are They?
The Sliding Puzzle Problem: A Test of Logical Thinking?
Simple Coaching Instructions: Can LLMs Follow Basic Guidance?
Why Do LLMs Keep Failing at Tic-Tac-Toe?
Repeatedly Offering Incorrect Fixes: A Pattern of Mistakes?
Various People's Simple Tests: What Did They Find?
Failures at Logic and Consistency in Fiction Writing: How Realistic Are LLMs?
Global Details Replacing Local Ones: Losing Specificity in Narratives?
Stereotyped Behaviors vs. Character-Specific Actions: What's the Difference?
Top Secret Marine Databases and Wandering Items: Strange LLM Outputs?
What's Going On Here? Understanding the Root Causes
How About Scaling? Or Reasoning Models? Exploring Potential Solutions