“Why Modelling Multi-Objective Homeostasis Is Necessary for AI Alignment (And How It Helps With AI Safety as Well)” by Roland Pihlakas
22:04
2025-01-13
LessWrong (30+ Karma)
Chapters
Why is Utility Maximisation Insufficient for AI Safety?
How Does Homeostasis Offer a Safer Goal Architecture?
What Are the Key Components of Homeostatic Goals?
How Do Diminishing Returns and the Golden Middle Way Enhance AI Safety?
Can Homeostatic Goals Be Formalised?
What Parallels Exist Between Homeostatic Goals and Other Computer Science Concepts?
What Are the Open Challenges and Future Directions in Homeostatic AI?
Why Are Unbounded Objectives Problematic in AI?
What Are the Key Takeaways and Conclusions?
Shownotes
Transcript
No transcript is available for this episode yet.