I notice that there has been very little, if any, discussion of why and how considering homeostasis is significant, even essential, for AI alignment and safety. This post aims to begin amending that situation. In it, I will treat alignment and safety as explicitly separate subjects, both of which benefit from homeostatic approaches. This text is a distillation and reorganisation of three of my older blog posts on Medium:
Making AI less dangerous: Using homeostasis-based goal structures (2017)
Project proposal: Corrigibility and interruptibility of homeostasis based agents (2018)
Diminishing returns and conjunctive goals: Mitigating Goodhart's law with common sense. Towards corrigibility and interruptibility via the golden middle way. (2018)
I will probably share more such distillations or weaves of my old writings in the future.
**Introduction**
Much of AI safety discussion revolves around the potential dangers posed by goal-driven artificial agents. In many of these discussions, the [...]
Outline:
(01:09) Introduction
(02:53) Why Utility Maximisation Is Insufficient
(04:20) Homeostasis as a More Correct and Safer Goal Architecture
(04:25) 1. Multiple Conjunctive Objectives
(05:23) 2. Task-Based Agents or Taskishness -- Do the Deed and Cool Down
(06:22) 3. Bounded Stakes: Reduced Incentive for Extremes
(06:49) 4. Natural Corrigibility and Interruptibility
(08:12) Diminishing Returns and the Golden Middle Way
(09:27) Formalising Homeostatic Goals
(11:32) Parallels with Other Ideas in Computer Science
(13:46) Open Challenges and Future Directions
(18:51) Addendum about Unbounded Objectives
(20:23) Conclusion
First published: January 12th, 2025
---
Narrated by TYPE III AUDIO.