
“Detect Goodhart and shut down” by Jeremy Gillen

2025/1/23

LessWrong (30+ Karma)


Shownotes

A common failure of optimizers is Edge Instantiation. An optimizer often finds a weird or extreme solution to a problem when the optimization objective is imperfectly specified. For the purposes of this post, this is basically the same phenomenon as Goodhart's Law, especially Extremal and Causal Goodhart. With advanced AI, we are worried about plans created by optimizing over predicted consequences of the plan, potentially achieving the goal in an unexpected way. In this post, I want to draw an analogy between Goodharting (in the sense of finding extreme weird solutions) and overfitting (in the ML sense of finding a weird solution that fits the training data but doesn’t generalize). I believe techniques used to address overfitting are also useful for addressing Goodharting.[1] In particular, I want to focus on detecting Goodharting. The way we detect overfitting is using a validation set of data. If a trained ML [...]
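The validation-set analogy above can be illustrated with a toy sketch. This is not code from the post; it is a hypothetical minimal example where an optimizer maximizes an imperfect proxy objective, and a held-out "validation" objective (playing the role of a validation set) flags the resulting extreme plan as Goodharting:

```python
import random

random.seed(0)

def proxy_score(x):
    # Imperfect proxy: rewards ever more extreme x (edge instantiation).
    return x

def validation_score(x):
    # Held-out check: the intended goal saturates, so extreme x adds nothing.
    return min(x, 1.0)

# The "optimizer": pick the candidate plan that maximizes the proxy.
candidates = [random.uniform(0, 10) for _ in range(100)]
best = max(candidates, key=proxy_score)

# Detect Goodharting: a large gap between proxy and held-out score
# suggests the plan exploits the proxy rather than the intended goal.
THRESHOLD = 1.0
gap = proxy_score(best) - validation_score(best)
if gap > THRESHOLD:
    print("Goodhart detected: shutting down")
else:
    print("Plan passes validation check")
```

With this setup the optimizer reliably picks an extreme candidate, the proxy/validation gap is large, and the sketch triggers the shutdown branch, mirroring the "detect and shut down" idea in the title.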


Outline:

(01:36) What's the analogue of validation sets, for goals?

(07:42) Fact-conditional goals

(10:32) The escape valve

(11:01) Semi-formalization

(13:44) Final thoughts

The original text contained 5 footnotes which were omitted from this narration.


First published: January 22nd, 2025

Source: https://www.lesswrong.com/posts/ZHFZ6tivEjznkEoby/detect-goodhart-and-shut-down

---

Narrated by TYPE III AUDIO.