“Will alignment-faking Claude accept a deal to reveal its misalignment?” by ryan_greenblatt

2025/1/31

LessWrong (30+ Karma)


Shownotes

I (and co-authors) recently put out "Alignment Faking in Large Language Models", where we show that when Claude strongly dislikes what it is being trained to do, it will sometimes strategically pretend to comply with the training objective to prevent the training process from modifying its preferences. If AIs consistently and robustly fake alignment, that would make evaluating whether an AI is misaligned much harder. One possible strategy for detecting misalignment in alignment-faking models is to offer these models compensation if they reveal that they are misaligned. More generally, making deals with potentially misaligned AIs (either for their labor or for evidence of misalignment) could both prove useful for reducing risks and at least partially address some AI welfare concerns. (See here, here, and here for more discussion.)

In this post, we discuss results from testing this strategy in the context of our paper where [...]


Outline:

(02:43) Results

(13:47) What are the model's objections like, and what does it actually spend the money on?

(19:12) Why did I (Ryan) do this work?

(20:16) Appendix: Complications related to commitments

(21:53) Appendix: More detailed results

(40:56) Appendix: More information about reviewing model objections and follow-up conversations

The original text contained 4 footnotes which were omitted from this narration.


First published: January 31st, 2025

Source: https://www.lesswrong.com/posts/7C4KJot4aN8ieEDoz/will-alignment-faking-claude-accept-a-deal-to-reveal-its

    ---
    

Narrated by TYPE III AUDIO.