
“We should start looking for scheming ‘in the wild’” by Marius Hobbhahn

2025/3/6

LessWrong (30+ Karma)

Shownotes

TLDR: AI models are now capable enough that we might get relevant information from monitoring for scheming in regular deployments, in both internal and external deployment settings. We propose concrete ideas for what this could look like while preserving the privacy of customers and developers.

**What do we mean by “in the wild”?** By “in the wild,” we basically mean any setting that is not intentionally created to measure something (such as evaluations). This could be any deployment setting, from using LLMs as chatbots to long-range agentic tasks. We broadly differentiate between two settings:

- Developer-internal deployment: AI developers use their AI models internally, for example, as chatbots, research assistants, synthetic data generation, etc.
- Developer-external deployment: AI chatbots (ChatGPT, Claude, Gemini, etc.) or API usage.

Since scheming is especially important in LM agent settings, we suggest prioritizing cases where LLMs are scaffolded as autonomous agents, but this is not [...]
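The full post goes on to list concrete ideas. As a rough, hypothetical sketch (not from the post), a deployment-time monitor could sample transcripts from both settings, oversample agentic trajectories, apply a privacy-preserving redaction step, and flag suspicious trajectories for human review. All names below (`Transcript`, `redact`, `judge_scheming`, the sampling rates and threshold) are illustrative assumptions, not the authors' proposal:

```python
# Hypothetical sketch of a scheming monitor over deployment traffic.
from dataclasses import dataclass
from typing import Iterable, List
import random


@dataclass
class Transcript:
    source: str          # "internal" or "external" deployment
    is_agentic: bool     # True if the LLM was scaffolded as an autonomous agent
    messages: List[str]  # conversation / action log


def redact(t: Transcript) -> Transcript:
    """Placeholder for a privacy-preserving step (e.g. stripping user identifiers)."""
    return t


def judge_scheming(t: Transcript) -> float:
    """Placeholder monitor; in practice this might be an LLM-based classifier that
    scores how much a trajectory looks like deceptive or scheming behaviour."""
    return random.random()  # stand-in score in [0, 1]


def monitor(traffic: Iterable[Transcript],
            base_sample_rate: float = 0.01,
            flag_threshold: float = 0.9) -> List[Transcript]:
    """Sample a fraction of deployment traffic, oversampling agentic trajectories,
    and flag high-scoring transcripts for human review."""
    flagged = []
    for t in traffic:
        rate = base_sample_rate * (10 if t.is_agentic else 1)  # prioritize LM agents
        if random.random() < min(rate, 1.0) and judge_scheming(redact(t)) >= flag_threshold:
            flagged.append(t)
    return flagged
```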


Outline:

(00:23) What do we mean by in the wild?

(01:20) What are we looking for?

(02:42) Why monitor for scheming in the wild?

(03:56) Concrete ideas

(06:16) Privacy concerns


First published: March 6th, 2025

Source: https://www.lesswrong.com/posts/HvWQCWQoYh4WoGZfR/we-should-start-looking-for-scheming-in-the-wild

---
    

Narrated by TYPE III AUDIO.