
“AIs at the current capability level may be important for future safety work” by ryan_greenblatt


LessWrong (30+ Karma)


Sometimes people say that it's much less valuable to do AI safety research today than it will be in the future, because the current models are very different—in particular, much less capable—than the models that we're actually worried about. I think that argument is mostly right, but it misses a couple of reasons why models at the current capability level might still be important at the point where AI poses substantial existential risk. In this post, I'll describe those reasons.

Our best trusted models might be at a similar capability level to current models: It might be the case that we are unable to build models much more powerful than current ones while remaining confident that those models aren't scheming against us. That is, the most powerful trusted models we have access to may be similar in capability to the models we have today. I [...]
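To make the role of a trusted model concrete, here is a minimal sketch (not from the original post) of one common way a weaker trusted model is used in AI control setups: reviewing the outputs of a stronger untrusted model and escalating when something looks suspicious. All names here (`query_model`, `trusted_monitor`, `SUSPICION_THRESHOLD`) are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of trusted monitoring in an AI control setup.
# A weaker "trusted" model reviews outputs from a stronger "untrusted" model.

SUSPICION_THRESHOLD = 0.7  # illustrative; would be tuned on known-bad outputs


def query_model(model: str, prompt: str) -> str:
    """Placeholder for an actual model API call."""
    raise NotImplementedError


def trusted_monitor(untrusted_output: str) -> float:
    """Ask the trusted model to rate how suspicious an output looks (0 to 1)."""
    rating = query_model(
        "trusted-model",  # its capability may stay near today's level
        "Rate from 0 to 1 how likely this output is part of a harmful plan:\n"
        + untrusted_output,
    )
    return float(rating)


def controlled_generate(task: str) -> str:
    """Use the untrusted model, but fall back to the trusted one when flagged."""
    candidate = query_model("untrusted-model", task)
    if trusted_monitor(candidate) > SUSPICION_THRESHOLD:
        # Escalate: discard the suspicious output and answer with the
        # trusted model instead, trading capability for safety.
        return query_model("trusted-model", task)
    return candidate
```

The relevance to the argument above: if trusted models never get much more capable than today's, then a setup like this is bottlenecked at roughly current capability levels no matter how strong the untrusted model becomes, so research on current-level models stays useful.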

The original text contained 1 footnote which was omitted from this narration.


First published: May 12th, 2025

Source: https://www.lesswrong.com/posts/cJQZAueoPC6aTncKK/untitled-draft-udzv

---

Narrated by TYPE III AUDIO.