
“I underestimated safety research speedups from safe AI” by Dan Braun

2025/6/30

LessWrong (30+ Karma)


A year or so ago, I thought that 10x speedups in safety research would require AIs so capable that takeover risks would be very high. 2-3x gains seemed plausible; 10x seemed unlikely. I no longer think this.

What changed? I continue to think that AI x-risk research is predominantly bottlenecked on good ideas, but I suffered from a failure of imagination about the speedups that could be gained from AIs that are unable to produce great high-level ideas. I’ve realised that humans trying to get empirical feedback on their ideas waste a huge number of thought cycles on tasks that could be done by just moderately capable AI agents.

I don’t expect this to be an unpopular position, but I thought it might be useful to share some details of how I see this speedup happening in my current research.

**Tooling alone could 3-5x progress**

If we stopped frontier [...]


Outline:

(00:59) Tooling alone could 3-5x progress

(03:52) Going from 3-5x to 10x speedup

(04:25) My takeaways

The original text contained 1 footnote which was omitted from this narration.


First published: June 29th, 2025

Source: https://www.lesswrong.com/posts/p28GHSYskzsGKvABH/i-underestimated-safety-research-speedups-from-safe-ai


Narrated by TYPE III AUDIO.