
“Building Big Science from the Bottom-Up: A Fractal Approach to AI Safety” by Lauren Greenspan

2025/1/7

LessWrong (30+ Karma)


Shownotes

Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such.

The magnitude and scope of the problems related to AI safety have led to an increasingly public discussion about how to address them. Risks of sufficiently advanced AI systems involve unknown unknowns that could impact the global economy, national and personal security, and the way we investigate, innovate, and learn. Clearly, the response from the AI safety community should be as multi-faceted and expansive as the problems it aims to address. In a previous post, we framed fruitful collaborations between applied science, basic science, and governance as trading zones mediated by a safety-relevant boundary object (a safety case sketch) without discussing [...]


Outline:

(02:35) What is Big Science?

(05:53) Paths toward a Big Science of AI Safety

(06:38) The Way Things Are

(17:42) Why bottom-up could be better

(21:43) How to scale from where we are now

(23:40) Appendix: Potential Pushback of this Approach

The original text contained 3 footnotes which were omitted from this narration.


First published: January 7th, 2025

Source: https://www.lesswrong.com/posts/5JJ4AxQRzJGWdj4pN/building-big-science-from-the-bottom-up-a-fractal-approach

---

Narrated by TYPE III AUDIO.