**Abstract:** AI safety is a young science. In its early history, deep and speculative questions regarding the risks of artificial superintelligence attracted the attention of basic science and philosophy, but the near-horizon focus of industry and governance, coupled with shorter safety timelines, means that applied science is now taking a bigger hand in the field's future. As the science of AI safety matures, it is critical to establish a research tradition that balances the need for foundational understanding with the demand for relevance to practical applications. We claim that "use-inspired basic research" – which grounds basic science in a clear-cut purpose – strikes a balance between pragmatism and rigor that will ensure the solid foundations, flexibility, and utility appropriate for a problem of this scope and magnitude. In this post, we build on the established concept of 'safety cases' to elevate the role of basic research in [...]
Outline:
(00:05) Abstract
(01:43) Introduction: Applied and Basic Research in AI Safety
(05:54) Safety Cases: An Applied Science Artifact
(09:28) Use-Inspired Basic Research and Safety Case Trading Zones
(11:43) Safety Cases as Boundary Objects
(20:21) Laying the Groundwork for Trading with Basic Research
(24:01) Future Work
(24:40) Acknowledgments
The original text contained 7 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
First published: October 31st, 2024
Source: https://www.lesswrong.com/posts/EzrFeLRuww7zTSXhy/toward-safety-case-inspired-basic-research
---
Narrated by TYPE III AUDIO.
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.