This post reflects my personal opinion and not necessarily that of other members of Apollo Research or any of the people acknowledged below. Thanks to Jarrah Bloomfield, Lucius Bushnaq, Marius Hobbhahn, Axel Højmark, and Stefan Heimersheim for comments/discussions.

I find that people in the AI/AI safety community have not considered many of the important implications that security at AI companies has for catastrophic risks. In this post, I lay out some of these implications:
- AI companies are a long way from state-proof security
- Implementing state-proof security will slow down safety (and capabilities) research a lot
- Sabotage is sufficient for catastrophe
- What will happen if timelines are short?
- Security level matters, even if you’re not robust to top cyber operations
**AI companies are a long way from state-proof security** I’m of course not the first one to make this claim (e.g. see Aschenbrenner). But it bears repeating. Last year [...]
Outline:
(00:56) AI companies are a long way from state-proof security
(05:36) Implementing state-proof security will slow down safety (and capabilities) research a lot
(08:36) Sabotage is sufficient for catastrophe
(11:24) What will happen if timelines are short?
(14:34) Security level matters, even if you're not robust to top cyber operations
(15:21) Advice to frontier AI companies
The original text contained 1 footnote which was omitted from this narration.
First published: January 8th, 2025
Source: https://www.lesswrong.com/posts/gG4EhhWtD2is9Cx7m/implications-of-the-ai-security-gap
---
Narrated by TYPE III AUDIO.