Cross-posted on the EA Forum.
**Introduction**

Several developments over the past few months should cause you to re-evaluate what you are doing. These include:
- Updates toward short timelines
- The Trump presidency
- The o1 (inference-time compute scaling) paradigm
- DeepSeek
- Stargate/AI data center spending
- Increased internal deployment
- Absence of AI x-risk/safety considerations in mainstream AI discourse
Taken together, these are enough to render many existing AI governance strategies obsolete (and probably some technical safety strategies too). There's a good chance we're entering crunch time, and that should absolutely affect your theory of change and what you plan to work on. In this piece I try to give a quick summary of these developments and think through their broader implications for AI safety. At the end of the piece I give some quick initial thoughts on how these developments affect what safety-concerned folks should be prioritizing. These are [...]
Outline:
(00:11) Introduction
(01:24) Implications of recent developments
(01:29) Updates toward short timelines
(04:29) The Trump presidency
(07:37) The o1 paradigm
(09:27) DeepSeek
(12:11) Stargate/AI data center spending
(13:15) Increased internal deployment
(15:47) Absence of AI x-risk/safety considerations in mainstream AI discourse
(17:17) Implications for strategic priorities
(17:21) Broader implications for US-China competition
(19:36) What seems less likely to work?
(20:59) What should people concerned about AI safety do now?
(24:04) Acknowledgements
The original text contained 6 footnotes, which were omitted from this narration.
First published: January 28th, 2025
---
Narrated by TYPE III AUDIO.