The longtermist funding ecosystem needs certain functions to exist at a reasonable scale. I argue LTFF should continue to be funded because we're currently one of the only organizations comprehensively serving these functions. Specifically, we: [...]
Getting these functions right takes meaningful resources - well over $1M annually. This figure isn't arbitrary: $1M funds roughly 10 person-years of work, split between supporting career transitions and independent research. Given what we're trying to achieve - from maintaining independent AI safety voices to seeding new fields like x-risk focused information security - this is arguably a minimum viable scale.
While I'm excited to see some of these functions [...]
Outline:
(01:55) Core Argument
(04:11) Key Functions Currently (Almost) Unique to LTFF
(04:17) Technical AI Safety Funding
(04:40) Why aren't other funders investing in GCR-focused technical AI Safety?
(06:27) Career Transitions and Early Researcher Funding
(07:44) Why aren't other groups investing in improving career transitions in existential risk reduction?
(09:05) Going Forwards
(10:02) Providing (Some) Counterbalance to AI Companies on AI Safety
(11:39) Going Forwards
(12:21) Funding New Project Areas and Approaches
(13:18) Going Forwards
(14:01) Broader Funding Case
(14:05) Why Current Funding Levels Matter
(16:03) Going Forwards
(17:36) Conclusion
(18:41) Appendix: LTFF's Institutional Features
(18:57) Transparency and Communication
(19:41) Operational Features
First published: December 3rd, 2024
---
Narrated by TYPE III AUDIO.