
“Overview: AI Safety Outreach Grassroots Orgs” by Severin T. Seehrich

2025/5/5

LessWrong (30+ Karma)


We’ve been looking for joinable endeavors in AI safety outreach over the past weeks and would like to share our findings with you. Let us know if we missed any and we’ll add them to the list.

For comprehensive directories of AI safety communities spanning general interest, technical focus, and local chapters, check out https://www.aisafety.com/communities and https://www.aisafety.com/map. If you're uncertain where to start, https://aisafety.quest/ offers personalized guidance.

**ControlAI**

ControlAI started out as a think tank. Over the past months, they have developed a theory of change for preventing ASI development (the “Direct Institutional Plan”). As a pilot campaign, they cold-emailed British MPs and Lords to talk with them about AI risk. So far, they have talked to 70 representatives, 31 of whom agreed to publicly stand against ASI development.

ControlAI is also supporting grassroots activism: on https://controlai.com/take-action, you can find templates to send to your representatives yourself, as [...]


Outline:

(00:36) ControlAI

(01:44) EncodeAI

(02:17) PauseAI

(03:31) StopAI

(03:48) Collective Action for Existential Safety (CAES)

(04:35) Call to action


First published: May 4th, 2025

Source: https://www.lesswrong.com/posts/hmds9eDjqFaadCk4F/overview-ai-safety-outreach-grassroots-orgs

---

Narrated by TYPE III AUDIO.