In light of the recent news from Mechanize/Epoch and the community discussion it sparked, I'd like to open a conversation about a question some of us grapple with: What constitutes a net-positive AI startup from an AI safety perspective, and what steps can founders take to demonstrate goodwill and navigate the inherent pressures?
This feels like an important conversation to have because the AI safety community has increasingly encouraged people to build startups (due to a lack of funding, a potentially higher ceiling for society-wide impact, etc.). I've thought a lot about this and have been going back and forth on it for the past three years. You get constant whiplash.
The original text contained 1 footnote which was omitted from this narration.
First published: April 18th, 2025
Source: https://www.lesswrong.com/posts/o3sEHE8cqQ5hcqgkG/what-makes-an-ai-startup-net-positive-for-safety
---
Narrated by TYPE III AUDIO.