
“AI Safety as a YC Startup” by Lukas Petersson

2025/1/8

LessWrong (30+ Karma)

A while back I gave a talk about doing AI safety as a YC startup. I wrote a blog post about it and thought it would be interesting to share with both the YC and AI safety communities. Please share any feedback or thoughts; I'd love to hear them!

**AI Safety is a problem and people pay to solve problems**

Intelligence is dangerous, and I think there's a significant chance that the default scenario of AI progress poses an existential risk to humanity. While it's far from certain, even small probabilities are significant when the stakes are this high. This is an enormous abstract problem, but there are thousands of sub-problems waiting to be solved.

Some of these sub-problems already exist today, but most are in the future (GPT is not capable of killing us yet). When people start feeling these pains, they will [...]


Outline:

(00:22) AI Safety is a problem and people pay to solve problems

(01:24) More startups should solve these problems

(02:39) Y Combinator

(04:23) YC advice in the context of AI safety

(04:39) The problems they are solving are in the future

(05:13) Customers are often researchers from AI labs or from the government

(05:54) The pool of potential customers is often small

(06:54) Doing good


First published: January 8th, 2025

Source: https://www.lesswrong.com/posts/QxJFjqT6oFY3jo47s/ai-safety-as-a-yc-startup-1

---

Narrated by TYPE III AUDIO.