We're sunsetting PodQuest on 2025-07-28. Thank you for your support!

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you

Episodes

Total: 543

(Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited fo

I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their

This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There'

As someone who writes for fun, I don't need to get people onto my site: If I write a post and s

This is a low-effort post. I mostly want to get other people's takes and express concern about

from aisafety.world The following is a list of live agendas in technical AI safety, updating our pos

I've heard many people say something like "money won't matter post-AGI". This ha

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey. Mix

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has

TL;DR: If you want to know whether getting insurance is worth it, use the Kelly Insurance Calculator

My median expectation is that AGI[1] will be created 3 years from now. This has implications for how

There are people I can talk to, where all of the following statements are obvious. They go without s

I'm editing this post. OpenAI announced (but hasn't released) o3 (skipping o2 for trademark

I like the research. I mostly trust the results. I dislike the 'Alignment Faking' name and

Increasingly, we have seen papers eliciting in AI models various shenanigans. There are a wide variet

Six months ago, I was a high school English teacher. I wasn't looking to change careers, even after n

A new article in Science Policy Forum voices concern about a particular line of biological research

A fool learns from their own mistakes; the wise learn from the mistakes of others. – Otto von Bismarck

Someone I know, Carson Loughridge, wrote this very nice post explaining the core intuition around S