LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

Episodes

Total: 1064

A quick post on a probably-real inadequate equilibrium mostly inspired by trying to think through w

Epistemic status: One week empirical project from a theoretical computer scientist. My analysis and

This is a link post. As you know, histograms are decent visualizations for PDFs with lots of samples

This is a link post. We know what linear functions are. A function f is linear iff it satisfies addi

AISI is being rebranded highly non-confusingly as CAISI. Is it the end of AISI and a huge disaster,

Epistemic Status: Attempt to codify new term. I want to define a new term that will be useful in

Previously: #1, #2, #3, #4, #5 Dating Roundup #4 covered dating apps. Roundup #5 covered opening w

Has someone you know ever had a “breakthrough” from coaching, meditation, or psychedelics — only to

Imagine each of us has an AI representative, aligned to us, personally. Is gradual disempowerment s

Midjourney, “engraving of Apollo shooting his bow at a distant cancer cell” Introduction and Princip

I have been forced recently to cover many statements by US AI Czar David Sacks. Here I will do so

This is a link post. When it comes to disputes about AI coding capabilities, I think a lot of people

(This sequence assumes basic familiarity with longtermist cause prioritization concepts, though the

This research was completed for LASR Labs 2025 by Benjamin Arnav, Pablo Bernabeu-Pérez, Nathan Helm

What's the main value proposition of romantic relationships? Now, look, I know that when people dro

Adapted from this twitter thread. See this as a quick take. Mitigation Strategies How to mitigate S

There have recently been various proposals for mitigations to "the intelligence curse" or "gradual

On prioritizing orgs by theory of change, identifying effective giving opportunities, and how Manif

Letting kids be kids seems more and more important to me over time. Our safetyism and paranoia abou

In previous posts in this sequence, I laid out a case for why most AI governance research is too ac