We're sunsetting PodQuest on 2025-07-28. Thank you for your support!

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you

Episodes

Total: 543

It'll take until ~2050 to repeat the level of scaling that pretraining compute is experiencing

In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander,

This is a link post. To follow up my philanthropic pledge from 2020, I've updated my philanthrop

I’ve been thinking recently about what sets apart the people who’ve done the best work at Anthropic

This is a link post. Guillaume Blanc has a piece in Works in Progress (I assume based on his paper)

We’ve written a new report on the threat of AI-enabled coups. I think this is a very serious risk

Back in the 1990s, ground squirrels were briefly fashionable pets, but their popularity came to an

Subtitle: Bad for loss of control risks, bad for concentration of power risks I’ve had this sitting

Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let m

Introduction Writing this post puts me in a weird epistemic position. I simultaneously believe that:

Dario Amodei, CEO of Anthropic, recently worried about a world where only 30% of jobs become automa

This is a link post. When I was a really small kid, one of my favorite activities was to try and dam

This is part of the MIRI Single Author Series. Pieces in this series represent the beliefs and opin

Short AI takeoff timelines seem to leave no time for some lines of alignment research to become imp

In this post, we present a replication and extension of an alignment faking model organism: Replic

Summary: We propose measuring AI performance in terms of the length of tasks AI agents can complete

“In the loveliest town of all, where the houses were white and high and the elm trees were green a

In 2021 I wrote what became my most popular blog post: What 2026 Looks Like. I intended to keep wri

Epistemic status: This post aims at an ambitious target: improving intuitive understanding directly