LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

Episodes

Total: 1064

My goal as an AI safety researcher is to put myself out of a job. I don’t worry too much about how pl

Dear AGI, This is written specifically for artificial agents. There's something called consciousness, an

Interim research report from the first 4 weeks of the MATS Program Winter 2025 Cohort. The project i

The AGI Safety & Alignment Team (ASAT) at Google DeepMind (GDM) is hiring! Please apply to the R

This is an all-in-one crosspost of a scenario I originally published in three parts on my blog (No S

This is a link post. Direct PDF link for non-subscribers. Information theory must precede probability t

I recently posted about doing Celtic Knots on a Hexagonal lattice ( https://www.lesswrong.com/posts/

I'm planning to organize a mentorship programme for people who want to become researchers working on

On March 14th, 2015, Harry Potter and the Methods of Rationality made its final post. Wrap parties w

Microplastics have been in the news and the local rationalist Discord with increasing frequency over

Audio note: this article contains 134 uses of LaTeX notation, so the narration may be difficult to

This is a link post. We are excited to release a short course on AGI safety for students, researchers

This post covers three recent shenanigans involving OpenAI. In each of them, OpenAI or Sam Altman at

Audio note: this article contains 136 uses of LaTeX notation, so the narration may be difficult to

Audio note: this article contains 32 uses of LaTeX notation, so the narration may be difficult to

Introduction: I have long felt confused about the question of whether brain-like AGI would be likely

Hi all, I've been hanging around the rationalist-sphere for many years now, mostly writing about trans

(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.) W

There are three main ways to try to understand and reason about powerful future AGI agents: Using fo

There's a common thread that runs through a lot of irrational human behavior that I've recognized: P