Audio narrations of LessWrong posts.
My few most productive individual weeks at Anthropic have all been “crisis project management”: coor
One concept from Cal Newport's Deep Work that has stuck with me is that of the any-benefit mindset: T
There's this popular trope in fiction about a character being mind controlled without losing awarene
EXP is an experimental summer workshop combining applied rationality with immersive experiential edu
I have, over the last year, become fairly well-known in a small corner of the internet tangentially
(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app. T
Dan Hendrycks, Eric Schmidt and Alexandr Wang released an extensive paper titled Superintelligence S
Thanks to everyone who took the Unofficial 2024 LessWrong Survey. For the results, check out the dat
AI for Epistemics is about helping to leverage AI for better truthseeking mechanisms — at the level
This post is a distillation of a recent work in AI-assisted human coordination from Google DeepMind.
Audio note: this article contains 212 uses of LaTeX notation, so the narration may be difficult to
TLDR: Vacuum decay is a hypothesized scenario where the universe's apparent vacuum state could tran
The most hyped event of the week, by far, was the Manus Marketing Madness. Manus wasn’t entirely hyp
This research was conducted at AE Studio and supported by the AI Safety Grants programme administere
We study alignment audits—systematic investigations into whether an AI is pursuing hidden objectives
This is a link post. If there is an international project to build artificial general intelligence (“
(As an employee of the European AI Office, it's important for me to emphasize this point: The views
So, I have a lot of complaints about Anthropic, and about how EA / AI safety people often relate to
The Most Forbidden Technique is training an AI using interpretability techniques. An AI produces a f
Back in November 2024, Scott Alexander asked: Do longer prison sentences reduce crime? As a marker,