Audio narrations of LessWrong posts.
Background: With the release of Claude 3.7 Sonnet, Anthropic promoted a new benchmark: beating Pokémon
Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me
OpenAI has finally introduced us to the full o3 along with o4-mini. Greg Brockman (OpenAI): Just released
Epistemic status: Exploratory A scaffold is a lightweight, temporary construction whose point is to
In light of the recent news from Mechanize/Epoch and the community discussion it sparked, I'd like
Bertrand Russell noted how people often describe the same factual behavior using emotionally opposite
What if getting strong evidence of scheming isn't the end of your scheming problems, but merely the
Subtitle: Bad for loss of control risks, bad for concentration of power risks I’ve had this sitting
I recall seeing three “rationalist” cases for Trump: Richard Ngo on Twitter and elsewhere focused
This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are gi
Summary We conducted a small investigation into using SAE features to recover sandbagged capabilities
SUMMARY: ALLFED is making an emergency appeal here due to a serious funding shortfall. Without n
We’ve written a new report on the threat of AI-enabled coups. I think this is a very serious risk
Audio note: this article contains 37 uses of LaTeX notation, so the narration may be difficult to
New: https://openai.com/index/updating-our-preparedness-framework/ Old: https://cdn.openai.com/open
Three big OpenAI news items this week were the FT article describing the cutting of corners on safety
A post by Michael Nielsen that I found quite interesting. I decided to reproduce the full essay con
Introduction Writing this post puts me in a weird epistemic position. I simultaneously believe that:
Context Disney's Tangled (2010) is a great movie. Spoilers if you haven't seen it. The heroine, having
The original Map of AI Existential Safety became a popular reference tool within the community after