Audio narrations of LessWrong posts.
Epistemic status: Briefer and more to the point than my model of what is going on with LLMs, but al
This is a link post. When I was a really small kid, one of my favorite activities was to try and dam
When complex systems fail, it is often because they have succumbed to what we call "disempowerment
I spent a couple of weeks writing this new introduction to AI timelines. Posting here in case useful
This is a link post. Diffractor is the first author of this paper. Official title: "Regret Bounds fo
This is part of the MIRI Single Author Series. Pieces in this series represent the beliefs and opin
Timothy and I have recorded a new episode of our podcast with Austin Chen of Manifund (formerly of
Short AI takeoff timelines seem to leave no time for some lines of alignment research to become imp
This is a link post. Researchers used RNA sequencing to observe how cell types change during brain d
The first AI war will be in your computer and/or smartphone. Companies want to get customers / user
If there turns out not to be an AI crash, you get a 1/(1+7) * $25,000 = $3,125 If there is an AI cr
In this post, we present a replication and extension of an alignment faking model organism: Replic
Yesterday I covered Dwarkesh Patel's excellent podcast coverage of AI 2027 with Daniel Kokotajlo and
Spoiler: “So after removing the international students from the calculations, and using the middle-
My thoughts on the recently posted story. Caveats I think it's great that the AI Futures Project w
Daniel Kokotajlo has launched AI 2027, Scott Alexander introduces it here. AI 2027 is a serious att
The Catholic Church has always had a complicated relationship with homosexuality. The central claim
There's an irritating circumstance I call The Black Hat Bobcat, or Bobcatting for short. The Blackh
I just published A Slow Guide to Confronting Doom, containing my own approach to living in a world
Following a few events[1] in April 2022 that caused many people to update sharply and negatively