Audio narrations of LessWrong posts.
Epistemic status: You probably already know if you want to read this kind of post, but in case you h
Introduction: some contemporary AI governance context. It's a confusing time in AI governance. Severa
Who is this post for? Someone who either: wonders if they should start lifting weights, and could be
When, exactly, should we consider humanity to have properly "lost the game", with respect to agentic
It doesn’t look good. What used to be the AI Safety Summits were perhaps the most promising thing ha
This is a link post. As AIs rapidly advance and become more agentic, the risk they pose is governed
Audio note: this article contains 60 uses of LaTeX notation, so the narration may be difficult to
Not too long ago, OpenAI presented a paper on their new strategy of Deliberative Alignment. The way
Dear Lsusr, I am inspired by your stories about Effective Evil. My teachers at school told me I shoul
AI alignment is probably the most pressing issue of our time. Unfortunately it's also become one of
I recently posted my model of an optimistic view of AI, asserting that I disagree with every sentenc
This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the i
Scott Alexander famously warned us to Beware Trivial Inconveniences. When you make a thing easy to d
Crossposted from my Substack. Rational choice theory is commonly thought of as being about what to d
This week we got a revision of DeepMind's safety framework, and the first version of Meta's framework
This is a link post. In January 2020, Gary Marcus wrote GPT-2 And The Nature Of Intelligence, demonstrat
I am going to address some misconceptions about brain hemispheres -- in popular culture, and in Zizi
This is a succinct worksheet version of the "Think It Faster" Exercise. You can use this worksheet e
TL;DR: If you are thinking of using interpretability to help with strategic deception, then there's
Crossposted from my blog, which many people are saying you should check out! Imagine that you came a