Audio narrations of LessWrong posts.
Politico writes: The [Ukrainian] program […] rewards soldiers with points if they upload videos pro…
Burnout. Burn out? Whatever, it sucks. Burnout is a pretty confusing thing made harder by our naiv…
As an employee of the European AI Office, it's important for me to emphasize this point: The views…
Gemini 2.5 Pro is sitting in the corner, sulking. It's not a liar, a sycophant or a cheater. It does…
AI progress is driven by improved algorithms and additional compute for training runs. Understandin…
Right before releasing o3, OpenAI updated its Preparedness Framework to 2.0. I previously wrote an…
The video is about extrapolating the future of AI progress, following a timeline that starts from t…
We’re excited to release a new AI governance research agenda from the MIRI Technical Governance Tea…
This is a link post. I argue that you shouldn't accuse your interlocutor of being insufficiently tru…
It is often noted that anthropomorphizing AI can be dangerous. People likely have prosocial instinc…
Thank you @elifland for reviewing this post. He and AI Futures are planning to publish updates to t…
It'll take until ~2050 to repeat the level of scaling that pretraining compute is experiencing this…
I recently read a blog post that concluded with: When I'm on my deathbed, I won't look…
Misaligned AIs might engage in research sabotage: making safety research go poorly by doing things…
Whoops. Sorry everyone. Rolling back to a previous version. Here's where we are at this poin…
(This is the fifth essay in a series that I’m calling “How do we solve the alignment problem?”. I’m…
In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander,…
[This has been lightly edited from the original post, eliminating some introductory material that L…