Audio narrations of LessWrong posts.
From neel.fun: I remember watching YouTube videos and thinking "This is the last video, I will quit af
Scaling inference: With the release of OpenAI's o1 and o3 models, it seems likely that we are now con
This work is the result of Daniel and Eric's 2-week research sprint as part of Neel Nanda and Arthur
"There's a 70% chance of rain tomorrow," says the weather app on your phone. "There's a 30% chance m
I notice that there has been very little if any discussion on why and how considering homeostasis is
Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the res
This is a link post. Once robots can do physical jobs, how quickly could they become a significant pa
Depressive realism is the idea that depressed people have more accurate beliefs than the general pop
Traditional economics thinking has two strong principles, each based on abundant historical data: Pr
From AI scientist to AI research fleet: Research automation is here (1, 2, 3). We saw it coming and p
I'm behind on a couple of posts I've been planning, but am trying to post something every day if pos
We have contact details and can send emails to 1500 students and former students who've received har
This is a plan for how ASI could be relatively safely developed. Abstract: A plan that puts all fro
I can be a nightmare conference attendee: I tend to ask nitpicky questions and apply a dose of skep
There's one week left in the Discussion Phase of the 2023 Review. Here are a few things you might want
Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go. Note that
Anthropic's Alignment Science team conducts technical research aimed at mitigating the risk of catas
Epistemic status -- sharing rough notes on an important topic because I don't think I'll have a chan
[This is an interim report and continuation of the work from the research sprint done in MATS winter
Introduction: MATS currently has more people interested in being mentors than we are able to support—