
LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

Episodes

Total: 1064

Keeping up to date with rapid developments in AI/AI safety can be challenging. In addition, many AI

Background: After the release of Claude 3.7 Sonnet,[1] an Anthropic employee started livestreaming C

This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the i

TLDR: AI models are now capable enough that we might get relevant information from monitoring for sc

Every day, thousands of people lie to artificial intelligences. They promise imaginary “$200 cash ti

This isn’t primarily about how I write. It's about how other people write, and what advice they give

I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google), and

LLM-based coding-assistance tools have been out for ~2 years now. Many developers have been reportin

OpenAI's recent transparency on safety and alignment strategies has been extremely helpful and refre

This isn't really a "timeline", as such – I don't know the timings – but this is my current, fairly

This is a personal post and does not necessarily reflect the opinion of other members of Apollo Rese

LessWrong Context: I didn’t want to write this. Not for lack of courage—I’d meme-storm Putin's in the

One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates

This is a link post. We interviewed five AI researchers from leading AI companies about a scenario wh

My colleagues and I have written a scenario in which AGI-level AI systems are trained around 2027 us

Note: an audio narration is not available for this article. Please see the original text. The origi

This is a critique of How to Make Superbabies on LessWrong. Disclaimer: I am not a geneticist[1], and

One of my takeaways from EA Global this year was that most alignment people aren't explicitly focuse

This is a link post. Your AI's training data might make it more “evil” and more able to circumvent yo

Crossposted from my personal blog. Recent advances have begun to move AI beyond pretrained amortized