
"What I mean by "alignment is in large part about making cognition aimable at all"" by Nate Soares

2023/2/13

LessWrong (Curated & Popular)


https://www.lesswrong.com/posts/NJYmovr9ZZAyyTBwM/what-i-mean-by-alignment-is-in-large-part-about-making

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

(Epistemic status: attempting to clear up a misunderstanding about points I have attempted to make in the past. This post is not intended as an argument for those points.)

I have long said that the lion's share of the AI alignment problem seems to me to be about pointing powerful cognition at anything at all, rather than figuring out what to point it at.

It's recently come to my attention that some people have misunderstood this point, so I'll attempt to clarify here.