
“A shortcoming of concrete demonstrations as AGI risk advocacy” by Steven Byrnes

2024/12/11

LessWrong (30+ Karma)

Shownotes Transcript

Given any particular concrete demonstration of an AI algorithm doing seemingly-bad-thing X, a knowledgeable AGI optimist can look closely at the code, training data, etc., and say: “Well of course, it's obvious that the AI algorithm would do X under these circumstances. Duh. Why am I supposed to find that scary?” And yes, it is true that, if you have enough of a knack for reasoning about algorithms, then you will never ever be surprised by any demonstration of any behavior from any algorithm. Algorithms ultimately just follow their source code. (Indeed, even if you don’t have much of a knack for algorithms, such that you might not have correctly predicted what the algorithm did in advance, it will nevertheless feel obvious in hindsight!) From the AGI optimist's perspective: If I'm not scared of AGI extinction right now, and nothing surprising has happened, then I won’t feel like I [...]


First published: December 11th, 2024

Source: https://www.lesswrong.com/posts/L7t3sKnS7DedfTFFu/a-shortcoming-of-concrete-demonstrations-as-agi-risk

    ---
    

Narrated by TYPE III AUDIO.

