
“My AGI safety research—2024 review, ’25 plans” by Steven Byrnes

2024/12/31

LessWrong (30+ Karma)


Previous: My AGI safety research—2022 review, ’23 plans. (I guess I skipped it last year.) “Our greatest fear should not be of failure, but of succeeding at something that doesn't really matter.” –attributed to D.L. Moody

**Tl;dr**

Section 1 goes through my main research project, “reverse-engineering human social instincts”: what does that even mean, what's the path-to-impact, what progress did I make in 2024 (spoiler: lots!!), and how can I keep pushing it forward in the future? Section 2 is what I’m expecting to work on in 2025: most likely, I’ll start the year with some bigger-picture thinking about Safe & Beneficial AGI, then eventually get back to reverse-engineering human social instincts after that. Plus, a smattering of pedagogy, outreach, etc. Section 3 is a sorted list of all my blog posts from 2024. Section 4 is acknowledgements.

**1. Main research project: reverse-engineering human social instincts**

**1.1 Background [...]**


Outline:

(00:25) Tl;dr

(01:12) 1. Main research project: reverse-engineering human social instincts

(01:21) 1.1 Background: What's the problem and why should we care?

(04:31) 1.2 More on the path-to-impact

(07:41) 1.3 Progress towards reverse-engineering human social instincts

(10:09) 1.4 What's next?

(11:22) 2. My plans going forward

(12:58) 3. Sorted list of my blog posts from 2024

(15:58) 4. Acknowledgements

The original text contained 2 footnotes which were omitted from this narration.


First published: December 31st, 2024

Source: https://www.lesswrong.com/posts/2wHaCimHehsF36av3/my-agi-safety-research-2024-review-25-plans

---

Narrated by TYPE III AUDIO.