“LLM AGI will have memory, and memory changes alignment” by Seth Herd
Duration: 17:06
Published: 2025-04-04
Source: LessWrong (30+ Karma)
Chapters
Why is memory useful for many tasks?
Are memory systems ready for agentic use?
Are agents ready to direct memory systems?
How does learning new beliefs change goals and values?
What value change phenomena have been observed in LLMs to date?
How do value crystallization and reflective stability result from memory?
What are the provisional conclusions?