It is often noted that anthropomorphizing AI can be dangerous. People likely have prosocial instincts that AI systems lack (see below). Assuming AGI will be aligned because humans with similar behavior are usually mostly harmless is probably wrong and quite dangerous.
I want to discuss a flip side of using humans as an intuition pump for thinking about AI. Humans have many of the properties we are worried about for truly dangerous AGI:
Given this list, I currently weakly believe that the advantages of tapping these intuitions probably outweigh the disadvantages.
**Differential progress toward anthropomorphic AI may be net-helpful**
And progress may carry us in that direction, with or without the alignment community pushing for it. I currently hope we see rapid progress on better assistant and companion [...]
Outline:
(01:03) Differential progress toward anthropomorphic AI may be net-helpful
(03:10) AI rights movements will anthropomorphize AI
(04:01) AI is actually looking fairly anthropomorphic
(05:45) Provisional conclusions
First published: May 1st, 2025
Source: https://www.lesswrong.com/posts/JfgME2Kdo5tuWkP59/anthropomorphizing-ai-might-be-good-actually
---
Narrated by TYPE III AUDIO.