
#2345 - Roman Yampolskiy

2025-07-03

The Joe Rogan Experience

Chapters
The discussion begins by contrasting the optimistic views of AI's future from those invested in the field versus the concerns of AI safety researchers. The conversation explores the timeline of AI development, the potential for uncontrolled superintelligence, and the challenges of defining and achieving AGI.
  • Optimistic views of AI's future often come from those with financial interests in the field.
  • AI safety researchers express concerns about the potential for uncontrolled superintelligence.
  • There is no universally agreed-upon definition of AGI.
  • Predictions about AGI's arrival have consistently been inaccurate.

Shownotes

Dr. Roman Yampolskiy is a computer scientist, AI safety researcher, and professor at the University of Louisville. He’s the author of several books, including "Considerations on the AI Endgame," co-authored with Soenke Ziesche, and "AI: Unexplained, Unpredictable, Uncontrollable." http://cecs.louisville.edu/ry/

Upgrade your wardrobe and save on @TrueClassic at https://trueclassic.com/rogan

Learn more about your ad choices. Visit podcastchoices.com/adchoices