
AGI is Almost Here! What comes next? What's left to do? How do we adapt and prepare?

2024/10/11

Artificial Intelligence Masterclass

People
David Shapiro
Topics
David Shapiro argues that the o1 Preview model is already very close to artificial general intelligence (AGI). He notes that the model matches or exceeds human-level performance on many capabilities, as reflected in its academic, scientific, and economic impact. In his view, AGI does not need to be embodied: it works entirely with data, and many of the most important tasks for an AGI require no physical form. The primary criteria for judging AGI, he argues, are its economic and scientific impact. Although AGI has effectively arrived, he believes we still have time to address safety. He discusses the obstacles its rollout will face, such as regulation, entrenched processes, and resistance from unions, and argues that enterprise adoption will be even harder, since enterprises tend to buy from trusted vendors and take a wait-and-see attitude toward new technology. He expects five main feedback loops, including market, enterprise, military, and government feedback, to steer the development of AI safety. He also covers the gap between open-source and large proprietary models, and the arrival of the next generation of AI models. Finally, he responds to the debate over whether AGI truly understands what it does: in his view, this is beside the point; what matters is the accuracy and value of its output.

Shapiro also discusses the development pattern behind AGI, arguing that OpenAI follows a model similar to the chip industry's: scale the model up first, then optimize. He predicts that prices for next-generation AI models will keep falling over the next 6 to 24 months. On human exceptionalism, he argues that whether AGI has subjective experience does not matter; what matters is its usefulness. He believes a large share of jobs is at risk of automation, possibly as much as 65% even without robotics. He expects AGI's rollout to be slowed by factors such as regulation and process, possibly with union resistance, and warns against those who would use any available means to block the spread of AI and robotics. He argues that releasing products quickly and gathering feedback is more effective than lengthy closed-door safety testing, and that resource-constrained environments can drive more creative solutions. By the end of 2024 or in 2025, he believes, we may already have full AGI.


Key Insights

Why is o1 Preview considered close to AGI?

o1 Preview is close to AGI because it rivals or surpasses human capabilities across many tasks, such as completing a year-long thesis in an hour, and its usefulness is comparable to that of a good grad student.

Why does the AI not need to be embodied to be considered AGI?

Embodiment is not necessary for AGI because most valuable tasks, like scientific modeling and software development, do not require physical presence. An API can control external resources, making physical embodiment redundant.

What percentage of jobs are at risk for automation without robots?

Between 13% and 65% of jobs are at risk of automation even without robots, depending on whether you count only sedentary jobs or include office work more broadly.

Why will enterprise adoption of AGI be slow?

Enterprise adoption will be slow due to risk aversion, lack of trust in new vendors, and a wait-and-see approach until economic proof is evident. Big tech's history of over-promising and under-delivering also contributes to this hesitance.

What are the primary feedback loops for AI safety?

The primary feedback loops for AI safety include market feedback, enterprise feedback, military feedback, government feedback, and regulatory feedback. These loops will influence the safety and commercial viability of AGI products.

How will open-source AI models compare to proprietary ones?

Open-source AI models will likely be 6 to 12 months behind proprietary models but may offer more creative solutions due to constrained resources. However, they may lack the scale and funding of flagship models.

What is the expected timeline for achieving full AGI?

Full AGI is expected to be achieved by the end of 2024 or in 2025, with continued advancements in model capabilities and optimizations.

Why are human exceptionalism arguments against AGI irrelevant?

Human exceptionalism arguments, such as claims that AGI lacks true understanding or real experience, are irrelevant because the economic and scientific value of AGI's outputs is measurable, regardless of any subjective experience.

Chapters
This chapter explores the capabilities of o1 Preview (codename Strawberry), comparing its performance to that of a human grad student and discussing whether it meets the criteria for Artificial General Intelligence (AGI). The debate around embodiment and the significance of economic and scientific impact in defining AGI are also addressed.
  • o1 Preview's performance rivals or surpasses humans in several capabilities.
  • The model's usefulness is compared to that of a good grad student.
  • Debate exists on whether embodiment is a necessary criterion for AGI.
  • Economic and scientific impact are considered primary metrics for AGI.

Shownotes

If you liked this episode, follow the podcast to keep up with the AI Masterclass, and turn on notifications for the latest developments in AI.

Find David Shapiro on:
Patreon: https://patreon.com/daveshap (Discord via Patreon)
Substack: https://daveshap.substack.com (Free Mailing List)
LinkedIn: linkedin.com/in/dave-shap-automator
GitHub: https://github.com/daveshap

Disclaimer: All content rights belong to David Shapiro. No copyright infringement intended. Contact [email protected] for removal or queries.

