
GSM-Symbolic paper - Iman Mirzadeh (Apple)

2025/3/19

Machine Learning Street Talk (MLST)

People

Iman Mirzadeh

Host
A podcast host and content creator focused on the electric vehicle and energy sectors.
Topics
Iman Mirzadeh: If there is one message to take away from this conversation, it is the distinction between intelligence and achievement. The field today is heavily focused on achievement, on numbers and accuracy, while neglecting the understanding of intelligent systems: how a system understands and reasons, rather than what number it posts on a particular benchmark. We need to build better abstract world models and knowledge representations. That is not easy, because we lack answers even to some basic questions. AlphaZero, for example, raised the level of chess not because people memorized its moves, but because they tried to understand how it works; grandmasters used the AI to develop theory and invent new strategies rather than simply memorizing lines. The saturation of benchmarks like ImageNet shows that reaching high accuracy does not mean a problem is solved, because the real world is not static. Building intelligent systems should be about understanding and reasoning, not about chasing a precise numerical result.

People often assume LLMs fall short at reasoning because some feature or tweak is missing, or because they merely need access to tools, but this overlooks the importance of understanding and of creating new knowledge. Humans use tools too; what matters is understanding and creating knowledge, not merely completing a task or reaching a given accuracy. Solving complex problems takes many tools, and it always comes back to the same question: how does the system understand and reason? Using a tool can solve a problem, and even win a game, without producing any understanding of why the solution works. Some of AlphaZero's strategies had never been seen before; what improved chess was understanding the reasons behind those strategies, not merely the advantage they confer.

LLM reasoning has real limitations, and its brittleness suggests it is not genuine reasoning. An LLM is better viewed as a system that has mastered a huge collection of distributions; a prompt merely adjusts those distributions rather than reflecting real understanding. Prompts can steer a model, but that does not mean the model understands the knowledge behind them. Current training treats everything as a distribution: the model's objective is to learn and minimize its distance to the data distribution, which limits its ability to understand anything outside that distribution and makes it error-prone on data drawn from different distributions. The cross-entropy loss only checks whether the model's output is correct; it ignores whether the model understands the underlying concepts. It cannot, for instance, guarantee that a model grasps what numbers and addition are, or that training will produce world models and concepts at all.

The AI field has made great progress, but its research methodology and metric design leave room for improvement. Our current methods are not optimal: we lack a deep understanding of how these systems work, and reading a large number of papers does not noticeably improve that understanding. Unlike fields such as physics, AI has no unified theoretical framework to explain how its systems work, including how prompting works, and research tends to look for solutions before understanding the problems; the understanding should come first. At the same time, theoretical work is often too rigid and too disconnected from practice to develop quickly, and the field faces further challenges such as the shortcomings of peer review.

How to bridge symbolic AI and connectionism is an important question. Symbolic and non-symbolic models can be combined, but they cannot be treated as two entirely separate systems. Combining them requires a model that can build a world model and update its own belief system: it must be able to judge whether information is correct, maintain its own beliefs and knowledge representation, and update them as new information arrives, not as an isolated module but as an integrated loop.

Humans do not reason all the time, and LLMs likewise operate in multiple modes, sometimes doing surface statistics and sometimes approximate reasoning. LLMs excel at interpolation, which creates the illusion of reasoning; interpolation serves well in closed domains but has clear limits in open ones. A system's achievements and capabilities are not the same thing as its intelligence, and we routinely conflate the two. Intelligence refers to a system's underlying capability and potential to grow over the long term, not its current accomplishments; performing well on a benchmark does not by itself imply intelligence. On the analogy of brain-to-body mass scaling, what matters is the rate of growth, not the current level. The "Iman moon-landing test" frames intelligence as the time it takes to go from early humans to landing on the moon; today's LLMs scoring highly on benchmarks does not make them smarter than early humans in that sense. Intelligence is the capacity to learn and grow, not performance on a specific task.

Measuring intelligence is a hard problem, and today's benchmarks saturate easily. Measurement should start from the defining properties of intelligence, which may require a non-objective approach, for example looking at how a system performs on entirely new tasks, such as how quickly it learns a new programming language. The paper by Gilles Gignac et al. gives a formal treatment of the definition of intelligence and is worth consulting.

Host: We need to build better abstract world models and knowledge representations. Humans do not reason all the time, and LLMs likewise operate in multiple modes, sometimes doing surface statistics and sometimes approximate reasoning. Intelligence is not a skill; it is the ability to adapt to new things.

Supporting evidence:
  • Iman Mirzadeh: "To me it looks nearly impossible to build an intelligent system that operates without an abstract model of the environment and the world and knowledge."
  • Iman Mirzadeh: "In image and computer vision, we had these benchmarks like ImageNet and all those benchmarks and we saturated them and we thought, okay, the vision is solved."
  • Iman Mirzadeh: "number of examples. You have to build an agent that understands and reasons."
  • Iman Mirzadeh: "So intelligence, by default, it means, like, by definition, it means about the capability of system and how it can grow and at some point eventually becomes capable."
  • Iman Mirzadeh: "So how could we measure this? Because, you know, like Jolet had this formalism for measuring intelligence, but it wasn't computable."
  • Iman Mirzadeh: "If you look back, obviously we can admit that kind of what these systems today are capable of kind of surprised the field and everyone, I think."
  • Iman Mirzadeh: "So, yeah, I mean, about sampling, there are a couple of things. Like, sometimes sampling in general doesn't make sense."
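To make the cross-entropy point concrete, here is a minimal toy sketch (my own illustration in PyTorch; the tiny vocabulary and random logits are stand-ins, not anything from the episode or the paper). The loss compares predicted next-token distributions against the observed tokens of "2+3=5"; no term in the objective refers to numbers or addition as concepts.

```python
# Toy sketch: next-token cross-entropy on the string "2+3=5".
# The objective scores token-level distributional fit only; it has no
# term that represents "2", "3", or addition as concepts.
import torch
import torch.nn.functional as F

vocab = {ch: i for i, ch in enumerate("0123456789+=")}
tokens = torch.tensor([vocab[ch] for ch in "2+3=5"])

# Stand-in for a language model: one row of logits per next-token
# position (a real LLM would produce these from the prefix).
torch.manual_seed(0)
logits = torch.randn(len(tokens) - 1, len(vocab), requires_grad=True)

# Cross-entropy between predicted distributions and the observed
# next tokens "+", "3", "=", "5".
loss = F.cross_entropy(logits, tokens[1:])
loss.backward()  # training would follow this gradient

# Driving this loss to zero makes the model match the data
# distribution; it does not certify any understanding of arithmetic.
print(f"loss = {loss.item():.3f}")
```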
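The interpolation point can be illustrated with a classic curve-fitting toy (again my own sketch, in NumPy, not tied to the episode): a model that fits well inside its training range looks as if it understands the function, yet fails once asked to extrapolate, much as an LLM can shine in a closed domain and degrade in an open one.

```python
# Toy sketch: interpolation looks like understanding; extrapolation
# exposes the difference. Fit a polynomial to sin(x) on [0, pi].
import numpy as np

x_train = np.linspace(0.0, np.pi, 20)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

inside, outside = 1.5, 6.0  # 1.5 is in-range, 6.0 is out-of-range
print(np.polyval(coeffs, inside), np.sin(inside))    # close match
print(np.polyval(coeffs, outside), np.sin(outside))  # far off
```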


Chapters
This chapter discusses the crucial distinction between intelligence and achievement in AI systems. Current AI research heavily emphasizes achievement metrics like accuracy, neglecting the fundamental understanding of intelligence, reasoning, and knowledge representation.
  • Overemphasis on achievement metrics in AI research.
  • Need for better abstract world models in AI systems.
  • Lack of basic answers to fundamental questions about intelligent systems.

Shownotes

Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper, discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation.
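The core mechanism of GSM-Symbolic (discussed at [00:58:09] in the TOC below) can be sketched in a few lines: turn a GSM8K-style word problem into a template, regenerate it with different names and numbers, and check whether a model's accuracy survives these surface changes; a genuine reasoner should be invariant to them. The template and helper below are illustrative stand-ins, not the paper's actual code.

```python
# Illustrative GSM-Symbolic-style template (a stand-in, not the
# paper's code): the reasoning stays fixed while surface details vary.
import random

TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on "
            "Tuesday. How many apples does {name} have in total?")

def instantiate(seed: int) -> tuple[str, int]:
    """Generate one surface variant of the problem and its answer."""
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Mina", "Omar"])
    x, y = rng.randint(2, 30), rng.randint(2, 30)
    return TEMPLATE.format(name=name, x=x, y=y), x + y

# Each variant is trivially different on the surface but identical in
# structure; accuracy that varies across such variants points to
# pattern matching rather than reasoning.
for seed in range(3):
    question, answer = instantiate(seed)
    print(question, "->", answer)
```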

SPONSOR MESSAGES:


Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich.

Go to https://tufalabs.ai/


TRANSCRIPT + RESEARCH:

https://www.dropbox.com/scl/fi/mlcjl9cd5p1kem4l0vqd3/IMAN.pdf?rlkey=dqfqb74zr81a5gqr8r6c8isg3&dl=0

TOC:

  1. Intelligence vs Achievement in AI Systems

    [00:00:00] 1.1 Intelligence vs Achievement Metrics in AI Systems

    [00:03:27] 1.2 AlphaZero and Abstract Understanding in Chess

    [00:10:10] 1.3 Language Models and Distribution Learning Limitations

    [00:14:47] 1.4 Research Methodology and Theoretical Frameworks

  2. Intelligence Measurement and Learning

    [00:24:24] 2.1 LLM Capabilities: Interpolation vs True Reasoning

    [00:29:00] 2.2 Intelligence Definition and Measurement Approaches

    [00:34:35] 2.3 Learning Capabilities and Agency in AI Systems

    [00:39:26] 2.4 Abstract Reasoning and Symbol Understanding

  3. LLM Performance and Evaluation

    [00:47:15] 3.1 Scaling Laws and Fundamental Limitations

    [00:54:33] 3.2 Connectionism vs Symbolism Debate in Neural Networks

    [00:58:09] 3.3 GSM-Symbolic: Testing Mathematical Reasoning in LLMs

    [01:08:38] 3.4 Benchmark Evaluation and Model Performance Assessment

REFS:

[00:01:00] AlphaZero chess AI system, Silver et al.

https://arxiv.org/abs/1712.01815

[00:07:10] Game Changer: AlphaZero's Groundbreaking Chess Strategies, Sadler & Regan

https://www.amazon.com/Game-Changer-AlphaZeros-Groundbreaking-Strategies/dp/9056918184

[00:11:35] Cross-entropy loss in language modeling, Voita

http://lena-voita.github.io/nlp_course/language_modeling.html

[00:17:20] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in LLMs, Mirzadeh et al.

https://arxiv.org/abs/2410.05229

[00:21:25] Connectionism and Cognitive Architecture: A Critical Analysis, Fodor & Pylyshyn

https://www.sciencedirect.com/science/article/pii/001002779090014B

[00:28:55] Brain-to-body mass ratio scaling laws, Sutskever

https://www.theverge.com/2024/12/13/24320811/what-ilya-sutskever-sees-openai-model-data-training

[00:29:40] On the Measure of Intelligence, Chollet

https://arxiv.org/abs/1911.01547

[00:33:30] On the definition of intelligence, Gignac et al.

https://www.sciencedirect.com/science/article/pii/S0160289624000266

[00:35:30] Defining intelligence, Wang

https://cis.temple.edu/~wangp/papers.html

[00:37:40] How We Learn: Why Brains Learn Better Than Any Machine... for Now, Dehaene

https://www.amazon.com/How-We-Learn-Brains-Machine/dp/0525559884

[00:39:35] Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, Hofstadter and Sander

https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475

[00:43:15] Chain-of-thought prompting, Wei et al.

https://arxiv.org/abs/2201.11903

[00:47:20] Test-time scaling laws in machine learning, Brown

https://podcasts.apple.com/mv/podcast/openais-noam-brown-ilge-akkaya-and-hunter-lightman-on/id1750736528?i=1000671532058

[00:47:50] Scaling Laws for Neural Language Models, Kaplan et al.

https://arxiv.org/abs/2001.08361

[00:55:15] Tensor product variable binding, Smolensky

https://www.sciencedirect.com/science/article/abs/pii/000437029090007M

[01:08:45] GSM-8K dataset, OpenAI

https://huggingface.co/datasets/openai/gsm8k