
Jonas Hübotter (ETH) - Test Time Inference

2024/12/1

Machine Learning Street Talk (MLST)

People
Jonas Hübotter
Host
Topics
PhD student Jonas Hübotter presents his breakthrough research on test-time computation and local learning. He shows how strategic test-time computation can improve a small model's performance by 30x, and introduces a new paradigm combining inductive and transductive learning. He explains how Bayesian linear regression can serve as a surrogate model for uncertainty estimation, allowing models to adapt efficiently to specific tasks without massive pre-training. He also proposes a hybrid deployment strategy combining local and cloud computation, suggesting that compute resources should in future be allocated by task complexity rather than fixed model size.

The host and Jonas Hübotter dig into the fundamentals of test-time computation, system architecture and intelligence, resource optimization and local learning, information retrieval and model interpretability, and distributed systems and deployment. The discussion covers comparisons of active learning and local learning approaches, the limitations of information retrieval and nearest neighbors, Bayesian uncertainty estimation and surrogate models, and the evolution from static to distributed learning systems.


Chapters
Smaller models can outperform much larger models by strategically using test-time computation. This involves automating data selection, letting the model determine the data it needs for accurate predictions (a minimal sketch follows the list below).
  • Outperforming larger models on the Pile benchmark by 30x through test-time computation.
  • Automating data selection to improve prediction accuracy.
  • Using the model's intuitions and learned instructions to guide compute spending at test time.
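To make "automating data selection" concrete, here is a minimal sketch in the spirit of the SIFT paper linked below. It is our illustration, not the authors' implementation: the surrogate is a plain Bayesian linear regression over fixed embeddings, the data are random toy vectors, and all names are ours. The loop greedily picks the candidates whose inclusion most reduces predictive variance at the test point.

```python
# Greedy uncertainty-driven data selection under a Bayesian linear
# regression surrogate (a sketch, not the SIFT reference code).
import numpy as np

def predictive_variance(X_sel, x_test, noise=0.1, prior=1.0):
    """Epistemic predictive variance at x_test given selected rows X_sel.

    Posterior weight precision: prior^-1 * I + noise^-2 * X^T X.
    (Adding the constant noise**2 would not change the argmin below.)
    """
    d = x_test.shape[0]
    cov = np.linalg.inv(np.eye(d) / prior + X_sel.T @ X_sel / noise**2)
    return float(x_test @ cov @ x_test)

def select_data(X_pool, x_test, k):
    """Greedily choose k pool indices that most shrink test-point variance."""
    selected = []
    for _ in range(k):
        best = min(
            (i for i in range(len(X_pool)) if i not in selected),
            key=lambda i: predictive_variance(X_pool[selected + [i]], x_test),
        )
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 16))     # candidate embeddings (toy data)
query = rng.normal(size=16)           # embedding of the test prompt
print(select_data(pool, query, k=5))  # indices of the 5 chosen examples
```

The selected examples would then be used for in-context learning or a brief fine-tuning step on the small model, which is where the test-time compute is spent.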

Show Notes

Jonas Hübotter, PhD student at ETH Zurich's Institute for Machine Learning, discusses his groundbreaking research on test-time computation and local learning. He demonstrates how smaller models can outperform larger ones by 30x through strategic test-time computation and introduces a novel paradigm combining inductive and transductive learning approaches.

Using Bayesian linear regression as a surrogate model for uncertainty estimation, Jonas explains how models can efficiently adapt to specific tasks without massive pre-training. He draws an analogy to Google Earth's variable resolution system to illustrate dynamic resource allocation based on task complexity.
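For reference, the closed-form posterior that such a Bayesian linear regression surrogate relies on (the notation is ours; the episode does not write out formulas). With selected data $X, y$ in embedding space, prior weight variance $\tau^2$, and observation noise $\sigma^2$:

$$
\Sigma = \big(\tau^{-2} I + \sigma^{-2} X^\top X\big)^{-1},
\qquad
\mu = \sigma^{-2}\, \Sigma X^\top y,
\qquad
y_\ast \mid x_\ast \sim \mathcal{N}\big(x_\ast^\top \mu,\; x_\ast^\top \Sigma\, x_\ast + \sigma^2\big).
$$

The epistemic term $x_\ast^\top \Sigma\, x_\ast$ is the uncertainty estimate: it shrinks as data relevant to the test point $x_\ast$ is added, which is what makes it usable as a signal for deciding what to learn from at test time.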

The conversation explores the future of AI architecture, envisioning systems that continuously learn and adapt beyond current monolithic models. Jonas concludes by proposing hybrid deployment strategies combining local and cloud computation, suggesting a future where compute resources are allocated based on task complexity rather than fixed model size.
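A minimal sketch of what such a hybrid deployment could look like, assuming the small local model can report its own uncertainty. This is our illustration of the idea, not a system described in the episode; `local_model`, `cloud_model`, and the threshold are hypothetical stand-ins.

```python
# Uncertainty-based local/cloud router (a sketch under assumptions,
# not a described system).

UNCERTAINTY_THRESHOLD = 0.3  # tuning knob: raise it to answer more locally

def route(query, local_model, cloud_model):
    """Answer locally when the small model is confident, else escalate."""
    answer, uncertainty = local_model(query)
    if uncertainty <= UNCERTAINTY_THRESHOLD:
        return answer, "local"
    return cloud_model(query), "cloud"

# Stubs so the sketch runs end to end: pretend short queries are easy.
def local_model(query):
    return f"local answer to {query!r}", 0.1 if len(query) < 20 else 0.9

def cloud_model(query):
    return f"cloud answer to {query!r}"

print(route("2 + 2?", local_model, cloud_model))  # stays local
print(route("Summarize transductive active learning", local_model, cloud_model))  # escalates
```

In practice the uncertainty signal could come from the same kind of surrogate model discussed above, so that compute follows task complexity rather than a fixed model size.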

This research represents a significant shift in machine learning, prioritizing intelligent resource allocation and adaptive learning over traditional scaling approaches.

SPONSOR MESSAGES:

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Interested in working on ARC or getting involved in their events? Go to https://tufalabs.ai/

Transcription, references and show notes PDF download:

https://www.dropbox.com/scl/fi/cxg80p388snwt6qbp4m52/JonasFinal.pdf?rlkey=glk9mhpzjvesanlc14rtpvk4r&st=6qwi8n3x&dl=0

Jonas Hübotter

https://jonhue.github.io/

https://scholar.google.com/citations?user=pxi_RkwAAAAJ

Transductive Active Learning: Theory and Applications (NeurIPS 2024)

https://arxiv.org/pdf/2402.15898

Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs (SIFT)

https://arxiv.org/pdf/2410.08020

TOC:

  1. Test-Time Computation Fundamentals

[00:00:00] Intro

[00:03:10] 1.1 Test-Time Computation and Model Performance Comparison

[00:05:52] 1.2 Retrieval Augmentation and Machine Teaching Strategies

[00:09:40] 1.3 In-Context Learning vs Fine-Tuning Trade-offs

  2. System Architecture and Intelligence

[00:15:58] 2.1 System Architecture and Intelligence Emergence

[00:23:22] 2.2 Active Inference and Constrained Agency in AI

[00:29:52] 2.3 Evolution of Local Learning Methods

[00:32:05] 2.4 Vapnik's Contributions to Transductive Learning

  3. Resource Optimization and Local Learning

[00:34:35] 3.1 Computational Resource Allocation in ML Models

[00:35:30] 3.2 Historical Context and Traditional ML Optimization

[00:37:55] 3.3 Variable Resolution Processing and Active Inference in ML

[00:43:01] 3.4 Local Learning and Base Model Capacity Trade-offs

[00:48:04] 3.5 Active Learning vs Local Learning Approaches

  4. Information Retrieval and Model Interpretability

[00:51:08] 4.1 Information Retrieval and Nearest Neighbor Limitations

[01:03:07] 4.2 Model Interpretability and Surrogate Models

[01:15:03] 4.3 Bayesian Uncertainty Estimation and Surrogate Models

  5. Distributed Systems and Deployment

[01:23:56] 5.1 Memory Architecture and Controller Systems

[01:28:14] 5.2 Evolution from Static to Distributed Learning Systems

[01:38:03] 5.3 Transductive Learning and Model Specialization

[01:41:58] 5.4 Hybrid Local-Cloud Deployment Strategies