
Smarter Models, Less Compute: The Fast-Paced Progress of Language Model Efficiency

2024/11/10

AI Horizon: Navigating the Future with NotebookLM

Shownotes

Language models are improving rapidly—not just through more compute, but with smarter algorithms. In this episode, we unpack Epoch AI's analysis of how algorithmic progress in language models is advancing at a rate that doubles compute efficiency every 5 to 14 months. We’ll explore the innovations driving this efficiency, from transformer architectures to new scaling laws, and discuss what this means for the future of AI research. How far can we push AI performance through algorithmic improvements alone? Tune in for a deep dive into the data shaping AI’s future trajectory.
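To make the headline figure concrete, here is a minimal sketch of the arithmetic behind a doubling rate: if compute efficiency doubles every d months, the effective compute multiplier after t years is 2^(12t/d). The function name and the 5-year horizon below are illustrative choices, not from the episode; the 5-to-14-month range is the one quoted above.

```python
def efficiency_multiplier(years: float, doubling_months: float) -> float:
    """Effective compute multiplier from algorithmic progress alone,
    assuming efficiency doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# Over 5 years, the quoted range of doubling times spans a wide outcome:
fast = efficiency_multiplier(5, 5)   # 2**12 = 4096x at the fast end
slow = efficiency_multiplier(5, 14)  # roughly 19.5x at the slow end
```

The spread between the two ends of the range illustrates why pinning down the doubling time matters so much for forecasts of AI progress.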

Download link: https://epochai.org/blog/algorithmic-progress-in-language-models