Language models are improving rapidly—not just through more compute, but with smarter algorithms. In this episode, we unpack Epoch AI's analysis of how algorithmic progress in language models is advancing at a rate that doubles compute efficiency every 5 to 14 months. We’ll explore the innovations driving this efficiency, from transformer architectures to new scaling laws, and discuss what this means for the future of AI research. How far can we push AI performance through algorithmic improvements alone? Tune in for a deep dive into the data shaping AI’s future trajectory.
Download Link: https://epochai.org/blog/algorithmic-progress-in-language-models