Tengyu felt the timing was right for commercialization: AI technologies had matured, and foundation models had made it much easier to apply AI to industry. The process had become far simpler, often requiring only prompt tuning and retrieval-augmented generation (RAG) on top of pre-trained models.
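As a rough illustration of that recipe, here is a minimal RAG sketch. The three components are hypothetical stubs, not any particular vendor's API; in practice they would be an embedding model, a vector index, and an LLM.

```python
# Minimal sketch of the "prompt tuning + RAG on top of a pre-trained model"
# recipe. embed(), vector_search(), and llm_complete() are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float

def embed(text: str) -> list[float]:
    raise NotImplementedError("call an embedding model here")

def vector_search(query_vec: list[float], k: int) -> list[Chunk]:
    raise NotImplementedError("query a vector index here")

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("call a pre-trained LLM here")

def answer(query: str, top_k: int = 5) -> str:
    chunks = vector_search(embed(query), k=top_k)   # retrieval step
    context = "\n\n".join(c.text for c in chunks)
    prompt = (                                      # lightweight prompting
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)                     # generation step
```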
Retrieval quality is the main bottleneck. If the retrieved documents are relevant, the large language model can synthesize good answers; poor retrieval quality significantly degrades the responses.
RAG is much cheaper than long-context transformers: a long-context model must store its intermediate computations (the key-value cache) for the entire context, which can be prohibitively expensive. RAG, being a hierarchical system, is more cost-efficient because it retrieves only the information relevant to each query.
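A back-of-envelope calculation makes the cost gap concrete. The dimensions below are an assumed 7B-class transformer in fp16, purely for illustration, not figures from the episode.

```python
# Rough KV-cache memory for a transformer: 2 tensors (key + value) per layer,
# each hidden-dim wide, per token of context. Illustrative dimensions only.
LAYERS, HIDDEN, BYTES_FP16 = 32, 4096, 2

def kv_cache_bytes(context_tokens: int) -> int:
    return 2 * LAYERS * HIDDEN * BYTES_FP16 * context_tokens

print(f"1M-token context: {kv_cache_bytes(1_000_000) / 2**30:.0f} GiB of KV cache")
print(f"RAG, 5 x 1k-token chunks: {kv_cache_bytes(5_000) / 2**30:.2f} GiB")
```

Under these assumptions, stuffing a million tokens into context needs hundreds of gibibytes of cache, while retrieving a handful of relevant chunks needs a few.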
There are two ways to improve RAG systems. One is to improve the neural-network components, such as embedding models and re-rankers, which requires heavy data-driven training. The other is to improve the software engineering around them: better data chunking, iterative retrieval, and incorporating metadata.
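A sketch of the software-engineering side: chunking documents into overlapping pieces with attached metadata for query-time filtering. Sizes and field names are illustrative choices, not from the episode.

```python
def chunk_document(doc_id: str, text: str, source: str,
                   size: int = 800, overlap: int = 100) -> list[dict]:
    # Overlapping character-based chunks; each carries metadata that can be
    # used for filtering or attribution at query time.
    chunks, step = [], size - overlap
    for i, start in enumerate(range(0, max(len(text) - overlap, 1), step)):
        chunks.append({
            "text": text[start:start + size],
            "metadata": {"doc_id": doc_id, "source": source, "chunk": i},
        })
    return chunks
```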
Domain-specific fine-tuning lets embedding models excel in particular domains by concentrating their limited parameter capacity on specific tasks. This can improve retrieval quality by 5% to 20%, depending on the domain and the amount of data available.
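One common recipe for such fine-tuning is a contrastive objective with in-batch negatives. The sketch below is a generic example of that technique, not Voyage AI's actual training setup.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    # Row i of doc_emb is the positive document for query i; every other row
    # in the batch acts as a negative (a standard in-batch-negatives setup).
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature            # (batch, batch) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)    # pull positives, push negatives
```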
He suggests starting with a prototype and immediately profiling both latency and retrieval quality. If retrieval quality is the bottleneck, companies should consider swapping components like embedding models or re-rankers to improve performance.
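A minimal harness for that profiling step might look like the following; `retrieve` and the evaluation-set format are assumptions for illustration.

```python
import time

def profile_retrieval(retrieve, eval_set, k: int = 5):
    # eval_set: list of (query, relevant_doc_id) pairs;
    # retrieve(query, k) should return a ranked list of doc ids.
    hits, latencies = 0, []
    for query, relevant_id in eval_set:
        start = time.perf_counter()
        results = retrieve(query, k)
        latencies.append(time.perf_counter() - start)
        hits += relevant_id in results
    recall = hits / len(eval_set)
    median_latency = sorted(latencies)[len(latencies) // 2]
    print(f"recall@{k}: {recall:.2%}, "
          f"median latency: {median_latency * 1000:.1f} ms")
    return recall, median_latency
```

Running this before and after swapping an embedding model or re-ranker shows directly whether the change buys retrieval quality, latency, or both.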
He predicts that RAG systems will become simpler, with fewer components and less need for complex software engineering. Embedding models will handle multi-modality and data formats more effectively, reducing the need for manual preprocessing.
He believes academia should focus on long-term innovations and research questions that industry may not prioritize due to short-term incentives. This includes working on efficiency improvements and challenging reasoning tasks that require innovative approaches.
After Tengyu Ma spent years at Stanford researching AI optimization, embedding models, and transformers, he took a break from academia to start Voyage AI, which allows enterprise customers to have the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week’s episode of No Priors to discuss why RAG systems are winning as the dominant architecture in enterprise and the evolution of foundational data that has allowed RAG to flourish. And while fine-tuning is still in the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, quickest, and most accurate system for data retrieval.
They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as AI grows as an industry.
Show Links:
Tengyu Ma Key Research Papers:
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Non-convex optimization for machine learning: design, analysis, and understanding
Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Larger language models do in-context learning differently
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
On the Optimization Landscape of Tensor Decompositions
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma
**Show Notes:**
(0:00) Introduction
(1:59) Key points of Tengyu’s research
(4:28) Academia compared to industry
(6:46) Voyage AI overview
(9:44) Enterprise RAG use cases
(15:23) LLM long-term memory and token limitations
(18:03) Agent chaining and data management
(22:01) Improving enterprise RAG
(25:44) Latency budgets
(27:48) Advice for building RAG systems
(31:06) Learnings as an AI founder
(32:55) The role of academia in AI