
Evaluating LLMs the Right Way: Lessons from Hex's Journey

2024/6/11

High Agency: The Podcast for AI Builders


I recently sat down with Bryan Bischof, AI lead at Hex, to dive deep into how they evaluate LLMs to ship reliable AI agents. Hex has deployed AI assistants that can automatically generate SQL queries, transform data, and create visualizations based on natural language questions. While many teams struggle to get value from LLMs in production, Hex has cracked the code.

In this episode, Bryan shares the hard-won lessons they've learned along the way. We discuss why most teams are approaching LLM evaluation wrong and how Hex's unique framework enabled them to ship with confidence. 

Bryan breaks down the key ingredients of Hex's success:

- Choosing the right tools to constrain agent behavior
- Using a reactive DAG to allow humans to course-correct agent plans
- Building granular, user-centric evaluators instead of chasing one "god metric" (see the sketch after this list)
- Gating releases on the metrics that matter, not just gaming a score
- Constantly scrutinizing model inputs and outputs to uncover insights
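
To make the evaluator and release-gating ideas concrete, here is a minimal sketch (not Hex's actual code) of what granular, user-centric evaluators plus a per-metric release gate can look like. Every name and threshold below (EvalCase, AgentOutput, the placeholder checks) is an illustrative assumption.

```python
# Illustrative sketch: granular evaluators, each scoring one user-visible
# behavior, plus a release gate that requires every metric to clear its own
# bar instead of optimizing a single aggregate "god metric".
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str        # the user's natural-language question
    expected_sql: str    # reference SQL for this case (hypothetical fixture)
    expected_chart: str  # e.g. "bar", "line"

@dataclass
class AgentOutput:
    sql: str
    chart: str

# Granular evaluators: each returns 1.0 or 0.0 for one narrow behavior.
def sql_is_wellformed(case: EvalCase, out: AgentOutput) -> float:
    # Placeholder check; in practice you would execute out.sql against a test warehouse.
    return 1.0 if out.sql.strip().lower().startswith("select") else 0.0

def sql_matches_reference(case: EvalCase, out: AgentOutput) -> float:
    # Placeholder check; in practice you would compare result sets, not strings.
    return 1.0 if out.sql.strip().lower() == case.expected_sql.strip().lower() else 0.0

def chart_type_correct(case: EvalCase, out: AgentOutput) -> float:
    return 1.0 if out.chart == case.expected_chart else 0.0

EVALUATORS: dict[str, Callable[[EvalCase, AgentOutput], float]] = {
    "sql_is_wellformed": sql_is_wellformed,
    "sql_matches_reference": sql_matches_reference,
    "chart_type_correct": chart_type_correct,
}

# Release gate: every metric has its own threshold (values here are made up).
THRESHOLDS = {
    "sql_is_wellformed": 0.98,
    "sql_matches_reference": 0.90,
    "chart_type_correct": 0.95,
}

def gate_release(cases: list[EvalCase], outputs: list[AgentOutput]) -> dict[str, float]:
    scores = {
        name: sum(fn(c, o) for c, o in zip(cases, outputs)) / len(cases)
        for name, fn in EVALUATORS.items()
    }
    failing = {name: s for name, s in scores.items() if s < THRESHOLDS[name]}
    if failing:
        raise RuntimeError(f"Release blocked; metrics below threshold: {failing}")
    return scores
```

The point mirrors the episode: a release ships only when each user-facing behavior clears its own bar, and a failing metric points directly at which inputs and outputs to inspect.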

For show notes and a transcript, go to: https://hubs.ly/Q02BdzVP0

Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more, go to https://hubs.ly/Q02yV72D0