
Making AI Reliable is the Greatest Challenge of the 2020s // Alon Bochman // #312

2025/5/6

MLOps.community

AI Deep Dive
Alon Bochman: I believe building reliable AI systems is the greatest challenge of the 2020s. Rather than blindly trusting authorities or popular opinion, we should take a data-driven approach: run experiments to validate different models, prompts, and configurations, and find the solution best suited to our own task. Evaluation of an AI system should start with the simplest thing possible and add complexity gradually, testing and improving continuously. Users will use AI systems in unexpected ways, so we have to keep testing and refining the system to handle those cases. How many evals you need depends on the system's complexity and importance, and on user feedback. Evals can become redundant, so we should focus on critical paths and edge cases. Early in a project, LLM-generated eval cases allow fast iteration; later on, human evaluation should play a larger role to cover edge cases. To use subject matter experts' knowledge effectively, we need a learning feedback loop in which the LLM judge and the human evaluators learn from and improve each other. An LLM judge's performance can be improved through fine-tuning or prompt updates. When there are multiple subject matter experts, techniques such as cluster analysis can normalize their feedback and surface the areas where experts disagree. AI answers are not always black and white; judgments have to be made case by case. AI can help us capture and apply domain experts' knowledge more efficiently, raising productivity and creating value; through LLMs, that knowledge and experience can be applied at scale, improving both efficiency and user experience. When building AI copilots, you must make full use of subject matter experts' knowledge, or success is unlikely. To speed up the learning process, you can first train an LLM judge on generic datasets or cases, then bring in domain expertise. An LLM judge's evaluation criteria evolve as the system evolves and need continual refinement. Visualizing AI outputs raises user engagement and evaluation efficiency, and visualization methods improve over time, from simple metrics to richer charts and tools. Demetrios Brinkmann: (Demetrios Brinkmann mainly guided the conversation and asked questions, without advancing specific core arguments, so no summary is given here.)
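One idea from the summary above — normalizing feedback from several subject matter experts and identifying where they disagree — can be sketched minimally. This is not code from the episode; the expert names, items, and labels below are invented for illustration, and a simple majority/unanimity check stands in for the cluster analysis Alon mentions:

```python
# Hypothetical sketch: combine verdicts from multiple subject matter
# experts into a consensus label per item, and flag items where the
# experts split (candidates for deeper review or clustering).
from collections import Counter

# Each expert labels the same set of model outputs (item id -> verdict).
expert_labels = {
    "expert_a": {"q1": "good", "q2": "bad", "q3": "good"},
    "expert_b": {"q1": "good", "q2": "good", "q3": "good"},
    "expert_c": {"q1": "good", "q2": "bad", "q3": "bad"},
}

def consensus_and_disputes(labels_by_expert):
    """Return the majority label per item and the set of disputed items."""
    items = next(iter(labels_by_expert.values())).keys()
    consensus, disputed = {}, set()
    for item in items:
        votes = Counter(e[item] for e in labels_by_expert.values())
        label, count = votes.most_common(1)[0]
        consensus[item] = label
        if count < len(labels_by_expert):  # not unanimous -> needs review
            disputed.add(item)
    return consensus, disputed

consensus, disputed = consensus_and_disputes(expert_labels)
print(consensus)  # majority verdict per item
print(disputed)   # items where experts disagree
```

The disputed set is where expert time is best spent, matching the point that disagreement areas between experts deserve explicit attention rather than averaging away.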

Shownotes

Making AI Reliable is the Greatest Challenge of the 2020s // MLOps Podcast #312 with Alon Bochman, CEO of RagMetrics.

Join the Community: https://go.mlops.community/YTJoinIn

Get the newsletter: https://go.mlops.community/YTNewsletter

Huge shout-out to @RagMetrics for sponsoring this episode!

// Abstract

Demetrios talks with Alon Bochman, CEO of RagMetrics, about testing in machine learning systems. Alon stresses the value of empirical evaluation over influencer advice, highlights the need for evolving benchmarks, and shares how to effectively involve subject matter experts without technical barriers. They also discuss using LLMs as judges and measuring their alignment with human evaluators.
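The last point of the abstract — measuring an LLM judge's alignment with human evaluators — can be made concrete with a small sketch. This is not from the episode; the labels and data are invented, and plain agreement plus Cohen's kappa stand in for whatever alignment metric RagMetrics actually uses:

```python
# Hypothetical sketch: quantify how well an LLM judge's verdicts agree
# with human evaluators on the same set of outputs.
from collections import Counter

def agreement_rate(llm_labels, human_labels):
    """Fraction of items where the LLM judge matches the human verdict."""
    matches = sum(l == h for l, h in zip(llm_labels, human_labels))
    return matches / len(llm_labels)

def cohens_kappa(llm_labels, human_labels):
    """Chance-corrected agreement between the two annotators."""
    n = len(llm_labels)
    po = agreement_rate(llm_labels, human_labels)  # observed agreement
    llm_freq = Counter(llm_labels)
    hum_freq = Counter(human_labels)
    # Expected agreement if both labeled independently at their own rates.
    pe = sum(llm_freq[c] * hum_freq[c] for c in llm_freq) / (n * n)
    return (po - pe) / (1 - pe)

llm = ["pass", "pass", "fail", "pass", "fail", "pass"]
human = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(agreement_rate(llm, human), 3))  # -> 0.833
print(round(cohens_kappa(llm, human), 3))    # -> 0.667
```

Kappa corrects for the agreement two annotators would reach by chance, which matters when verdicts are imbalanced (e.g., most outputs pass); tracking it over time shows whether prompt updates or fine-tuning are actually improving the judge.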

// Bio

Alon is a product leader with a fintech and adtech background, ex-Google, ex-Microsoft. He co-founded and sold a software company to Thomson Reuters for $30M and grew an AI consulting practice from zero to over $1B in four years. A 20-year AI veteran, he has won three medals in model-building competitions; in a prior life, he was a top-performing hedge fund portfolio manager. Alon lives near NYC with his wife and two daughters. He is an avid reader, runner, and tennis player, an amateur piano player, and a retired chess player.

// Related Links

Website: ragmetrics.ai


Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore

Join our Slack community: https://go.mlops.community/slack

Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)

Sign up for the next meetup: https://go.mlops.community/register

MLOps Swag/Merch: https://shop.mlops.community/

Connect with Demetrios on LinkedIn: /dpbrinkm

Connect with Alon on LinkedIn: /alonbochman

Timestamps:

[00:00] Alon's preferred coffee
[00:15] Takeaways
[00:47] Testing Multi-Agent Systems
[05:55] Tracking ML Experiments
[12:28] AI Eval Redundancy Balance
[17:07] Handcrafted vs LLM Eval Tradeoffs
[28:15] LLM Judging Mechanisms
[36:03] AI and Human Judgment
[38:55] Document Evaluation with LLM
[42:08] Subject Matter Expertise in Co-Pilots
[46:33] LLMs as Judges
[51:40] LLM Evaluation Best Practices
[55:26] LM Judge Evaluation Criteria
[58:15] Visualizing AI Outputs
[1:01:16] Wrap up