
Zeta-Alpha-E5-Mistral: Finetuning LLMs for Retrieval (with Arthur Câmara)

2024/11/14

Neural Search Talks — Zeta Alpha

Shownotes

In the 30th episode of Neural Search Talks, we have our very own Arthur Câmara, Senior Research Engineer at Zeta Alpha, presenting a 20-minute guide on how we fine-tune Large Language Models for effective text retrieval. Arthur discusses the common issues with embedding models in a general-purpose RAG pipeline, how to tackle the lack of retrieval-oriented data for fine-tuning with InPars, and how we adapted E5-Mistral to rank in the top 10 on the BEIR benchmark.
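To make the InPars idea concrete, below is a minimal sketch of InPars-style synthetic query generation: an instruction-following LLM is given a few (document, query) examples and asked to produce a relevant query for a new document, yielding (query, document) pairs for fine-tuning. The model name, prompt wording, and example documents are illustrative assumptions, not the exact setup used for Zeta-Alpha-E5-Mistral.

```python
# Sketch of InPars-style synthetic query generation (illustrative, not the
# episode's exact recipe). Assumes an instruction-tuned LLM served via the
# Hugging Face `transformers` text-generation pipeline.
from transformers import pipeline

# Hypothetical choice of generator model.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

FEW_SHOT_PROMPT = """Example 1:
Document: The Eiffel Tower was completed in 1889 for the World's Fair in Paris.
Relevant query: when was the eiffel tower built

Example 2:
Document: Photosynthesis converts light energy into chemical energy in plants.
Relevant query: how do plants make energy from sunlight

Example 3:
Document: {document}
Relevant query:"""

def generate_query(document: str) -> str:
    """Generate one synthetic query for a document, InPars-style."""
    prompt = FEW_SHOT_PROMPT.format(document=document)
    output = generator(prompt, max_new_tokens=32, do_sample=False)[0]["generated_text"]
    # The pipeline returns the prompt plus the continuation; keep only the
    # first line of the newly generated text as the query.
    return output[len(prompt):].strip().splitlines()[0]

# Each (synthetic query, document) pair can serve as a positive example for
# contrastive fine-tuning of an embedding model such as E5-Mistral.
docs = ["Zeta Alpha builds neural search tools for R&D teams."]
pairs = [(generate_query(doc), doc) for doc in docs]
print(pairs)
```

In practice, pairs like these are typically filtered (e.g. by reranking the generated queries against the corpus) before being used as training data, so that only high-quality positives reach the fine-tuning stage.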

Sources

InPars

Zeta-Alpha-E5-Mistral

NanoBEIR