Domino: Communication-Free LLM Training Engine // Guanhua Wang // #278
MLOps.community · 2024/12/17 · 49:47
Chapters
What's Guanhua's Preferred Coffee?
Key Takeaways from the Episode
Please Like, Share, and Subscribe!
Explaining the Phi Model
Challenges in Optimizing Small Language Models
Overview and Benefits of DeepSpeed
Crazy Unimplemented AI Ideas?
Post-Training vs. Quantization-Aware Training
Why Quantization Over Distillation?
Using LoRAs in LLMs
Finding the LLM Scaling Sweet Spot
Advanced Quantization Techniques
Introducing Domino: Communication-Free LLM Training Engine
Training Performance Benchmarks with Domino
Strategies for Breaking Data Dependencies
Wrap Up and Final Thoughts
Shownotes
Transcript
No transcript has been made for this episode yet.