Detecting Harmful Content at Scale // Matar Haller // #246
51:27
2024/7/9
MLOps.community
Chapters
What's Matar's Preferred Coffee?
Key Takeaways from the Episode
The Talk That Stood Out
What Are the Biggest Challenges in Detecting Online Hate Speech?
How Does the Harmful Media API Work?
AI Models for Content Moderation: What's the Approach?
Optimizing Speed and Accuracy in Content Moderation
How Do Cultural References Impact AI Training?
Functional Tests in AI Models: What Are They?
Continuous Adaptation of AI Models: How Is It Achieved?
What Are the Concerns Around AI Detection?
Fine-Tuned vs Off-the-Shelf Models: Which Is Better?
Monitoring Transformer Model Hallucinations: Why Is It Important?
How Does the Auditing Process Ensure Accuracy?
Testing Strategies for Machine Learning Models
Deploying Hate Speech Models: What Are the Challenges?
Improving Production Code Quality in ML Models
Finding the Right Balance in Content Moderation
How Does the Model Ensure Cultural Sensitivity?
Wrap Up
Shownotes
Transcript
No transcript has been made for this episode yet; you may request one for free.