
2025.06.26 | High-Quality Multimodal Models; 4-Bit Quantization Boosts Performance

2025/6/26

HuggingFace 每日AI论文速递

The 14 papers in this episode are as follows:

[00:23] 🖼 ShareGPT-4o-Image: Aligning Multimodal Models with GPT-4o-Level Image Generation

[01:05] 🛡 Outlier-Safe Pre-Training for Robust 4-Bit Quantization of Large Language Models

[01:49] 🎨 Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models

[02:30] 🧠 OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling

[03:13] 🤖 DualTHOR: A Dual-Arm Humanoid Simulation Platform for Contingency-Aware Planning

[03:49] 🦾 RoboTwin 2.0: A Scalable Data Generator and Benchmark with Strong Domain Randomization for Robust Bimanual Robotic Manipulation

[04:33] 🧪 Use Property-Based Testing to Bridge LLM Code Generation and Validation

[05:18] 🌍 When Life Gives You Samples: The Benefits of Scaling up Inference Compute for Multilingual LLMs

[05:56] 🖼 HiWave: Training-Free High-Resolution Image Generation via Wavelet-Based Diffusion Sampling

[06:39] 🤖 ReCode: Updating Code API Knowledge with Reinforcement Learning

[07:15] 💬 Is There a Case for Conversation Optimized Tokenizers in Large Language Models?

[07:59] 🔬 Biomed-Enriched: A Biomedical Dataset Enriched with LLMs for Pretraining and Extracting Rare and Hidden Content

[08:47] 🤖 MATE: LLM-Powered Multi-Agent Translation Environment for Accessibility Applications

[09:28] 📉 The Debugging Decay Index: Rethinking Debugging Strategies for Code LLMs

【Follow Us】

You can also find us on the following platforms for more content beyond the podcast:

Xiaohongshu (RED): AI速递