00:01:19 AI's "lopsided student" problem: does mastering math and science really take you anywhere?
00:05:08 Can AI "review its own games"? How to get machines to reflect like experts
00:09:19 The "modeling clay" of language: how do we "mold" smarter AI?
00:13:57 A new game for AI scientists: don't guess the answer, hunt for "surprises"
00:17:42 The secret of AI "long-form reading": how to make machines think like a helix
The five papers covered in this episode:
[LG] Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
[CMU & University of Washington & M-A-P]
https://arxiv.org/abs/2507.00432
[LG] ASTRO: Teaching Language Models to Reason by Reflecting and Backtracking In-Context
[AI at Meta]
https://arxiv.org/abs/2507.00417
[LG] Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows
[Apple]
https://arxiv.org/abs/2507.00425
[LG] Open-ended Scientific Discovery via Bayesian Surprise
[University of Massachusetts Amherst & Allen Institute for AI]
https://arxiv.org/abs/2507.00310
[LG] HelixPipe: Efficient Distributed Training of Long Sequence Transformers with Attention Parallel Pipeline Parallelism
[National University of Singapore]