MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts

2024/1/11
Papers Read on AI

Shownotes

State Space Models (SSMs) have become serious contenders in the field of sequential modeling, challenging the dominance of Transformers. At the same time, Mixture of Experts (MoE) has significantly improved Transformer-based LLMs, including recent state-of-the-art open-source models. We propose that to unlock the potential of SSMs for scaling, they should be combined with MoE. We showcase this on Mamba, a recent SSM-based model that achieves remarkable, Transformer-like performance. Our model, MoE-Mamba, outperforms both Mamba and Transformer-MoE. In particular, MoE-Mamba reaches the same performance as Mamba in 2.2x fewer training steps while preserving the inference performance gains of Mamba over the Transformer.
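To make the architecture concrete, below is a minimal, hypothetical sketch of the idea the abstract describes: interleaving Mamba-style sequence-mixing layers with a sparse Switch-style MoE feed-forward layer, each wrapped in a residual connection. The class names (MambaBlockStub, SwitchMoE, MoEMambaBlock) and all hyperparameters are illustrative assumptions, not the paper's implementation; in particular, MambaBlockStub is only a gated-linear placeholder for a real selective SSM block.

```python
# Hypothetical sketch (not the authors' code): alternate a sequence-mixing
# block with a top-1 (Switch-style) MoE feed-forward layer, as in MoE-Mamba.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MambaBlockStub(nn.Module):
    """Placeholder for a Mamba (selective SSM) block; here just a gated linear mix."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj_in = nn.Linear(d_model, 2 * d_model)
        self.proj_out = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        h, gate = self.proj_in(x).chunk(2, dim=-1)
        return self.proj_out(h * torch.sigmoid(gate))


class SwitchMoE(nn.Module):
    """Top-1 (Switch-style) mixture-of-experts feed-forward layer."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (batch, seq, d_model)
        b, s, d = x.shape
        flat = x.reshape(-1, d)
        probs = F.softmax(self.router(flat), dim=-1)
        top_p, top_idx = probs.max(dim=-1)     # route each token to one expert
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(flat[mask])
        return out.reshape(b, s, d)


class MoEMambaBlock(nn.Module):
    """One block: Mamba-style layer followed by a sparse MoE layer, each residual."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.mamba = MambaBlockStub(d_model)
        self.moe = SwitchMoE(d_model, d_ff, num_experts)

    def forward(self, x):
        x = x + self.mamba(self.norm1(x))      # sequence mixing (SSM part)
        x = x + self.moe(self.norm2(x))        # conditional feed-forward (MoE part)
        return x


if __name__ == "__main__":
    block = MoEMambaBlock(d_model=64, d_ff=256, num_experts=8)
    tokens = torch.randn(2, 16, 64)
    print(block(tokens).shape)                 # torch.Size([2, 16, 64])
```

In a real implementation the stub would be replaced by an actual selective state space block (e.g. the Mamba module from the mamba_ssm package), and the router would typically add a load-balancing loss so that tokens are spread across experts.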

2024: Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, Sebastian Jaszczur

https://arxiv.org/pdf/2401.04081.pdf