DiscoverDaily Paper Cast

ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing

Update: 2024-12-26

Description

🤗 Upvotes: 8 | cs.LG



Authors:

Ziteng Wang, Jianfei Chen, Jun Zhu



Title:

ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing



arXiv:

http://arxiv.org/abs/2412.14711v1



Abstract:

Sparsely activated Mixture-of-Experts (MoE) models are widely adopted to scale up model capacity without increasing the computation budget. However, vanilla TopK routers are trained in a discontinuous, non-differentiable way, limiting their performance and scalability. To address this issue, we propose ReMoE, a fully differentiable MoE architecture that offers a simple yet effective drop-in replacement for the conventional TopK+Softmax routing, utilizing ReLU as the router instead. We further propose methods to regulate the router's sparsity while balancing the load among experts. ReMoE's continuous nature enables efficient dynamic allocation of computation across tokens and layers, while also exhibiting domain specialization. Our experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity. Furthermore, ReMoE exhibits superior scalability with respect to the number of experts, surpassing traditional MoE architectures. The implementation based on Megatron-LM is available at https://github.com/thu-ml/ReMoE.
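
To make the routing difference concrete, below is a minimal PyTorch sketch contrasting the conventional TopK+Softmax router with a ReLU router in the spirit of ReMoE. This is not the authors' Megatron-LM implementation: the class names are illustrative, and the plain L1 penalty only stands in for the paper's adaptive sparsity regularization and load-balancing terms.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKRouter(nn.Module):
    """Conventional TopK+Softmax router: keep the K largest logits per token.
    The hard TopK selection is discontinuous, which is the non-differentiability
    the abstract refers to."""

    def __init__(self, d_model, num_experts, k):
        super().__init__()
        self.proj = nn.Linear(d_model, num_experts, bias=False)
        self.k = k

    def forward(self, x):                          # x: [num_tokens, d_model]
        logits = self.proj(x)                      # [num_tokens, num_experts]
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        gates = torch.zeros_like(logits)
        gates.scatter_(-1, topk_idx, F.softmax(topk_vals, dim=-1))
        return gates                               # zeros for unselected experts


class ReLURouter(nn.Module):
    """ReLU router in the spirit of ReMoE: gates = ReLU(W x).
    A zero gate means the expert is skipped for that token, so sparsity is
    data-dependent rather than fixed to K. The paper regulates the sparsity
    level adaptively and balances expert load; only a plain L1 penalty is
    sketched here as an assumption."""

    def __init__(self, d_model, num_experts):
        super().__init__()
        self.proj = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x):
        gates = F.relu(self.proj(x))               # continuous; gradients flow through the router
        l1_penalty = gates.sum(dim=-1).mean()      # add coeff * l1_penalty to the training loss
        return gates, l1_penalty


if __name__ == "__main__":
    x = torch.randn(16, 64)                        # 16 tokens, hidden size 64
    router = ReLURouter(d_model=64, num_experts=8)
    gates, l1 = router(x)
    # The number of active experts can differ per token (and per layer),
    # which is the dynamic compute allocation mentioned in the abstract.
    active = (gates > 0).float().sum(dim=-1).mean().item()
    print(f"avg active experts per token: {active:.2f}, L1 penalty: {l1.item():.3f}")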

Jingwen Liang, Gengyu Wang