Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning

Update: 2024-12-20
Description

🤗 Upvotes: 10 | cs.LG, cs.RO



Authors:

Moritz Reuss, Jyothish Pari, Pulkit Agrawal, Rudolf Lioutikov



Title:

Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning



Arxiv:

http://arxiv.org/abs/2412.12953v1



Abstract:

Diffusion Policies have become widely used in Imitation Learning, offering several appealing properties, such as generating multimodal and discontinuous behavior. As models grow larger to capture more complex capabilities, their computational demands increase, as shown by recent scaling laws. Therefore, continuing with the current architectures will present a computational roadblock. To address this gap, we propose Mixture-of-Denoising Experts (MoDE) as a novel policy for Imitation Learning. MoDE surpasses current state-of-the-art Transformer-based Diffusion Policies while enabling parameter-efficient scaling through sparse experts and noise-conditioned routing, reducing active parameters by 40% and inference costs by 90% via expert caching. Our architecture combines this efficient scaling with a noise-conditioned self-attention mechanism, enabling more effective denoising across different noise levels. MoDE achieves state-of-the-art performance on 134 tasks in four established imitation learning benchmarks (CALVIN and LIBERO). Notably, by pretraining MoDE on diverse robotics data, we achieve 4.01 on CALVIN ABC and 0.95 on LIBERO-90. It surpasses both CNN-based and Transformer Diffusion Policies by an average of 57% across four benchmarks, while using 90% fewer FLOPs and fewer active parameters than default Diffusion Transformer architectures. Furthermore, we conduct comprehensive ablations on MoDE's components, providing insights for designing efficient and scalable Transformer architectures for Diffusion Policies. Code and demonstrations are available at https://mbreuss.github.io/MoDE_Diffusion_Policy/.
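
The core efficiency idea in the abstract is that expert routing is conditioned on the diffusion noise level rather than on token content, which is what makes the expert path cacheable across denoising steps. Below is a minimal PyTorch sketch of such a noise-conditioned MoE layer; the class name, layer sizes, and top-k choice are illustrative assumptions, not the authors' released implementation (see the project page above for the actual code).

# Minimal sketch (assumed names and sizes): a feed-forward MoE layer whose
# router is conditioned only on the diffusion noise embedding, so the top-k
# expert selection per denoising step can be precomputed and cached.
import torch
import torch.nn as nn

class NoiseConditionedMoE(nn.Module):
    def __init__(self, dim=256, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router's input is the noise-level embedding, not the token stream.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x, noise_emb):
        # x: (batch, tokens, dim); noise_emb: (batch, dim)
        weights, idx = self.router(noise_emb).softmax(-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            # Routing depends only on the noise level: for a fixed denoising
            # schedule, these expert choices can be cached ahead of time and
            # the unselected experts skipped entirely at inference.
            for w, e in zip(weights[b], idx[b]):
                out[b] = out[b] + w * self.experts[int(e)](x[b])
        return out

A forward pass would look like out = layer(tokens, noise_embedding), where noise_embedding encodes the current noise level in the denoising schedule. Since the router never sees the tokens, the top-k selection is fixed for each noise level, so a policy running a fixed schedule can precompute which experts fire at each step and skip the rest; this is the mechanism behind the abstract's reported inference-cost reduction via expert caching.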

Jingwen Liang, Gengyu Wang