NeurIPS 2025: MoBA: Mixture of Block Attention for Long-Context LLMs

Update: 2025-11-29

Description

This paper introduces Mixture of Block Attention (MoBA) to address the prohibitive quadratic computational overhead inherent in traditional attention mechanisms when scaling large language models (LLMs) for long contexts. MoBA is a novel architecture that strategically applies the established Mixture of Experts (MoE) paradigm directly to the attention mechanism itself. Instead of attending to the entire sequence, MoBA partitions the context into discrete blocks and utilizes a dynamic gating network to selectively route queries to only the most relevant blocks of keys and values. This block-sparse approach drastically increases computational efficiency, achieving sub-quadratic complexity and demonstrating speedups of up to 16 times when processing sequences up to 10 million tokens. Crucially, the research demonstrates that MoBA maintains performance comparable to full attention across scaling laws and real-world benchmarks. Furthermore, the architecture is highly flexible, allowing for seamless transitions between sparse MoBA and full attention layers during both training and inference.


Source: https://openreview.net/pdf?id=RlqYCpTu1P
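
Example (illustrative): the block-routing idea described above can be sketched in a few lines. Below is a minimal single-head PyTorch sketch of MoBA-style block-sparse attention, assuming mean-pooled block keys as the gating representation and omitting causal masking and any always-attended current block; block_size and top_k are illustrative choices, not the paper's configuration.

import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block_size=4, top_k=2):
    # q, k, v: (seq_len, d) tensors; seq_len assumed divisible by block_size.
    # Each query attends only to the top_k key/value blocks chosen by a
    # simple gating score (query dotted with the mean-pooled block keys).
    seq_len, d = q.shape
    n_blocks = seq_len // block_size

    # Represent each block by the mean of its keys, for gating only.
    k_blocks = k.view(n_blocks, block_size, d)             # (n_blocks, block, d)
    v_blocks = v.view(n_blocks, block_size, d)
    block_repr = k_blocks.mean(dim=1)                       # (n_blocks, d)

    # Gating: score every block per query, keep the top_k blocks.
    gate_scores = q @ block_repr.T                          # (seq_len, n_blocks)
    top_blocks = gate_scores.topk(top_k, dim=-1).indices    # (seq_len, top_k)

    out = torch.zeros_like(q)
    scale = d ** -0.5
    for i in range(seq_len):
        idx = top_blocks[i]                                 # selected block ids
        sel_k = k_blocks[idx].reshape(-1, d)                # (top_k*block, d)
        sel_v = v_blocks[idx].reshape(-1, d)
        attn = F.softmax((q[i] @ sel_k.T) * scale, dim=-1)  # attend within selection
        out[i] = attn @ sel_v
    return out

# Toy usage: 16 tokens, 8-dim head.
torch.manual_seed(0)
q, k, v = (torch.randn(16, 8) for _ in range(3))
print(moba_attention(q, k, v).shape)  # torch.Size([16, 8])

The toy run prints torch.Size([16, 8]): each query produces an output while attending to only top_k of the n_blocks key/value blocks, which is where the sub-quadratic cost comes from.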

