NeurIPS 2025: FlashBias: Fast Computation of Attention with Bias


Update: 2025-11-29

Description

The source introduces FlashBias, an algorithm that significantly accelerates the Transformer attention mechanism when an additive bias term is incorporated. Existing fast-attention methods, such as those optimized for attention masks, cannot handle bias because bias terms are generally dense and continuous rather than sparse. FlashBias overcomes this limitation by exploiting the observation that attention bias matrices exhibit an inherent low-rank structure. The technique applies several decomposition methods, including exact, SVD, and neural decomposition, to represent the dense bias matrix in a much smaller, compressed form. Experiments demonstrate substantial time and memory savings when applying FlashBias across demanding models such as large language models, Vision Transformers, and AlphaFold 3. The approach provides crucial efficiency gains for both training and inference, especially for tasks involving dynamic or complex prior knowledge.
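To make the low-rank idea concrete, here is a minimal NumPy sketch. It assumes the bias matrix B factors as U W^T (here recovered via truncated SVD, one of the decomposition routes the paper mentions); the rank-r factors are then appended to Q and K so that a bias-free attention kernel computes the biased scores, since [Q/sqrt(d), U][K, W]^T = QK^T/sqrt(d) + UW^T. Function names are illustrative, and this is plain dense NumPy, not the fused FlashAttention-style kernel from the paper.

```python
import numpy as np

def attention_with_bias(Q, K, V, B):
    """Reference path: softmax(Q K^T / sqrt(d) + B) V."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d) + B
    S = S - S.max(axis=-1, keepdims=True)   # numerically stable softmax
    P = np.exp(S)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def flashbias_attention(Q, K, V, B, rank):
    """Low-rank path: factor B ~= U W^T with truncated SVD, then fold the
    factors into extended Q and K so no explicit bias add is needed."""
    d = Q.shape[-1]
    U_, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = U_[:, :rank] * s[:rank]              # (n, r), singular values folded in
    W = Vt[:rank, :].T                       # (m, r)
    Qe = np.concatenate([Q / np.sqrt(d), U], axis=-1)
    Ke = np.concatenate([K, W], axis=-1)
    S = Qe @ Ke.T                            # = Q K^T / sqrt(d) + U W^T
    S = S - S.max(axis=-1, keepdims=True)
    P = np.exp(S)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

rng = np.random.default_rng(0)
n, d, r = 8, 4, 2
Q, K, V = rng.normal(size=(3, n, d))
# Construct an exactly rank-r bias so the truncated SVD recovers it exactly.
B = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
out_ref = attention_with_bias(Q, K, V, B)
out_fast = flashbias_attention(Q, K, V, B, rank=r)
print(np.allclose(out_ref, out_fast))  # True
```

The payoff in a real kernel is that the n x m bias matrix never needs to be materialized in high-bandwidth memory: only the n x r and m x r factors are streamed through the tiled attention loop.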


Source:

https://openreview.net/pdf?id=7L4NvUtZY3

mcgrof