“Weight-sparse transformers have interpretable circuits” by leogao


Description

TL;DR: We develop a novel method for finding interpretable circuits in Transformers by training them to have sparse weights. This yields models containing very high-quality circuits: our circuits are global rather than datapoint-dependent; we explain circuits down to very granular objects, such as individual neurons and attention channels, rather than entire MLP layers, attention heads, or groups of nodes; and the circuits are often simple enough to draw in their entirety on a whiteboard. The downside is that our method produces de novo sparse language models, which are extremely expensive to train and deploy, making it unlikely that we will ever use this method to directly pretrain frontier models. We share preliminary results on using sparse models to explain an existing dense model, but our main theory of impact is to eventually scale our method to train a fully interpretable moderate-sized model. If we could fully interpret even (say) a GPT-3-level intelligence, it could aid dramatically in developing a theory of cognition in general.

[Blog] [Paper] [Code]
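The core idea above is training transformers so that most weights are exactly zero. As a hedged illustration only (the paper's actual training procedure may differ in detail), here is a minimal PyTorch sketch of one standard way to enforce weight sparsity during training: after each optimizer step, keep only the top-k largest-magnitude entries of each weight matrix and zero the rest. The function name, budget `k`, and toy model here are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn

def enforce_topk_sparsity(weight: torch.Tensor, k: int) -> None:
    """Zero all but the k largest-magnitude entries of `weight`, in place.

    Hypothetical sketch of weight-sparse training: re-applying this after
    every optimizer step keeps each matrix at a fixed number of nonzero
    weights. Details of the paper's method may differ.
    """
    with torch.no_grad():
        if k >= weight.numel():
            return  # already within the sparsity budget
        # The k-th largest magnitude acts as the pruning threshold.
        threshold = torch.topk(weight.abs().flatten(), k).values.min()
        mask = (weight.abs() >= threshold).to(weight.dtype)
        weight.mul_(mask)

# Toy training step: optimize densely, then re-sparsify the weight matrix.
model = nn.Linear(64, 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 64)
loss = model(x).pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()
enforce_topk_sparsity(model.weight, k=256)  # keep ~6% of 4096 weights
print(int((model.weight != 0).sum()))       # ≈ 256 (ties may add a few)
```

Note that because the mask is recomputed from current magnitudes each step rather than frozen, pruned weights can re-enter the active set if their gradients grow large enough, so the set of nonzero weights can shift during training.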

Abstract

Finding human-understandable circuits in language models is a central goal of the field of mechanistic interpretability. We train models to have more understandable [...]

---


First published: November 13th, 2025

Source: https://www.lesswrong.com/posts/yQMQXFAK4mfJjHBpN/weight-sparse-transformers-have-interpretable-circuits


---


Narrated by TYPE III AUDIO.
