“Examining Language Model Performance with Reconstructed Activations using Sparse Autoencoders” by Evan Anders, Joseph Bloom

Update: 2024-02-27

Description

Summary:

  • Sparse Autoencoders (SAEs) reveal interpretable features in the activation spaces of language models, but they don’t reconstruct activations perfectly. We lack good metrics for identifying which parts of model activations SAEs fail to reconstruct, which makes it hard to evaluate SAEs themselves. In this post, we argue that SAE reconstructions should be tested on well-established benchmarks to determine which kinds of tasks they degrade model performance on.
  • We stress-test a recently released set of SAEs, one for each layer of the gpt2-small residual stream, using randomly sampled tokens from Open WebText and the Lambada benchmark, where the model must predict a specific next token (the splicing setup is sketched in code after this list).
  • In contexts shorter than or equal to the SAEs’ training context, the SAEs that we study generally perform well, but their performance degrades on longer prompts. We find [...]
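
To make the splicing experiment concrete, here is a minimal sketch, assuming TransformerLens; it is not the authors’ code. The `SparseAutoencoder` below is a randomly initialized placeholder for a trained SAE, and the layer, expansion factor, and prompt are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn
from transformer_lens import HookedTransformer, utils


class SparseAutoencoder(nn.Module):
    """Placeholder SAE: ReLU encoder into an overcomplete feature basis,
    linear decoder back to the residual stream. Stand-in for a trained SAE."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(torch.relu(self.enc(x)))


model = HookedTransformer.from_pretrained("gpt2")  # gpt2-small
layer = 6  # arbitrary layer, for illustration only
hook_name = utils.get_act_name("resid_pre", layer)  # "blocks.6.hook_resid_pre"

# Randomly initialized stand-in; the real experiment loads a trained SAE
# for this layer.
sae = SparseAutoencoder(model.cfg.d_model, 8 * model.cfg.d_model)


def splice_in_reconstruction(acts: torch.Tensor, hook) -> torch.Tensor:
    # acts: [batch, pos, d_model] residual-stream activations. Returning the
    # SAE reconstruction makes every downstream layer see it instead.
    return sae(acts)


with torch.no_grad():
    tokens = model.to_tokens("The quick brown fox jumps over the lazy dog.")
    clean_loss = model(tokens, return_type="loss")
    spliced_loss = model.run_with_hooks(
        tokens,
        return_type="loss",
        fwd_hooks=[(hook_name, splice_in_reconstruction)],
    )

print(f"clean CE loss: {clean_loss.item():.3f}, "
      f"SAE-spliced CE loss: {spliced_loss.item():.3f}")
```

The gap between the clean and spliced losses, or, on a benchmark like Lambada, the drop in correct-answer accuracy, is the kind of downstream performance measure the post argues SAEs should be evaluated on.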


---

Outline:

(05:43) Experiment Overview

(05:47) Open WebText next-token prediction (randomly sampled data)

(07:27) Benchmarks (correct answer prediction)

(08:55) How does context length affect SAE performance on randomly sampled data?

(12:08) SAE Downstream Error Propagation

(14:27) How does using SAE output in place of activations affect model performance on the Lambada Benchmark?

(22:07) Takeaways

(23:28) Future work

(26:01) Code

(26:15) Acknowledgments

(27:56) Citing this post

(28:02) Appendix: Children's Book Test (CBT, Common Noun [CN] split)

---


First published: February 27th, 2024

Source: https://www.lesswrong.com/posts/8QRH8wKcnKGhpAu2o/examining-language-model-performance-with-reconstructed


---

Narrated by TYPE III AUDIO.
