AXRP - the AI X-risk Research Podcast

21 - Interpretability for Engineers with Stephen Casper

Release date: 2023-05-02

Description

Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benchmark he's co-developed to evaluate whether interpretability tools can find 'Trojan horses' hidden inside neural nets.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

 

Topics we discuss, and timestamps:

 - 00:00:42 - Interpretability for engineers

   - 00:00:42 - Why interpretability?

   - 00:12:55 - Adversaries and interpretability

   - 00:24:30 - Scaling interpretability

   - 00:42:29 - Critiques of the AI safety interpretability community

   - 00:56:10 - Deceptive alignment and interpretability

 - 01:09:48 - Benchmarking Interpretability Tools (for Deep Neural Networks) (Using Trojan Discovery)

   - 01:10:40 - Why Trojans?

   - 01:14:53 - Which interpretability tools?

   - 01:28:40 - Trojan generation

   - 01:38:13 - Evaluation

 - 01:46:07 - Interpretability for shaping policy

 - 01:53:55 - Following Casper's work

 

The transcript: axrp.net/episode/2023/05/02/episode-21-interpretability-for-engineers-stephen-casper.html

 

Links for Casper:

 - Personal website: stephencasper.com/

 - Twitter: twitter.com/StephenLCasper

 - Email: scasper [at] mit [dot] edu

 

Research we discuss:

 - The Engineer's Interpretability Sequence: alignmentforum.org/s/a6ne2ve5uturEEQK7

 - Benchmarking Interpretability Tools for Deep Neural Networks: arxiv.org/abs/2302.10894

 - Adversarial Policies beat Superhuman Go AIs: goattack.far.ai/

 - Adversarial Examples Are Not Bugs, They Are Features: arxiv.org/abs/1905.02175

 - Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974

 - Softmax Linear Units: transformer-circuits.pub/2022/solu/index.html

 - Red-Teaming the Stable Diffusion Safety Filter: arxiv.org/abs/2210.04610

 

Episode art by Hamish Doodles: hamishdoodles.com
