AXRP - the AI X-risk Research Podcast

39 - Evan Hubinger on Model Organisms of Misalignment

Update: 2024-12-01
Description

The 'model organisms of misalignment' line of research creates AI models that exhibit various types of misalignment, and studies them to understand how the misalignment arises and whether it can be removed. In this episode, Evan Hubinger talks about two papers he's worked on at Anthropic under this agenda: "Sleeper Agents" and "Sycophancy to Subterfuge".

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/12/01/episode-39-evan-hubinger-model-organisms-misalignment.html

 

Topics we discuss, and timestamps:

0:00:36 - Model organisms and stress-testing

0:07:38 - Sleeper Agents

0:22:32 - Do 'sleeper agents' properly model deceptive alignment?

0:38:32 - Surprising results in "Sleeper Agents"

0:57:25 - Sycophancy to Subterfuge

1:09:21 - How models generalize from sycophancy to subterfuge

1:16:37 - Is the reward editing task valid?

1:21:46 - Training away sycophancy and subterfuge

1:29:22 - Model organisms, AI control, and evaluations

1:33:45 - Other model organisms research

1:35:27 - Alignment stress-testing at Anthropic

1:43:32 - Following Evan's work

 

Main papers:

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training: https://arxiv.org/abs/2401.05566

Sycophancy to Subterfuge: Investigating Reward-Tampering in Large Language Models: https://arxiv.org/abs/2406.10162

 

Anthropic links:

Anthropic's newsroom: https://www.anthropic.com/news

Careers at Anthropic: https://www.anthropic.com/careers

 

Other links:

Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research: https://www.alignmentforum.org/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1

Simple probes can catch sleeper agents: https://www.anthropic.com/research/probes-catch-sleeper-agents

Studying Large Language Model Generalization with Influence Functions: https://arxiv.org/abs/2308.03296

Stress-Testing Capability Elicitation With Password-Locked Models [aka model organisms of sandbagging]: https://arxiv.org/abs/2405.19550

 

Episode art by Hamish Doodles: hamishdoodles.com
