DiscoverLessWrong posts by zvi“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi
“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi

“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi

Update: 2025-12-02
Share

Description

Learning to do misaligned-coded things anywhere teaches an AI (or a human) to do misaligned-coded things everywhere. So be sure you never, ever teach any mind to do what it sees, in context, as misaligned-coded things.


If the optimal solution (as in, the one you most reinforce) to an RL training problem is one that the model perceives as something you wouldn’t want it to do, it will generally learn to do things you don’t want it to do.


You can solve this by ensuring that the misaligned-coded things are not what the AI will learn to do. Or you can solve this by making those things not misaligned-coded.









If you then teaching aligned behavior in one set of spots, this can fix the problem in those spots, but the fix does not generalize to other tasks or outside of distribution. If you manage to hit the entire distribution of tasks you care about in this way, that will work for now, but it still won’t generalize, so it's a terrible long term strategy.


Yo Shavit: Extremely important finding.


Don’t tell your model you’re rewarding it for A and then reward it for B [...]

---

Outline:

(02:59 ) Abstract Of The Paper

(04:12 ) The Problem Statement

(05:35 ) The Inoculation Solution

(07:02 ) Cleaning The Data Versus Cleaning The Environments

(08:16 ) No All Of This Does Not Solve Our Most Important Problems

(13:18 ) It Does Help On Important Short Term Problems

---


First published:

December 2nd, 2025



Source:

https://www.lesswrong.com/posts/a2nW8buG2Lw9AdPtH/reward-mismatches-in-rl-cause-emergent-misalignment


---


Narrated by TYPE III AUDIO.


---

Images from the article:

Line graph showing hack rate increasing from near zero at 50 to approximately 1 by 100, then stabilizing.
Graph showing malign generalization score and hack rate over training steps with breakdown of hacking types.
Chart showing system prompt addendums with five colored bars and corresponding text descriptions.
Bar graph showing misalignment after learning hacks with different system prompt addendums.
Table comparing mitigation strategies for preventing hacking and misalignment in reinforcement learning models.
Graph showing malign generalization score and hack rate across training steps with different hackable environment conditions.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Comments 
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi

“Reward Mismatches in RL Cause Emergent Misalignment” by Zvi