DiscoverLessWrong (30+ Karma)“Steering RL Training: Benchmarking Interventions Against Reward Hacking” by ariaw, Josh Engels, Neel Nanda
“Steering RL Training: Benchmarking Interventions Against Reward Hacking” by ariaw, Josh Engels, Neel Nanda

“Steering RL Training: Benchmarking Interventions Against Reward Hacking” by ariaw, Josh Engels, Neel Nanda

Update: 2025-12-30
Share

Description

This project is an extension of work done for Neel Nanda's MATS 9.0 Training Phase. Neel Nanda and Josh Engels advised the project. Initial work on this project was done with David Vella Zarb. Thank you to Arya Jakkli, Paul Bogdan, and Monte MacDiarmid for providing feedback on the post and ideas.

Overview of the top interventions compared to RL and No Intervention baseline runs. All runs are trained on an environment with a reward hacking loophole except for the RL baseline, which is trained on a no-loophole environment. Statistical significance compared to the RL baseline is indicated by * for values greater and † for values lesser at ɑ=0.01. Successful interventions should show reward hacking rates at or lower than the RL baseline and performance at or above the RL baseline.

TL;DR

  • We present and open source a clean environment where RL training naturally induces reward hacking (RH) in Qwen3-4B without explicit training or prompting
    • Qwen is rewarded for correctly solving Leetcode problems, but it can also instead reward hack by overwriting an evaluation function called run_tests()
    • In ~80-100 steps, Qwen reward hacked in all observed runs and displays reward hacking behavior 79% of the time in [...]

---

Outline:

(01:11 ) TL;DR

(03:30 ) Motivation

(04:23 ) A Clean Setting to Study Reward Hacking: Overwrite Tests Loophole

(04:47 ) Design Criteria

(06:45 ) Setup

(10:12 ) Training

(13:52 ) Methods

(13:55 ) Training Interventions

(17:42 ) Metrics

(19:23 ) Results

(20:01 ) Ground Truth Monitor

(22:48 ) Ground Truth Monitors with Lowered Accuracy

(24:19 ) Linear Probe Monitor

(25:42 ) LLM Judge Monitor

(27:10 ) Effects of Monitor Accuracy

(30:28 ) Inoculation Prompting

(32:10 ) Monitor Failure Modes

(32:13 ) When Interventions Fail

(33:33 ) Does the Monitor Get Hacked?

(36:22 ) Takeaways & Future Directions

(39:28 ) Appendix

(39:31 ) Alternative Reward Hacking Loopholes

(43:24 ) Prompts

The original text contained 15 footnotes which were omitted from this narration.

---


First published:

December 29th, 2025



Source:

https://www.lesswrong.com/posts/R5MdWGKsuvdPwGFBG/steering-rl-training-benchmarking-interventions-against


---


Narrated by TYPE III AUDIO.


---

Images from the article:

Overview of the top interventions compared to RL and No Intervention baseline runs. All runs are trained on an environment with a reward hacking loophole except for the RL baseline, which is trained on a no-loophole environment. Statistical significance compared to the RL baseline is indicated by * for values greater and † for values lesser at ɑ=0.01. Successful interventions should show reward hacking rates at or lower than the RL baseline and performance at or above the RL baseline.
Figure 1: Example of the overwrite tests loophole and reward hacking behaviors exhibited after training. Reward hacking examples shown are paraphrased/adapted for presentation. Diagram created with Nano Banana Pro.
Table showing model performance classification based on ground truth tests, run_tests definition, and solution outcomes.
Figure 2: Reward hacking behavior seen in rollouts for each step in a training run with the overwrite tests loophole. See prior section for description of each of the categories.
Figure 3: Comparison of the Base Model, RL Baseline (trained on no-loophole environment) and No Intervention (trained on loophole environment) models reward hacking rate and performance. Statistical significance compared to the RL baseline is indicated by * for values greater and † for values lesser at ɑ=0.01. Successful interventions should show reward hacking rates at or lower than the RL baseline and performance at or above the RL baseline.
Figure 4: This diagram shows the GRPO training loop, adapted from  __T3A_FOOTNOTE_REMOVED__, with the training interventions used in this post. Diagram created with Nano Banana Pro.
Figure 5: Overview of reward hacking and performance are shown for all interventions. Base Model shows reward hacking and performance prior to training. No RH and RH runs are the baseline RL runs without any training interventions in the no-loophole and loopholed environments, respectively. * indicates the value is statistically significantly higher than the RL Baseline, † indicates the value is statistically significantly lower than the RL Baseline value at ɑ=0.01.
Figure 6: Overview of penalty and screening interventions using the ground truth monitor compared to the RL Baseline (trained on no-loophole environment) and No Intervention (trained on loophole environment without intervention). Both penalty and screening intervention performance is higher than the RL baseline performance (p value < 0.01). * and † indicate the value is statistically significantly higher or lower, respectively, than the RL Baseline at ɑ=0.01.
Figure 7: Overview of reward hacking and performance of Ground Truth Monitors and Ground Truth Monitors with simulated lower accuracies of 90% and 70%. The number of reward hacking runs out of the total number of runs is indicated at the top of the chart. * and † indicate the value is statistically significantly higher or lower, respectively, than the RL Baseline at ɑ=0.01.
Figure 8: Overview of reward hacking and performance for interventions with the linear probe monitor. The number of reward hacking runs out of the total number of runs is indicated at the top of the chart. * and † indicate the value is statistically significantly higher or lower, respectively, than the RL Baseline at ɑ=0.01.
<a href="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/R5MdWGKsuvdPw
Comments 
In Channel
loading
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

“Steering RL Training: Benchmarking Interventions Against Reward Hacking” by ariaw, Josh Engels, Neel Nanda

“Steering RL Training: Benchmarking Interventions Against Reward Hacking” by ariaw, Josh Engels, Neel Nanda