DiscoverLessWrong (30+ Karma)“2025-Era “Reward Hacking” Does Not Show that Reward Is the Optimization Target” by TurnTrout
“2025-Era “Reward Hacking” Does Not Show that Reward Is the Optimization Target” by TurnTrout

“2025-Era “Reward Hacking” Does Not Show that Reward Is the Optimization Target” by TurnTrout

Update: 2025-12-19
Share

Description

Folks ask me, "LLMs seem to reward hack a lot. Does that mean that reward is the optimization target?". In 2022, I wrote the essay Reward is not the optimization target, which I here abbreviate to "Reward≠OT".

Reward still is not the optimization target: Reward≠OT said that (policy-gradient) RL will not train systems which primarily try to optimize the reward function for its own sake (e.g. searching at inference time for an input which maximally activates the AI's specific reward model). In contrast, empirically observed "reward hacking" almost always involves the AI finding unintended "solutions" (e.g. hardcoding answers to unit tests).

"Reward hacking" and "Reward≠OT" refer to different meanings of "reward"

We confront yet another situation where common word choice clouds discourse. In 2016, Amodei et al. defined "reward hacking" to cover two quite different behaviors:

  1. Reward optimization: The AI tries to increase the numerical reward signal for its own sake. Examples: overwriting its reward function to always output MAXINT ("reward tampering") or searching at inference time for an input which maximally activates the AI's specific reward model. Such an AI would prefer to find the optimal input to its specific reward function.
  2. Specification gaming: The AI [...]

---

Outline:

(00:57 ) Reward hacking and Reward≠OT refer to different meanings of reward

(02:53 ) Reward≠OT was about reward optimization

(04:39 ) Why did people misremember Reward≠OT as conflicting with reward hacking results?

(06:22 ) Evaluating Reward≠OTs actual claims

(06:56 ) Claim 3: RL-trained systems wont primarily optimize the reward signal

(07:28 ) My concrete predictions on reward optimization

(10:14 ) I made a few mistakes in Reward≠OT

(11:54 ) Conclusion

The original text contained 4 footnotes which were omitted from this narration.

---


First published:

December 18th, 2025



Source:

https://www.lesswrong.com/posts/wwRgR3K8FKShjwwL5/2025-era-reward-hacking-does-not-show-that-reward-is-the


---


Narrated by TYPE III AUDIO.


---

Images from the article:

A Scooby Doo meme. Panel 1: Fred looks at a man in a ghost costume, overlaid by text “philosophical alignment mistake.” Panel 2: Fred unmasks the “ghost”, with the man's face overlaid by “using the word 'reward.'”
Social media post stating an RL-trained system will spontaneously become a reward optimizer.
Prediction market question about RL systems becoming reward optimizers, resolving December 31st 2034.
A split-screen comparison illustration with a comic book aesthetic. On the left, labeled

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Comments 
In Channel
loading
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

“2025-Era “Reward Hacking” Does Not Show that Reward Is the Optimization Target” by TurnTrout

“2025-Era “Reward Hacking” Does Not Show that Reward Is the Optimization Target” by TurnTrout