“[Paper] Output Supervision Can Obfuscate the CoT” by jacob_drori, lukemarks, cloud, TurnTrout

Description

We show that training against a monitor that only sees outputs (not CoTs) can cause obfuscated[1] CoTs! The obfuscation happens in two ways:

  1. When a model is trained to produce a safe-looking output, that model may generalize to making its CoTs look safe.
  2. Since later tokens are conditioned on earlier ones, safe-looking CoTs may increase the likelihood of safe outputs, causing safe-looking CoTs to be reinforced (see the sketch below). 
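To make the second mechanism concrete, here is a minimal REINFORCE-style sketch (my own illustration, not code from the paper, with hypothetical function and argument names): the penalty is computed from the output alone, but a single scalar reward scales the log-probability of the whole trajectory, so CoT tokens that tend to co-occur with penalized outputs are pushed down as well.

```python
import torch

def standard_loss(logprobs_cot, logprobs_out, task_reward, output_penalty):
    """Illustrative REINFORCE-style loss; inputs are per-token log-prob tensors.

    The monitor penalty is computed only from the output text, yet the
    resulting scalar multiplies the CoT log-probs too, so CoTs that correlate
    with penalized outputs are discouraged alongside them.
    """
    reward = task_reward - output_penalty                 # output-only penalty
    total_logprob = logprobs_cot.sum() + logprobs_out.sum()
    return -(reward * total_logprob)                      # gradient reaches every token
```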

We introduce two mitigations, which work reasonably well: compared to regular training, they improve both monitorability and task performance in two of our three settings. Overall, our work suggests that preserving CoT monitorability is harder than previously thought.

Read our paper on arXiv. Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. Luke and Jacob contributed equally.

What's new in this post

In our previous post, we demonstrated feedback spillover: penalizing some property of a reasoning model's output can suppress that property in its CoT. This post summarizes our new preprint, which extends our previous work in two ways:

  1. We demonstrate spillover in new, more interesting environments.
  2. We introduce reward targeting, a second mitigation which combines effectively with the mind & face[2] mitigation [...]
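The post only names the mitigations here, so the following is a guess at what reward targeting does, based on its name and the training-setup diagram (hypothetical function names, not the authors' implementation): the output-monitor penalty contributes to the learning signal of output tokens only, while CoT tokens are updated from the task reward alone.

```python
import torch

def reward_targeted_loss(logprobs_cot, logprobs_out, task_reward, output_penalty):
    """Speculative sketch of reward targeting (illustration only).

    The task reward still reinforces the whole trajectory, but the output
    penalty is masked out of the CoT tokens' learning signal.
    """
    cot_term = task_reward * logprobs_cot.sum()                      # penalty excluded
    out_term = (task_reward - output_penalty) * logprobs_out.sum()   # penalty included
    return -(cot_term + out_term)
```

Under this reading, the output monitor cannot directly shape CoT tokens, though indirect spillover through conditioning on earlier tokens can remain.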

---

Outline:

(01:10) What's new in this post

(01:57) Mitigations for spillover

(03:11) Results

(04:46) Multi-turn terminal interaction

(05:52) Polynomial derivative factoring

(06:55) Question answering with hints

(08:01) Concrete recommendations

The original text contained 2 footnotes which were omitted from this narration.

---


First published: November 20th, 2025

Source: https://www.lesswrong.com/posts/HuoyYQ6mFhS5pfZ4G/paper-output-supervision-can-obfuscate-the-cot

---


Narrated by TYPE III AUDIO.


---

Images from the article:

Results for the multi-turn terminal environment with the string match output penalty.
Results for the multi-turn terminal environment with the LLM judge output penalty.
Diagram showing reinforcement learning system detecting and penalizing AI chatbot cheating behavior.
Graph showing Training Reward versus CoT Monitor Detection Rate with penalty and targeting conditions.
Graph showing training reward versus CoT monitor detection rate with less spillover arrow.
Diagram showing three training approaches: Standard Training, Mind & Face, and Reward Targeting with CoT and Output components.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
