DiscoverLessWrong (30+ Karma)“Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik
“Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik

“Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik

Update: 2025-10-10
Share

Description

While recent AI systems achieve strong performance through human-readable reasoning that should be simple to monitor (OpenAI, 2024, Anthropic, 2025), we investigate whether models can learn to reason about malicious side tasks while making that reasoning appear benign. We find that Sonnet 3.7 can learn to evade either a reasoning monitor, by persuading the monitor that a blatant backdoor is benign, or an output-only monitor, by devising sophisticated backdoors that the output-only monitor doesn’t detect. But when trained to evade both reasoning and output-only monitors, Sonnet 3.7 is unable to use reasoning to improve its backdoor success rate without triggering a reasoning monitor. Like previous work (Baker et al., 2025, Emmons et al., 2025), our results suggest that reasoning monitors can provide strong assurance that language models are not pursuing reasoning-heavy malign side tasks, but that additional mitigations may be required for robustness to monitor persuasion.

Figure 1: We trained [...]

---


First published:

October 9th, 2025



Source:

https://www.lesswrong.com/posts/MmuyzfsaNrSvRCsFk/training-fails-to-elicit-subtle-reasoning-in-current


---


Narrated by TYPE III AUDIO.


---

Images from the article:

Figure 1: We trained three versions of Sonnet 3.7 to create subtle backdoors that circumvent different types of monitors: an output monitor (sees final code, comments stripped), a reasoning monitor (sees all reasoning including final output), and an output + reasoning monitor (logical OR of the previous two). For each variant, we measure backdoor success – the fraction of backdoors that are both functional and undetected by the monitor – conditioned on success in the main task. We found that: (a) training against the output monitor allowed the model to use reasoning to improve backdoor performance; (b) training against the reasoning monitor enabled the model to convince the monitor that malign backdoors were benign; and (c) when trained against both, the model failed to learn to use reasoning to improve performance.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Comments 
In Channel
loading
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

“Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik

“Training fails to elicit subtle reasoning in current language models” by mishajw, Fabien Roger, Hoagy, gasteigerjo, Joe Benton, Vlad Mikulik