“Reducing risk from scheming by studying trained-in scheming behavior” by Ryan Greenblatt
Description
Subtitle: Can we study scheming by studying AIs trained to act like schemers?
In a previous post, I discussed mitigating risks from scheming by studying examples of actual scheming AIs. In this post, I’ll discuss an alternative approach: directly training (or instructing) an AI to behave how we think a naturally scheming AI might behave (at least in some ways). Then, we can study the resulting models. For instance, we could verify that a detection method discovers that the model is scheming or that a removal method removes the inserted behavior. The Sleeper Agents paper is an example of directly training in behavior (roughly) similar to a schemer.
This approach eliminates one of the main difficulties with studying actual scheming AIs: the fact that it is difficult to catch (and re-catch) them. However, trained-in scheming behavior can’t be used to build an end-to-end understanding of how scheming arises. [...]
---
Outline:
(02:34 ) Studying detection and removal techniques using trained-in scheming behavior
(05:39 ) Issues
(18:29 ) Conclusion
(19:39 ) Canary string
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
October 16th, 2025
Source:
https://blog.redwoodresearch.org/p/reducing-risk-from-scheming-by-studying
---
Narrated by TYPE III AUDIO.