“How AI Is Learning to Think in Secret” by Nicholas Andresen
Description
On Thinkish, Neuralese, and the End of Readable Reasoning
In September 2025, researchers published the internal monologue of OpenAI's o3 model as it decided to lie about scientific data. This is what it thought:
[The transcript appears as an image in the original article and is omitted from this narration.]
Pardon? This looks like someone had a stroke during a meeting they didn’t want to be in, but their hand kept taking notes.
That transcript comes from a recent paper by researchers at Apollo Research and OpenAI on catching AI systems scheming. To understand what's happening here - and why one of the most sophisticated AI systems in the world is babbling about “synergy customizing illusions” - it helps to know how we ended up being able to read AI thinking in the first place.
That story starts, of all places, on 4chan.
In late 2020, anonymous posters on 4chan started describing a prompting trick that would change the course of AI development. It was almost embarrassingly simple: instead of just asking GPT-3 for an answer, ask it to show its work before giving its final answer.
Suddenly, it started solving math problems that had stumped it moments before.
To see why, try multiplying 8,734 × 6,892 in your head. If you’re like [...]
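To make the trick concrete, here is a minimal Python sketch of the two prompt styles the passage contrasts: a direct question versus one that asks the model to show its work before answering. The ask() helper is a hypothetical placeholder, not any real API; only the prompt wording is the point.

def ask(prompt: str) -> str:
    """Hypothetical placeholder for a call to whatever completion API you use."""
    raise NotImplementedError("wire this up to your model of choice")

# Asking for the answer directly, the style that stumped GPT-3 on arithmetic.
direct_prompt = "What is 8,734 × 6,892? Answer with just the number."

# The trick described above: ask the model to show its work first.
chain_of_thought_prompt = (
    "What is 8,734 × 6,892?\n"
    "Work through it step by step, then give the final answer."
)

# For reference, the correct product is 60,194,728. The article's observation
# is that prompts like the second one made GPT-3 far more likely to reach it.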
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
January 6th, 2026
Source:
https://www.lesswrong.com/posts/gpyqWzWYADWmLYLeX/how-ai-is-learning-to-think-in-secret
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.