“13 Arguments About a Transition to Neuralese AIs” by Rauno Arike


Description

Over the past year, I have talked to several people about whether they expect frontier AI companies to transition away from the current paradigm of transformer LLMs toward models that reason in neuralese within the next few years. This post summarizes 13 common arguments I’ve heard, six in favor of and seven against a transition to neuralese AIs. They are summarized below:

Arguments for a transition to neuralese:
- A lot of information gets lost in text bottlenecks.
- The relative importance of post-training compared to pre-training is increasing.
- There's an active subfield researching recurrent LLMs.
- Human analogy: natural language might not play that big of a role in human thinking.
- SGD inductive biases might favor directly learning good sequential reasoning algorithms in the weight space.

Arguments against a transition to neuralese:
- Natural language reasoning might be a strong local optimum that takes a lot of training effort to escape.
- Recurrent LLMs suffer from a parallelism trade-off that makes their training less efficient.
- There's significant business value in being able to read a model's CoTs.
- Human analogy: even if natural language isn’t humans’ primary medium of thought, we still rely on it a lot.
- Though significant effort has been spent on getting neuralese models to work, we still have none that work [...]
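To make the "text bottleneck" contrast concrete, here is a minimal, hypothetical sketch (mine, not from the post or the narration) comparing token-level chain-of-thought with latent "neuralese" recurrence. The functions `step_logits` and `step_hidden` are illustrative stand-ins for a model's forward pass, and the vocabulary and hidden sizes are made up:

```python
import numpy as np

VOCAB, HIDDEN = 50_000, 4_096
rng = np.random.default_rng(0)

def step_logits(token_ids: list[int]) -> np.ndarray:
    """Stand-in for a forward pass that returns next-token logits."""
    return rng.normal(size=VOCAB)

def step_hidden(state: np.ndarray) -> np.ndarray:
    """Stand-in for a recurrent block mapping a latent state to a new latent state."""
    return np.tanh(state + rng.normal(size=HIDDEN))

# 1) Token-level chain of thought: each reasoning step is collapsed into one
#    discrete token (at most log2(VOCAB) ~ 16 bits), and only that token is
#    carried forward to the next step.
tokens = [0]
for _ in range(10):
    tokens.append(int(np.argmax(step_logits(tokens))))

# 2) "Neuralese" / latent recurrence: the full hidden vector (thousands of
#    floats) is fed back directly, so nothing is forced through a text
#    bottleneck -- but there is also no human-readable reasoning trace.
state = np.zeros(HIDDEN)
for _ in range(10):
    state = step_hidden(state)
```

The contrast is the crux of several arguments above: the first loop loses information at every step but leaves a legible trace; the second preserves the full latent state but gives up readability.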

---

Outline:

(00:49) What do I mean by neuralese?

(02:07) Six arguments in favor of a transition to neuralese AIs

(02:13) 1) A lot of information is lost in a text bottleneck

(03:42) 2) The increasing importance of post-training

(04:37) 3) Active research on recurrent LLMs

(05:50) 4) Analogy with human thinking

(08:03) 5) SGD inductive biases

(08:34) 6) The limit of capabilities

(08:54) Seven arguments against a transition to neuralese AIs

(09:00) 1) The natural language sweet spot

(10:59) 2) The parallelism trade-off

(12:11) 3) Business value of visible reasoning traces

(12:55) 4) Analogy with human thinking

(13:47) 5) Evidence from past attempts to build recurrent LLMs

(14:53) 6) The depth-latency trade-off

(16:09) 7) Safety value of visible reasoning traces

(16:38) Conclusion

The original text contained 3 footnotes which were omitted from this narration.

---


First published: November 7th, 2025

Source: https://www.lesswrong.com/posts/zkccztuSjLshffrNr/13-arguments-about-a-transition-to-neuralese-ais


---


Narrated by TYPE III AUDIO.
