[Linkpost] “We are likely in an AI overhang, and this is bad.” by Gabriel Alfour
Description
By racing to the next generation of models faster than we can understand the current one, AI companies are creating an overhang. This overhang is not visible, and our current safety frameworks do not take it into account.
1) AI models have untapped capabilities
When GPT-3 was released, most of the capabilities it is now known to have had not yet been discovered.
As we play more with models, build better scaffolding, get better at prompting, inspect their internals, and study them, we discover more about what's possible to do with them.
This has also been my direct experience studying and researching open-source models at Conjecture.
2) SOTA models have a lot of untapped capabilities
Companies are racing hard.
There is a trade-off between studying existing models and pushing the frontier forward. Companies are overwhelmingly choosing the latter.
There is much more research into boosting SOTA models than [...]
---
Outline:
(00:26 ) 1) AI models have untapped capabilities
(00:53 ) 2) SOTA models have a lot of untapped capabilities
(01:29 ) 3) This is bad news
(01:48 ) 4) This is not accounted for
---
First published:
September 23rd, 2025
Source:
https://www.lesswrong.com/posts/4YvSSKTPhPC43K3vn/we-are-likely-in-an-ai-overhang-and-this-is-bad
Linkpost URL:
https://cognition.cafe/p/we-are-likely-in-an-ai-overhang-and
---
Narrated by TYPE III AUDIO.