“AGI’s Last Bottlenecks” by Adam Khoja, Laura Hiscott

Update: 2025-10-22

Description

Adam Khoja is a co-author of the recent study, “A Definition of AGI.” The opinions expressed in this article are his own and do not necessarily represent those of the study's other authors.

Laura Hiscott is a core contributor at AI Frontiers and collaborated on the development and writing of this article.

Dan Hendrycks, lead author of “A Definition of AGI,” provided substantial input throughout this article's drafting.

---

In a recent interview on the “Dwarkesh Podcast,” OpenAI co-founder Andrej Karpathy claimed that artificial general intelligence (AGI) is around a decade away, expressing doubt about “over-predictions in the industry.” Coming amid growing discussion of an “AI bubble,” Karpathy's comment throws cold water on some of the more bullish predictions from leading tech figures. Yet those figures don’t seem to be reconsidering their positions. Following Anthropic CEO Dario Amodei's prediction last year that we might have “a country of geniuses [...]

---

Outline:

(03:50) Missing Capabilities and the Path to Solving Them

(05:13) Visual Processing

(07:38) On-the-Spot Reasoning

(10:15) Auditory Processing

(11:09) Speed

(12:04) Working Memory

(13:16) Long-Term Memory Retrieval (Hallucinations)

(14:24) Long-Term Memory Storage (Continual Learning)

(16:36) Conclusion

(18:47) Discussion about this post

---


First published:

October 22nd, 2025



Source:

https://aifrontiersmedia.substack.com/p/agis-last-bottlenecks


---


Narrated by TYPE III AUDIO.


---

Images from the article:

Grid-based pattern matching task showing colored lines forming intersecting paths. Three example input-output pairs are shown, followed by a test case and its predicted solution.
AGI can be analogized to an engine that converts inputs to outputs by using a collection of cognitive abilities, consisting of General Knowledge (K); Reading and Writing Ability (RW); Mathematical Ability (M); On-the-Spot Reasoning (R); Working Memory (WM); Long-Term Memory Storage (MS); Long-Term Memory Retrieval (MR); Visual Processing (V); Auditory Processing (A); Speed (S). Source: link in post.
Changing the size of a visual logic puzzle can degrade a model’s reasoning performance, suggesting that the failure may be to do with perception, rather than reasoning. Source: link in post.
The IntPhys 2 benchmark tests intuitive physics understanding by asking whether a video is physically plausible. The best existing models perform only slightly better than chance. Source: link in post.
The SPACE benchmark assesses spatial reasoning. Models do not yet match average human scores on these tasks, but they are improving rapidly. Source: link in post.
The ten components of our AGI definition cover the breadth of human cognitive abilities. The detailed scores of GPT-4 and GPT-5 demonstrate the progress between the models, as well as unaddressed issues. Source: link in post.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
