State of AI: The Scaling Law Myth - Why Bigger Isn’t Always Better

Update: 2025-10-23

Description

In this episode of State of AI, we dissect one of the most provocative recent findings in AI research: “Scaling Laws Are Unreliable for Downstream Tasks” by Nicholas Lourie, Michael Y. Hu, and Kyunghyun Cho of NYU. The study delivers a reality check to one of deep learning’s core assumptions: that increasing model size, data, and compute always leads to better downstream performance.
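For context, the assumption being challenged comes from pretraining scaling laws, which model loss as a smooth power law in scale. A standard form (in the style of Kaplan et al., 2020; the fitted constants vary by setup and are omitted here) is:

```latex
% Canonical power-law form for pretraining loss as a function of
% parameter count N; N_c and \alpha_N are empirically fitted constants.
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
```

The paper’s point is that even when this pretraining curve holds, the mapping from pretraining loss to downstream task performance need not be smooth at all.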

The paper’s meta-analysis across 46 tasks finds that predictable, linear scaling occurs only 39% of the time; the majority of tasks show irregular, noisy, or even inverse scaling, in which larger models perform worse.
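To make the idea of a “predictable, linear” scaling fit concrete, here is a minimal sketch of fitting a power law in log-log space to downstream error and checking the quality of the fit. All data points, the two hypothetical tasks, and the loglog_fit_r2 helper are invented for illustration; this is not the paper’s actual methodology.

```python
# Minimal sketch: what a "predictable, linear" scaling fit looks like in
# practice. All data points below are invented for illustration; they are
# not taken from the paper.
import numpy as np

# Hypothetical model sizes (parameter counts) and downstream error rates.
params = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
task_a_err = np.array([0.52, 0.44, 0.37, 0.31, 0.26])  # scales smoothly
task_b_err = np.array([0.50, 0.51, 0.48, 0.52, 0.55])  # noisy, even inverse

def loglog_fit_r2(x, y):
    """Fit y ~ c * x^m as a line in log-log space; return slope and R^2."""
    lx, ly = np.log(x), np.log(y)
    m, b = np.polyfit(lx, ly, 1)
    residuals = ly - (m * lx + b)
    r2 = 1.0 - residuals.var() / ly.var()
    return m, r2

for name, err in [("task A", task_a_err), ("task B", task_b_err)]:
    slope, r2 = loglog_fit_r2(params, err)
    print(f"{name}: slope = {slope:+.3f}, R^2 = {r2:.3f}")
# Task A behaves like a clean power law (negative slope, R^2 near 1):
# bigger really is better here. Task B does not; the log-linear fit
# explains little of the variance, and error trends upward with scale.
```

In the paper’s framing, only a minority of the 46 tasks look like task A; most look more like task B.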

We explore:

  • ⚖️ Why downstream scaling laws often break, even when pretraining loss scales smoothly and predictably.

  • 🧩 How dataset choice, validation corpus, and task formulation can flip scaling trends.

  • 🔄 Why some models show “breakthrough scaling”: sudden jumps in capability after long plateaus (see the sketch after this list).

  • 🧠 What this means for the future of AI forecasting, model evaluation, and cost-efficient research.

  • 🧪 The implications for reproducibility, and why measured scaling behavior may be investigator-specific, shifting with each team’s experimental choices.
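The “breakthrough scaling” pattern mentioned above is easy to visualize. Below is a minimal sketch, with entirely invented numbers, of why a long plateau followed by a sudden capability jump defeats straight-line forecasting: a linear fit to the plateau region badly underpredicts the largest run.

```python
# Minimal sketch of "breakthrough scaling": a long plateau followed by a
# sudden jump. All numbers are invented for illustration, not from the paper.
import numpy as np

log_compute = np.linspace(18, 26, 9)  # hypothetical log10(FLOPs) per run
# Accuracy follows a sigmoid in log-compute: near-chance, then a sharp rise.
accuracy = 0.25 + 0.70 / (1.0 + np.exp(-3.0 * (log_compute - 24.0)))

# Fit a line to the "plateau" region and extrapolate to the largest run.
plateau = log_compute <= 23
m, b = np.polyfit(log_compute[plateau], accuracy[plateau], 1)
predicted = m * log_compute[-1] + b

print(f"extrapolated accuracy at log10(FLOPs)={log_compute[-1]:.0f}: {predicted:.3f}")
print(f"actual accuracy: {accuracy[-1]:.3f}")
# The linear extrapolation from the small runs misses the capability jump
# entirely, which is why plateau-era forecasts can be so unreliable.
```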

If you’ve ever heard “just make it bigger” as the answer to AI progress — this episode will challenge that belief.

📊 Keywords: AI scaling laws, NYU AI research, Kyunghyun Cho, deep learning limits, downstream tasks, inverse scaling, emergent abilities, AI reproducibility, model evaluation, State of AI podcast.


Ali Mehedi