Best AI papers explained
The Coverage Principle: How Pre-Training Enables Post-Training

Update: 2025-10-24

Description

This paper gives a theoretical analysis of next-token prediction in language models, introducing the coverage profile ($\text{Cov}_N$) as a metric that predicts downstream performance under Best-of-N (BoN) sampling better than cross-entropy does. The authors establish a "coverage principle": maximum likelihood (next-token prediction) implicitly optimizes the coverage profile, yielding faster generalization that avoids the spurious dependence on sequence length incurred by cross-entropy/KL divergence. They show that a good coverage profile is both necessary and sufficient for BoN success, derive scaling laws relating cross-entropy to coverage, and analyze how optimization choices such as stochastic gradient descent (SGD) and gradient normalization provably improve coverage bounds. Finally, the paper proposes tournament-style estimators for selecting models with optimal coverage when the true data distribution is unknown.

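As a rough intuition for why coverage, rather than per-sample likelihood, governs Best-of-N performance, here is a minimal sketch. It assumes a toy categorical "model" and a hypothetical 0/1 verifier in place of a real language model and reward model; the helper names (best_of_n, empirical_coverage) are illustrative and not from the paper. The point it illustrates: as long as the model places some mass on an acceptable answer, the chance that at least one of N samples is acceptable, which is the quantity BoN exploits, grows much faster than the per-sample likelihood would suggest.

```python
import random

def best_of_n(sample_fn, score_fn, n):
    """Draw n candidates from the model and return the highest-scoring one."""
    candidates = [sample_fn() for _ in range(n)]
    return max(candidates, key=score_fn)

def empirical_coverage(sample_fn, is_good_fn, n, trials=2000):
    """Monte Carlo estimate of the probability that at least one of n samples
    is acceptable -- a simple proxy for the coverage BoN sampling relies on."""
    hits = 0
    for _ in range(trials):
        if any(is_good_fn(sample_fn()) for _ in range(n)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # Toy "model": a categorical distribution over answers; only "42" is acceptable.
    answers = ["41", "42", "43", "44"]
    probs = [0.40, 0.10, 0.30, 0.20]  # little mass on the acceptable answer
    sample = lambda: random.choices(answers, weights=probs, k=1)[0]
    is_good = lambda a: a == "42"
    score = lambda a: 1.0 if is_good(a) else 0.0  # stand-in for a verifier/reward

    for n in (1, 4, 16):
        cov = empirical_coverage(sample, is_good, n)
        print(f"N={n:2d}  empirical coverage ~ {cov:.3f}  "
              f"BoN pick: {best_of_n(sample, score, n)}")
```

With the toy weights above, the acceptable answer has per-sample probability 0.1, yet the chance that N = 16 samples include it is about 1 - 0.9^16 (roughly 0.81). This gap between likelihood-style metrics and what BoN actually needs is the kind of phenomenon the coverage profile $\text{Cov}_N$ is meant to capture.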

Enoch H. Kang