Sleep-time Compute: Beyond Inference Scaling at Test-time

Update: 2025-05-02

Description

What if your LLM could think ahead—preparing answers before questions are even asked?

In this week's paper read, we dive into a groundbreaking new paper from researchers at Letta, introducing sleep-time compute: a novel technique that lets models do their heavy lifting offline, well before the user query arrives. By predicting likely questions and precomputing key reasoning steps, sleep-time compute dramatically reduces test-time latency and cost—without sacrificing performance.

We explore new benchmarks (Stateful GSM-Symbolic, Stateful AIME, and a multi-query extension of GSM) that show up to 5x lower compute at inference, 2.5x lower cost per query, and up to 18% higher accuracy when scaled.

You’ll also see how this method applies to realistic agent use cases and what makes it most effective. If you care about LLM efficiency, scalability, or cutting-edge research, this session is for you.

Explore more AI research, or sign up to hear the next session live.



Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.



Arize AI