Humans of Reliability
How AI broke serverless and what to do about it with Vercel’s Mariano Fernández Cocirio

Update: 2025-03-06

Description

Mariano, Staff Product Manager at Vercel, explains why serverless architectures are hitting unexpected limits: they’re too fast.

The industry has spent millions optimizing serverless for speed, but AI workloads are changing the game. In the AI realm, slower execution often leads to better results. The challenge? Paying for all that idle compute time while waiting for AI responses. 
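To make that concrete, here is a minimal sketch of the pattern described above: a serverless handler that spends most of its billed wall-clock time simply awaiting a model's response. The handler shape and the LLM endpoint are illustrative assumptions, not Vercel's or any particular provider's actual API.

```ts
// Hypothetical serverless handler. The endpoint and payload are made up
// for illustration; they are not a real provider's API.
export async function handler(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  // The function spends most of its wall-clock time on this await. The CPU
  // is essentially idle, but a classic one-invocation-per-instance
  // serverless model bills for the entire duration of the request.
  const llmResponse = await fetch("https://llm.example.com/v1/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  const { text } = await llmResponse.json();
  return new Response(JSON.stringify({ text }), {
    headers: { "Content-Type": "application/json" },
  });
}
```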

Mariano walks through how Vercel Fluid introduces a new execution model that blends the best of serverless and traditional servers, scaling efficiently while reducing costs. He breaks down Fluid’s architecture, its built-in reliability features, and how it redefines cloud computing for LLM-powered applications.
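One way to picture that blend, purely as a conceptual sketch and not Fluid's actual API or implementation, is a single warm instance that keeps serving other requests while any one of them is awaiting a slow upstream call, so the idle wait is amortized across concurrent work instead of being billed to a dedicated instance. Every name and URL below is invented for the example.

```ts
// Conceptual illustration of in-process concurrency, not Fluid's API.
import { createServer } from "node:http";

let inFlight = 0;

const server = createServer(async (_req, res) => {
  inFlight++;
  console.log(`requests currently in flight on this instance: ${inFlight}`);
  try {
    // While this request awaits a slow upstream (e.g. an LLM call), the same
    // process keeps accepting and serving other requests, so the wait is not
    // paid for with a dedicated, otherwise-idle instance per invocation.
    const upstream = await fetch("https://llm.example.com/v1/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: "hello" }),
    });
    res.end(await upstream.text());
  } catch {
    res.statusCode = 502;
    res.end("upstream error");
  } finally {
    inFlight--;
  }
});

server.listen(3000);
```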

Tune in to learn how Fluid could reshape the industry and what it means for developers. 

EPISODE LINKS:



Rootly