Technically Speaking with Chris Wright
Inside distributed inference with llm-d ft. Carlos Costa

Updated: 2025-08-06
Description

Scaling LLM inference for production isn't just about adding more machines; it demands new intelligence in the infrastructure itself. In this episode, we're joined by Carlos Costa, Distinguished Engineer at IBM Research, a leader in large-scale compute and a key figure in the llm-d project. We discuss how to move beyond single-server deployments and build the intelligent, AI-aware infrastructure needed to manage complex workloads efficiently.

Carlos Costa shares insights from his deep background in HPC and distributed systems, including:

• The evolution from traditional HPC and large-scale training to the unique challenges of distributed inference for massive models.
• The origin story of the llm-d project, a collaborative, open-source effort to create a much-needed "common AI stack" and control plane for the entire community.
• How llm-d extends Kubernetes with the specialization required for AI, enabling state-aware scheduling that standard Kubernetes wasn't designed for.
• Key architectural innovations like the disaggregation of prefill and decode stages and support for wide parallelism to efficiently run complex Mixture of Experts (MoE) models.
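To make the prefill/decode disaggregation mentioned above concrete, here is a minimal sketch of the idea. This is a hypothetical illustration, not llm-d's actual API: all class and function names are invented. Prefill is compute-bound (the whole prompt is processed once to build the KV cache), while decode is memory-bandwidth-bound (one token is generated per step against that cache), so splitting the two stages lets each worker pool be sized and scheduled independently.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of prefill/decode disaggregation (not llm-d's API).

@dataclass
class KVCache:
    prompt_tokens: list
    # In a real system this would hold per-layer key/value tensors.
    layers: dict = field(default_factory=dict)

class PrefillWorker:
    """Compute-bound stage: processes the full prompt to build the KV cache."""
    def run(self, prompt: str) -> KVCache:
        tokens = prompt.split()  # stand-in for a real tokenizer
        return KVCache(prompt_tokens=tokens)

class DecodeWorker:
    """Memory-bound stage: reuses the KV cache to emit one token per step."""
    def run(self, cache: KVCache, max_new_tokens: int) -> list:
        out = []
        for i in range(max_new_tokens):
            # Stand-in for one autoregressive step that reads the KV cache.
            out.append(f"<tok{i}>")
        return out

# A state-aware scheduler would route a request to the decode worker that
# already holds a matching cache, rather than load-balancing blindly.
prefill, decode = PrefillWorker(), DecodeWorker()
cache = prefill.run("Explain distributed inference")
generated = decode.run(cache, max_new_tokens=3)
print(generated)  # → ['<tok0>', '<tok1>', '<tok2>']
```

This separation is what makes the state-aware scheduling discussed in the episode matter: once caches live on specific decode workers, placement decisions have to know about that state.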

Tune in to discover how this collaborative, open-source approach is building the standardized, AI-aware infrastructure necessary to make massive AI models practical, efficient, and accessible for everyone.


Carlos Costa, Chris Wright