Technically Speaking with Chris Wright

Building more efficient AI with vLLM ft. Nick Hill

Updated: 2025-07-02

Description

Explore what it takes to run massive language models efficiently with Red Hat's Senior Principal Software Engineer in AI Engineering, Nick Hill. In this episode, we go behind the headlines to uncover the systems-level engineering making AI practical, focusing on the pivotal challenge of inference optimization and the transformative power of the vLLM open-source project.

Nick Hill shares his experiences working in AI, including:

• The evolution of AI optimization, from early handcrafted systems like IBM Watson to the complex demands of today's generative AI.
• The critical role of open-source projects like vLLM in creating a common, efficient inference stack for diverse hardware platforms.
• Key innovations like PagedAttention that solve GPU memory fragmentation and manage the KV cache for scalable, high-throughput performance.
• How the open-source community is rapidly translating academic research into real-world, production-ready solutions for AI.
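To make the PagedAttention point above concrete, here is a minimal, purely illustrative Python sketch (not vLLM's actual implementation) of its core idea: each sequence's KV cache is stored in fixed-size blocks drawn from a shared pool via a per-sequence block table, so memory is claimed on demand rather than pre-allocated contiguously, which is what eliminates most fragmentation. The class names and block size here are assumptions for illustration.

```python
# Illustrative sketch of PagedAttention-style KV-cache management
# (toy model, not vLLM's real code).

BLOCK_SIZE = 16  # token slots per KV-cache block (illustrative value)

class BlockAllocator:
    """Hands out fixed-size KV-cache blocks from a shared free pool."""

    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))

    def allocate(self) -> int:
        if not self.free_blocks:
            raise MemoryError("KV cache exhausted; a scheduler would preempt or swap")
        return self.free_blocks.pop()

    def free(self, blocks: list[int]) -> None:
        self.free_blocks.extend(blocks)


class Sequence:
    """Tracks one request's block table: logical block index -> physical block id."""

    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []
        self.num_tokens = 0

    def append_token(self) -> None:
        # A new block is claimed only when the current one is full, so at most
        # BLOCK_SIZE - 1 slots per sequence are ever wasted, regardless of how
        # long the generation turns out to be.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def release(self) -> None:
        # Finished sequences return their blocks to the shared pool immediately.
        self.allocator.free(self.block_table)
        self.block_table.clear()
        self.num_tokens = 0


allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):  # generate 40 tokens
    seq.append_token()
print(len(seq.block_table))        # 40 tokens at 16 per block -> 3 blocks
seq.release()
print(len(allocator.free_blocks))  # all 64 blocks back in the pool
```

Because every block is the same size and any free block can serve any sequence, throughput scales with how many sequences the pool can hold at once rather than with worst-case per-request reservations.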

Join us to explore the infrastructure and optimization strategies making large-scale AI a reality. This conversation is essential for any technologist, engineer, or leader who wants to understand the how and why of AI performance. You’ll come away with a new appreciation for the clever, systems-level work required to build a truly scalable and open AI future.

Nick Hill, Chris Wright