Building Scalable ML Systems on Kubernetes
Update: 2024-08-15
Description
Summary
In this episode of the AI Engineering podcast, host Tobias Macey interviews Tammer Saleh, founder of SuperOrbital, about the potentials and pitfalls of using Kubernetes for machine learning workloads. The conversation delves into the specific needs of machine learning workflows, such as model tracking, versioning, and the use of Jupyter Notebooks, and how Kubernetes can support these tasks. Tammer emphasizes the importance of a unified API for different teams and the flexibility Kubernetes provides in handling various workloads. Finally, Tammer offers advice for teams considering Kubernetes for their machine learning workloads and discusses the future of Kubernetes in the ML ecosystem, including areas for improvement and innovation.
Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Tammer Saleh about the potentials and pitfalls of using Kubernetes for your ML workloads.
- Introduction
- How did you get involved in Kubernetes?
- For someone who is unfamiliar with Kubernetes, how would you summarize it?
- For the context of this conversation, can you describe the different phases of ML that we're talking about?
- Kubernetes was originally designed to handle scaling and distribution of stateless processes. ML is an inherently stateful problem domain. What challenges does that add for K8s environments?
- What are the elements of an ML workflow that lend themselves well to a Kubernetes environment? (A minimal sketch of running a training step as a Kubernetes Job follows this list.)
- How much Kubernetes knowledge does an ML/data engineer need to get their work done?
- What are the sharp edges of Kubernetes in the context of ML projects?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working with Kubernetes?
- When is Kubernetes the wrong choice for ML?
- What are the aspects of Kubernetes (core or the ecosystem) that you are keeping an eye on which will help improve its utility for ML workloads?
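To make the workflow question above concrete, here is a minimal sketch of submitting a one-off GPU training run as a Kubernetes Job using the official `kubernetes` Python client. This is an illustration, not something prescribed in the episode; the image name, namespace, command, and resource limits are hypothetical placeholders, and the GPU request assumes a cluster with the NVIDIA device plugin installed.

```python
# Minimal sketch: submit a one-off ML training run as a Kubernetes Job.
# Assumes the official `kubernetes` Python client; image, namespace, and
# resource values below are placeholders, not real project settings.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod

container = client.V1Container(
    name="train",
    image="registry.example.com/ml/train:latest",  # hypothetical training image
    command=["python", "train.py", "--epochs", "10"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1", "memory": "16Gi"},  # requires the NVIDIA device plugin
    ),
)

template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"team": "ml"}),
    spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="example-training-run"),
    spec=client.V1JobSpec(template=template, backoff_limit=2),
)

client.BatchV1Api().create_namespaced_job(namespace="ml-team", body=job)
```

The same Job could be written as a YAML manifest and applied with kubectl; the point is simply that a batch training run maps cleanly onto a native Kubernetes primitive, which is part of what makes Kubernetes attractive for this slice of the ML workflow.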
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for ML workloads today?
Links
- SuperOrbital
- CloudFoundry
- Heroku
- 12 Factor Model
- Kubernetes
- Docker Compose
- Core K8s Class
- Jupyter Notebook
- Crossplane
- Ochre Jelly
- CNCF (Cloud Native Computing Foundation) Landscape
- Stateful Set
- RAG == Retrieval Augmented Generation
- Kubeflow
- Flyte
- Pachyderm
- CoreWeave
- Kubectl ("koob-cuddle")
- Helm
- CRD == Custom Resource Definition
- Horovod
- Temporal
- Slurm
- Ray
- Dask
- InfiniBand