nCast: The Cloud Optimization Podcast from nOps
Author: nOps
© nOps
Description
Introducing nCast, the cloud optimization podcast. Each episode features thought leaders and cloud industry experts sharing their real-world experiences and knowledge about cloud management, FinOps, AWS optimization and more. Listen now for tech news, cloud engineering insights, and anecdotes from engineering leaders on the front lines of cloud innovation.
14 Episodes
Karpenter has achieved GA and is disrupting the autoscaling game, with data pointing to accelerated adoption.
Today, Josh Cypher, DevOps leader at Sonos, joins us to talk about some unexpected byproducts of adopting Karpenter at Sonos. Josh dives into his favorite features and efficiency gains, from node consolidation to better disruption controls.
The public cloud bill is a massive operational expense for tech organizations, yet tracking the success of optimization efforts often frustrates engineers. Josh and James explore these challenges and how to address them effectively.
As big-time Spot adopters, Sonos has unlocked impressive savings (50%?!) by focusing on high-impact, low-overhead strategies. Josh explains how visibility brought them quick wins and paved the way for further optimization across Sonos’s infrastructure.
Plus, Josh and James preview what they’re looking forward to at KubeCon, including key conversations on Kubernetes, AI, and cloud sustainability.
Dr. Haoran Qiu, a fresh PhD from the University of Illinois Urbana-Champaign, joins our host James Wilson, VP of Engineering at nOps. They’re diving into multidimensional autoscaling, an area in which Haoran’s pioneering research is making waves in the Kubernetes community.
Some workloads work better with Horizontal Pod Autoscaler (HPA), others with Vertical Pod Autoscaler (VPA). Running them together can create conflicts, but using only one limits efficiency gains. A Multidimensional Pod Autoscaler solves this dilemma by combining the benefits of both VPA and HPA to dynamically adjust both the number and size of pods.
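To make the idea concrete, here is a toy sketch of a decision that scales along both dimensions at once: given observed CPU demand, it picks a per-pod CPU size (vertical) and a replica count (horizontal) together. This is purely illustrative and not how any actual Kubernetes MPA proposal is implemented; the pod sizes and target utilization are made-up numbers.

```python
import math

def multidimensional_scale(total_cpu_millicores, target_utilization=0.7,
                           pod_sizes=(250, 500, 1000, 2000)):
    """Pick the pod size and replica count whose combined capacity
    keeps utilization at or below the target, minimizing total
    provisioned CPU. Ties go to larger pods (fewer replicas).
    Illustrative only; pod_sizes and target are hypothetical."""
    best = None
    for size in pod_sizes:
        # Each pod can serve size * target_utilization millicores
        # before it exceeds the utilization target.
        effective = int(size * target_utilization)
        replicas = max(math.ceil(total_cpu_millicores / effective), 1)
        capacity = replicas * size
        if best is None or capacity <= best[2]:
            best = (size, replicas, capacity)
    return {"pod_cpu_m": best[0], "replicas": best[1]}
```

For 3000m of observed demand, this sketch lands on nine 500m pods rather than, say, eighteen 250m pods: the same headroom with half the scheduling overhead. A pure HPA would be stuck with whatever pod size was configured; a pure VPA would resize pods without changing their count.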
But is MPA poised to redefine resource optimization? What problems does it solve, and what fresh complexities are involved in its implementation?
Haoran and James dig into these questions while debating traditional heuristic versus Machine Learning approaches, industry versus academia, and other hot topics in Kubernetes.
Listen now to discover if MPA is the holy grail of cloud optimization as we discuss the evolution of autoscaling technologies and their impact on cost, sustainability, and developer experience.
Chapters:
0:00 - 2:20: Haoran Qiu and the state of cloud resource management
2:20-6:00: Historical evolution of autoscaling
6:01 - 10:45: HPA, VPA and Multidimensional Autoscaling
10:46 - 18:50: Challenges of MPA: heuristics versus machine learning
18:51 - 24:20: How to quantify excess capacity?
24:21 - 32:16: The state of ML in autoscaling
32:16 - 37:37: Operationalizing ML in production environments
37:37 - 42:01: The near-term future of autoscaling
Tech thought leader and host of the Kubernetes Unpacked podcast Kristina Devochko joins nCast today to talk all things cloud cost optimization, Kubernetes and green tech.
We start by talking about the fact that many companies aren’t even using HALF of their compute resources. But does slashing your AWS bill necessarily mean that you’re saving the planet? We delve into cost optimization and how it aligns (or not) with sustainability.
Kristina shares her insights on measuring your cloud carbon footprint and the tools you need (KEDA, Karpenter, Kepler) to increase cloud sustainability. We discuss key practical ways to get started cutting unnecessary cloud waste, from eliminating orphaned resources to scheduling during off hours.
Plus, we're revealing how nOps has managed to run our production on Spot instances — talk about recycling!
0:00 - 1:09: Introduction
1:10 - 4:20: Sustainability at Kubecon Europe and other recent events
4:21 - 9:31: Is cost optimization the same as sustainability?
9:32 - 12:53: Green data centers and your carbon footprint
12:54 - 15:21: Portability and the downsides of over-committing to pricing plans
15:22 - 19:51: Measuring your organization’s cloud sustainability
19:53 - 26:51: KEDA, Karpenter, Kepler and the tools you need
26:52 - 31:12: Leveraging available Spot capacity and choosing instances
31:13 - 37:15: Running production environments on Spot
37:15 - 44:46: Continual rightsizing and automated tools
44:47 - 48:18: Carbon-efficient Karpenter scaling
Show notes
GitHub issue for proposal of carbon-efficient design to Karpenter that needs some community support
Kepler project
Carbon-aware KEDA operator
Cloud Carbon Footprint open source tool
BoaviztAPI open source API for environmental impacts of ICT
APIs that provide electricity data, data on carbon emissions and electricity sources: https://app.electricitymaps.com and https://watttime.org
CNCF TAG Environmental Sustainability
Contact Kristina Devochko
Kristina Devochko’s Tech blog
Today we’re joined by Wade Piehl, Senior FinOps Success Manager on the Optics Team at AWS, to discuss all things On Demand, Reserved, and Spot.
Good purchasing decisions have an enormous impact on your bottom line — but how do you know what and how much to buy? We walk through the step-by-step decision-making process and the most common pitfalls that can cost you.
Wade shares all the practical advice you need to decide between Reserved Instances vs. Savings Plans. Find out how the equation changes from compute spend to database, storage and networking. We’ll also dig into strategies like layering purchase plans, leveraging Spot, and other methods of maximizing your savings.
Today we’re joined by Savanna Jensen, Senior FinOps Success Manager on the Optics Team at AWS, to discuss how to implement a Cloud FinOps automation strategy. The less “people time” you have to dedicate to cloud management and the more automation you can bake into the system, the easier it will be — but what are the right tools to use?
We start out by tapping Savanna’s insider knowledge on the latest and greatest AWS Cost Management tools. Get the latest on the shiny new updates to Cost Explorer that just launched. Plus, pro tips on the best filters and features to use for various use cases when it comes to the Cost and Usage Report (CUR), Cost Explorer, and QuickSight.
We dive into Engineering vs. FinOps perspectives and frustrations. How can you orchestrate a culture where cost optimization feels motivating rather than punitive to Engineers (recognition, career rewards, gamification…)? How do you troubleshoot if you’re a FinOps leader seeing zero traction on your initiatives?
Hear real-world battle stories about organizations tackling cost management challenges — and the takeaways for achieving true visibility and control over cloud costs. And stay tuned next week for Part 2 with the AWS Optics team.
In Episode 9, we’re joined by AI expert Marcos Heidemann to pull the curtain back on GenAI and whether the hype means we’re at the cusp of a massive transformation. Will it automate away our jobs in a dystopian future, or is it just glorified autocomplete?
Our panelists do a technical dive into new technologies like ChatGPT and LangChain. How does GenAI think, and what does that mean for building commercial products on it?
We’ll give you a sneak peek into nOps’s firsthand experience using GenAI to solve real-world cost optimization problems in novel ways — and what we’re currently in the process of rolling out.
The conversation wraps up with hot takes on the current state of AI technology and where it’s going. What do ChatGPT 5, 6, 7, and their competitors look like? What’s the moonshot for GenAI over the next 2-5 years?
Listen now to find out.
In Episode 8, we’re joined by AWS Partner Solutions Architects Andrew Park and Mike McDonald to discuss the complexities and cost of running today’s ML and AI workloads on the cloud.
From anecdotes of the bad old days before container orchestration, our panelists take you to the present challenge of how to simplify efficient infrastructure operation — with the aim of freeing up Data Scientists and Engineers to focus on building and innovating.
Our panelists discuss the merits, pitfalls, and potential of various cost-optimizing tools and approaches (Ray, Karpenter, Spot, timeslicing) — key to addressing the demand for the expensive computing power generated by ML and AI models at scale.
Watch the full episode for:
The lowdown on AWS Bedrock and where it fits into the current stack of the latest AWS ML and AI offerings — how it works, use cases, the access it grants to new generative AI models
How Karpenter can make your life easy and save you SO much money (especially if you set-it-and-forget-it with nKS)
And hot takes on the controversial question: is ECS dead?!
We’re thrilled to host Marit Hughes, a Specialist Master for Government and Public Sector at Deloitte. After debating the best FinOps conferences to attend this year, we tackle the question: with billions and billions of dollars being spent on cloud resources, why is it so hard to make cost optimization actually happen?
And as we navigate the unique maze of public sector cloud optimization, get the inside scoop on why it's a different beast from commercial and the conversations taking place behind the scenes.
We discuss the realities of engineering life in the trenches — from lack of tooling and visibility to 18-hour days flooded by JIRA tickets. (Plus, bonkers things we’ve seen in bills). Whether you’re in public sector or private, how do you make cost management a lot less painful for engineers?
The other side of the question is where and how FinOps practitioners should insert themselves to execute and achieve quantifiable results. (Hint: telling engineers their baby is ugly isn’t going to help). Marit reveals the secret sauce for turning defensive conversations into collaborative ones — find out how to unruffle feathers, bridge Engineering and Finance, and deflect the default reaction of “I can’t do that”.
Today we’re joined by Sanjna Srivatsa of VMware to discuss the complexities of multicloud enterprise cost management. The already complex business of reservation management gets infinitely more difficult when hundreds of teams are all making individual decisions and using different platforms.
Find out how VMware overcame these difficulties to achieve one of the best effective savings rates in the industry. Learn all about the unique features of VMware’s cloud cost management approach, from equitable chargebacks to hierarchical forecasting to in-house reservation management systems.
We tackle the age-old debate: should enterprises build or buy? We talk VMware’s acquisition of CloudHealth and the differences in mindset from hypergrowth startup mode to large enterprise.
And finally, peek into the crystal ball as we discuss some promising advances awaiting teams in the realm of multicloud workloads.
Arnold de Leon joins to talk about cloud economics. The cloud can be great, but only if you embrace elasticity — if you’re not actively using a resource, you shouldn’t have it. However, navigating the complexities of cloud costs, from network fees to Savings Plans, is often easier said than done. We'll discuss common pitfalls, epic fails, and how to fix them.
And because we love a good plot twist, we’re diving into why engineers should be taking more risks when it comes to cost optimization. Sometimes you have to spend a little to save a lot; opportunity cost is a thing.
We dive into the present and future of cost allocation tooling. Showbacks are crucial if you want to match usage to actual spend. But how do you get showbacks across the dimensions that are important to engineering, rather than finance? And: once you have the right insights, how do you get engineers to take action on them? Listen now to find out.
In today’s episode, we’re joined by Tim Cassell, VP of Product at nOps, to talk about the future of FinOps.
What are the biggest obstacles that engineering teams encounter today? How are AI and ML going to influence cost allocation and forecasting, automation, and day-to-day operations? And what does the evolving Spot market mean for teams?
We debate these questions and share hot takes on ChatGPT, ClickOps, shifting left in the development process, and other trends we see (or don’t see) on the horizon. What's the moonshot over the next few years? Some ideas may seem far-fetched today but could be the standard sooner than you think.
Karpenter is a powerful tool for optimizing EKS workloads and improving compute utilization in real-time. But what exactly is Karpenter, and how does it compare to the more traditional Cluster Autoscaler?
We explore the lay of the land and the limitations of the Cluster Autoscaler’s Spot support, and discuss some of the primary problems that EKS users face today. These include custom configurations for different workload types, adopting Spot, optimizing EC2 usage through node packing and instance selection, and addressing latency issues.
Fortunately, Karpenter makes it possible for users to enjoy improved availability, lower compute costs, and minimized operational overhead. But how easy is it to migrate to Karpenter? Our experts share their best practices and tips for a successful migration, including the use of Blueprints and EKS add-ons.
Don't miss out on this informative episode that's packed with insights and practical tips from our experts. Tune in now to learn more about Karpenter and how it can benefit your EKS workload optimization needs.
Engineers are good at building and innovating, but keeping track of AWS pricing plans is a whole different story. And should they even have to? We’re joined by JT Giri and Andrew Lewis to discuss why long-term spending commitments are so challenging for engineers.
Commitments can last 2-3 years, and they’re bucketed by the hour. We’ll dive into common misconceptions, pitfalls, and all the ways it can get complicated. Ever found yourself locked into OpenSearch when Databricks and Druid come along? We’ll share advice for what you should do and how to avoid backing yourself into a corner.
Plus, we’ll talk through what to do when the CFO’s yelling and management comes along to demand cuts. What tools do you have at your disposal to do that? We’ll share some battle stories of what we did when facing the heat, what went right, what went wrong, and what you can learn from it.
And the essential question: how do you free engineers from the shackles of all this complexity so they can make great things? Spoiler alert: there’s a game-changer in the mix.
We get into advanced tooling like RI automation, Spot automation, variable workloads, and hacks for making it all easier.
Let's dive into everything Kubernetes with Andrew Lewis, Principal Cloud Optimization Architect, alongside nCast's host, James Wilson, VP of Engineering at nOps.
We’ll journey from the stone age of manual config mayhem with Packer, Chef, and Vagrant… until along came Docker and the dawn of the Container Orchestration Wars.
Our speakers unravel the past, present and future of EKS cost optimization. When engineers are measured by the number of 99.99s that they deliver, how do you make cost optimization less punitive? Why is everyone so afraid of Spot?
Find out which tools and strategies can come to the rescue, from Kubecost to binpacking to VPA hacks. But beware the hidden traps — like when the most optimal node sizing decision unwittingly explodes your Datadog bill.
Tune in to hear straight from the engineering leaders and luminaries out there in the field, in the mines, doing the work and innovating with EKS.