The New Stack Podcast

Author: The New Stack

Subscribed: 660 | Played: 37,062

Description

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.

For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
628 Episodes
Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and steepening the learning curve for IT administrators.

To address these issues, organizations are turning to platform engineering and internal developer platforms that abstract infrastructure complexity and provide standardized “golden paths” for deployment. Integrated development environments (IDEs) further reduce friction by embedding capabilities like observability and security. Nutanix contributes through its hyperconverged platform, which unifies compute and storage while supporting both VMs and containers. At KubeCon North America, Nutanix announced version 2.0 of Nutanix Data Services for Kubernetes (NDK), adding advanced data protection, fault-tolerant replication, and enhanced security through a partnership with Canonical to deliver a hardened operating system for Kubernetes environments.

Learn more from The New Stack about operational complexity in cloud native environments:
- Q&A: Nutanix CEO Rajiv Ramaswami on the Cloud Native Enterprise
- Kubernetes Complexity Realigns Platform Engineering Strategy
- Platform Engineering on the Brink: Breakthrough or Bust?
GPUs dominate today’s AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x better price-performance.

In this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price-performance as they scale.

Importantly, many AI tasks, such as inference for smaller models or batch-oriented jobs, do not require GPUs. CPUs can be more efficient when GPU memory is underutilized or latency demands are low. By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs; a sketch of what that targeting looks like in Kubernetes follows below the links.

Learn more from The New Stack about the Axion-based C4A:
- Beyond Speed: Why Your Next App Must Be Multi-Architecture
- Arm: See a Demo About Migrating a x86-Based App to ARM64
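On GKE, or any Kubernetes cluster with Arm node pools, steering a CPU-friendly inference job onto Arm nodes can be as simple as a node selector on the standard kubernetes.io/arch label. A minimal sketch, assuming a hypothetical multi-arch image and placeholder resource sizes (none of these names come from the episode):

```yaml
# Pin a small-model inference Deployment to Arm (e.g., Axion-backed) nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: small-model-inference
spec:
  replicas: 3
  selector:
    matchLabels:
      app: small-model-inference
  template:
    metadata:
      labels:
        app: small-model-inference
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64            # schedule onto Arm nodes only
      containers:
      - name: server
        image: registry.example.com/cpu-llm-server:latest  # placeholder multi-arch image
        resources:
          requests:
            cpu: "4"                         # CPU-only inference; no GPU requested
            memory: 8Gi
```

The same application could be split so latency-sensitive or large-model paths keep GPU nodes while batch paths run on cheaper Arm CPU capacity.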
Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge, most notably silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM’s Thanos Matzanas and Martin Fuentes argue that the challenge isn’t new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.

The speakers stress that many existing observability and governance practices still apply. Standard metrics, KPIs, SLOs, access controls, and audit logs remain essential foundations, even as AI introduces non-determinism and a heavier reliance on human feedback to assess quality. Tools like OpenTelemetry provide common ground (a minimal pipeline is sketched below the links), but culture matters more than tooling.

Both emphasize starting with business value and breaking down silos early by involving data teams in production discussions. Rather than replacing observability professionals, AI should augment human expertise, especially in critical systems where trust, safety, and compliance are paramount.

Learn more from The New Stack about AI and data silos:
- Are Your AI Co-Pilots Trapping Data in Isolated Silos?
- Break the AI Gridlock at the Intersection of Velocity and Trust
- Taming AI Observability: Control Is the Key to Success
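As a concrete anchor for the OpenTelemetry point: a single Collector configuration can give data science and operations teams one shared ingestion path for traces from both conventional services and LLM-backed ones. A minimal sketch, with the exporter left as a stand-in for whatever backend an organization actually uses:

```yaml
# OpenTelemetry Collector: receive OTLP, batch, and export.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}          # batch telemetry to reduce export overhead

exporters:
  debug: {}          # prints to stdout; swap in your real backend exporter

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```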
Rob Whiteley, CEO of Coder, argues that the biggest winners in today’s AI boom resemble the “picks and shovels” sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on The New Stack Makers at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he’s seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying “no” is no longer viable.

Whiteley warns of a widening gap between organizations that extract real value from AI and those that don’t, driven by skills shortages and insufficient investment in training. He sees parallels with the cloud-native transition and predicts the rise of “AI-native” companies. As agentic AI grows, developers increasingly act as managers overseeing many parallel AI agents, creating new challenges around governance, security, and state management. To address this, Coder introduced Mux, an open source coding agent multiplexer designed to help developers manage and evaluate large volumes of AI-generated code efficiently.

Learn more from The New Stack about AI parallelization:
- The Production Generative AI Stack: Architecture and Components
- Enable Parallel Frontend/Backend Development to Unlock Velocity
Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails, a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.

DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations (an illustrative request appears below the links). Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes.

Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules, ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes’ AI trajectory for the next decade and encouraged community involvement.

Learn more from The New Stack about dynamic resource allocation:
- Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads
- Kubernetes v1.34 Introduces Benefits but Also New Blind Spots
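To make the contrast with plain GPU counts concrete, here is a minimal sketch of a DRA request against the resource.k8s.io/v1 API that went GA in Kubernetes 1.34. The gpu.example.com device class and the model attribute are hypothetical stand-ins for whatever a vendor’s DRA driver actually publishes:

```yaml
# Claim one GPU of a specific model, rather than "any 1 GPU".
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: one-a100
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com     # defined by the vendor's DRA driver
        selectors:
        - cel:
            expression: device.attributes["gpu.example.com"].model == "a100"
---
# The pod references the claim instead of a numeric GPU limit.
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: one-a100
  containers:
  - name: main
    image: registry.example.com/trainer:latest   # placeholder image
    resources:
      claims:
      - name: gpu                                # binds the container to the claim
```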
At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams. He envisions developers evolving into “cognitive architects,” responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the “AI guardian,” reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.

Salvador also described GitLab’s “AI paradox”: developers may code faster with AI, but overall productivity stalls because testing, security, and compliance processes haven’t kept pace. To fix this, he argues organizations must apply AI across the entire development lifecycle, not just in coding. GitLab’s Duo Agent Platform aims to support that end-to-end transformation.

Looking ahead, Salvador predicts the rise of a proactive “meta agent” that functions like a full team member. Still, he warns that enterprise adoption remains slow and advises organizations to start small, build skills, and scale gradually.

Learn more from The New Stack about the evolving role of “cognitive architects”:
- The Engineer in the AI Age: The Orchestrator and Architect
- The New Role of Enterprise Architecture in the AI Era
- The Architect’s Guide to Understanding Agentic AI
Jonathan Bryce, the new CNCF executive director, argues that inference, not model training, will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability, all strengths of the CNCF ecosystem.

Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models. To bring consistency to this fast-moving space, the CNCF launched a Kubernetes AI Conformance Program, ensuring environments support GPU workloads and Dynamic Resource Allocation.

With AI agents poised to multiply inference demand by executing parallel, multi-step tasks, efficiency becomes essential. Bryce predicts that smaller, task-specific models and cloud-native routing optimizations will drive major performance gains. Ultimately, he sees CNCF technologies forming the foundation for what he calls “the biggest workload mankind will ever have.”

Learn more from The New Stack about inference:
- Confronting AI’s Next Big Challenge: Inference Compute
- Deep Infra Is Building an AI Inference Cloud for Developers
The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom’s vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.

In an interview with The New Stack, Broadcom leaders Dilpreet Bindra and Himanshu Singh explained that the program applies lessons from Kubernetes’ early evolution, aiming to reduce the “muddiness” in AI tooling and improve cross-platform interoperability. They emphasized portability as a core value: organizations should be able to move AI workloads between public and private clouds with minimal friction.

VKS integrates tightly with vSphere, using Kubernetes APIs directly to manage infrastructure components declaratively. This approach, along with new add-on management capabilities, reflects Kubernetes’ growing maturity. According to Bindra and Singh, this stability now enables enterprises to trust Kubernetes as a foundation for production-grade AI.

Learn more from The New Stack about Broadcom’s latest updates with Kubernetes:
- Has VMware Finally Caught Up with Kubernetes?
- VMware VCF 9.0 Finally Unifies Container and VM Management
The etcd project, a distributed key-value store older than Kubernetes, recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.

To rebuild confidence in etcd’s correctness, the new maintainer team introduced “robustness testing,” creating a framework inspired by Jepsen to validate both basic and distributed-system behavior. Their goal was to ensure linearizability, the “Holy Grail” of distributed systems, which required developing custom failure-injection tools and teaching the community how to debug complex scenarios.

The team later partnered with Antithesis to apply deterministic simulation testing, enabling fully reproducible execution paths and easier detection of subtle race conditions. This approach helped codify implicit knowledge into explicit properties and assertions. Siarkowicz emphasized that such rigorous testing is essential for safeguarding the sensitive “core” of large open source projects, ensuring correctness even as maintainers change.

Learn more from The New Stack about the etcd project:
- Tutorial: Install a Highly Available K3s Cluster at the Edge
Helm, which began as a hackathon project called “K8s Place” (a playful take on “K8s,” pronounced “Kate’s Place”), turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. The early project, created by Matt Butcher and colleagues, won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the early projects to graduate from the CNCF.

Helm 4 reflects years of accumulated design debt and evolving use cases. After the rapid iterations of Helm 1, 2, and 3, the latest version modernizes logging, improves dependency management, and introduces WebAssembly-based plugins for cross-platform portability, addressing the growing diversity of operating systems and architectures. Beyond headline features, maintainers emphasize that mature projects increasingly deliver “boring” but essential improvements, such as better logging, which simplify workflows and integrate more cleanly with other tools. Helm’s re-architected internals also lay the foundation for new chart and package capabilities in upcoming 4.x releases.

Learn more from The New Stack about Helm:
- The Super Helm Chart: To Deploy or Not To Deploy?
- Kubernetes Gets a New Resource Orchestrator in the Form of Kro
Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can’t enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar’s clarity (nontechnical users can often understand policies at a glance; see the sketch below the links), as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.

Now onboarding to the CNCF sandbox, Cedar is used by companies like Cloudflare and MongoDB and offers language-agnostic tooling, including a Go implementation donated by StrongDM. The project is actively seeking contributors, especially to expand bindings for languages like TypeScript, JavaScript, and Python.

Learn more from The New Stack about Cedar:
- Ceph: 20 Years of Cutting-Edge Storage at the Edge
- The Cedar Programming Language: Authorization Simplified
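As an illustration of that readability, here is a small Cedar policy. The entity and action names are invented for this sketch, not taken from any particular Kubernetes integration’s schema; the point is that the rule carries a condition, something RBAC alone cannot express:

```cedar
// Let members of the "viewers" group get or list resources,
// but only inside the "default" namespace. Cedar also supports
// explicit forbid rules, which RBAC has no equivalent for.
permit (
  principal in Group::"viewers",
  action in [Action::"get", Action::"list"],
  resource
)
when { resource.namespace == "default" };
```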
JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay’s QuantStack team, it allows math and programming lessons to run entirely in students’ browsers, kernel included, without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.

QuantStack, founded in 2016 as what Corlay calls an “accidental startup,” has since grown into a 30-person team contributing across Jupyter, Conda-Forge, and Apache Arrow. But JupyterLite embodies its most ambitious goal: making programming education accessible to countries with rapidly growing youth populations, such as Nigeria, where traditional cloud-hosted notebooks are impractical. Achieving a billion-user future will require advances in accessibility, collaboration, and expanding browser-based package support, efforts that depend on grants and foundation backing.

Learn more from The New Stack about Project Jupyter:
- From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI
- Jupyter AI v3: Could It Generate an ‘Ecosystem of AI Personas?’
AWS’s approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefanik, Senior Manager of Product Management for EKS and ECR, today’s users increasingly represent the late majority: teams that want Kubernetes without managing every component themselves. In a conversation on The New Stack Makers, Stefanik described how AI workloads are reshaping Kubernetes operations and why AWS open sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names, not simple API mirrors, made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows. AWS also introduced a hosted knowledge base built from years of support cases to power more capable agents.

While “agentic AI” gets plenty of buzz, most customers still rely on human-in-the-loop workflows. Stefanik expects that to shift, predicting 2026 as the year agentic workloads move into production. For experimentation, he recommends the open source Strands SDK. Internally, he has already seen major productivity gains from BI agents that automate complex data analysis tasks.

Learn more from The New Stack about Amazon Web Services’ approach to Elastic Kubernetes Service:
- How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)
- A Deep Dive Into Amazon EKS Auto (Part 2)
At KubeCon + CloudNativeCon 2025 in Atlanta, a panel of experts (Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat) explored whether the cloud native era has evolved into an AI native era, and what that shift means for infrastructure, security and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon’s Kate Goldenring highlighted WebAssembly as a way to bundle and securely deploy models directly to GPU-equipped hardware, reducing latency while adding sandboxed security.

Dynatrace’s Sean O’Dell noted that AI dramatically increases observability needs: integrating LLM-based intelligence adds value but also expands the challenge of filtering massive data streams to understand user behavior. Meanwhile, Mirantis CTO Shaun O’Meara emphasized a return to deeper infrastructure awareness. Unlike abstracted cloud native workloads, AI workloads running on GPUs require careful attention to hardware performance, orchestration, and energy constraints. Managing power-hungry data centers efficiently, he argued, will be a defining challenge of the AI native era.

Learn more from The New Stack about the evolution of the cloud native ecosystem into an AI native era:
- Cloud Native and AI: Why Open Source Needs Standards Like MCP
- A Decade of Cloud Native: From CNCF, to the Pandemic, to AI
- Crossing the AI Chasm: Lessons From the Early Days of Cloud
AWS re:Invent has long featured CTO Werner Vogels’ closing keynote, but this year he signaled it may be his last, emphasizing it’s time for “younger voices” at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon’s technical scale, an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year’s five themes focused heavily on AI and broader societal impacts.

Vogels highlights technology’s growing role in addressing loneliness, noting how devices like Alexa can offer comfort to those who feel isolated. He foresees a “Renaissance developer,” where engineers must pair deep expertise with broad business and creative awareness. He warns that quantum-safe encryption is becoming urgent, as data harvested today may be decrypted within five years. Military innovations, he notes, continue to influence civilian tech, for better and worse. Finally, he argues personalized learning can preserve children’s curiosity and better support teachers, which he views as essential for future education.

Learn more from The New Stack about the evolving role of technology systems from past to future:
- Werner Vogels’ 6 Lessons for Keeping Systems Simple
- 50 Years Later: Remembering How the Future Looked in 1974
DevOps practitioners, whether developers, operators, SREs or business stakeholders, increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.

AI’s rapid integration into applications introduces new observability challenges. Yackenovich described two patterns: add-on AI services, such as chatbots, whose failures don’t disrupt core workflows, and blocking-style AI components embedded in essential processes like fraud detection, where errors directly affect application function.

Rising cloud and ingestion costs further complicate telemetry strategies. Yackenovich cautioned against limiting visibility for budget reasons, advocating instead for predictable, fixed-price observability models that let organizations innovate without financial uncertainty.

Learn more from The New Stack about the latest in observability:
- Introduction to Observability
- Observability 2.0? Or Just Logs All Over Again?
- Building an Observability Culture: Getting Everyone Onboard
Major banks once built their own Linux kernels because no distributions existed, but today commercial distros, and Kubernetes, are universal. At KubeCon + CloudNativeCon North America, AWS’s Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.

Butler highlighted two AWS EKS projects donated to Kubernetes SIGs: KRO and Karpenter. KRO addresses the proliferation of custom controllers that emerged once CRDs made everything representable as Kubernetes resources. By generating CRDs and microcontrollers from simple YAML schemas, KRO transforms “glue code” into an automated service within Kubernetes itself. Karpenter tackles the limits of traditional autoscaling by delivering just-in-time, cost-optimized node provisioning with a flexible, intuitive API (a sketch follows below the links). Both projects embody AWS’s evolving philosophy: building features that serve the entire Kubernetes ecosystem as it matures into a true enterprise standard.

Learn more from The New Stack about the latest in Kube Resource Orchestrator and Karpenter:
- Migrating From Cluster Autoscaler to Karpenter v0.32
- How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)
- Kubernetes Gets a New Resource Orchestrator in the Form of Kro
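To give a flavor of that API, here is a minimal Karpenter NodePool sketch following the karpenter.sh/v1 schema. The EC2NodeClass it references and the CPU limit are placeholder values, not configuration discussed in the episode:

```yaml
# Provision spot or on-demand nodes just in time, on either CPU architecture,
# and consolidate underutilized capacity automatically.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["spot", "on-demand"]
      - key: kubernetes.io/arch
        operator: In
        values: ["amd64", "arm64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default          # assumed to exist; cloud-specific settings live there
  limits:
    cpu: "1000"                # cap total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```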
Clockwork began with a narrow goal, keeping clocks synchronized across servers, but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical.

CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. Its flagship feature, FleetIQ, can reroute traffic around failing switches, preventing costly interruptions that might otherwise force teams to restart training from hours-old checkpoints. Although the company originated from Stanford research focused on clock synchronization for financial institutions, the team eventually recognized that packet-timing data could underpin powerful network telemetry and dynamic traffic control. By integrating with NVIDIA NCCL, TCP and RDMA libraries, Clockwork can not only measure congestion but also actively manage GPU communication to enhance both uptime and training efficiency.

Learn more from The New Stack about the latest in Clockwork:
- Clockwork’s FleetIQ Aims To Fix AI’s Costly Network Bottleneck
- What Happens When 116 Makers Reimagine the Clock?
At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space: needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. Jupyter Deploy simplifies this by orchestrating an entire encrypted stack (using Docker, Terraform, OAuth2, and Let’s Encrypt) with minimal setup, removing the need to manually manage 15–20 cloud components. While it offers an easy on-ramp, Guinegagne notes that long-term use still requires some cloud understanding.

Built by AWS’s AI Open Source team but deliberately vendor-neutral, it uses a template-based approach, enabling community-contributed deployment recipes for any cloud. Led by Brian Granger, the project aims to join the official Jupyter ecosystem, with future plans including Kubernetes integration for enterprise scalability.

Learn more from The New Stack about the latest in Jupyter AI development:
- Introduction to Jupyter Notebooks for Developers
- Display AI-Generated Images in a Jupyter Notebook
In an interview at JupyterCon, Brian Granger, co-creator of Project Jupyter and senior principal technologist at AWS, reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning.

Now, AI is accelerating development itself: Granger described rewriting Jupyter Server in Go, complete with tests, in just 30 minutes using an AI coding agent, a task once considered impossible. This shift challenges traditional notions of technical debt and could reshape how large open source projects evolve. Jupyter’s 2017 ACM Software System Award placed it among computing’s greats, but also underscored its global responsibility. Granger emphasized that sustaining Jupyter’s mission (empowering human reasoning, collaboration, and innovation) remains the team’s top priority in the AI era.

Learn more from The New Stack about the latest in Jupyter AI development:
- Introduction to Jupyter Notebooks for Developers
- Display AI-Generated Images in a Jupyter Notebook