MLOps.community

Author: Demetrios


Description

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)
502 Episodes
March 3rd, Computer History Museum: CODING AGENTS CONFERENCE. Come join us while there are still tickets left: https://luma.com/codingagents

Chris Fregly is currently focused on building and scaling high-performance AI systems, writing and teaching about AI infrastructure, helping organizations adopt generative AI and performance engineering principles on AWS, and fostering large developer communities around these topics.

Performance Optimization and Software/Hardware Co-design across PyTorch, CUDA, and NVIDIA GPUs // MLOps Podcast #363 with Chris Fregly, Founder, AI Performance Engineer, and Investor

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
In today’s era of massive generative models, it's important to understand the full scope of AI systems performance engineering. This talk discusses the new O'Reilly book, AI Systems Performance Engineering, and the accompanying GitHub repo (https://github.com/cfregly/ai-performance-engineering), giving engineers, researchers, and developers a set of actionable optimization strategies. You'll learn techniques to co-design and co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems for both training and inference.

// Bio
Chris Fregly is an AI performance engineer and startup founder with experience at AWS, Databricks, and Netflix. He's the author of three O'Reilly books: Data Science on AWS (2021), Generative AI on AWS (2023), and AI Systems Performance Engineering (2025). He also runs the global AI Performance Engineering meetup and speaks at many AI-related conferences, including NVIDIA GTC, ODSC, Big Data London, and more.

// Related Links
AI Systems Performance Engineering: Optimizing Model Training and Inference Workloads with GPUs, CUDA, and PyTorch, 1st Edition, by Chris Fregly: https://www.amazon.com/Systems-Performance-Engineering-Optimizing-Algorithms/dp/B0F47689K8/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Chris on LinkedIn: /cfregly

Timestamps:
[00:00] SageMaker HyperPod Resilience
[00:27] Book Creation and Software Engineering
[04:57] Software Engineers and Maintenance
[11:49] AI Systems Performance Engineering
[22:03] Cognitive Biases and Optimization / "Mechanical Sympathy"
[29:36] GPU Rack-Scale Architecture
[33:58] Data Center Reliability Issues
[43:52] AI Compute Platforms
[49:05] Hardware vs Ecosystem Choice
[1:00:05] Claude vs Codex vs Gemini
[1:14:53] Kernel Budget Allocation
[1:18:49] Steerable Reasoning Challenges
[1:24:18] Data Chain Value Awareness
Roundtable CAST AI episode: Serving LLMs in Production: Performance, Cost & Scale

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Experimenting with LLMs is easy. Running them reliably and cost-effectively in production is where things break. Most AI teams never make it past demos and proofs of concept. A smaller group is pushing real workloads to production, and running into very real challenges around infrastructure efficiency, runaway cloud costs, and reliability at scale. This session is for engineers and platform teams moving beyond experimentation and building AI systems that actually hold up in production.

// Bio
Ioana Apetrei
Ioana is a Senior Product Manager at CAST AI, leading the AI Enabler product, an AI Gateway platform for cost-effective LLM infrastructure deployment. She brings 12 years of experience building B2C and B2B products reaching over 10 million users. Outside of work, she enjoys assembling puzzles and LEGOs and watching motorsports.

Igor Šušić
Igor is a founding Machine Learning Engineer at CAST AI’s AI Enabler, where he focuses on optimizing inference and training at scale. With a strong background in Natural Language Processing (NLP) and Recommender Systems, Igor has been tackling the challenges of large-scale model optimization since long before transformers became mainstream. Prior to CAST AI, he worked at industry leaders like Bloomreach and Infobip, where he contributed to the development and deployment of large-scale AI and personalization systems from the early days of the field.

// Related Links
Website: https://cast.ai/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Ioana on LinkedIn: /ioanaapetrei/
Connect with Igor on LinkedIn: /igor-%C5%A1u%C5%A1i%C4%87/
Rahul Raja is a Staff Software Engineer at LinkedIn, working on large-scale search infrastructure, information retrieval systems, and integrating AI/ML to improve ranking and semantic search experiences.

The Future of Information Retrieval: From Dense Vectors to Cognitive Search // MLOps Podcast #362 with Rahul Raja, Staff Software Engineer at LinkedIn

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Information Retrieval is evolving from keyword matching to intelligent, vector-based understanding. In this talk, Rahul Raja explores how dense retrieval, vector databases, and hybrid search systems are redefining how modern AI retrieves, ranks, and reasons over information. He discusses how retrieval now powers large language models through Retrieval-Augmented Generation (RAG) and the new MLOps challenges that arise: embedding drift, continuous evaluation, and large-scale vector maintenance. Looking ahead, the session envisions a future of Cognitive Search, where retrieval systems move beyond recall to genuine reasoning, contextual understanding, and multimodal awareness. Listeners will gain insight into how the next generation of retrieval will bridge semantics, scalability, and intelligence, powering everything from search and recommendations to generative AI.

// Bio
Rahul is a Staff Engineer at LinkedIn, where he focuses on search and deployment systems at scale. A graduate of Carnegie Mellon University, he has a strong background in building reliable, high-performance infrastructure and has led many initiatives to improve search relevance and streamline ML deployment workflows.

// Related Links
Website: https://www.linkedin.com/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rahul on LinkedIn: /rahulraja963/

Timestamps:
[00:00] Vector Search for Media
[00:33] RAG and Search Evolution
[04:45] Cognitive vs Semantic Search
[08:26] High Value Search Signals
[16:43] Scaling with Embeddings
[22:37] BM25 Benchmark Bias
[29:00] Video Search Use Cases
[31:21] Context and Search Tradeoff
[35:04] Personal Memory Augmentation
[39:03] Future of Cognitive Search
[44:51] Access Control in Vectors
[49:14] Search Ranking Challenge
[54:43] Hard Search Problems Solved
[58:29] Freshness vs Cost
[1:02:12] Wrap up
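The hybrid search idea from this episode's abstract (combining sparse keyword matching with dense vector similarity) can be sketched in a few lines. This is a toy illustration only: the documents and 3-dimensional embeddings are made up, and the keyword score is a crude stand-in for a real BM25 implementation.

```python
# Hybrid retrieval sketch: blend a sparse keyword score with a dense
# cosine-similarity score. Toy data; a production system would use BM25
# plus embeddings from a real model and a vector database.
import math

def keyword_score(query, doc):
    """Fraction of query terms found in the document (stand-in for BM25)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(query, query_emb, docs, alpha=0.5):
    """Rank docs by alpha * dense score + (1 - alpha) * sparse score."""
    scored = []
    for doc_id, (text, emb) in docs.items():
        score = alpha * cosine(query_emb, emb) + (1 - alpha) * keyword_score(query, text)
        scored.append((doc_id, round(score, 3)))
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical corpus: text plus a made-up 3-dimensional embedding.
docs = {
    "d1": ("vector databases for semantic search", [0.9, 0.1, 0.0]),
    "d2": ("classic keyword matching with inverted indexes", [0.1, 0.9, 0.0]),
}
ranking = hybrid_rank("semantic vector search", [1.0, 0.0, 0.0], docs)
print(ranking[0][0])  # d1 wins on both the dense and the sparse signal
```

The `alpha` weight is the usual tuning knob: higher values favor semantic similarity, lower values favor exact term matches.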
Vincent Warmerdam is a Founding Engineer at marimo, working on reinventing Python notebooks as reactive, reproducible, interactive, and Git-friendly environments for data workflows and AI prototyping. He helps build the core marimo notebook platform, pushing its reactive execution model, UI interactivity, and integration with modern development and AI tooling so that notebooks behave like dependable, shareable programs and apps rather than error-prone scratchpads.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Vincent Warmerdam joins Demetrios fresh off marimo’s acquisition by Weights & Biases, and makes a bold claim: notebooks as we know them are outdated. They talk Molab (GPU-backed, cloud-hosted notebooks), LLMs that don’t just chat but actually fix your SQL and debug your code, and why most data folks are consuming tools instead of experimenting. Vincent argues we should stop treating notebooks like static scratchpads and start treating them like dynamic apps powered by AI. It’s a conversation about rethinking workflows, reclaiming creativity, and not outsourcing your brain to the model.

// Bio
Vincent is a senior data professional who has worked as an engineer, researcher, team lead, and educator. You might know him from tech talks that attempt to defend common sense over hype in the data space. He is especially interested in understanding algorithmic systems so that one may prevent failure. As such, he has always had a preference to keep calm and check the dataset before flowing tonnes of tensors. He currently works at marimo, where he spends his time rethinking everything related to Python notebooks.

// Related Links
Website: https://marimo.io/
Coding Agent Conference: https://luma.com/codingagents
Hyperbolic GPU Cloud: app.hyperbolic.ai

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
MLOps GPU Guide: https://go.mlops.community/gpuguide
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Vincent on LinkedIn: /vincentwarmerdam/

Timestamps:
[00:00] Context in Notebooks
[00:24] Acquisition and Team Continuity
[04:43] Coding Agent Conference Announcement!
[05:56] Hyperbolic GPU Cloud Ad
[06:54] marimo and W&B Synergies
[09:31] marimo Cloud Code Support
[12:59] Hardest Code to Generate
[16:22] Trough of Disillusionment
[20:38] Agent Interaction in Notebooks
[25:41] Wrap up
Ereli Eran is the Founding Engineer at 7AI, where he’s focused on building and scaling the company’s agentic AI-driven cybersecurity platform: developing autonomous AI agents that triage alerts, investigate threats, enrich security data, and enable end-to-end automated security operations so human teams can focus on higher-value strategic work.

Software Engineering in the Age of Coding Agents: Testing, Evals, and Shipping Safely at Scale // MLOps Podcast #361 with Ereli Eran, Founding Engineer at 7AI

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
A conversation on how AI coding agents are changing the way we build and operate production systems. We explore the practical boundaries between agentic and deterministic code, strategies for shared responsibility across models, engineering teams, and customers, and how to evaluate agent performance at scale. Topics include production quality gates, safety and cost tradeoffs, managing long-tail failures, and deployment patterns that let you ship agents with confidence.

// Bio
Ereli Eran is a founding engineer at 7AI, where he builds agentic AI systems for security operations and the production infrastructure that powers them. His work spans the full stack, from designing experiment frameworks for LLM-based alert investigation to architecting secure multi-tenant systems with proper authentication boundaries. Previously, he worked in data science and software engineering roles at Stripe and VMware Carbon Black, and was an early employee of Ravelin and Normalyze.

// Related Links
Website: https://7ai.com/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Ereli on LinkedIn: /erelieran/

Timestamps:
[00:00] Language Sensitivity in Reasoning
[00:25] Value of Claude Code
[01:54] AI in Security Workflows
[06:21] Agentic Systems Failures
[12:50] Progressive Disclosure in Voice Agents
[16:39] LLM vs Classic ML
[19:44] Hybrid Approach to Fraud
[25:58] Debugging with User Feedback
[33:52] Prompts as Code
[42:07] LLM Security Workflow
[45:10] Shared Memory in Security
[49:11] Common Agent Failure Modes
[53:34] Wrap up
Nick Gillian is the Co-Founder and CTO at Archetype AI, working on physical AI foundation models that understand and reason over real-world sensor data.

Physical AI: Teaching Machines to Understand the Real World // MLOps Podcast #360 with Nick Gillian, Co-Founder and CTO of Archetype AI

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
As AI moves beyond the cloud and simulation, the next frontier is Physical AI: systems that can perceive, understand, and act within real-world environments in real time. In this conversation, Nick Gillian, Co-Founder and CTO of Archetype AI, explores what it actually takes to turn raw sensor and video data into reliable, deployable intelligence. Drawing on his experience building Google’s Soli and Jacquard and now leading development of Newton, a foundational model for Physical AI, Nick discusses how real-time physical understanding changes what’s possible across safety monitoring, infrastructure, and human–machine interaction. He’ll share lessons learned translating advanced research into products that operate safely in dynamic environments, and why many organizations underestimate the challenges and opportunities of AI in the physical world.

// Bio
Nick Gillian, Ph.D., is Co-Founder and CTO of Archetype AI with over 15 years of experience turning advanced AI and interaction research into real-world products. At Archetype, he leads the AI and engineering teams behind Newton, a first-of-its-kind Physical AI foundational model that can perceive, understand, and reason about the physical world. Before co-founding Archetype, Nick was a Senior Staff Machine Learning Engineer at Google and a researcher at MIT, where he developed AI and ML methods for real-time sensor understanding. At Google’s Advanced Technology and Projects group, he led machine learning research that powered breakthrough products like Soli radar and Jacquard, and helped advance sensing algorithms across Pixel, Nest, and wearable devices.

// Related Links
Website: https://www.archetypeai.io/
https://www.archetypeai.io/blog/timefusion-newton
https://www.nature.com/articles/s41598-023-44714-2
https://www.youtube.com/watch?v=Pow4utY9teU
https://www.youtube.com/watch?v=uE0jjdzwe9w
https://arxiv.org/abs/2410.14724
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Nick on LinkedIn: /nick-gillian-b27b1094/

Timestamps:
[00:00] Physical Agent Framework
[00:56] Physical AI Clarification
[06:53] Building a Repair Model
[12:41] World Models and LLMs
[17:17] Data Weighting Strategies
[24:19] Data Diversity vs Quantity
[38:30] R&D and Product Creation
[41:22] Construction Site Data Shipping
[50:33] Wrap up
Kris Beevers is the CEO at NetBox Labs, working on turning NetBox into the system of record and automation backbone for modern and AI-driven infrastructure.

Speed and Scale: How Today's AI Datacenters Are Operating Through Hypergrowth // MLOps Podcast #359 with Kris Beevers, CEO of NetBox Labs

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Hundreds of neocloud operators and "AI Factory" builders have emerged to serve the insatiable demand for AI infrastructure. These teams are compressing the design, build, deploy, operate, scale cycle of their infrastructures down to months, while managing massive footprints with lean teams. How? By applying modern intent-driven infrastructure automation principles to greenfield deployments. We'll explore how these teams carry design intent through to production, and how operating and automating around consistent infrastructure data is compressing "time to first train".

// Bio
Kris Beevers is the Co-founder and CEO of NetBox Labs. NetBox is used by nearly every neocloud and AI datacenter to manage their networks and infrastructure. Kris is an engineer at heart and by background, and loves the leverage infrastructure innovation creates to accelerate technology and empower engineers to do their best work. A serial entrepreneur, Kris has founded and helped lead multiple other successful businesses in internet and network infrastructure. Most recently, he co-founded and led NS1, which was acquired by IBM in 2023. He holds a Ph.D. in Computer Science from Rensselaer Polytechnic Institute and is based in New Jersey.

// Related Links
Website: https://netboxlabs.com/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Kris on LinkedIn: /beevek/

Timestamps:
[00:00] Observability and Delta Analysis
[00:26] New World Exploration
[04:06] Bottlenecks in AI Infrastructure
[13:37] Data Center Optimization Challenges
[19:58] Tech Stack Breakdown
[25:26] Data Center Design Principles
[31:32] Constraints and Automation in Design
[40:00] Complexity in Data Centers
[45:02] GPU Cloud Landscape
[50:24] Data Centers in Containers
[57:45] Observability Beyond Software
[1:04:43] Tighter Integrations vs NetBox
[1:06:47] Wrap up
Mike Oaten is the Founder and CEO of TIKOS, working on building AI assurance, explainability, and trustworthy AI infrastructure, helping organizations test, monitor, and govern AI models and systems to make them transparent, fair, robust, and compliant with emerging regulations.

Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces // MLOps Podcast #358 with Mike Oaten, Founder and CEO of TIKOS

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
As AI models move into high-stakes environments like defence and financial services, standard input/output testing, evals, and monitoring are becoming dangerously insufficient. To achieve true compliance with the EU AI Act, NIST AI RMF, and other requirements, MLOps teams need to access and analyse the internal reasoning of their models. In this session, Mike introduces the company's patent-pending AI assurance technology that moves beyond statistical proxies. He will break down the architecture of the Synapses Logger, which embeds directly into the neural activation flow to capture weights, activations, and activation paths in real time.

// Bio
Mike Oaten serves as the CEO of TIKOS, leading the company’s mission to progress trustworthy AI through unique, high-performance AI model assurance technology. A seasoned technical and data entrepreneur, Mike brings experience from successfully co-founding and exiting two previous data science startups: Riskopy Inc. (acquired by Nasdaq-listed Coupa Software in 2017) and Regulation Technologies Limited (acquired by mnAi Data Solutions in 2022). Mike's expertise spans data, analytics, and ML product and governance leadership. At TIKOS, Mike leads a VC-backed team developing technology to test and monitor deep-learning models in high-stakes environments, such as defence and financial services, so they comply with stringent new laws and regulations.

// Related Links
Website: https://tikos.tech/
LLM guardrails: https://medium.com/tikos-tech/your-llm-output-is-confidently-wrong-heres-how-to-fix-it-08194fdf92b9
Model Bias: https://medium.com/tikos-tech/from-hints-to-hard-evidence-finally-how-to-find-and-fix-model-bias-in-dnns-2553b072fd83
Model Robustness: https://medium.com/tikos-tech/tikos-spots-neural-network-weaknesses-before-they-fail-the-iris-dataset-b079265c04da
GPU Optimisation: https://medium.com/tikos-tech/400x-performance-a-lightweight-open-source-python-cuda-utility-to-break-vram-barriers-d545e5b6492f
Hyperbolic GPU Cloud: app.hyperbolic.ai
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Mike on LinkedIn: /mike-oaten/

Timestamps:
[00:00] Regulations as Opportunity
[00:25] Regulation Compliance Fun
[02:49] AI Act Layers Explained
[05:19] Observability in Systems vs ML
[09:05] Risk Transfer in AI
[11:26] LLMs and Model Approval
[14:53] LLMs in Finance
[17:17] Hyperbolic GPU Cloud Ad
[18:16] Stakeholder Alignment and Tech
[22:20] AI in Regulated Environments
[28:55] Autonomous Boat Regulations
[34:20] Data Compliance Mapping
[39:11] Data Capture Strategy
[41:13] EU AI Act Insights
[44:52] Wrap up
[45:45] Join the Coding Agents Conference!
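The episode describes capturing activations and activation paths in real time as a model runs. As a rough illustration of that general pattern (and not TIKOS's actual Synapses Logger), here is a toy logger that wraps each layer of a tiny hand-rolled network and records its output; the layer names, weights, and inputs are all made up.

```python
# Hedged sketch: wrap each layer function so that every forward pass
# records its output, keyed by layer name, for later inspection.
# Toy two-neuron network; real systems would hook a framework's layers.

def relu(xs):
    """Elementwise ReLU activation."""
    return [max(0.0, x) for x in xs]

def dense(weights, xs):
    """Fully connected layer: one dot product per output neuron."""
    return [sum(w * x for w, x in zip(row, xs)) for row in weights]

class ActivationLogger:
    """Records the output of every layer it wraps."""
    def __init__(self):
        self.trace = {}

    def wrap(self, name, fn):
        def logged(*args):
            out = fn(*args)
            self.trace[name] = out  # captured activation for analysis
            return out
        return logged

logger = ActivationLogger()
layer1 = logger.wrap("dense1", lambda xs: dense([[1.0, -1.0], [0.5, 0.5]], xs))
act1 = logger.wrap("relu1", relu)

hidden = act1(layer1([2.0, 1.0]))
print(logger.trace["dense1"])  # [1.0, 1.5] — pre-activation outputs
print(hidden)                  # [1.0, 1.5] — both positive, so ReLU is identity here
```

In a framework like PyTorch the same idea is typically expressed with forward hooks rather than hand-wrapped functions; the point is only that the trace is captured inline with the activation flow rather than reconstructed afterwards.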
Paulo Vasconcellos is the Principal Data Scientist for Generative AI Products at Hotmart, working on AI-powered creator and learning experiences, including intelligent tutoring, content automation, and multilingual localization at scale.

Join us at Coding Agents: The AI Driven Developer Conference: https://luma.com/codingagents

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
“Agent as a product” sounds like hype, until Hotmart turns creators’ content into AI businesses that actually work.

// Bio
Paulo Vasconcellos is the Principal Data Scientist for Generative AI Products at Hotmart, where he leads efforts in applied AI, machine learning, and generative technologies to power intelligent experiences for creators and learners. He holds an MSc in Computer Science with a focus on artificial intelligence and is also a co-founder of Data Hackers, a prominent data science and AI community in Brazil. Paulo regularly speaks and publishes on topics spanning data science, ML infrastructure, and AI innovation.

// Related Links
Website: paulovasconcellos.com.br
Coding Agent - Virtual Conference: https://home.mlops.community/home/events/coding-agents-virtual

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
MLOps GPU Guide: https://go.mlops.community/gpuguide
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Paulo on LinkedIn: /paulovasconcellos/

Timestamps:
[00:00] Hotmart Data Science Challenges
[02:38] LLMs vs spaCy
[11:38] Use Cases in Production
[19:04] Coding Agents Virtual Conference Announcement!
[29:27] ML to AI Product Shift
[34:49] Tool-Augmented Agent Approach
[38:28] MLOps GPU Guide
[41:24] AI Use Cases at Hotmart
[49:34] Agent Tool Access Explained
[51:04] MLOps Community Gratitude
[53:22] Wrap up
Wilder Lopes is the CEO and Founder of Ogre.run, working on AI-driven dependency resolution and reproducible code execution across environments.

How Universal Resource Management Transforms AI Infrastructure Economics // MLOps Podcast #357 with Wilder Lopes, CEO / Founder of Ogre.run

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Enterprise organizations face a critical paradox in AI deployment: while 52% struggle to access needed GPU resources with 6-12 month waitlists, 83% of existing CPU capacity sits idle. This talk introduces an approach to AI infrastructure optimization through universal resource management that reshapes applications to run efficiently on any available hardware: CPUs, GPUs, or accelerators. We explore how code reshaping technology can unlock the untapped potential of enterprise computing infrastructure, enabling organizations to serve 2-3x more workloads while dramatically reducing dependency on scarce GPU resources. The presentation demonstrates why CPUs often outperform GPUs for memory-intensive AI workloads, offering superior cost-effectiveness and immediate availability without architectural complexity.

// Bio
Wilder Lopes is a second-time founder, developer, and research engineer focused on building practical infrastructure for developers. He is currently building Ogre.run, an AI agent designed to solve code reproducibility. Ogre enables developers to package source code into fully reproducible environments in seconds. Unlike traditional tools that require extensive manual setup, Ogre uses AI to analyze codebases and automatically generate the artifacts needed to make code run reliably on any machine. The result is faster development workflows and applications that work out of the box, anywhere.

// Related Links
Website: https://ogre.run
https://lopes.ai
https://substack.com/@wilderlopes
https://youtu.be/YCWkUub5x8c?si=7RPKqRhu0Uf9LTql

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Wilder on LinkedIn: /wilderlopes/

Timestamps:
[00:00] Secondhand Data Centers Challenges
[00:27] AI Hardware Optimization Debate
[03:40] LLMs on Older Hardware
[07:15] CXL Tradeoffs
[12:04] LLM on CPU Constraints
[17:07] Leveraging Existing Hardware
[22:31] Inference Chips Overview
[27:57] Fundamental Innovation in AI
[30:22] GPU CPU Combinations
[40:19] AI Hardware Challenges
[43:21] AI Perception Divide
[47:25] Wrap up
Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI.

Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community.

Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.

MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

Shoutout to Databricks for powering this MLOps Podcast episode.

// Abstract
MLflow isn’t just for data scientists anymore, and pretending it is holds teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you’re already behind.

// Bio
Corey Zumar
Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.

Jules Damji
Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).

Danny Chiao
Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor’s Degree in Computer Science from MIT.

// Related Links
Website: https://mlflow.org/
https://www.databricks.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Corey on LinkedIn: /corey-zumar/
Connect with Jules on LinkedIn: /dmatrix/
Connect with Danny on LinkedIn: /danny-chiao/

Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLflow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
Leadership on AI

2026-01-13 · 47:24

Euro Beinat is the Global Head of AI and Data Science at Prosus Group, working on scaling AI-driven tools and agent-based systems across Prosus’s global portfolio, deploying internal assistants like Toqan and generative AI platforms such as PlusOne, and building initiatives like AI House Amsterdam and interdisciplinary AI residencies to explore intent-driven AI and strengthen Europe’s AI ecosystem.Mert Öztekin is the Chief Technology Officer at Just Eat Takeaway.com, working on advancing the company’s platform with AI-driven ordering and personalised user experiences, scaling cloud and generative AI tooling for engineering productivity, and exploring innovative delivery technologies like automation to make ordering and delivery more seamless. Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps GPU Guide: https://go.mlops.community/gpuguide// AbstractAgents sound smart until millions of users show up. A real talk on tools, UX, and why autonomy is overrated.// BioEuro Beinat Euro is a technology executive and entrepreneur specializing in data science, machine learning, and AI. He works with global corporations and startups to build data- and ML-driven products and businesses. His current focus is on Generative AI and the use of AI as a tool for invention and innovation.Mert ÖztekinMert is the current Chief Technology Officer at Just Eat Takeaway.com with previous experience as a CTO at Delivery Hero Germany GmbH, Director of Engineering at Delivery Hero, and IT Manager at yemeksepeti.com. They have a background in software engineering, system-business analysis, and project management, with a master's degree in Computer Engineering. 
Mert has also worked as an IT Project Team Lead and has experience in managing mobile teams and global expansions in the online food ordering industry.// Related LinksWebsite: https://www.prosus.com/Website: https://justeattakeaway.com/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]MLOps GPU Guide: https://go.mlops.community/gpuguideConnect with Demetrios on LinkedIn: /dpbrinkmConnect with Euro on LinkedIn: /eurobeinat/Connect with Mert on LinkedIn: /mertoztekin/Timestamps:[00:00] AI Transformation Challenges[00:29] AI Productivity[04:30] Developer Tool Freedom[09:40] AI Alignment Bottleneck[22:17] Exploring Agent Potential[25:59] Governance of AI Agents[33:24] Shadow AI Governance[40:57] AI Budgeting for Growth[46:27] MLOps GPU Guide announcement!
Zengyi Qin is the Founder of the OpenAGI Foundation, working on computer-use models and open, agent-centric AI infrastructure.Computers that Think and Take Actions for You, Zengyi Qin // MLOps Podcast #355Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps Merch: https://shop.mlops.community/// AbstractWhat if the computer itself can think and take actions for you? You just give it a goal, and it performs every click, type, and drag to get work done across the desktop and web. In this talk, Zengyi reveals the breakthrough technology that his company OpenAGI is developing: AI that can use computers like humans do. He talks about how his team developed the model, why it outperforms similar models from OpenAI and Google, and its wide use cases across different domains. // Related LinksWebsite: https://www.qinzy.tech/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Zengyi on LinkedIn: /qinzy/Timestamps:[00:00] AI and Human Interaction[00:30] Zengyi's story[06:30] Bigger Models Are Lazy[08:19] Why Expensive Models Lost[10:24] Training Computer-Use vs LLMs[13:53] World Models and Sandboxes[19:42] Dealing with Non-Stationary States[23:56] Training with Software[26:44] Sandbox Training Process[41:33] Infrastructure for Computer Models[44:36] Wrap up
Varant Zanoyan is the Co-founder & CEO at Zipline AI, working on building a next-generation AI/ML infrastructure platform that streamlines data pipelines, model deployment, observability, and governance to accelerate enterprise AI development. Nikhil Simha Raprolu is the Co-founder & CTO at Zipline AI, focused on architecting and scaling the company’s AI data platform — extending the open-source Chronon engine into a developer-friendly system that simplifies building and operating production AI applications.Real-time features, AI search, Agentic similarities, Varant Zanoyan & Nikhil Simha Raprolu // MLOps Podcast #354Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps Swag/Merch: [https://shop.mlops.community/]And huge thanks to Chroma for hosting us in their recording studio// AbstractFeature stores might be the wrong abstraction. Varant Zanoyan and Nikhil Simha Raprolu explain why Chronon ditched “store-first” thinking and focused on compute, orchestration, and real-time correctness—born at Airbnb, battle-tested with Stripe. 
If embeddings, agents, and real-time ML feel painful, this episode explains why.// Related LinksWebsite: https://zipline.ai/ ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Varant on LinkedIn: /vzanoyan/Connect with Nikhil on LinkedIn: /nikhilsimha/Timestamps:[00:00] Feature Platform Insights[02:00] Zipline and Feature Stores[05:19] Chronon and Zipline Origins[10:49] Feast and Feather Comparison[13:27] Open source challenges[20:52] Zipline and Iceberg Integration [23:54] Airbnb Agent Systems[28:16] Features vs Embeddings[29:07] Wrap up
Alex Salazar is the CEO and Co-Founder of Arcade.dev, working on secure AI agents and real-world automation integrations.Chiara Caratelli is a Data Scientist at Prosus Group, working on AI agents, web automation, and evaluation of robust multimodal models.Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps GPU Guide: ⁠https://go.mlops.community/gpuguide// AbstractAgents sound smart until millions of users show up. A real talk on tools, UX, and why autonomy is overrated.// BioChiara CaratelliChiara is a Data Scientist at Prosus, where she develops AI-driven solutions with a focus on AI agents, multimodal models, and new user experiences. With a PhD in Computational Science and a background in machine learning engineering and data science, she has worked on deploying AI-powered applications at scale, collaborating with Prosus portfolio companies to drive real-world impact.Beyond her work at Prosus, she enjoys experimenting with generative AI and art. She is also an avid climber and book reader, always eager to explore new ideas and share knowledge with the AI and ML community.Alex SalazarAlex is the CEO and co-founder of Arcade.dev, the unified agent action platform that makes AI agents production-ready. Previously, Salazar co-founded Stormpath, the first authentication API for developers, which was acquired by Okta. At Okta, he led developer products, accounting for 25% of total bookings, and launched a new auth-centric proxy server product that reached $9M in revenue within a year. He also managed Okta's network of over 7,000 auth integrations. 
Alex holds a computer science degree from Georgia Tech and an MBA from Stanford University.// Related LinksWebsite: https://www.prosus.com/Website: https://www.arcade.dev/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Alex on LinkedIn: /alexsalazar/Connect with Chiara on LinkedIn: /chiara-caratelli/Timestamps:[00:00] Intro[00:15] Insights from iFood[06:22] API vs agent intention[09:45] Tool definition clarity[15:37] Preemptive context loading[27:50] Contextualizing agent data[33:27] Prompt bloat in payments[41:33] Agent building evolution[50:09] Agent program scalability[55:29] Why multi-agent is a dead end[56:17] Wrap up
Jonathan Wall is the CEO at Runloop.ai, working on enterprise-grade infrastructure and execution environments for AI coding agents.The Future of AI Agents is Sandboxed // MLOps Podcast #353 with Jonathan Wall, CEO at Runloop.ai.Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterShoutout to  @runloop-ai  for powering this MLOps Podcast episode.// AbstractEveryone’s arguing about agents. Jonathan Wall says the real fight is about sandboxes, isolation, and why most “agent platforms” are doing it wrong.// BioJon was the tech lead of Google File System, a founding engineer at Google Wallet, and then the founder of Index, which was acquired by Stripe. He is building Runloop.ai to bridge the production gap for AI Agents by building a one-stop sandbox infrastructure for building, deploying, and refining agents. // Related LinksWebsite: runloop.aiBlogs and content at https://www.runloop.ai/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Jon on LinkedIn: /jonathantwall/Timestamps:[00:00] GitHubification of workflows[00:29] Sandbox definitions explained[04:47] Agent setup explanation[08:03] Sandbox vs API agent[13:51] Resource usage in sandbox [22:50] Agent evaluation setup[28:08] Failure cases value[31:06] Sandbox isolation vs multi-tenancy[36:14] Frameworks vs Harnesses[39:02] LangGraph vs Harness comparison[43:22] Agent flexibility and verification[52:51] Training data focus[57:10] Wrap up
Simba Khadder is the founder and CEO of Featureform, now at Redis, working on real-time feature orchestration and building a context engine for AI and agents.Context Engineering 2.0, Simba Khadder // MLOps Podcast #352Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletter// AbstractFeature stores aren’t dead — they were just misunderstood. Simba Khadder argues that the real bottleneck in agents isn’t models but context, and explains why Redis is quietly turning into an AI data platform. Context engineering matters more than clever prompt hacks.// BioSimba Khadder leads Redis Context Engine and Redis Featureform, building both the feature and context layer for production AI agents and ML models. He joined Redis via the acquisition of Featureform, where he was Founder & CEO. At Redis, he continues to lead the feature store product as well as spearhead Context Engine to deliver a unified, navigable interface connecting documents, databases, events, and live APIs for real-time, reliable agent workflows. 
He also loves to surf, go sailing with his wife, and hang out with his dog Chupacabra.// Related LinksWebsite: featureform.comhttps://marketing.redis.io/blog/real-time-structured-data-for-ai-agents-featureform-is-joining-redis/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Simba on LinkedIn: /simba-k/Timestamps:[00:00] Context engineering explanation[00:25] MLOps and feature stores[03:36] Selling a company experience[06:34] Redis feature store evolution[12:42] Embedding hub[20:42] Human vs agent semantics[26:41] Enrich MCP data flow[29:55] Data understanding and embeddings[35:18] Search and context tools[39:45] MCP explained without hype[45:15] Wrap up
Satish Bhambri is a Sr Data Scientist at Walmart Labs, working on large-scale recommendation systems and conversational AI, including RAG-powered GroceryBot agents, vector-search personalization, and transformer-based ad relevance models.Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletter// AbstractThe MLOps Community Podcast features Satish Bhambri, Senior Data Scientist with the Personalization and Ranking team at Walmart Labs and one of the emerging leaders in applied AI, in its newest episode. Satish has quietly built one of the most diverse and impactful AI portfolios in his field, spanning quantum computing, deep learning, astrophysics, computer vision, NLP, fraud detection, and enterprise-scale recommendation systems. Bhambri's nearly a decade of research across deep learning, astrophysics, quantum computing, NLP, and computer vision culminated in over 10 peer-reviewed publications released in 2025 through IEEE and Springer, and his early papers are indexed by NASA ADS and Harvard SAO, marking the start of his long-term research arc. He also holds a patent for an AI-powered smart grid optimization framework that integrates deep learning, real-time IoT sensing, and adaptive control algorithms to improve grid stability and efficiency, a demonstration of his original, high-impact contributions to intelligent infrastructure. Bhambri leads personalization and ranking initiatives at Walmart Labs, where his AI systems serve more than 531 million users every month (roughly 5% of the world’s population, based on traffic data). 
His work with Transformers, Vision-Language Models, RAG and agentic-RAG systems, and GPU-accelerated pipelines has driven significant improvements in scale and performance, including increased ad engagement, faster compute, and improved recommendation diversity.Satish is a Distinguished Fellow & Assessor at the Soft Computing Research Society (SCRS), a reviewer for IEEE and Springer, and has served as a judge and program evaluator for several elite platforms. He was invited to the NeurIPS Program Judge Committee, the most prestigious AI conference in the world, and to evaluate innovations for DeepInvent AI, where he reviews high-impact research and commercialization efforts. He has also judged Y Combinator Startup Hackathons, evaluating pitches for an accelerator that produced companies like Airbnb, Stripe, Coinbase, Instacart, and Reddit.Before Walmart, Satish built supply-chain intelligence systems at BlueYonder that reduced ETA errors and saved retailers millions while also bringing containers to the production pipeline. Earlier, at ASU’s School of Earth & Space Exploration, he collaborated with astrophysicists on galaxy emission simulations, radio burst detection, and dark matter modeling, including work alongside Dr. Lawrence Krauss, Dr. Karen Olsen, and Dr. Adam Beardsley.On the podcast, Bhambri discusses the evolution of deep learning architectures from RNNs and CNNs to transformers and agentic RAG systems, the design of production-grade AI architectures with examples, his long-term vision for intelligent systems that bridge research and real-world impact, and the engineering principles behind building production-grade AI at a global scale.// Related LinksPapers: https://scholar.google.com/citations?user=2cpV5GUAAAAJ&hl=enPatent: https://search.ipindia.gov.in/DesignApplicationStatus ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkm
Zack Reneau-Wedeen is the Head of Product at Sierra, leading the development of enterprise-ready AI agents — from Agent Studio 2.0 to the Agent Data Platform — with a focus on richer workflows, persistent memory, and high-quality voice interactions.How Sierra Does Context Engineering, Zack Reneau-Wedeen // MLOps Podcast #350Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletter// AbstractSierra’s Zack Reneau-Wedeen claims we’re building AI all wrong and that “context engineering,” not bigger models, is where the real breakthroughs will come from. In this episode, he and Demetrios Brinkmann unpack why AI behaves more like a moody coworker than traditional software, why testing it with real-world chaos (noise, accents, abuse, even bad mics) matters, and how Sierra’s simulations and model “constellations” aim to fix the industry’s reliability problems. They even argue that decision trees are dead, replaced by goals, guardrails, and speculative execution tricks that make voice AI actually usable. 
Plus: how Sierra trains grads to become product-engineering hybrids, and why obsessing over customers might be the only way AI agents stop disappointing everyone.// Related LinksWebsite: https://www.zackrw.com/~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Zack on LinkedIn: /zackrw/Timestamps:[00:00] Electron cloud vs energy levels[03:47] Simulation vs red teaming[06:51] Access control in models[10:12] Voice vs text simulations[13:12] Speaker-adaptive turn-taking[18:26] Accents and model behavior[23:52] Outcome-based pricing risks[31:40] AI cross-pollination strategies[41:26] Ensemble of models explanation[46:47] Real-time agents vs decision trees[50:15] Code and no-code mix[54:04] Goals and guardrails explained[56:23] Wrap up[57:31] APX program!
Spencer Reagan leads R&D at Airia, working on secure AI-agent orchestration, data governance systems, and real-time signal fusion technologies for regulated and defense environments.Overcoming Challenges in AI Agent Deployment: The Sweet Spot for Governance and Security // MLOps Podcast #349 with Spencer Reagan, R&D at Airia.Join the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterShoutout to Airia for powering this MLOps Podcast episode.// AbstractIs the agent ecosystem over-engineered? Spencer Reagan thinks it might be, and he’s not shy about saying so. In this episode, he and Demetrios Brinkmann get real about the messy, over-engineered state of agent systems, why LLMs still struggle in the wild, and how enterprises keep tripping over their own data chaos. They unpack red-teaming, security headaches, and the uncomfortable truth that most “AI platforms” still don’t scale. If you want a sharp, no-fluff take on where agents are actually headed, this one’s worth a listen.// BioPassionate about technology, software, and building products that improve people's lives.// Related LinksWebsite: https://airia.com/Machine Learning, AI Agents, and Autonomy // Egor Kraev // MLOps Podcast #282 - https://youtu.be/zte3QDbQSekRe-Platforming Your Tech Stack // Michelle Marie Conway & Andrew Baker // MLOps Podcast #281 - https://youtu.be/1ouSuBETkdA~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Spencer on LinkedIn: /spencerreagan/Timestamps:[00:00] AI industry future[00:55] Use cases in software[05:44] LLMs for data normalization[11:02] ROI and overengineering[15:58] Street width history[20:58] High ROI examples[25:16] AI building challenges[33:37] Budget control challenges[39:30] Airia Orchestration platform[46:25] Agent evaluation breakdown[53:48] Wrap up