ODSC's Ai X Podcast


Author: ODSC


Description

With Ai X Podcast, the Open Data Science Conference (ODSC) brings its vast experience in building community and its knowledge of the data science and AI fields to the podcast platform. The interests and challenges of the data science community are wide-ranging. To reflect this, Ai X Podcast offers a similarly wide range of content, from one-on-one interviews with leading experts, to career talks, to educational interviews, to profiles of AI startup founders. Join us every two weeks to discover what's going on in the data science community.

Find more ODSC lightning interviews, webinars, live trainings, certifications, and bootcamps here: aiplus.training/
Don't miss out on this exciting opportunity to expand your knowledge and stay ahead of the curve.

82 Episodes
In this episode, Sheamus McGovern sits down with João "Joe" Moura, founder and CEO of CrewAI, to discuss the rapid evolution of multi-agent AI systems and their real-world impact in enterprise environments. From his early experiments with agent-based automations to supporting clients like PwC, PepsiCo, and the U.S. Department of Defense, Joe shares how CrewAI is enabling companies to move beyond hype and flashy demos toward scalable, secure agent workflows. This conversation covers everything from open-source momentum to agent orchestration, governance, deployment challenges, and what it takes to go from prototype to production.

Key Topics Covered:
- João's background in AI and the early days of CrewAI
- The problem CrewAI was originally built to solve, and how it scaled from a personal tool to a global platform
- The definition of a true agent: "Agency is the key. If there's no decision-making, it's not an agent."
- The evolution to robust multi-agent enterprise systems
- CrewAI's dual product strategy: open source for developers, enterprise-grade platform for production use
- Use cases driving the most traction: go-to-market workflows, internal GPTs, code automation at scale
- The MIT report on AI adoption: why many enterprises are failing to see ROI, and how CrewAI is addressing the gap
- Empowering non-technical users through CrewAI Studio
- Security, governance, and the importance of deterministic control in agent deployment
- Why waiting for the next-gen LLM is a mistake: "The models we have today are good enough for 99% of enterprise use cases."
- CrewAI's take on interoperability, agent-to-agent protocols, and why standards matter

Memorable Outtakes:
- "If you want an agent, you've got to have agency. Otherwise, it's just a script."
- "LLMs today are good enough for 99% of enterprise use cases."
- "We're now running over 475 million agent automations a month."

References & Resources:
- João "Joe" Moura – LinkedIn: https://www.linkedin.com/in/joaomdmoura/
- CrewAI: https://www.crewai.com/
- MIT Report on GenAI adoption (referenced in discussion): https://ide.mit.edu/research/95-of-generative-ai-projects-fail-how-to-make-yours-succeed
- Humanity's Last Exam: https://agi.safe.ai/
- AgentBench: https://github.com/THUDM/AgentBench
- HumanEval Benchmark: https://github.com/openai/human-eval
- MCP – Model Context Protocol: https://modelcontextprotocol.io/docs/getting-started/intro
- Agent to Agent Communication: https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/

Sponsored by:
🔥 ODSC AI West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.ai
In this episode of the ODSC AiX Podcast, host Sheamus McGovern reconnects with Paige Bailey, Engineering Lead at Google DeepMind for the Developer Experience team. Paige shares how the Gemini ecosystem has evolved since her last appearance, including the launch of Gemini 2.5 DeepThink, multimodal video generation with Veo 3, real-time music creation with Lyria RT, and groundbreaking advances in agentic and on-device AI systems. The conversation explores the rapid rise of agent-based workflows, AI-powered robotics, and the growing divide between cutting-edge tools and real-world adoption.

Key Topics Covered:
- Gemini 2.5 DeepThink and reasoning models: the model that won gold at the International Mathematical Olympiad (IMO)
- Use cases for the DeepThink, Pro, Flash, and FlashLite variants
- Using the Gemini Live API for real-world robotics and decision planning
- The role of multimodal inputs (video, audio, text) in enabling embodied AI
- On-device AI and ubiquity: implications for edge deployment, cost reduction, and accessibility
- Veo 3: multimodal video generation
- Lyria RT: real-time music generation
- Gemini Live API and voice interfaces: real-time bidirectional voice, screen understanding, and tool calling
- The rise of voice as the dominant AI interface
- Use of SynthID and digital watermarking to detect deepfakes
- The future of AI-agent orchestration via MCP servers

Memorable Outtakes:
- On the pace of model development: "A 4-billion parameter model on-device now outperforms our best cloud model from six months ago. That's pretty magical." – Paige Bailey
- On the role of AI agents in robotics: "You can say, 'Hey robot, go get me that apple,' and Gemini will plan the task, route it, and call the right control models." – Paige Bailey
- On the AI adoption gap: "In the Bay Area, we use AI hourly. But when I talk to developers in the Midwest, they often aren't using it at all." – Paige Bailey

References & Resources:
- Dynamic Web Paige: https://webpaige.dev/
- LinkedIn: https://www.linkedin.com/in/dynamicwebpaige
- GitHub: https://github.com/dynamicwebpaige
- Medium: https://medium.com/@dynamicwebpaige
- Previous podcast with Paige: https://podcasters.spotify.com/pod/show/ai-x-podcast/episodes/Googles-AI-Powered-Tools-for-Data-Scientists-Building-the-Automated-Future-of-Data-Science-with-Paige-Bailey-e2p3t6e
- International Mathematical Olympiad (IMO): https://www.imo-official.org
- Model Context Protocol (MCP): https://modelcontextprotocol.io/docs/getting-started/intro
- Gemini 2.5 Deep Think: https://blog.google/products/gemini/gemini-2-5-deep-think/
- Veo 3: https://deepmind.google/technologies/veo/
- Lyria RT & Music AI Sandbox: https://deepmind.google/technologies/lyria/
- SynthID & Deepfake Watermarking: https://deepmind.google/technologies/synthid/
- Gemma Models: https://ai.google.dev/gemma
- Gemini Live API Docs: https://ai.google.dev/gemini-api/docs/live
- Google AI Studio: https://ai.google.dev

Sponsored by:
🔥 ODSC AI West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.ai
In this episode of the ODSC Ai X Podcast, host Alex Landa sits down with Julian Togelius, Associate Professor in the Department of Computer Science and Engineering at New York University and Co-Director of the NYU Game Innovation Lab. Julian is a pioneer at the intersection of artificial intelligence and gaming, with groundbreaking work on AI for game design, computational creativity, and the role of AI as both a research tool and creative collaborator. He's also the co-founder of modl.ai, a company developing AI for game quality assurance. Together, Alex and Julian explore how AI is transforming game design, what's remained the same since the early days of AI in games, and what the future may hold for creativity, interactivity, and NPCs.

Key Topics Covered:
- Julian's academic journey from philosophy and psychology into computer science and AI research
- The origins of AI in games, from Turing's chess experiments to reinforcement learning milestones
- How AI can be used not only to play games but also to design them
- The evolution from the first edition of Artificial Intelligence and Games (2018) to the expanded second edition (2025)
- The tension between human creativity and AI-generated content in the gaming industry
- Challenges and opportunities of using AI to create immersive NPCs and dynamic game worlds
- The future of AI in gaming, from neural game engines to experience-driven content generation
- Insights into ongoing projects at the NYU Game Innovation Lab and at modl.ai

Memorable Outtakes:
- "Games are designed not to need AI… Just putting AI solutions into existing designs is not going to help. You need to design from the ground up for the AI capabilities." – Julian Togelius
- "One of the most productive things you can do is to play with these models and allow yourself to be provoked and outraged, to do more out-there things you wouldn't otherwise." – Julian Togelius
- "At some point, someone is going to do an RPG that features characters you can actually talk to, and that's going to revolutionize how we think about RPGs." – Julian Togelius

References & Resources:
About Julian:
- Website: http://julian.togelius.com/
- LinkedIn: https://www.linkedin.com/in/togelius/
- NYU Page: https://engineering.nyu.edu/faculty/julian-togelius
- Twitter/X: https://x.com/togelius
- Blog: https://togelius.blogspot.com/
- Artificial Intelligence and Games book: https://gameaibook.org/
- modl AI Engine business: https://modl.ai/

Other Resources Mentioned:
- Podcast with Nick Walton on AI and Games: https://opendatascience.com/ai-dungeon-and-the-future-of-ai-powered-storytelling-a-conversation-with-creator-nick-walton/
- AI Dungeon: https://aidungeon.com/
- ODSC talk "Playing Super Smash Bros with Agentic Gemini": https://www.youtube.com/watch?v=AR0o9DLF0H0
- Turochamp, the Alan Turing chess game: https://en.wikipedia.org/wiki/Turochamp
- Reinforcement learning for games: https://opendatascience.com/the-paladin-the-cleric-and-the-reinforcement-learning/
- The NYU Game Innovation Lab: https://game.engineering.nyu.edu/

Sponsored by:
🔥 ODSC AI West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
In this episode of the ODSC Ai X Podcast, host Sheamus McGovern sits down with Dr. Nataliya Kosmyna, neuroscientist and researcher at MIT Media Lab, to explore her viral research paper "Your Brain on ChatGPT." Dr. Kosmyna discusses the study's startling findings on how AI writing tools like ChatGPT can impact memory, learning, cognitive engagement, and even long-term brain development. She also shares how science fiction inspired her career in brain-computer interfaces and offers powerful insights for educators, technologists, and everyday users of generative AI.

Key Topics Covered:
- What "cognitive debt" means and how it differs from "cognitive offloading" and the "Google effect"
- How over-reliance on LLMs like ChatGPT can reduce cognitive engagement and memory recall
- The role of EEG and neural connectivity in assessing cognitive load during essay writing
- Why essays written with ChatGPT lack originality, personal voice, and ownership
- The potential dangers of using LLMs in education, especially among developing brains
- What Session 4 of the study revealed about users' inability to adapt once LLM support was removed
- Concrete strategies for using AI responsibly and mitigating long-term cognitive risk
- How AI tools should be designed with the human brain (and human development) in mind
- The hidden energy and environmental costs of constant AI use
- Why younger users may be the most vulnerable to long-term cognitive effects

Memorable Outtakes:
- "There is no cognitive credit card. You cannot pay this debt off." – Dr. Nataliya Kosmyna
- "You do not talk to a calculator about your feelings." – Dr. Nataliya Kosmyna, on why LLMs are fundamentally different from traditional tools
- "If you don't feel ownership over your work, what is there left to remember?" – Dr. Kosmyna, on the link between memory encoding and cognitive agency

References & Resources:
- Paper: Your Brain on ChatGPT: https://arxiv.org/pdf/2506.08872
- Dr. Nataliya Kosmyna: https://www.media.mit.edu/people/nkosmyna/overview/
- MIT Media Lab: https://www.media.mit.edu/
- Brain & LLM Project: https://www.brainonllm.com/
- MIT OpenCourseWare (mentioned during the interview): https://ocw.mit.edu/

Sponsored by:
🔥 ODSC AI West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.ai
In this special episode of the ODSC Ai X Podcast, host Sheamus McGovern dives into the real-world impact of GPT-5, from routing and hallucination issues to cost savings and open-weight models.

Joining him are two expert guests:
- Ivan Lee, Founder and CEO of Datasaur, who helps enterprises build private LLM stacks and has deep experience evaluating model upgrades.
- Nir Gazit, Co-founder and CEO of Traceloop, and co-creator of the OpenTelemetry Generative AI SIG, who brings insight into model routing, evaluation strategies, and observability tooling.

Together, they unpack what GPT-5 actually changed, and what teams should do next.

Key Topics Covered:
- Why GPT-5's biggest shift is routing, not reasoning
- What casual vs. power users gained (or lost) with the rollout
- Hallucination benchmarks vs. real-world results
- Evaluation strategies using open-source tools like Phoenix and LangChain
- OpenAI's OSS model release and its enterprise implications
- Why developers worry about black-box routing and lack of traceability
- How to migrate safely: pinning snapshots, running evals, shadow testing
- Whether GPT-5 gets us closer to AGI, or just better infrastructure
- What to expect from agent workflows, tool selection, and model specialization

Memorable Outtakes:
- Ivan Lee: "GPT-5 is an upgrade for 98% of users, but for the power users, the loss of model choice felt like control was taken away."
- Nir Gazit: "Of course every new model crushes it on benchmarks; they're optimizing for the benchmarks. That doesn't mean it works for your use case."
- Ivan Lee: "OpenAI's OSS release might be the bigger story than GPT-5. Suddenly, enterprises are back at the table."

References & Resources:
Guests:
- Ivan Lee – CEO of Datasaur. Website: https://www.datasaur.ai | LinkedIn: https://www.linkedin.com/in/iylee/
- Nir Gazit – CEO of Traceloop. Website: https://www.traceloop.com | Blog: https://www.traceloop.com/blog | LinkedIn: https://www.linkedin.com/in/nirga/

Resources Mentioned:
- OpenAI GPT-5: https://openai.com/gpt-5
- OpenTelemetry Project: https://opentelemetry.io
- Traceloop OpenLLMetry: https://www.traceloop.com/openllmetry
- Phoenix (Arize AI open-source evals): https://github.com/Arize-ai/phoenix
- LangChain Evals: https://python.langchain.com/api_reference/langchain/evaluation.html
- GPT-OSS Open Weight Models by OpenAI: https://platform.openai.com/docs/models/gpt-oss
- Claude + Model Context Protocol (Anthropic): https://docs.anthropic.com/en/docs/tool-use
- ARC-AGI Leaderboard: https://arcprize.org/leaderboard

Sponsored by:
🔥 ODSC AI West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.ai
In this episode of ODSC's AiX Podcast, Sheamus McGovern sits down with Dan Huss, the Founder & CEO of Gravity AI, to unpack how AI is reshaping product management: from evaluating AI's true value, avoiding AI washing, and differentiating between AI as a feature versus AI as a core product, to Dan's concepts of the Minimum Viable Experiment (MVE) and Minimum Viable AI (MVAI) as alternatives to the MVP. They also dive into the evolving skill sets product managers need in the AI era, cross-functional collaboration with data science teams, and responsible AI implementation.

Key Topics Covered:
- MVAI (Minimum Viable AI) and the future of product management
- Why MVP gets bloated in the enterprise, and why MVE (Minimum Viable Experiment) prioritizes learning over shipping
- A product lens for AI decisions using desirability, feasibility, and viability, including consequences when models are wrong
- AI-as-a-feature vs. AI-as-a-product: what changes in data strategy and UI (e.g., conversational interfaces)
- Where AI impacts the PM workflow: user stories, specs, research synthesis, and "vibe coding" for rapid prototypes
- A collaboration playbook for PMs and Data Science / AI Engineering on hallucinations, accuracy, monitoring, and risk
- Practical MLOps: catalogs, deployment, observability, and compliance (e.g., EU AI Act implications)
- Avoiding AI washing: how to find low-risk, high-learning internal use cases and build toward differentiation

Memorable Outtakes:
- "Ditch MVP. Let's call it an MVE, a Minimum Viable Experiment. Put the emphasis on learning, not building a 'version one' that gets bloated."
- "Product management is fundamentally a communications role. Anywhere there's a language task, AI will become a collaborator."

References & Resources:
Guest & Company:
- Dan Huss (Founder & CEO, Gravity AI) – LinkedIn: https://www.linkedin.com/in/danielhuss/
- Gravity AI: https://www.gravity-ai.com/

Concepts, Tools & Mentions from the Episode:
- Minimum Viable Product: https://en.wikipedia.org/wiki/Minimum_viable_product
- The Lean Startup: https://en.wikipedia.org/wiki/The_Lean_Startup
- AI Snake Oil (book/site): https://www.aisnakeoil.com
- Podcast episode, The AI Superintelligence Myth with Arvind Narayanan: https://creators.spotify.com/pod/profile/ai-x-podcast/episodes/The-AI-Superintelligence-Myth-with-Arvind-Narayanan-e32it16
- EU AI Act (overview resource): https://artificialintelligenceact.eu/
- Notion: https://www.notion.so/
- Figma: https://www.figma.com/

Sponsored by:
🔥 ODSC West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC AI Podcast, host Sheamus McGovern speaks with Ian Cairns, cofounder and CEO of Freeplay, a platform built to help teams evaluate, monitor, and iterate on LLM and agent-based systems in production. Ian brings a deep product background from Twitter, Gnip, and Mapbox, and offers an insider's look into what it actually takes to make AI work beyond the prototype phase. The conversation centers on evaluation, widely regarded as one of the most difficult and underdeveloped aspects of deploying AI in 2025.

Key Topics Covered:
- The real-world AI maturity curve: from vibe prompting to production
- Offline vs. online evaluation: definitions, trade-offs, and tooling
- Why teams struggle post-deployment, and how to break through the "we don't know what's going wrong" phase
- Evaluation challenges with agents, memory, RAG, and tool use
- The role of observability, telemetry, and human-in-the-loop review
- Lessons learned from Freeplay customers, including Postscript
- The growing importance of domain experts in evaluation workflows
- Building multi-layer eval architectures for agent systems
- Voice agent challenges, like turn detection and latency
- Emerging roles like AI Evaluation Engineer, and how orgs should staff for evaluation maturity

Memorable Outtakes:
- "The most mature teams start with their evals. They define what good looks like, then hill-climb toward that metric."
- "The breakthrough in quality comes from people getting close to the data. Sometimes, thousands of rows."

References & Resources:
- Freeplay website: https://www.freeplay.ai
- Deployed: The AI Product Podcast by Freeplay: https://open.spotify.com/show/6nZS3a7iYb2EzHcl78iNmi?si=de766e786a41461c&nd=1&dlsi=0cb3351f79644bfc
- Freeplay blog: https://www.freeplay.ai/blog
- Freeplay community newsletter: https://freeplay.ai/newsletter
- PipeCat (open-source voice agent toolkit): https://github.com/pipecat-ai/pipecat
- OpenTelemetry (agent observability framework): https://opentelemetry.io/
- Postscript (Freeplay customer case mentioned): https://www.postscript.io
- Colorado AI community meetups: https://www.boulderaibuilders.org/

Speaker Bio: Ian Cairns is the CEO and co-founder of Freeplay. Previously, he served as Head of Product for Twitter's Developer Platform, where he helped grow their enterprise data business from $40M to $400M ARR. He's also worked at Gnip (acquired by Twitter), Mapbox, and in the Obama administration on open data initiatives.
LinkedIn: https://www.linkedin.com/in/iancairns/

Sponsored by:
🔥 ODSC West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC Podcast, host Sheamus McGovern is joined by Veronika Durgin, Vice President of Data at Saks. Veronika brings over two decades of experience in data engineering, platform architecture, data modeling, and analytics. She joins us to expand on her ODSC presentation, The 10 Most Neglected Data Engineering Tasks, offering practical insights into often-overlooked practices that can make or break modern data systems. From defining what "done" really means to preparing for seasonality and building self-recovering pipelines, Veronika shares hard-won lessons, entertaining stories, and a deep well of practical advice.

💻 Key Topics Covered:
- Veronika's unconventional career journey from biology to data leadership
- The "bridge strategy" in build vs. buy decisions and how it improves agility
- Why "definition of done" must include more than just working code
- The value of empathy: walking a mile in the business team's shoes
- The dangers of ignoring business seasonality in data pipelines
- What makes a truly self-healing data pipeline
- Testing with production data vs. data in production
- Handling date/time data with precision and naming conventions
- The forgotten bucket of work: tech debt, improvements, and innovation
- The environmental impact of data systems and responsible engineering
- The rise of "vibe coding" and the challenges it poses for data engineering

💬 Memorable Outtakes:
- "If you don't know why you're working on this and who needs it, you shouldn't be working on it."
- "Everything we build today is tomorrow's legacy. Build things to be replaceable."
- "We automate, not because we're lazy, but because we care about sleeping at night."

🖥 References & Resources:
- Veronika Durgin on LinkedIn: https://www.linkedin.com/in/vdurgin/
- Veronika Durgin on Medium: https://veronikadurgin.medium.com
- The Phoenix Project (pdf of book): https://shorturl.at/Ce5gA
- "Definition of Done": https://medium.com/@durginv/definition-of-done-eea52c472cc3
- Tsedal Neeley, The Digital Mindset (book referenced): https://www.tsedal.com/book/the-digital-mindset/
- Stargate AI data center project (referenced during energy discussion): https://openai.com/index/announcing-the-stargate-project/

Sponsored by:
🔥 ODSC West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28th–30th for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
Use the code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC Podcast, Alex Landa, the head of content at ODSC, interviews Nick Walton, CEO and co-founder of Latitude and creator of the groundbreaking AI-powered storytelling game, AI Dungeon. Nick shares his journey from machine learning for self-driving cars to reimagining interactive fiction through generative models. They dive into the challenges and magic of building persistent, emergent game worlds with AI, and discuss how AI enables new modes of player agency and storytelling that go far beyond traditional game mechanics.

🔥 Key Topics Covered:
- The origin story of AI Dungeon and how GPT-2 sparked its creation
- How AI Dungeon enables open-ended, player-driven storytelling
- Technical and creative challenges in AI gaming: model curation, memory, and narrative tension
- What makes AI-generated characters compelling and emotionally resonant
- The upcoming game engine Latitude is building to power rich, persistent AI-driven worlds
- How AI can unlock new creative possibilities for indie and non-technical game creators
- Multiplayer and long-term visions for dynamic, socially evolving AI worlds
- Ethical considerations in using AI for gaming and storytelling

Memorable Outtakes:
💬 Nick Walton: "If every dwarf you meet is a gruff miner whose clan got eaten by goblins, it gets a little old after a while... you need higher variation."
💬 Nick Walton: "You're not just playing a game. You're building a story, and when loss happens, it becomes meaningful. That's the magic of AI Dungeon."
💬 Nick Walton: "We're building not just a game, but a living world, one where your choices shape everything, and the AI evolves with you."
💬 Alex (Host): "It's not about replacing anyone; it's about creating something entirely new that couldn't exist without AI."

🖥 References & Resources:
- AI Dungeon: https://aidungeon.com
- Latitude (Nick's company): https://latitude.io
- Whispers from the Star (AI game mentioned): https://store.steampowered.com/app/3730100/Whispers_from_the_Star/
- Nick Walton's X (Twitter) profile: https://twitter.com/nickwalton00
- Nick Walton's LinkedIn: https://www.linkedin.com/in/waltonnick/
- Nick Walton's profile on Crunchbase: https://www.crunchbase.com/person/nick-walton
- AI Dungeon Dev Blog: https://latitude.io/blog

🌟 Sponsored by:
The Agentic AI Summit 2025
Attend the premier virtual event for AI builders from July 16–31. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents.
🔥 Use code podcast for 10% off any ticket.
Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
🔥 Use code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC Ai X Podcast, host Sheamus McGovern sits down with Michael Lanham, author of AI Agents in Action, to explore the rapidly evolving world of AI agents. Michael brings decades of experience across industries, including game development, fintech, oil and gas, and agtech, and provides listeners with a hands-on view into what makes AI agents different from chatbots, how agency enables autonomy, and why frameworks like MCP (Model Context Protocol) are reshaping agent workflows. This discussion guides listeners from foundational principles of AI agents to advanced topics like memory architecture, agent collaboration, cost-performance trade-offs, and the future of digital coworkers.

Key Topics Covered:
- What defines an AI agent and how it differs from chatbots and automation tools
- The meaning of "agency" in AI and how LLMs are gaining decision-making capabilities
- The importance of prompt engineering in agent design and workflow control
- An overview of MCP (Model Context Protocol): structure, use cases, and why it's gaining traction
- Types of memory in agents: short-term, long-term, episodic, and procedural
- Evaluating agent performance using grounding evaluation, OpenTelemetry, and tools like Phoenix
- A comparison of agent frameworks like AutoGen, CrewAI, and OpenAI's Agents SDK
- Practical challenges of token cost, latency, and scaling agent workflows
- The rise of multi-agent systems and collaboration patterns
- The future of AI agents as digital coworkers and their role in creative industries

Memorable Outtakes:
- "A key principle that many miss is that an AI agent is about one word: agency."
- "You can now drop in an MCP server, connect it to Claude or OpenAI's SDK, and your agent can suddenly use six or seven tools without extra wiring. That's power."
- "We're moving from automation to collaboration. The next shift is treating agents as digital coworkers with decision ownership, not just executors of tasks."

References & Resources:
- AI Agents in Action by Michael Lanham: https://www.manning.com/books/ai-agents-in-action
- Michael Lanham's LinkedIn: https://www.linkedin.com/in/micheal-lanham-189693123/
- Michael's Medium blog: https://medium.com/@Micheal-Lanham
- Smithery (MCP server library): https://smithery.ai/
- MCP blog from Anthropic: https://www.anthropic.com/news/model-context-protocol
- OpenAI Agents SDK: https://openai.github.io/openai-agents-python/
- CrewAI: https://github.com/joaomdmoura/crewAI
- AutoGen: https://github.com/microsoft/autogen
- Arize Phoenix (LLM evaluation tool): https://github.com/Arize-ai/phoenix
- LangFuse (evaluation & observability for LLM apps): https://github.com/langfuse/langfuse
- Google Veo: https://deepmind.google/technologies/veo
- OpenTelemetry: https://opentelemetry.io/

🌟 Sponsored by:
The Agentic AI Summit 2025
Attend the premier virtual event for AI builders from July 16–31. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents.
🔥 Use code podcast for 10% off any ticket.
Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference
Join us in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
🔥 Use code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC AI Podcast, host Sheamus McGovern, founder of ODSC, sits down with Hugo Shi, Co-Founder and CTO of Saturn Cloud, a platform that gives data scientists and ML engineers the tools and flexibility they need to work the way they want. From his early days as a desk quant during the 2008 financial crisis to founding Saturn Cloud, Hugo brings a wealth of experience across finance, open source, and AI infrastructure.

This conversation dives deep into the realities of building AI infrastructure at scale, advocating for self-service tools for AI practitioners, managing cloud costs, and why flexibility, not control, is the foundation for productive data teams. It's a must-listen for anyone working with machine learning infrastructure, whether you're a beginner navigating your first platform or a seasoned engineer scaling multi-cloud operations.

Key Topics Covered:
- Hugo's career journey: from quant finance to co-founding Anaconda and then Saturn Cloud
- What working as a desk quant during the 2008 crisis taught him about speed, impatience, and iteration
- The pivotal role Anaconda played in democratizing Python data science
- Why Saturn Cloud was founded: common infra pain points across data teams
- How Saturn Cloud empowers teams through interactive compute environments, scheduled jobs, and long-running deployments
- The importance of flexibility vs. opinionated platforms
- Why data scientists should not suffer in silence over infra pain
- Hidden cloud costs: compute, storage, and network, and how to manage them
- Differences between AI cloud providers (CoreWeave, Lambda Labs) and traditional hyperscalers (AWS, Azure, GCP)
- Scaling AI: lessons from working with massive clusters and thousands of parallel jobs
- Security best practices in ML platforms, including role-based access and cost attribution
- Why ML teams should collaborate across IT, product, and data engineering
- Hard-won lessons from real-world AI infrastructure scaling

Memorable Outtakes:
- On infrastructure friction and self-advocacy: "Data scientists, ML engineers, and AI engineers suffer in silence… They don't perceive themselves as tech experts, so they think they have to accept infrastructure pain. They shouldn't."
- On why Saturn Cloud avoids being too opinionated: "Notebooks are fine, but making them the only way to work? That's a career trap. People should graduate to full IDEs and better practices."
- On scaling AI operations: "What can be done, will be done. If it's possible someone will try it. At scale, low-probability failures become inevitable."

References & Resources:
Hugo Shi:
- LinkedIn: https://www.linkedin.com/in/hugo-shi
- CTO & Co-Founder, Saturn Cloud: https://www.saturncloud.io/

Mentioned Companies and Tools:
- Saturn Cloud: https://www.saturncloud.io/
- Anaconda: https://www.anaconda.com/
- Arize Phoenix (for evaluation): https://github.com/Arize-ai/phoenix
- Prometheus (for resource monitoring): https://prometheus.io/
- CoreWeave: https://www.coreweave.com/
- Lambda Labs: https://lambdalabs.com/
- Neurelo (Neas): https://www.neurelo.com/
- RunPod: https://www.runpod.io/
- Cursor IDE: https://www.cursor.so/
- Streamlit: https://streamlit.io/
- Jupyter: https://jupyter.org/
- PyCharm: https://www.jetbrains.com/pycharm/

Sponsored by:
Agentic AI Summit 2025
Join the premier virtual event for AI builders from July 15–30. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents.
🔥 Use code podcast for 10% off any ticket.
Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference
Attend in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation.
🔥 Use code podcast for 10% off any ticket.
Learn more: https://odsc.com/california
In this episode of the ODSC Ai X Podcast, host Sheamus McGovern speaks with Alexandra Ebert, Chief AI and Data Democratization Officer at MOSTLY AI and one of the foremost voices on privacy-preserving synthetic data and responsible AI. With a diverse background spanning AI ethics, data access policy, and generative AI regulation, Alexandra brings clarity to how synthetic data is not just a privacy tool but a lever for innovation, fairness, and scalable AI adoption.

The discussion explores what synthetic data is (and isn't), its core advantages and limitations, its role in addressing fairness and access challenges in data-driven organizations, and how practitioners can actively shape better downstream model performance. The episode also dives into the MOSTLY AI Prize—a $100,000 global competition to advance privacy-safe, high-utility synthetic data generation.

Key Topics Covered:
- The different types and use cases of synthetic data (privacy-preserving, simulation-based, creative)
- How synthetic data helps solve the "data access paradox" in regulated industries
- Key advantages and limitations of synthetic data vs. real-world and legacy anonymized data
- Privacy mechanisms: outlier suppression, statistical mimicry, empirical differential privacy
- Real-world use cases in healthcare, finance, telco, and simulation environments
- Fairness-aware synthetic data generation using statistical parity constraints
- Imputing missing data with synthetic distributions
- Agentic AI and the role of synthetic data in enabling secure access layers for autonomous agents
- Up-sampling rare events (e.g. fraud) to support more explainable models
- Open innovation and the mission behind the MOSTLY AI Prize
- Tools, SDKs, and open-source workflows for getting started with synthetic data

Memorable Outtakes:
"We need to move beyond thinking of real data as the gold standard. It's often inaccessible, messy, biased—and by design, it's limited to how it was collected. Synthetic data lets us ask: what if our data was as inclusive as we needed it to be?"
"So much of synthetic data's value is in unlocking what's been locked away—allowing teams to safely build, test, and deploy where real data just isn't viable."
"It's not just about boosting performance. Synthetic upsampling lets you use simpler, more explainable models—ones you can actually audit."

References & Resources:
Alexandra Ebert, Chief AI & Data Democratization Officer, MOSTLY AI
- Industry profile: https://mostly.ai/team/alexandra-ebert
- LinkedIn: https://www.linkedin.com/in/alexandraebert/

- MOSTLY AI Prize, a global competition to advance privacy-preserving synthetic data: https://www.mostlyaiprize.com/
- MOSTLY AI GitHub, open-source tools for structured synthetic data: https://github.com/mostly-ai
- MOSTLY AI SDK: https://github.com/mostly-ai/mostly-ai-sdk
- Synthetic data fairness paper (ICLR), "Representative & Fair Synthetic Data": https://arxiv.org/abs/2104.03007
- Synthetic Data Vault (SDV): https://sdv.dev
- TVAE under the SDV umbrella: https://github.com/sdv-dev/SDV

Sponsored by:
Agentic AI Summit 2025 – Join the premier virtual event for AI builders from July 15–30. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents. 🔥 Use code podcast for 10% off any ticket. Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference. Join us in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation. 🔥 Use code podcast for 10% off any ticket. Learn more: https://odsc.com/california
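The synthetic upsampling idea from that last outtake can be illustrated with a toy sketch. This is a hedged, plain-Python illustration with made-up transaction data and a naive duplicate-and-jitter scheme—not MOSTLY AI's actual method, which trains a generative model on the source data rather than resampling it:

```python
import random

random.seed(0)

# Hypothetical transaction log: ~2% of records are fraud (the rare class).
transactions = [{"amount": random.uniform(1, 500), "fraud": False} for _ in range(980)]
transactions += [{"amount": random.uniform(800, 5000), "fraud": True} for _ in range(20)]

def upsample_rare(records, label_key, target_fraction, jitter=0.05):
    """Duplicate rare-class records (with small random noise on 'amount')
    until they make up roughly `target_fraction` of the dataset."""
    rare = [r for r in records if r[label_key]]
    common = [r for r in records if not r[label_key]]
    # If the common class stays fixed, total = len(common) / (1 - f),
    # so the rare class needs f * len(common) / (1 - f) records.
    needed = int(target_fraction * len(common) / (1 - target_fraction))
    synthetic = []
    while len(rare) + len(synthetic) < needed:
        base = random.choice(rare)
        noisy = dict(base)
        noisy["amount"] = base["amount"] * (1 + random.uniform(-jitter, jitter))
        synthetic.append(noisy)
    return common + rare + synthetic

balanced = upsample_rare(transactions, "fraud", target_fraction=0.3)
fraud_share = sum(r["fraud"] for r in balanced) / len(balanced)
```

The point of the episode's quote is that a model trained on the balanced set can be simpler and easier to audit, because the rare class is no longer drowned out.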
This special episode captures compelling highlights from ODSC East 2025, held May 13–15. Join host Sheamus McGovern as he discusses transformative AI developments with leading experts, researchers, and industry practitioners. From the practical deployment of AI agents and the rising importance of synthetic data to ethical AI strategies and robust risk management practices, this episode offers diverse perspectives shaping the future of AI.

Guests and Key Topics Covered:
- Rob Bailey (CrewAI): Accelerating AI agent adoption in enterprises
- Ivan Lee (Datasaur): The rise and effectiveness of specialized small language models
- Tony Kipkemboi (CrewAI): Getting started with AI agents through practical, foundational skills
- Samuel Colvin (Pydantic): Building robust, type-safe, and observable AI applications
- Rajiv Shah (Contextual AI): Advanced evaluation methods for generative AI applications
- Sinan Ozdemir (author and entrepreneur): A comprehensive evaluation of AI agents and multi-agent systems, beyond benchmarks
- Alexandra Ebert (MOSTLY AI): Ethical AI and synthetic data democratization
- Kanchana Patlolla (Google Cloud): Overcoming common intelligent agent development and deployment challenges
- Cal Al-Dhubaib & Lauren Burke-McCarthy (Further): AI risk management, red teaming, and sustainable data science adoption strategies
- Dr. Andre Franca (Ergodic): Applying causal AI to improve decision-making processes
- Noah Giansiracusa (Bentley University): Insights into social media algorithms and enhancing user autonomy
- Allen Downey (PyMC Labs): Practical insights into traditional time series analysis and Bayesian modeling

Memorable Outtakes:
"Automation is no longer about wiring APIs together; it's about launching self-directed agents that negotiate, learn, and act on our behalf." – Rob Bailey, CrewAI
"Generative AI has turned traditional data science upside down. People forget that you still need rigorous evaluation to align models with the intended problem." – Rajiv Shah, Contextual AI
"When introducing low-code and no-code platforms, you open the door to build quickly on shaky foundations." – Lauren Burke-McCarthy, Further

References & Resources:
- Rob Bailey (CrewAI): https://www.linkedin.com/in/robmbailey/
- Ivan Lee (Datasaur): https://www.linkedin.com/in/iylee/
- Tony Kipkemboi (CrewAI): https://www.linkedin.com/in/tonykipkemboi/
- Samuel Colvin (Pydantic): https://www.linkedin.com/in/samuel-colvin/
- Rajiv Shah (Contextual AI): https://www.linkedin.com/in/rajistics/
- Sinan Ozdemir (LoopGenius): https://www.linkedin.com/in/sinan-ozdemir/
- Alexandra Ebert (MOSTLY AI): https://www.linkedin.com/in/alexandraebert/
- Kanchana Patlolla (Google Cloud): https://www.linkedin.com/in/kanchanapatlolla/
- Cal Al-Dhubaib (Further): https://www.linkedin.com/in/dhubaib/
- Lauren Burke-McCarthy (Further): https://www.linkedin.com/in/lauren-e-burke/
- Dr. Andre Franca (Ergodic): https://www.linkedin.com/in/francaandre/
- Noah Giansiracusa, PhD (Bentley University): https://www.noahgian.com
- Allen Downey, PhD (PyMC Labs): https://www.linkedin.com/in/allendowney
- Humanity's Last Exam benchmark: https://agi.safe.ai
- Pydantic: https://docs.pydantic.dev/
- MOSTLY AI Synthetic Data Competition: https://www.mostlyaiprize.com/
- MCP (Model Context Protocol): https://www.anthropic.com/news/model-context-protocol
- Datasaur: https://datasaur.ai/
- CrewAI: https://www.crewai.com/
- PyMC Labs: https://www.pymc-labs.io/

Sponsored by:
Agentic AI Summit 2025 – Join the premier virtual event for AI builders from July 16–31. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents. 🔥 Use code podcast for 10% off any ticket. Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference. Attend in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation. 🔥 Use code podcast for 10% off any ticket. Learn more: https://odsc.com/california
In this episode of the ODSC Ai X Podcast, host Sheamus McGovern speaks with Dr. Regina Barzilay, a Distinguished Professor of AI and Health at MIT and one of the leading voices in applying artificial intelligence to real-world medical challenges. Dr. Barzilay is also the AI faculty lead at MIT's Jameel Clinic, where her work spans early disease detection, personalized treatment, and AI-powered drug discovery.

A recipient of the MacArthur "Genius" Fellowship and the AAAI Squirrel AI Award, and a member of both the National Academy of Engineering and the National Academy of Medicine, Dr. Barzilay has built a career bridging state-of-the-art AI with the pressing needs of patients and clinicians.

Together, they explore why AI tools that can accurately predict cancer and other diseases are still rarely used in clinical practice—and what it will take to bridge the gap between research and real-world care.

Key Topics Covered:
- Why most diseases are diagnosed too late, and how AI can detect them before symptoms appear
- The story behind the MIRAI (breast cancer) and SYBIL (lung cancer) predictive models
- Why powerful clinical AI tools aren't widely adopted in hospitals, and how the system can change
- What it takes to get machine learning models into real-world care across countries and hospitals
- The promise of generative AI in drug discovery and the discovery of new antibiotics
- Challenges in detecting and treating neurodegenerative diseases like ALS
- How AI could enable truly personalized medicine, predicting treatment side effects and responses
- The role of AI copilots in clinical workflows and the future of clinician-AI collaboration
- Why AI in healthcare must go beyond hype to deliver real, measurable value for patients

Memorable Outtakes:
"The technology to save lives is here. We're just not using it." — Dr. Regina Barzilay, on the gap between research and clinical adoption
"You can't be an oncologist without using MRI or blood tests—but you can still practice medicine today without AI." — Dr. Barzilay, on structural resistance in healthcare systems
"Imagine giving some of your health data and getting back a real answer: what will happen to you. Not the population. You." — Dr. Barzilay, on personalized risk prediction and treatment modeling

References & Resources:
Dr. Regina Barzilay
- Academic profile: https://www.csail.mit.edu/person/regina-barzilay
- LinkedIn: https://www.linkedin.com/in/reginabarzilay/

Resources Mentioned in the Episode:
- MIRAI (AI model for breast cancer risk prediction): https://jclinic.mit.edu/mirai
- SYBIL (AI model for lung cancer risk prediction): https://news.mit.edu/2023/ai-model-can-detect-future-lung-cancer-0120
- Halicin antibiotic discovery
- MIT Jameel Clinic for Machine Learning in Health: https://jclinic.mit.edu/
- US Preventive Services Task Force (USPSTF): https://www.uspreventiveservicestaskforce.org/
- Professor Dina Katabi's AI monitoring research (referenced): https://www.csail.mit.edu/person/dina-katabi

Sponsored by:
Agentic AI Summit 2025 – Join the premier virtual event for AI builders from July 15–30. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents. 🔥 Use code podcast for 10% off any ticket. Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference. Attend in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation. 🔥 Use code podcast for 10% off any ticket. Learn more: https://odsc.com/california
The AI Superintelligence Myth

In this episode of the ODSC Ai X Podcast, host Sheamus McGovern speaks with Arvind Narayanan, professor of computer science at Princeton University and co-author of the widely acclaimed book AI Snake Oil. Arvind is also the director of the Center for Information Technology Policy and one of TIME's 100 most influential people in AI.

Together, they unpack the growing disconnect between AI hype and practical deployment, focusing particularly on agentic AI systems and why current evaluation methods are falling short. Arvind discusses the moral limits of predictive AI, the myth of superintelligence, and how practitioners can ground their optimism in accountability, transparency, and more realistic expectations.

Key Topics Covered:
- The origins and impact of AI Snake Oil
- Why many AI systems are overhyped or misapplied
- Predictive vs. generative AI: what's really at stake
- Why superintelligence is a myth—and a distraction
- The flawed assumption behind "upstream" evaluations
- Engineering and ethical limitations of AI agents in real-world deployment
- Misinformation, misuse, and human responsibility in the AI age
- The importance of resilience over restriction in AI governance

Memorable Outtakes:
💬 "We came to the conclusion that superintelligence is a bit of a myth. AI will continue to improve, but …"
💬 "Broken AI is often appealing to broken institutions. You can't fix a flawed hiring process with more automation—you need more humanity."
💬 "Upstream evaluation is mostly dead."

References & Resources:
- Arvind Narayanan, academic profile: https://www.cs.princeton.edu/~arvindn/
- AI Snake Oil (book & newsletter): https://www.aisnakeoil.com
- "How to Recognize AI Snake Oil" (slides): https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
- "AI Agents That Matter" (research paper): https://arxiv.org/abs/2407.01502
- Holistic Agent Leaderboard (HAL): https://hal.cs.princeton.edu/
- Essay: "AGI Is Not a Milestone": https://aisnakeoil.substack.com/p/agi-is-not-a-milestone
- Essay: "AI as Normal Technology": https://aisnakeoil.substack.com/p/ai-as-normal-technology
- Agentic AI Summit: https://www.summit.ai/
- ODSC West: https://odsc.com/california
- Arvind's ODSC East keynote session, "AI Agents: Taking Stock and Looking Ahead": https://odsc.com/speakers/ai-agents-taking-stock-and-looking-ahead/

Sponsored by:
Agentic AI Summit 2025 – Join the premier virtual event for AI builders from July 15–30. Gain hands-on skills in designing, deploying, and scaling autonomous AI agents. 🔥 Use code podcast for 10% off any ticket. Register now: https://www.summit.ai/
ODSC West 2025 – The Leading AI Training Conference. Attend in San Francisco from October 28–30 for expert-led sessions on generative AI, LLMOps, and AI-driven automation. 🔥 Use code podcast for 10% off any ticket. Learn more: https://odsc.com/california
In this episode, we sit down with Thomas Wiecki, co-creator of PyMC and CEO of PyMC Labs, to explore the evolving frontier of probabilistic AI. With roots in computational psychiatry and a passion for Bayesian modeling, Thomas walks us through how probabilistic methods unlock better decision-making by quantifying uncertainty—something traditional AI often glosses over.

We dive deep into real-world applications, from marketing mix modeling to synthetic consumers, and hear about Thomas's upcoming session at ODSC East. Whether you're a marketer, data scientist, or just AI-curious, this episode provides an illuminating perspective on building interpretable, trustworthy AI systems.

Key Topics Covered:
- Defining probabilistic AI: what distinguishes probabilistic AI from traditional machine learning and generative models, and why uncertainty is a feature, not a bug, that enables better risk-aware decisions
- Marketing mix modeling (MMM): how PyMC Labs applies Bayesian models to measure marketing effectiveness across channels, with real-world examples of ROI estimation, saturation detection, and spend optimization
- AI agents and natural language interfaces: how PyMC Labs built AI agents to interpret and interact with complex statistical models, bridging the gap between marketers and data scientists through conversational AI
- Synthetic consumers and product testing: using LLMs to simulate consumer feedback and behavior before market release, and how these systems complement surveys and reduce time-to-insight
- Building trustworthy AI: why transparency and causal reasoning are critical for adoption in business, and the role of PyMC in making statistical modeling accessible, powerful, and interpretable
- Community and open source: the evolution of the PyMC ecosystem, the role of its global contributor base, and how open-source collaboration leads to robust, production-ready tools
- Scalability and next-gen tooling: leveraging GPUs, Apple Silicon, and frameworks like Aesara, JAX, and MLX to scale inference, plus the future of PyMC agents and integrations with platforms like Databricks and MLflow

Memorable Outtakes:
"Probabilistic AI doesn't just make predictions—it tells you how confident it is. That's a game-changer for real-world decisions."
"The agent can now say, 'I'm not sure about this prediction—you might want to collect more data.' That's the kind of reasoning we need from AI."
"Synthetic consumers sound like science fiction, but they're a practical tool for product development today."
"We've moved from dashboards to conversations. Marketers can now ask a question—and get a data-driven, contextual answer back."
"Open source isn't just about free tools. It's about building the best ideas with the best minds from around the world."

Resources:
- Thomas's LinkedIn: https://www.linkedin.com/in/twiecki/
- Thomas's website: https://www.pymc-labs.com/
- Thomas's GitHub: https://github.com/twiecki
- PyMC: https://www.pymc.io/welcome.html
- PyMC Labs: https://www.pymc-labs.com/
- PyMC GitHub: https://github.com/pymc-devs/pymc
- Marketing Mix Modeling Agent: https://www.pymc-labs.com/blog-posts/the-ai-mmm-agent/
- Thomas's ODSC East session, "Automating Bayesian Marketing Mix Modeling with an AI-Powered Assistant": https://odsc.com/speakers/automating-bayesian-marketing-mix-modeling-with-an-ai-powered-assistant/
- Allen Downey's (also with PyMC) ODSC East session, "Mastering Time Series Analysis with StatsModels: From Decomposition to ARIMA": https://odsc.com/speakers/mastering-time-series-analysis-with-statsmodels-from-decomposition-to-arima/

Sponsored by:
🎤 ODSC East 2025 – The Leading AI Builders Conference – https://odsc.com/boston/
Join us from May 13th to 15th in Boston for hands-on workshops, training sessions, and cutting-edge AI talks covering generative AI, LLMOps, and AI-driven automation.
🔥 Use the exclusive code ODSCPodcast for an additional 10% off any ticket type!
In this episode, we explore the frontiers of AI research with Eric Xing, President of Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Professor at Carnegie Mellon University, and founder of GenBio AI. A pioneer in AI Biology, Eric discusses the evolving landscape of world models, agentic AI, and the revolutionary concept of AI-driven digital organisms. With a background that spans computational biology, large-scale ML systems, and foundational research, Eric provides deep insights into how AI is transforming our understanding of intelligence, life, and scientific simulation.

Key Topics Covered:
- Defining world models: what they are, how they differ from generative models, and why they are essential for reasoning and simulation
- The role of world models in agentic AI: how simulation and planning rely on internal representations of environments
- Why digital agents still need world models: even in digital workflows, world models help with decision-making, abstraction, and tool interaction
- Agent models and autonomous reasoning: the limitations of step-by-step optimization and the need for scenario simulation
- AI Biology and GenBio AI's vision: introducing the concept of the AI-Driven Digital Organism (AIDO), multiscale foundation models for DNA, RNA, proteins, and cells, and simulating biology to replace expensive and slow wet-lab experimentation
- Challenges in modeling life: why LLM-style architectures don't work for biology, and the need for multimodal, co-trained, interpretable architectures
- Explainability in AI Biology: how modular architectures and simulation-based evaluation can help bridge the gap
- Ethical boundaries and societal readiness: why AI progress is bounded by what society is ready and willing to adopt

Memorable Outtakes:
"World models are simulators of all possibilities—not just in the physical world, but also in mental and cyber spaces." – Eric Xing, on expanding the scope of simulation in AI
"We want to build a virtual cell—or an AI-driven digital organism—that can simulate biology at any scale." – Eric Xing, explaining the core vision of AI Biology and how it's redefining biological modeling
"In biology, the research is still done in the wet lab. We want to replace that with a digital system that simulates all plausible hypotheses." – Eric Xing, on the need to digitize the discovery phase in medicine and drug development

References & Resources Mentioned:
- Eric Xing on LinkedIn: https://www.linkedin.com/in/eric-xing-a6900418/
- GenBio AI: https://genbio.ai
- MBZUAI (Mohamed bin Zayed University of Artificial Intelligence): https://mbzuai.ac.ae
- DeepMind Gemini Robotics: https://deepmind.google/technologies/gemini
- Broad Institute: https://www.broadinstitute.org
- David Ha & Jürgen Schmidhuber, "World Models" paper: https://arxiv.org/abs/1803.10122
- Fei-Fei Li's World Labs: https://www.technologyreview.com/2024/03/12/1170123/fei-fei-li-world-models-startup/
- Eric's ODSC East keynote session, "Toward Public and Reproducible Foundation Models Beyond Lingual Intelligence": https://odsc.com/speakers/toward-public-and-reproducible-foundation-models-beyond-lingual-intelligence/

Sponsored by:
🎤 ODSC East 2025 – The Leading AI Builders Conference – https://odsc.com/boston/
Join us from May 13th to 15th in Boston for hands-on workshops, training sessions, and cutting-edge AI talks covering generative AI, LLMOps, and AI-driven automation.
🔥 Use the exclusive code ODSCPodcast for an additional 10% off any ticket type!
In this episode of the ODSC Ai X Podcast, we sit down with Sid Probstein, a seasoned enterprise technologist, 10-time CTO, and now CEO of SWIRL. Sid shares deep insights on the evolving role of AI search in the enterprise, the limitations of current AI architectures like RAG, and how federated search powered by large language models (LLMs) can offer a more scalable, secure, and efficient approach.

This wide-ranging conversation also touches on the future of agentic AI, the implications of zero-trust architecture, and why moving all your enterprise data into vector databases may not be the answer.

Key Topics Covered:
- Why many first-generation enterprise AI architectures fall short
- The common misconception that RAG requires vector databases
- How federated search is being reinvented with the help of LLMs
- The differences between RAG, AI search, and hybrid approaches
- Why data movement is a privacy, governance, and scaling bottleneck
- Using LLMs to evaluate, re-rank, and extract relevant results from existing search engines
- The impact of AI search on agentic workflows and enterprise automation
- Architectural considerations for building secure, scalable AI systems
- How open-source tools like SWIRL can jumpstart your AI search strategy
- The future of applications and employment in an AI agent-driven world

Memorable Outtakes:
💬 "A lot of those first-generation architectures are just dead wrong." – Sid Probstein, on the pitfalls of blindly applying RAG and vector database stacks in enterprise environments
💬 "You do not need a vector database to do RAG. The idea that RAG is owned by the vector databases is absurd." – Sid, debunking a common myth about RAG and data pipelines
💬 "Federated search was niche—until LLMs changed everything." – Sid, on how modern LLMs make sense of heterogeneous data and breathe new life into federated search

References & Resources:
- Sid Probstein: https://www.linkedin.com/in/sidprobstein/
- SWIRL (open-source project): https://github.com/swirlai/swirl-search
- SWIRL website: https://www.swirl.today
- Wired article on COBOL dates & government data (referenced in episode): https://www.wired.com/story/ai-small-business-administration-loans-errors/
- Estonia's digital government model (mentioned): https://e-estonia.com
- LangChain (for prompt engineering and agent workflows): https://www.langchain.com
- Open-source LLM deployment with Ollama: https://ollama.com
- DeepSeek R1: https://github.com/deepseek-ai/DeepSeek-R1

Sponsored by:
🎤 ODSC East 2025 – The Leading AI Builders Conference – https://odsc.com/boston/
Join us from May 13th to 15th in Boston for hands-on workshops, training sessions, and cutting-edge AI talks covering generative AI, LLMOps, and AI-driven automation.
🔥 Use the exclusive code ODSCPodcast for an additional 10% off any ticket type!
In this episode, we speak with Valentina Alto, a Technical Architect specializing in AI & Apps at Microsoft, about the transformative potential of AI agents. Valentina shares her insights into why major enterprises are increasingly interested in adopting AI agent technology, highlighting its role in reshaping automation, enhancing productivity, and redefining customer experiences. With her deep expertise in generative AI, large language models (LLMs), and multi-agent systems, Valentina provides valuable perspectives on how AI agents differ fundamentally from traditional AI assistants.

Key Topics Covered:
- Definition and key differentiators of AI agents versus traditional AI assistants
- Core characteristics of effective AI agents, including autonomy, specialization, and actionable capabilities
- The importance of LLMs, multimodal models, and advancements in reasoning capabilities
- Real-world applications of AI agents in industries such as manufacturing, pharmaceuticals, and customer service
- Challenges such as error propagation and alignment of AI agent autonomy with user intent
- Best practices for AI alignment, prompt engineering, and domain-specific knowledge integration
- Practical guidance on the lifecycle, development, and enterprise deployment of AI agents
- The future ecosystem of AI agents and the frameworks necessary for scalable enterprise implementation

Memorable Outtakes:
💬 "Autonomy, tools, and a clear system message are the main ingredients that define an AI Agent."
💬 "Prompt engineering isn't just important, it's essential—especially for developers shaping what an AI Agent is meant to accomplish."
💬 "The future of AI Agents will be about defining robust ecosystems where these agents can effectively collaborate and scale."

References & Resources:
- Valentina Alto's LinkedIn: https://www.linkedin.com/in/valentina-alto-6a0590148/recent-activity/all/
- Valentina Alto's Notion articles: https://shine-snowman-27f.notion.site/My-Articles-4f2f01923575438ba28e7a78ffb494a1
- Valentina Alto's GitHub: https://github.com/valentina-alto
- Books by Valentina Alto:
  - Modern Generative AI with ChatGPT and OpenAI Models: https://www.packtpub.com/en-us/product/modern-generative-ai-with-chatgpt-and-openai-models-9781805123330
  - Building LLM Powered Applications: https://www.packtpub.com/product/building-llm-powered-applications/9781835462317
- ODSC East conference: https://odsc.com/boston/

Sponsored by:
🎤 ODSC East 2025 – The Leading AI Builders Conference – https://odsc.com/boston/
Join us from May 13th to 15th in Boston for hands-on workshops, training sessions, and cutting-edge AI talks covering generative AI, LLMOps, and AI-driven automation.
🔥 Use the exclusive code ODSCPodcast for an additional 10% off any ticket type!
In this episode, we explore why graph technology is becoming foundational to the next wave of AI—especially as systems grow more complex and context becomes essential. Our guest is Amy Hodler, founder of GraphGeeks and a longtime leader in connected data, AI strategy, and graph analytics.

Amy takes us on a deep dive into how graphs offer a more natural, human-centered way to model data—focusing not just on what things are, but how they relate. We talk about GraphRAG, the blend of graph structures with retrieval-augmented generation, and how it's giving AI systems a richer, more contextual backbone.

Whether you're an ML engineer, a data scientist, or just graph-curious, this conversation is full of insight into why relationships are the key to relevance in AI.

Key Topics Covered:
- Why graphs are a natural fit for how humans think and make decisions
- The evolution of GraphRAG and why it matters for building smarter LLM systems
- How context-rich graphs improve AI explainability, fairness, and performance
- Where graphs are gaining traction: cybersecurity, legal review, systems engineering
- The growing importance of entity resolution and building high-quality graphs
- Best practices for getting started with GraphRAG, and common pitfalls to avoid
- Emerging trends: multimodal graph systems, agentic AI workflows, and graph-vector hybrids
- Why AI can't solve every problem, and what to watch out for in the age of "AI snake oil"

Memorable Outtakes:
💬 "Graphs aren't just about data points. They're about what makes those points meaningful—relationships."
💬 "If you care about context, you should care about graphs."
💬 "GraphRAG isn't just another buzzword. It's how we bring structure and context into generation."
💬 "AI without context is guessing. Graphs help it make sense."

🔗 References & Resources:
- Amy's LinkedIn: https://www.linkedin.com/in/amyhodler/
- Amy's Twitter/X: https://x.com/amyhodler
- GraphGeeks website: https://www.graphgeeks.org/
- GraphGeeks LinkedIn: https://www.linkedin.com/company/graphgeeks/
- Graph Algorithms: Practical Examples in Apache Spark and Neo4j: https://www.amazon.com/Graph-Algorithms-Practical-Examples-Apache/dp/1492047686
- David Hughes' LinkedIn: https://www.linkedin.com/in/dahugh/overlay/about-this-profile/
- Amy's ODSC East 2025 session, "Advancing GraphRAG: Text, Images, and Audio for Multimodal Intelligence": https://odsc.com/speakers/advancing-graphrag-text-images-and-audio-for-multimodal-intelligence/
- Blog, "What is GraphRAG? An In-Depth Look at This Graph-Based Tool": https://opendatascience.com/what-is-graphrag-an-in-depth-look-at-this-graph-based-tool/
- AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference/dp/069124913X

This episode was sponsored by:
🎤 ODSC East 2025 – The Leading AI Builders Conference – https://odsc.com/boston/
Join us from May 13th to 15th in Boston for hands-on workshops, training sessions, and cutting-edge AI talks covering generative AI, LLMOps, and AI-driven automation.
🔥 Use the exclusive code ODSCPodcast for an additional 10% off any ticket type!
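The GraphRAG pattern discussed in this episode can be sketched in miniature: find the entities a question mentions, then walk the graph outward and hand the collected relationships to the LLM as context. Everything below is invented for illustration (a tiny dict-based graph and naive entity matching); production GraphRAG systems use a real graph store such as Neo4j plus an LLM for the final answer:

```python
import re

# Tiny knowledge graph: entity -> list of (relation, neighbor) edges.
# Entities and relations here are hypothetical examples.
graph = {
    "GraphRAG": [("combines", "knowledge graphs"),
                 ("combines", "retrieval-augmented generation")],
    "knowledge graphs": [("capture", "relationships between entities")],
    "retrieval-augmented generation": [("grounds", "LLM answers")],
}

def graph_context(question, graph, hops=1):
    """Find graph entities mentioned in the question, then walk `hops`
    steps outward, collecting relationship triples to prepend to an
    LLM prompt as structured context."""
    frontier = [e for e in graph
                if re.search(rf"\b{re.escape(e)}\b", question, re.IGNORECASE)]
    context, seen = [], set(frontier)
    for _ in range(hops):
        next_frontier = []
        for entity in frontier:
            for relation, neighbor in graph.get(entity, []):
                context.append(f"{entity} --{relation}--> {neighbor}")
                if neighbor not in seen:
                    seen.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return context

ctx = graph_context("How does GraphRAG improve retrieval?", graph)
```

Raising `hops` pulls in second-order context as well, which is how graph-based retrieval can surface related facts that a pure vector search over isolated documents would miss.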