Practical AI in Healthcare


Author: Steven Labkoff


Description

AI promises to transform healthcare, but real, scalable impact remains rare. Practical AI in Healthcare cuts through the noise to showcase real-world use cases delivering business value today. Hosted by senior leaders (former VPs of life science technology groups, clinical informatics professionals from top-tier organizations, and a former Big Four consultant), each episode features candid conversations with the people making AI work inside the healthcare enterprise.
32 Episodes
Every five episodes, Steve and Leon step back to examine what picture forms when you put their guest conversations side by side. This time, five guests from completely different healthcare domains -- data quality, clinical trials, medical translation, patient data, participatory medicine -- independently converged on the same conclusion: the AI works; the infrastructure around it doesn't yet. From Charlie Harp's data quality metrics to Adam Blum's 60-to-90% scaffolding story to Amy Price's reframing of healthcare AI as "unfinished, not broken," Block 4 reveals what industry maturation actually looks like -- not a breakthrough, but a quiet shift in what the conversation is about.
Amy Price survived a car accident that left her with a broken neck, severe brain injury, and $4 million in medical bills. She was told she'd need to be institutionalized. Instead, she earned a DPhil at Oxford and became Editor-in-Chief of the Journal of Participatory Medicine. In this episode, Amy sits down with Leon to discuss why patients belong inside the AI design process, what it really means to have a "knowledgeable human who cares" in the loop, and why healthcare AI is an unfinished system worth building on, not a broken one worth scrapping. She also shares how she uses AI tools for her own health decisions and what she's learned about closing the patient AI literacy gap.
Shashi Shankar spent nearly a decade at Genentech before a family cancer journey and a broken data landscape pushed him to build something different. His company Novellia works directly with patients — not data brokers — to collect and consolidate health records across multiple providers using SMART on FHIR. The result: longitudinal, patient-authorized real-world data that fills the gaps left by claims databases, single-site EMRs, and health information exchanges. We explore why previous PHR companies failed, how AI catches clinical data errors that humans miss, and whether Big Tech should be trusted with patient data.
When Adam Blum was diagnosed with follicular lymphoma, he tried over a dozen commercial trial matchers. None returned actual matches. So the serial AI entrepreneur built CancerBot, a free precision-matching service that assesses 100% of eligibility criteria — not the five surface-level attributes most matchers use. On this episode, Blum explains the Prompt Workbench (where biomedical experts refine extraction prompts to above 90% accuracy), how conjunctive normal form makes complex eligibility logic tractable, and why "best trial" means something different for every patient. A masterclass in AI scaffolding for healthcare.
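To illustrate the idea behind that episode (this is a minimal sketch, not CancerBot's actual implementation, and the attributes shown are hypothetical), eligibility criteria in conjunctive normal form are an AND of OR-clauses, so a patient matches only when every clause contains at least one satisfied condition:

```python
# Illustrative sketch of CNF eligibility matching (not CancerBot's code).
# Each literal is (attribute, required_value, negated); a clause is a list
# of literals joined by OR, and the criteria are clauses joined by AND.
cnf_criteria = [
    # Clause 1: diagnosis is follicular lymphoma OR DLBCL
    [("diagnosis", "follicular lymphoma", False),
     ("diagnosis", "DLBCL", False)],
    # Clause 2: NOT prior CAR-T therapy
    [("prior_car_t", True, True)],
]

def matches(patient, cnf):
    """True only if the patient satisfies every OR-clause in the CNF."""
    return all(
        any((patient.get(attr) == value) != negated
            for attr, value, negated in clause)
        for clause in cnf
    )

patient = {"diagnosis": "follicular lymphoma", "prior_car_t": False}
print(matches(patient, cnf_criteria))  # True: both clauses are satisfied
```

The appeal of this normal form is that arbitrarily nested inclusion/exclusion logic reduces to one uniform check, which keeps complex protocols tractable to evaluate at scale.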
For 37 years, Charlie Harp heard the same thing from healthcare organizations: "Our data quality is fine." They were right — for billing and scheduling. But AI changed the equation. Harp, founder of Clinical Architecture, built the PIQI framework to measure patient data quality across four dimensions: availability, accuracy, conformance, and plausibility. His PIQXL Gateway scores data on a 1-100 scale before it enters your systems — not after. Early deployments reveal uncomfortable truths: lab data averages 70% quality against USCDI standards, and one facility coded every blood test to a single LOINC code. The framework is now going through HL7 balloting as an open national standard.
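As a toy illustration of dimension-based scoring in the spirit of that episode (this is a hypothetical sketch, not the PIQI framework or the HL7-balloted specification), a record can be rated on each quality dimension and mapped onto a 1-100 scale:

```python
# Hypothetical sketch of dimension-based data-quality scoring
# (not the actual PIQI framework): rate each dimension 0.0-1.0,
# then scale the average onto a 1-100 score.
DIMENSIONS = ("availability", "accuracy", "conformance", "plausibility")

def quality_score(ratings):
    """Average the four dimension ratings and map onto a 1-100 scale."""
    avg = sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return round(1 + 99 * avg)

print(quality_score({"availability": 1.0, "accuracy": 0.8,
                     "conformance": 0.6, "plausibility": 0.4}))  # 70
```

Scoring data on ingestion, before it enters downstream systems, is what lets a gateway like the one described reject or flag low-quality feeds rather than discover the problem after the fact.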
Every day, patients leave US hospitals with discharge instructions they can't read. Giovanni Donatelli, CEO of The Language Group, built FETCH — a patented AI system embedded in Epic that translates discharge documents in 15 minutes with human review. He did it because he was the 8-year-old interpreting for his immigrant parents at doctor's appointments. Hosts Steve Labkoff and Leon Rozenblit explore the discharge instruction gap, the tragic cases that make it personal, FETCH's three-layer translation pipeline, the case for keeping humans in the loop, and why healthcare executives think they've already solved a problem that doesn't yet have a solution.
Steve and Leon have reviewed blocks of guest episodes twice before on Practical AI in Healthcare. Both times the themes snapped into place. This time they didn't -- and the disagreement between them became the episode. Across five recent conversations, they found stories that kept spilling past the edges of their framework: AI that works but can't get paid, laws that already apply but nobody realizes it, and a scientific record under threat from AI-generated paper mills. The hosts' attempt to make sense of it all reveals where their thesis holds, where it breaks, and what needs to change.
Bob Wachter wrote the book on the EHR disaster. Now he's written one about AI. The UCSF Chair of Medicine joins hosts Steve Labkoff and Leon Rozenblit to discuss A Giant Leap, his argument that AI doesn't need to be perfect—it needs to beat a healthcare system already failing at scale. They cover Watson's $3B collapse, why ambient scribes became AI's first clinical success story, the human-in-the-loop problem that nobody has solved, and the dangerous gap between how experts and novices use AI tools. Key topics: productivity paradox, complementary innovations, clinical decision support design, AI literacy, and the "compare me to the alternative" thesis. The link to the book can be found here: https://a.co/d/07JFNwIw
What if early signs of psychosis could be detected from how patients speak—not what they say, but how they organize their thoughts? Amar Mandavia (VA Boston, Boston University) and Enrique "Kike" Gutiérrez (Polytechnic University of Madrid) join hosts Steve Labkoff and Leon Rozenblit to discuss CHiRP, an AI tool that identifies formal thought disorder from routine clinical conversations. They explain why the gold-standard manual test takes 5+ hours, how their system reduces that to minutes, and the hard ethical questions around labeling patients as "at risk." Key topics: prodromal psychosis detection, NLP in mental health, clinical workflow integration, MIT linQ Catalyst, and the payer challenges that make prevention hard to fund.
Real-world evidence was supposed to accelerate drug development. Instead, we've created definitional chaos—over 100 data vendors, inconsistent definitions, and studies that can't be compared. Dr. Aaron Kamauu, CEO of Navidence and co-host of Real World Wednesday, explains why one missing diagnosis code can exclude 30% of your cohort, how GLP-1 eligibility criteria vary wildly between NHS and US guidelines, and what it means to document "the seven definitions you chose NOT to use." A conversation about the unsexy infrastructure that makes evidence trustworthy.
In this episode of Practical AI in Healthcare, we sit down with Dr. Jeff Chuang, a computational biologist at The Jackson Laboratory, to explore how AI is reshaping cancer diagnostics, starting with pediatric sarcoma. Jeff shares his journey from physics and protein folding to computational pathology, where machine learning is being applied to standard H&E pathology slides to deliver faster, cheaper, and more accurate diagnoses. The conversation dives into how AI models trained on relatively small but carefully curated image datasets can outperform traditional diagnostic approaches, especially in rare cancers where expertise is scarce. We also explore the challenges of data sharing, IRB approvals, and real-world deployment, along with a glimpse into the future of spatial genomics and ultra-high-resolution tissue analysis. This episode is a powerful example of how practical AI can directly improve patient care today.
In this episode of Practical AI in Healthcare, we sit down with physician–informaticist Josh Geleris, MD, co-founder and Chief Product Officer of SmarterDx, to unpack one of healthcare’s most overlooked AI opportunities: revenue cycle intelligence. Drawing on his clinical training, deep technical background, and firsthand experience inside large health systems, Josh explains how AI can bridge the gap between clinical reality and billing documentation. The conversation explores how machine learning and large language models translate thousands of data points from an inpatient stay into accurate, compliant coding, helping health systems reduce revenue leakage while staying firmly within regulatory guardrails. From SQL queries to post-trained LLMs, Josh walks us through the evolution of SmarterDx’s AI stack and why human-in-the-loop design remains essential. This is a grounded, practical look at AI delivering real value where healthcare operations and clinical truth collide.
In this episode of Practical AI in Healthcare, we sit down with Dr. Alvin Liu, retinal surgeon and Professor of Artificial Intelligence and Ophthalmology at Johns Hopkins University, to explore one of the earliest and most successful real-world deployments of medical AI. Dr. Liu walks us through the evolution of autonomous AI for diabetic retinopathy screening, from FDA approval to large-scale clinical implementation across health systems. We unpack what it really takes to move AI from validation to impact, including workflow integration, sensitivity and specificity tradeoffs, reimbursement challenges, and post-market monitoring. The conversation also looks ahead to emerging AI applications using retinal imaging to predict cardiovascular disease, dementia, and kidney disease at the population level. This episode is a masterclass in how AI can meaningfully improve access, equity, and outcomes in healthcare when deployed thoughtfully and responsibly.
In this episode of Practical AI in Healthcare, we sit down with Dr. Tiffany Leung, Scientific Editorial Director at JMIR Publications, to explore how artificial intelligence is reshaping scientific publishing from the inside out. As open-access journals face unprecedented volumes of submissions, AI is simultaneously enabling faster discovery and creating new challenges around research integrity, peer review, and trust in the scientific record. Tiffany shares how journals are adapting to generative AI tools, from policy development and disclosure norms to editorial decision support systems that help identify potential risks without stifling innovation. The conversation moves beyond hype to examine how AI can act as a co-scientist, streamline editorial workflows, and potentially redefine peer review itself. This episode offers a rare look at how AI is influencing not just what gets published, but how knowledge is validated and shared globally.
In this episode, Steven Labkoff and Leon Rozenblit explore the legal, regulatory, and risk-management challenges introduced by AI with two top experts in the field: Kathy Roe, Principal at Health Law Consultancy, and Kenny White, Director of the Managed Care Industry Group at Alliant Insurance Services. Together, they unpack how AI intersects with medical malpractice, product liability, HIPAA privacy, de-identification, intellectual property, contractual risk, and insurance coverage. Kathy and Kenny explain why AI is not yet the standard of care — but why clinicians and health systems must develop AI literacy now as the legal landscape evolves. A must-listen episode for healthcare leaders navigating the risks and realities of AI adoption.
In our second Reflections episode, Steve Labkoff and Leon Rozenblit synthesize the most meaningful insights from guests across Episodes 9–15. Drawing on conversations with Orr Inbar, Martin Leach, Ing Ho, Yuri Quintana, and patient advocate E-Patient Dave, this episode highlights the themes shaping practical AI adoption in today's healthcare landscape. Key topics include the inflection point AI has created across clinical care, research, and patient engagement; the need for stronger data stewardship to support trustworthy automation; and the emerging promise of AI-driven clinical trial optimization and simulation. We also explore how patients are engaging with AI tools independently, raising new questions about literacy, safety, and empowerment. Additional themes include the cultural alignment required for tools like ambient listening and chart summarization to succeed, and what it will take for AI to avoid the missteps of past health-IT transitions. The episode closes with a preview of upcoming guests discussing AI in law, payer innovation, psychiatric diagnostics, and the future of scientific publishing.
We just released one of the most intellectually energizing conversations we've had on Practical AI in Healthcare. Our guest this week is Dr. Adam Rodman, physician, informatician, historian of medicine, and one of the most original thinkers at the intersection of AI and clinical reasoning. Adam brings a rare perspective to the field: before becoming an AI researcher, he spent years studying how humans make medical decisions — how clinicians reason, where cognition breaks down, and how technology reshapes the way we conceptualize disease. His reflections on the evolution from QMR and INTERNIST-1 to today's large language models are a must-hear.

In this episode, we explore:
• Why LLMs challenge 50 years of clinical decision support assumptions
• New collaboration models for doctors, patients, and AI — not just "human in the loop," but truly redesigned workflows
• What's holding back clinical AI adoption (spoiler: it's not accuracy)
• The regulatory gap — and why the FDA's old device mindset won't work for generative models
• How urgent-care AI companies are early signals of a broader shift

Adam also discusses what the next 3–5 years may actually look like — and why change will be slow… until the moment it becomes very fast. Listen here and join the conversation on the future of practical, safe, and human-centered AI in healthcare.
Apple Podcasts: https://podcasts.apple.com/us/podcast/practical-ai-in-healthcare/id1837172964
Spotify: https://open.spotify.com/show/63NBSNTdKsLHO7jmNxiZHv?si=277e9a14c24f41d9
Amazon Podcasts: https://music.amazon.com/podcasts/8ed0ba60-15f4-419b-85f1-c4b85af81b41/practical-ai-in-healthcare
#HealthcareAI #ClinicalAI #MedicalInformatics #ClinicalDecisionSupport #AIinMedicine #DigitalHealth #HealthTech #FutureOfHealthcare #GenerativeAI #AIGovernance #DiagnosticExcellence #PatientCareInnovation #HealthcareTransformation #AIThoughtLeadership #PracticalAI
Practical AI in Healthcare just released one of our most eye-opening conversations yet, featuring Orr Inbar, CEO & Co-Founder of QuantHealth. QuantHealth is pushing the boundaries of what's possible in clinical development with AI-driven clinical trial simulation. Orr walked us through how modern deep learning, massive biological knowledge graphs, and patient-level real-world data can now simulate clinical trials with 80–90% accuracy — across dozens of indications, modalities, and trial phases.

In our conversation, we cover:
• How QuantHealth models patient-drug interactions at massive scale
• Why trial design is still the most critical (and fixable) failure point in drug development
• How simulation is becoming the new starting point for protocols
• What AI-first trial design could mean for speed, cost, and reducing avoidable trial failures
• Where the field is headed in the next 5–10 years — including the provocative question of how far simulation can replace human trials

Orr also discusses the acceleration of biological data, the maturity of real-world data, and transformer-based AI — the perfect storm that made this moment possible. This is one of the best deep dives yet into the practical future of drug development.
In the conclusion of our two-part conversation with Dr. Yin Ho, author of Rushing Headlong: Health IT’s Legacy and the Road to Responsible AI, we explore how healthcare can avoid repeating its digital past. Dr. Ho and hosts Dr. Steven Labkoff and Dr. Leon Rozenblit dive into small language models, decision support vs. decision control, the pitfalls of ambient scribing, and why “rage-building” the next generation of EHRs might be the most responsible act of all.
The newest episode of Practical AI in Healthcare features Dr. Yin Ho—physician, entrepreneur, and author of Rushing Headlong: Health IT's Legacy and the Road to Responsible AI. In Part 1 of our two-part conversation, Dr. Ho joins Dr. Steven Labkoff and Dr. Leon Rozenblit to unpack 25 years of digital health transformation—from the dawn of electronic medical records to the market and policy forces that shaped today's health IT landscape. Together, we explore how well-intentioned decisions created a fragmented system that prioritizes billing over care—and why understanding that history is essential to building a responsible AI future.
#HealthcareAI #DigitalHealth #HealthIT #PracticalAIinHealthcare #Podcast