Mercy’s AI Leader: Strategy First, Governance Always
Updated: 2025-10-16
Kerry Bommarito, PhD, VP, Enterprise AI and Decision Intelligence, Mercy, says the health system’s AI agenda begins with the enterprise plan, not shiny tools. After the executive team sets five organization-wide OKRs each fiscal year, work streams cascade to accountable leaders; Bommarito is a trustee on a key result aimed at revenue-cycle improvements through automation and AI. Early priorities include denials, prior authorization, and the handoffs that follow patients from testing approval to billing—areas where technology can ease staff burden while improving patient experience.
This interview was conducted as part of our recently published Special Report on AI.
She emphasizes that scoping starts with problems, not platforms: define the operational objective, then assess whether analytics, workflow standardization, an EMR build, or AI is the right lever. Bommarito argues that this discipline helps avoid scattering resources across ill-fitting pilots and keeps scarce engineering time directed at material value. “AI can’t solve for everything. Obviously, there’s a lot that AI can do. But there’s a lot that it can’t,” she said, noting that even solvable problems may be better addressed via vendor partnerships rather than internal builds, depending on time-to-value and maintainability.
Data Quality and Vendor Realities
Bommarito links solution choice to data readiness. She points to the basics—consistent entry in the EMR, standardized phrases, and workflow uniformity—as determinants of how well downstream technologies can operate. In her view, vendor implementations falter when assumed data specifications collide with site-specific realities. She notes that health systems must pressure-test those assumptions up front and manage to their own data-gathering and governance norms. To that end, she recommends pairing informaticists and operational leaders with engineers throughout discovery so the team can see how documentation patterns and exception paths shape model inputs and outputs.
Governance as a Separate Track
She describes Mercy’s structure as deliberate: the enterprise data and AI office, along with AI governance, sits outside the core IT organization. Dedicated reviewers examine vendor model cards and track compliance with responsible-AI practices; Mercy also embeds AI notification requirements in contracts so suppliers cannot “flip on” new capabilities without an evaluation window. “I think AI governance, depending on how an organization is set up, it should be a standalone process,” Bommarito said. The aim, she adds, is not to block upgrades but to synchronize AI checks with security and IT change controls, ensuring transparency for clinicians and clarity on intended versus unintended uses.
Regulation, Risk, and the Human in the Loop
She draws a firm line around clinical safety. Any feature that could function as a medical device must be reviewed for FDA implications, and large language model use in clinical contexts should retain a human in the loop. Education is essential: clinicians need to know what a tool is—and is not—designed to do, how it reached a recommendation, and where limitations lie. That responsibility, she notes, cannot be outsourced to a marketing label. “It’s still your responsibility because you’re the one using the tool,” she said, adding that health systems must validate governance posture even when a vendor asserts its product is not a regulated device.
Pilots Built for Scale
Bommarito views the word “pilot” as a signal for engineering prudence, not a license for one-offs. Internally developed efforts are architected as platforms—reusable microservices and agents—so that a successful proof of concept in one service line can expand quickly to others. This mindset prevents “throwaway” code and accelerates scale once value is ...