Michael Martino Show

Author: Michael
Description

Hot takes, industry insights, and advice from experts - focusing on the continued pursuit of Digital and Business Transformation, Government Transformation, and digital coaching. Episodes are short, to the point, and jam-packed with info. We will get you in and out with maximum content in short bursts.

283 Episodes
When transparency is done right, trust increases, not because everything is going well, but because everyone understands what’s actually happening. Decision-making becomes faster and more grounded. Issues get resolved earlier, and the program becomes resilient because it can adapt in real time.
A system integrator is not just a vendor, and they’re not just a builder. They are the execution engine of your transformation, and that engine only works if it is:
- properly integrated into your operating model
- governed with clarity and discipline
- aligned to outcomes, not just deliverables

If you get the SI model right, you accelerate delivery, reduce risk, and actually achieve transformation. If you get it wrong, you end up with:
- delays
- cost overruns
- a system that doesn’t serve your citizens

In government, that’s not just a project failure; it’s a service failure.
What are we comparing? At a high level, Waterfall and Agile are not just delivery methods; they represent fundamentally different beliefs about how work gets done.

Waterfall assumes:
- Requirements can be known upfront
- Change is a risk to be controlled
- Delivery should be sequential and gated

Agile assumes:
- Requirements evolve
- Change is expected and valuable
- Delivery should be iterative and incremental
Should AI shape your operating model? Eventually, it has to. You can start by adding AI tools to existing processes, but the real opportunity comes when you step back and ask, “If AI could do a large portion of our operational work, how would we design this organization from scratch?”

The organizations that ask that question early will be the ones that build the next generation of service delivery. For governments especially, that could mean something incredibly powerful:
- faster services
- consistent decisions
- better outcomes for citizens
Today we’re going to talk about something that is quickly becoming one of the most important topics in public-sector leadership: how artificial intelligence can transform government service.  For decades, government modernization has mostly meant digitization.  Governments took paper forms and turned them into online forms. They took in-person transactions and moved them to websites. Portals, contact centers, and mobile apps.  These things helped but the underlying system stayed the same.  Citizens still had to figure out government. They had to navigate programs. They had to understand eligibility rules. They had to know which department to contact. They had to repeat their story multiple times across multiple channels.  Artificial intelligence introduces something fundamentally different. For the first time, governments have the ability to build systems that understand, guide, and resolve citizen needs in real time. When implemented properly, AI doesn't just make services faster. It changes the entire service model. 
Delivering an out-of-the-box implementation is not about resisting change -- it is about choosing the right kind of change. Rather than bending the platform to match the past, the organization adapts its processes to match the future.  That takes discipline and leadership.  It also takes a clear understanding that the value of modern platforms comes not from how much you change them—but from how much you allow them to change you, and when organizations get that balance right, they don’t just implement software -- they modernize how they operate. 
Today we’re going to talk about a decision that quietly determines whether your transformation succeeds or becomes a multi-year recovery effort. When should you step outside the box and bend the system to your business, and when should you let the platform redefine your business processes? This is not a technology debate; it’s an operating model decision. If you’re in government, or running a large enterprise platform program — ERP, CRM, case management, core modernization — this question is the fulcrum.
Start with outcomes

There’s a famous line from How Big Things Get Done by Bent Flyvbjerg: big projects don’t fail because they’re ambitious. They fail because they’re poorly governed and poorly scoped. In government, transformation often starts with a solution: “We need a new case management system.” “We need AI.” “We need to modernize.” That’s activity language. Successful programs start with outcome language:
- Reduce claim processing time by 40%.
- Increase first contact resolution to 75%.
- Cut regulatory backlog in half.
- Improve public trust scores by 15%.

Transformation must be tied to measurable public value. If the outcome is vague, the program will drift. If the outcome is precise, the system can self-correct. Outcome clarity is not a communications exercise. It is governance architecture.

Governance is design, not oversight

Most governments treat governance as reporting: steering committees, status decks, traffic-light dashboards. That’s oversight, but governance in transformational programs must be design authority. High-performing jurisdictions embed decision rights clearly:
- Who owns scope?
- Who owns funding trade-offs?
- Who can kill features?
- Who can redefine policy constraints?

If no one can say no, scope will explode. If everyone can say no, nothing will move. Successful programs establish:
- a single accountable executive
- clear escalation pathways
- explicit decision cadences
- integrated policy, operations, and technology leadership

Governance must reduce friction, not create it.

Decompose the transformation

Big transformations fail when treated as a single monolithic build. The better pattern is modular decomposition. Instead of replacing the entire operating model, you break it into:
- service journeys
- capability clusters
- technology components
- regulatory enablers
- data architecture layers

This is where program architecture matters. Successful transformation programs operate like portfolios, not projects.
Each component:
- has a defined value hypothesis
- has an accountable owner
- has delivery milestones
- feeds into enterprise outcomes

This mirrors principles found in scaled agile and portfolio governance models, but applied with public-sector rigor. The key question becomes: what can we deliver in 6–12 months that moves the outcome metric? Transformation is not an event. It’s a sequence of value releases.

Align CX with operations

This is where many governments stumble. They redesign experience in isolation from operational process, but experience is an emergent property of process design. If you want faster service, you redesign:
- intake logic
- decision authority
- automation triggers
- escalation thresholds

Not just the front-end portal. Successful government transformations engineer process-driven experience architecture.

Institutionalize risk management

One of the biggest myths in government transformation is that risk can be eliminated before launch. It cannot. What matters is risk visibility and structured mitigation. In government, costs are underestimated, benefits are overstated, and timelines are compressed for political cycles. Successful programs do the opposite:
- independent cost validation
- reference-class forecasting
- stage-gate funding tied to evidence
- transparent reporting to Treasury and Cabinet

Transformation requires professional program controls:
- integrated master schedules
- dependency mapping
- benefits realization tracking
- risk heatmaps updated monthly

This is not bureaucracy. This is operational hygiene.

Build internal capability

Another consistent failure pattern: outsourcing transformation thinking. Vendors can implement. They cannot own accountability for public value. Successful governments:
- retain architectural authority
- build internal product management capability
- embed business process designers
- develop enterprise data governance
Governments are very good at approving transformation, but they are much less disciplined at benefiting from it. If you’re leading a large-scale modernization — digital platform replacement, service transformation, AI implementation, operating model redesign — you need a benefit realization framework that is operational, measurable, and governed.

Start with business outcomes

Most benefit frameworks fail at the starting line. They define benefits like this: implement new CRM, launch new portal, reduce manual processing, automate intake. Those are outputs. A benefit realization framework starts with outcomes. For example:
- reduced average case processing time
- increased first contact resolution
- reduced cost per transaction
- increased compliance rate
- improved client satisfaction index

When Treasury Board of Canada Secretariat evaluates transformation proposals, they are not funding “technology.” They are funding performance improvement. Your framework must reflect that discipline: every initiative must tie to a measurable business outcome. If it cannot, it is a project, not a transformation.

Define benefit types

A mature framework categorizes benefits into four major types:
1. Financial benefits: cost avoidance, cost reduction, revenue recovery, productivity gains
2. Service benefits: reduced wait times, increased accessibility, improved service standards
3. Risk and compliance benefits: reduced audit findings, improved regulatory adherence, reduced fraud exposure
4. Strategic benefits: increased policy agility, improved public trust, cross-ministry integration

Large programs often over-index on financial benefits because they are easier to quantify, but in public-sector transformation, risk and service benefits often carry more long-term value. A good framework balances them.

Assign a benefit owner, not a project owner

Here’s where most governments collapse. Benefits are assigned to the project team.
That’s a mistake: project teams deliver outputs, while operations delivers benefits. For every benefit in your framework, you need a:
- named executive owner (usually Director or ADM level)
- baseline metric
- target state
- measurement frequency
- reporting mechanism

If no operational executive is accountable for realizing the benefit, it will not materialize.

Establish a baseline

You cannot measure improvement if you don’t know where you started, and yet, in many large public programs, baseline measurement is skipped because data is fragmented, metrics are inconsistent, and reporting systems are immature. Without a baseline, cost savings are estimated, productivity gains are assumed, and service improvements are anecdotal. A credible benefit realization framework requires current baselines for:
- cost per transaction
- FTE effort
- processing time
- satisfaction score
- error rate

If you don’t have this data, the first workstream in your program should be performance instrumentation. This is where many transformation offices underestimate the importance of analytics maturity.

Separate “hard” vs. “soft” benefits

Hard benefits: direct cost savings, headcount reduction, contract elimination. Soft benefits: employee engagement, client trust, reduced complaints, improved brand perception. Hard benefits satisfy finance. Soft benefits drive long-term legitimacy. The key is not dismissing soft benefits, but operationalizing them. For example, instead of “improved trust,” measure complaint rate reduction, net satisfaction movement, or a public sentiment index. Framework discipline turns soft benefits into observable metrics.

Build a benefit realization register

Every large transformation should maintain a living Benefit Register. This is not a slide deck. It’s a structured artifact that includes:
- Benefit ID
- Description
- Category
- Baseline
- Target
- Measurement formula
- Owner
- Dependencies
- Realization date
- Status
Start with the business outcome

Before you build anything, define the operational objective. Are you trying to:
- increase first-contact resolution?
- reduce case backlog?
- improve eligibility accuracy?
- shorten processing time?
- lower cost per transaction?

This is not about “using AI.” This is about improving a measurable public-sector performance indicator. If you can’t tie your AI agent to a reduction in processing time, a decrease in call volume, an increase in compliance accuracy, or a measurable client outcome, you are not building an agent; you are running an experiment. AI agents must be outcome-anchored.

Select the right journey

Not every service is ready for an AI agent. Start with a journey that is:
- high volume
- rules-based
- process-heavy
- data-rich
- currently constrained by human throughput

Think about benefits eligibility screening, license renewals, status inquiries, simple case triage, and document validation. Do not start with complex discretionary casework; start where process discipline already exists. AI agents amplify process maturity. They do not compensate for process chaos.

Decompose the work

This is where most agencies get it wrong. They try to build an “AI agent for intake.” Instead, break the work into micro-decisions:
- validate identity
- confirm eligibility criteria
- cross-reference records
- flag missing documentation
- route exceptions
- draft correspondence

Formalize the decision logic

Before any model is trained or configured, you must extract the institutional logic. That means policy rules, eligibility thresholds, exception handling criteria, escalation triggers, risk thresholds, and compliance constraints. Most of this already exists, but it lives in policy binders, tribal knowledge, training manuals, and legacy documentation.

Build the human-in-the-loop control model

Government agencies cannot deploy autonomous agents without layered oversight.
This is where many agencies should look at how regulated sectors like healthcare and financial services design controls. Your AI agent must have:
- confidence thresholds
- automatic escalation rules
- audit logging
- version control
- explainability outputs
- override authority

In public service, “black box” is unacceptable; every decision must be defensible. Human-in-the-loop is not optional; it is a design principle.

Engineer the data layer

AI agents are only as good as the data environment they operate in. That means clean client records, structured fields, real-time system access, API integrations, and secure identity management. If your agency still relies on PDF uploads and manual data re-entry, your agent will struggle. Before scaling AI agents, agencies often need to modernize case management systems, document management systems, and identity verification layers. This is why AI is often the forcing function for digital modernization. You cannot layer intelligence on top of fragmentation.

Pilot in a contained environment

Do not launch enterprise-wide. Select one service line, regional office, or transaction type. Define:
- baseline performance metrics
- clear success criteria
- a controlled workload
- a rollback plan

Measure cycle time, error rate, escalation frequency, client satisfaction, and staff productivity. The pilot should run long enough to observe edge cases. Agents fail at the edges, not on the happy path.

Redesign the workforce model

This is the step leaders underestimate. If an AI agent performs intake validation, basic eligibility checks, and standard correspondence drafting, then what happens to your employees? They don’t disappear. They shift to:
- complex exceptions
- vulnerable client cases
- appeals
- fraud detection
- quality assurance

AI agents increase cognitive leverage, but only if the agency intentionally redesigns roles, KPIs, and performance models. If you don’t redesign the workforce, the agent creates friction instead of capacity.
First Contact Resolution isn’t a contact center metric; it’s a journey outcome. If you try to “train your way” to FCR without fixing the journey, you’ll fail every single time.

Why first contact resolution is misunderstood

Let’s start with the misconception. Most organizations treat First Contact Resolution as a frontline performance issue, a coaching issue, or a script compliance problem. So what do companies do?
- Add more knowledge articles
- Run refresher training
- Tighten QA scorecards

Then they’re shocked when FCR barely moves. That’s because FCR is not about agent capability alone. FCR breaks when:
- customers contact you too early in the journey
- information is fragmented across systems
- policies force handoffs
- agents lack authority
- upstream processes are fundamentally broken

If a customer has to call you, explain their story, get transferred, wait for a back-office action, and then call again, that’s not an agent failure. That’s a journey design failure.

Shifting from contact handling to resolution journeys

To deliver First Contact Resolution consistently, you need to stop asking, “How do we resolve this contact faster?” and start asking, “Why is this customer contacting us—and what has to happen so they never have to contact us again?” That’s a journey mindset. A resolution journey includes:
- what happened before the contact
- what information the customer already has
- what systems the agent needs access to
- what decisions can be made in the moment
- what follow-up actions are triggered automatically

When FCR fails, it’s usually because the journey crosses too many organizational silos, ownership is unclear, or resolution authority is split across teams. First Contact Resolution only works when one moment in the journey owns the outcome.

Designing the FCR journey

How do you design an FCR journey? First, identify high-volume, high-friction reasons for contact. Not all contacts are equal.
Start with:
- repeat callers
- status-check calls
- “I already submitted this” calls
- “I was told to call back” calls

These are journey failures disguised as demand. Map these issues end-to-end—not just from the moment the call starts, but from the customer’s original intent.

Define what “resolved” means

Organizations define resolution as “we answered the question,” “we logged the request,” or “we handed it off.” Customers define resolution as “my issue is done,” “I don’t have to follow up,” and “nothing else is required from me.” If your FCR definition doesn’t include customer confirmation of completion, you’re measuring activity, not outcomes.

Collapse handoffs

Every handoff is an FCR killer. To design for FCR:
- bring policy, process, and authority as close to the first contact as possible
- eliminate unnecessary approvals
- pre-authorize common exceptions

The question to ask is: “What prevents this agent from fully resolving this today?” Then remove that constraint, systematically.

Design agent enablement into the journey

This is not just about training. It’s about:
- unified customer context
- real-time decision support
- clear escalation paths
- permission to act

FCR doesn’t happen because agents are heroic. It happens because the journey is engineered for success.

What leaders get wrong

Leaders kill First Contact Resolution when they:
- obsess over handle time
- penalize agents for taking ownership
- measure productivity instead of outcomes
- separate “front office” and “back office” accountability

You cannot demand FCR while designing a system that rewards deflection, speed, and handoffs. If you want First Contact Resolution:
- fund journey redesign, not just tools
- hold journey owners accountable, not just contact center leaders
- accept that some calls will take longer so future calls don’t happen at all

That’s how mature organizations think.
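One way to make the outcome-based definition above concrete is to score a first contact as resolved only when no repeat contact for the same customer and issue appears within a follow-up window, rather than counting "we answered the question." The 7-day window and the contact-log shape below are illustrative assumptions, not a standard.

```python
from datetime import datetime, timedelta

# Illustrative contact log: (customer_id, issue, timestamp)
contacts = [
    ("cust-1", "billing", datetime(2024, 5, 1, 9, 0)),
    ("cust-1", "billing", datetime(2024, 5, 3, 14, 0)),  # repeat: not resolved first time
    ("cust-2", "renewal", datetime(2024, 5, 2, 10, 0)),  # no repeat: resolved
]

REPEAT_WINDOW = timedelta(days=7)  # assumed follow-up window

def first_contact_resolution_rate(log):
    """Share of first contacts with no repeat for the same customer+issue in the window."""
    log = sorted(log, key=lambda c: c[2])
    firsts, repeats = {}, set()
    for cust, issue, ts in log:
        key = (cust, issue)
        if key in firsts and ts - firsts[key] <= REPEAT_WINDOW:
            repeats.add(key)
        firsts.setdefault(key, ts)
    return 1 - len(repeats) / len(firsts)

print(first_contact_resolution_rate(contacts))  # one of two journeys resolved first time
```

Measured this way, "repeat callers" and "I was told to call back" calls show up directly as FCR failures, which is exactly the journey-level signal the episode argues for.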
One of the biggest misconceptions is the belief that AI maturity equals removing humans from decisions. The narrative goes something like this: “AI will eliminate manual work.” “AI will replace decision-making.” “AI will automate the frontline.” AI reduces repetitive effort, but mature AI strategies don’t remove humans. They reposition humans, because AI doesn’t replace judgment — it changes where judgment sits.

In service environments, especially government or customer operations, you’re not just optimizing for efficiency. You’re optimizing for trust, fairness, compliance, transparency, and experience outcomes. These aren’t purely technical problems; they’re human problems. Human-in-the-loop isn’t a safety net added later; it’s an architectural principle.

What human-in-the-loop means

Human-in-the-loop doesn’t mean someone clicking “approve” on every AI output. That’s not strategy — that’s bottleneck engineering. A strong human-in-the-loop model defines where human expertise adds value across the lifecycle. There are three primary layers:

1. Design-time humans. These are your service designers, policy owners, product managers, and domain experts. They define what the AI is allowed to do, what outcomes it should optimize for, and where escalation happens. If humans aren’t embedded at design time, your AI will scale the wrong behaviors faster.

2. Run-time humans. These are frontline staff, supervisors, and operational reviewers. They intervene when confidence thresholds drop, policy ambiguity appears, or edge cases emerge. This is where AI becomes an augmentation tool — not a replacement.

3. Oversight humans. This is governance: risk leaders, ethics committees, service excellence teams. They analyze model drift, bias signals, complaint patterns, and experience impacts.

Human-in-the-loop isn’t one role. It’s a layered system.

Why this matters more in government

In commercial tech, a bad AI decision might cost revenue.
In public service, a bad AI decision can cost trust, and trust is harder to rebuild than any operational metric. Think about AI in contexts like eligibility decisions, benefits processing, contact centre automation, case management, and digital service navigation. These environments carry policy complexity, legal obligations, vulnerable populations, and high emotional stakes.

When organizations rush into full automation, they often discover something quickly: efficiency goes up, but so do escalations and complaints. Why? AI handles the predictable middle of the bell curve extremely well, but the edges — the messy, human scenarios — still require interpretation. A human-in-the-loop strategy protects the system from brittle automation. It acknowledges that service isn’t just about speed. It’s about judgment.

The strategic benefits leaders miss

Most conversations about human-in-the-loop focus on risk mitigation, but there’s a strategic upside that many leaders underestimate. If humans don’t have authority or context, they’re not in the loop; they’re in the queue. A common mistake is treating humans as error catchers. Humans shouldn’t only exist to fix AI mistakes; they should shape strategy, define guardrails, and continuously improve outcomes.

To wrap

If you’re building or refining your AI strategy, human-in-the-loop isn’t a compliance checkbox. It’s a competitive advantage: it creates resilience and accelerates learning. Most importantly, it preserves the human trust that every modern service depends on. As AI becomes more capable, the organizations that win won’t be the ones that remove people fastest. They’ll be the ones that design the smartest partnership between humans and intelligent systems. That’s where real transformation happens.
AI is already operating inside your organization. Your staff are using generative AI tools to draft emails, summarize policy documents, analyze data, and prep briefing notes. All of this is happening without a coherent, enterprise-level strategy. Which means decisions about AI are being made individually, inconsistently, and invisibly. That’s not innovation. That’s unmanaged risk. An AI strategy is not about “starting AI.”

Without a strategy, AI amplifies the wrong things

Government systems are very good at one thing: scaling whatever already exists. If your processes are slow, AI can make them faster—but still slow in the wrong places. If your data is biased, AI can make those biases more efficient. If your policies are unclear, AI will apply that ambiguity at machine speed. This is why an AI strategy has to start before technology. A real AI strategy answers questions like:
- What problems are we trying to solve for citizens?
- Where is human judgment essential—and where is it not?
- What decisions should never be automated?
- What level of explainability do we require for public trust?
- How do we ensure AI improves equity instead of undermining it?

Without those answers, AI doesn’t transform government. It industrializes its flaws.

AI strategy is a trust strategy

In government, trust is the currency. And AI—used poorly—can burn through trust faster than almost any other technology we’ve seen. Citizens don’t care whether a decision was made by a legacy system, a human caseworker, or an AI model. They care whether it was fair, transparent, timely, and accountable. An AI strategy establishes:
- clear accountability for AI-supported decisions
- standards for explainability and auditability
- guardrails around surveillance, consent, and data use
A strong AI strategy starts with mission outcomes:
- reducing wait times
- improving eligibility accuracy
- increasing compliance through better guidance
- supporting frontline staff under pressure
- making services more accessible to vulnerable populations

Your strategy should clearly articulate where AI creates material public value, where it does not, and where simpler solutions are better. This clarity is what prevents wasted investment—and public embarrassment.

AI changes the operating model, not just the toolset

This is the part most agencies underestimate. AI is not just another system you plug in. It changes how work is done, how decisions are made, how roles evolve, and how accountability flows. An AI strategy must address operating model questions:
- How do humans and AI collaborate in service delivery?
- What new skills do managers and frontline staff need?
- How do we redesign processes around AI, not bolt it on?
- Who owns model performance over time?

If you don’t answer these questions deliberately, they get answered accidentally, and accidental operating models are never good operating models.

Strategy enables speed

There’s a false choice often presented in government: move fast and be reckless, or move slow and be safe. A well-designed AI strategy enables responsible speed. It allows agencies to:
- move faster on low-risk, high-value use cases
- apply stronger controls to high-impact decisions
- reuse patterns, standards, and governance instead of reinventing them

Strategy reduces friction because people know what’s allowed, what’s not, and how to proceed. That’s how you scale innovation without chaos.

What a government AI strategy should include

Let’s get concrete.
A credible government AI strategy typically includes:
- a clear vision tied to public value and mission outcomes
- principles for responsible and ethical use
- a prioritization framework for AI use cases
- data readiness and quality standards
- governance and accountability models
- workforce and capability development
- vendor and procurement considerations
- metrics for success beyond cost savings
Many organizations are building initiatives around activities, projects, and deliverables—instead of anchoring them in clear business outcomes. If you’ve ever sat in a steering committee where someone says, “We delivered everything we promised… but the metrics didn’t change,” this episode is for you.

Activity is not impact

Most organizations don’t actually have a strategy execution problem. They have an outcomes discipline problem. They fund initiatives like launching a new platform, redesigning a process, implementing a tool, or standing up a new team. None of these clearly answers one basic question: what business outcome will be materially different if this succeeds? Not what will be built, delivered, or launched—but what will improve, reduce, or grow? When initiatives aren’t explicitly tied to outcomes, success becomes subjective—and accountability disappears.

What “basing initiatives on business outcomes” means

Basing initiatives on business outcomes does not mean adding a KPI slide at the end of a deck. It means flipping how initiatives are conceived from the start. Instead of asking, “What should we do next?” you ask, “What outcome must change for the business to succeed?” Real business outcomes are:
- reduce cost-to-serve by 15%
- increase first-contact resolution by 10 points
- shorten cycle time by 20%
- improve regulatory compliance confidence
- increase customer retention in a specific segment

Only after the outcome is clear do you ask, “What initiatives are most likely to drive that change?” This sounds obvious, but it is rare.

Three failure modes

When initiatives aren’t anchored in outcomes, three predictable things happen.

Success becomes theater. Teams celebrate go-lives, launches, and milestones—but no one can prove impact. The organization gets better at delivery, not results.

Prioritization breaks. When everything sounds important, leaders prioritize based on politics, volume, or urgency—not value.
Outcome-based initiatives create a common currency for trade-offs.

Continuous improvement dies. If you don’t define the outcome, you can’t measure progress, learn, or adjust. Initiatives become “one and done” instead of continuously optimized.

Outcomes create strategic alignment

Business outcomes are the bridge between strategy and execution, leadership intent and frontline action, and investment and accountability. When outcomes are explicit:
- executives know why they’re funding something
- teams know what success actually means
- managers can align trade-offs
- metrics stop being performative and start being operational

This is especially critical in large, complex organizations—where initiatives cut across silos and no single team owns the full value chain. Outcomes create shared ownership.

The outcome clarity check

Take any major initiative and ask three questions:
1. What business metric will change if this succeeds?
2. By how much must it change to justify the investment?
3. Who is accountable for realizing that change—not just delivering the work?

If those answers aren’t clear, you don’t have an outcome-based initiative. You have a well-funded experiment.

Why this matters now

In today’s environment—tight budgets, rising expectations, and increasing complexity—organizations cannot afford activity without impact. AI, digital transformation, service design, journey management—all of these are powerful. But none of them are strategies. They are means. Business outcomes are the end. The organizations that win aren’t the ones doing the most work. They’re the ones that can clearly answer, “What changed because we did this?”

To wrap

If you want better results, stop starting with initiatives. Start with outcomes. Fund outcomes. Govern outcomes. Hold leaders accountable for outcomes.
Most government operating models are built around programs, policies, channels, and functional silos. Government is organized by what it does, not by what the customer experiences. As a result, the customer journey is fragmented. A citizen starts a service in one channel, gets handed off to another team, repeats their story, hits a policy boundary, and eventually gives up—or escalates. No one owns that journey end to end. Everyone owns a piece of the process. No one owns the experience.

A journey-enabled operating model flips that logic. Instead of asking, “How do we optimize our functions?” it asks, “How do we design the organization around the journeys that matter most?”

What does journey-enabled mean?

A journey-enabled operating model does three critical things. It:
- treats customer journeys as managed assets, not artifacts
- embeds journey accountability into governance and decision-making
- aligns teams, funding, metrics, and technology around outcomes—not outputs

This is not about replacing functional structures. It’s about overlaying a journey lens on top of them. Think of journeys as the connective tissue across policy, operations, technology, and service delivery.

Assign journey ownership

Here’s where most organizations hesitate. A journey-enabled operating model requires explicit journey ownership. Not symbolic ownership. Not advisory ownership. Real accountability. A journey owner is responsible for:
- end-to-end experience performance
- identifying friction and failure points
- prioritizing improvements across silos
- advocating for the customer in governance forums

They do not replace operational leaders; instead, they act as horizontal leaders—cutting across vertical structures. In mature models, journey owners have decision rights, dedicated capacity, and a formal role in investment and prioritization. Without this, journeys revert to PowerPoint slides that collect digital dust.
Build journey-aligned teams

A journey-enabled organization does not rely solely on centralized CX teams. Instead, it creates journey-aligned, cross-functional squads—either permanent or federated—bringing together: operations, policy, technology, data and analytics, design and research.

These teams work on continuous improvement, not one-off projects. They are measured on outcomes like: time to resolution, first-contact completion, effort reduction, trust and confidence. This is where the operating model shifts from episodic change to ongoing journey management.

Embed journeys into governance

This is the hardest—and most important—part. A journey-enabled operating model changes how decisions get made. Journeys must be embedded into: portfolio planning, investment governance, performance reviews, executive reporting.

Instead of asking, “Which project should we fund?” leaders should be asking, “Which journey outcome are we improving?”

Instead of channel-based KPIs, organizations track: journey health, drop-off points, rework and escalation, cross-channel failure demand. This makes the customer visible in rooms where the customer has historically been absent.

Enable with data and technology

Journeys cannot be managed without insight. A journey-enabled operating model relies on: integrated data across channels, journey analytics and flow analysis, voice-of-customer and operational signals, case and workflow visibility.

This is not about perfect data. It’s about directionally accurate insight that allows teams to see where: customers get stuck, effort spikes, policies create friction. Technology becomes an enabler of learning—not just automation.

What does this look like?

In organizations that do this well, you see real shifts: fewer handoffs, faster service recovery, reduced repeat contacts, better alignment between policy intent and lived experience.
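The “drop-off points” metric mentioned above can be approximated from journey event data. This is a minimal sketch under assumed data: the step names and the `(user_id, step)` event shape are invented for illustration, and a real funnel model would be more careful about ordering and repeat visits.

```python
from collections import Counter

# Hypothetical journey stages for a benefits application.
STEPS = ["start", "submit", "verify", "decision"]

def drop_off_rates(events):
    """Return the share of users lost at each step-to-step transition.

    `events` is an iterable of (user_id, step) tuples gathered from
    integrated channel data. Directionally accurate, not precise.
    """
    seen = {}  # user_id -> set of steps they reached
    for user, step in events:
        seen.setdefault(user, set()).add(step)

    reached = Counter()  # step -> number of distinct users who reached it
    for steps in seen.values():
        for s in steps:
            reached[s] += 1

    rates = {}
    for a, b in zip(STEPS, STEPS[1:]):
        if reached[a]:
            rates[f"{a}->{b}"] = 1 - reached[b] / reached[a]
    return rates

events = [
    ("u1", "start"), ("u1", "submit"), ("u1", "verify"), ("u1", "decision"),
    ("u2", "start"), ("u2", "submit"),
    ("u3", "start"),
]
print(drop_off_rates(events))
```

Even a rough view like this shows which transition is bleeding users, which is usually enough to decide where the journey team looks first.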
The illusion of AI readiness

Many governments believe they are AI-ready because they’ve: published an AI strategy, piloted a chatbot, created an ethics framework, stood up a data or innovation office.

All of that is important, but none of it, on its own, equals readiness.

True AI readiness is not about technology adoption; it’s about organizational transformation. AI doesn’t simply automate tasks—it reshapes decision-making, accountability, service models, workforce roles, and citizen expectations.

This is where many governments run into trouble. They try to layer AI onto legacy systems, legacy processes, and—most critically—legacy ways of working. That approach creates isolated wins, but systemic failure.

What is AI readiness?

A government is AI-ready when it can: deploy AI safely and ethically at scale, integrate AI into core service delivery—not just pilots, govern AI decisions with clarity and confidence, equip its workforce to work with AI, and continuously adapt as AI capabilities evolve.

What is not on the list? Tools. Vendors. Hype.

AI readiness sits at the intersection of data, governance, operating models, and culture. If any one of those is weak, AI maturity stalls.

The readiness gaps

1. Data readiness

AI runs on data—but many governments still struggle with: fragmented data ownership, poor data quality, limited interoperability across ministries or agencies, unclear rules on data sharing.

Without trusted, accessible, and well-governed data, AI systems produce unreliable or biased outputs. AI does not fix bad data. It amplifies it.

2. Governance and accountability

Too often AI governance becomes either so restrictive that nothing can move forward, or so vague that accountability disappears.

Key questions often go unanswered: who is accountable for AI decisions? who approves model use? who monitors bias and drift? who owns outcomes when AI is embedded in services?

AI readiness requires decision clarity, not just ethical principles.

3.
Operating model misalignment

This is the biggest gap—and the least discussed. Most government operating models were designed for: linear processes, human-only decision making, static policies and rules.

4. Workforce confidence

AI readiness is not just about skills—it’s about confidence and trust. Public servants need to know: when to rely on AI, when to override it, how to explain AI-supported decisions to the public, and how AI changes—not replaces—their professional judgment.

Without deliberate workforce enablement, AI becomes something that happens to employees, not with them.

The goal is not speed; the goal is trust at scale. Trust is built when AI is: explainable, governed, embedded in human-centered service design.

Are governments AI-ready?

Some are becoming ready. Most are not yet ready at scale. Governments are: experimenting responsibly, learning what works and what doesn’t, building foundational capabilities.

But readiness is uneven, and the risk is not that governments move too fast; it’s that they move too cautiously in the wrong areas—focusing on pilots instead of platforms, tools instead of transformation.

What governments should do next

1. Shift from AI projects to AI capabilities

Stop thinking in terms of pilots and start building reusable AI capabilities—data platforms, governance models, shared services.

2. Redesign the operating model

Explicitly design how humans and AI work together. Define roles, escalation paths, and accountability.

3. Invest in data as critical infrastructure

Treat data like roads, bridges, and utilities.

4. Build workforce fluency, not just skills

Focus on judgment, ethics, and decision-making—not just prompts and tools.

5. Anchor everything in service outcomes

AI is not the strategy. Better, faster, fairer services are.
Before we talk about plans, we need to ground the conversation. An AI agent is not just a chatbot that answers FAQs. An AI agent is a system that can: interpret intent, take action across systems, follow defined rules and policies, escalate appropriately, and learn within controlled boundaries.

In a government context, that could mean an agent that: guides a citizen through eligibility, application, and next steps; supports case workers by summarizing files, flagging risks, or drafting correspondence; proactively notifies citizens of obligations, deadlines, or benefits.

Start with outcomes, not technology

The biggest mistake I see government organizations make is starting with the tool. They ask: what AI platform should we buy? should we build or buy? can we pilot something quickly?

Those are the wrong first questions. The plan must start with service outcomes. Instead ask: where do citizens experience the most friction? are staff overwhelmed by repetitive, rules-based work? do delays create risk, cost, or loss of trust?

High-value use cases for AI agents in government usually share three characteristics: high volume, high repetition, clear policy or decision frameworks. Eligibility checks. Status updates. Intake and triage. Case summarization. Guided self-service.

Your plan should prioritize two or three services, not twenty.

Define guardrails before building

This is where government differs fundamentally from the private sector—and where planning really matters. Before deploying AI agents, your plan must clearly define guardrails in four areas:

Authority. What decisions can an AI agent make? What decisions must remain human-led? What decisions require dual control? If you can’t answer that clearly, you’re not ready to deploy.

Accountability. Every AI-enabled service must have: a named service owner, a business lead accountable for outcomes, and a clear escalation and remediation model. AI does not remove accountability. It concentrates it.
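The authority guardrail above is, in effect, a routing rule. Here is a minimal sketch of what that rule might look like in code; the decision types, allow-lists, and the 0.3 risk threshold are all assumptions made up for illustration, not a recommended policy.

```python
# Illustrative authority guardrails for an AI agent.
AGENT_ALLOWED = {"status_update", "document_checklist"}  # low-stakes, rules-based
DUAL_CONTROL = {"benefit_denial"}                        # always two humans

def route_decision(decision_type, risk_score):
    """Decide who acts on a decision: the agent, a human, or dual control.

    Anything unrecognized or high-risk defaults to human-led, because
    the safe failure mode for government services is human judgment.
    """
    if decision_type in DUAL_CONTROL:
        return "dual_control"
    if decision_type in AGENT_ALLOWED and risk_score < 0.3:
        return "agent"
    return "human"

print(route_decision("status_update", 0.1))
print(route_decision("benefit_denial", 0.1))
```

Note the design choice: the default branch is "human". Guardrails written this way fail closed, which is exactly what "if you can’t answer that clearly, you’re not ready to deploy" implies.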
Privacy and data use. Your plan must explicitly define: what data the agent can access, what data it cannot access, and how data is logged, audited, and retained. If privacy teams are brought in after the pilot, you’ve already failed.

Design AI agents as part of the service journey

Here’s an important mindset shift: you don’t “add” an AI agent to a service. You design the service around the agent and the human together. That means mapping the end-to-end journey and asking where the agent should: lead? assist? step back?

Build the operating model around the agent

One of the most overlooked parts of AI planning in government is the operating model. AI agents require: ongoing training and tuning, policy updates, content governance, performance monitoring.

Your plan must answer: who owns the agent? who updates rules and prompts? who reviews decisions and outcomes? who responds when something goes wrong?

Leading organizations have: product-style ownership for AI agents; multidisciplinary teams—policy, service design, legal, technology; and clear metrics tied to service outcomes, not usage statistics.

Measure

Let’s talk about metrics. Too many AI pilots measure: number of interactions, containment rates, cost deflection. Those are operational metrics, not public-value metrics.

A strong AI agent plan measures: reduction in time to resolution, increase in first-time-right applications, improved staff capacity and satisfaction, decrease in repeat contact, improved equity of access.

Scale intentionally

Once the first use cases are live and stable, the plan should shift from experimentation to platform thinking. That means: reusable components, shared governance models, a consistent citizen experience across services.

The goal is not dozens of disconnected agents. The goal is a coherent AI-enabled service ecosystem. Scaling without a plan creates fragmentation. Scaling with a plan creates momentum.
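Two of the public-value metrics named above, time to resolution and first-time-right rate, are easy to compute once cases carry the right fields. A minimal sketch, assuming invented field names (`opened_day`, `resolved_day`, `right_first_time`) on hypothetical case records:

```python
from statistics import median

# Hypothetical case records; day numbers stand in for real timestamps.
cases = [
    {"opened_day": 0, "resolved_day": 5, "right_first_time": True},
    {"opened_day": 2, "resolved_day": 14, "right_first_time": False},
    {"opened_day": 3, "resolved_day": 6, "right_first_time": True},
]

def service_outcome_metrics(cases):
    """Compute outcome metrics (time to resolution, first-time-right)
    rather than usage statistics like interaction counts.
    """
    ttr = [c["resolved_day"] - c["opened_day"] for c in cases]
    return {
        "median_days_to_resolution": median(ttr),
        "first_time_right_rate": sum(c["right_first_time"] for c in cases) / len(cases),
    }

print(service_outcome_metrics(cases))
```

The median is used deliberately: a handful of stuck cases should not hide in an average, and the long tail is usually where trust is lost.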
Trust in government services isn’t a “nice to have.” It’s not a branding exercise. It’s not a communications problem. Trust is an operational outcome.

Trust matters

Most people don’t want to interact with government. They interact because they need: benefits, healthcare, licenses, permits.

Trust determines whether they: comply willingly or reluctantly, believe the information they’re given, come back to the same channel next time—or avoid it entirely.

When trust is low: call volumes spike, complaints increase, escalations become the norm, frontline staff burn out.

When trust is high: digital adoption rises, self-service works, conversations become shorter, calmer, and more productive.

What trust means in government

Here’s where governments often get it wrong. They treat trust as a communications challenge: “Let’s explain better.” “Let’s update the website.” “Let’s issue a statement.”

Trust is not built by what you say. It’s built by what people experience repeatedly. In government services, trust has four core dimensions:

Reliability. Do you do what you say you’ll do—every time? If you promise a response in five days, is it five days or fifteen?

Competence. Do staff know the rules, the process, and the next steps, or does the citizen hear, “I’m not sure,” too often?

Transparency. Do people understand where they are in the process, or does their application disappear into a black hole?

Fairness. Do similar cases get similar outcomes, or does it feel arbitrary, inconsistent, or dependent on who you talk to?

Trust is the accumulation of these experiences over time.

Trust is created at the journey level

If you want to build trust, stop thinking in channels and start thinking in journeys. Citizens don’t experience the website, the customer service area, or the payment area in isolation. A person experiences: trying to get help, waiting for a decision, fixing a mistake, following up when nothing happens.
Trust is most often broken in three moments:

Handoffs. When a citizen moves from digital to phone, or phone to caseworker, and has to repeat their story.

Waiting. Silence kills trust. If people don’t know what’s happening, they assume the worst.

Exceptions. Life doesn’t fit into policy. When the process can’t handle edge cases, trust collapses fast.

High-trust organizations design journeys that: minimize handoffs, make status visible, and empower staff to resolve, not deflect.

Role of employees

Citizens judge the entire government by the last person they spoke to. That means trust is delivered—or destroyed—by frontline employees. Trust cannot exist externally if it doesn’t exist internally.

Digital trust

Digital services don’t build trust by being flashy. They build trust by being predictable. Citizens trust digital services when: forms are clear and don’t ask unnecessary questions, errors are explained in plain language, progress is visible, outcomes are consistent with offline channels.

Nothing destroys trust faster than a website that says one thing, a live agent who says another, and a letter that says something else entirely.

To wrap

Trust is not owned by the communications team, the digital team, or customer service. Trust is owned by the operating model.

Government agencies that build trust ask: where do citizens get stuck most often? where do we force people to call us? where does policy override common sense? where do employees feel powerless?

If you want to measure trust, don’t start with surveys; start with friction. Every unnecessary step, delay, and handoff is a withdrawal from the trust account. Every clear answer, timely update, and fair outcome is a deposit.

Trust in government services is built quietly, journey by journey. It’s not about perfection but about consistency, transparency, and respect for the citizen’s time and reality. Once trust is earned, everything else—digital adoption, efficiency, compliance—gets easier.
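The "trust account" framing above is measurable: tag journey events as friction (withdrawals) or positive moments (deposits) and tally them. The event names here are invented for illustration, and a real scoring scheme would weight events rather than count them equally.

```python
# Illustrative trust-account tally over journey events.
WITHDRAWALS = {"handoff", "repeat_story", "unexplained_wait", "inconsistent_answer"}
DEPOSITS = {"clear_answer", "timely_update", "fair_outcome"}

def trust_balance(events):
    """Net trust score for one journey: +1 per deposit, -1 per withdrawal.

    True/False subtract as 1/0, so each event contributes +1, -1, or 0.
    """
    return sum((e in DEPOSITS) - (e in WITHDRAWALS) for e in events)

print(trust_balance(["clear_answer", "handoff", "unexplained_wait", "timely_update"]))
```

The usefulness is in the trend, not the absolute number: a journey whose balance drifts negative over releases is accumulating friction faster than it resolves it.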
AI is not a single thing. It’s not one system. It’s also not a magic button you bolt onto a broken process.

When most people say “AI,” they’re lumping together: machine learning, predictive analytics, natural language processing, generative AI, intelligent automation. Each of these has very different implications for government.

The mistake many agencies make is jumping straight to the technology conversation without asking the more important questions: What decisions are we trying to improve? What work is repetitive, rules-based, or data-heavy? Where are citizens experiencing friction or delay?

AI does not replace strategy. It amplifies whatever strategy you already have—good or bad. If your processes are fragmented, AI will scale fragmentation. If your data is unreliable, AI will industrialize bad decisions. This is why AI in government is not primarily a technology transformation. It is an operating model transformation.

Operations, not chatbots

Public attention tends to focus on visible AI use cases: chatbots, virtual assistants, automated responses. Those matter—but the biggest impact of AI in government will happen behind the scenes.

Consider the operational realities most agencies face: large backlogs, manual case processing, inconsistent decisioning, limited visibility into demand and workload, workforce shortages. AI is already changing this in three major ways.

First: intelligent triage and prioritization

AI can assess incoming applications, claims, or requests and: route them to the right team, flag high-risk or high-impact cases, identify missing information early. This alone can reduce cycle times dramatically—without changing legislation or service promises.

Second: decision support, not decision replacement

In government, AI should rarely make final decisions. It can: surface patterns humans can’t see, provide probability scores, highlight anomalies or potential errors. This leads to more consistent, defensible, and auditable outcomes.
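The triage pattern described above (route, flag risk, catch missing information early) can be sketched without any machine learning at all. The required fields, the claim-amount risk rule, and the queue names below are all hypothetical, chosen only to show the shape of the logic.

```python
# Illustrative intake triage for an application queue.
REQUIRED_FIELDS = {"applicant_id", "income_proof", "address"}

def triage(application):
    """Triage one incoming application.

    Returns the queue it should route to, the fields still missing,
    and whether it was flagged high-risk (hypothetical amount rule).
    """
    missing = sorted(REQUIRED_FIELDS - application.keys())
    high_risk = application.get("claim_amount", 0) > 10_000

    if missing:
        queue = "request_information"   # catch gaps before a human touches it
    elif high_risk:
        queue = "senior_review"         # high-impact cases get experienced eyes
    else:
        queue = "standard_processing"
    return {"queue": queue, "missing": missing, "high_risk": high_risk}

print(triage({"applicant_id": "A1", "income_proof": "doc.pdf",
              "address": "10 Main St", "claim_amount": 2_000}))
```

A model would eventually replace the hard-coded risk rule, but the surrounding structure (explicit queues, an auditable reason for every routing decision) is what makes the outcome defensible.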
Third: predictive operations

Instead of reacting to spikes in demand, AI enables agencies to: forecast volumes, anticipate capacity gaps, adjust staffing and channels proactively. That is a fundamental shift—from reactive service delivery to managed demand.

The citizen experience

The biggest change AI brings to the citizen experience is not “faster answers.” Historically, government has been structured around programs, not people. Citizens are forced to: navigate complex eligibility rules, re-enter the same information, interact through channels the agency prefers.

AI starts to change that dynamic. With the right data foundations, AI can enable: personalized guidance instead of generic instructions, proactive outreach instead of reactive enforcement, seamless handoffs across channels and departments.

Imagine a government experience where: citizens are guided to the right service the first time, life events trigger coordinated responses, repetition and redundancy are designed out. That is not science fiction, but it requires agencies to think in terms of journeys, not transactions. AI accelerates this shift—but only if the organization has done the journey design work first.

The agencies that struggle with AI adoption won’t fail because of technology—they will fail because they didn’t redesign work.

Governments need to be transparent about: where AI is used, what decisions it supports, where humans remain accountable. AI done poorly erodes trust quickly. AI done well can actually strengthen legitimacy by making decisions more consistent and fair.

To wrap

AI will not make government smaller. It will make government different. It will make government more predictive and consistent. The agencies that succeed won’t be the ones with the most advanced algorithms; they’ll be the ones that align technology, operating models, and public values.
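The "forecast volumes" step in the predictive-operations section above can start far simpler than most agencies assume. A moving average over recent demand is a deliberately naive sketch, a placeholder for real forecasting, but it already supports the shift from reacting to spikes toward planning capacity ahead of them. The weekly call volumes are invented sample data.

```python
def forecast_next(volumes, window=4):
    """Naive moving-average forecast of next period's demand.

    Averages the last `window` observations (or all of them, if fewer
    exist). Real deployments would add seasonality and trend.
    """
    recent = volumes[-window:]
    return sum(recent) / len(recent)

weekly_calls = [1200, 1350, 1280, 1400, 1500]  # hypothetical contact-center volumes
print(forecast_next(weekly_calls))
```

Even this crude baseline gives operations a number to staff against, and it gives any fancier model something it must demonstrably beat before it earns a place in the workflow.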
Most government operating models were not designed for the environment we are operating in today. They were designed for stability. They were designed for predictability. They were designed for policy-driven, siloed execution.

Today’s environment demands something very different: citizens expect seamless digital services, policy changes are faster and more frequent, funding pressures are constant, and technology—especially AI—is changing how work gets done almost monthly.

What is a dynamic operating model?

A dynamic operating model is the way an organization continuously aligns: strategy, policy, customer needs, processes, technology, people. In real time—or as close to real time as government can reasonably get.

The key word here is dynamic. This model is designed to change without chaos. It allows an agency to: respond to new policy direction, shift resources to priority outcomes, introduce new channels or technologies, improve services based on lived customer experience. All without breaking delivery.

Why government needs this now

Government agencies are facing a perfect storm.

First, citizen expectations are shaped by the private sector. People compare government services to banks, retailers, and digital platforms—even if that comparison is not always fair.

Second, policy volatility has increased. Programs are launched, amended, or paused faster than ever.

Third, legacy operating models are holding agencies back: siloed program ownership, channel-centric delivery, rigid funding and workforce models, technology that dictates process instead of enabling it.

The result is predictable: slow change, inconsistent service experiences, burnout in frontline staff, frustrated citizens. A dynamic operating model gives government a way to modernize how it operates, not just what it delivers.

The dynamic operating model

I think of a dynamic operating model as having six integrated components.

1.
Clear outcome-based strategy

Everything starts with outcomes—not outputs: not “process more applications,” not “launch a new portal,” but outcomes like: faster access to benefits, reduced administrative burden, improved trust in government services. These outcomes guide decisions across policy, operations, and technology.

2. Customer-led service design

In a dynamic model, journeys—not programs—are the organizing principle. That means: mapping end-to-end citizen journeys, understanding pain points across channels, designing services around life events, not internal structures. Journey management becomes a core capability, not a side project.

3. Agile governance and decision-making

Traditional governance is built for control. Dynamic governance is built for speed with accountability. This includes: clear decision rights, delegated authority where appropriate, shorter approval cycles, data-driven prioritization. Governance should enable movement—not block it.

4. Modular processes and technology

Dynamic models rely on modularity. Processes are designed in components that can be adjusted without redesigning everything. Technology is: API-enabled, cloud-based, configurable rather than custom-built. This is what allows agencies to evolve incrementally instead of through massive transformation programs.

5. Workforce enablement

A dynamic model requires a workforce that is: multi-skilled, empowered to solve problems, supported by automation and AI. Roles shift from task execution to judgment, exception handling, and service recovery. Change management is not a phase—it is continuous.

6. Performance management and feedback loops

Dynamic operating models are measured and adjusted constantly. This includes: operational KPIs, customer experience metrics, employee feedback, policy and compliance indicators. The model improves itself over time.