Governing AI in the Age of Risk
Update: 2025-11-14
Description
Guest article by Paul Dongha, co-author of Governing the Machine: How to navigate the risks of AI and unlock its true potential.
Artificial Intelligence (AI) has moved beyond the realm of IT; it is now the defining strategic challenge for every modern organisation. The global rush to adopt AI is shifting from a sprint for innovation to a race for survival. Yet as businesses scramble to deploy powerful systems, from predictive analytics to generative AI, they risk unleashing a wave of unintended consequences that could cripple them. That warning sits at the heart of Governing the Machine: How to navigate the risks of AI and unlock its true potential, a timely new guide for business leaders.
Governing the Machine
The authors, Dr Paul Dongha, Ray Eitel-Porter, and Miriam Vogel, argue that the drive to embrace AI must be matched by an equally urgent determination to govern it. Drawing on extensive experience advising global boardrooms, they cut through technical jargon to focus on the organisational realities of AI risk. Their step-by-step approach shows how companies can build responsible AI capability, adopting new systems effectively without waiting for perfect regulation or fully mature technology. That wait-and-see strategy, they warn, is a losing one: delay risks irrelevance, while reckless deployment invites legal and reputational harm.
The evidence is already visible in a growing list of AI failures, from discriminatory algorithms in public services to generative models fabricating news or infringing intellectual property. These are not abstract technical flaws but concrete business risks with real-world consequences.
Whose problem is it anyway? According to the authors, it is everyone's. The book forcefully argues that AI governance cannot be siloed within the technology department. It demands a cross-enterprise approach, with active leadership driven from the C-suite and engagement from Legal counsel, Human Resources, Privacy and Information Security teams, and frontline staff alike.
Rather than just sounding the alarm, the book provides a practical framework for action. It guides readers through the steps of building a robust AI governance programme. This includes defining clear principles and policies, establishing accountability, and implementing crucial checkpoints.
A core part of this framework is a clear-eyed look at the nine key risks organisations must manage: accuracy, fairness and bias, explainability, accountability, privacy, security, intellectual property, safety, and the impact on the workforce and environment. Each risk area is explained, and numerous controls that mitigate and manage these risks are listed, with ample references to allow the interested reader to follow up.
Organisations should carefully consider implementing a Governance, Risk and Compliance (GRC) system, which brings together all key aspects of AI governance. GRC systems are available both from large tech companies and from specialist vendors. Such a system ties together all key components of AI governance, providing management with a single view of their deployed AI systems and a window into all stages of AI governance for systems under development.
The book is populated with numerous case studies and interviews with senior executives from some of the largest and best-known organisations in the world that are grappling with AI risk management.
The authors also navigate the complex and rapidly evolving global regulatory landscape. With the European Union implementing its comprehensive AI Act and the United States advancing a fragmented patchwork of state and federal rules, a strong, adaptable internal governance system is presented as the only viable path forward. The EU AI Act, which has now come into force with staggered compliance deadlines over the coming two years, requires all organisations that operate within the EU to implement risk mitigation controls with evidence of compliance. A key date is August 2nd 2026, by which time all 'Hig...