The 'Moral Crumple Zone': Who Takes the Blame When AI Makes a Mistake? | EP 06

Update: 2025-09-24

Description

When AI makes a mistake, who's accountable — the developer, the user, or the system itself?

In this episode, Meagan Gentry, National AI Practice Senior Manager & Distinguished Technologist at Insight, unpacks the concept of agentic AI and how organizations can embed accountability into autonomous workflows.

From the "moral crumple zone" to use case feasibility mapping, Meagan shares frameworks for building trust and driving ROI with AI agents.

Jump right to… 
00:00 : Welcome/intro 
03:12 : What is agentic AI? 
06:45 : Why accountability matters now 
09:30 : Explainability vs. performance tradeoffs 
13:10 : Ownership and moral crumple zones 
17:15 : Mapping accountability across AI lifecycle 
20:21 : Empowering users with AI awareness 
25:32 : Human in the loop vs. human in command 
27:24 : What CEOs must ask before greenlighting AI 
29:30 : Who belongs at the AI strategy table 
30:58 : Culture shifts and trust in AI agents 

 🎯 Related resources:

https://www.insight.com/en_US/content-and-resources/blog/the-truth-about-ai-agent-risks-and-what-to-do-about-them.html

https://www.insight.com/en_US/content-and-resources/blog/6-high-impact-agentic-ai-use-cases-executives-should-champion-today.html

Learn more: https://www.insight.com/en_US/what-we-do/expertise/data-and-ai/generative-ai.html

Subscribe for more episodes.
