Researcher uncovers weaknesses in AI systems (#16)
Update: 2025-11-14
Description
Artificial intelligence is advancing rapidly - but so are the risks hidden in its systems.
In this episode, Sandra Casalini talks with Maura Pintor, Assistant Professor at the PRA Laboratory of the University of Cagliari, about the unseen vulnerabilities of AI models. Pintor explains why proactive security testing - modeling attacks before they happen - is essential for building trustworthy AI. She discusses how misjudgments and confirmation bias slow progress, and why the fast-paced evolution of generative AI poses challenges for data quality and reliability. Despite the hurdles, Pintor remains optimistic: new approaches in automated testing and validation show that secure AI is possible - if we are ready to challenge our assumptions.