
UNESCO’s Hands-On AI Supervision

Author: UNESCO


Description


UNESCO’s Hands-On AI Supervision: Lessons from Practice is a six-episode mini podcast series showcasing concrete lessons from the 2nd Expert Roundtable on AI Supervision, convened by UNESCO. Each episode distils insights from hands-on exercises with leading experts on AI risk mapping, evaluations, red teaming, benchmarking, cybersecurity, and engagement with market actors. Designed for regulators, policymakers, and practitioners, the series explores practical methodologies, emerging challenges, and the institutional capacities needed for effective AI oversight. Through focused conversations with specialists, the series provides accessible, actionable knowledge to strengthen technical readiness and foster ongoing dialogue across the global AI supervision community.

Hosted on Ausha. See ausha.co/privacy-policy for more information.
3 Episodes
A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance. The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.
Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO
This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified during the roundtable exercise, providing regulators with concrete steps for structuring risk identification and prioritisation.
Speaker: Nathalie Cohen (OECD)
Interviewer: Max Kendrick, AI Strategy Coordinator & Senior Advisor, Office of the Director General, UNESCO
Effective AI supervision requires reliable benchmarking ecosystems. Nicholas Miailhe discusses why benchmarks matter, how they should be constructed, and what regulators need to know about safety evaluations. The conversation highlights emerging international efforts to standardise safety testing and ensure comparability across models.
Speaker: Nicholas Miailhe (PRISM Eval)
Interviewer: Doaa Abu Elyounes, Programme Specialist, Ethics of AI Unit, UNESCO