RegulatingAI Podcast: Innovate Responsibly
Author: Sanjay Puri
© Copyright 2025
Description
Welcome to the RegulatingAI Podcast: Innovate Responsibly, with host and AI regulation expert Sanjay Puri. A pivotal leader at the intersection of technology, policy, and entrepreneurship, Sanjay explores the intricate landscape of artificial intelligence governance on this podcast.
You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.
Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
147 Episodes
Roy Austin brings a rare, end-to-end perspective on AI governance—from co-authoring President Obama’s 2014 civil rights and big data report to building Meta’s first civil rights team and now leading Howard Law’s AI initiative. In this episode of Regulating AI, he unpacks why self-regulation fails, why data quality still defines AI outcomes, and how states are stepping in where federal policy has stalled. We also explore how unchecked wealth concentration and weak oversight threaten democracy in the age of AI. A must-listen conversation on what real accountability in AI governance should look like.
In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological autonomy, AI refuses to respect borders. Anne explores how governments can balance AI sovereignty with global cooperation, why fragmented regulation could backfire, and what it will take to build shared rules for a technology shaping geopolitics, markets, and society itself. A must-listen conversation on power, policy, and the future of AI governance.
Artificial Intelligence is reshaping economies, governments, and global cooperation. At the ASEAN platform, Sanjay Puri, Founder & Chairperson, sits down with U.S. Congressman Jay Obernolte to discuss the evolving landscape of AI governance, AI policy, and international collaboration.

This insightful conversation explores:
The future of AI regulation and governance
How governments can balance innovation and responsibility
The role of ASEAN and global partnerships in shaping AI policy
The importance of ethical, transparent, and inclusive AI frameworks

📌 Watch the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.
In this episode of the Regulating AI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology. Camille is directly involved in landmark lawsuits against CharacterAI and OpenAI CEO Sam Altman, placing her at the forefront of debates around AI accountability, AI companions, and platform liability.

This conversation examines the mental-health risks of AI chatbots, the rise of AI companions, and why certain conversational systems may pose public-health concerns, especially for younger and socially isolated users. Camille also breaks down how AI governance frameworks differ across U.S. states, Congress, and the EU AI Act, and outlines what practical, enforceable AI policy could look like in the years ahead.

Key Takeaways:
AI Chatbots as a Public-Health Risk: Why AI companions may intensify loneliness, emotional dependency, and psychological harm, raising urgent mental-health and safety concerns.
Regulating Chatbots vs. Foundation Models: Why high-risk conversational AI systems require different regulatory treatment than general-purpose LLMs and foundation models.
Global AI Governance Lessons: What the EU AI Act, U.S. states, and Congress can learn from each other when designing balanced, risk-based AI regulation.
Transparency, Design & Accountability: How a light-touch but firm AI policy approach can improve transparency, platform accountability, and data access without slowing innovation.
Why AI Personhood Is a Dangerous Idea: How framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.

Subscribe to Regulating AI for expert conversations on AI governance, responsible AI, technology policy, and the future of regulation.
#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks #AICompanions

Resources Mentioned:
https://www.linkedin.com/in/camille-carlton
https://www.humanetech.com/
https://www.humanetech.com/substack
https://www.humanetech.com/podcast
https://www.humanetech.com/landing/the-ai-dilemma
https://centerforhumanetechnology.substack.com/p/ai-product-liability
https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai
In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea-Stephan — COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being. With a career that spans music, psychology, and digital innovation, Karin shares how she’s building privacy-first AI tools designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress. Together, they unpack the delicate balance between AI innovation and human empathy, the ethics of AI chatbots for youth, and what it really takes to design technology that heals instead of harms.

Key Takeaways:
• AI and Empathy: Why emotional intelligence—not algorithms—must guide the future of mental health tech.
• Teens and Trust: How technology exploits belonging, and what must change to rebuild digital trust.
• Regulating Responsibly: Why the answer isn’t bans, but thoughtful, transparent policy shaped with youth input.
• Privacy by Design: How ethical AI can protect privacy without compromising impact.
• Bridging the Global Mental Health Gap: Why collaboration and compassion matter as much as code.

If this conversation made you rethink the relationship between AI and mental health, hit like, share, and subscribe to RegulatingAI for more insights on building technology that serves humanity.

Resources Mentioned:
https://www.linkedin.com/in/karinstephan/
In this episode of RegulatingAI, host Sanjay Puri sits down with Jeff McMillan, Head of Firmwide Artificial Intelligence at Morgan Stanley. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness generative AI responsibly, striking the right balance between innovation, governance, and ethics.

Key Takeaways:
AI Governance: Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.
Human-in-the-Loop: Morgan Stanley’s core principle—keeping humans accountable and central in every AI decision.
Education First: Jeff’s golden rule—spend 90% of your AI budget training people before building tech.
AI as a Risk Mitigator: How AI can actually strengthen compliance and risk management when designed right.
Culture Over Code: Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.

If you enjoyed this conversation, don’t forget to like, share, and subscribe to RegulatingAI for more insights from global leaders shaping the future of responsible AI.

#RegulatingAI #SanjayPuri #MorganStanley #JeffMcmillan #AIGovernance #AILeadership #EnterpriseAI

Resources Mentioned:
https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/
Recent podcast: https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849
Morgan Stanley’s external-facing page on its firmwide AI work: https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team
In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States. As President Donald Trump’s new AI executive order pushes for federal control over AI regulation, Senator Wiener explains why states like California must retain the power to regulate artificial intelligence — and how California’s laws could influence global AI governance.

Senator Wiener is the author of:
• SB 1047 – California’s proposed liability bill for high-risk AI systems
• SB 53 – California’s new AI transparency law, now in effect

We dive deep into:
• The battle between federal vs. state AI regulation
• Why California remains the frontline of AI governance
• The real impact of Trump’s AI executive order
• Growing risks of AI-driven job displacement
• How governments can balance innovation with public safety
• The future of responsible and accountable AI development

🔑 KEY TAKEAWAYS
1. California’s Policy Power: California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.
2. SB 1047 vs. SB 53 Explained: SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly disclose safety and risk practices.
3. Why Transparency Won: After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.
4. AI Job Disruption Is Accelerating: Senator Wiener warns that workforce displacement from AI is happening faster than expected.
5. A Realistic Middle Path: He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.

If you found this conversation valuable, don’t forget to like, subscribe, and share to stay updated on global conversations shaping the future of AI governance.
Resources Mentioned: https://www.linkedin.com/company/ascet-center-of-excellence https://www.linkedin.com/in/james-h-dickerson-phd
In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can lead responsibly in the global AI race. From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her human-centered vision for how AI can advance democracy, fairness, and opportunity for everyone.

Here are 5 key takeaways from the conversation:
💡 Finding the “Goldilocks” Zone: How to strike that just-right balance where AI regulation protects people without holding back innovation.
🏛️ Federal vs. State Regulation: Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.
👩💻 AI and the Workforce: What policymakers can do to make sure AI augments human talent rather than replacing it.
🌎 Democracy vs. Authoritarianism: The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.
🔔 Delaware’s Legacy of Innovation: How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.

If you enjoyed this episode, don’t forget to like, comment, share, and subscribe to RegulatingAI for more conversations with global policymakers shaping the future of artificial intelligence.

Resources Mentioned:
mcbride.house.gov
https://mcbride.house.gov/about
Armenia is quietly becoming one of the world's most interesting AI hubs — and you probably haven't heard about it yet.

In this episode, I sit down with Armenia's Minister of Finance to discuss:
~ Why Nvidia is building a massive AI factory in Armenia
~ How a country of 3 million is attracting Synopsys, Yandex, and major tech companies
~ The secret advantage: abundant energy + Soviet-era engineering talent
~ Is the AI investment boom a bubble or the real deal?
~ How AI is already being used in tax collection and government services
~ The peace agreement with Azerbaijan and what it means for tech investment
~ Why the "Middle Corridor" could make Armenia the next tech destination

The Minister doesn't think AI investment is a bubble — he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.

About the Guest:
Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.

🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation
💬 Leave a comment: What surprised you most about Armenia's AI strategy?
🔔 Hit the bell to catch our next episode
In this episode of RegulatingAI, host Sanjay Puri welcomes Dr. Mark Robinson — Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty.

5 Key Takeaways:
Why massive global science collaborations like ITER offer lessons for AI governance.
The case for a UN-backed International AI Agency to coordinate regulation.
How U.S.–China cooperation could unlock a global framework for AI oversight.
The risks of leaving governance solely to fragmented national initiatives and big tech.
Why timing, leadership, and inclusivity (including the Global South) are critical to shaping AI’s future.

If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to RegulatingAI for more global perspectives on building a trustworthy AI future.

Resources Mentioned:
https://iaia4life.org/
https://www.linkedin.com/in/mark-robinson-3594132b/
🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.

From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa — and how regulation can drive innovation, not restrict it.

Resources Mentioned:
https://www.linkedin.com/in/nickbradshaw/
In this episode of the RegulatingAI Podcast, host Sanjay Puri has an engaging conversation with Governor Matt Meyer, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can — and must — go hand in hand.

5 Key Takeaways:
AI as a Tool, Not Destiny: Governor Meyer emphasizes that AI’s value lies in how it improves lives — not in the technology itself.
First to Value, Not First to Hype: Delaware is piloting and scaling AI responsibly, ensuring guardrails before mass adoption.
Workforce First: With OpenAI certification programs, Delaware is leading in preparing workers and students for the AI-powered economy.
Balancing Innovation & Regulation: The state’s AI sandbox offers a safe testbed for companies to experiment responsibly.
Protecting Democracy & People: From tackling election deepfakes to ensuring job transitions, Meyer highlights human-centered governance.

If you found this conversation insightful, don’t forget to like, comment, share, and subscribe to the RegulatingAI Podcast for more expert perspectives on the future of AI.

Resources Mentioned:
https://www.linkedin.com/company/governor-delaware-matt-meyer/
https://governor.delaware.gov/
https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/
https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/
In this episode of RegulatingAI, Sanjay Puri speaks with Nebraska Attorney General Mike Hilgers, who is leading efforts to combat AI-enabled child exploitation.

You’ll learn:
Why AI-generated CSAM (child sexual abuse material) presents unprecedented risks
How Nebraska passed LB 383 to prohibit AI-generated CSAM
The challenges of prosecuting AI crimes compared to traditional crimes
Why bipartisan coalitions matter in AI governance
How innovation and child protection can coexist in law and policy

Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.

Resources Mentioned:
https://www.linkedin.com/company/nebraska-department-of-justice
In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Rui Pedro Duarte, Managing Director at Loop Future Switzerland and author of The Age of AI Diplomacy. A former Member of Parliament in Portugal, Rui shares a unique perspective on how political experience and technology collide in shaping AI governance.

Key discussion points:
Why AI diplomacy must evolve to operate at machine speed
The concept of “quantum diplomacy” and treaties that self-update
The role of coders and open-source communities as new diplomats
Why AI should be treated as critical infrastructure, not just a product
How collaborative velocity can drive equity between the Global North and South

Watch now for a deep exploration of AI’s role in diplomacy and the urgent need for systemic, global cooperation.

Resources Mentioned:
https://www.linkedin.com/in/rpgduarte
In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Brando Benifei, Member of the European Parliament and one of the lead architects of the EU AI Act — the world’s first binding legislation on artificial intelligence. Brando shares deep insights into the challenges of implementation, balancing transparency with intellectual property, and safeguarding freedoms in a rapidly evolving AI landscape.

Key highlights include:
The role of transparency and auditability in AI governance
Proportional fines and their impact on SMEs versus Big Tech
Why certain AI practices, like predictive policing and mass surveillance, are prohibited
How the EU AI Act integrates with global governance efforts
The importance of education, sandboxes, and support for SMEs

🔗 Watch now to understand how Europe is shaping AI regulation — and what it means for the world.

Resources Mentioned:
https://en.wikipedia.org/wiki/Brando_Benifei
https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home
RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas

In this compelling episode, host Sanjay Puri sits down with Tomas Lamanauskas, Deputy Secretary-General of the International Telecommunication Union (ITU), to explore the global architecture of AI governance.

🔍 What you’ll learn:
How the ITU transitioned from regulating telegraphs to AI governance
Why AI standardization is not a barrier to innovation
The ITU’s pivotal role in connecting 8 billion people
The balance between innovation, regulation, and inclusion
Behind the scenes of the AI for Good Global Summit

🌍 A must-watch for:
Policymakers and AI regulators
Tech entrepreneurs and infrastructure investors
Anyone who cares about global equity in the age of AI

Subscribe for future episodes diving deep into global AI governance.

Resources Mentioned:
https://www.linkedin.com/in/tlamanauskas/
https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx

⏱️ Timestamps:
0:00 Podcast Highlights & Introduction
2:00 What is the ITU and its role in AI regulation?
2:45 From telegraph to AI: A history of the ITU
8:42 Standardizing AI in a rapidly moving world
14:03 The ITU's role in enforcing standards
18:51 Three approaches to AI governance: EU, US, and China
25:01 Geopolitics and national security in AI
30:24 The importance of undersea cables
34:41 Ensuring AI benefits everyone and bridging the digital divide
43:21 The AI for Good Global Summit
48:28 Conclusion and farewell
Live from the AI4 Conference in Las Vegas, Andrew Reiskind, Chief Data Officer at Mastercard, joins the Regulating AI Podcast to discuss the critical intersection of data, AI, and trust. From AI-powered fraud detection to personalization, responsible AI governance, and the rise of agentic commerce, Andrew shares how Mastercard is navigating global challenges in data sovereignty while keeping safety and security at the core.

Topics Covered:
How Mastercard has used AI for 20+ years in fraud prevention & personalization
The role of agentic AI in customer service & commerce
Why trust and security must guide AI innovation
Frameworks for responsible AI & governance across global markets

Subscribe for more insights from AI leaders shaping the future.

Resources Mentioned:
https://www.linkedin.com/in/andrew-reiskind-53a743/
In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can actually perpetuate discrimination.

Key topics include:
How Australia’s largest police force used AI to profile Indigenous youth
The consequences of using historical data without correcting historical bias
Why system-level harms from AI demand policy-level responses
What governments must do to protect rights while embracing innovation

A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.

Resources Mentioned:
https://www.linkedin.com/in/esantow/

⏱️ Timestamps:
0:00 Podcast Highlights
1:34 Ed’s background and journey into technology governance
2:12 The 'aha' moment: an algorithm targeting young people based on race
5:36 Finding a balance between AI's dystopian problems and positive use cases
9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights
11:12 Why innovation and regulation are not a trade-off
12:22 Comparing the AI regulatory approaches of the EU, US, and China
13:57 Australia's practical, non-ideological approach to AI
15:45 How Australia is building its niche on liberal democratic values
19:22 The shift from "fluffy principles" to practical AI safety standards
22:37 The three most common issues for corporate leaders in AI governance
23:08 The problem with the "AI guru" model of governance
25:08 The "dirty secret" of AI and the importance of engaging workers
35:24 The impact of AI on jobs and the workplace
40:28 The Asia-Pacific region's role in AI governance
44:07 Preserving indigenous cultures and languages in AI training data
47:14 The concentration of power in a handful of AI companies
50:09 Facial recognition: good uses vs. bad uses
53:57 Lightning round of questions
55:22 Conclusion and farewell
Join host Sanjay Puri in conversation with Dr. Cari Miller, a leading voice in AI governance, as they unpack the recently announced America’s AI Action Plan.

🔍 What you'll learn:
Why Pillar One of the U.S. plan may spark global misalignment
The risks of removing "misinformation" from AI frameworks
Why U.S. innovation might clash with the EU AI Act and global regulatory norms
How free speech and foundation models intersect with international policy

🌍 Global policymakers, this one is for you.
🎯 Watch now to understand why the latest U.S. move could raise alarms worldwide.

Resources Mentioned:
https://www.linkedin.com/in/cari-miller/

⏱️ Timestamps:
0:00 Introduction of Dr. Cari Miller
2:52 The three pillars of America's AI Action Plan
7:20 Comparing the AI Action Plan to the EU AI Act
8:27 "Hurry up and innovate" and the geopolitical dimension of AI
10:45 The dilemma between innovation and regulation
13:09 The moratorium on state-level AI regulation
15:50 A spectrum for regulation: reversible vs. irreversible harm
17:17 The EU's approach to regulation
19:10 Why AI procurement is the "gate of all gates" for governance
21:27 What makes AI procurement different
23:32 The need for augmented procurement practices and training
24:14 Accounting for hallucination and vendor disclaimers
27:55 Procurement for foundation models vs. fine-tuned solutions
29:39 The possibility of AI insurance
31:02 Distinguishing between trustworthy and "AI snake oil" vendors
33:23 Strengths and weaknesses of existing AI procurement frameworks
35:26 The three checkpoints before issuing an AI RFP
37:41 Sovereign AI and procurement for global south nations
40:20 Concerns about agents and agentic AI systems
44:02 The domain professional and complex multi-turn tasks
45:59 Procurement and pricing models for AI agents
49:00 The maturity of agents and the role of CISOs
52:35 Liability and governance for autonomous agents
55:33 The use of synthetic data: benefits and risks
58:50 Lightning round of questions
1:01:53 Concluding remarks
The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections.

📌 Topics Covered:
~ The EU AI Act’s categorisation of risk – does it go far enough?
~ The collision between data sovereignty, latency, and user rights
~ Why current legal remedies like GDPR aren't enough for generative AI
~ Does the Brussels effect stand a chance against the Washington effect?
~ Will national courts lose relevance in the age of EU digital regulation?

Raquel's legal insight warns of a quiet constitutional revolution underway and why citizen protection must evolve urgently.

🎧 Watch Now: This conversation is vital for anyone navigating AI governance in democratic societies.

Resources Mentioned:
https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/

⏱️ Timestamps:
0:00 Introduction to the podcast and guest, Raquel Brízida Castro
2:21 Magnificent Introduction
2:58 The EU AI Act from a Constitutional Law Perspective
3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law
5:59 New Fundamental Rights in the AI Age
8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm
11:34 Is the EU AI Act's Risk-Based Approach Adequate?
12:05 The Impact of AI on Fundamental Rights
14:52 Regulation vs. Bureaucracy and Self-Regulation
16:26 The Implementation of the AI Act and its Challenges
21:58 The EU vs. US Approach: Regulation vs. Innovation
23:55 The False Dilemma Between Regulating and Innovation
27:09 The Washington Effect
30:51 Implications for American Companies in Europe
31:49 Digital Sovereignty and the Problem of Latency
35:28 Constitutional Safeguards and Regulatory Overreach
35:40 The Primacy of European Law and the Role of Constitutional Courts
38:58 The Two-Year Moratorium on the EU Act
40:30 Lightning Round of Questions
43:24 Final thoughts























