The AI Policy Podcast
Author: Center for Strategic and International Studies
Description
Join CSIS’s Gregory C. Allen, senior adviser with the Wadhwani AI Center, for a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions on AI policy, regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.
82 Episodes
In this special episode recorded at Fathom’s 2026 Ashby Workshops, Greg sits down with Jennifer Pahlka, founder of Code for America and author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Jennifer walks us through her career journey, from filing paperwork at a child welfare agency to helping pioneer the U.S. Digital Service in the Obama administration (3:45). She describes the need for upstream policy reform (11:29) and discusses AI’s potential both to empower public servants to challenge antiquated practices and to help policymakers simplify complex regulations (28:03). Finally, Jennifer shares some AI use cases she’s particularly excited about in government (59:34).
Jennifer Pahlka is a senior fellow at the Niskanen Center and the Federation of American Scientists and a senior advisor at the Abundance Network. She previously served as U.S. Deputy Chief Technology Officer, helping launch the U.S. Digital Service during the second Obama administration, and as a member of the Defense Innovation Board.
Read Jennifer’s book Recoding America and check out her Substack Eating Policy.
Jennifer’s recommended reading:
Hack Your Bureaucracy by Marina Nitze & Nick Sinai
Crisis Engineering by Marina Nitze, Matthew Weaver, & Mikey Dickerson
The Procedure Fetish by Nicholas Bagley
Why Nothing Works by Marc J. Dunkelman
Kill It with Fire by Marianne Bellotti
This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili. The discussion was recorded at the Wadhwani AI Center’s January 30 conference, “Exploring Global AI Policy Priorities Ahead of the India AI Impact Summit.” A full recording of the conference, including additional panels and speakers, can be found here.
In this episode, we discuss and evaluate BIS’s new export policy for Nvidia’s H200 chips (00:31) before turning to Beijing’s decision to block H200 imports (20:18). We then unpack the Pentagon’s recently published AI Strategy, including the shift it represents in DOD’s approach to AI integration (29:17).
Read the CNAS commentary "Unpacking the H200 Export Policy" here.
In this episode, we examine Grok’s public posting of child sexual abuse material and non-consensual intimate imagery (00:27), the legal consequences xAI may face (12:41), and the international policy community's response (19:05). We then unpack New York’s RAISE Act, including the politics leading up to Gov. Hochul’s signature (22:51) and the final outcome of negotiations (28:16).
In this episode, we’re joined by Chris McGuire for a conversation about AI and semiconductor export controls. We begin by discussing Chris’s career path into AI and national security (1:55), then turn to his views on recent developments, including reports about a Chinese EUV prototype (11:07). We spend the rest of the episode rating common arguments against AI export controls as fact, fiction, or somewhere in between (40:25).
Chris is a Senior Fellow for China and Emerging Technologies at the Council on Foreign Relations and a leading expert on U.S.-China AI competition. Before joining CFR, he served as a career government official for over a decade, including as Deputy Senior Director for Technology and National Security at the National Security Council (NSC) from 2022 to 2024. Links to some of Chris' recent work, as discussed in the podcast, are included below.
China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and U.S. Export Controls Should Remain
Testimony on Strengthening Export Controls on Semiconductor Manufacturing Equipment
Since 2023, a series of global AI summits has brought together world leaders to advance international dialogue and cooperation on artificial intelligence. Building on this momentum, Prime Minister Narendra Modi announced the India AI Impact Summit, which will take place in New Delhi in February 2026. As the first summit in the series to be hosted in a Global South country, the AI Impact Summit aims to amplify Global South perspectives and advance concrete action to address both the opportunities and risks of AI.
On December 8, 2025, the CSIS Wadhwani AI Center hosted S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology (MeitY), for a livestreamed fireside chat with Wadhwani AI Center Senior Adviser Gregory C. Allen. Secretary Krishnan, who leads India’s national AI strategy, outlined India’s policy priorities and shared insights into the goals and global aspirations shaping the upcoming AI Impact Summit. He also offered a comprehensive look at the central role MeitY plays in driving innovation across India’s AI ecosystem.
Secretary Krishnan brings more than 35 years of experience in public service, having joined the Indian Administrative Service in 1989. Prior to his current role, he served as the Additional Chief Secretary of the Industries, Investment Promotion and Commerce Department in the Government of Tamil Nadu. He has also served as Senior Advisor in the Office of the Executive Director for India, Sri Lanka, Bangladesh, and Bhutan at the International Monetary Fund, and has represented India in the G20 Expert Groups on International Financial Architecture and Global Financial Safety Nets. Secretary Krishnan holds a bachelor’s degree from St. Stephen’s College in Delhi.
In this episode, we unpack President Trump’s new executive order targeting state AI laws, including how the final version compares to an earlier draft (1:26), and the legal and political challenges it is likely to face (14:46). We then discuss recent Reuters reporting on Meta’s reliance on scam-driven ad revenue (22:12) and what the social media experience suggests about the risks of failing to regulate AI (45:21).
In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29).
In this episode, we start by discussing Greg’s trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign” using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).
In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00).
Their paper is available here.
One of the most common questions we get from listeners is how to build a successful career in AI policy—so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52).
In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31).
In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32).
Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.
In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53).
Harry Holzer recently co-authored a white paper titled "Proactively Developing & Assisting the Workforce in the Age of AI," which is available here.
In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47).
In this episode, we’re joined by Joseph Majkut, Director of CSIS’s Energy Security and Climate Change Program, to take an in-depth look at energy’s role in AI. We explore the current state of the U.S. electrical grid (11:34), bottlenecks in the AI data center buildout (43:45), how U.S. energy efforts compare internationally (1:16:06), and more.
Joseph has co-authored three reports on AI and energy: AI for the Grid: Opportunities, Risks, and Safeguards (September 2025), The Electricity Supply Bottleneck on U.S. AI Dominance (March 2025), and The AI Power Surge: Growth Scenarios for GenAI Datacenters Through 2030 (March 2025).
In this episode, we discuss how today’s massive AI infrastructure investments compare to the Manhattan Project (00:33), China’s reported ban on Nvidia chips and its implications for export control policy (13:41), Anthropic’s $1.5 billion copyright settlement with authors (33:49), and recent multibillion-dollar AI investments by Nvidia and ASML (44:42).
In this episode, we discuss China's focus on AI adoption (00:58), the underlying factors driving investor enthusiasm (14:51), and the national security implications of China's booming AI industry (31:47).
In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).
In this episode, we unpack the Trump administration’s $8.9 billion deal to acquire a 9.9% stake in Intel, examining the underlying logic, financial terms, and political reactions from across the spectrum (00:33). We then cover Nvidia’s sudden halt in H20 chip production for China, its plans for a Blackwell alternative, and what Beijing’s self-sufficiency push means for the AI race (28:18).