The Tech Policy Press Podcast

Author: Tech Policy Press
© Copyright 2025 Tech Policy Press
Description
Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy.
You can find us at https://techpolicy.press/, where you can join the newsletter.
377 Episodes
From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis. The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed? Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. Justin Hendrix was joined by three expert guests:
Alix Dunn, founder and CEO of The Maybe
Tamara Kneese, director of Data & Society's Climate, Technology, and Justice Program
Holly Alpine, co-founder of the Enabled Emissions Campaign
Charlie Kirk, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was shot at an event at Utah Valley University. Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube, and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. To learn more about these phenomena, Justin Hendrix spoke to Wired senior correspondent Lauren Goode, who is covering this story.
Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. To learn more about these debates—and the efforts to shape the industry’s future—Justin Hendrix spoke with two activists: one working at the national level, and another organizing locally in their own community.
Vivek Bharathan is a member of the No Desert Data Center Coalition in Tucson, Arizona.
Steven Renderos is executive director of MediaJustice, an advocacy organization that just released a report titled The People Say No: Resisting Data Centers in the South.
For the latest episode in her series of podcast discussions, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.
Today’s guest is Petter Törnberg, who with Justus Uitermark is one of the authors of a new book, titled Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity, that sets out to address the “entanglement of epistemology, technology, and politics in digital modernity,” and what studying that entanglement can tell us about the workings of power. The book is part of a series of research monographs that intend to encourage social scientists to embrace a “complex systems approach to studying the social world.”
Last year, Colorado signed a first-of-its-kind artificial intelligence measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools. But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. The move likely tees up another round of contentious talks over one of the nation’s most sprawling AI statutes. This week, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two local reporters who have been closely tracking the saga for the Colorado Sun: political reporter and editor Jesse Paul and politics and policy reporter Taylor Dolven.
On this podcast, we’ve come back again and again to questions around mis- and disinformation, propaganda, rumors, and the role that digital platforms play in anti-democratic phenomena. In a new book published this summer by Oxford University Press called Connective Action and the Rise of the Far-Right: Platforms, Politics, and the Crisis of Democracy, a group of scholars from varied research traditions set out to find new ways to marry more traditional political science with computational social science approaches to understand the phenomenon of democratic backsliding and to bring some clarity to the present moment, particularly in the United States. Justin Hendrix had the chance to speak to two of the volume’s editors and two of its authors:
Steven Livingston, a professor and founding director of the Institute for Data Democracy and Politics at the George Washington University;
Michael Miller, managing director of the Moynihan Center at the City College of New York;
Kate Starbird, a professor at the University of Washington and a co-founder of the Center for an Informed Public; and
Josephine Lukito, assistant professor at the University of Texas at Austin and senior faculty research associate at the Center for Media Engagement.
In the latest installment in her series of podcasts called Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli speaks with Dr. Jasmine McNealy, an attorney, critical public interest technologist, and professor in the Department of Media Production, Management, and Technology at the University of Florida, and Naomi Nix, a staff writer for The Washington Post, where she reports on technology and social media companies. They discuss how they found themselves on the path through journalism and into a focus on tech and tech policy, the distinctions between truth and facts and whether there has ever been such a thing as a singular truth, how communities of color have historically seen and filled the gaps in mainstream media coverage, the rise of news influencers, and how journalists can regain the trust of the public.
Today’s guest, journalist Rahul Bhatia, has written a book that is part journalistic account, part history, and part memoir titled The New India: The Unmaking of the World's Largest Democracy. Reviewing the book in The Guardian, Salil Tripathi writes that “Bhatia’s remarkable book is an absorbing account of India’s transformation from the world’s largest democracy to something more like the world’s most populous country that regularly holds elections.” Bhatia considers the role of technology, including taking a close look at Aadhaar—India’s national biometric identification program—in order to consider the role it plays in the modern state and what the motivations behind it reveal.
On Thursday, Reuters tech reporter Jeff Horwitz, who broke the story of the Facebook Papers back in 2021 when he was at the Wall Street Journal, published two pieces, both detailing new revelations about Meta’s approach to AI chatbots. In a Reuters special report, Horwitz tells the story of a man with a cognitive impairment who died while attempting to travel to meet a chatbot character he believed was real. And in a related article, Horwitz reports on an internal Meta policy document that appears to endorse its chatbots engaging with children “in conversations that are romantic or sensual,” as well as other concerning behaviors. Earlier today, Justin Hendrix caught up with Horwitz about the reports and what they tell us about Silicon Valley’s no-holds-barred pursuit of AI, even at the expense of the safety of vulnerable people and children.
Daniel J. Solove is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School. The project of his latest book, On Privacy and Technology, is to synthesize twenty-five years of thinking about privacy into a “succinct and accessible” volume and to help the reader understand “the relationship between law, technology, and privacy” in a rapidly changing world. Justin Hendrix spoke to him about the book and how recent events in the United States relate to his areas of concern.
Through To Thriving is a special series of podcast episodes hosted by Tech Policy Press fellow Anika Collier Navaroli. With her guests, Anika is imagining futures beyond our current moment. For this episode, she spoke with Nora Benavidez, senior counsel and director of digital justice and civil rights at the nonprofit Free Press. Anika and Nora discussed the past and present state of platform accountability advocacy, the steps of building a campaign, the possibility of forming a creative agency to support advocates, and what to make of so-called “woke AI.” This episode and conversation about advocating for change is dedicated to the memory and life of our former colleague and tech accountability researcher and advocate Brandi Collins-Dexter.
On Saturday, July 26, three days after the Trump administration published its AI action plan, China’s foreign ministry released that country’s action plan for global AI governance. As the US pursues “global dominance,” China is communicating a different posture. What should we know about China’s plan, and how does it contrast with the US plan? What's at stake in the competition between the two superpowers? To answer these questions, Justin Hendrix reached out to a close observer of China's tech policy. Graham Webster is a lecturer and research scholar at Stanford University in the Program on Geopolitics, Technology, and Governance, and he is the Editor-in-Chief of the DigiChina Project, a "collaborative effort to analyze and understand Chinese technology policy developments through direct engagement with primary sources, providing analysis, context, translation, and expert opinion." Webster attended the World Artificial Intelligence Conference in Shanghai.
Yesterday, United States President Donald Trump took to the stage at the "Winning the AI Race Summit" to promote the administration's AI Action Plan. Shortly after it was published, Tech Policy Press editor Justin Hendrix sat down with Sarah Myers West, the co-director of the AI Now Institute; Maia Woluchem, the program director of the Trustworthy Infrastructures team at Data and Society; and Ryan Gerety, the director of the Athena Coalition, to discuss the plan and what it portends for the future.
This weekend, the Americans with Disabilities Act (ADA) turns 35. Signed into law on July 26, 1990, the law provides broad anti-discrimination protections for people with disabilities in the US, and has impacted how people with disabilities interact with various technologies. To discuss how the law has aged and what the fight for equity and inclusion looks like going forward, Tech Policy Press fellow Ariana Aboulafia spoke with three leaders working at the intersection of disability and technology:
Maitreya Shah is the tech policy director at the American Association of People with Disabilities.
Blake Reid is a professor at the University of Colorado.
Cynthia Bennett is a senior research scientist at Google.
Tech Policy Press fellow Anika Collier Navaroli is the host of Through to Thriving, a special podcast series where she talks with technology policy practitioners to explore futures beyond our current moment. For this episode, Anika spoke with two experts on Trust & Safety about balance and resilience in a notoriously difficult field. Alice Hunsberger is the head of Trust & Safety at Musubi, a firm that sells AI content moderation solutions. Jerrel Peterson is the director of content policy at Spotify. Hunsberger and Peterson discussed how they broke into the field, their observations about the current state of the industry, how to better the working relationship between civil society and industry, and their advice for the next generation of practitioners.
Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act’s implementation timeline, with some calling to “stop the clock” on the AI Act’s rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking. To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts:
Scott Babwah Brennen, director of NYU’s Center on Technology Policy, and Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
Helen Nissenbaum, a philosopher, is a professor at Cornell Tech and in the Information Science Department at Cornell University. She is director of the Digital Life Initiative at Cornell Tech, which was launched in 2017 to explore societal perspectives surrounding the development and application of digital technology. Her work on contextual privacy, trust, accountability, security, and values in technology design led her to work with collaborators on projects such as TrackMeNot, a tool to mask a user's real search history by sending search engines a cloud of ‘ghost’ queries, and AdNauseam, a browser extension that obfuscates a user’s browsing data to protect from tracking by advertising networks. Building on such projects, in 2015, she coauthored a book with Finn Brunton called Obfuscation: A User’s Guide for Privacy and Protest. The book detailed ideas on mitigating and defeating digital surveillance. With concerns about surveillance surging in a time of rising authoritarianism and the advent of powerful artificial intelligence technologies, Justin Hendrix reached out to Professor Nissenbaum to find out what she’s thinking in this moment, and how her ideas can be applied to present-day phenomena.
At Tech Policy Press we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. Justin Hendrix spoke to its primary author, ECNL senior legal manager Marlena Wisniak.