The Tech Policy Press Podcast

Author: Tech Policy Press


Description

Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy.

You can find us at https://techpolicy.press/, where you can join the newsletter.
387 Episodes
This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. Justin Hendrix is joined by:
Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research and coauthor of a recent New York Times opinion on the possibility of an AI bubble;
Sarah West, co-director of the AI Now Institute and coauthor of a Wall Street Journal opinion, "You May Already Be Bailing Out the AI Business"; and
Brian Merchant, author of the newsletter Blood in the Machine, a journalist in residence at the AI Now Institute, and author of a recent piece in Wired on signals that suggest a bubble.
This episode was recorded in Barcelona at this year’s Mozilla Festival. One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. It coincided with the launch of the Knight-Georgetown Institute’s report, “Better Access: Data for the Common Good,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism. In a gazebo near the Mozilla Festival mainstage, Justin Hendrix hosted a podcast discussion with three people working on questions related to data access and advocating for independent technology research:
Peter Chapman, associate director of the Knight-Georgetown Institute;
Brandi Geurkink, executive director of the Coalition for Independent Tech Research and a former campaigner and fellow at Mozilla; and
LK Seiling, a researcher at the Weizenbaum Institute in Berlin and coordinator of the DSA40 Data Access Collaboratory.
Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.
For her special series of podcasts, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to artist Mimi Ọnụọha, whose work "questions and exposes the contradictory logics of technological progress." The discussion ranged across changing trends in the nomenclature of data and artificial intelligence, the role of art in bearing witness to authoritarianism, the interventions and projects that Ọnụọha has created about the datafication of society, and why artists and policy practitioners should work more closely together to build a more just and equitable future.
Ryan Calo is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the UW Tech Policy Lab and a co-founder of the UW Center for an Informed Public. In his new book, Law and Technology: A Methodical Approach, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."
Instagram has spent years making promises about how it intends to protect minors on its platform. To explore its past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:
Laura Edelson, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy; and
Arturo Béjar, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.
Edelson and Béjar are two of the authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The report is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.
Mallory Knodel, executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and Burcu Kilic, a senior fellow at Canada’s Centre for International Governance Innovation, or CIGI, are the authors of a recent post on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet’ has been hollowed out by decades of policy choices and corporate consolidation. Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. Pushing back, they argue, requires coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.
It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it's about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.
One such out-of-court dispute settlement body is called Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms’ original decisions were overturned, either because they were incorrect, or because the platform didn’t provide the content for review at all.
Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:
Thomas Hughes is the CEO of Appeals Centre Europe.
Paddy Leerssen is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.
Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding. Today’s guest is John Wihbey, an associate professor of media innovation at Northeastern University and the author of a new book, Governing Babel: The Debate Over Social Media Platforms and Free Speech—And What Comes Next, which tries to answer how we can create the space to imagine a different information environment, one that promotes democracy and consensus rather than division and violence. The book is out October 7 from MIT Press.
Across the United States, dozens of state governments have attempted to establish their own efficiency initiatives, some molded in the image of the federal Department of Government Efficiency (DOGE). A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," according to a recent analysis by Maddy Dwyer, a policy analyst at the Center for Democracy and Technology.
To learn more about what these efforts look like and to consider the broader question of AI’s use in government, Justin Hendrix spoke to Dwyer and Ben Green, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has written about DOGE and the use of AI in government for Tech Policy Press.
With two new bills headed to the desk of Governor Gavin Newsom (D), California could soon pass the most significant guardrails for AI companions in the nation, sparking a lobbying brawl between consumer advocates and tech industry groups.
In a recent report for Tech Policy Press, associate editor Cristiano Lima-Strong detailed how groups are pouring tens if not hundreds of thousands of dollars into the lobbying fight, which has gained steam amid mounting scrutiny of the products. Tech Policy Press CEO and Editor Justin Hendrix spoke to Cristiano about the findings, and what the state's legislative battle could mean for AI regulation in the United States. This reporting was supported by a grant from the Tarbell Center for AI Journalism.
From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis. The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed?
Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. Justin Hendrix was joined by three expert guests:
Alix Dunn, founder and CEO of The Maybe;
Tamara Kneese, director of Data & Society's Climate, Technology, and Justice Program; and
Holly Alpine, co-founder of the Enabled Emissions Campaign.
Charlie Kirk, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was shot at an event at Utah Valley University. Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube, and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. To learn more about these phenomena, Justin Hendrix spoke to Wired senior correspondent Lauren Goode, who is covering this story.
Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. To learn more about these debates—and the efforts to shape the industry’s future—Justin Hendrix spoke with two activists: one working at the national level, and another organizing locally in their own community.
Vivek Bharathan is a member of the No Desert Data Center Coalition in Tucson, Arizona.
Steven Renderos is executive director of MediaJustice, an advocacy organization that just released a report titled The People Say No: Resisting Data Centers in the South.
For the latest episode in her series of podcast discussions, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.
Seeing Like a Platform

2025-08-31 · 40:30

Today’s guest is Petter Törnberg, who, with Justus Uitermark, is one of the authors of a new book, titled Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity, that sets out to address the “entanglement of epistemology, technology, and politics in digital modernity,” and what studying that entanglement can tell us about the workings of power. The book is part of a series of research monographs that intend to encourage social scientists to embrace a “complex systems approach to studying the social world.”
Last year, Colorado signed a first-of-its-kind artificial intelligence measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools.
But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. The move likely tees up another round of contentious talks over one of the nation’s most sprawling AI statutes.
This week, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two local reporters who have been closely tracking the saga for the Colorado Sun: political reporter and editor Jesse Paul and politics and policy reporter Taylor Dolven.
On this podcast, we’ve come back again and again to questions around mis- and disinformation, propaganda, rumors, and the role that digital platforms play in anti-democratic phenomena. In a new book published this summer by Oxford University Press called Connective Action and the Rise of the Far-Right: Platforms, Politics, and the Crisis of Democracy, a group of scholars from varied research traditions set out to find new ways to marry more traditional political science with computational social science approaches to understand the phenomenon of democratic backsliding and to bring some clarity to the present moment, particularly in the United States. Justin Hendrix had the chance to speak to two of the volume’s editors and two of its authors:
Steven Livingston, a professor and founding director of the Institute for Data Democracy and Politics at the George Washington University;
Michael Miller, managing director of the Moynihan Center at the City College of New York;
Kate Starbird, a professor at the University of Washington and a co-founder of the Center for an Informed Public; and
Josephine Lukito, assistant professor at the University of Texas at Austin and senior faculty research associate at the Center for Media Engagement.
In the latest installment in her series of podcasts called Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli speaks with Dr. Jasmine McNealy, an attorney, critical public interest technologist, and professor in the Department of Media Production, Management, and Technology at the University of Florida; and Naomi Nix, a staff writer for The Washington Post, where she reports on technology and social media companies. They discuss how they found themselves on the path through journalism and into a focus on tech and tech policy, the distinctions between truth and facts and whether there has ever been such a thing as a singular truth, how communities of color have historically seen and filled the gaps in mainstream media coverage, the rise of news influencers, and how journalists can regain the trust of the public.
Today’s guest, journalist Rahul Bhatia, has written a book that is part journalistic account, part history, and part memoir titled The New India: The Unmaking of the World's Largest Democracy. Reviewing the book in The Guardian, Salil Tripathi writes that “Bhatia’s remarkable book is an absorbing account of India’s transformation from the world’s largest democracy to something more like the world’s most populous country that regularly holds elections.” Bhatia considers the role of technology, taking a close look at Aadhaar—India’s national biometric identification program—to examine the role it plays in the modern state and what the motivations behind it reveal.
On Thursday, Reuters tech reporter Jeff Horwitz, who broke the story of the Facebook Papers back in 2021 when he was at the Wall Street Journal, published two pieces, both detailing new revelations about Meta’s approach to AI chatbots. In a Reuters special report, Horwitz tells the story of a man with a cognitive impairment who died while attempting to travel to meet a chatbot character he believed was real. And in a related article, Horwitz reports on an internal Meta policy document that appears to endorse its chatbots engaging with children “in conversations that are romantic or sensual,” as well as other concerning behaviors. Earlier today, Justin Hendrix caught up with Horwitz about the reports and what they tell us about Silicon Valley’s no-holds-barred pursuit of AI, even at the expense of the safety of vulnerable people and children.
Comments (5)

C muir

do you invite any guests with vaguely right leaning views? or is this just a echo chamber

Mar 23rd

C muir

far right? oh Lord a silly woke pod bye

Mar 23rd

C muir

the problem is when moderation is pushed across all sites. it's censorship

Mar 23rd

C muir

when you start censoring you have lost the argument

Mar 23rd

C muir

whatever happened to the left? moderation/censorship

Mar 23rd