The Tech Policy Press Podcast

Author: Tech Policy Press

Subscribed: 99 | Played: 4,513

Description

Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy.

You can find us at https://techpolicy.press/, where you can join the newsletter.
393 Episodes
On Friday, the European Commission fined Elon Musk’s X €120 million for breaching the Digital Services Act, delivering the first-ever non-compliance decision under the European Union’s flagship tech regulation. By Saturday, Elon Musk was calling for no less than the abolition of the EU. To discuss the enforcement action, the politics surrounding it, and a variety of other issues related to digital regulation in Europe, Justin Hendrix spoke to Joris van Hoboken, a professor at the Institute for Information Law (IViR) at the University of Amsterdam, and part of the core team of the Digital Services Act (DSA) Observatory.
On this podcast, for years we’ve discussed issues such as conspiracy theories, mis- and disinformation, polarization, and the ways in which the design and incentives of today’s technology platforms exacerbate them. Today’s guest is Calum Lister Matheson, associate professor and chair of the Department of Communication at the University of Pittsburgh and a faculty member of the Pittsburgh Psychoanalytic Center. He’s the author of Post-Weird: Fragmentation, Community, and the Decline of the Mainstream, a new book from Rutgers University Press that applies a different lens to the question as he searches for insights into the seemingly inexplicable behaviors of communities such as serpent handlers, pro-anorexia groups, believers in pseudoscience, and conspiracy theorists who deny the reality of gun violence in schools.
The past few years have seen a great deal of introspection about a professional field that has come to be known as 'trust and safety,' made up of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse. Joining Tech Policy Press contributing editor Dean Jackson to discuss the evolution of trust and safety—not coincidentally, the title of their forthcoming article in the Emory Law Journal—are professors of law Danielle Keats Citron and Ari Ezra Waldman. Also joining the conversation is Jeff Allen, the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals.
This week, the European Commission unveiled a sweeping plan to overhaul how the EU enforces its digital and privacy rules as part of a ‘Digital Omnibus,’ aiming to ease compliance burdens and speed up implementation of the bloc’s landmark laws. Branded as a “simplification” initiative, the omnibus proposal touches core areas of EU tech regulation — notably the AI Act and the General Data Protection Regulation (GDPR). The Commission argues that this update is necessary to ensure practical implementation of the laws, but civil society organizations see the proposed reform as the “biggest rollback of digital fundamental rights in EU history.”

At the same time, leaders are talking loudly about digital sovereignty — including at last week’s summit in Berlin. But with the Omnibus appearing to weaken protections and tilt power toward large tech firms, what kind of sovereignty is actually being built?

Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to understand what the EU is trying to achieve:

Leevi Saari, EU Policy Fellow at AI Now Institute; and
Julia Smakman, Senior Researcher at the Ada Lovelace Institute.
In the latest episode in her special podcast series, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli talks about protecting privacy with Chris Gilliard. Gilliard is co-director of the Critical Internet Studies Institute and the author of Luxury Surveillance, a forthcoming book from MIT Press.
To discuss the past, present and future of information integrity work, Tech Policy Press contributing editor Dean Jackson spoke to American University Center for Security, Innovation and New Technology (CSINT) nonresident fellow Adam Fivenson and assistant professor and CSINT director Samantha Bradshaw.
This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. Justin Hendrix is joined by:

Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research and coauthor of a recent New York Times opinion on the possibility of an AI bubble;
Sarah West, co-director of the AI Now Institute and coauthor of a Wall Street Journal opinion, "You May Already Be Bailing Out the AI Business"; and
Brian Merchant, author of the newsletter Blood in the Machine, a journalist in residence at the AI Now Institute, and author of a recent piece in Wired on signals that suggest a bubble.
This episode was recorded in Barcelona at this year’s Mozilla Festival. One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. It coincided with the launch of the Knight-Georgetown Institute’s report, “Better Access: Data for the Common Good,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism. In a gazebo near the Mozilla Festival mainstage, Justin Hendrix hosted a podcast discussion with three people working on questions related to data access and advocating for independent technology research:

Peter Chapman, associate director of the Knight-Georgetown Institute;
Brandi Geurkink, executive director of the Coalition for Independent Tech Research and a former campaigner and fellow at Mozilla; and
LK Seiling, a researcher at the Weizenbaum Institute in Berlin and coordinator of the DSA40 Data Access Collaboratory.

Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.
For her special series of podcasts, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to artist Mimi Ọnụọha, whose work "questions and exposes the contradictory logics of technological progress." The discussion ranged across changing trends in nomenclature of data and artificial intelligence, the role of art in bearing witness to authoritarianism, the interventions and projects that Ọnụọha has created about the datafication of society, and why artists and policy practitioners should work more closely together to build a more just and equitable future.
Ryan Calo is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the UW Tech Policy Lab and a co-founder of the UW Center for an Informed Public. In his new book, Law and Technology: A Methodical Approach, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."
Instagram has spent years making promises about how it intends to protect minors on its platform. To explore its past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:

Laura Edelson, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy; and
Arturo Béjar, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.

Edelson and Béjar are two of the authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The report is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.
Mallory Knodel, executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and Burcu Kilic, a senior fellow at Canada’s Center for International Governance Innovation (CIGI), are the authors of a recent post on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet’ has been hollowed out by decades of policy choices and corporate consolidation. Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. To push back, they call for coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.
It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it’s about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.

One such out-of-court dispute settlement body is called Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation ‘supreme court’ created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: in over three-quarters of the 1,500 disputes the Centre has ruled on, the platform’s original decision was overturned, either because it was incorrect or because the platform didn’t provide the content for review at all.

Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:

Thomas Hughes, the CEO of Appeals Centre Europe; and
Paddy Leerssen, a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.
Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding. Today’s guest is John Wihbey, an associate professor of media innovation at Northeastern University and the author of a new book titled Governing Babel: The Debate Over Social Media Platforms and Free Speech—And What Comes Next, which asks how we can create the space to imagine a different information environment, one that promotes democracy and consensus rather than division and violence. The book is out October 7 from MIT Press.
Across the United States, dozens of state governments have attempted to establish their own efficiency initiatives, some molded in the image of the federal Department of Government Efficiency (DOGE). A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," according to a recent analysis by Maddy Dwyer, a policy analyst at the Center for Democracy and Technology.

To learn more about what these efforts look like and to consider the broader question of AI’s use in government, Justin Hendrix spoke to Dwyer and Ben Green, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has written about DOGE and the use of AI in government for Tech Policy Press.
With two new bills headed to the desk of Governor Gavin Newsom (D), California could soon pass the most significant guardrails for AI companions in the nation, sparking a lobbying brawl between consumer advocates and tech industry groups.

In a recent report for Tech Policy Press, associate editor Cristiano Lima-Strong detailed how groups are pouring tens if not hundreds of thousands of dollars into the lobbying fight, which has gained steam amid mounting scrutiny of the products. Tech Policy Press CEO and Editor Justin Hendrix spoke to Cristiano about the findings, and what the state's legislative battle could mean for AI regulation in the United States. This reporting was supported by a grant from the Tarbell Center for AI Journalism.
From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis.

The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed?

Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. Justin Hendrix was joined by three expert guests:

Alix Dunn, founder and CEO of The Maybe;
Tamara Kneese, director of Data & Society's Climate, Technology, and Justice Program; and
Holly Alpine, co-founder of the Enabled Emissions Campaign.
Charlie Kirk, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was shot at an event at Utah Valley University. Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube, and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. To learn more about these phenomena, Justin Hendrix spoke to Wired senior correspondent Lauren Goode, who is covering this story.
Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. To learn more about these debates—and the efforts to shape the industry’s future—Justin Hendrix spoke with two activists: one working at the national level, and another organizing locally in their own community.

Vivek Bharathan is a member of the No Desert Data Center Coalition in Tucson, Arizona.
Steven Renderos is executive director of MediaJustice, an advocacy organization that just released a report titled The People Say No: Resisting Data Centers in the South.
For the latest episode in her series of podcast discussions, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.
Comments (5)

C muir

do you invite any guests with vaguely right leaning views? or is this just an echo chamber

Mar 23rd

C muir

far right? oh Lord a silly woke pod bye

Mar 23rd

C muir

the problem is when moderation is pushed across all sites. it's censorship

Mar 23rd

C muir

when you start censoring you have lost the argument

Mar 23rd

C muir

whatever happened to the left? moderation/censorship

Mar 23rd