The Sunday Show

Author: Tech Policy Press

Subscribed: 61 · Played: 2,229

Description

Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. The Sunday Show is its podcast.

You can find us at https://techpolicy.press/, where you can join the newsletter.
253 Episodes
As we documented in Tech Policy Press, when the US Senate AI working group released its policy roadmap on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed-door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including:
Sarah West, co-executive director of the AI Now Institute;
Nasser Eledroos, policy lead on technology at Color of Change;
Paramita Shah, executive director of Just Futures Law; and
Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab.
A conversation with Marwa Fatafta, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights, connectivity, and censorship issues for the past 15 years. Along with other groups, Access Now has engaged Meta in recent months over what it says is the “systematic censorship of Palestinian voices” amidst the Israel-Hamas war in Gaza.
One tech journalist whose byline always draws me in is Chris Stokel-Walker. He writes for multiple publications including The New York Times, The Washington Post, The Economist, Wired, Fast Company, and New Scientist. Now, he’s got a new book out: How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.
On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, Justin Hendrix spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all:
Alejandra Montoya-Boyer, Senior Director at the Center for Civil Rights & Tech at the Leadership Conference on Civil and Human Rights; and
Claudia Ruiz, Senior Civil Rights Policy Analyst at UnidosUS.
Last October, Dr. Jasmine McNealy, an associate professor at the University of Florida, a Senior Fellow in Tech Policy with the Mozilla Foundation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, wrote in Tech Policy Press about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” The piece was a preview of sorts to a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. Justin Hendrix attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there:
Michaela Henley, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment;
Dr. Dominique Harrison, founding principal of Equity Innovation Ventures; and
Dr. Theodora Dryer, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.
The Hippocratic oath, named for a Greek physician who lived roughly 2,500 years ago and whom some call the father of modern medicine, is one of the earliest examples of an expression of professional ethics. It is a symbol of a profession that has built in a number of protections for patient interests, with ethical frameworks and requirements that seek to assure they are maintained. Today’s guest is Chinmayi Sharma, an Associate Professor at Fordham Law School. Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a substantial paper on the 'why' and the 'how' of her proposal.
One topic we come back to again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way. But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it are crossing the line into hyping the effects of disinformation, which they say only helps propagandists and diminishes trust in society. To learn more, Justin Hendrix spoke with Gavin Wilde, Thomas Rid, and Olga Belogolova, who with Lee Foster are the authors of an essay in the publication Foreign Affairs titled "Don’t Hype the Disinformation Threat: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It."
In an introduction to a special issue of the journal First Monday on topics related to AI and power, Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more. To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.
Last week, President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like? To learn more, Justin Hendrix spoke with Anupam Chander, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; Rose Jackson, the director of the Democracy and Tech Initiative at the Atlantic Council; and Justin Sherman, CEO of Global Cyber Strategies and adjunct professor at Duke University.
The House Energy and Commerce Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on these issues than it has been in recent memory. One of the witnesses at the hearing was David Brody, who is managing attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. Justin Hendrix caught up with Brody the day after the hearing to discuss the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color.
This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. In the first, we’ll hear from Sayash Kapoor, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled On the Societal Impact of Open Foundation Models. And in the second, we’ll hear from Politico Chief Technology Correspondent Mark Scott about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by interviewing over 50 stakeholders, including regulators, researchers, and platform executives.
Last week, a federal judge granted a motion to dismiss and strike a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called The Center for Countering Digital Hate (CCDH).  To learn more about why the ruling matters, Justin Hendrix spoke to Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University; Imran Ahmed, the CEO and founder of the Center for Countering Digital Hate; and Roberta Kaplan, a partner at the law firm of Kaplan, Hecker, and Fink, which represented CCDH in this matter. 
On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role technology can play in it. Nathan Schneider, an assistant professor of media studies at the University of Colorado Boulder, is the author of Governable Spaces: Democratic Design for Online Life.
Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights. Justin Hendrix speaks with Lucy Purdon, the founder of Courage Everywhere and author of a recent report for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."
On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.
On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.
At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University and the director of the GW Law School's Multiracial Democracy Project. He's joined by three other experts, including:
Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s-era media, racialized conspiracism, and politics in Chicago;
Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in Black communities and about Black communities; and
Kathryn Peters, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.
On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the discussion.
This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong.To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could be doing to preserve freedoms for the people of Hong Kong, Justin Hendrix spoke to three experts who are following developments there closely:Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on ChinaLokman Tsui, a fellow at Citizen Lab at University of Toronto, andMichael Caster, the Asia Digital Program Manager with Article 19.
If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion. Today’s guests are Jon Bateman and Dean Jackson. The two have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.
Comments (5)

C muir

do you invite any guests with vaguely right leaning views? or is this just a echo chamber

Mar 23rd

C muir

far right? oh Lord a silly woke pod bye

Mar 23rd

C muir

the problem is when moderation is pushed across all sites. it's censorship

Mar 23rd

C muir

when you start censoring you have lost the argument

Mar 23rd

C muir

whatever happened to the left? moderation/censorship

Mar 23rd