The Sunday Show
Author: Tech Policy Press
Subscribed: 68 | Played: 2,562
© 2024 Tech Policy Press
Description
Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. The Sunday Show is its podcast.
You can find us at https://techpolicy.press/, where you can join the newsletter.
302 Episodes
Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. Her new book, Supremacy: AI, ChatGPT, and the Race That Will Change the World, tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of Sam Altman and Demis Hassabis and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.
These days, if you see someone with their head bowed, you are much more likely watching them stare into their phone than pray. But from digital rituals to the promises of abundance from Silicon Valley elites, has technology become the world's most powerful religion? What kinds of promises of salvation and abundance are its leaders making? And how can thinking about technology in this way help us reform our approach to it, particularly if we aim to restore humanist principles? Today's guest is Greg Epstein, who drew on lessons from his vocation as a humanist chaplain at Harvard and MIT to write a new book, just out from MIT Press, called Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation.
Today’s guest is Boston University School of Law professor Woodrow Hartzog, who, with the George Washington University Law School's Daniel Solove, is one of the authors of a recent paper that explored the novelist Franz Kafka’s worldview as a vehicle to arrive at key insights for regulating privacy in the age of AI. The conversation explores why privacy-as-control models, which rely on individual consent and choice, fail in the digital age, especially with the advent of AI systems. Hartzog argues for a "societal structure model" of privacy protection that would impose substantive obligations on companies and set baseline protections for everyone rather than relying on individual consent. Kafka's work is a lens to examine how people often make choices against their own interests when confronted with complex technological systems, and how AI is amplifying these existing privacy and control problems.
On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? To help answer these questions, Justin Hendrix spoke with three experts: Katie Harbath, CEO of Anchor Change and chief global affairs officer at Duco Experts; Nicole Schneidman, technology policy strategist at Protect Democracy; and Dean Jackson, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.
If you’re trying to game out the potential role of technology in the post-election period in the US, there is a significant "X" factor. When he purchased the social media platform formerly known as Twitter, “Elon Musk didn’t just get a social network—he got a political weapon.” So says today’s guest, a journalist who is one of the keenest observers of phenomena on the internet: Charlie Warzel, a staff writer at The Atlantic and the author of its newsletter Galaxy Brain. Justin Hendrix caught up with him about what to make of Musk and the broader health of the information environment.
In this episode, Justin Hendrix speaks with three researchers who recently published projects looking at the intersection of generative AI with elections around the world, including: Samuel Woolley, Dietrich Chair of Disinformation Studies at the University of Pittsburgh and one of the authors of a set of studies titled Generative Artificial Intelligence and Elections; Lindsay Gorman, Managing Director and Senior Fellow of the Technology Program at the German Marshall Fund of the United States and an author of a report and online resource titled Spitting Images: Tracking Deepfakes and Generative AI in Elections; and Scott Babwah Brennen, Director of the NYU Center on Technology Policy and one of the authors of a deep dive into the literature on the effectiveness of AI content labels and another on the efficacy of recent US state legislation requiring labels on political ads that use generative AI.
Martin Husovec is an associate law professor at the London School of Economics and Political Science (LSE). He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property, and freedom of expression. He's the author of Principles of the Digital Services Act, just out from Oxford University Press. Justin Hendrix spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' are playing out, and whether the DSA is up to the ambitious goals policymakers set for it.
In her new book, Fearless Speech: Breaking Free from the First Amendment, Dr. Mary Anne Franks challenges First Amendment orthodoxy and critiques “reckless speech,” which endangers vulnerable groups and protects corporate interests, in order to promote “fearless speech,” which seeks to advance equality and democracy.
Along with Sam Woolley, Mariana Olaizola Rosenblat and Inga K. Trauthig are the authors of a new report from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." Justin Hendrix caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.
A lot of folks frustrated with major social media platforms are migrating to alternatives like Mastodon and Bluesky, which operate on decentralized protocols. This summer, Erin Kissane and Darius Kazemi released a report on the governance of fediverse microblogging servers and the moderation practices of the people who run them. Justin Hendrix caught up with Erin Kissane about their findings, including the emerging forms of diplomacy between different server operators, the types of political and policy decisions moderators must make, and the need for more resources and tooling to enable better governance across the fediverse.
The results in this year’s installment of the Freedom House Freedom on the Net report generally follow the same distressing trajectory as prior reports, marking a 14th consecutive year of decline in internet freedom around the world. But in this year of elections, the Freedom House analysts also identified a set of concerning phenomena related to this most fundamental act of democracy and how governments are asserting themselves, for better or worse. Justin Hendrix spoke to report authors Allie Funk and Kian Vesteinsson about their findings.
In this episode, we're crashing a funeral... for CrowdTangle, a piece of software that allowed journalists and independent researchers to get insights into social media. Not our usual material, but this particular loss marks a huge blow in the ongoing fight for public access to data from the platforms, and underscores why we need to continue to fight for transparency. And the folks convened by the Knight-Georgetown Institute and the Coalition for Independent Technology Research refused to let it go unmarked.
Barry Lynn is the executive director of the Open Markets Institute in Washington DC and the author of this month's cover essay in Harper's titled "The Antitrust Revolution: Liberal democracy’s last stand against Big Tech." Justin Hendrix spoke to him about his essay, about the remedy framework proposed by the US Department of Justice following the ruling in the Google search antitrust trial, and about what to anticipate for the antitrust movement following the 2024 US presidential election.
Today’s guest is Sam Jeffers, cofounder and executive director of Who Targets Me. Jeffers has spent several years building a suite of capabilities to make political advertising more transparent, including tools for individuals and data and support for academics, researchers and journalists. His organization also advocates for better policy from platforms, regulators and governments. (You can download the Who Targets Me browser extension to contribute your data to the project.)
One of the most significant concepts in Europe’s Digital Services Act is that of “systemic risk,” which relates to the spread of illegal content, or content that might have foreseeable negative effects on the exercise of fundamental rights or on civic discourse, electoral processes, public security and so forth. The DSA requires companies to carry out risk assessments to detail whether they are adequately addressing such risks on their platforms. What exactly amounts to systemic risk and how exactly to go about assessing it is still up in the air in these early days of the DSA’s implementation. In today’s episode, Tech Policy Press Staff Writer Gabby Miller speaks with three experts involved in conversations to try and get to best practices: Jason Pielemeier, Executive Director of the Global Network Initiative; David Sullivan, Executive Director of the Digital Trust & Safety Partnership; and Chantal Joris, Senior Legal Officer at Article 19.
Last week, Wall Street Journal technology reporter Jeff Horwitz first reported on details of an unredacted version of a complaint against Snap brought by New Mexico Attorney General Raúl Torrez. Tech Policy Press editor Justin Hendrix spoke to Horwitz about its details, and the questions it leaves unanswered.
Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, published September 24 by Princeton University Press. In this conversation, Justin Hendrix focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"
The Institute for Strategic Dialogue (ISD) recently assessed social media platforms’ policies, public commitments, and product interventions related to election integrity across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising, and state-affiliated media. Justin Hendrix spoke to two of the report's authors: ISD's Director of Technology & Society, Isabelle Frances-Wright, and its Senior US Digital Policy Manager, Ellen Jacobs. ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.
Gary Marcus writes that the companies developing artificial intelligence systems want the citizens of democracies “to absorb all the negative externalities” that might arise from their products, “such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones—without them paying a nickel.” And, he says, we need to fight back. His new book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us, published by MIT Press on September 17, 2024.
Marietje Schaake is the author of The Tech Coup: How to Save Democracy from Silicon Valley. Dr. Alondra Nelson, a Professor at the Institute for Advanced Study, who served as deputy assistant to President Joe Biden and Acting Director of the White House Office of Science and Technology Policy (OSTP), calls Schaake “a twenty-first century Tocqueville” who “looks at Silicon Valley and its impact on democratic society with an outsider’s gimlet eye.” Nobel prize winner Maria Ressa says Schaake's new book “exposes the unchecked, corrosive power that is undermining democracy, human rights, and our global order.” And author and activist Cory Doctorow says the book offers “A thorough and necessary explanation of the parade of policy failures that enshittified the internet—and a sound prescription for its disenshittification.” Justin Hendrix spoke to Schaake just before the book's publication on September 24, 2024.
Do you invite any guests with vaguely right-leaning views, or is this just an echo chamber?
Far right? Oh Lord, a silly woke pod. Bye.
The problem is when moderation is pushed across all sites; it's censorship.
When you start censoring, you have lost the argument.
Whatever happened to the left? Moderation/censorship.