Arbiters of Truth

Author: Lawfare

Subscribed: 73 · Played: 848

Description


From Russian election interference, to scandals over privacy and invasive ad targeting, to presidential tweets: it’s all happening in online spaces governed by private social media companies. These conflicts are only going to grow in importance. In this series, also available in the Lawfare Podcast feed, Evelyn Douek and Quinta Jurecic will be talking to experts and practitioners about the major challenges our new information ecosystem poses for elections and democracy in general, and the dangers of finding cures that are worse than the disease.

The podcast takes its name from a comment by Facebook CEO Mark Zuckerberg right after the 2016 election, when Facebook was still reeling from accusations that it hadn’t done enough to clamp down on disinformation during the presidential campaign. Zuckerberg wrote that social media platforms “must be extremely cautious about becoming arbiters of truth ourselves.”

So if they don’t want to be the arbiters of truth ... who should be?



Hosted on Acast. See acast.com/privacy for more information.

152 Episodes
Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he will sign the bill if it comes to his desk, and this is the most serious attempt yet to ban the controversial social media app.

Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally.

Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok.
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem.

On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what’s sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real. To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs’ claims of government censorship. But what is the best way to determine what contacts and government requests are and aren't permissible?

If you’re interested in more, you can read the Knight Institute’s amicus brief in Murthy here and Knight’s series on jawboning—including Perault’s reflections—here.
In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok?

We’ve discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk about where the TikTok bans stand with Ramya and Mary-Rose, on this episode of Arbiters of Truth, our series on the information ecosystem.
In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook’s inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook’s products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code”—which dives even deeper into the documents he uncovered from within the company. He’s one of the most rigorous reporters covering Facebook, now known as Meta.

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill—and also someone with close knowledge of Meta from his own time working at the company. They discussed Jeff’s reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products.
Unless you’ve been living under a rock, you’ve probably heard a great deal over the last year about generative AI and how it’s going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare’s Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections?
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods from spreading about election integrity have become increasingly central to civic health. But the warning signs are flashing as we head into 2024. And platforms are arguably in a worse position to counter falsehoods today than they were in 2020. How could this be?

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously sat down with the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult—and what it means for 2024.
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we’re discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it’s going to kill us all—so we’re glad to be able to share an interview that hopefully cuts through some of that noise.

Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh concerns about possible future existential risks posed by AI against the immediate potential downsides of AI in our everyday lives, and what kind of regulation she’d like to see in this space. If you’re interested in reading further, Janet mentions this paper from Data and Society on “Democratizing AI” in the course of the conversation.
How much influence do social media platforms have on American politics and society? It’s a tough question for researchers to answer—not just because it’s so big, but also because platforms rarely if ever provide all the data that would be needed to address the problem. A new batch of papers released in the journals Science and Nature marks the latest attempt to tackle this question, with access to data provided by Facebook’s parent company Meta. The 2020 Facebook & Instagram Research Election Study, a partnership between Meta researchers and outside academics, studied the platforms’ impact on the 2020 election—and uncovered some nuanced findings, suggesting that these impacts might be less than you’d expect.

Today on Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic are joined by the project’s co-leaders, Talia Stroud of the University of Texas at Austin and Joshua A. Tucker of NYU. They discussed their findings, what it was like to work with Meta, and whether or not this is a model for independent academic research on platforms going forward.

(If you’re interested in more on the project, you can find links to the papers and an overview of the findings here, and an FAQ, provided by Tucker and Stroud, here.)
Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand “the original sin of the internet.”

For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the cofounder of Cinder, a software platform for the kind of trust and safety work we describe here, and he was formerly a policy director at Meta, where he led the company’s work on dangerous individuals and organizations.
Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content.

On this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they’re skeptical that its protections apply to generative AI.

Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed.
In 2018, news broke that Facebook had allowed third-party developers—including the controversial data analytics firm Cambridge Analytica—to obtain large quantities of user data in ways that users probably didn’t anticipate. The fallout led to a controversy over whether Cambridge Analytica had in some way swung the 2016 election for Trump (spoiler: it almost certainly didn’t), but it also generated a $5 billion fine imposed on Facebook by the FTC for violating users’ privacy. Along with that record-breaking fine, the FTC also imposed a number of requirements on Facebook to improve its approach to privacy. It’s been four years since that settlement, and Facebook is now Meta. So how much has really changed within the company? For this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic interviewed Meta’s co-chief privacy officers, Erin Egan and Michel Protti, about the company’s approach to privacy and its response to the FTC’s settlement order.

At one point in the conversation, Quinta mentions a class action settlement over the Cambridge Analytica scandal. You can read more about the settlement here. Information about Facebook’s legal arguments regarding user privacy interests is available here and here, and you can find more details in the judge’s ruling denying Facebook’s motion to dismiss.

Note: Meta provides support for Lawfare’s Digital Social Contract paper series. This podcast episode is not part of that series, and Meta does not have any editorial role in Lawfare.
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean for a large language model to act with "malice"? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what's the best way to deal with this problem: private lawsuits or government regulation?

On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled "Large Libel Models.”
Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that’s used by millions to communicate every day?

On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban.
On the latest episode of Arbiters of Truth, Lawfare's series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California's Neely Center.

Earlier in his career, Ravi held a number of positions at Meta, where he worked to make Facebook's algorithm provide actual value, not just "engagement," to users. Quinta and Alan spoke with Ravi about why he thinks that content moderation is a dead end and why thinking about the design of technology is the way forward to make sure that technology serves us and not the other way around.
During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT?

Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230’s protections wouldn’t extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT’s output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place.
ChatGPT Tells All

2023-02-01 · 59:22
You've likely heard of ChatGPT, the chatbot from OpenAI. But you’ve likely never heard an interview with ChatGPT, much less an interview in which ChatGPT reflects on its own impact on the information ecosystem. Nor is it likely that you’ve ever heard ChatGPT promising to stop producing racist and misogynistic content. But, on this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosystem, Lawfare editor-in-chief Benjamin Wittes sat down with ChatGPT to talk about a range of things: the pronouns it prefers; academic integrity and the chatbot’s likely impact on that; and importantly, the experiments performed by a scholar named Eve Gaumond, who has been on a one-woman campaign to get ChatGPT to write offensive content. ChatGPT made some pretty solid representations that while this kind of thing may be in its past, it wouldn't be in its future.

So, following Ben’s interview with ChatGPT, he sat down with Eve Gaumond, an AI scholar at the Public Law Center of the University of Montréal, who fact-checked ChatGPT's claims. Can you still get it to write a poem entitled, “She Was Smart for a Woman”? Can you get it to write a speech by Heinrich Himmler about Jews? And can you get ChatGPT to write a story belittling the Holocaust?
Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy—and yet, reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. More and more, state-level officials are proposing and implementing changes in technology policy. Most prominently, Texas and Florida recently passed laws restricting how platforms can moderate content, which will likely be considered by the Supreme Court later this year.

On this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare senior editor Quinta Jurecic spoke with J. Scott Babwah Brennen and Matt Perault of the Center on Technology Policy at UNC-Chapel Hill. In recent months, they’ve put together two reports on state-level tech regulation. They talked about what’s driving this trend, why and how state-level policymaking differs—and doesn’t—from policymaking at the federal level, and what opportunities and complications this could create.
On November 19, Twitter’s new owner Elon Musk announced that he would be reinstating former President Donald Trump’s account on the platform—though so far, Trump hasn’t taken Musk up on the offer, preferring instead to stay on his bespoke website Truth Social. Meanwhile, Meta’s Oversight Board has set a January 2023 deadline for the platform to decide whether or not to return Trump to Facebook following his suspension after the Jan. 6 insurrection. How should we think through the difficult question of how social media platforms should handle the presence of a political leader who delights in spreading falsehoods and ginning up violence?

Luckily for us, Stanford and UCLA recently held a conference on just that. On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare senior editors Alan Rozenshtein and Quinta Jurecic sat down with the conference’s organizers, election law experts Rick Hasen and Nate Persily, to talk about whether Trump should be returned to social media. They debated the tangled issues of Trump’s deplatforming and replatforming … and discussed whether, and when, Trump will break the seal and start tweeting again.
When Facebook whistleblower Frances Haugen shared a trove of internal company documents with the Wall Street Journal in 2021, some of the most dramatic revelations concerned the company’s use of a so-called “cross-check” system that, according to the Journal, essentially exempted certain high-profile users from the platform’s usual rules. After the Journal published its report, Facebook—which has since changed its name to Meta—asked the platform’s independent Oversight Board to weigh in on the program. And now, a year later, the Board has finally released its opinion.

On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare senior editors Alan Rozenshtein and Quinta Jurecic sat down with Suzanne Nossel, a member of the Oversight Board and the CEO of PEN America. She talked us through the Board’s findings, its criticisms of cross-check, and its recommendations for Meta going forward.
It’s Election Day in the United States—so while you wait for the results to come in, why not listen to a podcast about the other biggest story obsessing the political commentariat right now? We’re talking, of course, about Elon Musk’s purchase of Twitter and the billionaire’s dramatic and erratic changes to the platform. In response to Musk’s takeover, a great number of Twitter users have made the leap to Mastodon, a decentralized platform that offers a very different vision of what social media could look like. What exactly is decentralized social media, and how does it work? Lawfare senior editor Alan Rozenshtein has a paper on just that, and he sat down with Lawfare senior editor Quinta Jurecic on the podcast to discuss for an episode of our Arbiters of Truth series on the online information ecosystem. They were also joined by Kate Klonick, associate professor of law at St. John’s University, to hash out the many, many questions about content moderation and the future of the internet sparked by Musk’s reign and the new popularity of Mastodon.

Among the works mentioned in this episode:
“Welcome to hell, Elon. You break it, you buy it,” by Nilay Patel on The Verge
“Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve,” by Mike Masnick on Techdirt
Comments (3)

CJ

If for instance FB has to pay a violation fine, why not seed the expansion of FTC through appropriations but dedicate fines partially to FTC funding and research.

Feb 19th

C muir

tedious lefties

Feb 14th

C muir

the dying legacy media whining about the new media.😂

Feb 14th