ChatGPT Is Not Intelligent w/ Emily M. Bender
Description
Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology.
Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master’s Program. She’s also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social.
Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.
The podcast is produced by Eric Wickham and part of the Harbinger Media Network.
Also mentioned in this episode:
- Emily was one of the co-authors of the “On the Dangers of Stochastic Parrots” paper and co-wrote the “Octopus Paper” with Alexander Koller. She was also recently profiled in New York Magazine and has written about why policymakers shouldn’t fall for the AI hype.
- The Future of Life Institute put out the “Pause Giant AI Experiments” letter, and the authors of the “Stochastic Parrots” paper responded through the DAIR Institute.
- Zachary Loeb has written about Joseph Weizenbaum and the ELIZA chatbot.
- Leslie Kay Jones has researched how Black women use and experience social media.
- As generative AI is rolled out, many tech companies are firing their AI ethics teams.
- Emily points to the Algorithmic Justice League and the AI Incident Database.
- Deborah Raji wrote about data and systemic racism for MIT Technology Review.
- Books mentioned: Weapons of Math Destruction by Cathy O'Neil, Algorithms of Oppression by Safiya Noble, The Age of Surveillance Capitalism by Shoshana Zuboff, Race After Technology by Ruha Benjamin, Ghost Work by Mary L. Gray & Siddharth Suri, Artificial Unintelligence by Meredith Broussard, Design Justice by Sasha Costanza-Chock, Data Conscience: Algorithmic Siege on our Hum4n1ty by Brandeis Marshall.

Comments

@38:10: "... the identities they inhabit."?! Good grief. Why isn't "their identities" enough? There's a terrible irony in a linguist adding words in order to be less clear. I suspect the signal is that she's super-sensitive to unstated (unknown, imaginary, hypothetical) nuances, or super-scared of clarity. Less clarity = more plausible deniability.
@35:30: Ugh. Perhaps the actual research is better than the guest conveys here, but at a minimum her version suggests she isn't as knowledgeable about methods as she thinks she is. To warrant emphasizing the racial component, she'd need to compare that same rate across other races. But then, maybe that would slow down the virtue-signalling.
Thanks for this interview. Wonderful guest. Some of her interpretations seemed unduly and prejudicially oriented toward the virtue-signalling jargon of the social quasi- (or pseudo-) sciences, but the gist of her views was well founded and well stated. As in most interviews, Paris could have engaged more substantially rather than sounding as if he reflexively and exuberantly agrees with every utterance. As ever, he could also be more thoughtful about his diction, an ironic shortcoming in a conversation about language and artificial intelligence.