Mystery AI Hype Theater 3000

Authors: Emily M. Bender and Alex Hanna


Description

Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, and from political economy to art made by machines.
43 Episodes
Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in the US.

Paris Marx is a tech journalist and host of the podcast Tech Won’t ...
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”

Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.

Alex and Emily explain why so-called “fully auto...
Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.

Fresh AI Hell:
Synthetic data for Hollywood test screenings
NaNoWriMo's AI fail
AI is built on exploitation
NaNoWriMo sponsored by an AI writing company
NaNoWriMo's AI writing sponsor creates bad writing
AI assis...
Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while...
The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.

References:
The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog
The Washington Post's First AI Strategy Editor Talks LLM...
Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.

References:
The CEO of Zoom wants AI clones in meetings
All-knowing machines are a fantasy
A reminder of some things chatbots are not good for
Medical science s...
We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.

Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. ...
When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

Ali Alkhatib is a computer scientist and former...
You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.

References:
Blog post, May 14: Generative AI in Search: Let Google do the searching for you
Blog post, May 30:...
The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws.

References:
Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States
Tech Policy Press: US Senate AI Insight Forum Tracker
Put the Pu...
Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at HuggingFace, break down why 'AI deception' is firmly a feature of human hype.

Reference:
Patterns: "AI deception: A survey of examples, risks, and potential solutions"

Fresh AI Hell:
Adobe's 'ethical' image gene...
AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information, to the special corner reserved for ShotSpotter.

*Lyrics & video on Peertube.

Surveillance:
Public kiosks slurp phone data
Workplace surveillance
Surveillance by bathroom mirror
Stalking-as-a-service
Cops tap everyone else's videos
Facial recognition at the doctor's office

Synthetic info...
Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research.

Dr. Molly Crockett is an associate professor of psychology at Princeton University. Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book, In the Land of the Unreal: Virtual and Other Realities in ...
Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that, she served as co-lead of Google's Ethical AI research team, from which she was fired in December 2020 for raising issues of discrimination in the workplace. Timnit also ...
Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications.

References:
Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform
The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?

Fresh AI Hell:
Alliance for the Future
VentureBeat: Google researchers unveil ‘VLOGGER’, an AI that can bring still ...
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects -- and explain why these writings are essentially calls to fabricate data.

References:
PNAS: ChatGPT outperforms crowd workers for text-annotation tasks
Beware the Hype: ChatGPT Didn't Replace Human Data Annotators
ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
Political Analysis: Out of ...
Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, f...
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.

References:
Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership
ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education
MLive: Your Classmate Could Be an AI Student at this ...
Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster, or easier -- and why bad methodology may still trick companies into trying to replace human workers with mathy-math.

Visit us on PeerTube for the video of this conversation.

References:
OpenAI: GPTs are GPTs
Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth
FYI: Over the last 60...
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Ce...