Computer Says Maybe
Author: Alix Dunn
© 2024
Description
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
86 Episodes
Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?

More like this: Independent Researchers in a Platform Era w/ Brandi Geurkink

Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (HRDAG) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data & Society explore the role research institutions can play in bridging research knowledge and policy prescription.

Further reading & resources:
- HRDAG’s involvement in the trial of José Efraín Ríos Montt
- A profile of Guatemala and timeline of its conflict — BBC (last updated in 2024)
- To Protect and Serve? — a study on predictive policing by William Isaac and Kristian Lum
- An article about the above study — The Appeal
- HRDAG’s stand against tyranny
- More on Understanding AI — Data & Society’s event series with the New York Public Library
- About Janet Haven, Executive Director of Data & Society
- About Charlton McIlwain, board president of Data & Society
- Bias in Computer Systems by Helen Nissenbaum
- Center for Critical Race and Digital Studies
- If you want to hear more about the history of D&S, the full conversation is up on YouTube.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou
Imagine doing tech research… but from outside the tech industry? What an idea…

More like this: Nodestar: Turning Networks into Knowledge w/ Andrew Trask

So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque, and funding has never been so sparse.

Further reading & resources:
- More about Brandi and The Coalition
- Understanding Engagement with U.S. (Mis)Information News Sources on Facebook by Laura Edelson & Damon McCoy
- More on Laura Edelson
- More on Damon McCoy
- Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations — Politico
- Ted Cruz on preventing jawboning & government censorship of social media — Bloomberg
- Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X — The Guardian
- See the CCDH’s blog post on getting the case thrown out
- Platforms are blocking independent researchers from investigating deepfakes by Ariella Steinhorn

Disclosure: This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.

More like this: Algorithmically Cutting Benefits w/ Kevin De Liban

Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of a government context, and this week she shares her journey from incrementally improving these systems (boring, ineffective, hard) — to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).

Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.

Further reading & resources:
- The Observatory of Public Algorithms and their Inventory
- The ongoing court case against the French welfare agency's risk-scoring algorithm
- More about Soizic
- More on the Transparency of Public Algorithms roadmap from Etalab — the task force Soizic was part of
- La Quadrature du Net
- France’s Digital Inquisition — co-authored by Soizic in collaboration with Lighthouse Reports, 2023
- AI prototypes for UK welfare system dropped as officials lament ‘false starts’ — The Guardian, Jan 2025
- Learning from Cancelled Systems by Data Justice Lab
- The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment — by Nari Johnson et al, featured in FAccT 2024

**Subscribe to our newsletter to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!**
Seeing is believing. Right? But what happens when we lose trust in the recorded media put in front of us?

More like this: The Toxic Relationship Between AI and Journalism w/ Nic Dawes

We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1992, when Rodney King was assaulted by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case. Someone used a camcorder and caught it on video. It changed our understanding of the role video could play in accountability. And in the past 30 years, we’ve gone from using video for evidence and advocacy, to AI slop threatening to seismically reshape our shared realities.

Now apps like Sora provide impersonation-as-entertainment. How did we get here?

Further reading & resources:
- More on the riots following the beating of Rodney King — NPR
- More about Sam and WITNESS
- ObscuraCam — a privacy-preserving camera app from WITNESS and The Guardian Project
- C2PA: the Coalition for Content Provenance and Authenticity
- Deepfakes Rapid Response Force by WITNESS

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles
What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?

More like this: Reanimating Apartheid w/ Nic Dawes

This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom The City. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?

Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’

Further reading & resources:
- Judge allows ‘New York Times’ copyright case against OpenAI to go forward — NPR
- Generative AI and news report 2025: How people think about AI’s role in journalism and society — Reuters Institute
- An example of The City’s investigative reporting: private equity firms buying up property in the Bronx — 2022
- The Intimacy Dividend — Shuwei Fang
- Sam Altman on Twitter announcing that they’ve improved ChatGPT to be mindful of the mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Mozilla Foundation wants to chart a new path in the AI era. But what is its role now, and how can it help reshape the impacts and opportunities of technology for… everyone?

More like this: Defying Datafication w/ Abeba Birhane

Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?

As Nabiha says, “restraint is a design principle too”.

Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.

Further reading & resources:
- Watch this episode on YouTube
- Imaginative Intelligences — a programme of artist assemblies run by Mozilla Foundation
- Nothing Personal — a new counterculture editorial platform from the Mozilla Foundation
- More about MozFest
- Nabiha on the Computer Says Maybe live show at the 2025 AI Action Summit
- Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI — The Register
- Nabiha on why she joined MF as executive director — MF Blog

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Loneliness and mental illness are rising in the US, while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?

More like this: The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew

Why are individuals confiding in chatbots over qualified human therapists? Stevie Chancellor explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.

Further reading & resources:
- Stevie’s paper on whether replacing therapists with LLMs is even possible (it’s not)
- See the research on GitHub
- People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies — Rolling Stone (May 2025)
- Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future
- Loneliness considered a public health epidemic according to the APA
- FTC orders online therapy company BetterHelp to pay damages of $7.8m
- Delta’s plan to use AI in ticket pricing draws fire from US lawmakers — Reuters, July 2025

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
What’s it like working as a local representative when you live next door to Silicon Valley?

More like this: Chasing Away Sidewalk Labs w/ Bianca Wylie

When Hillary Ronen was on the board of supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in an AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.

Further reading & resources:
- Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy by Hillary Ronen
- More on Hillary’s work as a Supervisor for SF
- Hillary Ronen on progressives, messaging, hard choices, and justice — interview in 48Hills from January 2025
- More about Local Progress
- Confronting Preemption — a short briefing by Local Progress
- What Happens When State and Local Laws Conflict — article on state-level preemption by State Court Report

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Welcome to the final boss of scams in the age of technology: Enshittification.

More like this: Nodestar: The Eternal September w/ Mike Masnick

This is our final episode of Gotcha! — our series on scams, how they work, and how technology both amplifies and obscures them. For this final instalment we have Cory Doctorow on to chat about his new book, Enshittification.

Is platformisation essentially just an industrial-level scam? We deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…

Further reading & resources:
- Buy Enshittification now from Verso Books!
- Picks and Shovels by Cory Doctorow
- On The Media series on Enshittification
- Pluralistic — Daily Links and essays by Cory Doctorow
- Conservatism Considered as a Movement of Bitter Rubes — Cory on why conservatism creates a friendly environment for scams
- How I Got Scammed — Cory on his personal experiences of being scammed
- All of Cory’s books
- All (Antitrust) Politics Are Local — the entry to Pluralistic that Cory wrote on the day of recording
Thought we were at peak scam? Well, ScamGPT just entered the chat.

More like this: Gotcha! The Crypto Grift w/ Mark Hays

This is part three of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Lana Swartz and Alice Marwick join Alix to discuss their primer on how generative AI is automating fraud.

We dig into the very human, very dark world of the scam industry, where the scammers are often being exploited in highly sophisticated human trafficking operations — and are now using generative AI to scale up and speed up.

We talk about how you probably aren’t going to get a deepfake call from a family member demanding a ransom, but the threats are still evolving in ways that are scary and, until now, largely unregulated. And, as ever, even though technology makes these problems worse, we explore the limits of technology and law in stemming the tide.

Further reading & resources:
- Read the primer here!
- More about Lana Swartz
- More about Alice Marwick
- New Money by Lana Swartz
- Scam: Inside Southeast Asia's Cybercrime Compounds by Mark Bo, Ivan Franceschini, and Ling Li
- Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people
- Al Jazeera True Crime Report on scamming farms in South East Asia
- Scam Empire project by the Organised Crime and Corruption Reporting Project

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
This just in with data centers: energy grids are strained, water is scarce, utility costs are through the roof — ah well, let them eat compute, I guess!

More like this: AI Thirst in a Water-Scarce World w/ Julie McCarthy

It was just climate week in NYC and we did a live show on data centers with four amazing guests from around the US. Thank you to the Luminate Foundation for sponsoring this live show, and to all of our NY-based friends and network from around the world who made it to Brooklyn for a magical evening. You can also watch the live recording on YouTube.

KeShaun Pearson (Memphis Community Against Pollution) breaks down how Elon Musk’s xAI supercomputer is polluting the air of historically Black neighborhoods in Memphis, and how organizers are fighting back against yet another chapter of corporate extraction in their communities.

KD Minor (Alliance for Affordable Energy) demystifies the energy impacts of data centers in Louisiana and shares organizing strategies to mobilize community opposition to Big Tech and Big Oil infrastructure.

Marisol (No Desert Data Center) talks about their grassroots coalition’s recent win in Tucson to stop Amazon’s Project Blue data center proposal, which threatened the city’s scarce water supply, and how they’re organizing for future protections.

Amba Kak (AI Now Institute) talks us through the bigger picture: what’s behind Big Tech’s AI data center expansion, who stands to benefit from this boom, and what we sacrifice in return.

Further reading & resources:
- Amazon Web Services is the company behind Tucson’s Project Blue, according to a 2023 county memo — from Luminaria
- Tucson to create new policies around NDAs following the council’s regret at not knowing more about Project Blue — from Luminaria
- How Marana, also in the Tucson area, employed an ordinance to regulate water usage after learning about data center interest in the area
- xAI has requested an additional 150MW of power for Colossus in Memphis, bringing it to a total of 300MW
- Time reports on the increase in nitrogen dioxide pollution around Memphis due to xAI turbines
- KeShaun and Justin Pearson on Democracy Now discussing xAI’s human rights violations
- Meta’s Mega Data Center Could Strain Louisiana’s Grid — and Entergy Isn’t Prepared — report by the Alliance for Affordable Energy
- 'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community — 404 Media

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
OpenAI just secured a bizarre financial deal with Nvidia — but the math is not mathing. Is the AI sector an actual market, or a series of high-profile announcements of circular relationships between a tiny number of companies?

More like this: Making Myths to Make Money w/ AI Now

Alix sat down with Sarah Myers West to go through the particulars of this deal, and other similar deals that are propping up AI’s industry of vapour. This is not your traditional bubble that’s about to burst — there is no bubble, it’s just that The New Normal is to pour debt into an industry that cannot promise any returns…

Further reading & resources:
- More on the Nvidia OpenAI deal — CNBC
- Analysts refer to deal as ‘vendor financing’ — Insider Monkey
- Spending on AI is at Epic Levels. Will it Ever Pay Off? — WSJ
- OpenAI, Softbank, and Oracle spending $500bn on data centre expansion in Abilene — Reuters
- How Larry Ellison used the AI boom and the Tony Blair Institute to bolster his wealth
- Oracle funding OpenAI data centers with heaps of debt and will have to borrow at least $25bn a year — The Register

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Multi-level marketing schemes have built an empire by enticing people with promises of self-realisation and economic freedom. The cost is simple: exploit and be exploited.

More like this: Worker Power & Big Tech Bossmen w/ David Seligman

This is part two of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Bridget Read came to Alix with a very exciting business opportunity. Bridget authored Little Bosses Everywhere — a book on the history of MLM.

We explore how door-to-door sales in the mid-20th-century US took on the business model of a Ponzi scheme, and transformed the sweaty salesman into an entrepreneurial recruiter with a downline.

MLM originators were part of a coordinated plan to challenge the New Deal in favour of radical free enterprise, where the only thing holding you back is yourself, and the economy consists solely of consumers selling to each other in a market of speculation. The secret is, no one is selling a product — they’re selling a way of life.

Further reading & resources:
- Buy Bridget’s book: Little Bosses Everywhere: How the Pyramid Scheme Shaped America
- Family Values by Melinda Cooper
- The Missing Crypto Queen: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto
- LuLaRoe — the pyramid scheme that tricked American mums into selling cheap clothes to their friends and family with the promise of financial independence
- My Experience of Being in a Pyramid Scheme (Amway) — a personal account by Darren Mudd on LinkedIn
- Watch our recent live show at NYC Climate Week

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Hey you! Do you want some free internet money? If this sounds too good to be true, that’s because it is!

More like this: Making Myths to Make Money w/ AI Now

This is Gotcha! A four-part series on scams, how they work, and how technology is supercharging them. We start with Mark Hays from Americans for Financial Reform (AFR), and get into one of the biggest tech-fuelled financial scams out there: cryptocurrencies.

Like many things that require mass buy-in, crypto started with an ideology (libertarianism, people hating on Wall Street post-2008). But where does that leave us now? What has crypto morphed into since then, and how does it deceive both consumers and regulators into thinking it’s something that it’s not?

Further reading & resources:
- Seeing Like a State by James C. Scott
- Capital Without Borders by Brooke Harrington
- The Politics of Bitcoin by David Golumbia
- Learn more about Americans for Financial Reform
- Check out Web3 Is Going Great by Molly White
- Line Goes Up by Folding Ideas — an excellent survey of all the tactics and rug-pulls during the height of the NFT boom
- The Missing Crypto Queen: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Gotcha! is a four-part series on scams, how they work, and how technology is supercharging them — running through to October 10.

In the series we look at:
- Crypto: Mark Hays on how a thing touting financial freedom ended up being a kind of fin-cult, rife with scamming
- Multi-Level Marketing: Bridget Read on the history of the biggest and most successful type of scam that still plagues us today
- Generative AI: Data & Society’s primer on how generative AI is juicing the scam industrial complex
- Enshittification: Cory Doctorow on his upcoming book, and how the process of Enshittification represents user-hostile practices that scam people into paying more, and ecosystem lock-in
What if you could listen to multiple people at once, and actually understand them?

More like this: The Age of Noise w/ Eryk Salvaggio

In our final instalment (for now!) of Nodestar, Andrew Trask shares his vision for a world where we can assemble understanding from data everywhere. But not in a way that requires corporate control of our world.

If broadcasting is the act of talking to multiple people at once — what about broad listening? Where you listen to multiple sources of information, and actually learn something, without trampling over the control that individuals have over who sees what, when.

Andrew says that broad listening is difficult to achieve because of three huge problems: information overload, privacy, and veracity — and we are outsourcing these problems to central authorities, who abuse their power in deciding how to relay information to the public. What is Andrew doing at OpenMined to remedy this? Building protocols that decentralise access to training data for model development, obviously.

Further Reading & Resources:
- The Computer as a Communication Device by J.C.R. Licklider and Robert W. Taylor, 1968
- World Brain by H.G. Wells
- Learn more about OpenMined
- We’re gonna be streaming LIVE at Climate Week — subscribe to our YouTube

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Social media isn’t really social anymore. But that might be changing. Rudy Fraser over at Blacksky Algorithms has built something new: the infrastructure to provide a safe online space for the Black community, and in the process he challenges the ideas of hierarchical, centralised networks. His work — even outside the very cool development of Blacksky — is an amazing, concrete example of how the abstract ambitions of decentralisation can provide real value for people, and sets us up for a new kind of tech politics.

More like this: How to (actually) Keep Kids Safe Online w/ Kate Sim

This is part two of Nodestar, our three-part series on decentralisation. Blacksky is a community built using the AT Protocol by Rudy Fraser. Rudy built this both out of a creative drive to make something new using protocol thinking, and out of frustration over a lack of safe community spaces for Black folks where they could be themselves, and not have to experience anti-Black racism or misogynoir as a price of entry.

Rudy and Alix discuss curation as moderation, the future of community stewardship, freeing ourselves from centralised content decision-making, how technology might connect with mutual aid, and the beauty of what he refers to as ‘dotted-line communities’.

Further reading:
- Blacksky Algorithms
- Blacksky the app — if you want an alternative to Bluesky
- More about Rudy Fraser
- Open Collective — a fiscal host for communities and non-profits
- Paper Tree — community food bank
- The Implicit Feudalism of Online Communities by Nathan Schneider
- Flashes — a third-party Bluesky app for viewing photos
- The Tyranny of Structurelessness by Joreen

Rudy is a technologist, community organizer, and founder of Blacksky Algorithms, where he builds decentralized social media infrastructure that prioritizes community-driven safety, data ownership, and interoperability. As a Fellow at the Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet & Society, he advances research and development on technology that empowers marginalized communities, particularly Black users.
How did the internet become three companies in a trenchcoat? It wasn’t always that way! It used to be fun, and weird, and full of opportunity. To set the scene for the series, we spoke to a stalwart advocate of decentralisation, Mike Masnick.

More like this: Big Tech’s Bogus Vision for the Future w/ Paris Marx

This is part one of Nodestar, a three-part series on decentralisation: how the internet started as a wild west of decentralised exploration, got centralised into the hands of a small number of companies, and how the pendulum has begun its swing in the other direction.

In this episode Mike Masnick gives us a history of the early internet — starting with what was called the Eternal September, when millions of AOL users flooded the scene, creating a messy, unpredictable, exciting ecosystem of open protocols and terrible UIs.

Further reading & resources:
- Protocols, Not Platforms by Mike Masnick
- List of apps being built on AT Protocol
- Graze — a service to help you make custom feeds with ads on AT Protocol
- Otherwise Objectionable — an eight-part podcast series on the history of Section 230
- Techdirt podcast
- CTRL-ALT-SPEECH podcast

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Foxglove and Global Action Plan have just sued the UK government over its YOLO hyperscale data center plans.

More like this: Net0++: Data Centre Sprawl

Local government rejected the data center. But Starmer’s administration overruled them. They want to force the development of a water-guzzling, energy-draining data center on a local community that has said no. And all of this is on the green belt. The lawsuit filed this week might put a stop to those plans.

Alix sat down with Ollie Hayes from Global Action Plan and Martha Dark from Foxglove to discuss the legal challenge filed this week. Why now? Isn’t the UK aiming for net zero? And how does this relate to the UK government’s wider approach to AI?

Further reading & resources:
- Read the Guardian article about the suit
- Read the Telegraph piece about the suit
- Donate to the campaign
- Data Centre Finder on Global Action Plan

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@themaybe.org
What’s the deal with Silicon Valley selling imagined futures and never delivering on them? What are the consequences of an industry all-in on AI? What if we thought more deeply than just ‘more compute’?

More like this: Big Dirty Data Centres with Boxi Wu and Jenna Ruddock

This week, Paris Marx (host of Tech Won’t Save Us) joined Alix to chat about his recent work on hyperscale data centres, and his upcoming book on the subject.

We discuss everything from the US shooting itself in the foot with its lack of meaningful industrial policy, to how decades of lackluster political vision from governments created a vacuum that has now been filled with Silicon Valley's garbage ideas. And of course, how the US’s outsourcing of manufacturing to China has catalysed China’s domestic technological progress.

Further reading & resources:
- Buy Road To Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation by Paris Marx
- Data Vampires — limited series on data centres by Tech Won’t Save Us
- Apple in China by Patrick McGee

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**























