Computer Says Maybe

Author: Alix Dunn
Description
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
70 Episodes
Social media isn’t really social anymore. But that might be changing. Rudy Fraser at Blacksky Algorithms has built something new: the infrastructure to provide a safe online space for the Black community, challenging the ideas of hierarchical, centralised networks in the process. His work — even outside the very cool development of Blacksky — is an amazing, concrete example of how the abstract ambitions of decentralisation can provide real value for people, and sets us up for a new kind of tech politics.

More like this: How to (actually) Keep Kids Safe Online w/ Kate Sim

This is part two of Nodestar, our three-part series on decentralisation. Blacksky is a community built using the AT Protocol by Rudy Fraser. Rudy built it both out of a creative drive to make something new using protocol thinking, and out of frustration over the lack of safe community spaces for Black folks where they could be themselves and not have to experience anti-Black racism or misogynoir as a price of entry.

Rudy and Alix discuss curation as moderation, the future of community stewardship, freeing ourselves from centralised content decision-making, how technology might connect with mutual aid, and the beauty of what he refers to as ‘dotted-line communities’.

Further reading:
- Blacksky Algorithms
- Blacksky the app — if you want an alternative to Bluesky
- More about Rudy Fraser
- Open Collective — a fiscal host for communities and non-profits
- Paper Tree — community food bank
- The Implicit Feudalism of Online Communities by Nathan Schneider
- Flashes — a third-party Bluesky app for viewing photos
- The Tyranny of Structurelessness by Joreen

Rudy is a technologist, community organizer, and founder of Blacksky Algorithms, where he builds decentralized social media infrastructure that prioritizes community-driven safety, data ownership, and interoperability. As a Fellow at the Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet & Society, he advances research and development on technology that empowers marginalized communities, particularly Black users.
How did the internet become three companies in a trenchcoat? It wasn’t always that way! It used to be fun, and weird, and full of opportunity. To set the scene for the series, we spoke to a stalwart advocate of decentralisation, Mike Masnick.

More like this: Big Tech’s Bogus Vision for the Future w/ Paris Marx

This is part one of Nodestar, a three-part series on decentralisation: how the internet started as a wild west of decentralised exploration, got centralised into the hands of a small number of companies, and how the pendulum has begun its swing in the other direction.

In this episode Mike Masnick gives us a history of the early internet — starting with what was called the Eternal September, when millions of AOL users flooded the scene, creating a messy, unpredictable, exciting ecosystem of open protocols and terrible UIs.

Further reading & resources:
- Protocols, Not Platforms by Mike Masnick
- List of apps being built on AT Protocol
- Graze — a service to help you make custom feeds with ads on AT Protocol
- Otherwise Objectionable — an eight-part podcast series on the history of Section 230
- Techdirt podcast
- CTRL-ALT-SPEECH podcast

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Foxglove and Global Action Plan have just sued the UK government over its YOLO hyperscale data centre plans.

More like this: Net0++: Data Centre Sprawl

Local government rejected the data centre. But Starmer’s administration overruled them. They want to force the development of a water-guzzling, energy-draining data centre on a local community that has said no. And all of this is on the green belt. The lawsuit filed this week might put a stop to those plans.

Alix sat down with Ollie Hayes from Global Action Plan and Martha Dark from Foxglove to discuss the legal challenge filed this week. Why now? Isn’t the UK aiming for net zero? And how does this relate to the UK government’s wider approach to AI?

Further reading & resources:
- Read the Guardian article about the suit
- Read the Telegraph piece about the suit
- Donate to the campaign
- Data Centre Finder on Global Action Plan

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@themaybe.org
What’s the deal with Silicon Valley selling imagined futures and never delivering on them? What are the consequences of an industry all-in on AI? What if we thought more deeply than just ‘more compute’?

More like this: Big Dirty Data Centres with Boxi Wu and Jenna Ruddock

This week, Paris Marx (host of Tech Won’t Save Us) joined Alix to chat about his recent work on hyperscale data centres, and his upcoming book on the subject.

We discuss everything from the US shooting itself in the foot with its lack of meaningful industrial policy, to how decades of lackluster political vision from governments created a vacuum that has now been filled with Silicon Valley's garbage ideas. And of course, how the US’s outsourcing of manufacturing to China has catalysed China’s domestic technological progress.

Further reading & resources:
- Buy Road to Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation by Paris Marx
- Data Vampires — limited series on data centres by Tech Won’t Save Us
- Apple in China by Patrick McGee

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
How do we yank power out of tech oligarchs’ hands without handing it over to someone else?

More like this: Is Digitisation Killing Democracy? w/ Marietje Schaake

Cori Crider is a fearless litigator turned market-shaping advocate. She spent many years litigating at the leading human rights organisation Reprieve, then co-founded Foxglove so she could sue big tech. Now she’s set her sights on market concentration.

Cori’s analysis concludes with a hopeful message: we are not stuck in place with eight dudes running the show. In fact, we’ve been here before. The computer age never would have happened the way it did if thousands of patents hadn’t been liberated from Bell Labs in 1956. How can we use similar tactics to dethrone monopolies, and how can Europe and other large jurisdictions decouple themselves from Silicon Valley infrastructure?

Further reading & resources:
- Antitrust Policy for the Conservative by Mark Meador of the FTC
- The Open Markets Institute
- The Future of Tech Institute

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Do you have an idea for the show? Email pod@themaybe.org
Did you miss FAccT? We interviewed some of our favourite session organisers!

More like this: Part One of our FAccT roundup: Materiality and Militarisation

Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!

In part two we look into how AI is used to misrepresent people, through things like image generation and even care labour. These are conversations about AI misrepresenting hidden identities, care work becoming data work, how pride and identity are tied to labour — and how labour organisers are building solidarity and movement around this.

Who features in this episode:
- Priya Goswami brought a multimedia exhibition to FAccT: Digital Bharat. It explores the invisibilised care work and manual labour of women in India, and how their day-to-day has become mediated by digital public infrastructures.
- Kimi Wenzel organised Invisible by Design? Generative AI and Mirrors of Misrepresentation, which invited users to confront generated images of themselves and discuss issues of representation within these systems.
- Alex Hanna and Clarissa Redwine ran the AI Workers Inquiry, which brought people together to share how AI has transformed their work, identify common ground, and potentially begin building resistance.

Further reading & resources:
- Circuit Breakers — tech worker conference organised by Clarissa Redwine
- Kimi Wenzel’s research
- Buy The AI Con by Alex Hanna and Emily Bender

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
In May, Grok couldn’t stop talking about white genocide. This injection of right-wing South African politics triggered a conversation with a Musk contemporary, Nic Dawes.

In this short, Nic shares his perspective on how white South African communities have dealt with apartheid’s end, and how Musk is basically seeking out an information environment that can recreate the apartheid information system: Grok is just an extension of a media ecosystem designed to soothe guilt and stoke resentment.

Computer Says Maybe Shorts cover recent news with an expert in our network. If there is a news story you want us to cover, please email pod@themaybe.org

Nic is Executive Director at THE CITY, a news outlet serving the people of New York through independent journalism that holds the powerful to account, deepens democratic participation, and helps make sense of the greatest city in the world. He has led news and human rights organizations on three continents, and was previously Deputy Executive Director of Human Rights Watch, Chief Content Officer of Hindustan Times in Delhi, and Editor-in-Chief of South Africa's Mail & Guardian newspaper.
Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!

In part one we explore the depth of AI’s hidden material impacts, including its use in military applications and to aid genocide. One of our interviewees talked about why they spoke up at the town hall — questioning why FAccT, the biggest AI ethics conference there is, accepts sponsorship from those same military contractors.

Who we interviewed for Part One:
- Charis Papaevangelou, who co-organised a CRAFT session called The Hidden Costs of Digital Sovereignty. Greece is trying to position itself as a central digital hub by building data centres and participating in the ‘fourth industrial revolution’ — but what does this actually mean for the people and infrastructure of Greece?
- Georgia Panagiotidou ran a session on The Tools and Tactics for Supporting Agency in AI Environmental Action — offering some ideas on how the community can get together and meaningfully resist extractive practices.
- David Widder discussed his workshop on Silicon Valley and the Pentagon, and his research on the recent history of the DoD funding academic papers — is it ever worth taking military money, even for basic research?
- Tania Duarte offered something very different: a demonstration of two workshops she runs for marginalised groups, to better explain the true materiality of AI and build knowledge that gives people more agency over the dominant narratives and framings in the industry.

Further reading & resources:
- Recording of Charis’s CRAFT session: The Hidden Costs of Digital Sovereignty
- Cloud hiding undersea: Cables & Data Centers in the Mediterranean crossroads by Theodora Kostaka
- Basic Research, Lethal Effects: Military AI Research Funding as Enlistment, and Why ‘open’ AI systems are actually closed and why this matters — both by David Widder
- The video David quoted the Carnegie Mellon professor from — David was paraphrasing in the episode!
- We and AI & Better Images of AI
- More on Georgia Panagiotidou’s work and resources from her session

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
AI Now have just released their 2025 AI landscape report, Artificial Power. Alix sat down with two of its authors, Amba Kak and Sarah Myers-West, for a light unpacking of the themes within.

This report isn’t a boring survey of what AI Now have been doing this year; it’s a comprehensive view of the state of AI, and the concentrated powers that prop it up. What are the latest AI-shaped solutions that the hype guys are trying to convince us are real? And how can we reclaim a positive agenda for innovation — and unstick ourselves from a path towards pseudo-religious AGI?

Further reading & resources:
- Read the AI Now 2025 Landscape Report: Artificial Power

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.

Sarah Myers-West has spent the last fifteen years interrogating the role of technology companies and their emergence as powerful political actors on the front lines of international governance. Sarah brings this depth of expertise to policymaking in her current role co-directing AI Now, with a focus on addressing the market incentives and infrastructures that shape tech’s role in society at large and ensuring it serves the interests of the public. Her forthcoming book, Tracing Code (University of California Press), draws on years of historical and social science research to examine the origins of data capitalism and commercial surveillance.
Felienne Hermans calls herself an ‘involuntary ethnographer of computer science’. She studies the culture behind programming, and challenges the dominant idea that learning to program has to be painful. Alix and Felienne chat about the history of programming and how it went from multidisciplinary and inclusive to masochistic and exclusive. They also dig into all the ways it excludes women and people who do not speak English.

Further reading & resources:
- Scratch — a high-level programming language aimed at kids
- Hedy — the programming language that Felienne designed
- Join in and help out with Hedy!
- GenderMag by Margaret Burnett — how to ensure more gender inclusiveness in your software
- Elm — an easy and kind browser-based programming language
- A Case for Feminism in Programming Language Design by Felienne Hermans & Ari Schlesinger
- A Framework for the Localization of Programming Languages by Felienne Hermans & Alaaeddin Swidan

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Felienne is the creator of the Hedy programming language, a gradual and multilingual programming language designed for teaching. She is the author of The Programmer’s Brain, a book that helps programmers understand how their brain works and how to use it more effectively. In 2021, Felienne was awarded the Dutch Prize for ICT research. She also has a weekly column on BNR, a Dutch radio station.
Smart people focused on technology politics get it. We trade helpful high-level concepts like surveillance capitalism, automated inequality, and enshittification. But even as some of these ideas make it into the mainstream, normies aren’t getting the message. We need stories for that. But how? How do we take the technical jargon and high-level concepts that dominate tech narratives and instead create stories that are personal, relatable, and powerful? And how do we combat the amazing hero-god narratives of Silicon Valley without reinforcing them?

Alix went to storytelling festival ZEG Fest in Tbilisi to chat with three amazing storytellers about that challenge:
- Armando Iannucci, creator of Veep and The Thick of It, discusses how to use humour and satire to keep things simple — and how stories are not ‘made up’, but rather a way to relay a series of facts and concepts that are complex and difficult to process.
- Chris Wylie, Cambridge Analytica whistleblower, on how the promise of superintelligence and transhumanism is basically a religious prophecy. His new show Captured explores the stories that tech elites are telling us about our utopian AI future.
- Adam Pincus, producer of The Laundromat and Leave No Trace, shares his frustrations with the perceived inevitability of AI in his day-to-day, and tells us more about his podcast series What Could Go Wrong?, in which he explores writing a Contagion sequel with director Scott Burns.

Further reading & resources:
- Captured: The Secret Behind Silicon Valley’s AI Takeover — limited podcast series featuring Chris Wylie
- ‘Contagion’ Screenwriter Scott Z. Burns Asks AI to Write a Sequel to Pandemic Film in Audible Original Series ‘What Could Go Wrong?’ — Variety article
- What Could Go Wrong? — limited podcast series by Scott Burns

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
There has been an intentional and systematic narrative push that tells governments they are not good enough to provide their own public infrastructure, or to regulate the tech companies that provide it for them.

Shocking: these narratives stem from large tech companies, and they represent what Marietje Schaake refers to as a tech coup — which is the title of her book (which you should buy!).

The Tech Coup refers to the inability of democratic policymakers to provide oversight, regulation, and even visibility into the structural systems that big tech is building, managing, and selling. Marietje and Alix discuss what happens when you have a system of states whose knowledge and confidence have been gutted over decades — hindering them from providing good services and from understanding how to meaningfully regulate the tech space.

Further reading & resources:
- Buy The Tech Coup by Marietje Schaake

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.
This episode contains descriptions of torture methods, automated human targeting by machines, and psychological warfare throughout.

Last week Alix hosted a live show in Mexico City right after REAL ML. Four panellists discussed a hugely important topic, one wrongfully deemed taboo by other conferences: the use of AI and other technologies to support the ongoing genocide in Palestine.

Here’s a preview of what the four speakers shared:
- Karen Palacio, AKA kardaver, gave us an overview of Operation Condor — a program of psychological warfare that ran in the late 20th century in South America to suppress activist voices.
- Marwa Fatafta explained how these methods are still used today against Palestinians; coordinated surveillance projects make Palestinian citizens feel they are living in a panopticon, and the granular data storage and processing is facilitated by AWS, Google, and Azure.
- Matt Mahmoudi described how these surveillance projects have crystallised into sophisticated CCTV and facial recognition networks through which Palestinians are continuously dehumanised via face-scanning and arbitrary checks that restrict movement.
- Wanda Muñoz discussed how fully autonomous weapons obviously violate human rights in all kinds of ways — yet ‘AI ethics’ frameworks never make any considerations for machines that make life-or-death decisions.

Further reading & resources:
- The Biometric State by Keith Breckenridge — where the phrase ‘automated apartheid’ was conceived
- COGWAR Report by Karen Palacio, AKA kardaver

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Wanda Muñoz is an international consultant with twenty years of experience in the design, implementation and evaluation of programs and policies on human rights, gender equality, inclusion and the rights of people with disabilities. Wanda has worked for international NGOs and UN organizations in Asia, Africa, Europe and Latin America. She became involved in the field of artificial intelligence in 2017, initially through the analysis of its intersection with International Humanitarian Law on the issue of autonomous weapons systems, and later focusing on the intersection between human rights and AI. In 2020, she was nominated by the Ministry of Foreign Affairs of Mexico as an independent expert at the Global Partnership on Artificial Intelligence (GPAI), where she contributed to various publications and panels, and led the design of the research “Towards true gender equality and diversity in AI” that is currently being implemented. In 2020, Wanda Muñoz was recognized by the Nobel Women's Initiative as "a peacebuilder working for peace, justice and equality" and by UNLIREC as one of Latin America's "forces of change, working for humanitarian disarmament, non-proliferation and arms control". Wanda also recently won the DEI Champion of the Year Award from Women in AI.

Karen Palacio, aka kardaver, is an interdisciplinary digital artist, industrial programmer specialized in AI, and data scientist from Córdoba, Argentina. She researches and creates through iterative loops of implementation and reflection, aiming to understand what it means to articulate artistic-technological discourses from the Global South. Her performances, installations, and audiovisual works engage critically and rootedly with the depths of computation, the histories of computing and archives, freedom of knowledge, feminisms, and the pursuit of technological sovereignty. She develops and works with Free Software in her processes, resemanticizing technologies she knows from her background as an industrial programmer.

Dr Matt Mahmoudi is Assistant Professor in Digital Humanities at the University of Cambridge, and a Researcher/Advisor on Artificial Intelligence and Human Rights at Amnesty International. Matt’s work has looked at AI-driven surveillance from the NYPD’s surveillance machine to Automated Apartheid in the occupied Palestinian territory. Matt is the author of Migrants in the Digital Periphery: New Urban Frontiers of Control (University of California Press, February 2025), and co-editor of Resisting Borders & Technologies of Violence (Haymarket, 2024), together with Mizue Aizeki and Coline Schupfer.

Marwa Fatafta leads Access Now’s policy and advocacy work on digital rights in the Middle East and North Africa (MENA) region. Her work spans a number of issues at the nexus of human rights and technology, including content governance and platform accountability, online censorship, digital surveillance, and transnational repression. She has written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts and humanitarian contexts and their impact on historically marginalized and oppressed communities. Marwa is a Policy Analyst at Al-Shabaka: The Palestinian Policy Network, an advisory board member of the Tahrir Institute for Middle East Policy, and an advisory committee member for Bread&Net. Marwa was a Fulbright scholar in the US and holds an MA in International Relations from the Maxwell School of Citizenship and Public Affairs, Syracuse University, and a second MA in Development and Governance from the University of Duisburg-Essen.
Adele Walton’s new book Logging Off: The Human Cost of Our Digital World is out NOW — for this week’s episode Alix sat down with her to discuss the book, and what pushed her to write it.

Adele shares her experiences of using social media from age ten, and of growing up only ever feeling ‘understood’ by her followers. Now, the constant ‘how can I make content out of this??’ mindset has followed her into adult life.

Adele has been severely affected by online harms through the loss of her sister, and is working to use her lived experience in her campaigning and advocacy work. The answer for Adele has never been to go full Luddite and reject social media — rather, she wants to make online spaces safer for everyone.

Further reading & resources:
- Buy Adele’s book: Logging Off: The Human Cost of Our Digital World
- The Facebook Eye by Nathan Jurgenson — 2012 article from The Atlantic
- Smartphone Free Childhood
- Ripple — a suicide prevention browser extension

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Adele Zeynep Walton is a British-Turkish journalist, online safety campaigner, and the author of Logging Off: The Human Cost of Our Digital World. She is a campaigner with Bereaved Families for Online Safety, a youth ambassador for People Vs Big Tech, and a founding member of the EU youth movement Ctrl + Alt + Reclaim.

She is the founder of Logging Off Club, a community which brings people together offline at phone-free events to reconnect with themselves and others across the UK. As a Gen Z who grew up on social media, Adele regularly speaks about digital wellbeing, social connection, and rebuilding empathy in a polarised world.

Adele has written for The Guardian, The Independent, the i, Dazed, i-D, VICE, Metro, Refinery 29, The Big Issue, Jacobin, Open Democracy, gal-dem, Computer Weekly and more. Her articles have been translated into Brazilian Portuguese, German, Italian, Swedish, Turkish and Spanish, and she has been interviewed on Times Radio, LBC Radio, Sky News, BBC Radio Scotland, Channel 4 News and more. Between 2023 and 2024 Adele was DAZED's first ever political book columnist, interviewing authors including Naomi Klein, Emma Dabiri, Vicky Spratt and more.
Sam Altman is doing another big infrastructure push with World (previously Worldcoin) — the universal human verification system.

We had journalist Billy Perrigo on to chat about what’s what with World. Is Sam Altman just providing a solution to a problem that he himself caused with OpenAI? Do we really need human verification, or is this just a way to sidestep the AI content watermarking issue?

Further reading & resources:
- The Orb Will See You Now by Billy Perrigo
- The ethical implications of AI agents by DeepMind

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@saysmaybe.com

Perrigo is a correspondent at TIME, based in the London bureau. He covers the tech industry, focusing on the companies reshaping our world in strange and unexpected ways. His investigation ‘Inside Facebook’s African Sweatshop’ was a finalist for the 2022 Orwell Prize.
Most of the time we interview people who say No to AI. In this interview, Georgia and Alix talk to two people who look at AI and ask How and For What — and lots of other questions too.

Divya Siddarth and Zarinah Agnew from the Collective Intelligence Project share CIP’s work using AI systems to explore more consultative, democratic governance: how to reframe the social and relational nature of knowledge, and pull our thinking out of the individual frame and into collective and communal applications.

In Zarinah’s words, they are interested in what happens “between brains, not within brains”. A ‘community chatbot’ might sound cringe, but Divya and Zarinah are doing work to make these valuable and useful, rather than addictive and sycophantic. If you’re skeptical of the utility of engaging with these toxic corporate towers of AI at all, this is an episode for you.

Further reading & resources:
- Why We Need an Amistics for AI by Brian Boyd
- Collective Constitutional AI project with CIP and Anthropic
- Global Dialogues launch announcement
- I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered by Joseph Cox from 404 Media
- Worker Power & Big Tech Bossmen w/ David Seligman
- The Orb Will See You Now by Billy Perrigo
- The Intimacy Dividend by Shuwei Fang

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Divya Siddarth is the executive director and co-founder of CIP. Previously, she was a political economist and social technologist in Microsoft’s Office of the CTO and the AI and Democracy lead at the U.K.’s AI Safety Institute, and held positions at the Ethics in AI Institute at Oxford, the Ostrom Workshop, and the Harvard Safra Center. She graduated from Stanford with a B.S. in Computational Decision Analysis in 2018.

Zarinah is Research Director at the Collective Intelligence Project, where they work on transforming public input into impactful change in the AI ecosystem. Previously a neuroscientist, Zarinah now focuses on the science of collectivity and emerging related technologies. Zarinah is faculty at the London College of Political Technology, where they teach Future Crafting. In their spare time, some might argue, they run too many non-profits.
We’re excited to finally share our report on data centre expansion and resistance around the world. It’s been a labor of love, and it showcases the amazing work of many organisations, activists, and journalists around the world who are working to create space for meaningful consultation about hugely consequential decisions. Download it here.

In short, the report includes five case studies on data centre development across the globe. We focused on understanding how companies approach policymakers, what information is made available to communities, how decisions are made to develop data centres, when communities decide to resist their development, and what the outcomes have been.

The ONE big similarity across all case studies is that information about data centre development was consistently hard to find: information about environmental impacts, urban planning, and even the identity of the companies proposing these projects has been almost impossible to uncover.

We end the report with some recommendations for how to increase transparency and crack open democratic consultation with the communities on the front lines of this behemoth tech infrastructure.

Further reading:
- Read the report here!
- A short More Perfect Union doc about living 400 yards from a data center
- Data Center Dynamics
- xAI's Memphis Neighbors Push for Facts and Fairness from Tech Policy Press

If you have any thoughts or feedback about the report, please email research@themaybe.org

Subscribe to our newsletter to get invites to community calls around data centre resistance.

Chris Cameron has been a scientist and researcher for over a decade and has worked in environmental justice policy since 2021. Her interest in investigating human rights violations related to environmental injustices has led to her current research into strategic litigation support for communities experiencing harm from data centers. Chris’s previous work has centered on co-designing projects with communities related to environmental rights advocacy and digital storytelling. She also hosts a radio show called Sound Ecology, a space for climate-oriented artists to share their sonic investigations as toolkits for the climate collapse. Contact Chris at cameroncscoop@gmail.com to speak more about data center litigation strategies and the intersection of technology and environmental justice.

Prathm Juneja is the Research Strategist at The Maybe and a PhD Candidate in Social Data Science at the University of Oxford, where his research examines AI & elections from a technical and ethical perspective. He works at the intersection of AI, research, industry, and politics, spending most of his time advising governments, civil society organizations, and companies on civic tech and tech policy.
Last year, Elon Musk’s xAI built a data centre in Memphis in 19 days — and the local government only found out about it on the 20th day. How?

Julie McCarthy and her team at NatureFinance have just released a report about the nature-related impacts of data centre development globally. There are some pretty dire statistics in there: 55% of data centres are developed in areas that are already at risk of drought. So why do they get built there?

Julie also shares the longer arc of her career, which began in extractive industry transparency and included time leading the Open Government Partnership and the Economic Justice Program at Open Society Foundations. She brings all of that experience together for an insightful conversation about what is happening with tech infrastructure expansion and what we should do about it.

Further reading & resources:
- Kate Raworth’s Doughnut Economics
- NatureFinance website
- Navigating AI’s Thirst in a Water-Scarce World — by NatureFinance
- Elon Musk building an xAI data centre in 19 days — report by Time Magazine
- OSF’s Economic Justice Programme
- The Entrepreneurial State by Mariana Mazzucato

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Julie is NatureFinance’s CEO. She was founding co-director of the Open Society Foundations’ (OSF) Economic Justice Program, a $100 million per annum global grantmaking and impact investment program focused on issues of fiscal justice, workers’ rights, and corporate power. Previous roles include serving as the founding director of the Open Government Partnership (OGP), and as a Franklin Fellow and peacebuilding adviser at the U.S. Mission to the United Nations, focused on Liberia. Prior to this, McCarthy co-founded the Natural Resource Governance Institute (NRGI), serving as its deputy director until 2009. She is a Brookings non-resident fellow in the Center for Sustainable Development, and an Aspen Civil Society Fellow. Julie lives with her three children in Warwick, NY.
This is another Computer Says Maybe Short, this time with Marietje Schaake (author of The Tech Coup), discussing OpenAI’s recent announcement: they want to partner with governments all around the world to build ‘democratic AI rails’ — sounds bad!

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@saysmaybe.com

Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Personalised genotyping company 23andMe just went bankrupt — what’s gonna happen to all that genetic data?

We brought back genomics professor Jenny Reardon to discuss the crushing void that was 23andMe’s business model — and the many companies like it that have failed before.

This is a Computer Says Maybe Short, where we bring in an expert to give their take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@saysmaybe.com

Further reading & resources:
- The Postgenomic Condition by Jenny Reardon
- Power Over Precision — Jenny’s first episode with us

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice, and democracy that are often silently embedded in scientific ideas and practices. She is the author of Race to the Finish: Identity and Governance in an Age of Genomics (Princeton University Press) and, most recently, The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome (University of Chicago Press).