Data & Society

Author: Data & Society
Description

Presenting timely conversations about the purpose and power of technology that bridge our interdisciplinary research with broader public conversations about the societal implications of data and automation.

For more information, visit datasociety.net.
139 Episodes
While many people have found benefit and respite in using chatbots for companionship, mental health, and emotional support, the widespread adoption of these tools has also resulted in harm and raised deep concerns about identity and safety. How are chatbots shaping people’s understanding of themselves? What concerns do therapists have about their use? How might these tools be designed and implemented to prioritize users’ wellbeing? What kinds of guardrails, regulations, and safety protocols might be effective? In connection with Data & Society’s ongoing research on mental health and chatbots, on February 26 we explored these questions and more in a conversation moderated by researchers Livia Garofalo and Briana Vecchione. Together with Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy & Technology; and psychiatrist and psychotherapist Marlynn Wei, they discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now. Learn more about the event and Data & Society’s research on mental health chatbot interventions.
In her new report (404) Job Not Found: What Workforce Training Can’t Fix for Black Atlantans in the Age of AI, Data & Society researcher Anuli Akanegbu provides the first ethnographic examination of how AI-related skills are defined, taught, and valued across Atlanta’s growing tech economy. Drawing on interviews, field observations, and historical analysis, she traces how AI literacy is promoted by industry, implemented by government, and interpreted by workers and community leaders navigating an increasingly AI-driven workforce infrastructure. On February 17, Akanegbu, TechEquity Senior Vice President of Labor Programs Tim Newman, and Bard Computer Science Professor Annabel Rothschild held a critical conversation on the policy stakes of AI-focused workforce development at the state and national level. Informed by Akanegbu’s report and an accompanying policy brief co-authored by D&S Policy Manager Serena Oduro, who moderated the conversation, the panelists discussed how government and industry priorities shape workers’ access to opportunity and how policy can address the real-world impacts of automation and AI on workers. Learn more about the event. Read Anuli’s report. Learn about Data & Society’s ‘AI Civics’ initiative.
The second Trump administration has launched a full-scale effort to achieve “unchallenged global technological dominance.” It is accelerating the construction of AI infrastructure, from opening up federal lands to ramping up energy production. It has invoked AI-enabled “efficiency” in order to replace federal workers, removed agency guidance on algorithmic discrimination, and supercharged the use of AI in areas including defense and immigration enforcement. The administration has also pursued novel public ownership efforts, such as taking equity in Intel and critical minerals firms. To what end? Officials say they are now maximizing the “export of the American AI technology stack.” This is not the deregulatory tech agenda predicted by both supporters and critics of President Trump. So what is it? How should we understand the administration’s actions when it comes to AI? What dynamics are driving these changes in AI policymaking? What might be the downstream consequences for Americans? And how should we respond?
Generative AI models are marketed as the next revolution in workplace automation, but they ultimately rely on human labor — from the people labeling content and checking outputs, to the content creators and workers whose data are extracted to build the systems. As management and organizational leaders adopt AI across workplaces, the use of these systems raises questions about how companies are reshaping the quality of work, job security, and the value of human labor. How are workers’ lives impacted when AI is used to monitor performance, surveil output, or make intrusive management decisions? Will AI disrupt industries and business models? How can we make sure technology supports workers, rather than undermining them?

About ‘Understanding AI’: In the fall of 2025, The New York Public Library and Data & Society collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the Stavros Niarchos Foundation Library (SNFL) in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch. Revisit the series.
The concentration of power and lack of regulation in the technology industry directly shape how AI is designed and deployed, and whose interests it serves. That means decisions about these tools often reflect corporate priorities over public benefit. While AI is often held up as a tool to increase “efficiency,” it is essential to ask: efficiency for whom, and at what cost? What would it mean to create and oversee AI in the public’s best interest? How could these technologies be made more accountable to the people and communities they affect? And what is needed to create a future where AI works for everyone? (Part of the ‘Understanding AI’ series presented by The New York Public Library and Data & Society.)
Artificial intelligence technologies run on powerful computers that require vast amounts of energy, water, and critical minerals. As AI use grows, so does its environmental footprint. Yet there is little consensus on how to assess and address the technology’s toll on the climate before irreparable damage is done. How can we understand the impact AI data centers have on communities and the environment? How can we ensure that communities are able to use empirical data about those impacts to fight back? (Part of the ‘Understanding AI’ series presented by The New York Public Library and Data & Society.)
Artificial intelligence (AI) is reshaping many aspects of our daily lives: from the way people are hired for jobs, to how housing applications are reviewed, to how government services are delivered in healthcare, education, and beyond. But while organizations of all kinds have been introducing AI systems into their core functions, there is uncertainty about how they are working — including who is on the receiving end of their benefits and harms. What do we need to know about AI and automated decision-making tools today? How can we better understand the technology’s influence, and make informed decisions about where and how to use it? (Part of the ‘Understanding AI’ series presented by The New York Public Library and Data & Society.)
In this moment of AI ascendance and data center accelerationism, there are thousands of tech workers who are concerned about the realities of climate change and see the tech industry’s growing role in it — and who are actively working to create change, develop better tools, and organize for collective action. In her report "Turning the Tide: Climate Action in and Against Tech," Climate, Technology, and Justice Program Director Tamara Kneese examines the ways these workers have attempted to reform the tech industry from within while applying external forms of pressure through policymaking and activism. By engaging in workplace activism and forming broader coalitions with environmental justice organizations, climate-conscious tech workers who adhere to the organizer mindset use their insider knowledge to advocate for social change rather than technical tweaks. What does that look like in practice? Read Turning the Tide: Climate Action in and Against Tech. Learn more about the event and its speakers.
Democracy faces challenges worldwide, and artificial intelligence has become an increasing part of that. In their book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders methodically unpack the ways AI is changing every aspect of democracy, while making the case that we can harness the technology to support and strengthen these systems. Neither fear-mongering nor utopian, Rewiring Democracy aims to present a clear-eyed and optimistic path for putting democratic principles at the heart of AI development — highlighting how citizens, public servants, and elected officials can use AI to expand access to justice and inform, empower, and engage the public. On October 23, the authors discussed their book with Data & Society’s Director of Research Alice Marwick, walking us through their roadmap for understanding how AI is changing power and participation and what we can do to shape that change for the better.
Visit datasociety.net to learn more about this Book Talk’s speakers, access resources and referenced materials, and purchase copies of The AI Con and Empire of AI. Purchase copies of these books from our Bookshop:
The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna
Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao
Recorded on May 6, 2025, at The Greene Space in NYC, featuring Dr. Julián Posada and Aiha Nguyen. Resources and recordings are available here: https://datasociety.net/events/what-is-work-worth/
Books:
Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond (Tamara Kneese)
The Pacific Circuit: A Globalized Account of the Battle for the Soul of an American City (Alexis Madrigal)
Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside (Xiaowei Wang)
At the turn of the 20th century, the anti-immigration and eugenics movements used data about marginalized people to fuel racial divisions and political violence under the guise of streamlining society toward the future. Today, as the tech industry champions itself as a global leader of progress and innovation, we are falling into the same trap. On April 10, Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future (UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion of the 21st-century eugenics revival in big tech and how to resist it, in a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem. Predatory Data is the first book to draw this direct line between the datafication and prediction techniques of past eugenicists and today’s often violent and extractive “big data” regimes. Torres and Gebru have also extensively studied the second wave of eugenics, identifying a suite of tech-utopian ideologies they call the TESCREAL bundle. Purchase your own copy of Anita Say Chan’s book Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future: https://bookshop.org/a/14284/9780520402843. Learn more about the event at datasociety.net (https://datasociety.net/events/resisting-predatory-data/).
Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative? In this episode of Computer Says Maybe, host Alix Dunn speaks with Data & Society researchers Aiha Nguyen and Alexandra Mateescu, authors of the primer Generative AI and Labor: Power, Hype, and Value at Work. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially traditionally feminized work like caregiving.
Further reading:
Generative AI and Labor: Power, Hype, and Value at Work by Aiha Nguyen and Alexandra Mateescu
Blood in the Machine by Brian Merchant
Aiha Nguyen is the Program Director for the Labor Futures Initiative at Data & Society, where she guides research and engagement. She brings a practitioner’s perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’ and ‘Generative AI and Labor: Power, Hype and Value at Work’. Alexandra Mateescu is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024–2025 Fellow at the Siegel Family Endowment. Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Physical and digital infrastructures have raised tensions around the world, seeding land disputes, exacerbating climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. To explore these themes, we were joined by Nia Johnson, Ekene Ijeoma, and Lori Regattieri — academics, practitioners, and artists who are each, in their own way, responding to the ways digital infrastructures are transforming the built, natural, and social environments. In a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem, we broke down confrontations between technological infrastructures and local communities and discussed how to reshape narratives of process, power, change, and futurity. This public panel is part of Connective (t)Issues, a Data & Society workshop organized by the Trustworthy Infrastructures program in partnership with Duke Science & Society. Learn more about the workshop at datasociety.net. https://datasociety.net/announcements/2024/11/20/connective-tissues/
What exactly is generative AI (genAI) red-teaming? What strategies and standards should guide its implementation? And how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examined red-teaming’s place in the evolving landscape of genAI evaluation and governance. Our discussion drew on a new report by Data & Society (D&S) and the AI Risk and Vulnerability Alliance (ARVA), a nonprofit that aims to empower communities to recognize, diagnose, and manage harmful flaws in AI. The report, Red-Teaming in the Public Interest, investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with the evaluation and oversight of genAI systems. Red-teaming offers a flexible approach to uncovering a wide range of problems with genAI models. It also offers new opportunities for incorporating diverse communities into AI governance practices. Ultimately, we hope this report and discussion present a vision of red-teaming as an area of public interest sociotechnical experimentation. Download the report and learn more about the speakers and references at datasociety.net.
00:00 Opening
00:12 Welcome and Framing
04:48 Panel Introductions
09:34 Discussion Overview
10:23 Lama Ahmad on The Value of Human Red-Teaming
17:37 Tarleton Gillespie on Labor and Content Moderation Antecedents
25:03 Briana Vecchione on Participation & Accountability
28:25 Camille François on Global Policy and Open-source Infrastructure
35:09 Questions and Answers
56:39 Final Takeaways
Do you ever wonder how semiconductors (AKA chips) — the things that make up the fine tapestry of modern life — get made? And why does so much chip production bottleneck in Taiwan? Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures. Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance. Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies. Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
On November 14, in a conversation moderated by Data & Society Senior Researcher Ranjit Singh, Madhumita Murgia and Armin Samii discussed Murgia’s new book, Code Dependent: Living in the Shadow of AI. Together, they explored living with data by describing their journeys into understanding it, reporting on it, and resisting it. While Murgia’s journalistic journey began with tracing the flow of her personal data sold by data brokers, Samii used his expertise as a computer scientist to build UberCheats, an algorithm auditing tool that extracts GPS coordinates from UberEats receipts to calculate the difference between the actual miles a courier traveled and those Uber claimed they did. In Code Dependent, Samii’s story is the focus of a chapter on how data-driven systems come to play the role of the boss.Purchase a copy of Code Dependent: https://bookshop.org/a/14284/9781250867391Learn more at datasociety.net (https://datasociety.net) 
When Data & Society was founded ten years ago, it was rooted in the insight that data-centric technologies have broad and often unseen impacts on society — and that to better understand those impacts and realize technologies that reflect our highest values, we need interdisciplinary, empirical research. Today, the urgency of that vision is palpable: How societies choose to design and govern technology will determine our collective future. On September 26, we celebrated our first decade with our incredible network of alumni, friends, and supporters. Along with reflections from Data & Society Executive Director Janet Haven, Board President Charlton McIlwain, and Founder danah boyd, the program included a panel discussion and lightning talks.
00:00 Opening
00:10 Welcome | Charlton McIlwain, Board President
08:23 Creating a Field | danah boyd, Founder
19:37 Lightning Talk: Xiaowei R. Wang
27:02 Lightning Talk: Ranjit Singh
33:09 Lightning Talk: Zara Rahman
38:42 Lightning Talk: Michelle Miller
46:00 Acting on What We Know | Alondra Nelson, John Palfrey, Felicia Wong (moderator: Suresh Venkatasubramanian)
1:13:47 Creating Our Future | Janet Haven, Executive Director
1:25:42 Closing | Charlton McIlwain, Board President
In the United States, Black maternal health is in steep decline. Despite increased awareness and better data about the depths of racial health disparities, outcomes for Black birthing people remain poor. At the same time, a revolution in healthcare technologies is underway, and as they provide care at the frontlines of a crisis, birth workers are figuring out how to make digital health technologies work for them and their patients. In "Establishing Vigilant Care: Data Infrastructures and the Black Birthing Experience," Joan Mukogosi explores how digital health technologies can produce new forms of harm for Black birthing people — by exposing Black patients to carceral systems, creating information silos that impede interoperability, and failing to meet privacy standards. By paying close attention to how clinical contexts and their associated digital technologies impact how care is delivered, this research offers a glimpse into possibilities for improved cohesion between digital health technologies and birth work. Learn more about Data & Society at datasociety.net.