Computer Says Maybe

Author: Alix Dunn


Description

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
31 Episodes
We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year from some of their favourite episodes.

We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: https://tally.so/r/3E860B. Or if you’d rather ramble into a microphone (just like we do…) use this link instead!

We pull out clips from the following episodes:
The Age of Noise w/ Eryk Salvaggio
The Happy Few: Open Source AI pt1
Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock
US Election Special w/ Spencer Overton
Chasing Away Sidewalk Labs w/ Bianca Wylie
The Human in the Loop
The Stories we Tell Ourselves About AI

Further reading: Learn more about what ex-TikTok moderator Mojez has been up to this year via this BBC TikTok
Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little, too late?

This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.

Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself.

Further reading:
US Said to Consider a Breakup of Google to Address Search Monopoly — NY Times
Google’s second antitrust suit brought by US begins, over online ads — Guardian
Big Tech on Trial — Matt Stoller
How the EU’s DMA is changing Big Tech — The Verge
UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard — Guardian

Sign up to the Computer Says Maybe newsletter to get invites to our events and receive other juicy resources straight to your inbox.

Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is author of Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society. She is a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).
What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th century Japan?

This week, Alix interviewed Eryk Salvaggio, who shares his ideas around how we are moving away from ‘the age of information’ and into an age of noise: we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.

So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?

Further reading:
Visit Eryk’s Website
Cybernetic Forests — Eryk’s newsletter on tech and culture
Our upcoming event: Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie
Our newsletter, which shares invites to events like the above, and other interesting bits

Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.
In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. It’s clear that everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate this risk.

Alix is joined again by Mark Surman, and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.

We also hear more from Merouane Debbah, who built the first LLM trained in Arabic, and highlights the importance of developing AI systems that don’t follow rigid western benchmarks.

Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.

More about our guests:

Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.

Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and pioneered in 2021 the development of NOOR, the first Arabic LLM.

Further reading & resources:
Polis — a real-time participation platform
Recursive Public by vTaiwan
Noor — the first LLM trained on the Arabic language
Falcon Foundation
Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan
In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.

Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And can you really call it ‘open’ if the barrier to accessing compute is so high? Is model alignment really the only thing we have to protect us?

In this two-parter, Alix is joined by Mozilla president Mark Surman to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Merouane Debbah, Audrey Tang, and Sayash Kapoor.

Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.

Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.

More about our guests:

Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the Harold F. Linder Professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy (OSTP). In this role, she spearheaded the development of the Blueprint for an AI Bill of Rights, and was the first African American and first woman of color to lead US science and technology policy.

Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.

Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and pioneered in 2021 the development of NOOR, the first Arabic LLM.

Further reading & resources:
Polis — a real-time participation platform
Recursive Public by vTaiwan
Noor — the first LLM trained on the Arabic language
Falcon Foundation
Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan
This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.

In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours cut aggressively by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team managed to take advantage of to eventually ban the use of this algorithm in Arkansas.

CW: This episode contains descriptions of people being denied care and left in undignified situations, at around 08:17-08:40 and 27:12-28:07.

Further reading & resources:
Techtonic Justice

Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.
Election Debrief

2024-11-08 | 40:34

This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics.

How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community?

This episode was recorded on Wednesday the 6th of November; we don’t have all the answers, but we know we want to move forward and have never been more motivated to make change happen.
For this pre-election special, Prathm spoke with law professor Spencer Overton about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…

Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with that — and how that filters down into the politicisation of AI tools, both as fuel for moral panics, and as a means to suppress voters of colour.

Further reading:
Artificial Intelligence for Electoral Management | International IDEA
Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton | SSRN
AI’s impact on elections is being overblown | MIT Technology Review
Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice

Spencer Overton is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy, and the implications of alternative voting systems for multiracial democracy.
For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on the environmental impacts of AI.

The conversation focusses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure, and what they think the actual costs — primarily of energy use and water — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow development of new data centres — and community members often don’t know what they’re signing up for before it’s too late.

Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.

Further reading:
Microsoft’s Hypocrisy on AI by Karen Hao
AI is Taking Water from the Desert by Karen Hao

Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage on AI research, technology ethics and the social impact of AI.
In our third episode about AI & the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.

This may seem like a very niche place to focus a green initiative, but it isn’t: concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest carbon emitters in the world.

In this episode Sherif explains how AI and machine learning can make the process of concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can be used to actually reduce carbon output, rather than just monitor it — or add to it!

Sherif Elsayed-Ali trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.

Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today Service Now Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.
This week we are continuing our AI & Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock, who explain how data centres are a gruesomely sharp double-edged sword.

Data centres contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. They are also used as a political springboard for global leaders, where the expansion of AI infrastructure is seen as synonymous with progress and economic growth.

Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.

Boxi Wu is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics & Safety at Google DeepMind, where they specialised in the ethics of LLMs and led the responsible release of frontier AI models, including the initially released Gemini models.

Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (criticalinfralab.net), a postdoctoral fellow with the Technology & Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law & Security Program at American University Washington College of Law.
Jenna is also a documentary photographer and producer with a background in community media and factual streaming.

Further reading:
Governing Computational Infrastructure for Strong and Just AI Economies, co-authored by Boxi Wu
Getting into Fights with Data Centres by Anne Pasek
This week we’re kicking off a series about AI & the environment. We’re starting with Holly Alpine, who recently left Microsoft after a decade of starting and growing an internal sustainability programme.

Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments it had set for itself. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — the company continued to work with fossil fuel companies even though doing so was at odds with its plans to achieve net zero.

Listen to learn about what it’s like approaching this kind of huge systemic challenge with good faith, and trying to make change happen from the inside.

Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.

Currently, Holly continues her environmental leadership as a Board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.

Further reading:
Microsoft’s Hypocrisy on AI
Our tech has a climate problem: How we solve it
In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”. Our guest Bianca Wylie was one of the people who stood up and said “okay, but… who asked for this?”

This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re riddled with austerity.

Bianca Wylie is a writer with a dual background in technology and public engagement. She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.

Further reading:
A Counterpublic Analysis of Sidewalk Toronto
Bianca Wylie on Medium
In Toronto, Google’s Attempt to Privatize Government Fails—For Now
What if we could have a public library for compute? But is… more compute really what we want right now?

This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as uh, ‘controversial’, to those in the innovation space.

But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden.

Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voice into local politics, and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.
Applications for our second cohort of Media Mastery for New AI Protagonists are now open! Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power — at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. Apply by Sunday, September 29th!

The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are… terrible. The kicker is that narratives are hard to detect and even harder to influence.

But how reliable are our narrators? And how can we use story as strategy? The good news is that experts are working to unravel the narratives around AI — all so that folks with the public interest in mind can change the game.

This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed the metaphors that power collective understanding about AI.

In this ep we ask:
What are the stories we tell ourselves about AI? And why do we let industry pick them?
How do these narratives change what is politically possible?
What can public interest organisations and advocates do to change the narrative game?

Hanna Barakat is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.

Jonathan Tanner founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world.
In that time he has worked across more than a dozen countries, running diverse teams whilst writing keynote speeches, securing front-page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.

Daniel Stone is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.
There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.

But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?

To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored Bridging The Divide: Translating Research on Digital Media into Policy and Practice — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, which commissioned the report.

Further reading:
Bridging The Divide by Issie Lapowsky
Knight Foundation

Issie Lapowsky is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.

John Sands is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.
Last week, Telegram CEO Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with being complicit in illegal activities happening on the platform, including the spread of CSAM.

Durov’s lawyer has called these charges “absurd”, arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?

This week Alix is joined by Mallory Knodel to talk us through what happened:
What are the implications of France making this move, and why now?
How has Telegram positioned itself as the most safe and secure messaging platform when it doesn’t even use the same encryption standards as WhatsApp?
How has Telegram managed to get away with being uncooperative with various governments — or has it?

Mallory Knodel is The Center for Democracy & Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force, and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.
That’s the END of Exhibit X, folks; if you’ve been following along, congratulations on choosing to become smarter. If not, that’s okay — consider this episode a delicious teaser for the series.

In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this.

Thank you for listening; we hope to do more deep explorations like this in the future!
What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions…

In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators.

The questions we explore this week:
When does the expertise of social scientists become ‘good’ enough to stand up in court — and who gets to decide that?
How can the traditionally glacial system of courts and legislators keep pace with the shifting whims of technology companies?
Litigators want social scientists to get on the stand and say ‘X caused Y’ without a shadow of a doubt — but what social scientist would do that?

Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.
Exhibit X: The Courts

2024-08-16 | 44:19

Imagine: something horrible has happened, and the only evidence you have is a video posted online. Can you submit it into evidence in court? Well, it’s complicated.

In part 4 of our Exhibit X series, Alix sat down with Dr. Alexa Koenig to discuss her work with the International Criminal Court. Dr. Koenig and many colleagues are supporting the court to grapple with online evidence, and tackling the challenges that courts face as they adapt to our digital world.

We answer questions like:
How does the ICC work with social media companies to acquire evidence?
How have generative AI and synthetic media impacted evidence in courts?
When can we expect to see social scientists as expert witnesses in court?

Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the Human Rights Center Investigations Lab, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.