Episodes
In this episode we talk to Sarah Franklin, a leading figure in feminist science studies and the sociology of reproduction. In this tour de force on IVF ethics and feminism, Sarah discusses ethical issues in reproductive technologies, how they compare to AI ethics, how feminism through the ages can help us, Shulamith Firestone's techno-feminist revolution, and the violence of anti-trans movements across the world.
In this episode we chat to Michelle N. Huang, Assistant Professor of English and Asian American literature at Northwestern University. Chatting with Michelle is bittersweet, as we think collectively about anti-Asian racism and how it intersects with histories and representations of technological development, in the context of intensified violence against Asian American and Asian diaspora communities during the COVID-19 pandemic. We discuss why the humanities really matter when thinking about technology and the sciences; Michelle's amazing film essay Inhuman Figures, which examines and subverts racist tropes and stereotypes about Asian Americans; why the central idea of looking at what's been discarded and devalued, and finding different values and ways of doing things, defines the power of feminist science studies; and what it means to think about race on a molecular level.
In this episode we talk to Sareeta Amrute, Affiliate Associate Professor at the University of Washington, who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers' protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies get out of being responsible for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are of power in tech companies lying primarily with product teams.
In this episode Sophie Lewis, author of Full Surrogacy Now and self-described wayward Marxist, talks about defining good technology for the whole of the biosphere, why the purity of the human species has always been contaminated by our animal and technological origins, why nature is much, much stranger than we think, what that means for the lambs now being grown in artificial wombs, and why technologies like birth control and IVF can never liberate women within the power dynamics of our capitalist present.
In this episode we chat to Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she's based in Hong Kong as a reporter for the Wall Street Journal covering China, tech and society; before this, she conducted a number of high-profile investigations for the MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they're competing with Big Tech's bottom line; why companies merely consulting user attitudes isn't enough, since we can't really ever 'opt out' of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry.
In the race to produce the biggest language model yet, Google has now overtaken OpenAI's GPT-3 and Microsoft's T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of its Ethical AI team, in the aftermath of a paper she co-wrote about why language models can be harmful if they're too big. In this episode Meg sets the record straight. She explains what large language models are and what they do, and why they're so important to Google. She tells us why it's a problem that these models don't understand the significance or meaning of the data they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us how some white men are gatekeeping knowledge about large language models, as well as the culture, politics, power and misogyny at Google that led to her firing.
In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive. 
In this episode we chat to Os Keyes, an Ada Lovelace fellow, adjunct professor at Seattle University, and PhD student in the Department of Human Centered Design & Engineering at the University of Washington. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os's work on autism and AI, and on automatic gender recognition technologies.
In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by her former boss at Google, Dr Timnit Gebru. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google's attempt to be neutral is nonsense, how the word 'good' in 'good tech' allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.
In this episode we chat to Virginia Dignum, Professor of Responsible Artificial Intelligence at the University of Umeå where she leads the Social and Ethical Artificial Intelligence research group. We draw on Dignum’s experience as an engineer and legislator to discuss how any given technology might not be good or bad, but is never valueless; how the public can participate in conversations around AI; how to combat evasions of responsibility among creators and deployers of technology, when they say ‘sorry, the system says so’; and why throwing data at a problem might not make it better. 
In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google Research and an authority on computer vision, machine intelligence, and computational photography. In this wide-ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn't just be optimising AI for survival. We also chat about Blaise's research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone's sexuality from their face using facial recognition technology.
In this episode, we talk to Dr Kate Chandler, Assistant Professor at Georgetown and a specialist on drone warfare. We recorded this interview the day that Russia invaded Ukraine, which reminded us of just how urgent a task it is to rethink the relationship between tech innovation and warfare. As Kate explains, drones are more than just tools; they're also intimately tied to political, economic and social systems. In this episode we discuss the historical development of drones - a history which is both commercial and military - and then explore a better future for these kinds of technologies, one where AI innovation money comes from nonviolent sources and AI can be used for the prevention of violence.
In this episode, we chat to Meryl Alper, Associate Professor of Communication Studies at Northeastern University. We discuss histories of technological invention by disabled communities; the backlash against poor algorithmically transcribed captions or 'craptions'; what it actually means for a place or a technology to be accessible to disabled communities with additional socio-economic constraints; and the kinds of augmentative and alternative communication (AAC) devices, like the one used by Stephen Hawking, that are being built by non-speaking people to represent different kinds of voices.
In this episode, we chat with Professor Wendy Chun, Simon Fraser University's Canada 150 Research Chair in New Media. An expert in both Systems Design Engineering and English Literature, she bridges the humanities and STEM in her extraordinary analysis of contemporary digital media, thinking through some of the most pressing technical and conceptual issues in technology today. Wendy discusses her most recent book, Discriminating Data, in which she explains what is actually happening in AI systems that people claim can predict the future, why Facebook friendship has imposed the idea that friendship is bidirectional, and how technology is being built on the principle of homophily, the idea that similarity breeds connection.
In this episode, we chat to Dr Leonie Tanczer, a Lecturer in International Security and Emerging Technologies at UCL and Principal Investigator on the Gender and IoT project. Leonie discusses why safety and security are not the same thing when it comes to protection online; how to identify bad actors while protecting people's privacy; how we can use 'threat modelling' to account for and envision harmful unintended uses of technologies; and how to tackle bad behaviour online that is not yet illegal.
In this episode we chat to Professor Jason Edward Lewis, the University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University in Montreal. Jason is Cherokee, Hawaiian and Samoan, and an expert in Indigenous design in AI. He's the founder of Obx Labs for Experimental Media and the co-director of a number of research groups, such as Aboriginal Territories in Cyberspace, Skins Workshops on Aboriginal Storytelling and Video Game Design, and the Initiative for Indigenous Futures. In this episode we discuss how Indigenous communities think about what it means for humans and AI to co-exist, why we need to rethink what it means to be an intelligent machine, and why mainstream Western modes of building technology might actually land us with Skynet.
In this episode, we chat to Neema Iyer, a technologist, artist and founder of Pollicy, a civic technology organisation based in Kampala, Uganda. We discuss feminism and building AI for the world's fastest-growing population, what feminism means in African contexts, and the challenges of working with different governments and regional bodies like the African Union.
In this episode, we talk to Frances Negrón-Muntaner, an award-winning filmmaker, writer, and scholar, and Professor of English and Comparative Literature at Columbia University, New York City. We discuss her Valor y Cambio (Value and Change) project, which brought a disused ATM to the streets of Puerto Rico filled with special banknotes. On the banknotes were the faces of Black educators, abolitionists and visionaries of a Caribbean Confederacy - people who are meaningful and inspirational to Puerto Ricans today. The machine asked the person retrieving bills what they valued, and in doing so sparked what Frances calls decolonial joy. Together, we explore the unintended repurposing of technologies for decolonial and anti-capitalist purposes.
In this episode, we chat to Maya Indira Ganesh, the course lead for the University of Cambridge Master of Studies programme in AI Ethics and Society. She transitioned to academia after working as a feminist researcher with international NGOs and cultural organisations on gender justice, technology, and freedom of expression. We discuss the human labour that is obscured when we say a machine is autonomous, the YouTube phenomenon of 'unboxing' Apple products, and why autonomous vehicle (AV) ethics isn't just about trolley problems.
In this episode, we chat to Jess Smith, a PhD student in Information Science at the University of Colorado and co-host of the Radical AI podcast, who specialises in the intersections of artificial intelligence, machine learning and ethics. We discuss the tensions between Silicon Valley's 'move fast and break things' mantra and the slower pace of ethics work. She also tells us how we can be mindful users of technology and how we can develop computer science programs that foster a new generation of ethically-minded technologists.