The Good Robot

Author: Dr Kerry McInerney and Dr Eleanor Drage


Description

Join Dr Eleanor Drage and Dr Kerry McInerney as they ask the experts: what is good technology? Is ‘good’ technology even possible? And how can feminism help us work towards it? Each week, they invite scholars, industry practitioners, activists, and more to provide their unique perspective on what feminism can bring to the tech industry and the way that we think about technology. With each conversation, The Good Robot asks how feminism can provide new perspectives on technology’s biggest problems.
47 Episodes
In this episode we speak to Abeba Birhane, senior research fellow at Mozilla, about how cognition extends beyond the brain, why we need to turn questions like ‘why aren't there enough black women in computing’ on their head and actually transform computing cultures, and why human behaviour is a complex adaptive system that can’t always be modelled computationally.
In this episode we talk to Arjun Subramonian, a Computer Science PhD student at UCLA conducting machine learning research and a member of the grassroots organisation Queer in AI. We discuss why they joined Queer in AI, how Queer in AI is helping build artificial intelligence directed towards better, more inclusive, and queer futures, why ‘bias’ cannot be seen as a purely technical problem, and why Queer in AI rejected Google sponsorship.
In this episode we chat to Su Lin Blodgett, a researcher at Microsoft Research in Montreal, about whether you can use AI to measure discrimination, why AI can never be de-biased, and how AI shows us that categories like gender and race are not as clear-cut as we think they are.
Ever worried that AI will wipe out humanity? Ever dreamed of merging with AI? Well, these are the primary concerns of transhumanism and existential risk, fields you may not have heard of but whose key followers include Elon Musk and Nick Bostrom, author of Superintelligence. But Joshua Schuster and Derek Woods have pointed out that there are serious problems with transhumanism’s dreams and fears, including its privileging of human intelligence above all other species, its assumption that genocides are less important than mass extinction events, and its inability to think historically when speculating about the future. They argue that if we really want to make the world and its technologies less risky, we should instead encourage cooperation and participation in social and ecological issues.
Science fiction writer Chen Qiufan (Stanley Chen), author of Waste Tide, discusses the feedback loop between science fiction and innovation, what happened when he went to live with shamans in China, how science fiction can also be a psychedelic, and why it’s significant that linear time arrived from the West and took over from ideas of circular or recurring time across Chinese dynasties.
In this episode, the historian of science Lorraine Daston explains why science has long been allergic to emotion, which is seen as the enemy of truth; objective reason is instead science’s virtue. She explores moments where it’s very difficult for scientists not to get personally involved, like when you’re working on your pet hypothesis or theory, which might lead you to select data that confirms it, or when you’re confronted with anomalies in your dataset that threaten a beautiful and otherwise perfect theory. But Lorraine also reminds us that the desire for objectivity can itself be an emotion, as it was when Victorian scientists expressed their heroic masculine self-restraint. She also explains why we should only be using AI for the parts of our world that are actually predictable, and how it’s no longer just engineers who debug algorithms: that task is being outsourced to us, the consumers, who are forced to flag downstream effects when things go wrong.
How should governments collect personal data? In this episode, we talk to Dr Kevin Guyan about the census and the best ways of asking people to identify themselves. We discuss why surveys that you fill in by hand offer less restrictive options for self-identification than online forms, and how queer communities are not just identified but produced through the counting of a census. As Kevin reminds us, who does the counting affects who is counted. We also discuss why looking at histories of identifying as heterosexual and cisgender is beneficial to queer communities as well.
In this episode we speak to two brilliant Cambridge professors, Mónica Moreno Figueroa and Ella McPherson, about a data project they launched at the University of Cambridge to track everyday racism in the university. We discuss using technology for social good without being obsessed with the technology itself, and the importance of tracking how racism dehumanises people, confuses us about each other, and causes physical suffering, which students of colour have to deal with on top of the ordinary stress of their uni degree.
In this episode, we talk to Louise Amoore, professor of political geography at Durham and an expert in how machine learning algorithms are transforming the ethics and politics of contemporary society. Louise tells us how politics and society have shaped computer science practices: when AI clusters data and creates features and attributes, and when its results are interpreted, it reflects a particular view of the world. In the same way, social views about what is normal and abnormal in the world are being expressed through computer science practices like deep learning. She emphasises that computer science can solve ethical problems with help from the humanities, which means that if you work with literature, languages, linguistics, geography, politics or sociology, you can help create AIs that model the world differently.
In this episode we talk to Sarah Franklin, a leading figure in feminist science studies and the sociology of reproduction. In this tour de force on IVF ethics, Sarah discusses ethical issues in reproductive technologies, how they compare to those in AI ethics, how feminism through the ages can help us, Shulamith Firestone’s techno-feminist revolution, and the violence of anti-trans movements across the world.
In this episode we chat to Michelle N. Huang, Assistant Professor of English and Asian American literature at Northwestern University. Chatting with Michelle is bittersweet, as we think together about anti-Asian racism and how it intersects with histories and representations of technological development, in the context of intensified violence against Asian American and Asian diaspora communities during the COVID-19 pandemic. We discuss why the humanities really matter when thinking about technology and the sciences; Michelle’s amazing film essay Inhuman Figures, which examines and subverts racist tropes and stereotypes about Asian Americans; how the power of feminist science studies lies in looking at what's been discarded and devalued and finding different values and ways of doing things; and what it means to think about race on a molecular level.
In this episode we talk to Sareeta Amrute, Affiliate Associate Professor at the University of Washington, who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers’ protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies avoid responsibility for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are when power in tech companies lies primarily with product teams.
In this episode Sophie Lewis, author of Full Surrogacy Now and self-defined wayward Marxist, talks about defining good technology for the whole of the biosphere, why the purity of the human species has always been contaminated by our animal and technological origins, why nature is much, much stranger than we think, what that means for the lambs that are now being grown in artificial wombs, and why technologies like birth control and IVF can never liberate women within the power dynamics of our capitalist present.
In this episode we chat to Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she’s based in Hong Kong as a reporter for the Wall Street Journal covering China, tech and society; before this, she conducted a number of high-profile investigations for the MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they’re competing with Big Tech’s bottom line; why companies engaging with user attitudes isn’t enough, since we can’t really ever ‘opt out’ of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry.
In the race to produce the biggest language model yet, Google has now overtaken OpenAI’s GPT-3 and Microsoft’s T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of its Ethical AI branch, in the aftermath of a paper she co-wrote about why language models can be harmful if they’re too big. In this episode Meg sets the record straight. She explains what large language models are, what they do, and why they’re so important to Google. She tells us why it's a problem that these models don’t understand the significance or meaning of the data they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us how some white men are gatekeeping knowledge about large language models, and about the culture, politics, power and misogyny at Google that led to her firing.
In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive. 
In this episode we chat to Os Keyes, an Ada Lovelace fellow and adjunct professor at Seattle University, and a PhD student at the University of Washington in the department of Human Centered Design & Engineering. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os’s work on autism and AI and automatic gender recognition technologies.
In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by her ex-boss at Google, Dr Timnit Gebru. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google’s attempt to be neutral is nonsense, how the word ‘good’ in ‘good tech’ allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.
In this episode we chat to Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, where she leads the Social and Ethical Artificial Intelligence research group. We draw on Dignum’s experience as an engineer and legislator to discuss how any given technology might not be good or bad, but is never value-free; how the public can participate in conversations around AI; how to combat evasions of responsibility among creators and deployers of technology, who say ‘sorry, the system says so’; and why throwing data at a problem might not make it better.
In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google Research and an authority in computer vision, machine intelligence, and computational photography. In this wide-ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn’t just be optimising AI for survival. We also chat about Blaise’s research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone’s sexuality from their face using facial recognition technology.