Science fiction writer Chen Qiufan (Stanley Chen), author of Waste Tide, discusses the feedback loop between science fiction and innovation, what happened when he went to live with shamans in China, how science fiction can also be a psychedelic, and why it’s significant that linear time arrived from the West and took over ideas of circular or recurring time between Chinese dynasties.
In this episode, the historian of science Lorraine Daston explains why science has long been allergic to emotion, which is seen as the enemy of truth; objective reason, instead, is science’s virtue. She explores moments where it’s very difficult for scientists not to get personally involved, like when you’re working on your pet hypothesis or theory, which might lead you to select data that confirms it, or when you’re confronted with anomalies in your dataset that threaten a beautiful and otherwise perfect theory. But Lorraine also reminds us that the desire for objectivity can itself be an emotion, as it was when Victorian scientists expressed their heroic masculine self-restraint. She also explains why we should only be using AI for the parts of our world that are actually predictable, and how debugging algorithms is no longer just the job of engineers: that task is being outsourced to us, the consumers, who are forced to flag downstream effects when things go wrong.
How should governments collect personal data? In this episode, we talk to Dr Kevin Guyan about the census, and the best ways of asking people to identify themselves. We discuss why surveys that you fill in by hand offer less restrictive options for self-identification than online forms, and how queer communities are not just identified but produced through the counting of a census. As Kevin reminds us, who does the counting affects who is counted. We also discuss why looking at histories of identifying as heterosexual and cisgender is also beneficial to queer communities. 
In this episode we speak to two brilliant professors here at Cambridge, Mónica Moreno Figueroa and Ella McPherson about a data project they launched at the University of Cambridge to track everyday racism in the university. We discuss using technology for social good without being obsessed with the technology itself and the importance of tracking how racism dehumanises people, confuses us about each other, and causes physical suffering, which students of colour have to deal with on top of the ordinary stress of their uni degree. 
In this episode, we talk to Louise Amoore, professor of political geography at Durham and expert in how machine learning algorithms are transforming the ethics and politics of contemporary society. Louise tells us how politics and society have shaped computer science practices. This means that when AI clusters data and creates features and attributes, and when its results are interpreted, it reflects a particular view of the world. In the same way, social views about what is normal and abnormal in the world are being expressed through computer science practices like deep learning.  She emphasises that computer science can solve ethical problems with help from the humanities, which means that if you work with literature, languages, linguistics, geography, politics and sociology, you can help create AIs that model the world differently. 
In this episode we talk to Sarah Franklin, a leading figure in feminist science studies and the sociology of reproduction. In this tour de force of IVF ethics and feminism through the ages, Sarah discusses ethical issues in reproductive technologies, how they compare to AI ethics, how feminism through the ages can help us, Shulamith Firestone’s techno-feminist revolution, and the violence of anti-trans movements across the world.
In this episode we chat to Michelle N. Huang, Assistant Professor of English and Asian American literature at Northwestern University. Chatting with Michelle is bittersweet, as we think collectively together about anti-Asian racism and how it intersects with histories and representations of technological development in the context of intensified violence against Asian American and Asian diaspora communities during the COVID-19 pandemic. We discuss why the humanities really matter when thinking about technology and the sciences; Michelle’s amazing film essay Inhuman Figures, which examines and subverts racist tropes and stereotypes about Asian Americans; why the central idea of looking at what’s been discarded and devalued, and finding different values and ways of doing things, defines the power of feminist science studies; and what it means to think about race on a molecular level.
In this episode we talk to Sareeta Amrute, Affiliate Associate Professor at the University of Washington, who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers’ protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies get out of being responsible for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are of power in tech companies lying primarily with product teams.
In this episode Sophie Lewis, author of Full Surrogacy Now and self-defined wayward Marxist, talks about defining good technology for the whole of the biosphere, why the purity of the human species has always been contaminated by our animal and technological origins, why nature is much, much stranger than we think, what that means for the lambs that are now being grown in artificial wombs, and why technologies like birth control and IVF can never liberate women within the power dynamics of our capitalist present.
In this episode we chat to Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she’s based in Hong Kong as a reporter for the Wall Street Journal covering China, tech and society; before this, she conducted a number of high-profile investigations for the MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they’re competing with Big Tech’s bottom line; why companies engaging with user attitudes isn’t enough, since we can’t really ever ‘opt out’ of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry.
In the race to produce the biggest language model yet, Google has now overtaken OpenAI’s GPT-3 and Microsoft’s T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of its Ethical AI branch, in the aftermath of a paper she co-wrote about why language models can be harmful if they’re too big. In this episode Meg sets the record straight. She explains what large language models are, what they do, and why they’re so important to Google. She tells us why it’s a problem that these models don’t understand the significance or meaning of the data they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us how some white men are gatekeeping knowledge about large language models, as well as the culture, politics, power and misogyny at Google that led to her firing.
In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive. 
In this episode we chat to Os Keyes, an Ada Lovelace fellow and adjunct professor at Seattle University, and a PhD student at the University of Washington in the department of Human Centered Design & Engineering. We discuss everything from avoiding universalism and silver bullets in AI ethics to how feminism underlies Os’s work on autism and AI and automatic gender recognition technologies.
In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by her ex-boss at Google, Dr Timnit Gebru. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google’s attempt to be neutral is nonsense, how the word ‘good’ in ‘good tech’ allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.
In this episode we chat to Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, where she leads the Social and Ethical Artificial Intelligence research group. We draw on Dignum’s experience as an engineer and legislator to discuss how any given technology might not be good or bad, but is never valueless; how the public can participate in conversations around AI; how to combat evasions of responsibility among creators and deployers of technology, when they say ‘sorry, the system says so’; and why throwing data at a problem might not make it better.
In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google research and an authority in computer vision, machine intelligence, and computational photography. In this wide ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn’t just be optimising AI for survival. We also chat about Blaise’s research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone’s sexuality from their face using facial recognition technology. 
In this episode, we talk to Dr Kate Chandler, Assistant Professor at Georgetown and a specialist on drone warfare. We recorded this interview the day that Russia invaded Ukraine, which reminded us of just how urgent a task it is to rethink the relationship between tech innovation and warfare. As Kate explains, drones are more than just tools; they’re also intimately tied to political, economic and social systems. In this episode we discuss the historical development of drones, a history which is both commercial and military, and then explore a better future for these kinds of technologies, one where AI innovation money comes from nonviolent sources and AI can be used for the prevention of violence.
In this episode, we chat to Meryl Alper, Associate Professor of Communication Studies at Northeastern University. We discuss histories of technological invention by disabled communities, the backlash against poor algorithmically transcribed captions or ‘craptions’, what it actually means for a place or a technology to be accessible to disabled communities with additional socio-economic constraints, and the kinds of augmentative and alternative communication (AAC) devices, like the one used by Stephen Hawking, that are being built by non-speaking people to represent different kinds of voices.
In this episode, we chat with Professor Wendy Chun, Simon Fraser University’s Canada 150 Research Chair in New Media. As an expert in both Systems Design Engineering and English Literature, her extraordinary analysis of contemporary digital media bridges the humanities and STEM to think through some of the most pressing technical and conceptual issues in technology today. Wendy discusses her most recent book, Discriminating Data, where she explains what is actually happening in AI systems that people claim can predict the future, why Facebook friendship has forced the idea that friendship is bidirectional, and how technology is being built on the principle of homophily, the idea that similarity breeds connection.
In this episode, we chat to Dr Leonie Tanczer, a Lecturer in International Security and Emerging Technologies at UCL and Principal Investigator on the Gender and IoT project. Leonie discusses why online safety and security are not the same when it comes to protection online; how to identify bad actors while protecting people’s privacy; how we can use ‘threat modelling’ to account for and envision harmful unintended uses of technologies; and how to tackle bad behaviour online that is not yet illegal.