Philosophical Disquisitions

Author: John Danaher

Interviews with experts about the philosophy of the future.

102 Episodes
Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert (PhD) about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humor and amusement. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is a value?
- Descriptive vs normative theories of value
- Psychological theories of personal values
- The nature of emotions
- The connection between emotions and values
- Emotional contagion
- Emotional climates vs emotional atmospheres
- The role of social media in causing emotional contagion
- Is the coronavirus promoting a negative emotional climate?
- Will this affect our political preferences and policies?
- General lessons for technology and value change

Relevant Links

- Steffen's Homepage
- The Designing for Changing Values Project @ TU Delft
- Corona and Value Change by Steffen
- 'Unleashing the Constructive Potential of Emotions' by Steffen and Sabine Roeser
- An Overview of the Schwartz Theory of Basic Personal Values

Subscribe to the newsletter
83 - Privacy is Power
Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College, Oxford. She works on privacy, technology, moral and political philosophy and public policy. She has also been a guest on this podcast on two previous occasions. Today, we'll be talking about her recently published book Privacy is Power. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed in this show include:

- The most surprising examples of digital surveillance
- The nature of privacy
- Is privacy dead?
- Privacy as an intrinsic and instrumental value
- The relationship between privacy and autonomy
- Does surveillance help with security and health?
- The problem with mass surveillance
- The phenomenon of toxic data
- How surveillance undermines democracy and freedom
- Are we willing to trade privacy for convenient services?
- And much more

Relevant Links

- Carissa's Webpage
- Privacy is Power by Carissa
- Summary of Privacy is Power in Aeon
- Review of Privacy is Power in The Guardian
- Carissa's Twitter feed (a treasure trove of links about privacy and surveillance)
- Views on Privacy: A Survey by Sian Brooke and Carissa Véliz
- Data, Privacy and the Individual by Carissa Véliz
Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at Future of Privacy Forum. She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert's Guide to AI, and co-authored the paper, "Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models." Prior to working at FPF, Brenda served in the U.S. Air Force. You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is facial recognition anyway? Are there multiple forms that are confused and conflated?
- What's the history of facial recognition? What has changed recently?
- How is the technology used?
- What are the benefits of facial recognition?
- What's bad about it? What are the privacy and other risks?
- Is there something unique about the face that should make us more worried about facial biometrics when compared to other forms?
- What can we do to address the risks? Should we regulate or ban?

Relevant Links

- Brenda's Homepage
- Brenda on Twitter
- 'The Privacy Expert's Guide to AI and Machine Learning' by Brenda (at FPF)
- Brenda's US Congress Testimony on Facial Recognition
- 'Facial recognition and the future of privacy: I always feel like … somebody's watching me' by Brenda
- 'The Case for Banning Law Enforcement From Using Facial Recognition Technology' by Evan Selinger and Woodrow Hartzog
In today's episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of 'too big to fail' tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute's Digital Ethics Lab. Her research examines the legal and ethical challenges arising from emerging, data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the Euro area. You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- The digitisation, datafication and disintermediation of consumer credit markets
- Algorithmic credit scoring
- The problems of risk and bias in credit scoring
- How law and regulation can address these problems
- Tech platforms that are too big to fail
- What should we do if Facebook fails?
- The forms of AI crime
- How to address the problem of AI crime

Relevant Links

- Nikita's homepage
- Nikita on Twitter
- 'The Norms of Algorithmic Credit Scoring' by Nikita
- 'What if Facebook Goes Down? Ethical and Legal Considerations for the Demise of Big Tech Platforms' by Carl Ohman and Nikita
- 'Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions' by Thomas King, Nikita, Mariarosario Taddeo and Luciano Floridi

Published September 18, 2020
Lots of algorithmic tools are now used to support decision-making in the criminal justice system. Many of them are criticised for being biased. What should be done about this? In this episode, I talk to Chelsea Barabas about this very question. Chelsea is a PhD candidate at MIT, where she examines the spread of algorithmic decision-making tools in the US criminal legal system. She works with interdisciplinary researchers, government officials and community organizers to unpack and transform mainstream narratives around criminal justice reform and data-driven decision making. She is currently a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly, she was a research scientist for the AI Ethics and Governance Initiative at the MIT Media Lab. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered in this show include:

- The history of algorithmic decision-making in criminal justice
- Modern AI tools in criminal justice
- The problem of biased decision-making
- Examples of bias in practice
- The FAT (Fairness, Accountability and Transparency) approach to bias
- Can we de-bias algorithms using formal, technical rules?
- Can we de-bias algorithms through proper review and oversight?
- Should we be more critical of the data used to build these systems?
- Problems with pre-trial risk assessment measures
- The abolitionist perspective on criminal justice reform

Relevant Links

- Chelsea's homepage
- Chelsea on Twitter
- "Beyond Bias: Reimagining the Terms 'Ethical AI' in Criminal Law" by Chelsea
- Video presentation of this paper
- "Studying up: reorienting the study of algorithmic fairness around issues of power" by Chelsea and others
- Kleinberg et al on the impossibility of fairness
- Kleinberg et al on using algorithms to detect discrimination
- The Condemnation of Blackness by Khalil Gibran Muhammad
What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine's actions? That's the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History & Ethics of Medicine at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics as well as, more recently, papers on moral responsibility and autonomous systems. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is responsibility? Why is it so complex?
- The three faces of responsibility: attribution, accountability and answerability
- Why are people so worried about responsibility gaps for autonomous systems?
- What are some of the alleged solutions to the "gap" problem?
- Who are the techno-pessimists and who are the techno-optimists?
- Why does Daniel think that there is no techno-responsibility gap?
- Is our application of responsibility concepts to machines overly metaphorical?

Relevant Links

- Daniel's ResearchGate profile
- Daniel's papers on Philpapers
- "There is no Techno-Responsibility Gap" by Daniel
- "Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability" by Mark Coeckelbergh
- Technologically blurred accountability? by Kohler, Roughley and Sauer
Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today's guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend of the show, having appeared twice before. In this episode, we are talking about his recent, great book Humans and Robots: Ethics, Agency and Anthropomorphism. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered in this episode include:

- Why did Sven play football with a robot? Who won?
- What is a robot?
- What is an agent? Why does it matter if robots are agents?
- Why does Sven worry about a normative mismatch between humans and robots?
- What should we do about this normative mismatch?
- Why are people worried about responsibility gaps arising as a result of the widespread deployment of robots?
- How should we think about human-robot collaborations?
- Why should human drivers be more like self-driving cars?
- Can we be friends with a robot?
- Why does Sven reject my theory of ethical behaviourism?
- Should we be pessimistic about the future of roboethics?

Relevant Links

- Sven's Homepage
- Sven on Philpapers
- Humans and Robots: Ethics, Agency and Anthropomorphism
- 'Can a robot be a good colleague?' by Sven and Jilles Smids
- 'Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci' by Sven
- 'Automated Cars Meet Human Drivers: Responsible Human-Robot Coordination and The Ethics of Mixed Traffic' by Sven and Jilles Smids
If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

- Why do people worry about the opacity of AI?
- What's the difference between explainability and transparency?
- What's the moral value or function of explainable AI?
- Must we distinguish between the ethical value of an explanation and its epistemic value?
- Why is it so technically difficult to make AI explainable?
- Will we ever have a technical solution to the explanation problem?
- Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
- When should we insist on explanations and when are they unnecessary?
- Should we insist on using boring AI?

Relevant Links

- Scott's webpage
- Scott's paper "A Misdirected Principle with a Catch: Explicability for AI"
- Scott's paper "The Value of Transparency: Bulk Data and Authorisation"
- "The Right to an Explanation Explained" by Margot Kaminski
- Episode 36 - Wachter on Algorithms and Explanations
How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz about these questions. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics at Oxford and the Wellcome Centre for Ethics and Humanities, also at Oxford. She is the editor of the Oxford Handbook of Digital Ethics as well as two forthcoming solo-authored books, Privacy is Power (Transworld) and The Ethics of Privacy (Oxford University Press). You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- The value of privacy
- Do we balance privacy against other rights/values?
- The significance of consent in debates about privacy
- Digital contact tracing and digital quarantines
- The ethics of digital contact tracing
- Is the value of digital contact tracing being oversold?
- The relationship between testing and contact tracing
- COVID 19 as an important moment in the fight for privacy
- The data economy in light of COVID 19
- The ethics of immunity passports
- The importance of focusing on the right things in responding to COVID 19

Relevant Links

- Carissa's Webpage
- Carissa's Twitter feed (a treasure trove of links about privacy and surveillance)
- Views on Privacy: A Survey by Sian Brooke and Carissa Véliz
- Data, Privacy and the Individual by Carissa Véliz
- Science paper on the value of digital contact tracing
- The Apple-Google proposal for digital contact tracing
- ''The new normal': China's excessive coronavirus public monitoring could be here to stay'
- 'In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags'
- 'To curb covid-19, China is using its high-tech surveillance tools'
- 'Digital surveillance to fight COVID-19 can only be justified if it respects human rights'
- 'Why 'Mandatory Privacy-Preserving Digital Contact Tracing' is the Ethical Measure against COVID-19' by Cansu Canca
- 'The COVID-19 Tracking App Won't Work'
- 'What are 'immunity passports' and could they help us end the coronavirus lockdown?'
- 'The case for ending the Covid-19 pandemic with mass testing'
There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? Can they be morally blamed for what they have done? These are the questions I discuss with my guest on today's show: David Shaw. David is a Senior Researcher at the Institute for Biomedical Ethics at the University of Basel and an Assistant Professor at the Care and Public Health Research Institute, Maastricht University. We discuss some recent writing David has been doing on the Journal of Medical Ethics blog about the coronavirus crisis. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- Why is it important to keep death rates and other data in context?
- Is media reporting of deaths misleading?
- Why do the media discuss 'soaring' death rates and 'grim' statistics?
- Are we ignoring the unintended health consequences of COVID 19?
- Should we take the economic costs more seriously given the link between poverty/inequality and health outcomes?
- Did the UK government mishandle the response to the crisis? Are they blameworthy for what they did?
- Is it fair to criticise governments for their handling of the crisis?
- Is it okay for governments to experiment on their populations in response to the crisis?

Relevant Links

- David's Profile Page at the University of Basel
- 'The Vital Contexts of Coronavirus' by David
- 'The Slow Dragon and the Dim Sloth: What can the world learn from coronavirus responses in Italy and the UK?' by Marcello Ienca and David Shaw
- 'Don't let the ethics of despair infect the ICU' by David Shaw, Dan Harvey and Dale Gardiner
- 'Deaths in New York City Are More Than Double the Usual Total' in the NYT (getting the context right?!)
- Preliminary results from German antibody tests in one town: 14% of the population infected
- Do Death Rates Go Down in a Recession?
- The Sun's Good Friday headline
I'm still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the 'classical' ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this I will be talking to Katherine Furman. Katherine is a lecturer in philosophy at the University of Liverpool. Her research interests are at the intersection of Philosophy and Health Policy. She is interested in how laypeople understand issues of science, objectivity in the sciences and social sciences, and public trust in science. Her previous work has focused on the HIV/AIDS pandemic and the Ebola outbreak in West Africa in 2014-2015. We will be talking about the lessons we can draw from this work for how we think about the COVID-19 pandemic. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- The history of explaining the causes of disease
- Mono-causal theories of disease
- Multi-causal theories of disease
- Lessons learned from the HIV/AIDS pandemic
- The practical importance of understanding the causes of disease in the current pandemic
- Is there an ethics of belief?
- Do we have epistemic duties in relation to COVID-19?
- Is it reasonable to believe 'rumours' about the disease?
- Lessons learned from the 2014-2015 Ebola outbreak
- The importance of values in the public understanding of science

Relevant Links

- Katherine's Homepage
- Katherine @ University of Liverpool
- "Mono-Causal and Multi-Causal Theories of Disease: How to Think Virally and Socially about the Aetiology of AIDS" by Katherine
- "Moral Responsibility, Culpable Ignorance, and Suppressed Disagreement" by Katherine
- "The international response to the Ebola outbreak has excluded Africans and their interests" by Katherine
- Imperial College paper on COVID-19 scenarios
- Oxford paper on possible exposure levels to novel Coronavirus
We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars's research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care procedures. We discuss the ethics of healthcare prioritisation in the midst of the COVID 19 pandemic, focusing specifically on some principles Lars, along with others, developed for the Swedish government. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

- The prioritisation challenges we currently face
- Ethical principles for prioritisation in healthcare
- Problems with applying ethical theories in practice
- Swedish legal principles on healthcare prioritisation
- Principles for access to ICU during the COVID 19 pandemic
- Do we prioritise younger people?
- Chronological age versus biological age
- Could we use a lottery principle?
- Should we prioritise healthcare workers?
- Impact of COVID 19 prioritisation on other healthcare priorities

Relevant Links

- Lars's Webpage
- Swedish Legal Principles
- Background to the Swedish Law
- New priority principles in Sweden (English translation by Christian Munthe)
- "Principles for allocation of scarce medical interventions" by Persad, Wertheimer and Emanuel (good overview of the ethical debate)
- The grim ethical dilemma of rationing medical care, explained - Vox.com
Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and the philosophy of death and dying. We discuss the nature of grief, the ethics of grief and how grief might change in the midst of a pandemic. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is grief?
- What are the different forms of grief?
- Is grief always about death?
- Is grief a good thing?
- Is grief a bad thing?
- Does the cause of death make a difference to grief?
- How does the COVID 19 pandemic disrupt grief?
- What are the politics of grief?
- Will future societies memorialise the deaths of people in the pandemic?

Relevant Links

- Michael's Homepage
- Regret, Resilience and the Nature of Grief by Michael
- Finding the Good in Grief by Michael
- Grief's Rationality, Backward and Forward by Michael
- Coping with Grief: A Series of Philosophical Disquisitions by me
- Grieving alone — coronavirus upends funeral rites (Financial Times)
- Coronavirus: How Covid-19 is denying dignity to the dead in Italy (BBC)
- Why the 1918 Spanish flu defied both memory and imagination
- 100 years later, why don't we commemorate the victims and heroes of 'Spanish flu'?
As nearly half the world's population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre for Practical Ethics in Oxford, about this very issue. We talk about the moral principles that should apply to our evaluation of infectious disease control and some of the typical objections to it. Throughout we focus specifically on some of the different interventions that are being applied to tackle COVID-19. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

- Methods of infectious disease control
- Consequentialist justifications for disease control
- Non-consequentialist justifications
- The proportionality of disease control measures
- Could these measures stigmatise certain populations?
- Could they exacerbate inequality or fuel discrimination?
- Must we err on the side of precaution in the midst of a novel pandemic?
- Is ethical evaluation a luxury at a time like this?

Relevant Links

- Jonathan Pugh's Homepage
- Tom Douglas's Homepage
- 'Pandemic Ethics: Infectious Pathogen Control Measures and Moral Philosophy' by Jonathan and Tom
- 'Justifications for Non-Consensual Medical Intervention: From Infectious Disease Control to Criminal Rehabilitation' by Jonathan and Tom
- 'Infection Control for Third-Party Benefit: Lessons from Criminal Justice' by Tom
- How Different Asian Countries Responded to COVID 19
Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to people on Twitter and Jeff Sebo kindly volunteered himself to join me for a conversation. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff's research focuses on bioethics, animal ethics, and environmental ethics. This episode was put together in a hurry but I think it covers a lot of important ground. I hope you find it informative and useful. Be safe! You can download the episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Spotify, Stitcher and many other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

- Individual duties and responsibilities to stop the spread
- Medical ethics and medical triage
- Balancing short-term versus long-term interests
- Health versus well-being and other goods
- State responsibilities and the social safety net
- The duties of politicians and public officials
- The risk of authoritarianism and the erosion of democratic values
- Global justice and racism/xenophobia
- Our duties to frontline workers and vulnerable members of society
- Animal ethics and the risks of industrial agriculture
- The ethical upside of the pandemic: will this lead to more solidarity and sustainability?
- Pandemics and global catastrophic risks
- What should we be doing right now?

Some Relevant Links

- Jeff's webpage
- Patient 31 in South Korea
- The Duty to Vaccinate and collective action problems
- Italian medical ethics recommendations
- COVID 19 and the Impossibility of Morality
- The problem with the UK government's (former) 'herd immunity' approach
- A history of the Spanish Flu
In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author or lead editor of nine books including, "RAFT 2035", "The Abolition of Aging", "Transcending Politics", and "Sustainable Superabundance". We chat about the last book on this list -- Sustainable Superabundance -- and its case for an optimistic future. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
1:40 - Who are the London Futurists? What do they do?
3:34 - Why did David write Sustainable Superabundance?
7:22 - What is sustainable superabundance?
11:05 - Seven spheres of flourishing and seven types of superabundance?
16:16 - Why is David a transhumanist?
20:20 - Dealing with two criticisms of transhumanism: (i) isn't it naive and Pollyannaish? (ii) isn't it elitist, inegalitarian and dangerous?
30:00 - Key principles of transhumanism
34:52 - How will we address the energy needs of the future?
40:35 - How optimistic can we really be about the future of energy?
46:20 - Dealing with pessimism about food production
52:48 - Are we heading for another AI winter?
1:01:08 - The politics of superabundance - what needs to change?

Relevant Links

- David Wood on Twitter
- London Futurists website
- London Futurists Youtube
- Sustainable Superabundance by David
- Other books in the Transpolitica series
- To be a machine by Mark O'Connell
- Previous episode with James Hughes about techno-progressive transhumanism
- Previous episode with Rick Searle about the dark side of transhumanism
In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about his latest book, co-authored with Julian Savulescu, on love drugs. You can listen to the episode below or download it here. You can also subscribe to the podcast on Apple, Stitcher, Spotify and other leading podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
2:17 - What is love? (Baby don't hurt me) What is a love drug?
7:30 - What are the biological underpinnings of love?
10:00 - How constraining is the biological foundation to love?
13:45 - So we're not natural born monogamists or polyamorists?
17:48 - Examples of actual love drugs
23:32 - MDMA in couples therapy
27:55 - The situational ethics of love drugs
33:25 - The non-specific nature of love drugs
39:00 - The basic case in favour of love drugs
40:48 - The ethics of anti-love drugs
44:00 - The ethics of conversion therapy
48:15 - Individuals vs systemic change
50:20 - Do love drugs undermine autonomy or authenticity?
54:20 - The Vice of In-Principlism
56:30 - The future of love drugs

Relevant Links

- Brian's page (freely accessible papers)
- Brian's Researchgate page (freely accessible papers)
- Brian asking Sam Harris a question
- The book: Love Drugs or Love is the Drug
- 'Love and enhancement technology' by Brian Earp
- 'The Vice of In-principlism and the Harmfulness of Love' by me

Subscribe to the newsletter
[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

In the mid-19th century, a set of laws was created to address the menace that newly invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the 'Red Flag Act'. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

"while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…"

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs.

I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years.
In doing this, I hope not to forget the lesson of the Red Flag laws.

1. What’s changed?

Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and with it the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex lawsuit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters. In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency.

Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected:

(i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. 
image or voice recognition), and that may also be difficult, if not impossible, for humans to spot due to their complexity. To put it another way, AI allows us to understand data in new ways.

(ii) It enables the creation of new kinds of informational product - what I mean here is that the AI systems don’t simply rebroadcast dispassionate and objective forms of the data we collect. They actively construct and reshape the data into artifacts that can be more or less useful to humans.

(iii) It enables new kinds of action and behaviour - what I mean here is that the informational products created by these AI systems are not simply inert artifacts that we observe with bemused detachment. They are prompts to change and alter human behaviour and decision-making.

On top of all this, these AI systems do these things with increasing autonomy (or, less controversially, automation). Although humans do assist the AI systems in understanding, constructing and acting on foot of the data being collected, advances in AI and robotics make it increasingly possible for machines to do things without direct human assistance or intervention.

It is these ways of using data, coupled with increasing automation, that I believe give rise to the new legal challenges. It is impossible for me to cover all of these challenges in this talk. So what I will do instead is to discuss three case studies that I think are indicative of the kinds of challenges that need to be addressed, and that correspond to the three things we can now do with the data that we are collecting.

2. Case Study: Facial Recognition Technology

The first case study has to do with facial recognition technology. This is an excellent example of how AI can understand data in new ways. Facial recognition technology is essentially like fingerprinting for the face. 
From a selection of images, an algorithm can construct a unique mathematical model of your facial features, which can then be used to track and trace your identity across numerous locations.

The potential conveniences of this technology are considerable: faster security clearance at airports; an easy way to record and confirm attendance in schools; an end to complex passwords when accessing and using your digital services; a way for security services to track and identify criminals; a tool for locating missing persons and finding old friends. Little surprise then that many of us have already welcomed the technology into our lives. It is now the default security setting on the current generation of smartphones. It is also being trialled at airports (including Dublin Airport),[2] train stations and public squares around the world. It is cheap and easily plugged into existing CCTV surveillance systems. It can also take advantage of the vast databases of facial images collected by governments and social media engines.

Despite its advantages, facial recognition technology also poses a significant number of risks. It enables and normalises blanket surveillance of individuals across numerous environments. This makes it the perfect tool for oppressive governments and manipulative corporations. Our faces are one of our most unique and important features, central to our sense of who we are and how we relate to each other — think of the Beatles’ immortal line ‘Eleanor Rigby puts on the face that she keeps in the jar by the door’ — facial recognition technology captures this unique feature and turns it into a digital product that can be copied and traded, and used for marketing, intimidation and harassment.

Consider, for example, the unintended consequences of the FindFace app that was released in Russia in 2016. 
Intended by its creators to be a way of making new friends, the FindFace app matched images on your phone with images in social media databases, thus allowing you to identify people you may have met but whose names you cannot remember. Suppose you met someone at a party, took a picture together with them, but then didn’t get their name. FindFace allows you to use the photo to trace their real identity.[3] What a wonderful idea, right? Now you need never miss out on an opportunity for friendship because of oversight or poor memory. Well, as you might imagine, the app also has a dark side. It turns out to be the perfect technology for stalkers, harassers and doxxers (the internet slang for those who want to out people’s real world identities). Anyone who is trying to hide or obscure their identity can now be traced and tracked by anyone who happens to take a photograph of them.

What’s more, facial recognition technology is not perfect. It has been shown to be less reliable when dealing with non-white faces, and there are several documented cases in which it matches the wrong faces, thus wrongly assuming someone is a criminal when they are not. For example, many US drivers have had their licences cancelled because an algorithm has found two faces on a licence database to be suspiciously similar and has then wrongly assumed the people in question to be using a false identity. 
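To make the matching step concrete, here is a minimal sketch of how such systems typically work: faces are reduced to numerical ‘embeddings’ (the mathematical model mentioned above) and identities are matched by distance under a threshold. Everything in this example is invented for illustration — the toy four-dimensional vectors, the 0.6 threshold, the names — it is not any vendor’s actual pipeline, but it shows where the false matches discussed here come from.

```python
import numpy as np

def match_face(probe, gallery, threshold=0.6):
    """Return the gallery identity whose embedding is closest to the probe
    embedding, or None if nothing falls under the distance threshold.
    Purely illustrative: real systems compute embeddings with a neural
    network rather than using hand-typed vectors."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        dist = float(np.linalg.norm(np.asarray(probe) - np.asarray(embedding)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return (best_name, best_dist) if best_dist < threshold else (None, best_dist)

# Toy four-dimensional "face fingerprints" (real embeddings have 128+ dimensions).
gallery = {
    "alice": [0.10, 0.90, 0.30, 0.40],
    "bob":   [0.80, 0.20, 0.70, 0.10],
}

probe = [0.12, 0.88, 0.31, 0.42]   # a new photo, actually of Alice
name, dist = match_face(probe, gallery)
```

The threshold is where the errors creep in: loosen it and unrelated but similar-looking faces start to match (the false positives behind the cancelled licences); tighten it and genuine matches are missed.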
In another famous illustration of the problem, 28 members of the US Congress (most of them members of racial minorities) were falsely matched with criminal mugshots using facial recognition technology created by Amazon.[4] As some researchers have put it, the widespread and indiscriminate use of facial recognition means that we are all now part of a perpetual line-up that is both biased and error prone.[5] The conveniences of facial recognition thus come at a price, one that often only becomes apparent when something goes wrong, and is more costly for some social groups than others.

What should be done about this from a legal perspective? The obvious answer is to carefully regulate the technology to manage its risks and opportunities. This is, in a sense, what is already being done under the GDPR. Article 9 of the GDPR stipulates that facial recognition is a kind of biometric data that is subject to special protections. The default position is that it should not be collected, but this is subject to a long list of qualifications and exceptions. It is, for example, permissible to collect it if the data has already been made public, if you get the explicit consent of the person, if it serves some legitimate public interest, if it is medically necessary or necessary for public health reasons, if it is necessary to protect other rights and so on. Clearly the GDPR does restrict facial recognition in some ways. A recent Swedish case fined a school for the indiscriminate use of facial recognition for attendance monitoring.[6] Nevertheless, the long list of exceptions makes the widespread use of facial recognition not just a possibility but a likelihood. This is something the EU is aware of, and in light of the Swedish case they have signalled an intention to introduce stricter regulation of facial recognition.

This is something we in Ireland should also be considering. The GDPR allows states to introduce stricter protections against certain kinds of data collection. 
And, according to some privacy scholars, we need the strictest possible protections to save us from the depredations of facial recognition. Woodrow Hartzog, one of the foremost privacy scholars in the US, and Evan Selinger, a philosopher specialising in the ethics of technology, have recently argued that facial recognition technology must be banned. As they put it (somewhat alarmingly):[7]

“The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

They caution against anyone who thinks that the technology can be procedurally regulated, arguing that governmental and commercial interests will always lobby for expansion of the technology beyond its initially prescribed remit. They also argue that attempts at informed consent will be (and already are) a ‘spectacular failure’ because people don’t understand what they are consenting to when they give away their facial fingerprint.

Some people might find this call for a categorical ban extreme, unnecessary and impractical. Why throw the baby out with the bathwater, and other clichés to that effect. But I would like to suggest that there is something worth taking seriously here, particularly since facial recognition technology is just the tip of the iceberg of data collection. People are already experimenting with emotion recognition technology, which uses facial images to predict future behaviour in real time, and there are many other kinds of sensitive data that are being collected, digitised and traded. Genetic data is perhaps the most obvious other example. Given that data is what fuels the fire of AI, it is possible that we should consider cutting off some of the fuel supply in its entirety.

3. Case Study: Deepfakes

Let me move on to my second case study. 
This one has to do with how AI is used to create new informational products from data. As an illustration of this I will focus on so-called ‘deepfake’ technology. This is a machine learning technique that allows you to construct realistic synthetic media from databases of images and audio files. The most prevalent use of deepfakes is, perhaps unsurprisingly, in the world of pornography, where the faces of famous actors have been repeatedly grafted onto porn videos. This is disturbing and makes deepfakes an ideal technology for ‘synthetic’ revenge porn.

Perhaps more socially significant than this, however, are the potential political uses of deepfake technology. In 2017, a team of researchers at the University of Washington created a series of deepfake videos of Barack Obama which I will now play for you.[8] The images in these videos are artificial. They haven’t been edited together from different clips. They have been synthetically constructed by an algorithm from a database of audiovisual materials. Obviously, the video isn’t entirely convincing. If you look and listen closely you can see that there is something stilted and artificial about it. In addition to this, it uses pre-recorded audio clips to sync to the synthetic video. Nevertheless, if you weren’t looking too closely, you might be convinced it was real. Furthermore, there are other teams working on using the same basic technique to create synthetic audio too. So, as the technology improves, it could be very difficult for even the most discerning viewers to tell the difference between fiction and reality.

Now there is nothing new about synthetic media. 
With the support of the New Zealand Law Foundation, Tom Barraclough and Curtis Barnes have published one of the most detailed investigations into the legal policy implications of deepfake technology.[9] In their report, they highlight the fact that an awful lot of existing audiovisual media is synthetic: it is all processed, manipulated and edited to some degree. There is also a long history of creating artistic and satirical synthetic representations of political and public figures. Think, for example, of the caricatures in Punch magazine or in the puppet show Spitting Image. Many people who use deepfake technology to create synthetic media will, no doubt, claim a legitimate purpose in doing so. They will say they are engaging in legitimate satire or critique, or producing works of artistic significance.

Nevertheless, there does seem to be something worrying about deepfake technology. The highly realistic nature of the audiovisual material being created makes it the ideal vehicle for harassment, manipulation, defamation, forgery and fraud. Furthermore, the realism of the resultant material also poses significant epistemic challenges for society. The philosopher Regina Rini captures this problem well. She argues that deepfake technology poses a threat to our society’s ‘epistemic backstop’. What she means is that as a society we are highly reliant on testimony from others to get by. We rely on it for news and information, we use it to form expectations about the world and build trust in others. But we know that testimony is not always reliable. Sometimes people will lie to us; sometimes they will forget what really happened. Audiovisual recordings provide an important check on potentially misleading forms of testimony. They encourage honesty and competence. 
As Rini puts it:[10]

“The availability of recordings undergirds the norms of testimonial practice…Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying on the words of others. Recordings do this in two distinctive ways: actively correcting errors in past testimony and passively regulating ongoing testimonial practices.”

The problem with deepfake technology is that it undermines this function. Audiovisual recordings can no longer provide the epistemic backstop that keeps us honest.

What does this mean for the law? I am not overly concerned about the impact of deepfake technology on legal evidence-gathering practices. The legal system, with its insistence on ‘chain of custody’ and testimonial verification of audiovisual materials, is perhaps better placed than most to deal with the threat of deepfakes (though there will be an increased need for forensic experts to identify deepfake recordings in court proceedings). What I am more concerned about is how deepfake technologies will be weaponised to harm and intimidate others — particularly members of vulnerable populations. The question is whether anything can be done to provide legal redress for these problems. As Barraclough and Barnes point out in their report, it is exceptionally difficult to legislate in this area. How do you define the difference between real and synthetic media (if at all)? How do you balance free speech rights against the potential harms to others? Do we need specialised laws to do this or are existing laws on defamation and fraud (say) up to the task? Furthermore, given that deepfakes can be created and distributed by unknown actors, who would the potential cause of action be against?

These are difficult questions to answer. 
The one concrete suggestion I would make is that any existing or proposed legislation on ‘revenge porn’ should be modified so that it explicitly covers the possibility of synthetic revenge porn. Ireland is currently in the midst of legislating against the nonconsensual sharing of ‘intimate images’ in the Harassment, Harmful Communications and Related Offences Bill. I note that the current wording of the offence in section 4 of the Bill covers images that have been ‘altered’ but someone might argue that synthetically constructed images are not, strictly speaking, altered. There may be plans to change this wording to cover this possibility — I know that consultations and amendments to the Bill are ongoing[11] — but if there aren’t then I suggest that there should be.

To reiterate, I am using deepfake technology as an illustration of a more general problem. There are many other ways in which the combination of data and AI can be used to mess with the distinction between fact and fiction. The algorithmic curation and promotion of fake news, for example, or the use of virtual and augmented reality to manipulate our perception of public and private spaces, both pose significant threats to property rights, privacy rights and political rights. We need to do something to legally manage this brave new (technologically constructed) world.

4. Case Study: Algorithmic Risk Prediction

Let me turn now to my final case study. This one has to do with how data can be used to prompt new actions and behaviours in the world. For this case study, I will look to the world of algorithmic risk prediction. This is where we take a collection of datapoints concerning an individual’s behaviour and lifestyle and feed it into an algorithm that can make predictions about their likely future behaviour. This is a long-standing practice in insurance, and is now being used in making credit decisions, tax auditing, child protection, and criminal justice (to name but a few examples). 
I’ll focus on its use in criminal justice for illustrative purposes.

Specifically, I will focus on the debate surrounding the COMPAS algorithm, which has been used in a number of US states. The COMPAS algorithm (created by a company called Northpointe, now called Equivant) uses datapoints to generate a recidivism risk score for criminal defendants. The datapoints include things like the person’s age at arrest, their prior arrest/conviction record, the number of family members who have been arrested/convicted, their address, their education and job and so on. These are then weighted together using an algorithm to generate a risk score. The exact weighting procedure is unclear, since the COMPAS algorithm is a proprietary technology, but the company that created it has released a considerable amount of information about the datapoints it uses into the public domain.

If you know anything about the COMPAS algorithm you will know that it has been controversial. The controversy stems from two features of how the algorithm works. First, the algorithm is relatively opaque. This is a problem because the fair administration of justice requires that legal decision-making be transparent and open to challenge. A defendant has a right to know how a tribunal or court arrived at its decision and to challenge or question its reasoning. If this information isn’t known — either because the algorithm is intrinsically opaque or has been intentionally rendered opaque for reasons of intellectual property — then this principle of fair administration is not being upheld. This was one of the grounds on which the use of the COMPAS algorithm was challenged in the US case of Loomis v Wisconsin.[12] In that case, the defendant, Loomis, challenged his sentencing decision on the basis that the trial court had relied on the COMPAS risk score in reaching its decision. His challenge was ultimately unsuccessful. 
The Wisconsin Supreme Court reasoned that the trial court had not relied solely on the COMPAS risk score in reaching its decision. The risk score was just one input into the court’s decision-making process, which was itself transparent and open to challenge. That said, the court did agree that courts should be wary when relying on such algorithms and said that warnings should be attached to the scores to highlight their limitations.

The second controversy associated with the COMPAS algorithm has to do with its apparent racial bias. To understand this controversy I need to say a little bit more about how the algorithm works. Very roughly, the COMPAS algorithm is used to sort defendants into two outcome ‘buckets’: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket. A number of years back a group of data journalists based at ProPublica conducted an investigation into which kinds of defendants got sorted into those buckets. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in the table below. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores.

Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race. If it said a black defendant was high risk, it was right about 60% of the time and if it said that a white defendant was high risk, it was right about 60% of the time. 
This turns out to be true. The reason why it doesn't immediately look like it is equally accurate upon a first glance at the relevant figures is that there are a lot more black defendants than white defendants -- an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, a feature the algorithm has to work around.

So what is going on here? Is the algorithm fair or not? Here is where things get interesting. Several groups of mathematicians analysed this case and showed that the main problem here is that the makers of COMPAS and the data journalists were working with different conceptions of fairness and that these conceptions were fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.[13] To simplify their argument, they said that there are two things you might want a fair decision algorithm to do: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of racial group); (ii) you might want it to achieve an equal representation for all groups in the outcome buckets. They then proved that except in two unusual cases, it is impossible to satisfy both criteria. The two unusual cases are when the algorithm is a 'perfect predictor' (i.e. it always gets things right) or, alternatively, when the base rates for the relevant populations are the same (e.g. there are the same number of black defendants as there are white defendants). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, this means that no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this is generally true for all algorithmic risk predictions and not just true for cases involving recidivism risk. 
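The incompatibility can be seen with a little arithmetic. The sketch below uses invented numbers — not the actual COMPAS or ProPublica figures — to show how a score can be equally well calibrated for two groups (a 'high risk' label is right 60% of the time in both) while still producing very different false positive rates, simply because the groups' base rates differ.

```python
def precision_and_fpr(pop, base_rate, flagged, flagged_correct):
    """For a group of `pop` people with a given base rate of reoffending,
    where `flagged` people are scored high risk and `flagged_correct` of
    those actually reoffend, return the score's calibration (precision)
    and its false positive rate."""
    non_reoffenders = pop * (1 - base_rate)
    precision = flagged_correct / flagged
    false_positives = flagged - flagged_correct
    fpr = false_positives / non_reoffenders
    return precision, fpr

# Invented numbers for illustration only.
# Group A: base rate 0.5; 100 flagged high risk, 60 of whom reoffend.
prec_a, fpr_a = precision_and_fpr(pop=200, base_rate=0.5, flagged=100, flagged_correct=60)
# Group B: base rate 0.2; 50 flagged high risk, 30 of whom reoffend.
prec_b, fpr_b = precision_and_fpr(pop=200, base_rate=0.2, flagged=50, flagged_correct=30)

# Both groups are equally well calibrated (precision 0.6 in each), yet
# Group A's false positive rate (0.4) is over three times Group B's
# (0.125), because the base rates differ.
```

This is exactly the shape of the COMPAS dispute: the makers pointed at equal precision, the journalists at unequal false positive rates, and (outside the two unusual cases) no score can deliver both at once.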
If you would like to see a non-mathematical illustration of the problem, I highly recommend checking out a recent article in the MIT Technology Review which includes a game you can play using the COMPAS algorithm and which illustrates the hard tradeoff between different conceptions of fairness.[14]

What does all this mean for the law? Well, when it comes to the issue of transparency and challengeability, it is worth noting that the GDPR, in articles 13-15 and article 22, contains what some people refer to as a ‘right to explanation’. It states that, when automated decision procedures are used, people have a right to access meaningful information about the logic underlying the procedures. What this meaningful information looks like in practice is open to some interpretation, though there is now an increasing amount of guidance from national data protection units about what is expected.[15] But in some ways this misses the deeper point. Even if we make these procedures perfectly transparent and explainable, there remains the question about how we manage the hard tradeoff between different conceptions of fairness and non-discrimination. Our legal conceptions of fairness are multidimensional and require us to balance competing interests. When we rely on human decision-makers to determine what is fair, we accept that there will be some fudging and compromise involved. Right now, we let this fudging take place inside the minds of the human decision-makers, oftentimes without questioning it too much or making it too explicit. The problem with algorithmic risk predictions is that they force us to make this fudging explicit and precise. We can no longer pretend that the decision has successfully balanced all the competing interests and demands. We have to pick and choose. 
Thus, in some ways, the real challenge with these systems is not that they are opaque and non-transparent but, rather, that when they are transparent they force us to make hard choices.

To some, this is the great advantage of algorithmic risk prediction. A paper by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass Sunstein entitled ‘Discrimination in the Age of the Algorithm’ makes this very case.[16] They argue that the real problem at the moment is that decision-making is discriminatory and its discriminatory nature is often implicit and hidden from view. The widespread use of transparent algorithms will force it into the open where it can be washed by the great disinfectant of sunlight. But I suspect others will be less sanguine about this new world of algorithmically mediated justice. They will argue that human-led decision-making, with its implicit fudging, is preferable, partly because it allows us to sustain the illusion of justice. Which world do we want to live in? The transparent and explicit world imagined by Kleinberg et al, or the murky and more implicit world of human decision-making? This is also a key legal challenge for the modern age.

5. Conclusion

It’s time for me to wrap up. One lingering question you might have is whether any of the challenges outlined above are genuinely new. This is a topic worth debating. In one sense, there is nothing completely new about the challenges I have just discussed. We have been dealing with variations of them for as long as humans have lived in complex, literate societies. Nevertheless, there are some differences with the past. There are differences of scope and scale — mass surveillance and AI enable the collection of data at an unprecedented scale and its use on millions of people at the same time. There are differences of speed and individuation — AI systems can update their operating parameters in real time and in highly individualised ways. 
And finally, there are the crucial differences in the degree of autonomy with which these systems operate, which can lead to problems in how we assign legal responsibility and liability.

Endnotes

[1] I am indebted to Jacob Turner for drawing my attention to this story. He discusses it in his book Robot Rules - Regulating Artificial Intelligence (Palgrave MacMillan, 2018). This is probably the best currently available book about AI and law.
[2] See; and
[3]
[4] This was a stunt conducted by the ACLU. See here for the press release
[5]
[6] For the story, see here
[7] Their original call for this can be found here:
[8] The video can be found here;; For more information on the research see here:;
[9] The full report can be found here:
[10] The paper currently exists in a draft form but can be found here:
[11]
[12] For a summary of the judgment, see here:
[13] “Inherent Tradeoffs in the Fair Determination of Risk Scores” - available here
[14] The article can be found at this link -
[15] Casey et al ‘Rethinking Explainable Machines’ - available here
[16] An open access version of the paper can be downloaded here

Subscribe to the newsletter
In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation.

You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
3:20 - What are deepfakes?
7:35 - What is the academic justification for creating deepfakes (if any)?
11:35 - The different uses of deepfakes: Porn versus Politics
16:00 - The epistemic backstop and the role of audiovisual recordings
22:50 - Two ways that recordings regulate our testimonial practices
26:00 - But recordings aren't a window onto the truth, are they?
34:34 - Is the Golden Age of recordings over?
39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
44:32 - How will deepfakes fuel political partisanship?
50:28 - Deepfakes and the end of public reason
54:15 - Is there something particularly disruptive about deepfakes?
58:25 - What can be done to address the problem?

Relevant Links

Regina's Homepage
Regina's Philpapers Page
"Deepfakes and the Epistemic Backstop" by Regina
"Fake News and Partisan Epistemology" by Regina
Jeremy Corbyn and Boris Johnson Deepfake Video
"California’s Anti-Deepfake Law Is Far Too Feeble" Op-Ed in Wired

Subscribe to the newsletter
In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

0:00 - Introduction
2:56 - How do robots disrupt our moral lives?
7:18 - Robots and Moral Deskilling
12:52 - The Folk Model of Virtue Acquisition
21:16 - The Confucian approach to Ethics
24:28 - Confucianism versus the European approach
29:05 - Confucianism and situationism
34:00 - The Importance of Rituals
39:39 - A Confucian Response to Moral Deskilling
43:37 - Criticisms (moral silencing)
46:48 - Generalising the Confucian approach
50:00 - Do we need new Confucian rituals?

Relevant Links

Pak's homepage at the University of Hamburg
Pak's Philpeople Profile
"Rituals and Machines: A Confucian Response to Technology Driven Moral Deskilling" by Pak
"Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?" by Pak
"Consenting to Geoengineering" by Pak
Episode 45 with Shannon Vallor on Technology and the Virtues

Subscribe to the newsletter