Philosophical Disquisitions

Author: John Danaher

Subscribed: 243 · Played: 6,593

Description

Interviews with experts about the philosophy of the future.
150 Episodes
In this episode, John and Sven answer questions from podcast listeners. Topics covered include: the relationship between animal ethics and AI ethics; religion and the philosophy of technology; the analytic-continental divide; the debate about short-term vs long-term risks; getting engineers to take ethics seriously; and much, much more. Thanks to everyone who submitted a question. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
What does the future hold for humanity's relationship with technology? Will we become ever more integrated with and dependent on technology? What are the normative and axiological consequences of this? In this episode, Sven and John discuss these questions and reflect, more generally, on technology, ethics and the value of speculation about the future. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
Mark Coeckelbergh, The Political Philosophy of AI
David Chalmers, Reality+
In this episode, Sven and John talk about relationships with machines. Can you collaborate with a machine? Can robots be friends, colleagues or, perhaps, even lovers? These are common tropes in science fiction and popular culture, but is there any credibility to them? What would the ethical status of such relationships be? Should they be welcomed or avoided? These are just some of the questions addressed in this episode. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
Evans, Robbins and Bryson, 'Do we collaborate with what we design?'
Helen Ryland, 'It's Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships'
In this episode, Sven and John discuss the moral status of machines, particularly humanoid robots. Could machines ever be more than mere things? Some people see this debate as a distraction from the important ethical questions pertaining to technology; others take it more seriously. Sven and John share their thoughts on this topic and give some guidance as to how to think about the nature of moral status and its significance. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
David Gunkel, Person, Thing, Robot
Butlin, Long et al, 'Consciousness in AI: Insights from the Science of Consciousness'
Summary of the above paper from Daily Nous
In this episode, Sven and John discuss the controversy arising from the idea of moral agency in machines. What is an agent? What is a moral agent? Is it possible to create a machine with a sense of moral agency? Is this desirable, or to be avoided at all costs? These are just some of the questions up for debate. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
Amanda Sharkey, 'Can we program or train robots to be good?'
Paul Formosa and Malcolm Ryan, 'Making Moral Machines: Why We Need Artificial Moral Agents'
Michael Anderson and Susan Leigh Anderson, 'Machine ethics: creating an ethical intelligent agent'
Carissa Véliz, 'Moral zombies: why algorithms are not moral agents'
In this episode, Sven and John discuss the thorny topic of responsibility gaps and technology. Over the past two decades, a small cottage industry of legal and philosophical research has arisen around the idea that increasingly autonomous machines create gaps in responsibility. But what does this mean? Is it a serious ethical/legal problem? How can it be resolved? All this and more is explored in this episode. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
Robert Sparrow, 'Killer Robots'
Alexander Hevelke and Julian Nida-Rümelin, 'Responsibility for Crashes of Autonomous Vehicles'
Andreas Matthias, 'The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata'
Jack Stilgoe, Who's Driving Innovation?, Chapter 1
Discount
To get a discounted copy of Sven's book, click here and use the code 'TEC20' to get 20% off the regular price.
In this episode, John and Sven talk about the role that technology can play in changing our behaviour. In doing so, they note the long and troubled history of philosophy and self-help. They also ponder whether we can use technology to control our lives or whether technology controls us. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommendations
Brett Frischmann and Evan Selinger, Reengineering Humanity
Carissa Véliz, Privacy is Power
In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn't do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommendations for further reading
Atoosa Kasirzadeh and Iason Gabriel, 'In Conversation with AI: Aligning Language Models with Human Values'
Nick Bostrom, relevant chapters from Superintelligence
Stuart Russell, Human Compatible
Langdon Winner, 'Do Artifacts Have Politics?'
Iason Gabriel, 'Artificial Intelligence, Values and Alignment'
Brian Christian, The Alignment Problem
Discount
You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.
In this episode, John and Sven discuss the methods of technology ethics. What exactly is it that technology ethicists do? How can they answer the core questions about the value of technology and our moral response to it? Should they consult their intuitions? Run experiments? Use formal theories? The possible answers to these questions are considered with a specific case study on the ethics of self-driving cars. You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.
Recommended Reading
Peter Königs, 'Of Trolleys and Self-Driving Cars: What Machine Ethicists Can and Cannot Learn from Trolleyology'
John Harris, 'The Immoral Machine'
Edmond Awad et al, 'The Moral Machine Experiment'
Discount
You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.
I am very excited to announce the launch of a new podcast series with my longtime friend and collaborator Sven Nyholm. The podcast is intended to introduce key themes, concepts, arguments and ideas arising from the ethics of technology. It roughly follows the structure of Sven's book This is Technology Ethics, but in a loose and conversational style. Across the nine episodes, we will cover the nature of technology and ethics, the methods of technology ethics, and the problems of control, responsibility, agency and behaviour change that are central to many contemporary debates about the ethics of technology. We will also cover perennially popular topics such as whether a machine could have moral status, whether a robot could (or should) be a friend, lover or work colleague, and the desirability of merging with machines. The podcast is intended to be accessible to a wide audience and could provide an ideal companion to an introductory or advanced course in the ethics of technology (with particular focus on AI, robotics and other digital technologies). I will be releasing the podcast on the Philosophical Disquisitions podcast feed, but I have also created an independent podcast feed and website, if you are just interested in it. The first episode can be downloaded here or you can listen below. You can also subscribe on Apple, Spotify, Amazon and a range of other podcasting services. If you go to the website or subscribe via the standalone feed, you can download the first two episodes now. There is also a promotional tie-in with the book publisher. If you use the code 'TEC20' on the publisher's website (here) you can get 20% off the regular price.
In this episode, I chat to Matthijs Maas about pausing AI development. Matthijs is currently a Senior Research Fellow at the Legal Priorities Project and a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge. In our conversation, we focus on the possibility of slowing down or limiting the development of technology. Many people are sceptical of this possibility, but Matthijs has been doing extensive research on historical case studies of apparently successful technological slowdown. We discuss these case studies in some detail. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Recording of Matthijs's Chalmers talk on this topic: https://www.youtube.com/watch?v=vn4ADfyrJ0Y&t=2s
Slides from this talk: https://drive.google.com/file/d/1J9RW49IgSAnaBHr3-lJG9ZOi8ZsOuEhi/view?usp=share_link
Previous essay/primer, laying out the basics of the argument: https://verfassungsblog.de/paths-untaken/
Incomplete longlist database of candidate case studies: https://airtable.com/shrVHVYqGnmAyEGsz
In this episode of the podcast, I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development: roughly, how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and how some old ideas from the philosophy of language could help us to address this problem. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Atoosa's webpage
Atoosa's paper (with Iason Gabriel), 'In Conversation with AI: Aligning Language Models with Human Values'
[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
In this episode of the podcast, I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI, but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Jess's Website
Jess on Twitter
John Snow's cholera map
In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Centre for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Robert's webpage
Robert's substack
In this episode of the podcast, I talk to Thore Husfeldt about the impact of GPT on education. Thore is a Professor of Computer Science at the IT University of Copenhagen, where he specialises in pretty technical algorithm-related research. He is also affiliated with Lund University in Sweden. Beyond his technical work, Thore is interested in ideas at the intersection of computer science, philosophy and educational theory. In our conversation, Thore outlines four models of what a university education is for, and considers how GPT disrupts these models. We then talk, in particular, about the 'signalling' theory of higher education and how technologies like GPT undercut the value of certain signals, and thereby undercut some forms of assessment. Since I am an educator, I really enjoyed this conversation, but I firmly believe there is much food for thought in it for everyone. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
In this episode of the podcast, I chat to Anton Korinek about the economic impacts of GPT. Anton is a Professor of Economics at the University of Virginia and the Economics Lead at the Centre for AI Governance. He has researched widely on the topic of automation and labour markets. We talk about whether GPT will substitute for or complement human workers; the disruptive impact of GPT on economic organisation; the jobs/roles most immediately at risk; the impact of GPT on wage levels; the skills needed to survive in an AI-enhanced economy; and much more. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Anton's homepage
Anton's paper outlining 25 uses of LLMs for academic economists
Anton's dialogue with GPT, Claude and the economist David Autor
In this episode of the podcast, I chat to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology in Sweden. We talk about GPT and LLMs more generally. What are they? Are they intelligent? What risks do they pose or presage? Are we proceeding with the development of this technology in a reckless way? We try to answer all these questions, and more. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
How should we conceive of social robots? Some sceptics think they are little more than tools and should be treated as such. Some are more bullish on their potential to attain full moral status. Is there some middle ground? In this episode, I talk to Paula Sweeney about this possibility. Paula defends a position she calls 'fictional dualism' about social robots. This allows us to relate to social robots in creative, human-like ways, without necessarily ascribing them moral status or rights. Paula is a philosopher based at the University of Aberdeen, Scotland. She has a background in the philosophy of language (which we talk about a bit) but has recently turned her attention to the applied ethics of technology. She is currently writing a book about social robots. You can download the episode here, or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services.
Relevant Links
A Fictional Dualism Model of Social Robots by Paula
Trusting Social Robots by Paula
Why Indirect Harms do Not Support Social Robot Rights by Paula
It's clear that human social morality has gone through significant changes in the past. But why? What caused these changes? In this episode, I chat to Jeroen Hopster from the University of Utrecht about this topic. We focus, in particular, on a recent paper that Jeroen co-authored with a number of colleagues about four historical episodes of moral change and what we can learn from them. That paper, from which I take the title of this podcast, was called 'Pistols, Pills, Pork and Ploughs' and, as you might imagine, looks at how specific technologies (pistols, pills, pork and ploughs) have played a key role in catalysing moral change. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).