87 - AI and the Value Alignment Problem

Update: 2020-12-23

Iason Gabriel

How do we make sure that an AI does the right thing? How can we do this when we ourselves don't even agree on what the right thing might be? In this episode, I talk to Iason Gabriel about these questions. Iason is a political theorist and ethicist currently working as a Research Scientist at DeepMind. His research focuses on the moral questions raised by artificial intelligence. His recent work addresses the challenges of value alignment, responsible innovation, and human rights. He has also been a prominent contributor to the debate about the ethics of effective altruism.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify, and other podcasting services (the RSS feed is here).

Audio: https://archive.org/embed/iason-gabriel-23-12-2020-11.45

Show Notes:

Topics discussed include:

  • What is the value alignment problem?
  • Why is it so important that we get value alignment right?
  • Different ways of conceiving the problem
  • How different AI architectures affect the problem
  • Why there can be no purely technical solution to the value alignment problem
  • Six potential solutions to the value alignment problem
  • Why we need to deal with value pluralism and uncertainty
  • How political theory can help to resolve the problem


Relevant Links




John Danaher