
David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

Updated: 2024-08-22

Description

Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic stance that superintelligent AI will inherently align with human values, touching on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks, and I respond to David's critiques of AI safety advocates.

00:00 Introduction

01:08 David's Response and Engagement

03:02 The Corrigibility Problem

05:38 Nirvana Fallacy

10:57 Prophecy and Faith-Based Assertions

22:47 AI Coexistence with Humanity

35:17 Does Curiosity Make AI Value Humans?

38:56 Instrumental Convergence and AI's Goals

46:14 The Fermi Paradox and AI's Expansion

51:51 The Future of Human and AI Coexistence

01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Liron Shapira