Two Visions of AI Apocalypse (Robert Wright & David Krueger)
Description
This episode includes an Overtime segment that’s available to paid subscribers. If you’re not a paid subscriber, we of course encourage you to become one. If you are a paid subscriber, you can access the complete conversation either via the audio and video on this post or by setting up the paid-subscriber podcast feed, which includes all exclusive audio content. To set that up, simply grab the RSS feed from this page (by first clicking the “...” icon on the audio player) and paste it into your favorite podcast app.
If you have trouble completing the process, check out this super-simple how-to guide.
0:00 How Eliezer Yudkowsky’s new book envisions AI takeover
8:24 Will we ever really know how AI works?
15:10 The “paperclip maximizer” problem revisited
26:13 Will advanced AIs be insatiable?
31:39 David’s alternative takeover scenario: gradual disempowerment
43:31 Can—and should—we keep humans in the loop?
51:46 Heading to Overtime
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and David Krueger (Université de Montréal). Recorded September 24, 2025.
Twitter: https://twitter.com/NonzeroPods
Overtime titles:
0:00 AI accelerationists: true believers or trolls?
3:26 How David became an AI safety early adopter
10:15 AI safety’s international cooperation gap
19:40 “Mutually Assured AI Malfunction” and WWIII
25:34 David: Superintelligence is (pretty) near
32:31 What’s the deal with AI “situational awareness”?
44:18 What’s the deal with Leopold Aschenbrenner?