“The Mom Test for AI Extinction Scenarios” by Taylor G. Lunt
Description
(Also posted to my Substack; written as part of the Halfhaven virtual blogging camp.)
Let's set aside the question of whether superintelligent AI would want to kill us, and focus on whether it could. This is a hard thing to convince people of, but many very smart people agree that it could. The Statement on AI Risk in 2023 stated simply:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Since the statement in 2023, many others have given their reasons why superintelligent AI would be dangerous. In the recently published book If Anyone Builds It, Everyone Dies, authors Eliezer Yudkowsky and Nate Soares lay out one possible AI extinction scenario, and say that going up against a superintelligent AI would be like going up [...]
The original text contained 1 footnote, which was omitted from this narration.
---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/n2XrjMFehWvBumt9i/the-mom-test-for-ai-extinction-scenarios
---
Narrated by TYPE III AUDIO.