“The Mom Test for AI Extinction Scenarios” by Taylor G. Lunt

Update: 2025-10-14

Description

(Also posted to my Substack; written as part of the Halfhaven virtual blogging camp.)

Let's set aside the question of whether superintelligent AI would want to kill us, and focus on whether it could. This is a hard thing to convince people of, but many very smart people agree that it could. The Statement on AI Risk, published in 2023, stated simply:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Since that 2023 statement, many others have given their reasons why superintelligent AI would be dangerous. In the recently published book If Anyone Builds It, Everyone Dies, the authors Eliezer Yudkowsky and Nate Soares lay out one possible AI extinction scenario, and say that going up against a superintelligent AI would be like going up [...]

The original text contained one footnote, which was omitted from this narration.

---


First published: October 13th, 2025

Source: https://www.lesswrong.com/posts/n2XrjMFehWvBumt9i/the-mom-test-for-ai-extinction-scenarios


---


Narrated by TYPE III AUDIO.
