“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
Description
I wrote my recent Accelerando post to mostly stand on its own as a takeoff scenario. But, the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"...
...then my current guess is that Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or, "dying out", or at best, ambiguously-consensually-uploaded), like, 10-80 years later.
To be slightly more specific about the assumptions I'm trying to inhabit here:
- It's politically intractable to get a global halt or globally controlled takeoff.
- Superintelligence is moderately likely to be somewhat nice.
- We'll get to run lots of experiments on near-human AI that will be reasonably informative about how things will generalize to the somewhat-superhuman level.
- We get to ramp up [...]
---
Outline:
(03:50) There is no safe muddling through without perfect safeguards
(06:24) i. Factorio
(06:27) (or: It's really hard to not just take people's stuff, when they move as slowly as plants)
(10:15) Fictional vs Real Evidence
(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.
(12:23) This is the Dream Time
(14:33) Is the resulting posthuman population morally valuable?
(16:51) The Hanson Counterpoint: So you're against ever changing?
(19:04) Can't superintelligent AIs/uploads coordinate to avoid this?
(21:18) How Confident Am I?
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
October 2nd, 2025
---
Narrated by TYPE III AUDIO.