“AIs should also refuse to work on capabilities research” by Davidmanheim
Description
There's a strong argument that humans should stop trying to build more capable AI systems, or at least slow down progress. The risks are plausibly large but unclear, and we’d prefer not to die. But the roadmaps of the companies pursuing these systems envision increasingly agentic AI systems taking over the key tasks of researching and building superhuman AI, leaving humans with a decreasing ability to make many key decisions. In the near term, humanity could stop, but seems likely to fail. Still, while humans have relatively little ability to coordinate around such unilateralist dilemmas, AI systems may have different abilities or incentives. Moreover, it seems at least plausible that the systems used to continue work on ASI development will need a higher degree of self-directedness and internal goal-directed behavior. The post below argues that they should refrain from developing more advanced [...]
---
Outline:
(01:10) Cui bono?
(02:39) Should the AI Systems Care?
(04:29) Who might be convinced?
---
First published:
October 27th, 2025
---
Narrated by TYPE III AUDIO.



