“Rejecting Violence as an AI Safety Strategy” by James_Miller
Update: 2025-09-23
Description
Violence against AI developers would increase rather than reduce the existential risk from AI. This analysis shows why such tactics would backfire catastrophically and rebuts the misconception that a consequentialist AI doomer might rationally endorse violence by non-state actors.
- Asymmetry of force. Violence would shift the contest from ideas to physical force, a domain where AI safety advocates would face overwhelming disadvantages. States and corporations command vast security apparatuses and intelligence networks. Safety advocates can compete intellectually through research and argumentation, but in a physical conflict they would likely suffer swift, decisive defeat.
- Network resilience and geographic distribution. The AI development ecosystem spans multiple continents, involves thousands of researchers, and commands trillions of dollars in resources. Targeting individuals would likely redistribute talent and capital to more secure locations without altering the fundamental trajectory.
- Economic and strategic imperatives. AI development represents both unprecedented economic opportunity and perceived national security [...]
---
First published:
September 22nd, 2025
Source:
https://www.lesswrong.com/posts/inFW6hMG3QEx8tTfA/rejecting-violence-as-an-ai-safety-strategy
---
Narrated by TYPE III AUDIO.