“Rejecting Violence as an AI Safety Strategy” by James_Miller

Update: 2025-09-23

Description

Violence against AI developers would increase rather than reduce the existential risk from AI. This analysis shows how such tactics would catastrophically backfire, and it counters the misconception that a consequentialist AI doomer might rationally endorse violence by non-state actors.

  1. Asymmetry of force. Violence would shift the contest from ideas to physical force, a domain where AI safety advocates would face overwhelming disadvantages. States and corporations command vast security apparatuses and intelligence networks. While safety advocates can compete intellectually through research and argumentation, entering a physical conflict would likely result in swift, decisive defeat.
  2. Network resilience and geographic distribution. The AI development ecosystem spans multiple continents, involves thousands of researchers, and commands trillions of dollars in resources. Targeting individuals would likely redistribute talent and capital to more secure locations without altering the field's fundamental trajectory.
  3. Economic and strategic imperatives. AI development represents both unprecedented economic opportunity and perceived national security [...]

---


First published: September 22nd, 2025

Source: https://www.lesswrong.com/posts/inFW6hMG3QEx8tTfA/rejecting-violence-as-an-ai-safety-strategy


---


Narrated by TYPE III AUDIO.
