“AI is not inevitable.” by David Scott Krueger (formerly: capybaralet)

Update: 2025-11-08

Description

AI companies are explicitly trying to build AIs that are smarter than humans, despite clear signs that it might lead to human extinction. It will be tragic and ironic if humanity's largest project ever is an all-out race to destroy ourselves. But can we really stop building more and more powerful AI? Or do we just need to try to “steer” it and hope for the best?

Climate change and other societal failures have led more and more people to realize that the world is not the sensible, ordered place we’ve often been taught to believe it is. The world is a mess, countries can’t cooperate, and no one is in charge of making sure we don’t do crazy things like build technology that has a good chance of killing us all. And it looks like that’s exactly what we’re going to do.

So does that mean we should just give up on actually managing the development of AI sensibly? Hope for the best, plan for the worst… well, not the worst, but… a scenario where (if we’re lucky) at least some people survive… maybe the ones who live in the right countries, the ones who own shares of [...]

---


First published:

November 7th, 2025



Source:

https://www.lesswrong.com/posts/tihoysekY9XnvF9LD/ai-is-not-inevitable


---


Narrated by TYPE III AUDIO.


---

Images from the article:

EVITABLE logo

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.


