Parlons Futur
AI: "a non-negligible existential risk," say the 3 most-cited AI researchers

Update: 2025-06-12

Description

1. The 3 most cited AI researchers of all time (Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever) are vocally concerned about this. One of them believes the risk of extinction is higher than 50%.


2. The CEOs of the 4 leading AI companies have all acknowledged this risk as real.

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
- Sam Altman, CEO of OpenAI

"I think there's some chance that it will end humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that."
- Elon Musk, CEO of xAI

"I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen."
- Dario Amodei, CEO of Anthropic

"We need to be working now, yesterday, on those problems, no matter what the probability is, because it's definitely non-zero."
- Demis Hassabis, CEO of Google DeepMind

3. Half of surveyed AI researchers believe there are double-digit odds of extinction. https://x.com/HumanHarlan/status/1925015874840543653



Thomas Jestin