Stop the World: Superintelligence and human security, with Dan Hendrycks

Update: 2025-11-03

Description

Last month, some of the world’s leading artificial intelligence experts signed a petition calling for a prohibition on developing superintelligent AI until it is safe. One of those experts was Dan Hendrycks, director of the Center for AI Safety and an adviser to Elon Musk’s xAI and the leading AI firm Scale AI. Dan has led original and thought-provoking research on the risk of rogue AIs escaping human control, the deliberate misuse of the technology by malign actors, and the dangerous strategic dynamics that could emerge if one nation creates superintelligence, prompting fears among rival nations.

 

In the lead-up to ASPI’s Sydney Dialogue tech and security conference in December, Dan discusses the different risks AI poses, the possibility that AI develops its own goals and values, the concept of recursion, in which machines build smarter machines, definitions of artificial “general” intelligence, the shortcomings of current AIs, and the inadequacy of historical analogies such as nuclear weapons for understanding the risks from superintelligence.

 

To see some of the research discussed in today’s episode, visit the Center for AI Safety’s website here.



Australian Strategic Policy Institute (ASPI)