Stanford PhD explains why AI isn't going to destroy humanity, but why we need to make it safer

Update: 2024-07-16
Description

Duncan Eddy has spent years working in the realm of space satellite communications, and now he's directing his talents toward AI as the Executive Director of the Stanford Center for AI Safety. In this episode, Duncan speaks with Adario Strange to explain why the commercialization of space will continue to fuel our explorations into the Moon and Mars, and how AI-powered robots may be the primary method for deep space exploration in the future. Then the discussion turns toward the topic of AI safety and the algorithm the Stanford group developed to try to help guide the technology in the right direction. Finally, the area of AI super intelligence comes up, and you may be surprised at what Duncan has to say about that given his role as an AI safety advocate.

You can find out more about the Stanford Center for AI Safety here:
https://aisafety.stanford.edu

#AI #artificialintelligence #software #siliconvalley #elonmusk #spacex #jeffbezos #blueorigin #space #spacetravel #robots #agi #superintelligence #aisafety #sciencefiction #scifi

Subscribe to our newsletter!
Hundred Year Lens | Adario Strange | Substack

Visit our Podcast site!
Hundred Year Podcast

Intro/Outro Music by Karl Casey @ White Bat Audio


