Google DeepMind: The Podcast

AI Safety...Ok Doomer: with Anca Dragan

Update: 2024-08-28
Description

Building models that are both safe and capable is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she explores these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to: 

Presenter: Professor Hannah Fry

Series Producer: Dan Hardoon

Editor: Rami Tzabar, TellTale Studios 

Commissioner & Producer: Emma Yousif

Music composition: Eleni Shaw


Camera Director and Video Editor: Tommy Bruce

Audio Engineer: Perry Rogantin

Video Studio Production: Nicholas Duke

Video Editor: Bilal Merhi

Video Production Design: James Barton

Visual Identity and Design: Eleanor Tomlinson

Commissioned by Google DeepMind


Want to share feedback? Leave a review on your favorite streaming platform. Have a suggestion for a guest we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
