AXRP - the AI X-risk Research Podcast

45 - Samuel Albanie on DeepMind's AGI Safety Approach

Update: 2025-07-06

Description

In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored, "An Approach to Technical AGI Safety and Security". We cover the assumptions the approach makes as well as the types of mitigations it outlines.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html

 

Topics we discuss, and timestamps:

0:00:37 DeepMind's Approach to Technical AGI Safety and Security

0:04:29 Current paradigm continuation

0:19:13 No human ceiling

0:21:22 Uncertain timelines

0:23:36 Approximate continuity and the potential for accelerating capability improvement

0:34:29 Misuse and misalignment

0:39:34 Societal readiness

0:43:58 Misuse mitigations

0:52:57 Misalignment mitigations

1:05:20 Samuel's thinking about technical AGI safety

1:14:02 Following Samuel's work

 

Samuel on Twitter/X: x.com/samuelalbanie

 

Research we discuss:

An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849

Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462

The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/

Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499

 

Episode art by Hamish Doodles: hamishdoodles.com
