
Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane

Update: 2024-12-11

Description

Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous essay “Who Can Name the Bigger Number?” and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.

Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any meaningful measure of safety, yet instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

00:00 Introducing Scott Aaronson

02:17 Scott's Recruitment by OpenAI

04:18 Scott's Work on AI Safety at OpenAI

08:10 Challenges in AI Alignment

12:05 Watermarking AI Outputs

15:23 The State of AI Safety Research

22:13 The Intractability of AI Alignment

34:20 Policy Implications and the Call to Pause AI

38:18 Out-of-Distribution Generalization

45:30 Moral Worth Criterion for Humans

51:49 Quantum Mechanics and Human Uniqueness

01:00:31 Quantum No-Cloning Theorem

01:12:40 Scott Is Almost An Accelerationist?

01:18:04 Geoffrey Hinton's Proposal for Analog AI

01:36:13 The AI Arms Race and the Need for Regulation

01:39:41 Scott Aaronson's Thoughts on Sam Altman

01:42:58 Scott Rejects the Orthogonality Thesis

01:46:35 Final Thoughts

01:48:48 Lethal Intelligence Clip

01:51:42 Outro

Show Notes

Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0

Scott’s Blog: https://scottaaronson.blog

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com


Liron Shapira