AI Safety with Shazeda Ahmed

Update: 2024-04-09
Description

Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have seized the tech-world spotlight. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” for humanity as a whole. And yet, optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? And whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or investor billionaires? And can philosophy guide discussions about AI toward the right thing to do?


Check out the episode's extended cut here!


Nick Bostrom, Superintelligence
Adrian Daub, What Tech Calls Thinking
Virginia Eubanks, Automating Inequality
Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
Matthew Jones and Chris Wiggins, How Data Happened
William MacAskill, What We Owe the Future
Toby Ord, The Precipice
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
Peter Singer, Animal Liberation
Amia Srinivasan, “Stop the Robot Apocalypse”

Support the show

Patreon | patreon.com/overthinkpodcast
Website | overthinkpodcast.com
Instagram & Twitter | @overthink_pod
Email | dearoverthink@gmail.com
YouTube | Overthink podcast



Hosted by Ellie Anderson, Ph.D. and David Peña-Guzmán, Ph.D.