Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

Update: 2024-09-18

Description

This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even the most entertaining, parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:

Ezra Karger on what superforecasters and experts think about existential risks

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights:

  • Luisa’s intro (00:00:00)
  • Why we need forecasts about existential risks (00:00:26)
  • Headline estimates of existential and catastrophic risks (00:02:43)
  • What explains disagreements about AI risks? (00:06:18)
  • Learning more doesn't resolve disagreements about AI risks (00:08:59)
  • A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
  • Cruxes about AI risks (00:15:17)
  • Is forecasting actually useful in the real world? (00:18:24)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong


The 80000 Hours team