Description
Robin Hanson restates his views on AI risk.
https://www.overcomingbias.com/p/ai-risk-again
Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck
2023-03-20 · 19:26
Larks — A Windfall Clause for CSO could worsen AI race dynamics
2023-03-20 · 14:35
Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public
2023-03-20 · 07:15
Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role
2023-03-20 · 57:30
Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill
2023-03-14 · 07:10
Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill
2023-03-14 · 05:32
Hayden Wilkinson — Global priorities research: Why, how, and what have we learned?
2023-03-13 · 44:42
Piper — What should be kept off-limits in a virology lab?
2023-03-13 · 07:49
Ezra Klein — This changes everything
2023-03-13 · 10:42
Victoria Krakovna — Near-term motivation for AGI alignment
2023-03-11 · 04:39
Anthropic — Core views on AI safety: when, why, what, and how
2023-03-11 · 38:11
Noah Smith — LLMs are not going to destroy the human race
2023-03-08 · 16:46
Andy Greenberg — A privacy hero's final wish: an institute to redirect AI's future
2023-03-08 · 12:58
Noy & Zhang — Experimental evidence on the productivity effects of generative artificial intelligence
2023-03-04 · 24:58
Robin Hanson — AI risk, again
2023-03-04 · 08:29
Williams & Kane — Preventing the Misuse of DNA Synthesis
2023-03-04 · 41:09
Kevin Collier — What is consciousness? ChatGPT and advanced AI might redefine our answer
2023-03-02 · 07:28
Landgrebe, Barnes & Hobbhahn — Reflection Mechanisms as an Alignment Target - Attitudes on “near-term” AI
2023-03-02 · 14:28
Risto Uuk — The EU AI Act Newsletter #24
2023-03-02 · 06:14
Noam Kolt — Algorithmic black swans
2023-03-01 · 01:20:11