Cold Takes Audio

Author: coldtakes

Subscribed: 74 · Played: 1,338


Amateur read-throughs of Cold Takes blog posts, for those who prefer listening to reading. Available on Apple Podcasts, Spotify, Google Podcasts, and anywhere else you listen to podcasts by searching "Cold Takes Audio."
50 Episodes
Major AI companies can increase or reduce global catastrophic risks.
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.
For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one).
Hypothetical stories where the world tries, but fails, to avert a global disaster.
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?
A few ways we might get very powerful AI systems to be safe.
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix once it comes up.
We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine.
With great power comes, er, unclear responsibility and zero accountability.
How big a deal could AI misalignment be? About as big as it gets.
Investigating important topics with laziness, impatience, hubris and self-preservation.
What kind of governance system should you set up, if you're starting from scratch and can do it however you want?
Preventing extinction would be good — but "saving 8 billion lives" good, or "saving a trillion trillion trillion lives" good?
A day in the life of trying to complete a self-assigned project with no clear spec or goal.
Learning By Writing

First in a series of dialogues on utilitarianism and "future-proof ethics."
Future-Proof Ethics

Ethics based on common sense seems to have a horrible historical track record. Can we do better?