Cold Takes Audio

Amateur read-throughs of blog posts on Cold-Takes.com, for those who prefer listening to reading. Available on Apple, Spotify, Google Podcasts, and anywhere else you listen to podcasts by searching Cold Takes Audio.

What AI companies can do today to help with the most important century

Major AI companies can increase or reduce global catastrophic risks. https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/

02-20
18:27

Jobs that can help with the most important century

People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well. https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/

02-10
30:42

Spreading messages to help with the most important century

For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one). https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/

01-25
20:09

How we could stumble into AI catastrophe

Hypothetical stories where the world tries, but fails, to avert a global disaster. https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe

01-13
28:54

Transformative AI issues (not just misalignment): an overview

An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/

01-05
25:03

Racing Through a Minefield: the AI Deployment Problem

Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/

12-22
21:04

High-level hopes for AI alignment

A few ways we might get very powerful AI systems to be safe. https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

12-15
23:49

AI safety seems hard to measure

Four analogies for why "We don't see any misbehavior by this AI" isn't enough. https://www.cold-takes.com/ai-safety-seems-hard-to-measure/

12-08
22:22

Why Would AI "Aim" To Defeat Humanity?

Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix once it comes up. https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/

11-29
46:14

The Track Record of Futurists Seems ... Fine

We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine. https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/

06-30
21:21

Nonprofit Boards are Weird

With great power comes, er, unclear responsibility and zero accountability. https://www.cold-takes.com/nonprofit-boards-are-weird-2/

06-23
25:27

AI Could Defeat All Of Us Combined

How big a deal could AI misalignment be? About as big as it gets. https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

06-07
23:56

Useful Vices for Wicked Problems

Investigating important topics with laziness, impatience, hubris and self-preservation. https://www.cold-takes.com/useful-vices-for-wicked-problems/

04-12
25:19

Ideal governance (for companies, countries and more)

What kind of governance system should you set up, if you're starting from scratch and can do it however you want? https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/

04-05
17:33

Debating myself on whether “extra lives lived” are as good as “deaths prevented”

Preventing extinction would be good - but "saving 8 billion lives" good or "saving a trillion trillion trillion lives" good? https://www.cold-takes.com/debating-myself-on-whether-extra-lives-lived-are-as-good-as-deaths-prevented/

03-29
20:20

The Wicked Problem Experience

A day in the life of trying to complete a self-assigned project with no clear spec or goal. https://www.cold-takes.com/the-wicked-problem-experience/

03-02
14:50

Defending One-Dimensional Ethics

First in a series of dialogues on utilitarianism and "future-proof ethics." https://www.cold-takes.com/defending-one-dimensional-ethics/

02-15
26:00

Future-Proof Ethics

Ethics based on common sense seems to have a horrible historical track record. Can we do better? https://www.cold-takes.com/future-proof-ethics/

02-02
27:09