Major AI companies can increase or reduce global catastrophic risks. https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/

People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well. https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/

For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one). https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/

Hypothetical stories where the world tries, but fails, to avert a global disaster. https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe

An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI. https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/

Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/

A few ways we might get very powerful AI systems to be safe. https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

Four analogies for why "We don't see any misbehavior by this AI" isn't enough. https://www.cold-takes.com/ai-safety-seems-hard-to-measure/

Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up. https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/

We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine. https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/

With great power comes, er, unclear responsibility and zero accountability. https://www.cold-takes.com/nonprofit-boards-are-weird-2/

How big a deal could AI misalignment be? About as big as it gets. https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

Investigating important topics with laziness, impatience, hubris and self-preservation. https://www.cold-takes.com/useful-vices-for-wicked-problems/

What kind of governance system should you set up, if you're starting from scratch and can do it however you want? https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/

Preventing extinction would be good - but "saving 8 billion lives" good or "saving a trillion trillion trillion lives" good? https://www.cold-takes.com/debating-myself-on-whether-extra-lives-lived-are-as-good-as-deaths-prevented/

A day in the life of trying to complete a self-assigned project with no clear spec or goal. https://www.cold-takes.com/the-wicked-problem-experience/

First in a series of dialogues on utilitarianism and "future-proof ethics." https://www.cold-takes.com/defending-one-dimensional-ethics/

Ethics based on common sense seems to have a horrible historical track record. Can we do better? https://www.cold-takes.com/future-proof-ethics/