Doom Debates!
Author: Liron Shapira
© Liron Shapira
Description
It's time to talk about the end of the world. With your host, Liron Shapira.
lironshapira.substack.com
148 Episodes
David Sirota helped create “Don’t Look Up”, and sometimes it feels like we’re living inside his movie. Does he share my belief that the looming planetary threat is rogue AI?

Sirota is an award-winning investigative journalist, bestselling author, and former speechwriter for Bernie Sanders. He was nominated for an Oscar for co-writing the story of Don’t Look Up.

Find out more about David’s work at The Lever: https://www.levernews.com/

Timestamps
00:00:00 — Cold Open
00:01:20 — Introducing David Sirota
00:04:34 — Why David Fights Against Power and the Concentration of Power
00:13:46 — From NAFTA to AI: The Warnings We Ignored
00:22:05 — How Big Will the AI “Jobpocalypse” Be?
00:25:28 — Superintelligence & the Parallel to Don’t Look Up
00:28:37 — What’s Your P(Doom)™?
00:31:44 — The Speed of the AI Threat
00:36:26 — Society Is Losing a Collective Capacity to Focus
00:38:34 — Is Climate Change David’s Biggest Existential Concern?
00:45:01 — David Reacts to Bernie Sanders’ Data Center Moratorium Proposal
00:49:11 — Can We Build The “Off Button”?
00:52:08 — “Don’t Look Up” x AGI Mashup
00:54:35 — Why There’s Still Hope
00:58:14 — Living in “Don’t Look Up”
00:59:46 — Wrap-Up: Where to Follow Major AI News

Links
Watch Don’t Look Up — https://www.netflix.com/gb/title/81252357
The Lever, investigative news outlet — https://www.levernews.com/
David Sirota on X — https://x.com/davidsirota
David Sirota, Wikipedia — https://en.wikipedia.org/wiki/David_Sirota
Master Plan podcast — https://the.levernews.com/master-plan/
David Sirota, “Hostile Takeover” on Amazon — https://www.amazon.com/Hostile-Takeover-Corruption-Conquered-Government/dp/0307237354
The Three-Body Problem (novel), Wikipedia — https://en.wikipedia.org/wiki/The_Three-Body_Problem_(novel)
WarGames (1983 film), Wikipedia — https://en.wikipedia.org/wiki/WarGames
Adam McKay, Wikipedia — https://en.wikipedia.org/wiki/Adam_McKay
Watch Don’t Look Up — https://www.netflix.com/title/81252357
AI 2027 scenario — https://ai-2027.com/

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏 Get full access to Doom Debates at lironshapira.substack.com/subscribe
The popular debate show Middle Ground by Jubilee invited me on to take the "anti-AI" side back in April 2024. This highlight reel shows my experience of the discussion.

I'm a lifelong techno-optimist, and it's unnatural for me to represent an anti-tech position. It's just that our AI labs are admitting they're on the path to superintelligence they don't know how to control, which implies we're all about to die and the universe will be robbed of all value forever before our kids grow up. Other than that one little consideration, I'm normally pro-tech!

Timestamps
00:00:00 — Why Liron is Worried about AI
00:02:13 — The Nuclear Analogy
00:02:43 — Human Evolution and Neuralink
00:03:14 — The AI Labs’ Own Warnings

Links
Full episode on YouTube — https://www.youtube.com/watch?v=47fGrqzoFr8
Multiple live callers join this month's Q&A as we react to Bernie Sanders and AOC's data center moratorium, the sudden shutdown of Sora 2, and the record-breaking "Stop the AI Race" protest. I explain why Claude Code has me claiming a 700% productivity boost and what that means for takeoff timelines, and we debate instrumental convergence.

Timestamps
00:00:00 — Cold Open
00:01:08 — Can AI Train on Its Own Data to Reach Superintelligence?
00:03:42 — Are We in the Takeoff? 700% Faster with Claude Code
00:04:27 — EJJ Joins: Is Instrumental Convergence Really That Dangerous?
00:16:44 — The Positive Feedback Loop Problem
00:20:09 — S-Risk, Consciousness, and Objective Morality
00:22:27 — Futarchy and Prediction Markets
00:24:31 — Low P(Doom) Arguments and Bayesian Updates
00:31:05 — Lee Cyrano Joins: Superintelligence Won’t Matter for Decades
01:02:45 — Lesaun Joins: Are There Adults in the Room?
01:17:39 — Connor Leahy: “There Are No Adults in the Room”
01:19:51 — Bernie Sanders Calls for a Data Center Moratorium
01:24:23 — Claude Code Anecdotes and Audience Q&A
01:35:49 — The Stop the AI Race Protest in San Francisco
01:41:38 — Known Unknowns and Risk Assessment
01:45:03 — From Waymo to Existential Risk
01:51:28 — Closing: The Road to One Million Subscribers

Links
Quintin Pope vs Liron Shapira debate on Doom Debates — https://lironshapira.substack.com/p/ai-alignment-is-solved-phd-researcher
CAP theorem, Wikipedia — https://en.wikipedia.org/wiki/CAP_theorem
Google Cloud Spanner, Wikipedia — https://en.wikipedia.org/wiki/Spanner_(database)
Newcomb’s problem, Wikipedia — https://en.wikipedia.org/wiki/Newcomb%27s_paradox
Gödel’s incompleteness theorems, Wikipedia — https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
Dr. Quintin Pope is one of the few critics of AI doomerism who is truly fluent in the concepts and arguments. In October 2023 he joined me for a debate in Twitter Spaces where he argued that AI alignment was basically already solved. His “inside view” on machine learning forced me to update my position, but could he knock me off the doom train?

Timestamps
00:00:00 — Cold Open
00:00:43 — Introductions
00:01:22 — Quintin's Opening Statement
00:02:32 — Liron's Opening Statement
00:05:10 — Has RLHF Solved the Alignment Problem?
00:07:52 — AI Capabilities Are Constrained by Training Data
00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?
00:13:13 — Quintin Is More Optimistic Than OpenAI
00:14:16 — What Is ASI in Your Mind?
00:15:57 — AI in 5 Years (2028) & AI Coding Agents
00:19:05 — Continuous or Discontinuous Capability Gains?
00:19:39 — DEBATE: General Intelligence Algorithm in Humans
00:30:02 — The Only Coherent Explanation of Humans Going to the Moon
00:34:01 — Are We "Fully Cooked" as a General Optimizer?
00:35:53 — Common Mistake in Forecasting Superintelligence
00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?
00:48:57 — Does This Disagreement Actually Matter for P(Doom)?
00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?
00:57:26 — The Basin of Attraction for Superintelligence
00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?
01:09:59 — Closing Statements
01:12:40 — Audience Q&A
01:19:35 — Wrap Up

Links
Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek
Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5
Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope
InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT
AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI
AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero
MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero
DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/
Midjourney — https://www.midjourney.com/
DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E
OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/
Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
“Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
“My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
“AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse
Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox
My new interview on Robert Wright's Nonzero Podcast, where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom.

I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org

Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger

Links
Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero
Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651

Timestamps
00:00:00 — Introduction and Today's Topics
00:03:22 — Vibe Coding and the Agentic Revolution
00:08:57 — The Future of Employment
00:17:57 — Agents and What They Can Do
00:27:59 — The "Can It" and "Will It" Framework for AI Doom
00:30:27 — OpenClaw and Liron's Experience with AI Agents
00:36:45 — The Case for Slowing Down AI Development
00:43:28 — Anthropic, the Pentagon, and AI Politics
00:48:37 — AI Safety Leadership Concerns
00:52:06 — Closing and Overtime Tease
Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

Timestamps
00:00:00 — Cold Open
00:00:41 — Welcome Back Noah Smith!
00:01:40 — Noah's P(Doom) Update
00:03:57 — The Chatbot-Genie-God Framework
00:05:14 — What's Your P(Doom)™
00:09:59 — Unpacking Noah's Update
00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
00:23:29 — Society Responds After Growing Pains
00:29:25 — Agentic AI Contributed to Noah's Position
00:31:35 — Should Yudkowsky Get Bayesian Credit?
00:33:59 — Are We Communicating the Right Way with Policymakers?
00:40:16 — Finding Common Ground on AI Policy
00:47:07 — Wrap-Up: People Need to Be More Scared

Links
Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
Noah’s Twitter — https://x.com/noahpinion
Noah’s Substack — https://noahpinion.blog
Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist. She invited me to her weekly symposium to make the case for AI as an existential risk. Can we convince her sharp, skeptical audience that P(Doom) is high?

Subscribe to The Cosmopolitan Globalist: https://claireberlinski.substack.com/
Follow Claire on X: https://x.com/ClaireBerlinski
“If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com

Timestamps
00:00:00 — Introduction
00:02:10 — Welcome and Setting the Stage
00:06:16 — Outcome Steering: The Magic of Intelligence
00:10:40 — Collective Intelligence and the Path to ASI
00:12:53 — The Five-Point Argument
00:14:56 — The Alignment Problem and Control
00:17:56 — The Genie Problem and Recursive Self-Improvement
00:20:38 — Timeline: Five Years or Fifty?
00:26:14 — Social Revolution and Pausing AI
00:28:54 — Energy Constraints and Resource Limits
00:31:23 — Morality, Empathy, and Superintelligence
00:37:45 — How AI Is Actually Built
00:38:31 — Computational Irreducibility and Co-Evolution
00:44:57 — Foom and the Discontinuity Question
00:46:44 — US-China Rivalry and the Arms Race
00:49:36 — The Co-Evolution Argument
00:55:36 — Alignment as Psychoanalysis
00:57:24 — Anthropic’s “Harmless Slop” Paper
01:00:33 — Policy Solutions: The Pause Button
01:04:47 — Military AI and the Singularity
01:07:10 — Cognitive Obstacles and Doom Fatigue
01:09:07 — Why People Don’t Act
01:13:00 — Reaching Representatives and Building a Platform
01:17:12 — Sam Altman and the Manhattan Project Parallel
01:19:14 — Community Building and Pause AI
01:22:07 — Call to Action and Closing
Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

Timestamps
00:00:00 — Cold Open
00:00:48 — Welcoming Back the Returning Champion
00:02:38 — Research Update: What's New in The Last 6 Months
00:04:31 — The Rise of AI Agents
00:07:49 — What's Your P(Doom)?™
00:13:42 — "Brain-Like AGI": The Next Generation of AI
00:17:01 — Can LLMs Ever Match the Human Brain?
00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
00:36:12 — Country of Geniuses in a Data Center
00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
00:54:15 — Post-Training & RLVR: A "Thin Layer" of Real Intelligence
01:02:32 — Consequentialism and the Path to Superintelligence
01:17:02 — Airplanes vs. Rockets: An Analogy for AI
01:24:33 — FOOM and Recursive Self-Improvement

Links
Steven Byrnes’ Website & Research — https://sjbyrnes.com/
Steve’s X — https://x.com/steve47285
Astera Institute — https://astera.org/
“Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
Steve on LessWrong — https://www.lesswrong.com/users/steve2152
AI 2027 — Scenario Timeline — https://ai-2027.com/
Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0
Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession and the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

Timestamps
00:00:00 — Cold Open
00:00:56 — Welcome to the Livestream & Taking Questions from Chat
00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
00:18:30 — The Good Case Scenario
00:26:00 — Hugh Chungus Joins the Stream
00:30:54 — Producer Ori, Liron's Recent Alignment Updates
00:43:47 — We're In an Era of Centaurs
00:47:40 — Noah Smith's Updates on AGI and Alignment
00:48:44 — Co Co Chats Cybersecurity
00:57:32 — The Attacker's Advantage in Offense/Defense Balance
01:02:55 — Anthropic vs The Pentagon
01:06:20 — "We're Getting Frog Boiled"
01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
01:25:00 — A Caller Backs the Penrose Argument
01:34:01 — Greyson Dials In
01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
02:05:15 — More Q&A with Chat
02:14:26 — Closing Thoughts

Links
* Liron on X — https://x.com/liron
* AI 2027 — https://ai-2027.com/
* “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
* “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist
Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real. Who’s right? Tune into this episode and decide where you get off the Doom Train™.

Some highlights of Professor Vardi’s impressive CV:
* University Professor at Rice — a rare distinction that lets him teach in any department.
* 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, making him one of the most decorated computer scientists alive.
* He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
* He has been sounding the alarm on AI-driven job automation for over ten years.
* He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”

Links
* Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
* Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
* Baker Institute for Public Policy — https://www.bakerinstitute.org/
* Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
* Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971

Timestamps
00:00:00 — Cold Open
00:00:54 — Introducing Professor Vardi
00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
00:07:18 — What’s Your P(Doom)™?
00:12:28 — We’re Not Doomed, “We’re Screwed”
00:16:44 — AI’s Impact on Meaning & Purpose
00:27:47 — Let’s Ride the Doom Train™
00:35:43 — The Future of Jobs
00:39:24 — A Country of Geniuses in a Data Center
00:41:04 — Corporations as Superintelligence
00:45:49 — Agency, Consciousness, and the Limits of AI
00:50:07 — The Mad Scientist Scenario
00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
01:03:13 — The WALL-E Meme and Fun Theory
01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
01:06:02 — Wrap-Up + 1-Way Ticket to Doom
Fresh off my debate with Destiny, his Discord community invited me into their voice chat to talk about AI doom. Just like the man himself, his fans are sharp. Let's find out where they get off The Doom Train™.

My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw

Timestamps
00:00:00 — Cold Open
00:00:54 — Liron Joins Destiny’s Discord
00:02:21 — The AI Doom Premise
00:03:27 — Defining Intelligence and Is An LLM Really AI?
00:07:12 — Will AI Become Uncontrollable?
00:12:44 — The AI Alignment Problem
00:24:11 — The Difficulty of Pausing AI
00:26:01 — AI vs The Human Brain
00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements
Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change, not rogue AI, as an existential threat? UChicago Professor Daniel Holz, a top physicist who helps set the Doomsday Clock, joins me to debate society’s top existential risks.

Timestamps
00:00:00 — Cold Open
00:00:51 — Introducing Professor Holz
00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!
00:04:37 — What's Your P(Doom)?™
00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation
00:12:07 — How We All Die: Nuclear vs Climate vs AI
00:21:08 — Nuclear Close Calls from The Cold War
00:28:38 — History of The Doomsday Clock
00:30:18 — The Threat of Biological Risks Like Mirror Life
00:33:40 — Professor Holz’s Position on AI Misalignment Risk
00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?
00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)
01:06:22 — The State of Academic Research on AI Safety & Existential Risks
01:12:32 — The Case for Pausing AI Development
01:17:11 — Debate: Is Climate Change an Existential Threat?
01:28:48 — Call to Action: How to Reduce Our Collective Threat

Links
Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
UChicago Magazine feature on Prof. Holz’s class, “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
Learn more about pausing frontier AI development from PauseAI — https://pauseai.info
Liron was invited to give a presentation to some of the most promising creators in AI safety communications. Get a behind-the-scenes look at Doom Debates and how the channel has grown so quickly.

Learn more about the Frame Fellowship: https://framefellowship.com/

Timestamps
00:00:00 — Liron Tees Up the Presentation
00:02:28 — Liron’s Frame Fellowship Presentation
00:06:03 — Introducing Doom Debates
00:07:44 — Meeting the Frame Fellows
00:19:38 — Why I Started Doom Debates
00:30:20 — Handling Hate and Criticism
00:31:56 — The Talent Stack
00:39:34 — Finding Your Unique Niche
00:40:58 — Q&A
00:42:04 — On Funding the Show
00:48:05 — Audience Demographics & Gender Strategy
00:51:17 — How to Communicate AI Risk Effectively
00:56:22 — Social Media Strategy
00:58:10 — Closing Remarks
Destiny has racked up millions of views for his sharp takes on political and cultural news. Now, I finally get to ask him about a topic he’s been agnostic on: Will AI end humanity? Come ride with us on the Doom Train™ 🚂

Timestamps
00:00:00 — Teaser
00:01:16 — Welcoming Destiny
00:02:54 — What’s Your P(Doom)?™
00:04:46 — 2017 vs 2026: Destiny’s Views on AI
00:11:04 — AI Could Vastly Surpass Human Intelligence
00:16:02 — Can AI Doom Us?
00:18:42 — Intelligence Doesn’t Guarantee Morality
00:22:18 — The Vibes-Based Case Against Doom
00:29:58 — The Human Brain Is Inefficient
00:35:17 — Does Every Intelligence in the Universe Self-Destruct via AI?
00:37:28 — Destiny Turns the Tables: Where Does Liron Get Off The Doom Train™?
00:46:07 — Will a Warning Shot Cause Society to Develop AI Safely?
00:54:10 — Roko’s Basilisk and the AI Box Problem
00:59:37 — Will Destiny Update His P(Doom)?™
01:04:19 — Closing Thoughts

Links
Destiny's YouTube — https://www.youtube.com/@destiny
Destiny's X account — https://x.com/TheOmniLiberal
Marc Andreessen saying AI isn’t dangerous because “it is math” — https://a16z.com/ai-will-save-the-world/
Will Smith eating spaghetti AI video — https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti
Roko’s Basilisk on LessWrong — https://www.lesswrong.com/tag/rokos-basilisk
Eliezer Yudkowsky’s AI Box Experiment — https://www.yudkowsky.net/singularity/aibox
On the eve of superintelligence, real AI safety is a nonexistent field. I break down why in my latest essay, “The Facade of AI Safety Will Crumble.”

This is a podcast reading of a recent Substack post: https://lironshapira.substack.com/p/the-facade-of-ai-safety-will-crumble
Elon Musk just made a stunning admission about the insane future he’s steering us toward.

In a new interview with Dwarkesh Patel and John Collison on the Cheeky Pint podcast, Elon said that humanity can’t expect to be “in charge” of AI for long, because humans will soon hold only 1% of the combined total human+AI intelligence. Then he claimed to have a plan to build AI overlords that will naturally support humanity's flourishing.

In this mini episode, I react to Elon's remarks and expose why his plan for humanity's survival in the age of AI is dangerously flimsy.

My original tweet: https://x.com/liron/status/2020906862397489492
Clip source: Elon Musk – "In 36 months, the cheapest place to put AI will be space”
California gubernatorial candidate Zoltan Istvan reveals his P(Doom) and makes the case for universal basic income and radical life extension.

Timestamps
00:00:00 — Teaser
00:00:50 — Meet Zoltan, Democratic Candidate for California Governor
00:08:30 — The 2026 California Governor's Race
00:12:50 — Zoltan's Platform Is Automated Abundance
00:19:45 — What's Your P(Doom)™
00:28:26 — Campaigning on Existential Risk
00:32:36 — Does Zoltan Support a Global AI Pause?
00:48:39 — Exploring His Platform: Education, Crime, and Affordability
01:08:55 — Exploring His Platform: Super Cities, Space, and Longevity
01:13:00 — Closing Thoughts

Links
Zoltan Istvan’s Campaign for California Governor — zoltanistvan2026.com
The Transhumanist Wager by Zoltan Istvan — https://www.amazon.com/Transhumanist-Wager-Zoltan-Istvan/dp/0988616114
Wired article on the “Bunker” party — https://www.wired.com/story/ai-risk-party-san-francisco/
PauseAI — pauseai.info
SL4 Mailing List Archive — sl4.org/archive
Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics.

Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom. He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. Multiple Stage Fallacy).

We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines”: superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration.

Despite our different forecasts (my 50% vs his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause. While Matthew’s case for low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points, we just want to better predict how the intelligence explosion will play out.

Timestamps
00:00:00 — Teaser
00:00:35 — Bentham’s Bulldog Returns to Doom Debates
00:05:43 — Higher-Order Evidence: Why Skepticism is Warranted
00:11:06 — What’s Your P(Doom)™
00:14:38 — The “Multiple Stage Fallacy” Objection
00:21:48 — The Risk of Warring AIs vs. Misalignment
00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
00:38:59 — Alignment by Default via Reinforcement Learning
00:46:02 — The “Goal Engine” Hypothesis
00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
01:00:17 — Winograd Schemas & The Fragility of Value
01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
01:16:16 — The Discontinuity of Unstoppable AI
01:23:53 — Exfiltration: Running Superintelligence on a Laptop
01:31:37 — Evolution Analogy: Selection Pressures for Alignment
01:39:08 — Commercial Utility as a Force for Constraints
01:46:34 — Can You Isolate the “Goal-to-Action” Module?
01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
02:04:01 — Moral Realism and Missing Out on Cosmic Value
02:11:44 — The Paradox of AI Solving the Alignment Problem
02:19:11 — Policy Agreements: Global Pauses and China
02:26:11 — Outro: PauseCon DC 2026 Promo

Links
Bentham’s Bulldog Official Substack — https://benthams.substack.com
The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
Apply to PauseCon DC 2026 here or via https://pauseai-us.org
Forethought Institute’s paper: Preparing for the Intelligence Explosion
Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
Scott Alexander on the Coffeepocalypse Argument
Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI). In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".

Timestamps
0:00:00 — Cold Open
0:00:47 — How Harlan Stewart Got Into AI Safety
0:02:30 — What's Your P(Doom)?™
0:04:09 — The "Doomer" Label
0:06:13 — Overall Reaction to Dario's Essay: The Missing Mood
0:09:15 — The Rosy Take on Dario's Essay
0:10:42 — Character Assassination & Low Blows
0:13:39 — Dario Amodei is Shifting the Overton Window in the Wrong Direction
0:15:04 — Object-Level vs. Meta-Level Criticisms
0:17:07 — The "Inevitability" Strawman Used by Dario
0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
0:22:38 — Dismissing Critics as "Too Theoretical"
0:43:18 — The Problem with Psychoanalyzing AI
0:56:12 — "Intellidynamics" & Reflective Stability
1:07:12 — Why Is Dario Dismissing an AI Pause?
1:11:45 — Final Takeaways

Links
Harlan's X — https://x.com/HumanHarlan
"The Adolescence of Technology" by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Check out the new Doom Debates studio in this Q&A with special guest Producer Ori! Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition.

Originally streamed on Saturday, January 24.

Timestamps
00:00:00 — Cold Open
00:00:26 — Introduction and Studio Tour
00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
00:18:15 — Dario Amodei, Davos, and AI Pause
00:27:42 — Producer Ori Joins: Locations and Vibes
00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
00:45:01 — Ethics of Investing in or Working for AI Labs
00:54:23 — Defining Superintelligence and Human Limitations
01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
01:53:15 — Marketing Doom Debates and Guest Outreach
01:56:45 — Live Call with Jonas: Scenarios for Survival
02:05:52 — Conclusion and Mission Statement

Links
Liron's X Post about Destiny — https://x.com/liron/status/2015144778652905671?s=20
Why Laws, Treaties, and Regulations Won't Save Us from AI | For Humanity Ep. 77 — https://www.youtube.com/watch?v=IUX00c5x2UM

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
