Doom Debates!

Author: Liron Shapira


Description

It's time to talk about the end of the world. With your host, Liron Shapira.

lironshapira.substack.com
140 Episodes
Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession, the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

Timestamps
00:00:00 — Cold Open
00:00:56 — Welcome to the Livestream & Taking Questions from Chat
00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
00:18:30 — The Good Case Scenario
00:26:00 — Hugh Chungus Joins the Stream
00:30:54 — Producer Ori, Liron's Recent Alignment Updates
00:43:47 — We're In an Era of Centaurs
00:47:40 — Noah Smith's Updates on AGI and Alignment
00:48:44 — Co Co Chats Cybersecurity
00:57:32 — The Attacker's Advantage in Offense/Defense Balance
01:02:55 — Anthropic vs The Pentagon
01:06:20 — "We're Getting Frog Boiled"
01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
01:25:00 — A Caller Backs the Penrose Argument
01:34:01 — Greyson Dials In
01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
02:05:15 — More Q&A with Chat
02:14:26 — Closing Thoughts

Links
* Liron on X — https://x.com/liron
* AI 2027 — https://ai-2027.com/
* “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
* “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏 Get full access to Doom Debates at lironshapira.substack.com/subscribe
Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real. Who’s right? Tune into this episode and decide where you get off the Doom Train™.

Some highlights of Professor Vardi’s impressive CV:
* University Professor at Rice — a rare distinction that lets him teach in any department.
* 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.
* He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
* He has been sounding the alarm on AI-driven job automation for over ten years.
* He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”

Links
* Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
* Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
* Baker Institute for Public Policy — https://www.bakerinstitute.org/
* Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
* Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971

Timestamps
00:00:00 — Cold Open
00:00:54 — Introducing Professor Vardi
00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
00:07:18 — What’s Your P(Doom)™?
00:12:28 — We’re Not Doomed, “We’re Screwed”
00:16:44 — AI’s Impact on Meaning & Purpose
00:27:47 — Let’s Ride the Doom Train™
00:35:43 — The Future of Jobs
00:39:24 — A Country of Geniuses in a Data Center
00:41:04 — Corporations as Superintelligence
00:45:49 — Agency, Consciousness, and the Limits of AI
00:50:07 — The Mad Scientist Scenario
00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
01:03:13 — The WALL-E Meme and Fun Theory
01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
01:06:02 — Wrap-Up + 1-Way Ticket to Doom
Fresh off my debate with Destiny, his Discord community invited me into their voice chat to talk about AI doom. Just like the man himself, his fans are sharp. Let's find out where they get off The Doom Train™.

My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw

Timestamps
00:00:00 — Cold Open
00:00:54 — Liron Joins Destiny’s Discord
00:02:21 — The AI Doom Premise
00:03:27 — Defining Intelligence and Is An LLM Really AI?
00:07:12 — Will AI Become Uncontrollable?
00:12:44 — The AI Alignment Problem
00:24:11 — The Difficulty of Pausing AI
00:26:01 — AI vs The Human Brain
00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements
Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat? UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society’s top existential risks.

Timestamps
00:00:00 — Cold Open
00:00:51 — Introducing Professor Holz
00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!
00:04:37 — What's Your P(Doom)?™
00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation
00:12:07 — How We All Die: Nuclear vs Climate vs AI
00:21:08 — Nuclear Close Calls from The Cold War
00:28:38 — History of The Doomsday Clock
00:30:18 — The Threat of Biological Risks Like Mirror Life
00:33:40 — Professor Holz’s Position on AI Misalignment Risk
00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?
00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)
01:06:22 — The State of Academic Research on AI Safety & Existential Risks
01:12:32 — The Case for Pausing AI Development
01:17:11 — Debate: Is Climate Change an Existential Threat?
01:28:48 — Call to Action: How to Reduce Our Collective Threat

Links
Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
UChicago Magazine features Prof. Holz’s class, “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
Learn more about pausing frontier AI development from PauseAI — https://pauseai.info
Liron was invited to give a presentation to some of the most promising creators in AI safety communications. Get a behind-the-scenes look at Doom Debates and how the channel has grown so quickly.

Learn more about the Frame Fellowship: https://framefellowship.com/

Timestamps
00:00:00 — Liron tees up the presentation
00:02:28 — Liron’s Frame Fellowship Presentation
00:06:03 — Introducing Doom Debates
00:07:44 — Meeting the Frame Fellows
00:19:38 — Why I Started Doom Debates
00:30:20 — Handling Hate and Criticism
00:31:56 — The Talent Stack
00:39:34 — Finding Your Unique Niche
00:40:58 — Q&A
00:42:04 — On Funding the Show
00:48:05 — Audience Demographics & Gender Strategy
00:51:17 — How to Communicate AI Risk Effectively
00:56:22 — Social Media Strategy
00:58:10 — Closing Remarks
Destiny has racked up millions of views for his sharp takes on political and cultural news. Now, I finally get to ask him about a topic he’s been agnostic on: Will AI end humanity? Come ride with us on the Doom Train™ 🚂

Timestamps
00:00:00 — Teaser
00:01:16 — Welcoming Destiny
00:02:54 — What’s Your P(Doom)?™
00:04:46 — 2017 vs 2026: Destiny’s views on AI
00:11:04 — AI could vastly surpass human intelligence
00:16:02 — Can AI doom us?
00:18:42 — Intelligence doesn’t guarantee morality
00:22:18 — The vibes-based case against doom
00:29:58 — The human brain is inefficient
00:35:17 — Does every intelligence in the universe self-destruct via AI?
00:37:28 — Destiny turns the tables: Where does Liron get off The Doom Train™?
00:46:07 — Will a warning shot cause society to develop AI safely?
00:54:10 — Roko’s Basilisk, the AI box problem
00:59:37 — Will Destiny update his P(Doom)?™
01:04:19 — Closing thoughts

Links
Destiny's YouTube — https://www.youtube.com/@destiny
Destiny's X account — https://x.com/TheOmniLiberal
Marc Andreessen saying AI isn’t dangerous because “it is math” — https://a16z.com/ai-will-save-the-world/
Will Smith eating spaghetti AI video — https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti
Roko’s Basilisk on LessWrong — https://www.lesswrong.com/tag/rokos-basilisk
Eliezer Yudkowsky’s AI Box Experiment — https://www.yudkowsky.net/singularity/aibox
On the eve of superintelligence, real AI safety is a nonexistent field. I break down why in my latest essay, The Facade of AI Safety Will Crumble.

This is a podcast reading of a recent Substack post: https://lironshapira.substack.com/p/the-facade-of-ai-safety-will-crumble
Elon Musk just made a stunning admission about the insane future he’s steering us toward. In a new interview with Dwarkesh Patel and John Collison on the Cheeky Pint podcast, Elon said that humanity can’t expect to be “in charge” of AI for long, because humans will soon only have 1% of the combined total human+AI intelligence. Then, he claimed to have a plan to build AI overlords that will naturally support humanity's flourishing.

In this mini episode, I react to Elon's remarks and expose why his plan for humanity's survival in the age of AI is dangerously flimsy.

My original tweet: https://x.com/liron/status/2020906862397489492
Clip source: Elon Musk – "In 36 months, the cheapest place to put AI will be space”
California gubernatorial candidate Zoltan Istvan reveals his P(Doom) and makes the case for universal basic income and radical life extension.

Timestamps
00:00:00 — Teaser
00:00:50 — Meet Zoltan, Democratic Candidate for California Governor
00:08:30 — The 2026 California Governor's Race
00:12:50 — Zoltan's Platform Is Automated Abundance
00:19:45 — What's Your P(Doom)™
00:28:26 — Campaigning on Existential Risk
00:32:36 — Does Zoltan Support a Global AI Pause?
00:48:39 — Exploring His Platform: Education, Crime, and Affordability
01:08:55 — Exploring His Platform: Super Cities, Space, and Longevity
01:13:00 — Closing Thoughts

Links
Zoltan Istvan’s Campaign for California Governor – zoltanistvan2026.com
The Transhumanist Wager by Zoltan Istvan – https://www.amazon.com/Transhumanist-Wager-Zoltan-Istvan/dp/0988616114
Wired Article on the “Bunker” Party – https://www.wired.com/story/ai-risk-party-san-francisco/
PauseAI – pauseai.info
SL4 Mailing List Archive – sl4.org/archive
Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics. Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom. He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. Multiple Stage Fallacy).

We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines” — superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration.

Despite our different forecasts (my 50% vs his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause. While Matthew’s case for low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points, we just want to better predict how the intelligence explosion will play out.

Timestamps
00:00:00 — Teaser
00:00:35 — Bentham’s Bulldog Returns to Doom Debates
00:05:43 — Higher-Order Evidence: Why Skepticism is Warranted
00:11:06 — What’s Your P(Doom)™
00:14:38 — The “Multiple Stage Fallacy” Objection
00:21:48 — The Risk of Warring AIs vs. Misalignment
00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
00:38:59 — Alignment by Default via Reinforcement Learning
00:46:02 — The “Goal Engine” Hypothesis
00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
01:00:17 — Winograd Schemas & The Fragility of Value
01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
01:16:16 — The Discontinuity of Unstoppable AI
01:23:53 — Exfiltration: Running Superintelligence on a Laptop
01:31:37 — Evolution Analogy: Selection Pressures for Alignment
01:39:08 — Commercial Utility as a Force for Constraints
01:46:34 — Can You Isolate the “Goal-to-Action” Module?
01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
02:04:01 — Moral Realism and Missing Out on Cosmic Value
02:11:44 — The Paradox of AI Solving the Alignment Problem
02:19:11 — Policy Agreements: Global Pauses and China
02:26:11 — Outro: PauseCon DC 2026 Promo

Links
Bentham’s Bulldog Official Substack — https://benthams.substack.com
The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
Apply to PauseCon DC 2026 here or via https://pauseai-us.org
Forethought Institute’s paper: Preparing for the Intelligence Explosion
Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
Scott Alexander on the Coffeepocalypse Argument
Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI). In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".

Timestamps
0:00:00 — Cold Open
0:00:47 — How Harlan Stewart Got Into AI Safety
0:02:30 — What’s Your P(Doom)?™
0:04:09 — The “Doomer” Label
0:06:13 — Overall Reaction to Dario’s Essay: The Missing Mood
0:09:15 — The Rosy Take on Dario’s Essay
0:10:42 — Character Assassination & Low Blows
0:13:39 — Dario Amodei is Shifting the Overton Window in The Wrong Direction
0:15:04 — Object-Level vs. Meta-Level Criticisms
0:17:07 — The “Inevitability” Strawman Used by Dario
0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
0:22:38 — Dismissing Critics as “Too Theoretical”
0:43:18 — The Problem with Psychoanalyzing AI
0:56:12 — “Intellidynamics” & Reflective Stability
1:07:12 — Why Is Dario Dismissing an AI Pause?
1:11:45 — Final Takeaways

Links
Harlan’s X — https://x.com/HumanHarlan
“The Adolescence of Technology” by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology
Check out the new Doom Debates studio in this Q&A with special guest Producer Ori! Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition. Originally streamed on Saturday, January 24.

Timestamps
00:00:00 — Cold Open
00:00:26 — Introduction and Studio Tour
00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
00:18:15 — Dario Amodei, Davos, and AI Pause
00:27:42 — Producer Ori Joins: Locations and Vibes
00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
00:45:01 — Ethics of Investing in or Working for AI Labs
00:54:23 — Defining Superintelligence and Human Limitations
01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
01:53:15 — Marketing Doom Debates and Guest Outreach
01:56:45 — Live Call with Jonas: Scenarios for Survival
02:05:52 — Conclusion and Mission Statement

Links
Liron’s X Post about Destiny — https://x.com/liron/status/2015144778652905671?s=20
Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77 — https://www.youtube.com/watch?v=IUX00c5x2UM
Audrey Tang was the youngest minister in Taiwanese history. Now she's working to align AI with democratic principles as Taiwan's Cyber Ambassador. In this debate, I probe her P(Doom) and stress-test her vision for safe AI development.

Timestamps
00:00:00 — Episode Preview
00:01:43 — Introducing Audrey Tang, Cyber Ambassador of Taiwan
00:07:20 — Being Taiwan’s First Digital Minister
00:17:19 — What's Your P(Doom)?™
00:21:10 — Comparing AI Risk to Nuclear Risk
00:22:53 — The Statement on AI Extinction Risk
00:27:29 — Doomerism as a Hyperstition
00:30:51 — Audrey Explains Her Vision of "Plurality"
00:37:17 — Audrey Explains Her Principles of Civic Ethics, The "6-Pack of Care"
00:45:58 — AGI Timelines: "It's Already Here"
00:54:41 — The Apple Analogy
01:03:09 — What If AI FOOMs?
01:11:19 — What AI Can vs What AI Will Do
01:15:20 — Lessons from COVID-19
01:19:59 — Is Society Ready? Audrey Reflects on a Personal Experience with Mortality
01:23:50 — AI Alignment Cannot Be Top-Down
01:34:04 — AI-as-Mother vs AI-as-Gardener
01:37:26 — China and the Geopolitics of AI Chip Manufacturing in Taiwan
01:40:47 — Red Lines, International Treaties, and the Off Button
01:48:26 — Debate Wrap-Up

Links
Plurality: The Future of Collaborative Technology and Democracy by Glen Weyl and Audrey Tang — https://www.amazon.com/Plurality-Future-Collaborative-Technology-Democracy/dp/B0D98RPKCK
Audrey’s X — https://x.com/audreyt
Audrey’s Wikipedia — https://en.wikipedia.org/wiki/Audrey_Tang
I joined Steve Bannon’s War Room Battleground to talk about AI doom. Hosted by Joe Allen, we cover AGI timelines, raising kids with a high P(Doom), and why improving our survival odds requires a global wake-up call.

Timestamps
00:00:00 — Episode Preview
00:01:17 — Joe Allen opens the show and introduces Liron Shapira
00:04:06 — Liron: What’s Your P(Doom)?
00:05:37 — How Would an AI Take Over?
00:07:20 — The Timeline to AGI
00:08:17 — Benchmarks & AI Passing the Turing Test
00:14:43 — Liron Is Typically a Techno-Optimist
00:18:00 — Raising a Family with a High P(Doom)
00:23:48 — Mobilizing a Grassroots AI Survival Campaign
00:26:45 — Final Message: A Wake-Up Call
00:29:23 — Joe Allen’s Closing Message to the War Room Posse

Links
Joe’s Substack — https://substack.com/@joebot
Joe’s Twitter — https://x.com/JOEBOTxyz
Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html
Economist Noah Smith is the author of Noahpinion, one of the most popular Substacks in the world. Far from worrying about human extinction from superintelligent AI, Noah is optimistic AI will create a world where humans still have plentiful, high-paying jobs! In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill us with more confidence about humanity’s rapidly approaching AI future.

Timestamps
00:00:00 - Episode Preview
00:01:41 - Introducing Noah Smith
00:03:19 - What’s Your P(Doom)™
00:04:40 - Good vs. Bad Transhumanist Outcomes
00:15:17 - Catastrophe vs. Total Extinction
00:17:15 - Mechanisms of Doom
00:27:16 - The AI Persuasion Risk
00:36:20 - Instrumental Convergence vs. Peace
00:53:08 - The “One AI” Breakout Scenario
01:01:18 - The “Stoner AI” Theory
01:08:49 - Importance of Reflective Stability
01:14:50 - Orthogonality & The Waymo Argument
01:21:18 - Comparative Advantage & Jobs
01:27:43 - Wealth Distribution & Robot Lords
01:34:34 - Supply Curves & Resource Constraints
01:43:38 - Policy of Reserving Human Resources
01:48:28 - Closing: The Case for Optimism

Links
Noah’s Substack — https://noahpinion.blog
“Plentiful, high-paying jobs in the age of AI” — https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
“My thoughts on AI safety” — https://www.noahpinion.blog/p/my-thoughts-on-ai-safety
Noah’s Twitter — https://x.com/noahpinion
In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists” a.k.a. “e/acc” folks on an X Space.

E/accs think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that’d be an illegal exception to the rule that accelerating new technology always has the highest expected value for humanity. As you know, I disagree — I think doom is an extremely likely and imminent possibility.

This debate took place 9 months before I started Doom Debates, and was one of the experiences that made me realize debating AI doom was my calling. It’s also the only time Beff Jezos has ever not been too chicken to debate me.

Timestamps
00:00:00 — Liron’s New Intro
00:04:15 — Debate Starts Here: Litigating FOOM
00:06:18 — Defining the Recursive Feedback Loop
00:15:05 — The Two-Part Doomer Thesis
00:26:00 — When Does a Tool Become an Agent?
00:44:02 — The Argument for Convergent Architecture
00:46:20 — Mathematical Objections: Ergodicity and Eigenvalues
01:03:46 — Bayeslord Enters: Why Speed Doesn’t Matter
01:12:40 — Beff Jezos Enters: Physical Priors vs. Internet Data
01:13:49 — The 5% Probability of Doom by GPT-5
01:20:09 — Chaos Theory and Prediction Limits
01:27:56 — Algorithms vs. Hardware Constraints
01:35:20 — Galactic Resources vs. Human Extermination
01:54:13 — The Intelligence Bootstrapping Script Scenario
02:02:13 — The 10-Megabyte AI Virus Debate
02:11:54 — The Nuclear Analogy: Noise Canceling vs. Rubble
02:37:39 — Controlling Intelligence: The Roman Empire Analogy
02:44:53 — Real-World Latency and API Rate Limits
03:03:11 — The Difficulty of the Off Button
03:24:47 — Why Liron is “e/acc at Heart”
AGI timelines, offense/defense balance, evolution vs engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!

Timestamps
00:00 Trailer
03:10 Is My P(Doom) Lowering?
11:29 First Caller: AI Offense vs Defense Balance
16:50 Superintelligence Skepticism
25:05 Agency and AI Goals
29:06 Communicating AI Risk
36:35 Attack vs Defense Equilibrium
38:22 Can We Solve Outer Alignment?
54:47 What is Your P(Pocket Nukes)?
1:00:05 The “Shoggoth” Metaphor Is Outdated
1:06:23 Should I Reframe the P(Doom) Question?
1:12:22 How YOU Can Make a Difference
1:24:43 Can AGI Beat Biology?
1:39:22 Agency and Convergent Goals
1:59:56 Viewer Poll: What Content Should I Make?
2:26:15 AI Warning Shots
2:32:12 More Listener Questions: Debate Tactics, Getting a PhD, Specificity
2:53:53 Closing Thoughts

Links
Support PauseAI — https://pauseai.info/
Support PauseAI US — https://www.pauseai-us.org/
Support LessWrong / Lightcone Infrastructure — LessWrong is fundraising!
Support MIRI — MIRI’s 2025 Fundraiser
Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat ⚫. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away.

We spar over the hard limits of current models: Devin insists LLMs are hitting a wall, relying entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities. Devin also argues for decentralization by claiming that nuclear proliferation is safer than centralized control. We end on a massive timeline split: I see superintelligence in a decade, while he believes we’re a thousand years away from being able to “grow” computers that are truly intelligent.

Timestamps
00:00:00 Episode Preview
00:01:03 Intro: Snowboarder to Coder
00:03:30 "I Do Not Have a P(Doom)"
00:06:47 Nuclear Proliferation & Centralized Control
00:10:11 The "Spotify Quality" House Analogy
00:17:15 Ideal Geopolitics: Decentralized Power
00:25:22 Why AI Can't "Fly Away"
00:28:20 The Long Addition Test: Native or Tool?
00:38:26 Is Non-Determinism a Feature or a Bug?
00:52:01 The Impossibility of Mind Uploading
00:57:46 "Growing" Computers from Cells
01:02:52 Timelines: 10 Years vs. 1,000 Years
01:11:40 "Plastic Bag Ghosts" & Builder Intuition
01:13:17 Summary of the Debate
01:15:30 Closing Thoughts

Links
Devin’s Twitter — https://x.com/devinjelliot
Dr. Michael Timothy Bennett, Ph.D, is an award-winning young researcher who has developed a new formal framework for understanding intelligence. He has a TINY P(Doom) because he claims superintelligence will be resource-constrained and tend toward cooperation. In this lively debate, I stress-test Michael’s framework and debate whether its theorized constraints will actually hold back superintelligent AI.

Timestamps
* 00:00 Trailer
* 01:41 Introducing Michael Timothy Bennett
* 04:33 What’s Your P(Doom)?™
* 10:51 Michael’s Thesis on Intelligence: “Abstraction Layers”, “Adaptation”, “Resource Efficiency”
* 25:36 Debate: Is Einstein Smarter Than a Rock?
* 39:07 “Embodiment”: Michael’s Unconventional Computation Theory vs Standard Computation
* 48:28 “W-Maxing”: Michael’s Intelligence Framework vs. a Goal-Oriented Framework
* 59:47 Debating AI Doom
* 1:09:49 Debating Instrumental Convergence
* 1:24:00 Where Do You Get Off The Doom Train™ — Identifying The Cruxes of Disagreement
* 1:44:13 Debating AGI Timelines
* 1:49:10 Final Recap

Links
Michael’s website — https://michaeltimothybennett.com
Michael’s Twitter — https://x.com/MiTiBennett
Michael’s latest paper, “How To Build Conscious Machines” — https://osf.io/preprints/thesiscommons/wehmg_v1?view_only
My guest today achieved something EXTREMELY rare and impressive: Coming onto my show with an AI optimist position, then admitting he hadn’t thought of my counterarguments before, and updating his beliefs in realtime! Also, he won the 2013 Nobel Prize in Chemistry for his pioneering work in computational biology.

I’m thrilled that Prof. Levitt understands the value of raising awareness about imminent extinction risk from superintelligent AI, and the value of debate as a tool to uncover the truth — the dual missions of Doom Debates!

Timestamps
0:00 — Trailer
1:18 — Introducing Michael Levitt
4:20 — The Evolution of Computing and AI
12:42 — Measuring Intelligence: Humans vs. AI
23:11 — The AI Doom Argument: Steering the Future
25:01 — Optimism, Pessimism, and Other Existential Risks
34:15 — What’s Your P(Doom)™
36:16 — Warning Shots and Global Regulation
55:28 — Comparing AI Risk to Pandemics and Nuclear War
1:01:49 — Wrap-Up
1:06:11 — Outro + New AI safety resource

Show Notes
Michael Levitt’s Twitter — https://x.com/MLevitt_NP2013