
Doom Debates!

Author: Liron Shapira

Subscribed: 38 · Played: 1,773

Description

It's time to talk about the end of the world. With your host, Liron Shapira.

lironshapira.substack.com
144 Episodes
My new interview on Robert Wright's Nonzero Podcast, where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom.

I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org

Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger

Links
Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero
Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651

Timestamps
00:00:00 — Introduction and Today's Topics
00:03:22 — Vibe Coding and the Agentic Revolution
00:08:57 — The Future of Employment
00:17:57 — Agents and What They Can Do
00:27:59 — The "Can It" and "Will It" Framework for AI Doom
00:30:27 — OpenClaw and Liron's Experience with AI Agents
00:36:45 — The Case for Slowing Down AI Development
00:43:28 — Anthropic, the Pentagon, and AI Politics
00:48:37 — AI Safety Leadership Concerns
00:52:06 — Closing and Overtime Tease

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

Timestamps
00:00:00 — Cold Open
00:00:41 — Welcome Back Noah Smith!
00:01:40 — Noah's P(Doom) Update
00:03:57 — The Chatbot-Genie-God Framework
00:05:14 — What's Your P(Doom)™
00:09:59 — Unpacking Noah's Update
00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
00:23:29 — Society Responds After Growing Pains
00:29:25 — Agentic AI Contributed to Noah's Position
00:31:35 — Should Yudkowsky Get Bayesian Credit?
00:33:59 — Are We Communicating the Right Way with Policymakers?
00:40:16 — Finding Common Ground on AI Policy
00:47:07 — Wrap-Up: People Need to Be More Scared

Links
Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
Noah’s Twitter — https://x.com/noahpinion
Noah’s Substack — https://noahpinion.blog

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist. She invited me to her weekly symposium to make the case for AI as an existential risk. Can we convince her sharp, skeptical audience that P(Doom) is high?

Subscribe to The Cosmopolitan Globalist: https://claireberlinski.substack.com/
Follow Claire on X: https://x.com/ClaireBerlinski
“If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com

Timestamps
00:00:00 — Introduction
00:02:10 — Welcome and Setting the Stage
00:06:16 — Outcome Steering: The Magic of Intelligence
00:10:40 — Collective Intelligence and the Path to ASI
00:12:53 — The Five-Point Argument
00:14:56 — The Alignment Problem and Control
00:17:56 — The Genie Problem and Recursive Self-Improvement
00:20:38 — Timeline: Five Years or Fifty?
00:26:14 — Social Revolution and Pausing AI
00:28:54 — Energy Constraints and Resource Limits
00:31:23 — Morality, Empathy, and Superintelligence
00:37:45 — How AI Is Actually Built
00:38:31 — Computational Irreducibility and Co-Evolution
00:44:57 — Foom and the Discontinuity Question
00:46:44 — US-China Rivalry and the Arms Race
00:49:36 — The Co-Evolution Argument
00:55:36 — Alignment as Psychoanalysis
00:57:24 — Anthropic’s “Harmless Slop” Paper
01:00:33 — Policy Solutions: The Pause Button
01:04:47 — Military AI and the Singularity
01:07:10 — Cognitive Obstacles and Doom Fatigue
01:09:07 — Why People Don’t Act
01:13:00 — Reaching Representatives and Building a Platform
01:17:12 — Sam Altman and the Manhattan Project Parallel
01:19:14 — Community Building and Pause AI
01:22:07 — Call to Action and Closing

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon.

Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

Timestamps
00:00:00 — Cold Open
00:00:48 — Welcoming Back the Returning Champion
00:02:38 — Research Update: What's New in The Last 6 Months
00:04:31 — The Rise of AI Agents
00:07:49 — What's Your P(Doom)?™
00:13:42 — "Brain-Like AGI": The Next Generation of AI
00:17:01 — Can LLMs Ever Match the Human Brain?
00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
00:36:12 — Country of Geniuses in a Data Center
00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
00:54:15 — Post-Training & RLVR — A "Thin Layer" of Real Intelligence
01:02:32 — Consequentialism and the Path to Superintelligence
01:17:02 — Airplanes vs. Rockets: An Analogy for AI
01:24:33 — FOOM and Recursive Self-Improvement

Links
Steven Byrnes’ Website & Research — https://sjbyrnes.com/
Steve’s X — https://x.com/steve47285
Astera Institute — https://astera.org/
“Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
Steve on LessWrong — https://www.lesswrong.com/users/steve2152
AI 2027 — Scenario Timeline — https://ai-2027.com/
Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession and the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

Timestamps
00:00:00 — Cold Open
00:00:56 — Welcome to the Livestream & Taking Questions from Chat
00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
00:18:30 — The Good Case Scenario
00:26:00 — Hugh Chungus Joins the Stream
00:30:54 — Producer Ori, Liron's Recent Alignment Updates
00:43:47 — We're In an Era of Centaurs
00:47:40 — Noah Smith's Updates on AGI and Alignment
00:48:44 — Co Co Chats Cybersecurity
00:57:32 — The Attacker's Advantage in Offense/Defense Balance
01:02:55 — Anthropic vs The Pentagon
01:06:20 — "We're Getting Frog Boiled"
01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
01:25:00 — A Caller Backs the Penrose Argument
01:34:01 — Greyson Dials In
01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
02:05:15 — More Q&A with Chat
02:14:26 — Closing Thoughts

Links
* Liron on X — https://x.com/liron
* AI 2027 — https://ai-2027.com/
* “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
* “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real.

Who’s right? Tune into this episode and decide where you get off the Doom Train™.

Some highlights of Professor Vardi’s impressive CV:
* University Professor at Rice — a rare distinction that lets him teach in any department.
* 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.
* He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
* He has been sounding the alarm on AI-driven job automation for over ten years.
* He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”

Links
* Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
* Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
* Baker Institute for Public Policy — https://www.bakerinstitute.org/
* Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
* Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971

Timestamps
00:00:00 — Cold Open
00:00:54 — Introducing Professor Vardi
00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
00:07:18 — What’s Your P(Doom)™?
00:12:28 — We’re Not Doomed, “We’re Screwed”
00:16:44 — AI’s Impact on Meaning & Purpose
00:27:47 — Let’s Ride the Doom Train™
00:35:43 — The Future of Jobs
00:39:24 — A Country of Geniuses in a Data Center
00:41:04 — Corporations as Superintelligence
00:45:49 — Agency, Consciousness, and the Limits of AI
00:50:07 — The Mad Scientist Scenario
00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
01:03:13 — The WALL-E Meme and Fun Theory
01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
01:06:02 — Wrap-Up + 1-Way Ticket to Doom

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Fresh off my debate with Destiny, I accepted his Discord community's invitation to join their voice chat and talk about AI doom. Just like the man himself, his fans are sharp.

Let's find out where they get off The Doom Train™.

My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw

Timestamps
00:00:00 — Cold Open
00:00:54 — Liron Joins Destiny’s Discord
00:02:21 — The AI Doom Premise
00:03:27 — Defining Intelligence and Is An LLM Really AI?
00:07:12 — Will AI Become Uncontrollable?
00:12:44 — The AI Alignment Problem
00:24:11 — The Difficulty of Pausing AI
00:26:01 — AI vs The Human Brain
00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat?

UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society’s top existential risks.

00:00:00 — Cold Open
00:00:51 — Introducing Professor Holz
00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!
00:04:37 — What's Your P(Doom)?™
00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation
00:12:07 — How We All Die: Nuclear vs Climate vs AI
00:21:08 — Nuclear Close Calls from The Cold War
00:28:38 — History of The Doomsday Clock
00:30:18 — The Threat of Biological Risks Like Mirror Life
00:33:40 — Professor Holz’s Position on AI Misalignment Risk
00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?
00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)
01:06:22 — The State of Academic Research on AI Safety & Existential Risks
01:12:32 — The Case for Pausing AI Development
01:17:11 — Debate: Is Climate Change an Existential Threat?
01:28:48 — Call to Action: How to Reduce Our Collective Threat

Links
Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
UChicago Magazine features Prof. Holz’s class, “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
Learn more about pausing frontier AI development from PauseAI — https://pauseai.info

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Liron was invited to give a presentation to some of the most promising creators in AI safety communications. Get a behind-the-scenes look at Doom Debates and how the channel has grown so quickly.

Learn more about the Frame Fellowship: https://framefellowship.com/

Timestamps
00:00:00 — Liron Tees Up the Presentation
00:02:28 — Liron’s Frame Fellowship Presentation
00:06:03 — Introducing Doom Debates
00:07:44 — Meeting the Frame Fellows
00:19:38 — Why I Started Doom Debates
00:30:20 — Handling Hate and Criticism
00:31:56 — The Talent Stack
00:39:34 — Finding Your Unique Niche
00:40:58 — Q&A
00:42:04 — On Funding the Show
00:48:05 — Audience Demographics & Gender Strategy
00:51:17 — How to Communicate AI Risk Effectively
00:56:22 — Social Media Strategy
00:58:10 — Closing Remarks

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Destiny has racked up millions of views for his sharp takes on political and cultural news. Now, I finally get to ask him about a topic he’s been agnostic on: Will AI end humanity?

Come ride with us on the Doom Train™ 🚂

Timestamps
00:00:00 — Teaser
00:01:16 — Welcoming Destiny
00:02:54 — What’s Your P(Doom)?™
00:04:46 — 2017 vs 2026: Destiny’s views on AI
00:11:04 — AI could vastly surpass human intelligence
00:16:02 — Can AI doom us?
00:18:42 — Intelligence doesn’t guarantee morality
00:22:18 — The vibes-based case against doom
00:29:58 — The human brain is inefficient
00:35:17 — Does every intelligence in the universe self-destruct via AI?
00:37:28 — Destiny turns the tables: Where does Liron get off The Doom Train™
00:46:07 — Will a warning shot cause society to develop AI safely?
00:54:10 — Roko’s Basilisk, the AI box problem
00:59:37 — Will Destiny update his P(Doom)?™
01:04:19 — Closing thoughts

Links
Destiny's YouTube — https://www.youtube.com/@destiny
Destiny's X account — https://x.com/TheOmniLiberal
Marc Andreessen saying AI isn’t dangerous because “it is math” — https://a16z.com/ai-will-save-the-world/
Will Smith eating spaghetti AI video — https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti
Roko’s Basilisk on LessWrong — https://www.lesswrong.com/tag/rokos-basilisk
Eliezer Yudkowsky’s AI Box Experiment — https://www.yudkowsky.net/singularity/aibox

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
On the eve of superintelligence, real AI safety is a nonexistent field. I break down why in my latest essay, The Facade of AI Safety Will Crumble.

This is a podcast reading of a recent Substack post: https://lironshapira.substack.com/p/the-facade-of-ai-safety-will-crumble

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Elon Musk just made a stunning admission about the insane future he’s steering us toward.

In a new interview with Dwarkesh Patel and John Collison on the Cheeky Pint podcast, Elon said that humanity can’t expect to be “in charge” of AI for long, because humans will soon only have 1% of the combined total human+AI intelligence.

Then, he claimed to have a plan to build AI overlords that will naturally support humanity's flourishing.

In this mini episode, I react to Elon's remarks and expose why his plan for humanity's survival in the age of AI is dangerously flimsy.

My original tweet: https://x.com/liron/status/2020906862397489492
Clip source: Elon Musk — “In 36 months, the cheapest place to put AI will be space”

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
California gubernatorial candidate Zoltan Istvan reveals his P(Doom) and makes the case for universal basic income and radical life extension.

Timestamps
00:00:00 — Teaser
00:00:50 — Meet Zoltan, Democratic Candidate for California Governor
00:08:30 — The 2026 California Governor's Race
00:12:50 — Zoltan's Platform Is Automated Abundance
00:19:45 — What's Your P(Doom)™
00:28:26 — Campaigning on Existential Risk
00:32:36 — Does Zoltan Support a Global AI Pause?
00:48:39 — Exploring His Platform: Education, Crime, and Affordability
01:08:55 — Exploring His Platform: Super Cities, Space, and Longevity
01:13:00 — Closing Thoughts

Links
Zoltan Istvan’s Campaign for California Governor — zoltanistvan2026.com
The Transhumanist Wager by Zoltan Istvan — https://www.amazon.com/Transhumanist-Wager-Zoltan-Istvan/dp/0988616114
Wired Article on the “Bunker” Party — https://www.wired.com/story/ai-risk-party-san-francisco/
PauseAI — pauseai.info
SL4 Mailing List Archive — sl4.org/archive

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics.

Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom. He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. Multiple Stage Fallacy).

We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines” — superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration.

Despite our different forecasts (my 50% vs his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause.

While Matthew’s case for low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points, we just want to better predict how the intelligence explosion will play out.

Timestamps
00:00:00 — Teaser
00:00:35 — Bentham’s Bulldog Returns to Doom Debates
00:05:43 — Higher-Order Evidence: Why Skepticism is Warranted
00:11:06 — What’s Your P(Doom)™
00:14:38 — The “Multiple Stage Fallacy” Objection
00:21:48 — The Risk of Warring AIs vs. Misalignment
00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
00:38:59 — Alignment by Default via Reinforcement Learning
00:46:02 — The “Goal Engine” Hypothesis
00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
01:00:17 — Winograd Schemas & The Fragility of Value
01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
01:16:16 — The Discontinuity of Unstoppable AI
01:23:53 — Exfiltration: Running Superintelligence on a Laptop
01:31:37 — Evolution Analogy: Selection Pressures for Alignment
01:39:08 — Commercial Utility as a Force for Constraints
01:46:34 — Can You Isolate the “Goal-to-Action” Module?
01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
02:04:01 — Moral Realism and Missing Out on Cosmic Value
02:11:44 — The Paradox of AI Solving the Alignment Problem
02:19:11 — Policy Agreements: Global Pauses and China
02:26:11 — Outro: PauseCon DC 2026 Promo

Links
Bentham’s Bulldog Official Substack — https://benthams.substack.com
The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
Apply to PauseCon DC 2026 here or via https://pauseai-us.org
Forethought Institute’s paper: Preparing for the Intelligence Explosion
Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
Scott Alexander on the Coffeepocalypse Argument

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI). In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".

Timestamps
0:00:00 — Cold Open
0:00:47 — How Harlan Stewart Got Into AI Safety
0:02:30 — What’s Your P(Doom)?™
0:04:09 — The “Doomer” Label
0:06:13 — Overall Reaction to Dario’s Essay: The Missing Mood
0:09:15 — The Rosy Take on Dario’s Essay
0:10:42 — Character Assassination & Low Blows
0:13:39 — Dario Amodei is Shifting the Overton Window in The Wrong Direction
0:15:04 — Object-Level vs. Meta-Level Criticisms
0:17:07 — The “Inevitability” Strawman Used by Dario
0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
0:22:38 — Dismissing Critics as “Too Theoretical”
0:43:18 — The Problem with Psychoanalyzing AI
0:56:12 — “Intellidynamics” & Reflective Stability
1:07:12 — Why Is Dario Dismissing an AI Pause?
1:11:45 — Final Takeaways

Links
Harlan’s X — https://x.com/HumanHarlan
“The Adolescence of Technology” by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Check out the new Doom Debates studio in this Q&A with special guest Producer Ori! Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition.

Originally streamed on Saturday, January 24.

Timestamps
00:00:00 — Cold Open
00:00:26 — Introduction and Studio Tour
00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
00:18:15 — Dario Amodei, Davos, and AI Pause
00:27:42 — Producer Ori Joins: Locations and Vibes
00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
00:45:01 — Ethics of Investing in or Working for AI Labs
00:54:23 — Defining Superintelligence and Human Limitations
01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
01:53:15 — Marketing Doom Debates and Guest Outreach
01:56:45 — Live Call with Jonas: Scenarios for Survival
02:05:52 — Conclusion and Mission Statement

Links
Liron’s X Post about Destiny — https://x.com/liron/status/2015144778652905671?s=20
Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77 — https://www.youtube.com/watch?v=IUX00c5x2UM

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Audrey Tang was the youngest minister in Taiwanese history. Now she's working to align AI with democratic principles as Taiwan's Cyber Ambassador.

In this debate, I probe her P(doom) and stress-test her vision for safe AI development.

Timestamps
00:00:00 — Episode Preview
00:01:43 — Introducing Audrey Tang, Cyber Ambassador of Taiwan
00:07:20 — Being Taiwan’s First Digital Minister
00:17:19 — What's Your P(Doom)?™
00:21:10 — Comparing AI Risk to Nuclear Risk
00:22:53 — The Statement on AI Extinction Risk
00:27:29 — Doomerism as a Hyperstition
00:30:51 — Audrey Explains Her Vision of "Plurality"
00:37:17 — Audrey Explains Her Principles of Civic Ethics, The "6-Pack of Care"
00:45:58 — AGI Timelines: "It's Already Here"
00:54:41 — The Apple Analogy
01:03:09 — What If AI FOOMs?
01:11:19 — What AI Can vs What AI Will Do
01:15:20 — Lessons from COVID-19
01:19:59 — Is Society Ready? Audrey Reflects on a Personal Experience with Mortality
01:23:50 — AI Alignment Cannot Be Top-Down
01:34:04 — AI-as-Mother vs AI-as-Gardener
01:37:26 — China and the Geopolitics of AI Chip Manufacturing in Taiwan
01:40:47 — Red Lines, International Treaties, and the Off Button
01:48:26 — Debate Wrap-Up

Links
Plurality: The Future of Collaborative Technology and Democracy by Glen Weyl and Audrey Tang — https://www.amazon.com/Plurality-Future-Collaborative-Technology-Democracy/dp/B0D98RPKCK
Audrey’s X — https://x.com/audreyt
Audrey’s Wikipedia — https://en.wikipedia.org/wiki/Audrey_Tang

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
I joined Steve Bannon’s War Room Battleground to talk about AI doom.

In this conversation, hosted by Joe Allen, we cover AGI timelines, raising kids with a high p(doom), and why improving our survival odds requires a global wake-up call.

00:00:00 — Episode Preview
00:01:17 — Joe Allen opens the show and introduces Liron Shapira
00:04:06 — Liron: What’s Your P(Doom)?
00:05:37 — How Would an AI Take Over?
00:07:20 — The Timeline to AGI
00:08:17 — Benchmarks & AI Passing the Turing Test
00:14:43 — Liron Is Typically a Techno-Optimist
00:18:00 — Raising a Family with a High P(Doom)
00:23:48 — Mobilizing a Grassroots AI Survival Campaign
00:26:45 — Final Message: A Wake-Up Call
00:29:23 — Joe Allen’s Closing Message to the War Room Posse

Links
Joe’s Substack — https://substack.com/@joebot
Joe’s Twitter — https://x.com/JOEBOTxyz
Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Economist Noah Smith is the author of Noahpinion, one of the most popular Substacks in the world.

Far from worrying about human extinction from superintelligent AI, Noah is optimistic AI will create a world where humans still have plentiful, high-paying jobs!

In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill more confidence in us about humanity’s rapidly approaching AI future.

Timestamps
00:00:00 - Episode Preview
00:01:41 - Introducing Noah Smith
00:03:19 - What’s Your P(Doom)™
00:04:40 - Good vs. Bad Transhumanist Outcomes
00:15:17 - Catastrophe vs. Total Extinction
00:17:15 - Mechanisms of Doom
00:27:16 - The AI Persuasion Risk
00:36:20 - Instrumental Convergence vs. Peace
00:53:08 - The “One AI” Breakout Scenario
01:01:18 - The “Stoner AI” Theory
01:08:49 - Importance of Reflective Stability
01:14:50 - Orthogonality & The Waymo Argument
01:21:18 - Comparative Advantage & Jobs
01:27:43 - Wealth Distribution & Robot Lords
01:34:34 - Supply Curves & Resource Constraints
01:43:38 - Policy of Reserving Human Resources
01:48:28 - Closing: The Case for Optimism

Links
Noah’s Substack — https://noahpinion.blog
“Plentiful, high-paying jobs in the age of AI” — https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
“My thoughts on AI safety” — https://www.noahpinion.blog/p/my-thoughts-on-ai-safety
Noah’s Twitter — https://x.com/noahpinion

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists,” a.k.a. “e/acc” folks, on an X Space.

E/accs think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that’d be an illegal exception to the rule that accelerating new technology is always the highest-expected-value move for humanity.

As you know, I disagree — I think doom is an extremely likely and imminent possibility.

This debate took place 9 months before I started Doom Debates, and was one of the experiences that made me realize debating AI doom was my calling. It’s also the only time Beff Jezos has ever not been too chicken to debate me.

Timestamps
00:00:00 — Liron’s New Intro
00:04:15 — Debate Starts Here: Litigating FOOM
00:06:18 — Defining the Recursive Feedback Loop
00:15:05 — The Two-Part Doomer Thesis
00:26:00 — When Does a Tool Become an Agent?
00:44:02 — The Argument for Convergent Architecture
00:46:20 — Mathematical Objections: Ergodicity and Eigenvalues
01:03:46 — Bayeslord Enters: Why Speed Doesn’t Matter
01:12:40 — Beff Jezos Enters: Physical Priors vs. Internet Data
01:13:49 — The 5% Probability of Doom by GPT-5
01:20:09 — Chaos Theory and Prediction Limits
01:27:56 — Algorithms vs. Hardware Constraints
01:35:20 — Galactic Resources vs. Human Extermination
01:54:13 — The Intelligence Bootstrapping Script Scenario
02:02:13 — The 10-Megabyte AI Virus Debate
02:11:54 — The Nuclear Analogy: Noise Canceling vs. Rubble
02:37:39 — Controlling Intelligence: The Roman Empire Analogy
02:44:53 — Real-World Latency and API Rate Limits
03:03:11 — The Difficulty of the Off Button
03:24:47 — Why Liron is “e/acc at Heart”

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe