Doom Debates

Author: Liron Shapira


Description

It's time to talk about the end of the world!

lironshapira.substack.com
97 Episodes
In this special cross-post from Jona Ragogna's channel, I'm interviewed about why superintelligent AI poses an imminent extinction threat, and how AI takeover is going to unfold. Newly exposed to AI x-risk, Jona asks sharp questions about why we're racing toward superintelligence despite the danger, and what ordinary people can do now to lower p(doom). This is one of the most crisp explainers of the AI-doom argument I've done to date.

Timestamps
0:00 Intro
0:41 Why AI is likely to cause human extinction
2:55 How AI takeover happens
4:55 AI systems have goals
6:33 Liron explains p(Doom)
8:50 The worst case scenario is AI sweeps us away
12:46 The best case scenario is hard to define
14:24 How to avoid doom
15:09 Frontier AI companies are just doing "ad hoc" alignment
20:30 Why "warning shots" from AI aren't scary yet
23:19 Should young adults work on AI alignment research?
24:46 We need a grassroots movement
28:31 Life choices when AI doom is imminent
32:35 Are AI forecasters just biased?
34:12 The Doom Train™ and addressing counterarguments
40:28 Anthropic's new AI welfare announcement isn't a major breakthrough
44:35 It's unknown what's going on inside LLMs and AI systems
53:22 Effective Altruism's ties to AI risk
56:58 Will AI be a "worthy descendant"?
1:01:08 How to calculate P(Doom)
1:02:49 Join the unofficial If Anyone Builds It, Everyone Dies book launch party!

Show Notes
Subscribe to Jona Ragogna — https://youtube.com/@jonaragogna

IF ANYONE BUILDS IT LAUNCH WEEK EVENTS:

Mon Sep 15 @ 9am PT / 12pm ET / 1600 UTC
My Eliezer Yudkowsky interview premieres on YouTube! Stay tuned for details.

Tue Sep 16 @ 2pm PT / 5pm ET / 2100 UTC
The Doom Debates unofficial IABI Launch Party!!!

More details about launch week HERE!

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Ladies and gentlemen, we are days away from the long-awaited release of Eliezer Yudkowsky and Nate Soares's new book, "If Anyone Builds It, Everyone Dies"!!!

Mon Sep 15 @9am PT: My Interview with Eliezer Yudkowsky

We'll be kicking things off the morning of Monday, September 15th with a live watch party of my very special new interview with the one and only Eliezer Yudkowsky!

All of us will be in the YouTube live chat. I'll be there, producer Ori will be there, and you'll get a first look at this new & exciting interview: questions he's never been asked before.

We'll talk about my history first meeting Eliezer in 2008, how things have evolved, what's going on now, why everybody has their head in the sand. And we'll even go to the LessWrong sequences and do some rationality deep cuts.

This will be posted on my personal YouTube channel. But if you're subscribed to Doom Debates, you'll get the link in your feed as we get closer to Monday.

Mark your calendars for Monday, September 15th, 9:00am Pacific, 12:00pm Eastern. That's 1600 UTC for my Europals out there, midnight for my China peeps.

I think YouTube commenter @pairy5420 said it best: "Bro, I could have three essays due, my crush just text me, the FBI outside my house, me having a winning lottery ticket to turn in all at the same time, and I still wouldn't miss the interview."

That's the spirit, @pairy5420. You won't want to miss the Eliezer interview YouTube premiere. I'll see you all there. It's gonna be a great start to the book's launch week.

Tue Sep 16 @2pm PT: The Doom Debates Unofficial Book Launch Party

Now let's talk about the launch day: Tuesday, September 16th. Once you've picked up your copy of the book, get ready for the Doom Debates unofficial "If Anyone Builds It, Everyone Dies" launch party!!!

This is going to be a very special, unprecedented event here on this channel. It's gonna be a three-hour live stream hosted by me and producer Ori, our new full-time producer. He's gonna be the co-host of the party.

He's finally coming out from behind the scenes, making his debut on the channel. Unless you count the first episode of the show - he was there too. The two of us are going to be joined by a who's who of very special guests. Get a load of these guests who are gonna be dropping by the unofficial launch party:

* John Sherman from the AI Risk Network and Michael, my other co-host of Warning Shots
* Roman Yampolskiy, a top researcher and communicator in the field of AI x-risk
* Emmett Shear, founder and CEO of a new AI alignment company called Softmax. He was previously the CEO of Twitch, and the interim CEO of OpenAI
* Roon, member of the technical staff at OpenAI and friend of the show
* Gary Marcus, cognitive scientist, author, entrepreneur, and famous AGI skeptic
* Liv Boeree, the YouTube star, poker champion, and science communicator
* Robert Wright, bestselling author, podcaster, and wide-ranging intellectual, about to announce his new book about AI
* Holly Elmore, executive director of PauseAI US
* Roko Mijic, you guys know Roko 😉

And that's not all. There's going to be even more that I can't tell you about now, but it will not disappoint.

So I really hope to see you all there at the unofficial "If Anyone Builds It, Everyone Dies" launch party, same as the book's launch day: Tuesday, September 16th at 2:00pm Pacific, 5:00pm Eastern.

Pick up your copy of the book that morning. Don't come to the party without it.
We're gonna have a bouncer stationed at the door, and if you don't show him that you've got a copy of "If Anyone Builds It, Everyone Dies," he's gonna give you a big thumbs down.

BUY THE BOOK!!!

In all seriousness though, please support the book if you like Doom Debates. If you feel like you've gotten some value out of the show and you wanna give back a little bit, that is my ask. Head over to ifanyonebuildsit.com and buy the book from the links there. Go to Amazon, Barnes and Noble, wherever you normally buy books, just buy the damn thing. It's $14.99 on Kindle. It's not gonna break the bank.

Then spread the word. Tell your friends and family, tell your coworkers at the office. Try to get a few more copies sold. We don't have another book launch coming, guys. This is it.

This is our chance to take a little bit of action when it can actually move the needle and help. If you've been procrastinating this whole time, you gotta stop. You gotta go buy it now, because the New York Times is gonna be checking this week.

This is the last week of pre-orders. You really want to give it that launch bump. Don't try to drag it out after launch week. Time is of the essence.

The Doom Debates Mission

Ultimately that's why I do this show. This isn't just entertainment for smart people. There is actually an important mission. We're trying to optimize the mission here. Help me out.

Or at the very least, help high-quality discourse: a lot of people across the spectrum agree this is a high-quality book contributing to the discourse, and we need more books like it.

Thanks again for being with me on this journey to lower P(Doom) by convincing the average person that AI is urgently life-threatening to them and their loved ones. It's really important work.

See you all on Monday 9am PT at the Eliezer Yudkowsky interview (details coming soon), and Tuesday 2pm PT at the launch party (event link)!

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Louis Berman is a polymath who brings unique credibility to AI doom discussions. He's been coding AI for 25 years, served as CTO of major tech companies, recorded the first visual sighting of what became the dwarf planet Eris, and has now pivoted to full-time AI risk activism. He's lobbied over 60 politicians across multiple countries for PauseAI and authored two books on existential risk.

Louis and I are both baffled by the calm, measured tone that dominates AI safety discourse. As Louis puts it: "No one is dealing with this with emotions. No one is dealing with this as, oh my God, if they're right. Isn't that the scariest thing you've ever heard about?"

Louis isn't just talking – he's acting on his beliefs. He just bought a "bug out house" in rural Maryland, though he's refreshingly honest that this isn't about long-term survival. He expects AI doom to unfold over months or years rather than Eliezer's instant scenario, and he's trying to buy his family weeks of additional time while avoiding starvation during societal collapse.

He's spent extensive time in congressional offices and has concrete advice about lobbying techniques. His key insight: politicians' staffers consistently claim "if just five people called about AGI, it would move the needle". We need more people like Louis!

Timestamps

* 00:00:00 - Cold Open: The Missing Emotional Response
* 00:00:31 - Introducing Louis Berman: Polymath Background and Donor Disclosure
* 00:03:40 - The Anodyne Reaction: Why No One Seems Scared
* 00:07:37 - P-Doom Calibration: Gary Marcus and the 1% Problem
* 00:11:57 - The Bug Out House: Prepping for Slow Doom
* 00:13:44 - Being Amazed by LLMs While Fearing ASI
* 00:18:41 - What's Your P(Doom)™
* 00:25:42 - Bayesian Reasoning vs. Heart of Hearts Beliefs
* 00:32:10 - Non-Doom Scenarios and International Coordination
* 00:40:00 - The Missing Mood: Where's the Emotional Response?
* 00:44:17 - Prepping Philosophy: Buying Weeks, Not Years
* 00:52:35 - Doom Scenarios: Slow Takeover vs. Instant Death
* 01:00:43 - Practical Activism: Lobbying Politicians and Concrete Actions
* 01:16:44 - Where to Find Louis's Books and Final Wrap-up
* 01:18:17 - Outro: Super Fans and Mission Partners

Links

Louis's website — https://xriskbooks.com — Buy his books!
ControlAI's form to easily contact your representative and make a difference — https://controlai.com/take-action/usa — Highly recommended!
Louis's interview about activism with John Sherman and Felix De Simone — https://www.youtube.com/watch?v=Djd2n4cufTM
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

Become a Mission Partner!

Want to meaningfully help the show's mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I'll invite you to the private Discord channel. Email me at liron@doomdebates.com if you have questions or want to donate crypto.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Rob Miles is the most popular AI safety educator on YouTube, with millions of views across his videos explaining AI alignment to general audiences. He dropped out of his PhD in 2011 to focus entirely on AI safety communication – a prescient career pivot that positioned him as one of the field's most trusted voices over a decade before ChatGPT made AI risk mainstream.

Rob sits firmly in the 10-90% P(Doom) range, though he admits his uncertainty is "hugely variable" and depends heavily on how humanity responds to the challenge. What makes Rob particularly compelling is the contrast between his characteristic British calm and his deeply serious assessment of our situation. He's the type of person who can explain existential risk with the measured tone of a nature documentarian while internally believing we're probably headed toward catastrophe.

Rob has identified several underappreciated problems, particularly around alignment stability under self-modification. He argues that even if we align current AI systems, there's no guarantee their successors will inherit those values – a discontinuity problem that most safety work ignores. He's also highlighted the "missing mood" in AI discourse, where people discuss potential human extinction with the emotional register of an academic conference rather than an emergency.

We explore Rob's mainline doom scenario involving recursive self-improvement, why he thinks there's enormous headroom above human intelligence, and his views on everything from warning shots to the Malthusian dynamics that might govern a post-AGI world. Rob makes a fascinating case that we may be the "least intelligent species capable of technological civilization" – which has profound implications for what smarter systems might achieve.

Our key disagreement centers on strategy: Rob thinks some safety-minded people should work inside AI companies to influence them from within, while I argue this enables "tractability washing" that makes the companies look responsible while they race toward potentially catastrophic capabilities. Rob sees it as necessary harm reduction; I see it as providing legitimacy to fundamentally reckless enterprises.

The conversation also tackles a meta-question about communication strategy. Rob acknowledges that his measured, analytical approach might be missing something crucial – that perhaps someone needs to be "running around screaming" to convey the appropriate emotional urgency.
It's a revealing moment from someone who's spent over a decade trying to wake people up to humanity's most important challenge, only to watch the world continue treating it as an interesting intellectual puzzle rather than an existential emergency.

Timestamps

* 00:00:00 - Cold Open
* 00:00:28 - Introducing Rob Miles
* 00:01:42 - Rob's Background and Childhood
* 00:02:05 - Being Aspie
* 00:04:50 - Less Wrong Community and "Normies"
* 00:06:24 - Chesterton's Fence and Cassava Root
* 00:09:30 - Transition to AI Safety Research
* 00:11:52 - Discovering Communication Skills
* 00:15:36 - YouTube Success and Channel Growth
* 00:16:46 - Current Focus: Technical vs Political
* 00:18:50 - Nuclear Near-Misses and Y2K
* 00:21:55 - What's Your P(Doom)™
* 00:27:31 - Uncertainty About Human Response
* 00:31:04 - Views on Yudkowsky and AI Risk Arguments
* 00:42:07 - Mainline Catastrophe Scenario
* 00:47:32 - Headroom Above Human Intelligence
* 00:54:58 - Detailed Doom Scenario
* 01:01:07 - Self-Modification and Alignment Stability
* 01:17:26 - Warning Shots Problem
* 01:20:28 - Moving the Overton Window
* 01:25:59 - Protests and Political Action
* 01:33:02 - The Missing Mood Problem
* 01:40:28 - Raising Society's Temperature
* 01:44:25 - "If Anyone Builds It, Everyone Dies"
* 01:51:05 - Technical Alignment Work
* 01:52:00 - Working Inside AI Companies
* 01:57:38 - Tractability Washing at AI Companies
* 02:05:44 - Closing Thoughts
* 02:08:21 - How to Support Doom Debates: Become a Mission Partner

Links

Rob's YouTube channel — https://www.youtube.com/@RobertMilesAI
Rob's Twitter — https://x.com/robertskmiles
Rational Animations (another great YouTube channel, narrated by Rob) — https://www.youtube.com/RationalAnimations

Become a Mission Partner!

Want to meaningfully help the show's mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I'll invite you to the private Discord channel. Email me at liron@doomdebates.com if you have questions or want to donate crypto.

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk.

He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.

Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).

We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.

The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.

Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.

Timestamps

* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What's Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron's outro

Links

Vitalik's website: https://vitalik.eth.limo
Vitalik's Twitter: https://x.com/vitalikbuterin
Eliezer Yudkowsky's explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Today I'm sharing my interview on Robert Wright's Nonzero Podcast from last May. Rob is an especially sharp interviewer who doesn't just nod along; he had great probing questions for me.

This interview happened right after Ilya Sutskever and Jan Leike resigned from OpenAI in May 2024, continuing a pattern that goes back to Dario Amodei leaving to start Anthropic. These aren't fringe doomers; these are the people hired specifically to solve the safety problem, and they keep concluding it's not solvable at the current pace.

00:00:00 - Liron's preface
00:02:10 - Robert Wright introduces Liron
00:04:02 - PauseAI protests at OpenAI headquarters
00:05:15 - OpenAI resignations (Ilya Sutskever, Jan Leike, Dario Amodei, Paul Christiano, Daniel Kokotajlo)
00:15:30 - P vs NP problem as analogy for AI alignment difficulty
00:22:31 - AI pause movement and protest turnout
00:29:02 - Defining AI doom and sci-fi scenarios
00:32:05 - What's My P(Doom)™
00:35:18 - Fast vs slow AI takeoff and Sam Altman's position
00:42:33 - Paperclip thought experiment and instrumental convergence explanation
00:54:40 - Concrete examples of AI power-seeking behavior (business assistant scenario)
01:00:58 - GPT-4 TaskRabbit deception example and AI reasoning capabilities
01:09:00 - AI alignment challenges and human values discussion
01:17:33 - Wrap-up and transition to premium subscriber content

Show Notes

This episode on Rob's Nonzero Newsletter. You can subscribe for premium access to the last hour of our discussion! — https://www.nonzero.org/p/in-defense-of-ai-doomerism-robert
This episode on Rob's YouTube — https://www.youtube.com/watch?v=VihA_-8kBNg
PauseAI — https://pauseai.info
PauseAI US — http://pauseai-us.org

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Dr. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

Steve has a whopping 90% P(Doom), but unlike most AI safety researchers who focus on current LLMs, he argues that LLMs will plateau before becoming truly dangerous, and the real threat will come from next-generation "brain-like AGI" based on actor-critic reinforcement learning.

For the last five years, he's been diving deep into neuroscience to reverse engineer how human brains actually work, and how to use that knowledge to solve the technical AI alignment problem. He's one of the few people who both understands why alignment is hard and is taking a serious technical shot at solving it.

We cover his "two subsystems" model of the brain, why current AI safety approaches miss the mark, his disagreements with social evolution approaches, and why understanding human neuroscience matters for building aligned AGI.

* 00:00:00 - Cold Open: Solving the technical alignment problem
* 00:00:26 - Introducing Dr. Steven Byrnes and his impressive background
* 00:01:59 - Steve's unique mental strengths
* 00:04:08 - The cold fusion research story demonstrating Steve's approach
* 00:06:18 - How Steve got interested in neuroscience through Jeff Hawkins
* 00:08:18 - Jeff Hawkins' cortical uniformity theory and brain vs deep learning
* 00:11:45 - When Steve first encountered Eliezer's sequences and became AGI-pilled
* 00:15:11 - Steve's research direction: reverse engineering human social instincts
* 00:21:47 - Four visions of alignment success and Steve's preferred approach
* 00:29:00 - The two brain subsystems model: steering brain vs learning brain
* 00:35:30 - Brain volume breakdown and the learning vs steering distinction
* 00:38:43 - Cerebellum as the "LLM" of the brain doing predictive learning
* 00:46:44 - Language acquisition: Chomsky vs learning algorithms debate
* 00:54:13 - What LLMs fundamentally can't do: complex context limitations
* 01:07:17 - Hypothalamus and brainstem doing more than just homeostasis
* 01:13:45 - Why morality might just be another hypothalamus cell group
* 01:18:00 - Human social instincts as model-based reinforcement learning
* 01:22:47 - Actor-critic reinforcement learning mapped to brain regions
* 01:29:33 - Timeline predictions: when brain-like AGI might arrive
* 01:38:28 - Why humans still beat AI on strategic planning and domain expertise
* 01:47:27 - Inner vs outer alignment: cocaine example and reward prediction
* 01:55:13 - Why legible Python code beats learned reward models
* 02:00:45 - Outcome pumps, instrumental convergence, and the Stalin analogy
* 02:11:48 - What's Your P(Doom)™
* 02:16:45 - Massive headroom above human intelligence
* 02:20:45 - Can AI take over without physical actuators? (Yes)
* 02:26:18 - Steve's bold claim: 30 person-years from proto-AGI to superintelligence
* 02:32:17 - Why overhang makes the transition incredibly dangerous
* 02:35:00 - Social evolution as alignment solution: why it won't work
* 02:46:47 - Steve's research program: legible reward functions vs RLHF
* 02:59:52 - AI policy discussion: why Steven is skeptical of pause AI
* 03:05:51 - Lightning round: offense vs defense, P(simulation), AI unemployment
* 03:12:42 - Thanking Steve and wrapping up the conversation
* 03:13:30 - Liron's outro: Supporting the show and upcoming episodes with Vitalik and Eliezer

Show Notes

* Steven Byrnes' Website & Research — https://sjbyrnes.com/
* Steve's Twitter — https://x.com/steve47285
* Astera Institute — https://astera.org/

Steve's Sequences

* Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
* Foom & Doom 1: "Brain in a box in a basement" — https://www.alignmentforum.org/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement
* Foom & Doom 2: Technical alignment is hard — https://www.alignmentforum.org/posts/bnnKGSCHJghAvqPjS/foom-and-doom-2-technical-alignment-is-hard

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Geoffrey Miller is an evolutionary psychologist at the University of New Mexico, bestselling author, and one of the world's leading experts on signaling theory and human sexual selection. His book "Mate" was hugely influential for me personally during my dating years, so I was thrilled to finally get him on the show.

In this episode, Geoffrey drops a bombshell 50% P(Doom) assessment, coming from someone who wrote foundational papers on neural networks and genetic algorithms back in the '90s before pivoting to study human mating behavior for 30 years.

What makes Geoffrey's doom perspective unique is that he thinks both inner and outer alignment might be unsolvable in principle, ever. He's also surprisingly bearish on AI's current value, arguing it hasn't been net positive for society yet despite the $14 billion in OpenAI revenue.

We cover his fascinating intellectual journey from early AI researcher to pickup artist advisor to AI doomer, why Asperger's people make better psychology researchers, the polyamory scene in rationalist circles, and his surprisingly optimistic take on cooperating with China. Geoffrey brings a deeply humanist perspective. He genuinely loves human civilization as it is and sees no reason to rush toward our potential replacement.

* 00:00:00 - Introducing Prof. Geoffrey Miller
* 00:01:46 - Geoffrey's intellectual career arc: AI → evolutionary psychology → back to AI
* 00:03:43 - Signaling theory as the main theme driving his research
* 00:05:04 - Why evolutionary psychology is legitimate science, not just speculation
* 00:08:18 - Being a professor in the AI age and making courses "AI-proof"
* 00:09:12 - Getting tenure in 2008 and using academic freedom responsibly
* 00:11:01 - Student cheating epidemic with AI tools, going "fully medieval"
* 00:13:28 - Should professors use AI for grading? (Geoffrey says no, would be unethical)
* 00:23:06 - Coming out as Aspie and neurodiversity in academia
* 00:29:15 - What is sex and its role in evolution (error correction vs. variation)
* 00:34:06 - Sexual selection as an evolutionary "supercharger"
* 00:37:25 - Dating advice, pickup artistry, and evolutionary psychology insights
* 00:45:04 - Polyamory: Geoffrey's experience and the rationalist connection
* 00:50:96 - Why rationalists tend to be poly vs. Chesterton's fence on monogamy
* 00:54:07 - The "primal" lifestyle and evolutionary medicine
* 00:56:59 - How Iain M. Banks' Culture novels shaped Geoffrey's AI thinking
* 01:05:26 - What's Your P(Doom)™
* 01:08:04 - Main doom scenario: AI arms race leading to unaligned ASI
* 01:14:10 - Bad actors problem: antinatalists, religious extremists, eco-alarmists
* 01:21:13 - Inner vs. outer alignment - both may be unsolvable in principle
* 01:23:56 - "What's the hurry?" - Why rush when alignment might take millennia?
* 01:28:17 - Disagreement on whether AI has been net positive so far
* 01:35:13 - Why AI won't magically solve longevity or other major problems
* 01:37:56 - Unemployment doom and loss of human autonomy
* 01:40:13 - Cosmic perspective: We could be "the baddies" spreading unaligned AI
* 01:44:93 - "Humanity is doing incredibly well" - no need for Hail Mary AI
* 01:49:01 - Why ASI might be bad at solving alignment (lacks human cultural wisdom)
* 01:52:06 - China cooperation: "Whoever builds ASI first loses"
* 01:55:19 - Liron's Outro

Show Notes

Links

* Geoffrey's Twitter
* Geoffrey's University of New Mexico Faculty Page
* Geoffrey's Publications
* Designing Neural Networks using Genetic Algorithms - His most cited paper
* Geoffrey's Effective Altruism Forum Posts

Books by Geoffrey Miller

* Mate: Become the Man Women Want (2015) - Co-authored with Tucker Max
* The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature (2000)
* Virtue Signaling: Essays on Darwinian Politics and Free Speech (2019)
* Spent: Sex, Evolution, and Consumer Behavior (2009)

Related Doom Debates Episodes

* Liam Robins on College in the AGI Era - Student perspective on AI cheating
* Liron Reacts to Steven Pinker on AI Risk - Critiquing Pinker's AI optimism
* Steven Byrnes on Brain-Like AGI - Upcoming episode on human brain architecture

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
I'm doing a new weekly show on the AI Risk Network called Warning Shots. Check it out!

I'm only cross-posting the first episode here on Doom Debates. You can watch future episodes by subscribing to the AI Risk Network channel.

This week's warning shot: Mark Zuckerberg announced that Meta is racing toward recursive self-improvement and superintelligence. His exact words: "Developing superintelligence is now in sight and we just want to make sure that we really strengthen the effort as much as possible to go for it." This should be front-page news. Instead, everyone's talking about some CEO's dumb shenanigans at a Coldplay concert.

Recursive self-improvement is when AI systems start upgrading themselves - potentially the last invention humanity ever makes. Every AI safety expert knows this is a bright red line. And Zuckerberg just said he's sprinting toward it. In a sane world, he'd have to resign for saying this. That's why we made this show - to document these warning shots as they happen, because someone needs to be paying attention.

* 00:00 - Opening comments about Zuckerberg and superintelligence
* 00:51 - Show introductions and host backgrounds
* 01:56 - Geoff Lewis psychotic episode and ChatGPT interaction discussion
* 05:04 - Transition to main warning shot about Mark Zuckerberg
* 05:32 - Zuckerberg's recursive self-improvement audio clip
* 08:22 - Second Zuckerberg clip about "going for superintelligence"
* 10:29 - Analysis of "superintelligence in everyone's pocket"
* 13:07 - Discussion of Zuckerberg's true motivations
* 15:13 - Nuclear development analogy and historical context
* 17:39 - What should happen in a sane society (wrap-up)
* 20:01 - Final thoughts and sign-off

Show Notes

Hosts:

* Doom Debates - Liron Shapira's channel
* AI Risk Network - John Sherman's channel
* Lethal Intelligence - Michael's animated AI safety content

This Episode's Warning Shots:

* Mark Zuckerberg podcast appearance discussing superintelligence
* Geoff Lewis (Bedrock VC) Twitter breakdown

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Eneasz Brodski and Steven Zuber host the Bayesian Conspiracy podcast, which has been running for nine years and covers rationalist topics from AI safety to social dynamics. They're both OG rationalists who've been in the community since the early LessWrong days around 2007-2010. I've been listening to their show since the beginning, and finally got to meet my podcast heroes!

In this episode, we get deep into the personal side of having a high P(Doom) — how do you actually live a good life when you think there's a 50% chance civilization ends by 2040? We also debate whether spreading doom awareness helps humanity or just makes people miserable, with Eneasz pushing back on my fearmongering approach.

We also cover my Doom Train framework for systematically walking through AI risk arguments, why most guests never change their minds during debates, the sorry state of discourse on tech Twitter, and how rationalists can communicate better with normies. Plus some great stories from the early LessWrong era, including my time sitting next to Eliezer while he wrote Harry Potter and the Methods of Rationality.

* 00:00 - Opening and introductions
* 00:43 - Origin stories: How we all got into rationalism and LessWrong
* 03:42 - Liron's incredible story: Sitting next to Eliezer while he wrote HPMOR
* 06:19 - AI awakening moments: ChatGPT, AlphaGo, and move 37
* 13:48 - Do AIs really "understand" meaning? Symbol grounding and consciousness
* 26:21 - Liron's 50% P(Doom) by 2040 and the Doom Debates mission
* 29:05 - The fear mongering debate: Does spreading doom awareness hurt people?
* 34:43 - "Would you give 95% of people 95% P(Doom)?" - The recoil problem
* 42:02 - How to live a good life with high P(Doom)
* 45:55 - Economic disruption predictions and Liron's failed unemployment forecast
* 57:19 - The Doom Debates project: 30,000 watch hours and growing
* 58:43 - The Doom Train framework: Mapping the stops where people get off
* 1:03:19 - Why guests never change their minds (and the one who did)
* 1:07:08 - Communication advice: "Zooming out" for normies
* 1:09:39 - The sorry state of arguments on tech Twitter
* 1:24:11 - Do guests get mad? The hologram effect of debates
* 1:30:11 - Show recommendations and final thoughts

Show Notes

The Bayesian Conspiracy — https://www.thebayesianconspiracy.com
Doom Debates episode with Mike Israetel — https://www.youtube.com/watch?v=RaDWSPMdM4o
Doom Debates episode with David Duvenaud — https://www.youtube.com/watch?v=mb9w7lFIHRM

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking.

In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in realtime!

We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

By the end, Liam's P(Doom) jumps from 3% to 8% - one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).

This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

0:00 - Opening
0:42 - What's Your P(Doom)™
01:18 - Stop 1: AGI timing (15% chance it's not coming soon)
01:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)
01:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)
02:14 - Stop 4: Intelligence yields moral goodness - the big debate begins
04:42 - Moral realism vs. evolutionary explanations for morality
06:43 - The psychopath problem: smart but immoral humans exist
08:50 - Game theory and why psychopaths persist in populations
10:21 - Liam's first major update: 30% down to 15-20% on moral goodness
12:05 - Stop 5: Safe AI development process (20%)
14:28 - Stop 6: Manageable capability growth (20%)
15:38 - Stop 7: AI conquest intentions - breaking down into subcategories
17:03 - Alignment by default vs. deliberate alignment efforts
19:07 - Stop 8: Super alignment tractability (20%)
20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)
23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")
25:47 - Stop 11: Epistemological concerns about doom predictions
27:57 - Bayes factors analysis: Why Liam goes from 38% to 3%
30:21 - Bayes factor 1: Historical precedent of doom predictions failing
33:08 - Bayes factor 2: Superforecasters think we'll be fine
39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned
45:49 - Challenging the insider knowledge argument with concrete examples
48:47 - The privilege access epistemology debate
56:02 - Major update: Liam revises base factors, P(Doom) jumps to 8%
58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge
59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels
1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs
1:04:06 - Liam's future career plans in AI policy
1:05:02 - Wrap-up and reflection on rationalist belief updating

Show Notes

* Liam Robins on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam's Twitter - @liamhrobins
* Anthropic's "Alignment Faking in Large Language Models" - The paper that updated Liam's beliefs on alignment by default

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
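A quick aside on the "3% to 8% is actually huge" point discussed at 58:18: the intuition is that updates are better measured in odds than in percentage points. Here's a minimal sketch of that arithmetic (my own illustration of the general odds-vs-percentages idea, not code from the episode):

```python
# Convert P(Doom) estimates to odds to see the size of a 3% -> 8% update.
def prob_to_odds(p: float) -> float:
    """Return the odds p : (1 - p) as a single ratio."""
    return p / (1 - p)

before, after = 0.03, 0.08
odds_before = prob_to_odds(before)   # ~0.031, i.e. roughly 1:32
odds_after = prob_to_odds(after)     # ~0.087, i.e. roughly 1:11.5
print(f"{odds_after / odds_before:.2f}x shift in odds")  # ~2.81x, nearly a 3x update
```

Five percentage points sounds small, but in odds terms it's close to a threefold revision, which is why it registers as one of the biggest on-air belief updates.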
Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+.

Last week, he went on Joe Rogan to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI limitations—that they can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts"—fall apart when applied to specific examples.

We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.

00:00 - Opening and introduction to Amjad Masad
03:15 - "Everyone will become an entrepreneur" - the core claim
08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do
15:20 - The brainstorming challenge: Human vs. AI idea generation
22:10 - "Statistical machines" and the remixing framework
28:30 - The abstraction problem: Duplos vs. Legos in reasoning
35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?
42:15 - Roger Penrose, Gödel's theorem, and consciousness theories
52:30 - Creativity definitions and the moving goalposts
58:45 - The consciousness non-sequitur and Silicon Valley "hubris"
01:07:20 - Ahmad George success story: The best case for Replit
01:12:40 - Job automation and the 50% reskilling assumption
01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis
01:23:30 - Online learning and the contradiction in AI capabilities
01:29:45 - Superintelligence definitions and learning in new environments
01:35:20 - Self-play limitations and literature vs. programming
01:41:10 - Marketing creativity and the Think Different campaign
01:45:45 - Human-machine collaboration and the prompting bottleneck
01:50:30 - Final analysis: Why this reasoning fails at specificity
01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up
02:02:30 - Closing thoughts

Show Notes

Source video: Amjad Masad on Joe Rogan - July 2, 2025
Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k
Replit - https://replit.com
Amjad's Twitter - https://x.com/amasad
Doom Debates episode where I react to Emmett Shear's Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g
Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Liam Robins is a math major at George Washington University who recently had his own "AGI awakening" after reading Leopold Aschenbrenner's Situational Awareness. I met him at my Manifest 2025 talk about stops on the Doom Train.

In this episode, Liam confirms what many of us suspected: pretty much everyone in college is cheating with AI now, and they're completely shameless about it.

We dive into what college looks like today: how many students are still "rawdogging" lectures, how professors are coping with widespread cheating, how the social life has changed, and what students think they'll do when they graduate.

* 00:00 - Opening
* 00:50 - Introducing Liam Robins
* 05:27 - The reality of college today: Do they still have lectures?
* 07:20 - The rise of AI-enabled cheating in assignments
* 14:00 - College as a credentialing regime vs. actual learning
* 19:50 - "Everyone is cheating their way through college" - the epidemic
* 26:00 - College social life: "It's just pure social life"
* 31:00 - Dating apps, social media, and Gen Z behavior
* 36:21 - Do students understand the singularity is near?

Show Notes

Guest:

* Liam Robins on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam's Twitter - @liamrobins

Key References:

* Leopold Aschenbrenner - "Situational Awareness"
* Bryan Caplan - "The Case Against Education"
* Scott Alexander - Astral Codex Ten
* Jeffrey Ding - ChinAI Newsletter
* New York Magazine - "Everyone Is Cheating Their Way Through College"

Events & Communities:

* Manifest Conference
* LessWrong
* Eliezer Yudkowsky - "Harry Potter and the Methods of Rationality"

Previous Episodes:

* Doom Debates Live at Manifest 2025 - https://www.youtube.com/watch?v=detjIyxWG8M

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Carl Feynman got his Master's in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering.

He's known Eliezer Yudkowsky since the '90s, and witnessed Eliezer's AI doom argument taking shape before most of us were paying any attention!

He agreed to come on the show because he supports Doom Debates' mission of raising awareness of imminent existential risk from superintelligent AI.

00:00 - Teaser
00:34 - Carl Feynman's Background
02:40 - Early Concerns About AI Doom
03:46 - Eliezer Yudkowsky and the Early AGI Community
05:10 - Accelerationist vs. Doomer Perspectives
06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
07:47 - Timeline to Doom: Point of No Return
08:45 - What's Your P(Doom)™
09:44 - Public Perception and Political Awareness of AI Risk
11:09 - AI Morality, Alignment, and Chatbots Today
13:05 - The Alignment Problem and Competing Values
15:03 - Can AI Truly Understand and Value Morality?
16:43 - Multiple Competing AIs and Resource Competition
18:42 - Alignment: Wanting vs. Being Able to Help Humanity
19:24 - Scenarios of Doom and Odds of Success
19:53 - Mainline Good Scenario: Non-Doom Outcomes
20:27 - Heaven, Utopia, and Post-Human Vision
22:19 - Gradual Disempowerment Paper and Economic Displacement
23:31 - How Humans Get Edged Out by AIs
25:07 - Can We Gaslight Superintelligent AIs?
26:38 - AI Persuasion & Social Influence as Doom Pathways
27:44 - Riding the Doom Train: Headroom Above Human Intelligence
29:46 - Orthogonality Thesis and AI Motivation
32:48 - Alignment Difficulties and Deception in AIs
34:46 - Elon Musk, Maximal Curiosity & Mike Israetel's Arguments
36:26 - Beauty and Value in a Post-Human Universe
38:12 - Multiple AIs Competing
39:31 - Space Colonization, Dyson Spheres & Hanson's "Alien Descendants"
41:13 - What Counts as Doom vs. Not Doom?
43:29 - Post-Human Civilizations and Value Function
44:49 - Expertise, Rationality, and Doomer Credibility
46:09 - Communicating Doom: Missing Mood & Public Receptiveness
47:41 - Personal Preparation vs. Belief in Imminent Doom
48:56 - Why Can't We Just Hit the Off Switch?
50:26 - The Treacherous Turn and Redundancy in AI
51:56 - Doom by Persuasion or Entertainment
53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
55:22 - Why Carl Chose Doom Debates
56:18 - Liron's Outro

Show Notes

Carl's Twitter — https://x.com/carl_feynman
Carl's LessWrong — https://www.lesswrong.com/users/carl-feynman
Gradual Disempowerment — https://gradual-disempowerment.ai
The Intelligence Curse — https://intelligence-curse.ai
AI 2027 — https://ai-2027.com
Alcor cryonics — https://www.alcor.org
The LessOnline Conference — https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Richard Hanania is the President of the Center for the Study of Partisanship and Ideology. His work has been praised by Vice President JD Vance, Tyler Cowen, and Bryan Caplan, among others.

In his influential newsletter, he's written about why he finds AI doom arguments unconvincing. He was gracious enough to debate me on this topic. Let's see if one of us can change the other's P(Doom)!

0:00 Intro
1:53 Richard's politics
2:24 The state of political discourse
3:30 What's your P(Doom)?™
6:38 How to stop the doom train
8:27 Statement on AI risk
9:31 Intellectual influences
11:15 Base rates for AI doom
15:43 Intelligence as optimization power
31:26 AI capabilities progress
53:46 Why isn't AI yet a top blogger?
58:02 Diving into Richard's Doom Train
58:47 Diminishing Returns on Intelligence
1:06:36 Alignment will be relatively trivial
1:15:14 Power-seeking must be programmed
1:21:27 AI will simply be benevolent
1:27:17 Superintelligent AI will negotiate with humans
1:33:00 Super AIs will check and balance each other out
1:36:54 We're mistaken about the nature of intelligence
1:41:46 Summarizing Richard's AI doom position
1:43:22 Jobpocalypse and gradual disempowerment
1:49:46 Ad hominem attacks in AI discourse

Show Notes

Subscribe to Richard Hanania's Newsletter: https://richardhanania.com
Richard's blogpost laying out where he gets off the AI "doom train": https://www.richardhanania.com/p/ai-doomerism-as-science-fiction
Richard's interview with Steven Pinker: https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence
Richard's interview with Robin Hanson: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
My Doom Debate with Robin Hanson: https://www.youtube.com/watch?v=dTQb6N3_zu8
My reaction to Steven Pinker's AI doom position, and why his arguments are shallow: https://www.youtube.com/watch?v=-tIq6kbrF-4
"The Betterness Explosion" by Robin Hanson: https://www.overcomingbias.com/p/the-betterness-explosionhtml

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Emmett Shear is the cofounder and ex-CEO of Twitch, ex-interim-CEO of OpenAI, and a former Y Combinator partner. He recently announced Softmax, a new company researching a novel solution to AI alignment.

In his recent interview, Emmett explained "organic alignment", drawing comparisons to biological systems and advocating for AI to be raised in a community-like setting with humans.

Let's go through his talk, point by point, to see if Emmett's alignment plan makes sense…

00:00 Episode Highlights
00:36 Introducing Softmax and its Founders
01:33 Research Collaborators and Ken Stanley's Influence
02:16 Softmax's Mission and Organic Alignment
03:13 Critique of Organic Alignment
05:29 Emmett's Perspective on AI Alignment
14:36 Human Morality and Cognitive Submodules
38:25 Top-Down vs. Emergent Morality in AI
44:56 Raising AI to Grow Up with Humanity
48:43 Softmax's Incremental Approach to AI Alignment
52:22 Convergence vs. Divergence in AI Learning
55:49 Multi-Agent Reinforcement Learning
01:12:28 The Importance of Storytelling in AI Development
01:16:34 Living With AI As It Grows
01:20:19 Species Giving Birth to Species
01:23:23 The Plan for AI's Adolescence
01:26:53 Emmett's Views on Superintelligence
01:31:00 The Future of AI Alignment
01:35:10 Final Thoughts and Criticisms
01:44:07 Conclusion and Call to Action

Show Notes

Emmett Shear's interview on BuzzRobot with Sophia Aryan (source material) — https://www.youtube.com/watch?v=_3m2cpZqvdw
BuzzRobot's YouTube channel — https://www.youtube.com/@BuzzRobot
BuzzRobot's Twitter — https://x.com/buZZrobot/
Softmax's website — https://softmax.com
My Doom Debate with Ken Stanley (Softmax advisor) — https://www.youtube.com/watch?v=GdthPZwU1Co
My Doom Debate with Gil Mark on whether aligning AIs in groups is a more solvable problem — https://www.youtube.com/watch?v=72LnKW_jae8

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Prof. Scott Sumner is a well-known macroeconomist who spent more than 30 years teaching at Bentley University, and now holds an Emeritus Chair in monetary policy at George Mason University's Mercatus Center. He's best known for his blog, The Money Illusion, which sparked the idea of Market Monetarism and NGDP targeting.

I sat down with him at LessOnline 2025 to debate why his P(Doom) is pretty low. Where does he get off the Doom Train? 🚂

00:00 Episode Preview
00:34 Introducing Scott Sumner
05:20 Is AGI Coming Soon?
09:12 Potential of AI in Various Fields
36:49 Ethical Implications of Superintelligent AI
41:03 The Nazis as an Outlier in History
43:36 Intelligence and Morality: The Orthogonality Thesis
49:03 The Risk of Misaligned AI Goals
01:09:31 Recapping Scott's Position

Show Notes

Scott's current blog, The Pursuit of Happiness:

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
John Searle's "Chinese Room argument" has been called one of the most famous thought experiments of the 20th century. It's still frequently cited today to argue AI can never truly become intelligent.

People continue to treat the Chinese Room like a brilliant insight, but in my opinion, it's actively misleading and DUMB! Here's why…

00:00 Intro
00:20 What is Searle's Chinese Room Argument?
01:43 John Searle (1984) on Why Computers Can't Understand
01:54 Why the "Chinese Room" Metaphor is Misleading

This mini-episode is taken from Liron's reaction to Sir Roger Penrose. Watch the full episode:

Show Notes

2008 Interview with John Searle: https://www.youtube.com/watch?v=3TnBjLmQawQ&t=253s
1984 Debate with John Searle: https://www.youtube.com/watch?v=6tzjcnPsZ_w
"Chinese Room" cartoon: https://miro.medium.com/v2/0*iTvDe5ebNPvg10AO.jpeg

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Let's see where the attendees of Manifest 2025 get off the Doom Train, and whether I can convince them to stay on and ride with me to the end of the line!

00:00 Introduction to Doom Debates
03:21 What's Your P(Doom)?™
05:03 🚂 "AGI Isn't Coming Soon"
08:37 🚂 "AI Can't Surpass Human Intelligence"
12:20 🚂 "AI Won't Be a Physical Threat"
13:39 🚂 "Intelligence Yields Moral Goodness"
17:21 🚂 "Safe AI Development Process"
17:38 🚂 "AI Capabilities Will Rise at a Manageable Pace"
20:12 🚂 "AI Won't Try to Conquer the Universe"
25:00 🚂 "Superalignment Is A Tractable Problem"
28:58 🚂 "Once We Solve Superalignment, We'll Enjoy Peace"
31:51 🚂 "Unaligned ASI Will Spare Us"
36:40 🚂 "AI Doomerism Is Bad Epistemology"
40:11 Bonus 🚂: "Fine, P(Doom) is high… but that's ok!"
42:45 Recapping the Debate

See also my previous episode explaining the Doom Train: https://lironshapira.substack.com/p/poking-holes-in-the-ai-doom-argument

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
I often talk about the “Doom Train”, the series of claims and arguments involved in concluding that P(Doom) from artificial superintelligence is high. In this episode, it’s finally time to show you the whole track!

00:00 Introduction
01:09 “AGI isn’t coming soon”
04:42 “Artificial intelligence can’t go far beyond human intelligence”
07:24 “AI won’t be a physical threat”
08:28 “Intelligence yields moral goodness”
09:39 “We have a safe AI development process”
10:48 “AI capabilities will rise at a manageable pace”
12:28 “AI won’t try to conquer the universe”
15:12 “Superalignment is a tractable problem”
16:55 “Once we solve superalignment, we’ll enjoy peace”
19:02 “Unaligned ASI will spare us”
20:12 “AI doomerism is bad epistemology”
21:42 Bonus arguments: “Fine, P(Doom) is high… but that’s ok!”

Stops on the Doom Train

AGI isn’t coming soon
* No consciousness
* No emotions
* No creativity — AIs are limited to copying patterns in their training data; they can’t “generate new knowledge”
* AIs aren’t even as smart as dogs right now, never mind humans
* AIs constantly make dumb mistakes; they can’t even do simple arithmetic reliably
* LLM performance is hitting a wall — GPT-4.5 is barely better than GPT-4.1 despite being larger scale
* No genuine reasoning
* No microtubules exploiting uncomputable quantum effects
* No soul
* We’ll need to build tons of data centers and power before we get to AGI
* No agency
* This is just another AI hype cycle; every 25 years people think AGI is coming soon and they’re wrong

Artificial intelligence can’t go far beyond human intelligence
* “Superhuman intelligence” is a meaningless concept
* Human engineering is already coming close to the laws of physics
* Coordinating a large engineering project can’t happen much faster than humans do it
* No individual human is that smart compared to humanity as a whole, including our culture, corporations, and other institutions. Similarly, no individual AI will ever be that smart compared to the sum of human culture and other institutions.

AI won’t be a physical threat
* AI doesn’t have arms or legs; it has zero control over the real world
* An AI with a robot body can’t fight better than a human soldier
* We can just disconnect an AI’s power to stop it
* We can just turn off the internet to stop it
* We can just shoot it with a gun
* It’s just math
* Any supposed chain of events where AI kills humans is far-fetched science fiction

Intelligence yields moral goodness
* More intelligence is correlated with more morality
* Smarter people commit fewer crimes
* The orthogonality thesis is false
* AIs will discover moral realism
* If we make AIs that smart, and we’re trying to make them moral, they’ll be smart enough to debug their own morality
* Positive-sum cooperation was the outcome of natural selection

We have a safe AI development process
* Just like every new technology, we’ll figure it out as we go
* We don’t know what problems need to be fixed until we build the AI and test it out
* If an AI causes problems, we’ll be able to turn it off and release another version
* We have safeguards to make sure AI doesn’t get uncontrollable/unstoppable
* If we accidentally build an AI that stops accepting our shutoff commands, it won’t manage to copy versions of itself outside our firewalls which then proceed to spread exponentially like a computer virus
* If we accidentally build an AI that escapes our data center and spreads exponentially like a computer virus, it won’t do too much damage in the world before we can somehow disable or neutralize all its copies
* If we can’t disable or neutralize copies of rogue AIs, we’ll rapidly build other AIs that can do that job for us, and won’t themselves go rogue on us

AI capabilities will rise at a manageable pace
* Building larger data centers will be a speed bottleneck
* Another speed bottleneck is the amount of research that needs to be done, both in terms of computational simulation and in terms of physical experiments, and this kind of research takes lots of time
* Recursive self-improvement “foom” is impossible
* The whole economy never grows with localized, centralized “foom”
* AIs need to collect cultural learnings over time, like humanity did as a whole
* AI is just part of the good pattern of exponential economic growth eras

AI won’t try to conquer the universe
* AIs can’t “want” things
* AIs won’t have the same “fight instincts” as humans and animals, because they weren’t shaped by a natural selection process that involved life-or-death resource competition
* Smart employees often work for less-smart bosses
* Just because AIs help achieve goals doesn’t mean they have to be hard-core utility maximizers
* Instrumental convergence is false: achieving goals effectively doesn’t mean you have to be relentlessly seizing power and resources
* Resource-hungry goal-maximizer AIs wouldn’t seize literally every atom; there’ll still be some leftover resources for humanity
* AIs will use new kinds of resources that humans aren’t using: dark energy, wormholes, alternate universes, etc.

Superalignment is a tractable problem
* Current AIs have never killed anybody
* Current AIs are extremely successful at doing useful tasks for humans
* If AIs are trained on data from humans, they’ll be “aligned by default”
* We can just make AIs abide by our laws
* We can align the superintelligent AIs by using a scheme involving cryptocurrency on the blockchain
* Companies have economic incentives to solve superintelligent AI alignment, because unaligned superintelligent AI would hurt their profits
* We’ll build an aligned not-that-smart AI, which will figure out how to build the next-generation AI which is smarter and still aligned to human values, and so on until aligned superintelligence

Once we solve superalignment, we’ll enjoy peace
* The power from ASI won’t be monopolized by a single human government / tyranny
* The decentralized nodes of human-ASI hybrids won’t be like warlords constantly fighting each other; they’ll be like countries making peace
* Defense will have an advantage over attack, so the equilibrium of all the groups of humans and ASIs will be multiple defended regions, not a war of mutual destruction
* The world of human-owned ASIs is a stable equilibrium, not one where ASI-focused projects keep buying out and taking resources away from human-focused ones (Gradual Disempowerment)

Unaligned ASI will spare us
* The AI will spare us because it values the fact that we created it
* The AI will spare us because studying us helps maximize its curiosity and learning
* The AI will spare us because it feels toward us the way we feel toward our pets
* The AI will spare us because peaceful coexistence creates more economic value than war
* The AI will spare us because Ricardo’s Law of Comparative Advantage says you can still benefit economically from trading with someone who’s weaker than you

AI doomerism is bad epistemology
* It’s impossible to predict doom
* It’s impossible to put a probability on doom
* Every doom prediction has always been wrong
* Every doomsayer is either psychologically troubled or acting on corrupt incentives
* If we were really about to get doomed, everyone would already be agreeing about that and bringing it up all the time

Sure, P(Doom) is high, but let’s race to build it anyway because…

Coordinating to not build ASI is impossible
* China will build ASI as fast as it can, no matter what — because of game theory
* So however low our chance of surviving it is, the US should take the chance first

Slowing down the AI race doesn’t help anything
* Chances of solving AI alignment won’t improve if we slow down or pause the capabilities race
* I personally am going to die soon, and I don’t care about future humans, so I’m open to any hail mary to prevent myself from dying
* Humanity is already going to rapidly destroy ourselves with nuclear war, climate change, etc.
* Humanity is already going to die out soon because we won’t have enough babies

Think of the good outcome
* If it turns out that doom from overly-fast AI building doesn’t happen, in that case we can more quickly get to the good outcome!
* People will stop suffering and dying faster

AI killing us all is actually good
* Human existence is morally negative on net, or close to zero net moral value
* Whichever AI ultimately comes to power will be a “worthy successor” to humanity
* Whichever AI ultimately comes to power will be as morally valuable as human descendants generally are to their ancestors, even if their values drift
* The successor AI’s values will be interesting, productive values that let them successfully compete to dominate the universe
* How can you argue with the moral choices of an ASI that’s smarter than you? How can you claim to know goodness better than it does?
* It’s species-ist to judge what a superintelligent AI would want to do. The moral circle shouldn’t be limited to just humanity.
* Increasing entropy is the ultimate north star for techno-capital, and AI will increase entropy faster
* Human extinction will solve the climate crisis, and pollution, and habitat destruction, and let Mother Earth heal

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates Get full access to Doom Debates at lironshapira.substack.com/subscribe