Doom Debates
Author: Liron Shapira
© Liron Shapira
110 Episodes
Today I’m taking a rare break from AI doom to cover the dumbest kind of doom humanity has ever created for itself: climate change. We’re talking about a problem that costs less than $2 billion per year to solve. For context, that’s what the US spent on COVID relief every 7 hours during the pandemic. Bill Gates could literally solve this himself.

My guest Andrew Song runs Make Sunsets, which launches weather balloons filled with sulfur dioxide (SO₂) into the stratosphere to reflect sunlight and cool the planet. It’s the same mechanism volcanoes use—Mount Pinatubo cooled Earth by 0.5°C for a year in 1991. The physics is solid, the cost is trivial, and the coordination problem is nonexistent.

So why aren’t we doing it? Because people are squeamish about “playing God” with the atmosphere, even while we’re building superintelligent AI. Because environmentalists would rather scold you into turning off your lights than support a solution that actually works.

This conversation changed how I think about climate change. I went from viewing it as this intractable coordination problem to realizing it’s basically already solved—we’re just LARPing that it’s hard! 🙈 If you care about orders of magnitude, this episode will blow your mind. And if you feel guilty about your carbon footprint: you can offset an entire year of typical American energy usage for about 15 cents. Yes, cents.

Timestamps
* 00:00:00 - Introducing Andrew Song, Cofounder of Make Sunsets
* 00:03:08 - Why the company is called “Make Sunsets”
* 00:06:16 - What’s Your P(Doom)™ From Climate Change
* 00:10:24 - Explaining geoengineering and solar radiation management
* 00:16:01 - The SO₂ dial we can turn
* 00:22:00 - Where to get SO₂ (gas supply stores, sourcing from oil)
* 00:28:44 - Cost calculation: Just $1-2 billion per year
* 00:34:15 - “If everyone paid $3 per year”
* 00:42:38 - Counterarguments: moral hazard, termination shock
* 00:44:21 - Being an energy hog is totally fine
* 00:52:16 - What motivated Andrew (his kids, Luke Iseman)
* 00:59:09 - “The stupidest problem humanity has created”
* 01:11:26 - Offsetting CO₂ from OpenAI’s Stargate
* 01:13:38 - Playing God is good

Show Notes

Make Sunsets
* Website: https://makesunsets.com
* Tax-deductible donations (US): https://givebutter.com/makesunsets

People Mentioned
* Casey Handmer: https://caseyhandmer.wordpress.com/
* Emmett Shear: https://twitter.com/eshear
* Palmer Luckey: https://twitter.com/PalmerLuckey

Resources Referenced
* Book: Termination Shock by Neal Stephenson
* Book: The Rational Optimist by Matt Ridley
* Book: Enlightenment Now by Steven Pinker
* Harvard SCoPEx project (the Bill Gates-funded project that got blocked)
* Climeworks (direct air capture company): https://climeworks.com

Data/Monitoring
* NOAA (National Oceanic and Atmospheric Administration): https://www.noaa.gov
* ESA Sentinel-5P TROPOMI satellite data

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
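As a rough sanity check on the episode’s headline numbers, here’s a back-of-envelope sketch (mine, not Andrew’s) that divides the cited ~$1-2 billion/year program cost by an assumed world population of roughly 8 billion:

```python
# Back-of-envelope sketch of the per-person cost implied by the episode's
# ~$1-2 billion/year figure for stratospheric SO2 cooling.
# The world population figure is an assumption added for illustration.
annual_program_cost_usd = 2e9   # upper end of the $1-2B/yr estimate cited above
world_population = 8e9          # assumed: roughly today's global population

cost_per_person_per_year = annual_program_cost_usd / world_population
print(f"~${cost_per_person_per_year:.2f} per person per year")  # ~$0.25
```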
I’ve been puzzled by David Deutsch’s AI claims for years. Today I finally had the chance to hash it out: Brett Hall, one of the foremost educators of David Deutsch’s ideas around epistemology & science, was brave enough to debate me!

Brett has been immersed in Deutsch’s philosophy since 1997 and teaches it on his Theory of Knowledge podcast, which has been praised by tech luminary Naval Ravikant. He agrees with Deutsch on 99.99% of issues, especially the dismissal of AI as an existential threat.

In this debate, I stress-test the Deutschian worldview, and along the way we unpack our diverging views on epistemology, the orthogonality thesis, and pessimism vs optimism.

Timestamps
0:00 — Debate preview & introducing Brett Hall
4:24 — Brett’s opening statement on techno-optimism
13:44 — What’s Your P(Doom)?™
15:43 — We debate the merits of Bayesian probabilities
20:13 — Would Brett sign the AI risk statement?
24:44 — Liron declares his “damn good reason” for AI oversight
35:54 — Debate milestone: We identify our crux of disagreement!
37:29 — Prediction vs prophecy
44:28 — The David Deutsch CAPTCHA challenge
1:00:41 — What makes humans special?
1:15:16 — Reacting to David Deutsch’s recent statements on AGI
1:24:04 — Debating what makes humans special
1:40:25 — Brett reacts to Roger Penrose’s AI claims
1:48:13 — Debating the orthogonality thesis
1:56:34 — The powerful AI data center hypothetical
2:03:10 — “It is a dumb tool, easily thwarted”
2:12:18 — Clash of worldviews: goal-driven vs problem-solving
2:25:05 — Ideological Turing test: We summarize each other’s positions
2:30:44 — Are doomers just pessimists?

Show Notes
Brett’s website — https://www.bretthall.org
Brett’s Twitter — https://x.com/TokTeacher

The Deutsch Files by Brett Hall and Naval Ravikant
* https://nav.al/deutsch-files-i
* https://nav.al/deutsch-files-ii
* https://nav.al/deutsch-files-iii
* https://nav.al/deutsch-files-iv

Books:
* The Fabric of Reality by David Deutsch
* The Beginning of Infinity by David Deutsch
* Superintelligence by Nick Bostrom
* If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
We took Eliezer Yudkowsky and Nate Soares’s new book, If Anyone Builds It, Everyone Dies, on the streets to see what regular people think.

Do people think that artificial intelligence is a serious existential risk? Are they open to considering the argument before it’s too late? Are they hostile to the idea? Are they totally uninterested?

Watch this episode to see the full spectrum of reactions from a representative slice of America!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Welcome to the Doom Debates + Wes Roth + Dylan Curious crossover episode!

Wes & Dylan host a popular YouTube AI news show that’s better than mainstream media. They interview thought leaders like Prof. Nick Bostrom, Dr. Mike Israetel — and now, yours truly!

This episode is Part 2, where Wes & Dylan come on Doom Debates to break down the latest AI news. In Part 1, I went on Wes & Dylan’s channel to talk about my AI doom worries. The only reasonable move is to subscribe to both channels and watch both parts!

Timestamps
00:00 — Cold open
00:45 — Introducing Wes & Dylan & hearing their AI YouTuber origin stories
05:38 — What’s Your P(Doom)?™
10:30 — Living with high P(doom)
12:10 — AI News Roundup: If Anyone Builds It, Everyone Dies Reactions
17:02 — AI Redlines at the UN & risk of AGI authoritarianism
26:56 — Robot ‘violence test’
29:20 — Anthropic gets honest about job loss impact
32:43 — AGI hunger strikes, the rationale for Anthropic protests
41:24 — Liron explains his proposal for a safer future, treaty & enforcement debate
49:23 — US Government officials ignore scientists and use “catastrophists” pejorative
55:59 — Experts’ p(doom) predictions
59:41 — Wes gives his take on AI safety warnings
1:02:04 — Wrap-up, Subscribe to Wes Roth and Dylan Curious

Show Notes
Wes & Dylan’s channel — https://www.youtube.com/wesroth
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
I’m excited to announce the launch of Doom Hut, the official Doom Debates merch store!

This isn’t just another merch drop. It’s a way to spread an urgent message that humanity faces an imminent risk from superintelligent AI, and we need to build common knowledge about it now.

What We’re Offering

The Doom Hut is the premier source of “high P(doom) fashion.” We’ve got t-shirts in men’s and women’s styles, tall tees, tote bags, and hats that keep both the sun and the doom out of your eyes.

Our signature items are P(doom) pins. Whether your probability of doom is less than 10%, greater than 25%, or anywhere in between, you can represent your assessment with pride.

Our shirts feature “Doom Debates” on the front and “What’s your P(doom)?” on the back. It’s a conversation starter that invites questions rather than putting people on the defensive.

Why This Matters

This is crunch time. There’s no more time to beat around the bush or pretend everything’s okay.

When you wear this merch, you’re not just making a fashion statement. You’re signaling that P(doom) is uncomfortably high and that we need to stop building dangerous AI before we know how to control it.

People will wonder what “Doom Debates” means. They’ll see that you’re bold enough to acknowledge the world faces real threats. Maybe they’ll follow your example. Maybe they’ll start asking questions and learning more.

Supporting the Mission

The merch store isn’t really about making money for us (we make about $2 per item sold). The show’s production and marketing costs are funded by donations from viewers like you. Visit doomdebates.com/donate to learn more.

Donations to Doom Debates are fully tax-deductible through our fiscal sponsor, Manifund.org. You can also support us by becoming a premium subscriber to DoomDebates.com, which helps boost our Substack rankings and visibility.

Join the Community

Join our Discord community here! It’s a vibrant space where people debate, share memes, discuss new episodes, and talk about where they “get off the doom train.”

Moving Forward Together

The outpouring of support since we started accepting donations has been incredible. You get what we’re trying to do here—raising awareness and helping the average person understand that AI risk is an imminent threat to them and their loved ones.

This is real. This is urgent. And together, we’re moving the Overton window. Thanks for your support.

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Eliezer Yudkowsky can warn humankind that If Anyone Builds It, Everyone Dies and get on the New York Times bestseller list, but he won’t get upvoted to the top of LessWrong.

According to the leaders of LessWrong, that’s intentional. The rationalist community thinks aggregating community support for important claims is “political fighting”.

Unfortunately, the idea that some other community will strongly rally behind Eliezer Yudkowsky’s message while LessWrong “stays out of the fray” and purposely prevents mutual knowledge of support from being displayed, is unrealistic.

Our refusal to aggregate the rationalist community’s beliefs into signals and actions is why we live in a world where rationalists with double-digit P(Doom)s join AI-race companies instead of AI-pause movements.

We let our community become a circular firing squad. What did we expect?

Timestamps
00:00:00 — Cold Open
00:00:32 — Introducing Holly Elmore, Exec. Director of PauseAI US
00:03:12 — “If Anyone Builds It, Everyone Dies”
00:10:07 — What’s Your P(Doom)™
00:12:55 — Liron’s Review of IABIED
00:15:29 — Encouraging Early Joiners to a Movement
00:26:30 — MIRI’s Communication Issues
00:33:52 — Government Officials’ Reviews of IABIED
00:40:33 — Emmett Shear’s Review of IABIED
00:42:47 — Michael Nielsen’s Review of IABIED
00:45:35 — New York Times Review of IABIED
00:49:56 — Will MacAskill’s Review of IABIED
01:11:49 — Clara Collier’s Review of IABIED
01:22:17 — Vox Article Review
01:28:08 — The Circular Firing Squad
01:37:02 — Why Our Kind Can’t Cooperate
01:49:56 — LessWrong’s Lukewarm Show of Support
02:02:06 — The “Missing Mood” of Support
02:16:13 — Liron’s “Statement of Support for IABIED”
02:18:49 — LessWrong Community’s Reactions to the Statement
02:29:47 — Liron & Holly’s Hopes for the Community
02:39:01 — Call to Action

SHOW NOTES
PauseAI US — https://pauseai-us.org
PauseAI US Upcoming Events — https://pauseai-us.org/events
International PauseAI — https://pauseai.info
Holly’s Twitter — https://x.com/ilex_ulmus

Referenced Essays & Posts
* Liron’s Eliezer Yudkowsky interview post on LessWrong — https://www.lesswrong.com/posts/kiNbFKcKoNQKdgTp8/interview-with-eliezer-yudkowsky-on-rationality-and
* Liron’s “Statement of Support for If Anyone Builds It, Everyone Dies” — https://www.lesswrong.com/posts/aPi4HYA9ZtHKo6h8N/statement-of-support-for-if-anyone-builds-it-everyone-dies
* “Why Our Kind Can’t Cooperate” by Eliezer Yudkowsky (2009) — https://www.lesswrong.com/posts/7FzD7pNm9X68Gp5ZC/why-our-kind-can-t-cooperate
* “Something to Protect” by Eliezer Yudkowsky — https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect
* Center for AI Safety Statement on AI Risk — https://safe.ai/work/statement-on-ai-risk

OTHER RESOURCES MENTIONED
* Steven Pinker’s new book on mutual knowledge, When Everyone Knows That Everyone Knows... — https://stevenpinker.com/publications/when-everyone-knows-everyone-knows-common-knowledge-and-mysteries-money-power-and
* Scott Alexander’s “Ethnic Tension and Meaningless Arguments” — https://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/

PREVIOUS EPISODES REFERENCED
Holly’s previous Doom Debates appearance debating the California SB 1047 bill — https://www.youtube.com/watch?v=xUP3GywD0yM
Liron’s interview with Eliezer Yudkowsky about the IABIED launch — https://www.youtube.com/watch?v=wQtpSQmMNP0

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Ex-Twitch Founder and OpenAI Interim CEO Emmett Shear is one of the rare established tech leaders to lend his name and credibility to Eliezer Yudkowsky’s warnings about AI existential risk.

Even though he disagrees on some points, he chose to endorse the new book If Anyone Builds It, Everyone Dies: “Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.”

In this interview from the IABIED launch party, we dive into Emmett’s endorsement, why the current path is so dangerous, and what he hopes to achieve by taking a different approach at his new startup, Softmax.

Watch the full IABIED livestream here: https://www.youtube.com/watch?v=lRITRf-jH1g

Watch my reaction to Emmett’s talk about Softmax to see why I’m not convinced his preferred alignment track is likely to work: https://www.youtube.com/watch?v=CBN1E1fvh2g

Get full access to Doom Debates at lironshapira.substack.com/subscribe
ATTENTION DOOM DEBATES LISTENERS:

We interrupt our usual programming to request that you consider supporting the show.

Do you want your family, children & friends to continue living life through these next 10-20 years and beyond? Want to experience the good future where we benefit from research advances that make us healthier, happier, wiser, and longer-lived? Want our descendants to continue flourishing for trillions of millennia?

I sure as hell do. I’m thrilled and inspired by the grand future within our reach… Sadly, I’m VERY worried that artificial intelligence may soon cause human extinction, leaving the future permanently devoid of the incredible potential value it once had.

I want to safeguard humanity’s chance at participating in the expansive future. I want to swerve away from a premature “GAME OVER” ending. That’s why I started Doom Debates.

The Mission

Doom Debates’s mission is twofold:
* Raise awareness of AI existential risk
* Raise the quality of discourse that people can engage with and trust

The mission is achieved when the average person realizes AI is life-threatening.

Until the average person on Earth sees unaligned, uncontrollable superintelligent AI as life-threatening — that is, an imminent threat to their life and to everything they hold dear — it won’t be feasible for leaders to drive major decisive action to protect us from extinction from superintelligent AI. That’s why raising mainstream awareness of AI x-risk is Doom Debates’s point of highest leverage.

Encouragingly, surveys show that mainstream opinion is already on our side about the level and severity of AI x-risk. What’s missing is urgency. Moving the average person’s opinion from “worried, but in an abstract long-term way” to “alarmed, in an urgent way” creates grassroots demand for drastically better AI policy, and drastically better-informed policymakers.

We needed this shift to happen yesterday; it feels late to have to drive it now. But we have no choice.

By providing high-quality interviews and debates with mainstream appeal, we’ll make millions of people realize as soon as possible: “Hey, P(doom) is too damn high RIGHT NOW. And we need to do something about it URGENTLY!”

One Weird Trick You Can Do To Help

If you care about the show’s mission, the #1 way you can help right now is by donating some of your hard-earned cash money 🙏.

Spending on production & marketing accelerates the show’s growth, which achieves our mission faster (hopefully before it’s too late). Read on for more details.

Where Does The Money Go?

I don’t make money from Doom Debates. My “real job” is running an online relationship coaching service, and my passion project is making this show. Any income from Doom Debates, i.e. viewer donations and ad revenue, is fully reinvested into production and marketing.

* Producer Ori

Thanks to a generous viewer donation, I’ve hired a full-time producer: Ori Nagel.

Ori is a rockstar who I’ve known for many years, and he collaborated with me behind the scenes throughout the show’s first year (he’s also in the first episode) before officially joining Doom Debates as our Producer. I hired him away from the media & outreach team at ControlAI, where he 10x’d their growth on social media channels.

You may have noticed the show has already started getting better at editing & thumbnails, putting out more episodes and clips, and landing more prominent guests.
We’re getting more done across the board because an hour of Ori’s time spent on everything except hosting the show produces the same output as an hour of my time.

But it’s not all smooth sailing — now that Doom Debates is Ori’s full-time job, he apparently needs to “get paid every month” 🤷‍♂️, so I’m appealing to you to help Ori stay on the job. Help us keep delivering faster progress toward the mission.

* Paid Marketing

The key to the show’s growth isn’t marketing; it’s the content. To date, we’ve had robust organic growth with minimal marketing spend. Audience size has been doubling every 3 months, and we’re seeing a snowball effect where bigger audiences attract higher-quality guests and vice versa. We’re also happy to see that the show has unusually high engagement and long-term viewer retention.

That said, investing in marketing lets us pull forward the same viewership we’d eventually get from organic growth. We’ll soon invest in YouTube ads to go beyond the pace of organic growth. We also keep spending opportunistically on marketing initiatives like sponsoring the Manifest conference and giving away T-shirts.

Donation Tiers

SUBSTACK SUPPORTERS

You can donate as little as $10/month by subscribing to the DoomDebates.com Substack, which shows your support and raises our profile on their leaderboard. I’ll send you a free Doom Debates T-shirt and P(Doom) pin so you can represent the show. Early supporters say the T-shirt is great at starting AI x-risk conversations in a non-confrontational way. 😊

MISSION PARTNERS

If you’re serious about lowering P(Doom) and you have the money to spare, $1k+ is the level where you start to move the needle for the show’s budget. This is the level where you officially become a partner in achieving the show’s mission — a Mission Partner.

A donation in the $thousands meaningfully increases our ability to execute on all the moving parts of a top-tier show:
* Guest booking: Outreach to guests who are hard to get, and constant followup
* Pre-production: E.g. preparing elaborate notes about the guest’s positions
* Production: E.g. improving my studio
* Post-production: Basically editing
* Marketing: Making clips, TikTok shorts, YouTube ads, conference sponsorships, etc.

If you think that’s worthwhile:
Please click here to Donate via PayPal
To donate crypto or if you have questions, email me.

Of course, some donation amounts are *extra* meaningful to accelerating the mission. If you’re able and willing to donate $25,000 or $100,000, that’s going to be, let’s see… 25x or 100x more impactful! For my well-off fans, I’d say funding us up to the next $million at the current margin is high positive impact per dollar.

Q: What if I’m not doing any AI x-risk activism? Is it good enough if all I do is donate money to other people who are working on it more directly?

A: YES! You’re putting your time & energy into making money to fund the show, just as I’m putting my time & energy into making it. Unlike the 99.9% of people who deny the crisis, ignore it, or passively/hopelessly worry about it, you’re stepping up to actively help the situation. You’ve identified “Making the average person see AI as life-threatening” as a leverage point. Now you’re taking a straight shot to lower P(Doom)!

Mission Partners are the show’s braintrust, collaborating in the private #mission-partners channel on our Discord server to steer the show’s direction.
They can see non-public information and discussion about upcoming guests, as well as gossip like which AI company CEO surprisingly liked my spicy tweets.

We have a Mission Partners meeting once/month on Zoom to go over the latest updates, plans, and high-level strategy for those who are interested. Every Mission Partner is also credited on the show, unless you prefer to remain anonymous.

How Much Should YOU Donate?

The minimum donation for Mission Partners is a one-time $1,000, but if you can donate more than that and you believe in the mission, please consider scaling your donation according to your level of disposable income. The show’s expenses are over $100k/yr, so it’s extremely helpful if some of you can step up with a larger donation. Consider giving $10k, $100k, or whatever order of magnitude makes sense for you. Just stay within the maximum donation limit of $99 million.

A few months ago, a viewer stepped up and donated over $25,000, and it’s been a game changer. Ori came on board full time and we started moving twice as fast. We promoted the show at the Manifest conference, which led to recruiting a series of high-profile guests: Scott Sumner, Richard Hanania, Carl Feynman, culminating in an episode with Vitalik Buterin! And more unannounced guests in the pipeline.

If you’re ready to become a Mission Partner today:
Please click here to Donate via PayPal
To donate crypto or if you have questions, email me.

We’re seeing strong momentum after the first year of the show. The steady audience growth to date has been 100% organic.

Building a platform to raise the quality of mainstream x-risk discourse and inform the average person’s opinion is a realistically achievable mission. It’s just a matter of growing to the point where we can shape the conversation while there’s still time. To that end, building a team of Mission Partners who support the show financially and strategically is critical.

To everyone who believes in the Doom Debates mission, and believes in our ability to execute it, and acts on that belief by generously donating: THANK YOU! We won’t let you down.

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Science communicator and poker champion Liv Boeree has been concerned about the existential risk of AI for nearly a decade.

In this conversation from my If Anyone Builds It, Everyone Dies launch party, Liv explains why even a 5% chance of AI extinction is unacceptable, why the industry often buries its head in the sand, and how advocates can actually make an impact.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Eliezer Yudkowsky and Nate Soares just launched their world-changing book, If Anyone Builds It, Everyone Dies. PLEASE BUY YOUR COPY NOW!!!

We had an unofficial launch party here on Doom Debates to celebrate the occasion with an incredible group of guests (see timestamps below). Highly recommend watching if you missed it!

Timestamps
* 00:00 — Cold Open
* 00:36 — Max Tegmark, MIT Physicist
* 24:58 — Roman Yampolskiy, Cybersecurity Professor
* 42:20 — Michael of @lethal-intelligence
* 48:30 — Liv Boeree, Host of the Win-Win Podcast ( @LivBoeree )
* 1:10:44 — Michael Trazzi, Filmmaker who just went on a hunger strike against Google DeepMind (@TheInsideView)
* 1:27:21 — Producer Ori
* 1:43:53 — Emmett Shear, Founder of Twitch & Softmax
* 1:55:32 — Holly Elmore, Executive Director of PauseAI US
* 2:13:44 — Gary Marcus, Professor Emeritus of Psychology & Neural Science at NYU
* 2:36:13 — Robert Wright, Author of the NonZero Newsletter (@Nonzero)
* 2:55:50 — Roko Mijic, the Man Behind “Roko’s Basilisk”
* 3:19:17 — Rob Miles, AI Safety Educator (@RobertMilesAI)
* 3:48:31 — Doom Debates Robot Closes Out the Stream!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Max Tegmark is an MIT physics professor, science communicator, best-selling author, and the President and co-founder of the Future of Life Institute. Few individuals have done more to get world leaders to come to a shared sense of reality about the extinction threat of AI.

His endorsement of Eliezer’s book, If Anyone Builds It, Everyone Dies, states: "The most important book of the decade."

Max shared his sharp criticisms of AI companies yesterday on my unofficial IABIED launch party livestream!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
It is Monday, September 15th, and my interview with Eliezer Yudkowsky has dropped!!!

👉 WATCH NOW 👈

We had a unique 2.5-hour discussion covering rationality topics, the AI doom argument, what the AI companies think they're doing, why they don't seem to understand the AI doom argument, what society is doing, and how we can help. And of course, his book, "If Anyone Builds It, Everyone Dies," which is officially launching tomorrow.

For now, go ahead and watch the Eliezer Yudkowsky interview, and please smash that like button. We gotta get this a lot of likes.

Thanks very much for being a Doom Debates watcher. We've also got a launch party coming up. I'll get back to you with more details on that soon.

Get full access to Doom Debates at lironshapira.substack.com/subscribe
The Eliezer Yudkowsky interview premiere is tomorrow (Mon Sep 15) at 9am PT!!!

👉 https://www.youtube.com/watch?v=wQtpSQmMNP0 👈

I can't believe it. We are entering the launch week for If Anyone Builds It, Everyone Dies - a new book by Eliezer Yudkowsky and Nate Soares. This is a hell of a book. I highly recommend it.

Everybody should pick it up. Honestly, if you haven't gone and bought that book by now, not gonna lie, I'm kind of disappointed. Should you really even be watching this channel? Are you not getting the message that it's critical for you to buy this book?

It's going to be on the New York Times bestseller list, and the only question is, what position will it be? That's going to depend on how many people like you take action. And by action, I mean, you know, pay $14.99, get it on Kindle. Really do your part. It's not that much.

Once you've done that, remember, tomorrow something very special is happening on my personal channel - the long awaited interview between me and Eliezer Yudkowsky! We've all been waiting so long for me to talk to Eliezer. It finally happened as part of his book tour.

Technically, it's not part of Doom Debates, it's just on my own channel because it's not branded as a Doom thing. It's just a Yudkowsky interview. So you're not gonna see me ask P(Doom). That's not gonna be part of it. We're just going to talk in terms of existential risk and looking at dangers. That's the preferred terminology here. There's a difference of opinion between him and me, and we gotta respect that.

So action item for you. Once you've bought that book, "If Anyone Builds It, Everyone Dies," head over to the link in the show notes for this post or in the description for this post. There's going to be a link to a YouTube premiere. That YouTube premiere is the Eliezer Yudkowsky interview, which is happening 9:00 AM tomorrow, Monday, September 15th.

That's really not that long after you're listening to this post. We're talking hours away. You want to be subscribed to that premiere. You don't want to miss it because it's not going to be a post in the Doom Debates feed. It's going to be its own YouTube premiere on my personal channel.

So once again, check out those show notes. Check out the description to what you're watching right now. Click on that link and bookmark the premiere episode because I'm going to be there live in the live chat watching my own episode. Just like you're watching the episode, my producer Ori is going to be there live watching the episode. A bunch of your fellow Doom Debates fans are going to be in the live chat watching the episode.

This is a really special event, both for Eliezer Yudkowsky and MIRI, and for Doom Debates and for the human population. At least the human population can say that there was a New York Times bestseller calling out what's about to happen to all of us if we don't stop it. I think it's a very special week.

I hope you'll help it be as high profile as possible so that the algorithms will take notice and society will take notice and a grassroots movement will take notice and our politicians and other leaders will take notice.

All right. Signing off for now. I'll see you all in the YouTube premiere of the Eliezer Yudkowsky interview. Thanks for watching!

---

P.S. This post also has more details about Tuesday’s Doom Debates launch party!

Get full access to Doom Debates at lironshapira.substack.com/subscribe
In this special cross-post from Jona Ragogna’s channel, I'm interviewed about why superintelligent AI poses an imminent extinction threat, and how AI takeover is going to unfold.

Newly exposed to AI x-risk, Jona asks sharp questions about why we’re racing toward superintelligence despite the danger, and what ordinary people can do now to lower p(doom). This is one of the most crisp explainers of the AI-doom argument I’ve done to date.

Timestamps
0:00 Intro
0:41 Why AI is likely to cause human extinction
2:55 How AI takeover happens
4:55 AI systems have goals
6:33 Liron explains p(Doom)
8:50 The worst case scenario is AI sweeps us away
12:46 The best case scenario is hard to define
14:24 How to avoid doom
15:09 Frontier AI companies are just doing "ad hoc" alignment
20:30 Why "warning shots" from AI aren't scary yet
23:19 Should young adults work on AI alignment research?
24:46 We need a grassroots movement
28:31 Life choices when AI doom is imminent
32:35 Are AI forecasters just biased?
34:12 The Doom Train™ and addressing counterarguments
40:28 Anthropic's new AI welfare announcement isn't a major breakthrough
44:35 It's unknown what's going on inside LLMs and AI systems
53:22 Effective Altruism's ties to AI risk
56:58 Will AI be a "worthy descendant"?
1:01:08 How to calculate P(Doom)
1:02:49 Join the unofficial If Anyone Builds It, Everyone Dies book launch party!

Show Notes
Subscribe to Jona Ragogna — https://youtube.com/@jonaragogna

IF ANYONE BUILDS IT LAUNCH WEEK EVENTS:

Mon Sep 15 @ 9am PT / 12pm ET / 1600 UTC
My Eliezer Yudkowsky interview premieres on YouTube! Stay tuned for details.

Tue Sep 16 @ 2pm PT / 5pm ET / 2100 UTC
The Doom Debates unofficial IABIED Launch Party!!!

More details about launch week HERE!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Ladies and gentlemen, we are days away from the long-awaited release of Eliezer Yudkowsky and Nate Soares's new book, "If Anyone Builds It, Everyone Dies"!!!

Mon Sep 15 @9am PT: My Interview with Eliezer Yudkowsky

We'll be kicking things off the morning of Monday, September 15th with a live watch party of my very special new interview with the one and only Eliezer Yudkowsky!

All of us will be in the YouTube live chat. I'll be there, producer Ori will be there, and you'll get a first look at this new & exciting interview: Questions he's never been asked before.

We'll talk about my history first meeting Eliezer in 2008, how things have evolved, what's going on now, why everybody has their head in the sand. And we'll even go to the LessWrong sequences and do some rationality deep cuts.

This will be posted on my personal YouTube channel. But if you're subscribed to Doom Debates, you'll get the link in your feed as we get closer to Monday.

Mark your calendars for Monday, September 15th, 9:00am Pacific, 12:00 PM Eastern. That's 1600 UTC for my Europals out there, midnight for my China peeps.

I think YouTube commenter @pairy5420 said it best: "Bro, I could have three essays due, my crush just text me, the FBI outside my house, me having a winning lottery ticket to turn in all at the same time, and I still wouldn't miss the interview."

That's the spirit, @pairy5420. You won't want to miss the Eliezer interview YouTube premiere. I'll see you all there. It's gonna be a great start to the book's launch week.

Tue Sep 16 @2pm PT: The Doom Debates Unofficial Book Launch Party

Now let's talk about the launch day: Tuesday, September 16th. Once you've picked up your copy of the book, get ready for the Doom Debates unofficial "If Anyone Builds It, Everyone Dies" launch party!!!

This is going to be a very special, unprecedented event here on this channel. It's gonna be a three-hour live stream hosted by me and producer Ori, our new full-time producer. He's gonna be the co-host of the party.

He's finally coming out from behind the scenes, making his debut on the channel. Unless you count the first episode of the show - he was there too. The two of us are going to be joined by a who's who of very special guests. Get a load of these guests who are gonna be dropping by the unofficial launch party:

* John Sherman from the AIX Risk Network and Michael, my other co-host of Warning Shots
* Roman Yampolskiy, a top researcher and communicator in the field of AI x-risk
* Emmett Shear, founder and CEO of a new AI alignment company called Softmax. He was previously the CEO of Twitch, and the interim CEO of OpenAI
* Roon, member of the technical staff at OpenAI and friend of the show
* Gary Marcus, cognitive scientist, author, entrepreneur, and famous AGI skeptic
* Liv Boeree, the YouTube star, poker champion, and science communicator
* Robert Wright, bestselling author, podcaster, and wide-ranging intellectual, about to announce his new book about AI
* Holly Elmore, executive director of PauseAI US
* Roko Mijic, you guys know Roko 😉

And that's not all. There's going to be even more that I can't tell you about now, but it will not disappoint.

So I really hope to see you all there at the unofficial "If Anyone Builds It, Everyone Dies" launch party, Tuesday, September 16th.

Same as the book's launch day: Tuesday, September 16th at 2:00pm Pacific, 5:00pm Eastern.

Pick up your copy of the book that morning. Don't come to the party without it.
We're gonna have a bouncer stationed at the door, and if you don't show him that you've got a copy of "If Anyone Builds It, Everyone Dies," he's gonna give you a big thumbs down.

BUY THE BOOK!!!

In all seriousness though, please support the book if you like Doom Debates. If you feel like you've gotten some value out of the show and you wanna give back a little bit, that is my ask. Head over to ifanyonebuildsit.com and buy the book from their links there. Go to Amazon, Barnes and Noble, wherever you normally buy books, just buy the damn thing. It's $14.99 on Kindle. It's not gonna break the bank.

Then spread the word. Tell your friends and family, tell your coworkers at the office. Try to get a few more copies sold. We don't have another book launch coming, guys. This is it.

This is our chance to take a little bit of action when it can actually move the needle and help. If you've been procrastinating this whole time, you gotta stop. You gotta go buy it now because the New York Times is gonna be checking this week.

This is the last week of pre-orders. You really want to give it that launch bump. Don't try to drag it out after launch week. Time is of the essence.

The Doom Debates Mission

Ultimately that's why I do this show. This isn't just entertainment for smart people. There is actually an important mission. We're trying to optimize the mission here. Help me out.

Or at the very least, help high-quality discourse, because a lot of people across the spectrum agree this is a high-quality book contributing to the discourse, and we need more books like it.

Thanks again for being with me on this journey to lower P(Doom) by convincing the average person that AI is urgently life-threatening to them and their loved ones. It's really important work.

See you all on Monday 9am PT at the Eliezer Yudkowsky interview (details coming soon), and Tuesday 2pm PT at the launch party (event link)!

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Louis Berman is a polymath who brings unique credibility to AI doom discussions. He's been coding AI for 25 years, served as CTO of major tech companies, recorded the first visual sighting of what became the dwarf planet Eris, and has now pivoted to full-time AI risk activism. He's lobbied over 60 politicians across multiple countries for PauseAI and authored two books on existential risk.

Louis and I are both baffled by the calm, measured tone that dominates AI safety discourse. As Louis puts it: "No one is dealing with this with emotions. No one is dealing with this as, oh my God, if they're right. Isn't that the scariest thing you've ever heard about?"

Louis isn't just talking – he's acting on his beliefs. He just bought a "bug out house" in rural Maryland, though he's refreshingly honest that this isn't about long-term survival. He expects AI doom to unfold over months or years rather than Eliezer's instant scenario, and he's trying to buy his family weeks of additional time while avoiding starvation during societal collapse.

He's spent extensive time in congressional offices and has concrete advice about lobbying techniques. His key insight: politicians' staffers consistently claim "if just five people called about AGI, it would move the needle". We need more people like Louis!

Timestamps
* 00:00:00 - Cold Open: The Missing Emotional Response
* 00:00:31 - Introducing Louis Berman: Polymath Background and Donor Disclosure
* 00:03:40 - The Anodyne Reaction: Why No One Seems Scared
* 00:07:37 - P-Doom Calibration: Gary Marcus and the 1% Problem
* 00:11:57 - The Bug Out House: Prepping for Slow Doom
* 00:13:44 - Being Amazed by LLMs While Fearing ASI
* 00:18:41 - What’s Your P(Doom)™
* 00:25:42 - Bayesian Reasoning vs. Heart of Hearts Beliefs
* 00:32:10 - Non-Doom Scenarios and International Coordination
* 00:40:00 - The Missing Mood: Where's the Emotional Response?
* 00:44:17 - Prepping Philosophy: Buying Weeks, Not Years
* 00:52:35 - Doom Scenarios: Slow Takeover vs. Instant Death
* 01:00:43 - Practical Activism: Lobbying Politicians and Concrete Actions
* 01:16:44 - Where to Find Louis's Books and Final Wrap-up
* 01:18:17 - Outro: Super Fans and Mission Partners

Links
Louis’s website — https://xriskbooks.com — Buy his books!
ControlAI’s form to easily contact your representative and make a difference — https://controlai.com/take-action/usa — Highly recommended!
Louis’s interview about activism with John Sherman and Felix De Simone — https://www.youtube.com/watch?v=Djd2n4cufTM
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

Become a Mission Partner!
Want to meaningfully help the show’s mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I’ll invite you to the private Discord channel. Email me at liron@doomdebates.com if you have questions or want to donate crypto.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Rob Miles is the most popular AI safety educator on YouTube, with millions of views across his videos explaining AI alignment to general audiences. He dropped out of his PhD in 2011 to focus entirely on AI safety communication – a prescient career pivot that positioned him as one of the field's most trusted voices over a decade before ChatGPT made AI risk mainstream.

Rob sits firmly in the 10-90% P(Doom) range, though he admits his uncertainty is "hugely variable" and depends heavily on how humanity responds to the challenge. What makes Rob particularly compelling is the contrast between his characteristic British calm and his deeply serious assessment of our situation. He's the type of person who can explain existential risk with the measured tone of a nature documentarian while internally believing we're probably headed toward catastrophe.

Rob has identified several underappreciated problems, particularly around alignment stability under self-modification. He argues that even if we align current AI systems, there's no guarantee their successors will inherit those values – a discontinuity problem that most safety work ignores. He's also highlighted the "missing mood" in AI discourse, where people discuss potential human extinction with the emotional register of an academic conference rather than an emergency.

We explore Rob's mainline doom scenario involving recursive self-improvement, why he thinks there's enormous headroom above human intelligence, and his views on everything from warning shots to the Malthusian dynamics that might govern a post-AGI world. Rob makes a fascinating case that we may be the "least intelligent species capable of technological civilization" – which has profound implications for what smarter systems might achieve.

Our key disagreement centers on strategy: Rob thinks some safety-minded people should work inside AI companies to influence them from within, while I argue this enables "tractability washing" that makes the companies look responsible while they race toward potentially catastrophic capabilities. Rob sees it as necessary harm reduction; I see it as providing legitimacy to fundamentally reckless enterprises.

The conversation also tackles a meta-question about communication strategy. Rob acknowledges that his measured, analytical approach might be missing something crucial – that perhaps someone needs to be "running around screaming" to convey the appropriate emotional urgency.
It's a revealing moment from someone who's spent over a decade trying to wake people up to humanity's most important challenge, only to watch the world continue treating it as an interesting intellectual puzzle rather than an existential emergency.

Timestamps
* 00:00:00 - Cold Open
* 00:00:28 - Introducing Rob Miles
* 00:01:42 - Rob's Background and Childhood
* 00:02:05 - Being Aspie
* 00:04:50 - Less Wrong Community and "Normies"
* 00:06:24 - Chesterton's Fence and Cassava Root
* 00:09:30 - Transition to AI Safety Research
* 00:11:52 - Discovering Communication Skills
* 00:15:36 - YouTube Success and Channel Growth
* 00:16:46 - Current Focus: Technical vs Political
* 00:18:50 - Nuclear Near-Misses and Y2K
* 00:21:55 - What’s Your P(Doom)™
* 00:27:31 - Uncertainty About Human Response
* 00:31:04 - Views on Yudkowsky and AI Risk Arguments
* 00:42:07 - Mainline Catastrophe Scenario
* 00:47:32 - Headroom Above Human Intelligence
* 00:54:58 - Detailed Doom Scenario
* 01:01:07 - Self-Modification and Alignment Stability
* 01:17:26 - Warning Shots Problem
* 01:20:28 - Moving the Overton Window
* 01:25:59 - Protests and Political Action
* 01:33:02 - The Missing Mood Problem
* 01:40:28 - Raising Society's Temperature
* 01:44:25 - "If Anyone Builds It, Everyone Dies"
* 01:51:05 - Technical Alignment Work
* 01:52:00 - Working Inside AI Companies
* 01:57:38 - Tractability Washing at AI Companies
* 02:05:44 - Closing Thoughts
* 02:08:21 - How to Support Doom Debates: Become a Mission Partner

Links
Rob’s YouTube channel — https://www.youtube.com/@RobertMilesAI
Rob’s Twitter — https://x.com/robertskmiles
Rational Animations (another great YouTube channel, narrated by Rob) — https://www.youtube.com/RationalAnimations

Become a Mission Partner!
Want to meaningfully help the show’s mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I’ll invite you to the private Discord channel. Email me at liron@doomdebates.com if you have questions or want to donate crypto.

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk.

He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.

Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).

We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.

The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.

Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.

Timestamps
* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What’s Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron’s outro

Links
Vitalik’s website: https://vitalik.eth.limo
Vitalik’s Twitter: https://x.com/vitalikbuterin
Eliezer Yudkowsky’s explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

—

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Today I’m sharing my interview on Robert Wright’s Nonzero Podcast from last May. Rob is an especially sharp interviewer who doesn't just nod along; he had great probing questions for me.

This interview happened right after Ilya Sutskever and Jan Leike resigned from OpenAI in May 2024, continuing a pattern that goes back to Dario Amodei leaving to start Anthropic. These aren't fringe doomers; these are the people hired specifically to solve the safety problem, and they keep concluding it's not solvable at the current pace.

Timestamps
00:00:00 - Liron’s preface
00:02:10 - Robert Wright introduces Liron
00:04:02 - PauseAI protests at OpenAI headquarters
00:05:15 - OpenAI resignations (Ilya Sutskever, Jan Leike, Dario Amodei, Paul Christiano, Daniel Kokotajlo)
00:15:30 - P vs NP problem as analogy for AI alignment difficulty
00:22:31 - AI pause movement and protest turnout
00:29:02 - Defining AI doom and sci-fi scenarios
00:32:05 - What’s My P(Doom)™
00:35:18 - Fast vs slow AI takeoff and Sam Altman's position
00:42:33 - Paperclip thought experiment and instrumental convergence explanation
00:54:40 - Concrete examples of AI power-seeking behavior (business assistant scenario)
01:00:58 - GPT-4 TaskRabbit deception example and AI reasoning capabilities
01:09:00 - AI alignment challenges and human values discussion
01:17:33 - Wrap-up and transition to premium subscriber content

Show Notes
This episode on Rob’s Nonzero Newsletter. You can subscribe for premium access to the last 1 hour of our discussion! — https://www.nonzero.org/p/in-defense-of-ai-doomerism-robert
This episode on Rob’s YouTube — https://www.youtube.com/watch?v=VihA_-8kBNg
PauseAI — https://pauseai.info
PauseAI US — http://pauseai-us.org

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Dr. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

Steve has a whopping 90% P(Doom), but unlike most AI safety researchers who focus on current LLMs, he argues that LLMs will plateau before becoming truly dangerous, and the real threat will come from next-generation "brain-like AGI" based on actor-critic reinforcement learning.

For the last five years, he's been diving deep into neuroscience to reverse engineer how human brains actually work, and how to use that knowledge to solve the technical AI alignment problem. He's one of the few people who both understands why alignment is hard and is taking a serious technical shot at solving it.

We cover his "two subsystems" model of the brain, why current AI safety approaches miss the mark, his disagreements with social evolution approaches, and why understanding human neuroscience matters for building aligned AGI.

Timestamps
* 00:00:00 - Cold Open: Solving the technical alignment problem
* 00:00:26 - Introducing Dr. Steven Byrnes and his impressive background
* 00:01:59 - Steve's unique mental strengths
* 00:04:08 - The cold fusion research story demonstrating Steve's approach
* 00:06:18 - How Steve got interested in neuroscience through Jeff Hawkins
* 00:08:18 - Jeff Hawkins' cortical uniformity theory and brain vs deep learning
* 00:11:45 - When Steve first encountered Eliezer's sequences and became AGI-pilled
* 00:15:11 - Steve's research direction: reverse engineering human social instincts
* 00:21:47 - Four visions of alignment success and Steve's preferred approach
* 00:29:00 - The two brain subsystems model: steering brain vs learning brain
* 00:35:30 - Brain volume breakdown and the learning vs steering distinction
* 00:38:43 - Cerebellum as the "LLM" of the brain doing predictive learning
* 00:46:44 - Language acquisition: Chomsky vs learning algorithms debate
* 00:54:13 - What LLMs fundamentally can't do: complex context limitations
* 01:07:17 - Hypothalamus and brainstem doing more than just homeostasis
* 01:13:45 - Why morality might just be another hypothalamus cell group
* 01:18:00 - Human social instincts as model-based reinforcement learning
* 01:22:47 - Actor-critic reinforcement learning mapped to brain regions
* 01:29:33 - Timeline predictions: when brain-like AGI might arrive
* 01:38:28 - Why humans still beat AI on strategic planning and domain expertise
* 01:47:27 - Inner vs outer alignment: cocaine example and reward prediction
* 01:55:13 - Why legible Python code beats learned reward models
* 02:00:45 - Outcome pumps, instrumental convergence, and the Stalin analogy
* 02:11:48 - What’s Your P(Doom)™
* 02:16:45 - Massive headroom above human intelligence
* 02:20:45 - Can AI take over without physical actuators? (Yes)
* 02:26:18 - Steve's bold claim: 30 person-years from proto-AGI to superintelligence
* 02:32:17 - Why overhang makes the transition incredibly dangerous
* 02:35:00 - Social evolution as alignment solution: why it won't work
* 02:46:47 - Steve's research program: legible reward functions vs RLHF
* 02:59:52 - AI policy discussion: why Steven is skeptical of pause AI
* 03:05:51 - Lightning round: offense vs defense, P(simulation), AI unemployment
* 03:12:42 - Thanking Steve and wrapping up the conversation
* 03:13:30 - Liron's outro: Supporting the show and upcoming episodes with Vitalik and Eliezer

Show Notes
* Steven Byrnes' Website & Research — https://sjbyrnes.com/
* Steve’s Twitter — https://x.com/steve47285
* Astera Institute — https://astera.org/

Steve’s Sequences
* Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
* Foom & Doom 1: “Brain in a box in a basement” — https://www.alignmentforum.org/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement
* Foom & Doom 2: Technical alignment is hard — https://www.alignmentforum.org/posts/bnnKGSCHJghAvqPjS/foom-and-doom-2-technical-alignment-is-hard

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe
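The episode repeatedly contrasts “legible” hand-written reward functions with learned reward models (RLHF-style). Here’s a toy Python sketch of that distinction; it is not taken from Byrnes’ work, and every name and number in it is hypothetical:

```python
# Toy illustration of the "legible reward function vs. learned reward model"
# distinction discussed in the episode. Not code from Byrnes' research;
# all names and numbers here are hypothetical.

def legible_reward(state: dict) -> float:
    """Hand-written reward: every term is human-auditable."""
    return 1.0 * state["task_progress"] - 10.0 * state["harm_caused"]

class LearnedRewardModel:
    """Stand-in for an RLHF-style reward model: its judgments live in
    opaque learned weights rather than in inspectable rules."""
    def __init__(self, weights: list[float]):
        self.weights = weights  # fit to human feedback; not directly auditable

    def score(self, state_features: list[float]) -> float:
        return sum(w * x for w, x in zip(self.weights, state_features))

# Both assign a score, but only the first can be read line-by-line and audited.
print(legible_reward({"task_progress": 0.8, "harm_caused": 0.0}))  # 0.8
print(LearnedRewardModel([0.5, -1.2]).score([0.8, 0.0]))           # 0.4
```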