Doom Debates
Author: Liron Shapira
© Liron Shapira
Description
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.
lironshapira.substack.com
51 Episodes
This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom. Cross-posted from their channel with permission.
Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI’s Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0:38:32 The Shoggoth Meme
0:41:26 Possible Scenarios Leading to Extinction
0:50:01 The Only Solution: A Pause in AI Research?
0:59:15 The Risk of Violence from AI Risk Fundamentalists
1:01:18 What Will General AI Look Like?
1:05:43 Sci-Fi Works About AI
1:09:21 The Rationale Behind Cryonics
1:12:55 What Does a Positive Future Look Like?
1:15:52 Are We Living in a Simulation?
1:18:11 Many Worlds in Quantum Mechanics Interpretation
1:20:25 Ideal Future Podcast Guest for Doom Debates

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.
I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.

00:00 Introduction
02:43 Roon’s Quest and Philosophies
22:32 AI Creativity
30:42 What’s Your P(Doom)™
54:40 AI Alignment
57:24 Training vs. Production
01:05:37 ASI
01:14:35 Goal-Oriented AI and Instrumental Convergence
01:22:43 Pausing AI
01:25:58 Crux of Disagreement
1:27:55 Dogecoin
01:29:13 Doom Debates’s Mission

Show Notes
Follow Roon: https://x.com/tszzl
For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of — https://pauseai.info/
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.
Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making complex insights from his field accessible to a wider readership via his blog.
Scott is one of my biggest intellectual influences. His famous Who Can Name The Bigger Number essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.
Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.
Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro

Show Notes
Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
Scott’s Blog: https://scottaaronson.blog
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.
Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.
The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts

Show Notes
Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
Rao’s Twitter: https://x.com/rao2z
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).

00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip

Show Notes
Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, and a popular YouTuber about all things space. And guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

00:00 Fraser Cain’s Background and Interests
5:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts

Show Notes
Fraser’s YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
Robin Hanson’s ideas:
Grabby Aliens: https://grabbyaliens.com
The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
---
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we’re going straight to debating my favorite topic, AI doom.

00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video

Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.
Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts
---
Show Notes
Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
Dr. Critch’s Website: https://acritch.com/
Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
It’s time for AI Twitter Beefs #2:

00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron

Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf

Show Notes
Holly Elmore’s EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info - join the Discord and find me in the #doom-debates channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.
I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.
We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.
The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.

00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
2:45:54 Closing Thoughts

Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference
Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
Vaden’s referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Sources for claim that superforecasters gave a P(doom) below 1%:
https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/
https://www.astralcodexten.com/p/the-extinction-tournament
Vaden’s Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.
If you haven't been following all the urgent warnings, I'm here to bring you up to speed.
* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action
Listen to this 15-minute intro to get the lay of the land.
Then follow these links to learn more and see how you can help:
* The Compendium
A longer written introduction to AI doom by Connor Leahy et al
* AGI Ruin — A list of lethalities
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info
A catalogue of AI doom arguments and responses to objections
* PauseAI.info
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
* PauseAI Discord
Chat with PauseAI members, see a list of projects and get involved
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.
Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts

Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.
I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.
If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up

Sources of this episode’s video clips:
Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w

Show notes:
Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY
Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.
Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.

00:00 Introduction
01:49 AI is “Normal Technology”?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI's Understanding
35:56 Human vs AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts

Show Notes
The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY
Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL
Arvind’s Substack: https://aisnakeoil.com
Arvind’s Twitter: https://x.com/random_walker
Robert Wright’s Twitter: https://x.com/robertwrighter
Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com
Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things
My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!
First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.
Keith was a great sport and I think this episode is a classic!

00:00 Introduction
00:46 Keith’s Background
03:02 Keith’s P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need >1MB of Context?
27:02 Examples to Illustrate Keith’s Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
1:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
1:08:52 The 100-Year View
1:18:29 Intelligence vs. Optimization Power
1:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
1:33:16 Can We Just Build Narrow AI?
1:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts

Show Notes
Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk
Keith’s Twitter: https://x.com/doctorduggar
Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684
Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.
📣 You can now chat with me and other listeners in the #doom-debates channel of the PauseAI discord: https://discord.gg/2XXWXvErfA
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.
Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.
Is civil disobedience the right strategy to pause or stop AI?

00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.
StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4
Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community
---
Show Notes
PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209
---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux
https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments
The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments
Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose — https://optimists.ai
Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson
Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts

Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition

Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?
Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com