Doom Debates

Author: Liron Shapira


Description

AI debates that must be resolved before the world ends
32 Episodes
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.

Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.

00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts

Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂 The full episode is called “AI: The Future of Education?”

While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!

00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life

This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain, which he argues are beyond mathematical modeling.

Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.

He’s also a devout Christian, which makes our clash of perspectives funnier.

00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion

Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion

Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.

Bret gets off to a promising start, saying that AI risk should be “top of mind” and poses “five existential threats”. But his analysis is shallow and ad hoc, and ends with him dismissing the idea of trying to use regulation as a tool to save our species from a recognized existential threat.

I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.

00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts

Original source: youtube.com/watch?v=_cFu-b5lTMU
Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.

Today’s debate:
Holly Elmore, Executive Director of Pause AI US, representing Pro-SB 1047
Greg Tanaka, Palo Alto City Councilmember, representing Anti-SB 1047

Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.
Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz

Links

Greg mentioned that the "Supporters & Opponents" tab on this page lists organizations who registered their support and opposition. The vast majority of organizations listed there registered in opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047

Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/

Follow Holly: x.com/ilex_ulmus
Follow Greg: x.com/GregTanaka

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Maciej Ceglowski is an entrepreneur and owner of the bookmarking site Pinboard. I’ve been a long-time fan of his sharp, independent-minded blog posts and tweets.

In this episode, I react to a great 2016 talk he gave at WebCamp Zagreb titled Superintelligence: The Idea That Eats Smart People. This talk was impressively ahead of its time, as the AI doom debate really only heated up in the last few years.

---

00:00 Introduction
02:13 Historical Analogies and AI Risks
05:57 The Premises of AI Doom
08:25 Mind Design Space and AI Optimization
15:58 Recursive Self-Improvement and AI
39:44 Arguments Against Superintelligence
45:20 Mental Complexity and AI Motivations
47:12 The Argument from Just Look Around You
49:27 The Argument from Life Experience
50:56 The Argument from Brain Surgery
53:57 The Argument from Childhood
58:10 The Argument from Robinson Crusoe
01:00:17 Inside vs. Outside Arguments
01:06:45 Transhuman Voodoo and Religion 2.0
01:11:24 Simulation Fever
01:18:00 AI Cosplay and Ethical Concerns
01:28:51 Concluding Thoughts and Call to Action

---

Follow Maciej: x.com/pinboard

Follow Doom Debates:
* youtube.com/@DoomDebates
* DoomDebates.com
* x.com/liron
* Search “Doom Debates” in your podcast player

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Today I’m reacting to David Shapiro’s latest YouTube video: “Pausing AI is a spectacularly bad idea―Here's why”.

In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.

We’ll go through David’s argument point by point, to see whether he makes any good points about why pausing AI might actually be a bad idea.

00:00 Introduction
01:16 The Pause AI Movement
03:03 Eliezer Yudkowsky’s Epistemology
12:56 Rationalist Arguments and Evidence
24:03 Public Awareness and Legislative Efforts
28:38 The Burden of Proof in AI Safety
31:02 Arguments Against the AI Pause Movement
34:20 Nuclear Proliferation vs. AI
34:48 Game Theory and AI
36:31 Opportunity Costs of an AI Pause
44:18 Axiomatic Alignment
47:34 Regulatory Capture and Corporate Interests
56:24 The Growing Mainstream Concern for AI Safety

Follow David:
* youtube.com/@DaveShap
* x.com/DaveShapi

Follow Doom Debates:
* DoomDebates.com
* youtube.com/@DoomDebates
* x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
John Sherman and I go through David Brooks’s appallingly bad article in the New York Times titled “Many People Fear AI. They Shouldn’t.”

For Humanity is basically the sister podcast to Doom Debates. We have the same mission to raise awareness of the urgent AI extinction threat, and build grassroots support for pausing new AI capabilities development until it’s safe for humanity.

Subscribe to it on YouTube: https://www.youtube.com/@ForHumanityPodcast
Follow it on X: https://x.com/ForHumanityPod

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta known for his pioneering work on reinforcement learning, and his “bitter lesson” that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.

Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.

Let’s examine Sutton’s recent interview with Daniel Faggella to understand his crux of disagreement with the AI doom position.

---

00:00 Introduction
03:33 The Worthy vs. Unworthy AI Successor
04:52 “Peaceful AI”
07:54 “Decentralization”
11:57 AI and Human Cooperation
14:54 Micromanagement vs. Decentralization
24:28 Discovering Our Place in the World
33:45 Standard Transhumanism
44:29 AI Traits and Environmental Influence
46:06 The Importance of Cooperation
48:41 The Risk of Superintelligent AI
57:25 The Treacherous Turn and AI Safety
01:04:28 The Debate on AI Control
01:13:50 The Urgency of AI Regulation
01:21:41 Final Thoughts and Call to Action

---

Original interview with Daniel Faggella: youtube.com/watch?v=fRzL5Mt0c8A
Follow Richard Sutton: x.com/richardssutton
Follow Daniel Faggella: x.com/danfaggella
Follow Liron: x.com/liron

Subscribe to my YouTube channel for full episodes and other bonus content: youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
David Pinsof is co-creator of the wildly popular Cards Against Humanity and a social science researcher at UCLA Social Minds Lab. He writes a blog called “Everything Is B******t”.

He sees AI doomers as making many different questionable assumptions, and he sees himself as poking holes in those assumptions. I don’t see it that way at all; I think the doom claim is the “default expectation” we ought to have if we understand basic things about intelligence.

At any rate, I think you’ll agree that his attempt to poke holes in my doom claims on today’s podcast is super good-natured and interesting.

00:00 Introducing David Pinsof
04:12 David’s P(doom)
05:38 Is intelligence one thing?
21:14 Humans vs. other animals
37:01 The Evolution of Human Intelligence
37:25 Instrumental Convergence
39:05 General Intelligence and Physics
40:25 The Blind Watchmaker Analogy
47:41 Instrumental Convergence
01:02:23 Superintelligence and Economic Models
01:12:42 Comparative Advantage and AI
01:19:53 The Fermi Paradox for Animal Intelligence
01:34:57 Closing Statements

Follow David: x.com/DavidPinsof
Follow Liron: x.com/liron

Thanks for watching. You can support Doom Debates by subscribing to the Substack, the YouTube channel (full episodes and bonus content), subscribing in your podcast player, and leaving a review on Apple Podcasts.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.

00:00 Introduction
03:40 Bayesian Reasoning
04:33 Inductive vs. Deductive Probability
05:49 Frequentism vs Bayesianism
16:14 Asteroid Impact and AI Risk Comparison
28:06 Quantification Bias
31:50 The Extinction Prediction Tournament
36:14 Pascal's Wager and AI Risk
40:50 Scaling Laws and AI Progress
45:12 Final Thoughts

My source material is Sayash's episode of Machine Learning Street Talk: https://www.youtube.com/watch?v=BGvQmHd4QPE
I also recommend reading Scott Alexander’s related post: https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
Sayash's blogpost that he was being interviewed about is called "AI existential risk probabilities are too unreliable to inform policy": https://www.aisnakeoil.com/p/ai-existential-risk-probabilities
Follow Sayash: https://x.com/sayashk

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
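A quick aside on the Bayesian framing this episode leans on: in Bayesian epistemology, a probability like P(doom) is a degree of belief that gets updated as evidence arrives, not a long-run frequency from repeated trials. Here's a minimal sketch of that update rule, with made-up numbers purely for illustration; none of this code or its figures comes from the episode or the blog post.

```python
# Minimal Bayesian update: P(H | E) from a prior and two likelihoods.
# All numbers below are illustrative placeholders, not claims from the episode.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of a hypothesis given one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Example: start at a 10% credence, then observe evidence that is 3x more
# likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"{posterior:.2f}")  # 0.25 -- the credence rises, no repeated trials required
```

The point of the sketch is just that a subjective probability is well-defined and updatable even for one-off events like an extinction scenario, which is the crux the episode keeps returning to.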
Martin Casado is a General Partner at Andreessen Horowitz (a16z) who has strong views about AI.

He claims that AI is basically just a buzzword for statistical models and simulations. As a result of this worldview, he only predicts incremental AI progress that doesn’t pose an existential threat to humanity, and he sees AI regulation as a net negative.

I set out to understand his worldview around AI, and pinpoint the crux of disagreement with my own view.

Spoiler: I conclude that Martin needs to go beyond analyzing AI as just statistical models and simulations, and analyze it using the more predictive concept of “intelligence” in the sense of hitting tiny high-value targets in exponentially-large search spaces.

If Martin appreciated that intelligence is a quantifiable property that algorithms have, and that our existing AIs are getting close to surpassing human-level general intelligence, then hopefully he’d come around to raising his P(doom) and appreciating the urgent extinction risk we face.

00:00 Introducing Martin Casado
01:42 Martin’s AGI Timeline
05:39 Martin’s Analysis of Self-Driving Cars
15:30 Heavy-Tail Distributions
38:03 Understanding General Intelligence
38:29 AI's Progress in Specific Domains
43:20 AI’s Understanding of Meaning
47:16 Compression and Intelligence
48:09 Symbol Grounding
53:24 Human Abstractions and AI
01:18:18 The Frontier of AI Applications
01:23:04 Human vs. AI: Concept Creation and Reasoning
01:25:51 The Complexity of the Universe and AI's Limitations
01:28:16 AI's Potential in Biology and Simulation
01:32:40 The Essence of Intelligence and Creativity in AI
01:41:13 AI's Future Capabilities
02:00:29 Intelligence vs. Simulation
02:14:59 AI Regulation
02:23:05 Concluding Thoughts

Watch the original episode of the Cognitive Revolution podcast with Martin and host Nathan Labenz.

Follow Martin: @martin_casado
Follow Nate: @labenz
Follow Liron: @liron

Subscribe to the Doom Debates YouTube Channel to get full episodes plus other bonus content!

Search “Doom Debates” to subscribe in your podcast player.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
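To unpack the phrase “hitting tiny high-value targets in exponentially-large search spaces”: one rough way to make that quantitative is to count how many times you would have to halve the search space to land inside the target, i.e. log2 of the target's share of the space. This is a back-of-the-envelope sketch with hypothetical sizes, not a calculation from the episode:

```python
import math

# "Bits of optimization": how many halvings of a search space it takes to
# narrow down to a target set of outcomes. The sizes below are hypothetical,
# chosen only to show the scale involved -- not figures from the episode.

def optimization_bits(search_space_size: float, target_set_size: float) -> float:
    """log2(search_space / target) = bits needed to single out the target."""
    return math.log2(search_space_size / target_set_size)

# E.g. if a planning problem has ~10^200 candidate action sequences and only
# ~10^150 of them count as "high-value", hitting the target takes ~166 bits.
print(round(optimization_bits(1e200, 1e150)))  # -> 166
```

On this framing, “more intelligent” just means reliably steering outcomes into exponentially smaller high-value regions, which is why the statistical-model framing alone undersells what such a system can do.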
Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding his own international recruiting company for software engineers, Outtalent.

Since first encountering the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago, he has considered it a serious possibility, but he doesn’t currently feel convinced that doom is likely.

Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!

00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
13:49 Nuclear Weapons and AI Analogies
39:22 Privacy and Empathy in Human-AI Interaction
39:43 AI's Potential in Understanding Human Emotions
41:14 The Debate on AI's Empathy Capabilities
42:23 Quantum Effects and AI's Predictive Models
45:33 The Complexity of AI Control and Safety
47:10 Optimization Power: AI vs. Human Intelligence
48:39 The Risks of AI Self-Replication and Control
51:52 Historical Analogies and AI Safety Concerns
56:35 The Challenge of Embedding Safety in AI Goals
01:02:42 The Future of AI: Control, Optimization, and Risks
01:15:54 The Fragility of Security Systems
01:16:56 Debating AI Optimization and Catastrophic Risks
01:18:34 The Outcome Pump Thought Experiment
01:19:46 Human Persuasion vs. AI Control
01:21:37 The Crux of Disagreement: Robustness of AI Goals
01:28:57 Slow vs. Fast AI Takeoff Scenarios
01:38:54 The Importance of AI Alignment
01:43:05 Conclusion

Follow Tilek
x.com/tilek

Links

I referenced Paul Christiano’s scenario of gradual AI doom, a slower version that doesn’t require a Yudkowskian “foom”. Worth a read: What Failure Looks Like

I also referenced the concept of “edge instantiation” to explain that if you’re optimizing powerfully for some metric, you don’t get other intuitively nice things as a bonus, you *just* get the exact thing your function is measuring.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including a whole recent episode on the AI alignment problem.

Mike brought up many points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
* instagram.com/drmikeisraetel
* youtube.com/@MikeIsraetelMakingProgress

Get the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
What did we learn from my debate with Robin Hanson? Did we successfully isolate the cruxes of disagreement? I actually think we did!

In this post-debate analysis, we’ll review what those key cruxes are, and why I still think I’m right and Robin is wrong about them!

I’ve taken the time to think much harder about everything Robin said during the debate, so I can give you new & better counterarguments than the ones I was able to make in real time.

Timestamps

00:00 Debate Reactions
06:08 AI Timelines and Key Metrics
08:30 “Optimization Power” vs. “Innovation”
11:49 Economic Growth and Diffusion
17:56 Predicting Future Trends
24:23 Crux of Disagreement with Robin’s Methodology
34:59 Conjunction Argument for Low P(Doom)
37:26 Headroom Above Human Intelligence
41:13 The Role of Culture in Human Intelligence
48:01 Goal-Completeness and AI Optimization
50:48 Misaligned Foom Scenario
59:29 Monitoring AI and the Rule of Law
01:04:51 How Robin Sees Alignment
01:09:08 Reflecting on the Debate

Links

AISafety.info - The fractal of counterarguments to non-doomers’ arguments

For the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences.

In 2008, he famously debated Eliezer Yudkowsky about AI doom via a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting key arguments in the light of recent AI advances.

My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.

00:00 Announcements
03:18 Debate Begins
05:41 Discussing AI Timelines and Predictions
19:54 Economic Growth and AI Impact
31:40 Outside Views vs. Inside Views on AI
46:22 Predicting Future Economic Growth
51:10 Historical Doubling Times and Future Projections
54:11 Human Brain Size and Economic Metrics
57:20 The Next Era of Innovation
01:07:41 AI and Future Predictions
01:14:24 The Vulnerable World Hypothesis
01:16:27 AI Foom
01:28:15 Genetics and Human Brain Evolution
01:29:24 The Role of Culture in Human Intelligence
01:31:36 Brain Size and Intelligence Debate
01:33:44 AI and Goal-Completeness
01:35:10 AI Optimization and Economic Impact
01:41:50 Feasibility of AI Alignment
01:55:21 AI Liability and Regulation
02:05:26 Final Thoughts and Wrap-Up

Robin's links:
Twitter: x.com/RobinHanson
Home Page: hanson.gmu.edu

Robin’s top related essays:
* What Are Reasonable AI Fears?
* AIs Will Be Our Mind Children

PauseAI links:
https://pauseai.info/
https://discord.gg/2XXWXvErfA

Check out https://youtube.com/@ForHumanityPodcast, the other podcast raising the alarm about AI extinction!

For the full Doom Debates experience:
* Subscribe to https://youtube.com/@DoomDebates
* Subscribe to the Substack: https://DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com