Doom Debates

Author: Liron Shapira


Description

Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.

lironshapira.substack.com
46 Episodes
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

00:00 Fraser Cain’s Background and Interests
05:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts

SHOW NOTES
Fraser’s YouTube channel: https://www.youtube.com/@frasercain
Universe Today (space and astronomy news): https://www.universetoday.com/
Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
Robin Hanson’s ideas:
* Grabby Aliens: https://grabbyaliens.com
* The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
* Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for Part II! This time we’re going straight to debating my favorite topic: AI doom.

00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video

Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Recommended playlists from their podcast:
* The Bayesian vs Popperian Epistemology Series
* The Conjectures and Refutations Series
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human-Compatible AI, and the co-founder of a new startup called Healthcare Agents.

Dr. Critch’s P(doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch’s Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts

Show Notes
Dr. Critch’s LessWrong post explaining his P(doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
Dr. Critch’s Website: https://acritch.com/
Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD
It’s time for AI Twitter Beefs #2!

00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron

Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: https://x.com/GeoffreyHinton
* Samuel Hammond: https://x.com/hamandcheese
* Yann LeCun: https://x.com/ylecun
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Roon: https://x.com/tszzl
* Beff Jezos: https://x.com/basedbeffjezos
* Carl Feynman: https://x.com/carl_feynman
* Tyler Cowen: https://x.com/tylercowen
* David Deutsch: https://x.com/DavidDeutschOxf

Show Notes
Holly Elmore’s EA forum post about scouts vs. soldiers
Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2
PauseAI.info — join the Discord and find me in the #doom-debates channel!
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.

I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.

We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.

The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.

00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts

Show Notes
Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg
Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference
Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper
Vaden’s blog post on Cox’s Theorem and Yudkowsky’s claims of “Laws of Rationality”: https://vmasrani.github.io/blog/2021/the_credence_assumption/
Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
Vaden’s referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Sources for the claim that superforecasters gave a P(doom) below 1%:
* https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/
* https://www.astralcodexten.com/p/the-extinction-tournament
Vaden’s Slides on Content vs. Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade. If you haven’t been following all the urgent warnings, I’m here to bring you up to speed:

* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land. Then follow these links to learn more and see how you can help:

* The Compendium: a longer written introduction to AI doom by Connor Leahy et al.
* AGI Ruin: A List of Lethalities: a comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
* AISafety.info: a catalogue of AI doom arguments and responses to objections
* PauseAI.info: the largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
* PauseAI Discord: chat with PauseAI members, see a list of projects, and get involved
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.

Today we’re debating Lee’s claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts

Show Notes
Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good. I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.

If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up

Sources of this episode’s video clips:
Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo
Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg
Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer
George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA
Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s
John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w

Show Notes
Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove
1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash
1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY

Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.

00:00 Introduction
01:49 AI is “Normal Technology”?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI’s Understanding
35:56 Human vs. AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts

Show Notes
The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY
Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL
Arvind’s Substack: https://aisnakeoil.com
Arvind’s Twitter: https://x.com/random_walker
Robert Wright’s Twitter: https://x.com/robertwrighter
Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com
Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things
My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!

First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding the limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.

Keith was a great sport and I think this episode is a classic!

00:00 Introduction
00:46 Keith’s Background
03:02 Keith’s P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need >1MB of Context?
27:02 Examples to Illustrate Keith’s Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
01:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
01:08:52 The 100-Year View
01:18:29 Intelligence vs. Optimization Power
01:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
01:33:16 Can We Just Build Narrow AI?
01:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts

Show Notes
Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk
Keith’s Twitter: https://x.com/doctorduggar
Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684
Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/

📣 You can now chat with me and other listeners in the #doom-debates channel of the PauseAI Discord: https://discord.gg/2XXWXvErfA
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience. Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested. Is civil disobedience the right strategy to pause or stop AI?

00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI’s next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.
StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4
Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.
PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There’s also a special #doom-debates channel in the PauseAI Discord just for us :)
This episode is a continuation of Q&A #1 Part 1, where I answer YOUR questions!

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community

Show Notes
PauseAI Discord: https://discord.gg/2XXWXvErfA
Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com
Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0
My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi
^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.
a16z’s Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger’s
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments
* The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
* LethalIntelligence.ai — Directory of people who are good at explaining doom
* Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
* For Humanity Podcast with John Sherman: https://www.youtube.com/@ForHumanityPodcast
* PauseAI community — https://PauseAI.info — join the Discord!
* AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments
* Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
* Quintin Pope and Nora Belrose — https://optimists.ai
* Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson
* Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
* Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I
In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MLST reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal’s Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts

Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition

Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs’ Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz’s authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction.
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.

Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.

00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI’s Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts

Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett
It’s finally here, the Doom Debates / Dr. Phil crossover episode you’ve all been asking for 😂 The full episode is called “AI: The Future of Education?”

While the main focus was AI in education, I’m glad the show briefly touched on how we’re all gonna die. Everything in the show related to AI extinction is clipped here.
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!

00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI’s Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life

This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain, which are beyond mathematical modeling.

Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades. He’s also a devout Christian, which makes our clash of perspectives funnier.

00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion

Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. His critiques aim to highlight the gap between what AI can realistically achieve and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI’s Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion

Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings