The Existential Hope Podcast
Author: Foresight Institute
© Foresight Institute
Description
The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely.
Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow’s breakthroughs—AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.
About Foresight Institute: For 40 years, the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is its North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.
Explore more:
- Transcript, listed resources, and more: https://www.existentialhope.com/podcasts
- Follow on X
Hosted on Acast. See acast.com/privacy for more information.
32 Episodes
We think a lot about how AI will affect humanity, and for good reason. But AI could have an enormous impact on the trillions of animals that share our world (for better or worse), and almost nobody is talking about it.

In this episode, we talk with Constance Li, founder of Sentient Futures, an organization working to make sure AI and other emerging technologies improve the lives of animals rather than harm them.

We touch on:
- The enormous scale of animal suffering today, and why AI could either worsen or improve it depending on the decisions we make.
- Using computer vision and sensors to monitor animals and optimize for their welfare rather than just productivity.
- The research that’s being done to use AI to communicate with animals, and what it’s already telling us about their well-being.
- Other sentient beings that could be impacted by emerging technologies, like artificial minds and biocomputing.

Timestamps:
0:00 Cold open
1:57 Why AI and animals is an overlooked combination
4:46 The staggering scale of factory farming
8:26 How a physician became an animal welfare advocate
10:19 What Sentient Futures does day-to-day
11:38 What "AI for animals" actually means
14:23 Why the organization was renamed Sentient Futures, and the question of AI moral patients
18:08 The biggest misconceptions about AI for animals
20:26 What is precision livestock farming?
24:46 Best- and worst-case scenarios for AI in farms
27:46 Communication across species: promise and limitations
35:56 Genetic welfare and using genetics in farms
43:34 What a best-case scenario for AI and animals looks like in the next 5–10 years
47:11 The biggest hurdles: funding and attention
48:39 How to get involved with Sentient Futures
50:44 What gives Constance hope
Having an AI boyfriend or girlfriend might seem creepy, but what if it helped you get better at human relationships?

In this episode, we talk with David Eagleman, a professor of neuroscience at Stanford, bestselling author, and science communicator. We discuss how AI and other technologies can help us become better humans – wiser, kinder, and more empathetic, not just more productive. We get a neuroscientist’s take on how human and artificial intelligence interact, including:
- How to use AI to better understand other people and improve our relationships.
- Using debate AIs in schools to make younger generations better at critical thinking and grasping both sides of an argument.
- Is AI making our lives too easy by removing the friction we need to learn?
- Technologies that could expand what’s possible with our brain, from mind uploading to brain-to-brain communication.

Timestamps:
0:00 Cold open
1:38 How David Eagleman became a neuroscientist
4:46 How malleable is the brain?
6:29 Can AI make us better humans? The Reddit debate bot experiment
11:00 AI relationships and becoming better at dating real people
14:24 Using AI to hear his late father's voice again
18:26 Mind uploading and digital immortality
23:27 What technology could make us more kind and empathetic
24:04 How AI could revolutionize debate education and critical thinking
28:30 Why AI needs a "tough love" mode to help us grow
30:17 Does AI making life easier rob us of useful friction for learning?
34:21 Why brain-to-brain communication probably won't help us understand each other
37:29 Could neurotechnology let us experience the world as another species?
41:58 The current state of neuroscience and where it's heading
48:05 How to get started if you're inspired by this conversation
What would the world look like if the poorest country were as rich as Switzerland is today? It turns out we could actually see this happen by 2100, with economic growth similar to what we have experienced over the past 20 years.

In this episode, we talk with Marc Canal, Senior Fellow at the McKinsey Global Institute and co-author of the book A Century of Plenty. We unpack what a hundred years of data tells us about human progress, and map out the steps to an ambitious scenario we can build by the end of the century.

We discuss:
- How much the world has actually changed since 1925: from one in five children dying before age five in Spain, to life expectancy growing by 40 years globally.
- What it would take to make today’s Swiss living standards the world’s floor by 2100 (while richer countries grow far beyond it), from energy efficiency to birth rates and geopolitics.
- How data shows economic growth is actually good for the climate and for human happiness.
- Why achieving a prosperous world currently depends more on our collective belief that progress is possible than on resource constraints.
- How you can thrive in an AI world, where 57% of work hours can be automated, by leaning into the “messy” jobs.

Timestamps:
0:00 - Cold open
1:54 - Why the McKinsey Global Institute wrote “A Century of Plenty”
5:20 - What was the world like in 1925?
10:04 - The most surprising stats from 100 years of progress
16:03 - Defining the “empowerment line” vs. the poverty line
19:30 - Projecting 2100: can we make Switzerland the global "floor"?
22:26 - The 5 conditions for achieving a world of plenty
26:14 - Can we grow the economy without sacrificing the environment?
28:23 - Economic growth vs. climate change: mitigation and adaptation
34:05 - What are the biggest challenges to the “progress machine”?
36:30 - The demographic crisis, and solving falling fertility rates
45:20 - Will AI speed up human innovation?
48:21 - Geopolitics: is the world really de-globalizing?
52:30 - The crisis of hope: why are we so pessimistic?
56:26 - How different nations reach the frontier of progress
58:49 - Building a new culture of growth
1:01:09 - Does economic progress actually make us happier?
1:05:39 - How you can help make a century of plenty probable
To make the future go well, we might not need a perfect model of its end state, or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?

In this episode, we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk and author of the book “Existential Hope”.

Some of the topics we discuss:
- How to shift our focus from "preventing the end of the world" to actively building a future worth living.
- Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire.
- Relying on our own sense of “the right thing to do” as a practical guide to make the world better.
- Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top.
- Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow.

Timestamps:
[01:31] SJ’s background in philosophy and existential risk
[02:02] Why write a book on existential hope?
[04:43] Defining existential hope, and its relationship with existential risks and existential anxiety
[11:09] Human agency without the guilt
[13:59] Why there are no truly "natural" disasters
[16:49] Why we shouldn’t try to build a perfect utopia
[19:05] Protopia: is iterative improvement enough?
[22:19] Defining progress: what does it mean to "get better"?
[26:13] Protopia vs. viatopia: setting goals and achieving a great future
[29:48] Existential safety as a collective project
[35:06] Using participatory tools to make global decisions
[36:32] Making existential hope reasonably demanding
[40:06] Can we achieve systemic change in a tech-focused world?
[46:00] Concrete socio-technical projects for AI safety
[49:02] Aligning AI by building its character
[51:45] The importance of history in building a good future
[54:24] Key 17th-century ideas that are shaping modern society
[58:20] Cultivating "humanity as a virtue"
[01:04:37] Lessons from nuclear near-misses: the example of Petrov
[01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making
[01:12:16] Literacy vs. orality: how ideas become simplified
[01:16:45] Meme culture and the transmission of deep context
[01:18:48] How writing the book changed SJ’s mind
[01:21:38] SJ Beard’s vision for existential hope
Most scientists do “safe” research to secure their next grant. But what if more of them worked on the most important problems instead?

In this episode, we talk with Anastasia Gamick, co-founder of Convergent Research, about how to raise our level of ambition for what science can actually achieve. Convergent Research incubates Focused Research Organizations: small, startup-style teams that build critical “public good” tech that both academia and for-profits ignore.

We discuss:
- What makes a research project truly high-impact in view of an AI world
- Concrete examples of these projects: maps of brain synapses, software that’s provably safe, drug screening, good data for AI-powered scientific research, and more
- How to prioritize defensive technology, such as biosafety tools, instead of just pushing every frontier as fast as possible
- How young scientists can find the work that matters most for the future

Timestamps:
[00:00] Cold open
[01:52] Introducing Anastasia Gamick and the mission of Convergent Research
[02:44] Defining Focused Research Organizations (FROs) and their unique characteristics
[09:46] Backcasting from 2075: what research to prioritize now to prepare for the intelligence age
[19:08] The four types of projects Convergent decides not to fund
[25:35] Biological and ecological dark matter: why we need better datasets for AI science
[28:28] Why academia and industry aren’t incentivized to build tech capabilities for the public good
[29:32] Defining “moonshot projects”: how boring drug screening creates massive downstream impact
[32:56] The future of neuroscience: capturing videos of synapses firing
[35:46] How the FRO model is catching on internationally
[36:25] Steering vs. accelerating: selecting defense-dominant technology
[41:22] Increasing human agency and how scientists can choose high-impact research areas
[46:51] The evolution of scientific funding and the role of new philanthropy
[48:05] Finding existential hope in the community of future-builders
Our fast-paced world isn’t spinning out of our control; we’re actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency, that is, our ability to choose our destiny rather than being subject to the whims of nature.

Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book exploring a philosophy of progress centered on human life and wellbeing. In our conversation, we dive into the core arguments of the manifesto:
- How we are more in control of our lives than ever before
- Why we should reframe the goal of “stopping climate change” as “controlling climate change” and work toward installing a “thermostat for the Earth”
- The value of nature and its interaction with humanity
- Allowing ourselves to celebrate human achievement and industrial civilization
- The concept of “solutionism”: a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
- Why two common fears around the slowing of progress – that we could run out of natural resources or new ideas – are actually unfounded
- The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
- How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction

Chapters:
[00:00] Cold open
[01:30] Intro: Jason Crawford and the Techno-Humanist Manifesto
[04:10] Defining progress as the expansion of human agency
[06:16] How to use our newfound agency to live a meaningful life
[10:07] Climate control: installing a “thermostat” for the Earth
[13:26] Anthropocentrism and the value of nature
[19:41] Ode to man: celebrating human achievement
[20:53] Solutionism: believing in our problem-solving abilities to tackle risks
[26:26] Why pessimism sounds smart but misses the solution space
[31:29] The myth of finite natural resources and the power of knowledge
[34:27] Why we are getting better at finding ideas faster than they get harder to find
[39:03] The Intelligence Age: a new mode of production
[41:19] Amplifying human agency in an AI-driven world
[43:09] Developing a healthy relationship with AI and attention
[46:28] The culture of progress and why we soured on the future
[50:10] Building the infrastructure for a global progress movement
[53:54] A 20-year vision for progress studies in the mainstream
[57:33] High-leverage regulations for progress: from nuclear to supersonic flight
[58:57] Jason Crawford’s existential hope vision
While dystopian fiction dominates our screens and bookshelves, Elle Griffin is busy researching how things might actually go right. She wanted to write a utopian novel and realized she needed a better understanding of what an ideal society could look like.

In our conversation, we discuss how her favorite utopian literature influenced her views on a well-designed society. But we also explore practical ideas on how we could improve our systems:
- Tax autonomy: Why giving states and cities the power to collect their own taxes would allow them to fund the specific services their citizens actually want.
- A la carte federations: A model where cities and states choose to join specific agreements, like a "fishing EU" or a "healthcare EU," instead of being forced into one large, centralized government that manages every aspect of life.
- The Mondragon model: What we can learn from a massive network of worker-owned cooperatives in Spain that provides its own unemployment insurance and university.
- Who should control AI: Why giving voting authority to the employees who write the code (rather than investors or nonprofit boards) might be the best way to prevent unethical shortcuts.
- Singapore’s land model: How the government acts as a landlord to fund public services, allowing for lower income taxes while still providing universal social support.
- Fixing the Internet: How to use personal data and AI to make us wiser, rather than letting algorithms push us toward fast fashion and political radicalization.

Chapters:
Cold open (00:00:00)
Introducing Elle Griffin (00:01:27)
How writing a novel turned into a research project (00:02:27)
Elle’s current work: From print pamphlets to "We Should Own the Economy" (00:04:21)
The setup of Elle’s upcoming utopian novel (00:05:06)
From gothic literature to utopian literature (00:06:30)
Three classic utopian novels and their recurring lessons (00:15:42)
Building a "future Asia" through mythology and technology (00:22:02)
What if US states had the same autonomy as EU countries? (00:23:49)
"A la carte" federalism: moving toward a modular government (00:28:11)
The Mondragon model: a blueprint for worker-owned economies (00:32:54)
Why the smallest government is the best government (00:36:18)
The global monoculture and the rise of micro-cultures (00:44:29)
Who should control AI? The case for employee-led governance (00:53:02)
Fixing the Internet and using AI to make us wise, not just efficient (01:01:06)
Why Victor Hugo’s "Les Misérables" is the ultimate masterpiece (01:06:14)
An existential hope vision for the future (01:08:09)
When people think about AGI, most ask “When is it going to arrive?” or “What kind of AGI will we get?” Andrew Critch, AI safety researcher and mathematician, argues that the most important question is actually “What will we do with it?”

In our conversation, we explore the importance of our choices in the quest to make AGI a force for good. Andrew explains what AGI might look like in practical terms, and the consequences of it being trained on our culture. He also claims that finding the “best” values AI should have is a philosophical trap, and that we should instead focus on finding a basic agreement about “good” vs. “bad” behaviors.

The episode also covers concrete takes on the transition to AGI, including:
- Why an advanced intelligence would likely find killing humans “mean.”
- How automated computer security checks could be one of the best uses of powerful AI.
- Why the best preparation for AGI is simply to build helpful products today.
Anna Gát, founder of the Interintellect community, joins us to explore the essential role of hopeful action and diverse communities in shaping the future. Anna shares why she started Interintellect as a space for intellectual inquiry free from political polarization and traditional gatekeeping, driven by the hope that constructive social collaboration is possible. She details the specific rules of gathering and hosting that can make online and offline groups successful, fostering deep, non-toxic, and life-changing conversations across polarizing topics.

We also dive into the genesis of Anna's own podcast, The Hope Axis, and her frustration with the prevalent "complaint culture" and regressive narratives in wealthy societies. The conversation also touches on these questions:
- Why should communities be given a clear "job" to increase their longevity?
- How can we achieve diversity of thought in tight-knit groups?
- Why is constantly networking (with a finite-game approach) detrimental to human well-being?
- What does it mean to be a "realistic optimist"?
- How can we architecturally ensure that future AI serves groups and supports humans as social creatures, rather than further enabling solitary, hyper-addictive entertainment?
Nuclear energy has a reputation problem. Despite being one of the safest and most reliable clean-energy technologies ever developed, public perception is dominated by a handful of accidents, Cold War imagery, and decades of political resistance. Isabelle Boemeke, model-turned-science-communicator and author of Rad Future, argues that this disconnect is not only irrational but actively dangerous for humanity’s prospects.

In this episode, Isabelle explains how nuclear became one of the most misunderstood technologies of the last century, why fears about waste, safety, and proliferation are often overstated, and what the data actually shows about nuclear relative to fossil fuels, hydropower, and renewables. She also talks about her unusual path to becoming the first “nuclear influencer,” why she thinks communication and aesthetics matter just as much as engineering, and why abundant, cheap energy is central to improving global living standards.

Beyond nuclear itself, the conversation touches on broader questions:
- Why are young people increasingly pessimistic about the future?
- What explains the rise of degrowth thinking in wealthy countries?
- How does meaning shift in a world where technology automates more of life?
- And what would it take for the U.S. and Europe to build again at the pace of China?

This special episode was recorded at the 2025 Progress Conference. Enormous thanks to Roots of Progress for organizing the event, and to Lighthaven for providing the podcast studio.
Young people across the Western world are struggling to start their lives. In most cases, it's not for lack of ambition, but because they can't find a place to live. The consequences show up everywhere from sluggish economies to low birth rates. But there's a way to fix it.

In this episode, we talk with Sam Bowman, editor of Works in Progress, a magazine focused on high-leverage ideas to improve the world. We discuss why housing is the master key to some of the biggest challenges Western societies face today:
- Why the biggest bottleneck to economic growth in rich countries isn't technology, but where people are allowed to live
- Where laws on housing come from and why we should change them
- Models that have actually worked: from Israel's resident-led densification to Madrid’s low-cost metro expansion
- Why aesthetics matter more than economists think when it comes to getting people to accept new housing
- What it would take for Western cities to grow the way Tokyo or the Pearl River Delta did, and what that could mean for growth, families, and optimism

This special episode was recorded live at the 2025 Progress Conference, hosted by our friends at Roots of Progress. We’re grateful to them for bringing together so many thinkers reimagining how humanity can keep moving forward—and for making conversations like this one possible!

Timestamps:
0:00 Cold open
0:38 Intro: Sam Bowman and Works in Progress
4:14 Why a magazine format instead of a think tank or Substack
10:13 When technology isn't the bottleneck to progress: housing, transport and energy
17:56 Why San Francisco thrives despite its dysfunction
24:19 Why industries develop in suboptimal places: the TSMC example
27:06 Why it's so hard to build: the history of zoning laws
36:12 Updates to regulation and policy: local decision-making models
43:56 Housing as a Western-world problem that drives everything else
48:06 The role of aesthetics in getting people to accept new buildings
55:48 Works in Progress and the journey to appreciating aesthetics
58:55 Building movements to shift expectations about the future
1:05:44 What a successful future looks like
1:09:16 Italy, Spain and the birth rate crisis
1:11:37 Housing and tech growth aren't in competition
1:12:51 What DOGE got wrong about reforming government
1:20:29 Other hopeful examples: the Madrid Metro project
What if we could treat depression, anxiety, or chronic pain by tuning the brain, just as precisely as a pacemaker regulates the heart?

Jacques Carolan, Program Director at the UK’s ARIA (Advanced Research and Invention Agency), joins us to talk about the next wave of precision neurotechnology: new tools that let us see and influence brain activity with far greater accuracy. We explore how ultrasound might gently stimulate mood circuits without surgery, how gene therapies could switch off seizures before they start, and how “living electrodes” could one day repair damaged brain tissue.

Jacques also explains ARIA’s bold approach to funding high-risk science, what he’s learned from patient engagement, and why he believes the next decade will transform how we understand and care for the brain.
What if chronic diseases, from Alzheimer’s to autoimmune conditions, share a hidden cause: lingering infections deep within our tissues?

Microbiologist Amy Proal, co-founder of the PolyBio Research Foundation, joins host Allison Duettmann to discuss how persistent pathogens could drive inflammation, aging, and many chronic illnesses, and why our current “autoimmunity” model might be missing the root cause.

They explore PolyBio’s groundbreaking work collecting rarely studied tissue samples, the link between viruses and Alzheimer’s, the rise of long COVID, and simple tools, like clean indoor air, that could prevent future pandemics. Amy also outlines an optimistic vision: strengthening, not suppressing, the immune system to build a healthier, more resilient civilization.
In this episode, Ken Liu joins the podcast to explore how science fiction serves as our modern mythology. We discuss his new techno-thriller "All That We See or Seem", the concept of egolets (AI capturing facets of our identity), the noematograph (AI as a camera for thought), and the role of collective dreaming in making us more human. Ken also reflects on Frankenstein, Philip K. Dick, the challenge of translation, and why technology is “the mind made tangible.”

Ken's new book is now available to buy: https://www.amazon.com/All-That-Seem-Julia-Novel/dp/1668083175
In this episode of the Existential Hope Podcast, Beatrice Erkers is joined by David Duvenaud, Associate Professor at the University of Toronto and former researcher at Anthropic.

We discuss David’s work on post-AGI civilizational equilibria and the widely discussed paper Gradual Disempowerment. David reflects on why liberalism may not hold up in a world where humans are no longer needed, how UBI could be Goodharted into absurdity, and what it would take to design institutions that protect humans even when incentives don’t.

We also cover:
- Forecasting the long-term future using LLMs trained on historical data
- Robin Hanson’s idea of futarchy (governance by prediction markets)
- Asymmetrical but beneficial relationships between humans and AI
- Uploading, cultural legacies, and the possibility of “worthy successors”
What does a genuinely positive future with AI look like? While dystopian visions are common, the most valuable—and scarcest—resource we have is a concrete, hopeful vision for where we're headed.

In this episode, we're joined by Nathan Labenz, host of the popular Cognitive Revolution podcast, to explore the tangible possibilities of a beneficial AI-driven world. Nathan shares his insights on everything from the near-term transformations in education and healthcare—like AI-driven antibiotic discovery and personalized learning—to the grand, long-term visions of curing all diseases and becoming a multi-planetary species.

We dive deep into crucial concepts like Eric Drexler's "comprehensive AI services" as a model for safety through narrowness, the transformative power of self-driving cars, and how we can collectively raise our ambitions to build the future we actually want.
For years, the conversation about the long-term future has been dominated by a crucial question: how do we avoid extinction? But what if ensuring our survival is only half the battle? In this episode, Beatrice is joined by Fin Moorhouse, a researcher at Forethought and co-author, with Will MacAskill, of the Better Futures series, to make the case for focusing on the other half: flourishing. Or, as we like to say on this podcast: Existential Hope!

Fin challenges the idea that a great future will emerge automatically if we just avoid the worst-case scenarios. Using the analogy of a grand sailing expedition, he explores the complexities of navigating towards a truly optimal world, questioning whether our current moral compass is enough to guide us.

The conversation dives into the concept of "moral catastrophes"—profound ethical failings, like industrial animal farming, that could persist even in technologically advanced futures. Fin also tackles the complex challenges posed by digital minds, from the risk of accidental suffering to the creation of "willing servants." He argues for the power of "moral trade" as a tool to build a more pluralistic and prosperous world, and explains why we should aim for a "Viatopia"—a stable and self-sustaining state that makes a great future highly likely.
Is code just a technical skill for engineers, or is it a deeply humanistic art form capable of expanding our minds? In this episode, host Beatrice Erkers is joined by scientist, author, and Coder-in-Residence at Lux Capital, Sam Arbesman, to explore the profound ideas in his new book, The Magic of Code.

Sam reframes our relationship with computing, arguing that code is one of history's most powerful "tools for thought," standing alongside the alphabet and paper in its ability to augment human intellect. He delves into the fascinating history of this idea, from Don Swanson's concept of "undiscovered public knowledge" in scientific literature to the modern potential of AI to connect disparate ideas and accelerate discovery.

The conversation also explores the democratization of creation through "vibe coding," the power of thinking of an app as a "home-cooked meal," and the critical importance of humility as our technological systems become too complex for any single person to fully understand—a theme from his previous book, Overcomplicated. Sam connects these ideas to the ever-changing nature of knowledge itself, drawing from his first book, The Half-Life of Facts.
The tech industry we read about every day accounts for only 2% of the global economy. So what about the other 98%? In this episode, host Beatrice Erkers talks to hacker, inventor, and author Pablos Holman about his new book, Deep Future, and why it's time to look beyond software to solve the world's biggest problems.

Pablos argues that for decades, our brightest minds have been focused on apps and ads while ignoring the fundamental industries that civilization depends on: energy, manufacturing, shipping, and food. He makes the case for "deep tech"—everything but software—and explains why now is the perfect moment to deploy our "software toolkit" to reinvent these stagnant, trillion-dollar sectors.

From computer-controlled sailing ships and factory-built nuclear reactors buried a mile underground, to the simple genius of a better milk jug that can double a farmer's income, Pablos shares mind-bending examples of technology that truly matters. He also offers a grounded take on AI, explaining why computational modeling for disease control is more impactful than AGI hype, and delivers a powerful vision for a future where energy abundance ends global conflict and automation frees humanity to focus on what makes us thrive: care, community, and connection.
What if we could build an AI that doesn't just answer questions, but makes fundamental scientific discoveries on its own? That's the mission of Future House, and in this episode, host Allison Duettmann sits down with its co-founder, Andrew White.

Andrew shares the incredible journey that led him from chemical engineering to the forefront of the AI for Science revolution. He gives us a look under the hood at Future House's flock of specialized AI agents, like Crow, Finch, and Owl, and reveals how they recently accomplished in just three weeks what could have taken years: identifying an existing drug as a potential new treatment for a common cause of blindness.

But the conversation doesn't stop at the successes. Andrew offers a sharp critique of the current methods for evaluating AI, explaining what's wrong with benchmarks like "Humanity's Last Exam" and why the ultimate test is real-world discovery. He also makes a compelling case for completely reinventing the slow and inefficient scientific publishing system for an era where machines are both the producers and consumers of research.

Andrew is also fundraising for the Frontiers Society at IPAM to advance this work. If you'd like to support, you can donate here: IPAM Donation Page.




