The Existential Hope Podcast

Author: Foresight Institute


Description

The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely.


Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow’s breakthroughs: AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.


About Foresight Institute: For 40 years the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is the North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.




Hosted on Acast. See acast.com/privacy for more information.

29 Episodes
To make the future go well, we might not need a perfect model of its end state, or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?

In this episode we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk and author of the book “Existential Hope”.

Some of the topics we discuss:
- How to shift our focus from "preventing the end of the world" to actively building a future worth living.
- Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire.
- Relying on our own sense of “the right thing to do” as a practical guide to making the world better.
- Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top.
- Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow.

Timestamps:
[01:31] SJ’s background in philosophy and existential risk
[02:02] Why write a book on existential hope?
[04:43] Defining existential hope, and its relationship with existential risks and existential anxiety
[11:09] Human agency without the guilt
[13:59] Why there are no truly "natural" disasters
[16:49] Why we shouldn’t try to build a perfect utopia
[19:05] Protopia: is iterative improvement enough?
[22:19] Defining progress: what does it mean to "get better"?
[26:13] Protopia vs. viatopia: setting goals and achieving a great future
[29:48] Existential safety as a collective project
[35:06] Using participatory tools to make global decisions
[36:32] Making existential hope reasonably demanding
[40:06] Can we achieve systemic change in a tech-focused world?
[46:00] Concrete socio-technical projects for AI safety
[49:02] Aligning AI by building its character
[51:45] The importance of history in building a good future
[54:24] Key 17th-century ideas that are shaping modern society
[58:20] Cultivating "humanity as a virtue"
[01:04:37] Lessons from nuclear near-misses: the example of Petrov
[01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making
[01:12:16] Literacy vs. orality: how ideas become simplified
[01:16:45] Meme culture and the transmission of deep context
[01:18:48] How writing the book changed SJ’s mind
[01:21:38] SJ Beard’s vision for existential hope

Full transcript, listed resources, and more: https://www.existentialhope.com/podcasts Follow on X.
Most scientists do “safe” research to secure their next grant. But what if more of them worked on the most important problems instead?

In this episode, we talk with Anastasia Gamick, co-founder of Convergent Research, about how to raise our level of ambition for what science can actually achieve. Convergent Research incubates Focused Research Organizations: small, startup-style teams that build critical “public good” tech that both academia and for-profits ignore.

We discuss:
- What makes a research project truly high-impact in view of an AI world
- Concrete examples of these projects: maps of brain synapses, software that’s provably safe, drug screening, good data for AI-powered scientific research, and more
- How to prioritize defensive technology, such as biosafety tools, instead of just pushing every frontier as fast as possible
- How young scientists can find the work that matters most for the future

Timestamps:
[00:00] Cold open
[01:52] Introducing Anastasia Gamick and the mission of Convergent Research
[02:44] Defining Focused Research Organizations (FROs) and their unique characteristics
[09:46] Backcasting from 2075: what research to prioritize now to prepare for the intelligence age
[19:08] The four types of projects Convergent decides not to fund
[25:35] Biological and ecological dark matter: why we need better datasets for AI science
[28:28] Why academia and industry aren’t incentivized to build tech capabilities for the public good
[29:32] Defining “moonshot projects”: how boring drug screening creates massive downstream impact
[32:56] The future of neuroscience: capturing videos of synapses firing
[35:46] How the FRO model is catching on internationally
[36:25] Steering vs. accelerating: selecting defense-dominant technology
[41:22] Increasing human agency and how scientists can choose high-impact research areas
[46:51] The evolution of scientific funding and the role of new philanthropy
[48:05] Finding existential hope in the community of future-builders
Our fast-paced world isn’t spinning out of our control; we’re actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency: our ability to choose our destiny rather than being subject to the whims of nature.

Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book exploring his philosophy of progress centered around human life and wellbeing. In our conversation, we dive into the core arguments of the manifesto:
- How we are more in control of our lives than ever before
- Why we should reframe the goal of “stopping climate change” as “controlling climate change” and work toward installing a “thermostat for the Earth”
- The value of nature and its interaction with humanity
- Allowing ourselves to celebrate human achievement and industrial civilization
- The concept of “solutionism”: a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
- Why two common fears around the slowing of progress, that we could run out of natural resources or new ideas, are actually unfounded
- The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
- How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction

Chapters:
[00:00] Cold open
[01:30] Intro: Jason Crawford and the Techno-Humanist Manifesto
[04:10] Defining progress as the expansion of human agency
[06:16] How to use our newfound agency to live a meaningful life
[10:07] Climate control: installing a “thermostat” for the Earth
[13:26] Anthropocentrism and the value of nature
[19:41] Ode to man: celebrating human achievement
[20:53] Solutionism: believing in our problem-solving abilities to tackle risks
[26:26] Why pessimism sounds smart but misses the solution space
[31:29] The myth of finite natural resources and the power of knowledge
[34:27] Why we are getting better at finding ideas faster than they get harder to find
[39:03] The Intelligence Age: a new mode of production
[41:19] Amplifying human agency in an AI-driven world
[43:09] Developing a healthy relationship with AI and attention
[46:28] The culture of progress and why we soured on the future
[50:10] Building the infrastructure for a global progress movement
[53:54] A 20-year vision for progress studies in the mainstream
[57:33] High-leverage regulations for progress: from nuclear to supersonic flight
[58:57] Jason Crawford’s existential hope vision
While dystopian fiction dominates our screens and bookshelves, Elle Griffin is busy researching how things might actually go right. She wanted to write a utopian novel and realized she needed a better understanding of what an ideal society could look like. In our conversation, we discuss how her favorite utopian literature influenced her views on a well-designed society. But we also explore practical ideas on how we could improve our systems:
- Tax autonomy: why giving states and cities the power to collect their own taxes would allow them to fund the specific services their citizens actually want.
- A la carte federations: a model where cities and states choose to join specific agreements, like a "fishing EU" or a "healthcare EU," instead of being forced into one large, centralized government that manages every aspect of life.
- The Mondragon model: what we can learn from a massive network of worker-owned cooperatives in Spain that provides its own unemployment insurance and university.
- Who should control AI: why giving voting authority to the employees who write the code (rather than investors or nonprofit boards) might be the best way to prevent unethical shortcuts.
- Singapore’s land model: how the government acts as a landlord to fund public services, allowing for lower income taxes while still providing universal social support.
- Fixing the Internet: how to use personal data and AI to make us wiser, rather than letting algorithms push us toward fast fashion and political radicalization.

Chapters:
[00:00:00] Cold open
[00:01:27] Introducing Elle Griffin
[00:02:27] How writing a novel turned into a research project
[00:04:21] Elle’s current work: from print pamphlets to "We Should Own the Economy"
[00:05:06] The setup of Elle’s upcoming utopian novel
[00:06:30] From gothic literature to utopian literature
[00:15:42] Three classic utopian novels and their recurring lessons
[00:22:02] Building a "future Asia" through mythology and technology
[00:23:49] What if US states had the same autonomy as EU countries?
[00:28:11] "A la carte" federalism: moving toward a modular government
[00:32:54] The Mondragon model: a blueprint for worker-owned economies
[00:36:18] Why the smallest government is the best government
[00:44:29] The global monoculture and the rise of micro-cultures
[00:53:02] Who should control AI? The case for employee-led governance
[01:01:06] Fixing the Internet and using AI to make us wise, not just efficient
[01:06:14] Why Victor Hugo’s "Les Misérables" is the ultimate masterpiece
[01:08:09] An existential hope vision for the future
When people think about AGI, most of them ask “When is it going to arrive?” or “What kind of AGI will we get?”. Andrew Critch, AI safety researcher and mathematician, argues that the most important question is actually “What will we do with it?”

In our conversation, we explore the importance of our choices in the quest to make AGI a force for good. Andrew explains what AGI might look like in practical terms, and the consequences of it being trained on our culture. He also claims that finding the “best” values AI should have is a philosophical trap, and that we should instead focus on finding a basic agreement about “good” vs. “bad” behaviors.

The episode also covers concrete takes on the transition to AGI, including:
- Why an advanced intelligence would likely find killing humans “mean.”
- How automated computer security checks could be one of the best uses of powerful AI.
- Why the best preparation for AGI is simply to build helpful products today.
Anna Gát, founder of the Interintellect community, joins us to explore the essential role of hopeful action and diverse communities in shaping the future. Anna shares why she started Interintellect as a space for intellectual inquiry free from political polarization and traditional gatekeeping, driven by the hope that constructive social collaboration is possible. She details the specific rules of gathering and hosting that can make online and offline groups successful, fostering deep, non-toxic, and life-changing conversations across polarizing topics.

We also dive into the genesis of Anna's own podcast, The Hope Axis, and her frustration with the prevalent "complaint culture" and regressive narratives in wealthy societies. The conversation also touches on these questions:
- Why should communities be given a clear "job" to increase their longevity?
- How can we achieve diversity of thought in tight-knit groups?
- Why is constantly networking (with a finite-game approach) detrimental to human well-being?
- What does it mean to be a "realistic optimist"?
- How can we architecturally ensure that future AI serves groups and supports humans as social creatures, rather than further enabling solitary, hyper-addictive entertainment?
Nuclear energy has a reputation problem. Despite being one of the safest and most reliable clean-energy technologies ever developed, public perception is dominated by a handful of accidents, Cold War imagery, and decades of political resistance. Isabelle Boemeke, model-turned-science-communicator and author of Rad Future, argues that this disconnect is not only irrational, but actively dangerous for humanity’s prospects.

In this episode, Isabelle explains how nuclear became one of the most misunderstood technologies of the last century, why fears about waste, safety, and proliferation are often overstated, and what the data actually shows about nuclear relative to fossil fuels, hydropower, and renewables. She also talks about her unusual path to becoming the first “nuclear influencer,” why she thinks communication and aesthetics matter just as much as engineering, and why abundant, cheap energy is central to improving global living standards.

Beyond nuclear itself, the conversation touches on broader questions:
- Why are young people increasingly pessimistic about the future?
- What explains the rise of degrowth thinking in wealthy countries?
- How does meaning shift in a world where technology automates more of life?
- And what would it take for the U.S. and Europe to build again at the pace of China?

This special episode was recorded at the 2025 Progress Conference. Enormous thanks to Roots of Progress for organizing the event, and to Lighthaven for providing the podcast studio.
What if the biggest driver of economic growth isn’t new technology, but simply fixing what’s broken: housing, transport, and energy?

Sam Bowman, editor of Works in Progress, joins us to explore how smarter cities, faster transit, and abundant energy could unlock human potential on an unprecedented scale. We discuss why restrictive zoning laws keep millions from opportunity, how beauty and design shape public attitudes toward progress, and why rediscovering growth could restore optimism in the West.

Sam also shares what he’s learned from success stories around the world, from Houston’s neighborhood-led zoning reforms to Madrid’s low-cost metro expansion, and why he believes rebuilding belief in progress is just as important as building the future itself.

This special episode was recorded live at the 2025 Progress Conference, hosted by our friends at Roots of Progress. We’re grateful to them for bringing together so many thinkers reimagining how humanity can keep moving forward, and for making conversations like this one possible!
What if we could treat depression, anxiety, or chronic pain by tuning the brain, just as precisely as a pacemaker regulates the heart?

Jacques Carolan, Program Director at the UK’s ARIA (Advanced Research and Invention Agency), joins us to talk about the next wave of precision neurotechnology: new tools that let us see and influence brain activity with far greater accuracy. We explore how ultrasound might gently stimulate mood circuits without surgery, how gene therapies could switch off seizures before they start, and how “living electrodes” could one day repair damaged brain tissue.

Jacques also explains ARIA’s bold approach to funding high-risk science, what he’s learned from patient engagement, and why he believes the next decade will transform how we understand and care for the brain.
What if chronic diseases, from Alzheimer’s to autoimmune conditions, share a hidden cause: lingering infections deep within our tissues?

Microbiologist Amy Proal, co-founder of the PolyBio Research Foundation, joins host Allison Duettmann to discuss how persistent pathogens could drive inflammation, aging, and many chronic illnesses, and why our current “autoimmunity” model might be missing the root cause.

They explore PolyBio’s groundbreaking work collecting rarely studied tissue samples, the link between viruses and Alzheimer’s, the rise of long COVID, and simple tools, like clean indoor air, that could prevent future pandemics. Amy also outlines an optimistic vision: strengthening, not suppressing, the immune system to build a healthier, more resilient civilization.
In this episode, Ken Liu joins the podcast to explore how science fiction serves as our modern mythology. We discuss his new techno-thriller "All That We See or Seem", the concept of egolets (AI capturing facets of our identity), the noematograph (AI as a camera for thought), and the role of collective dreaming in making us more human. Ken also reflects on Frankenstein, Philip K. Dick, the challenge of translation, and why technology is “the mind made tangible.”

Ken's new book is now available to buy: https://www.amazon.com/All-That-Seem-Julia-Novel/dp/1668083175/ref=sr_1_1?crid=YQBXYV3NPYRQ&dib=eyJ2IjoiMSJ9.qZEp-FJsQjZ1DeI_1aU9dUCHVQLskKq0l80APpXt8lY._8ZY1FJprDwz6sXFyMqa538OZaQZx-_KzsBkHjRww1g&dib_tag=se&keywords=ken+liu+all+that+we+see+or+seem&qid=1758810447&sprefix=ken+liu+all%2Caps%2C326&sr=8-1
In this episode of the Existential Hope Podcast, Beatrice Erkers is joined by David Duvenaud, Associate Professor at the University of Toronto and former researcher at Anthropic.

We discuss David’s work on post-AGI civilizational equilibria and the widely discussed paper Gradual Disempowerment. David reflects on why liberalism may not hold up in a world where humans are no longer needed, how UBI could be Goodharted into absurdity, and what it would take to design institutions that protect humans even when incentives don’t.

We also cover:
- Forecasting the long-term future using LLMs trained on historical data
- Robin Hanson’s idea of futarchy (governance by prediction markets)
- Asymmetrical but beneficial relationships between humans and AI
- Uploading, cultural legacies, and the possibility of “worthy successors”
What does a genuinely positive future with AI look like? While dystopian visions are common, the most valuable—and scarcest—resource we have is a concrete, hopeful vision for where we're headed.

In this episode, we're joined by Nathan Labenz, host of the popular Cognitive Revolution podcast, to explore the tangible possibilities of a beneficial AI-driven world. Nathan shares his insights on everything from the near-term transformations in education and healthcare—like AI-driven antibiotic discovery and personalized learning—to the grand, long-term visions of curing all diseases and becoming a multi-planetary species.

We dive deep into crucial concepts like Eric Drexler's "comprehensive AI services" as a model for safety through narrowness, the transformative power of self-driving cars, and how we can collectively raise our ambitions to build the future we actually want.
For years, the conversation about the long-term future has been dominated by a crucial question: how do we avoid extinction? But what if ensuring our survival is only half the battle? In this episode, Beatrice is joined by Fin Moorhouse, a researcher at Forethought and co-author with Will MacAskill of the Better Futures series, to make the case for focusing on the other half: flourishing. Or as we'd like to say on this podcast: Existential Hope!

Fin challenges the idea that a great future will emerge automatically if we just avoid the worst-case scenarios. Using the analogy of a grand sailing expedition, he explores the complexities of navigating towards a truly optimal world, questioning whether our current moral compass is enough to guide us.

The conversation dives into the concept of "moral catastrophes"—profound ethical failings, like industrial animal farming, that could persist even in technologically advanced futures. Fin also tackles the complex challenges posed by digital minds, from the risk of accidental suffering to the creation of "willing servants." He argues for the power of "moral trade" as a tool to build a more pluralistic and prosperous world, and explains why we should aim for a "Viatopia"—a stable and self-sustaining state that makes a great future highly likely.
Is code just a technical skill for engineers, or is it a deeply humanistic art form capable of expanding our minds? In this episode, host Beatrice Erkers is joined by scientist, author, and Coder-in-Residence at Lux Capital, Sam Arbesman, to explore the profound ideas in his new book, The Magic of Code.

Sam reframes our relationship with computing, arguing that code is one of history's most powerful "tools for thought," standing alongside the alphabet and paper in its ability to augment human intellect. He delves into the fascinating history of this idea, from Don Swanson's concept of "undiscovered public knowledge" in scientific literature to the modern potential of AI to connect disparate ideas and accelerate discovery.

The conversation also explores the democratization of creation through "vibe coding," the power of thinking of an app as a "home-cooked meal," and the critical importance of humility as our technological systems become too complex for any single person to fully understand—a theme from his previous book, Overcomplicated. Sam connects these ideas to the ever-changing nature of knowledge itself, drawing from his first book, The Half-Life of Facts.
The tech industry we read about every day accounts for only 2% of the global economy. So what about the other 98%? In this episode, host Beatrice Erkers talks to hacker, inventor, and author Pablos Holman about his new book, Deep Future, and why it’s time to look beyond software to solve the world’s biggest problems.

Pablos argues that for decades, our brightest minds have been focused on apps and ads while ignoring the fundamental industries that civilization depends on: energy, manufacturing, shipping, and food. He makes the case for "deep tech"—everything but software—and explains why now is the perfect moment to deploy our "software toolkit" to reinvent these stagnant, trillion-dollar sectors.

From computer-controlled sailing ships and factory-built nuclear reactors buried a mile underground, to the simple genius of a better milk jug that can double a farmer's income, Pablos shares mind-bending examples of technology that truly matters. He also offers a grounded take on AI, explaining why computational modeling for disease control is more impactful than AGI hype, and delivers a powerful vision for a future where energy abundance ends global conflict and automation frees humanity to focus on what makes us thrive: care, community, and connection.
What if we could build an AI that doesn't just answer questions, but makes fundamental scientific discoveries on its own? That's the mission of Future House, and in this episode, host Allison Duettmann sits down with its co-founder, Andrew White.

Andrew shares the incredible journey that led him from chemical engineering to the forefront of the AI for Science revolution. He gives us a look under the hood at Future House's flock of specialized AI agents, like Crow, Finch, and Owl, and reveals how they recently accomplished in just three weeks what could have taken years: identifying an existing drug as a potential new treatment for a common cause of blindness.

But the conversation doesn't stop at the successes. Andrew offers a sharp critique of the current methods for evaluating AI, explaining what’s wrong with benchmarks like "Humanity's Last Exam" and why the ultimate test is real-world discovery. He also makes a compelling case for completely reinventing the slow and inefficient scientific publishing system for an era where machines are both the producers and consumers of research.

Andrew is also fundraising for the Frontiers Society at IPAM to advance this work. If you’d like to support, you can donate here: IPAM Donation Page.
What if the most desirable AI future is made of powerful tools, not autonomous agents? Physicist and futurist Anthony Aguirre joins us to unpack the Tool AI pathway: how incentives, liability, and design choices could steer us toward AI that empowers people rather than replaces them.

We also situate this episode in AI Pathways, our two-scenario project exploring Tool AI and d/acc futures. Explore the project: https://ai-pathways.existentialhope.com/
Self-driving cars aren’t science fiction; they’re already here. But what kind of future are they steering us toward?

In this episode, Beatrice speaks with Andrew Miller, mobility expert and author of The End of Driving, about the transformational promise, and very real risks, of autonomous vehicles. They explore why driverless tech isn’t just about hardware or software, but about regulation, land use, curb management, jobs, and values.

From robo-taxis in San Francisco and driverless trucks in Texas, to curb chaos, job displacement, and how we reclaim space from parked cars, this episode goes far beyond the hype.
How do we shape a future worth rooting for? In this episode, Beatrice Erkers talks with Jim O'Shaughnessy, founder of O'Shaughnessy Ventures and author of What Works on Wall Street, about his third act: backing creators, thinkers, and innovators across publishing, film, AI, and investment. They dive into the cultural power of storytelling, what it means to be “AI-first,” and why cognitive diversity and personal agency are key to navigating a rapidly changing world.

Jim shares his existential hope for the next 30 years, explores how to make AI work for everyone, and offers a call to action for people with ideas: get in the arena. Along the way, we cover self-driving cars, tutoring AIs, philosophical simulations, and why beautiful books still matter.