The Foresight Institute Podcast

Author: Foresight Institute
© 2023 The Foresight Institute Podcast
Description
Welcome to the Foresight Institute’s podcast!
Since 1986, Foresight has advanced technologies for the long-term benefit of life, focusing on science and technology too early-stage or interdisciplinary for traditional institutions to support.
Our podcast features technical seminars in Molecular Machines, Biotech, Intelligent Cooperation, Neurotech, and Spacetech, alongside our Existential Hope content and special episodes.
To view presentations of our technical work and to stay up-to-date on new content, subscribe on YouTube and follow us on Twitter. The Foresight Institute Podcast is AI-searchable using Fathom.fm.
Hosted on Acast. See acast.com/privacy for more information.
182 Episodes
A Special Podcast Series on Zero-Knowledge-Enabled Cooperation, sponsored by the Zcash Foundation. This is a three-part privacy podcast series, #ShieldedTransactions, that examines the increasing surveillance of contemporary society and the effective tech solutions that can help change that.

Andrew Miller: "I'm an Assistant Professor at the University of Illinois, Urbana-Champaign, in Electrical and Computer Engineering and an affiliate in Computer Science. I'm also an Associate Director of the Initiative for Cryptocurrencies and Contracts (IC3) and a board member of the Zcash Foundation and the Ethereum Enterprise Alliance. I received my Ph.D. from the University of Maryland Cybersecurity Center. I am Director of the Decentralized Systems Lab at UIUC!"

Session summary: Shielded Transactions | Who Will Own Your Privacy & Identity in the Future? (with Andrew Miller) - YouTube

The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1986 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. Apply to Foresight's virtual salons and in-person workshops here!

We are entirely funded by your donations. If you enjoy what we do, please consider donating through our donation page.

Visit our website for more content, or join us here: Twitter | Facebook | LinkedIn

Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts.

Hosted on Acast. See acast.com/privacy for more information.
In this special "minisode" of the Existential Hope podcast, Allison and Beatrice from Foresight Institute sit down to discuss their newly launched, free worldbuilding course on Udemy: the AI Futures Worldbuilding course. This course, created in partnership with the Future of Life Institute, helps participants imagine and shape positive visions for AI's impact on technology, governance, economics, and everyday life.

Hear about expert guest lectures from leaders like Anousheh Ansari (XPRIZE), Helen Toner (CSET), Hannah Ritchie (Our World in Data), Ada Palmer (University of Chicago), Anthony Aguirre (FLI), and more. If you're curious how to chart a better future with AI, or simply need a dose of optimism, tune in for practical insights and inspiring ideas.

• Take the course – Search for "Building Hopeful Futures with AI" on Udemy or visit existentialhope.com
• Submit your vision – Share your optimistic vision for 2035 using the form at existentialhope.com, and explore submissions from others.
• Spread the word – If you know someone who could use a hopeful perspective on our AI future, invite them to join this journey!

Learn more about the course: https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.

Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram
Explore every word spoken on this podcast through Fathom.fm.

Hosted on Acast. See acast.com/privacy for more information.
In this episode of the Existential Hope Podcast, cognitive psychologist and bestselling author Steven Pinker explores why, despite massive gains in human progress, many people remain pessimistic about the future, and why that matters for shaping what comes next.

Steven argues that while progress isn't automatic, it is real. By tracking long-term trends in violence, poverty, democracy, and innovation, we can see how human effort, driven by reason, science, and cooperation, has repeatedly pushed civilization forward. Yet media narratives and cognitive biases often make us blind to these achievements, reinforcing a sense of stagnation or decline.

In this conversation, we explore:
• The hidden progress shaping our world today, from rising literacy rates to declining poverty, and why these trends rarely make the news.
• Why pessimism can be self-defeating, and how a more accurate understanding of history can help us build a better future.
• The role of AI, biotech, and clean energy, and why they might unlock transformative improvements if used wisely.
• How to communicate ideas that inspire hope, including Steven's advice on cutting through jargon and tribalism to make ideas stick.

If you've ever wondered whether humanity is on the right track, or how to ensure we stay on it, this episode is for you. Listen now to hear how we can move from existential dread to existential hope. 🚀

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
"We’ve saved the world so many times throughout history. Now we just have to do it again."What if speculative fiction could do more than entertain—what if it could reshape how we think about governance, technology, and societal progress? In this episode of the Existential Hope Podcast, historian and sci-fi author Ada Palmer discusses how we can harness lessons from both history and fiction to reimagine what’s possible for humanity.Ada argues that one of the most critical advantages we have over past generations is our ability to envision a future radically different from our present. Unlike Renaissance thinkers limited by their own history, today’s societies can draw from an endless array of speculative worlds—both utopian and dystopian—to expand the horizons of what we dare to demand.In this wide-ranging conversation, Ada digs into everything from concrete ideas for how to govern in a more pluralistic, adaptable world, to the importance of storytelling in addressing existential risks, exploring:Why pluralism might be the antidote to centralized, one-size-fits-all governance and how speculative fiction shows us ways to make it work.How past and present technological advancements—like eradicating malaria—can inspire hope for tackling today’s most urgent challenges.What makes despair the ultimate barrier to progress, and how celebrating successes can keep us moving forward.Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcastsExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. 
See acast.com/privacy for more information.
Beatrice Erkers and Allison Duettmann

What if we could reimagine the future from a place of hope instead of fear? In this special episode of the Existential Hope Podcast, Allison Duettmann and Beatrice Erkers turn the tables and interview each other instead of a guest, sharing insights into their journeys, hopes, and visions for humanity. Together, they explore big concepts like moral circle expansion, how neurotech could deepen empathy (even with animals!), and why worldbuilding in 2045 can help us envision and create better futures today. Prepare for the new year by diving into strategies for building a future worth striving for.
Adam Marblestone is the CEO of Convergent Research. He is working with a large and growing network of collaborators and advisors to develop a strategic roadmap for future FROs. Outside of CR, he serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science, and as an interviewer for the Hertz Foundation. Previously, he was a Schmidt Futures Innovation Fellow, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He has also helped to start companies like BioBright, and has advised foundations such as Open Philanthropy.

Session Summary
In this episode of the Existential Hope Podcast, our guest is Adam Marblestone, CEO of Convergent Research. Adam shares his journey from working on nanotechnology and neuroscience to pioneering a bold new model for scientific work and funding: Focused Research Organizations (FROs). These nonprofit, deep-tech startups are designed to fill critical gaps in science by building the infrastructure needed to accelerate discovery. Tune in to hear how FROs are unlocking innovation, tackling bottlenecks across fields, and inspiring a new approach to advancing humanity's understanding of the world.
Speaker
Dr Ariel Zeleznikow-Johnston is a neuroscientist at Monash University, Australia, where he investigates methods for characterising the nature of conscious experiences. In 2019, he obtained his PhD from The University of Melbourne, where he researched how genetic and environmental factors affect cognition. His research interests range from the decline, preservation, and rescue of cognitive function at different stages of the lifespan through to comparing different people's conscious experience of colour.

Session Summary
In this edition of the Hope Drop, we dive into a thought-provoking conversation with Dr. Ariel Zeleznikow-Johnston, neuroscientist and author of The Future Loves You: How and Why We Should Abolish Death. Ariel explores the science and philosophy of brain preservation, questioning long-held beliefs about life, death, and personal identity. We explore how neuroscience might redefine what it means to truly live, and challenge assumptions around mortality.
Eli Dourado is the Chief Economist at the Abundance Institute. He is a former Senior Research Fellow at the Mercatus Center at George Mason University, and he has studied and written about a wide range of technology policy issues, including Internet governance, intellectual property, cybersecurity, and cryptocurrency.

Session Summary
This episode covers Dourado's efforts to accelerate economic growth in the U.S., his views on policy reforms in key sectors such as health, housing, energy, and transportation, and the challenges of regulatory complacency. We also explore the potential of new technologies such as AI and biotechnology. Dourado shares his vision of a future with scalable healthcare solutions, more efficient housing, rapid deployment of energy technologies, and advancements in transportation like supersonic flights and electric vehicles. The conversation concludes with the importance of broad literacy and continuous writing in supporting progress.
Speaker
Amanda Ngo is a 2024 Foresight Fellow. Recently, she has built Elicit.org from inception to 100k+ monthly users, leading a team of 5 engineers and designers; presented on forecasting, safe AI systems, and LLM research tools at conferences (EAG, Foresight Institute); run a 60-person hackathon with FiftyYears using LLMs to improve our wellbeing; analyzed Ideal Parent Figure transcripts and built an automated IPF chatbot; and co-organized a 400-person retreat for Interact, a technology-for-social-good fellowship.

Session Summary
"Imagine waking up every day in a state of flow, where all the knots and fears are replaced with a deep sense of ease and joy."

This week we are dropping another special episode of the Existential Hope podcast, featuring Amanda Ngo, a Foresight Institute Existential Hope fellow specializing in AI innovation for wellbeing. Amanda speaks about her work on leveraging AI to enhance human flourishing, sharing insights on the latest advancements and their potential impacts.

Her app: https://www.mysunrise.app/
Speaker
Kristian Rönn is the CEO and co-founder of Normative. He has a background in mathematics, philosophy, computer science, and artificial intelligence. Before he started Normative, he worked at the University of Oxford's Future of Humanity Institute on issues related to global catastrophic risks.

Session Summary
When people talk about today's biggest challenges, they tend to frame the conversation around "bad people" doing "bad things." But is there more to the story? In this month's Hope Drop we speak to Kristian Rönn, an entrepreneur formerly affiliated with the Future of Humanity Institute. Kristian calls the deeply rooted impulses behind such behavior "Darwinian demons": forces that are a by-product of natural selection and can lead us to act in shortsighted ways that harm others, and even imperil our survival as a species. In our latest episode, Kristian explains how we can escape these evolutionary traps through cooperation and innovative thinking.

Kristian's new book, The Darwinian Trap, is being published on September 24th. Be sure to preorder it today!
Speaker
Siméon Campos is president and founder of SaferAI, an organization working on developing the infrastructure for general-purpose AI auditing and risk management. He has worked on large language models for the last two years and is highly committed to making AI safer.

Session Summary
"I think safe AGI can both prevent a catastrophe and offer a very promising pathway into a eucatastrophe."

This week we are dropping a special episode of the Existential Hope podcast, where we sit down with Siméon Campos, president and founder of SaferAI and a Foresight Institute fellow in the Existential Hope track. Siméon shares his experience working on AI governance, discusses the current state and future of large language models, and explores crucial measures needed to guide AI for the greater good.
James Pethokoukis is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute, where he analyzes US economic policy, writes and edits the AEIdeas blog, and hosts AEI's Political Economy podcast. He is also a contributor to CNBC and writes the Faster, Please! newsletter on Substack. He is the author of The Conservative Futurist: How to Create the Sci-Fi World We Were Promised (Center Street, 2023). He has also written for many publications, including the Atlantic, Commentary, Financial Times, Investor's Business Daily, National Review, New York Post, the New York Times, USA Today, and the Week.

Session Summary
In this episode, James joins us to discuss his book, The Conservative Futurist, and his perspectives on technology and economic growth. James explores his background, the spectrum of 'upwing' (pro-progress) versus 'downwing' (anti-progress), and the role of technology in solving global challenges. He explains his reasoning for being pro-progress and pro-growth, and highlights the importance of positive storytelling and education in developing a more advanced and prosperous world.
The Flourishing Foundation
In February 2024, we partnered with the Future of Life Institute on a hackathon to design institutions that can guide and govern the development of AI. The winner of the hackathon was the Flourishing Foundation, who are focused on our relationship with AI and other emerging technologies. They challenge innovators to envision and build life-centered products, services, and systems, specifically to enable TAI-enabled consumer technologies that promote human well-being by developing new norms, processes, and community-driven ecosystems.

At their core, they explore the question: "Can AI make us happier?"

Connect: https://www.flourishing.foundation/
Read about the hackathon: https://foresight.org/2024-xhope-hackathon/
Dr Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games, and he is the author of over 100 publications, including multiple journal articles and books.

Session Summary
We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, Yampolskiy argues for the necessity of maintaining AI as narrow, task-oriented systems: "I'm arguing that it's impossible to indefinitely control superintelligent systems." Nonetheless, Yampolskiy is optimistic about the future capabilities of narrow AI, from politics to longevity and health.
This episode features an interview with the 1st place winners of our 2045 Worldbuilding challenge!

Why Worldbuilding?
We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions for the future by constructing detailed scenarios that prompt us to ask: what actionable steps can we take now to realize these desirable outcomes?

Cities of Orare – our 1st place winners
Cities of Orare imagines a future where AI-powered prediction markets called Orare amplify collective intelligence, enhancing liberal democracy, economic distribution, and policy-making. Its adoption across Africa and globally has fostered decentralized governance, democratized decision-making, and spurred significant health and economic advancements.

Read more about the 2045 world of Cities of Orare: https://www.existentialhope.com/worlds/beyond-collective-intelligence-cities-of-orare
Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
This episode features an interview with the 2nd place winners of our 2045 Worldbuilding challenge!

Rising Choir – our 2nd place winners
Rising Choir envisions a 2045 where advanced AI and robotics are seamlessly integrated into everyday life, enhancing productivity and personal care. The V.O.I.C.E. system revolutionizes communication and democratic participation, developing a sense of inclusion across all levels of society. Energy abundance, driven by solar and battery advancements, addresses climate change challenges, while the presence of humanoid robots in every household marks a new era of economic output and personal convenience.

Read more about the 2045 world of Rising Choir: https://www.existentialhope.com/worlds/rising-choir-a-symphony-of-clashing-voices
Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
This episode features an interview with the 3rd place winners of our 2045 Worldbuilding challenge!

FloraTech – our 3rd place winners
In the world of 2045, a network of bounded AI agents, imbued with robust ethical constraints and specialized capabilities, has become the backbone of a thriving, harmonious global society. These AI collaborators have unlocked unprecedented possibilities for localized, sustainable production of goods and services, empowering communities to meet their needs through advanced manufacturing technologies and smart resource allocation.

Read more about the 2045 world of FloraTech: https://www.existentialhope.com/worlds/floratech-2045-co-evolving-with-technology-for-collective-flourishing
Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
Speaker
Stuart Buck is the Executive Director of the Good Science Project and a Senior Advisor at the Social Science Research Council. Formerly, he was the Vice President of Research at Arnold Ventures. His efforts to improve research transparency and reproducibility have been featured in Wired, the New York Times, The Atlantic, Slate, The Economist, and more. He has advised DARPA, IARPA, the Department of Veterans Affairs, and the White House Social and Behavioral Sciences Team on rigorous research processes, and has published in top journals (such as Science and BMJ) on how to make research more accurate.

Session Summary
Working in the field of meta-science, Stuart cares deeply about who gets funding and how, the bureaucracy that engulfs researchers everywhere, how we can fund more innovative science, ensuring results are reproducible and true, and much more. Among other things, he has funded renowned work showing that scientific research is often irreproducible, including the Reproducibility Projects in Psychology and Cancer Biology.
Speaker
Hannu Rajaniemi is a Finnish-American author of science fiction and fantasy who writes in both English and Finnish. He lives in Oakland, California, and was a founding director of the commercial research organisation ThinkTank Maths.

Episode
Perhaps most famously known for his 2010 release, The Quantum Thief, Rajaniemi is settled firmly in both the science-fiction and biotech startup worlds. Away from writing, Rajaniemi is the co-founder of Helix Nanotechnologies, a startup building the world's most advanced mRNA platform to create a unified interface to the immune system. We discuss his ideas on the impact of sci-fi on the real world and explore his views on the upcoming biotechnology wave, including gene editing and biohacking. We then address how these new technologies could be implemented and governed, before finally focusing on an idea that is, at present, only in sci-fi: the immune-computer interface.
This month, we talk to David Pearce, a thought leader who is always pushing the boundaries of human potential. Pearce, a philosopher and bioethicist, is best known for his advocacy of transhumanism, the movement that seeks to enhance human capabilities through technology.

We'll explore his ideas on overcoming aging, achieving radical life extension, embracing cognitive enhancement, and ending all suffering. Join us as we unpack the ethics and possibilities of the future, and discover Pearce's vision for a world where humanity transcends its biological limitations.