
The Future of Life

Author: Future of Life Institute

Subscribed: 642 · Played: 22,618

Description

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.
112 Episodes
It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want.  Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.  Topics discussed in this episode include: -Inner and outer alignment -How and why inner alignment can fail -Training competitiveness and performance competitiveness -Evaluating imitative amplification, AI safety via debate, and microscope AI You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/ Timestamps:  0:00 Intro  2:07 How Evan got into AI alignment research 4:42 What is AI alignment? 7:30 How Evan approaches AI alignment 13:05 What are inner alignment and outer alignment? 24:23 Gradient descent 36:30 Testing for inner alignment 38:38 Wrapping up on outer alignment 44:24 Why is inner alignment a priority? 45:30 How inner alignment fails 01:11:12 Training competitiveness and performance competitiveness 01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness 01:17:30 Imitative amplification 01:23:00 AI safety via debate 01:26:32 Microscope AI 01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment 01:34:45 Where to follow Evan and find more of his work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
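For readers who want the core idea in miniature, here is a toy sketch of the outer-alignment failure described above: a powerful optimizer exploiting a misspecified objective. All names, functions, and numbers are invented for illustration and are not taken from the episode.

```python
# Toy illustration of a misspecified objective (all names and numbers are
# invented for this example). We care about throughput AND tidiness, but the
# objective handed to the optimizer only rewards throughput. Because cutting
# corners ("mess") boosts throughput, the optimizer drives that unmodeled
# dimension to its extreme value.
import numpy as np
from scipy.optimize import minimize

def throughput(effort, mess):
    return np.sqrt(effort) + 0.5 * mess            # corner-cutting helps the proxy

def true_utility(effort, mess):
    return throughput(effort, mess) - 2.0 * mess   # but we also strongly dislike mess

def proxy_objective(x):
    effort, mess = x
    return throughput(effort, mess)                # the mess penalty was left out

res = minimize(lambda x: -proxy_objective(x), x0=[1.0, 0.0],
               bounds=[(0.0, 10.0), (0.0, 10.0)])
effort_opt, mess_opt = res.x
print(f"optimizer's choice: effort={effort_opt:.1f}, mess={mess_opt:.1f}")
print(f"true utility of that choice: {true_utility(effort_opt, mess_opt):.1f}")
# The proxy score looks excellent while true utility collapses. That is the
# outer-alignment failure described in the blurb above; inner alignment asks
# whether the trained model is even pursuing the training objective at all.
```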
This is a mix by Barker, a Berlin-based music producer, which was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ Tracklist: Delta Rain Dance - 1 John Beltran - A Different Dream Rrose - Horizon Alexandroid - lvpt3 Datassette - Drizzle Fort Conrad Sprenger - Opening JakoJako - Wavetable#1 Barker & David Goldberg - #3 Barker & Baumecker - Organik (Intro) Anthony Linell - Fractal Vision Ametsub - Skydroppin’ Ladyfish\Mewark - Comfortable JakoJako & Barker - [unreleased] Where to follow Sam Barker: Soundcloud: @voltek Twitter: twitter.com/samvoltek Instagram: www.instagram.com/samvoltek/ Website: www.voltek-labs.net/ Bandcamp: sambarker.bandcamp.com/ Where to follow Sam's label, Ostgut Ton: Soundcloud: @ostgutton-official Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/ Twitter: twitter.com/ostgutton Instagram: www.instagram.com/ostgut_ton/ Bandcamp: ostgut.bandcamp.com/ This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the author of euphoric sound landscapes inspired by the writings of David Pearce, largely exemplified in his latest album — aptly named "Utility." Sam's artistic excellence, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven are a perfect match made for the fusion of artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content. Topics discussed in this episode include: -The relationship between Sam's music and David's writing -Existential hope -Ideas from the Hedonistic Imperative -Sam's albums -The future of art and music You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ You can find the mix with no interview portion of the podcast here: https://soundcloud.com/futureoflife/barker-hedonic-recalibration-mix Where to follow Sam Barker : Soundcloud: https://soundcloud.com/voltek Twitter: https://twitter.com/samvoltek Instagram: https://www.instagram.com/samvoltek/ Website: https://www.voltek-labs.net/ Bandcamp: https://sambarker.bandcamp.com/ Where to follow Sam's label, Ostgut Ton:  Soundcloud: https://soundcloud.com/ostgutton-official Facebook: https://www.facebook.com/Ostgut.Ton.OFFICIAL/ Twitter: https://twitter.com/ostgutton Instagram: https://www.instagram.com/ostgut_ton/ Bandcamp: https://ostgut.bandcamp.com/ Timestamps:  0:00 Intro 5:40 The inspiration around Sam's music 17:38 Barker - Maximum Utility 20:03 David and Sam on their work 23:45 Do any of the tracks evoke specific visions or hopes? 24:40 Barker - Die-Hards Of The Darwinian Order 28:15 Barker - Paradise Engineering 31:20 Barker - Hedonic Treadmill 33:05 The future and evolution of art 54:03 David on how good the future can be 58:36 Guest mix by Barker Tracklist: Delta Rain Dance – 1 John Beltran – A Different Dream Rrose – Horizon Alexandroid – lvpt3 Datassette – Drizzle Fort Conrad Sprenger – Opening JakoJako – Wavetable#1 Barker & David Goldberg – #3 Barker & Baumecker – Organik (Intro) Anthony Linell – Fractal Vision Ametsub – Skydroppin’ Ladyfish\Mewark – Comfortable JakoJako & Barker – [unreleased] This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.  Topics discussed in this episode include: -The historical and intellectual foundations of AI  -How AI systems achieve or do not achieve intelligence in the same way as the human mind -The rise of AI and what it signifies  -The benefits and risks of AI in both the short and long term  -Whether superintelligent AI will pose an existential risk to humanity You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/ You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps:  0:00 Intro  4:30 The historical and intellectual foundations of AI  11:11 Moving beyond dualism  13:16 Regarding the objectives of an agent as fixed  17:20 The distinction between artificial intelligence and deep learning  22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind 49:46 What changes to human society does the rise of AI signal?  54:57 What are the benefits and risks of AI?  01:09:38 Do superintelligent AI systems pose an existential threat to humanity?  01:51:30 Where to find and follow Steve and Stuart This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them. Topics discussed in this episode include: -The problem of communication  -Global priorities  -Existential risk  -Animal suffering in both wild animals and factory farmed animals  -Global poverty  -Artificial general intelligence risk and AI alignment  -Ethics -Sam’s book, The Moral Landscape You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/ You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps:  0:00 Intro 3:52 What are the most important problems in the world? 13:14 Global priorities: existential risk 20:15 Why global catastrophic risks are more likely than existential risks 25:09 Longtermist philosophy 31:36 Making existential and global catastrophic risk more emotionally salient 34:41 How analyzing the self makes longtermism more attractive 40:28 Global priorities & effective altruism: animal suffering and global poverty 56:03 Is machine suffering the next global moral catastrophe? 59:36 AI alignment and artificial general intelligence/superintelligence risk 01:11:25 Expanding our moral circle of compassion 01:13:00 The Moral Landscape, consciousness, and moral realism 01:30:14 Can bliss and wellbeing be mathematically defined? 01:31:03 Where to follow Sam and concluding thoughts Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/ This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities? Topics discussed in this episode include: -Existential risk -Computational substrates and AGI -Genetics and aging -Risks of synthetic biology -Obstacles to space colonization -Great Filters, consciousness, and eliminating suffering You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/ You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 3:58 What are the most important issues in the world? 12:20 Collective intelligence, AI, and the evolution of computational systems 33:06 Where we are with genetics 38:20 Timeline on progress for anti-aging technology 39:29 Synthetic biology risk 46:19 George's thoughts on COVID-19 49:44 Obstacles to overcome for space colonization 56:36 Possibilities for "Great Filters" 59:57 Genetic engineering for combating climate change 01:02:00 George's thoughts on the topic of "consciousness" 01:08:40 Using genetic engineering to phase out voluntary suffering 01:12:17 Where to find and follow George This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of "superforecasters" is attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making. Topics discussed in this episode include: -What superforecasting is and what the community looks like -How superforecasting is done and its potential use in decision making -The challenges of making predictions -Predictions about and lessons from COVID-19 You can find the page for this podcast here: https://futureoflife.org/2020/04/30/on-superforecasting-with-robert-de-neufville/ You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 5:00 What is superforecasting? 7:22 Who are superforecasters and where did they come from? 10:43 How is superforecasting done and what are the relevant skills? 15:12 Developing a better understanding of probabilities 18:42 How is it that superforecasters are better at making predictions than subject matter experts? 21:43 COVID-19 and a failure to understand exponentials 24:27 What organizations and platforms exist in the space of superforecasting? 27:31 What's up for consideration in an actual forecast 28:55 How are forecasts aggregated? Are they used? 31:37 How accurate are superforecasters? 34:34 How is superforecasting complementary to global catastrophic risk research and efforts? 39:15 The kinds of superforecasting platforms that exist 43:00 How accurate can we get around global catastrophic and existential risks? 46:20 How to deal with extremely rare risk and how to evaluate your prediction after the fact 53:33 Superforecasting, expected value calculations, and their use in decision making 56:46 Failure to prepare for COVID-19 and whether superforecasting will be increasingly applied to critical decision making 01:01:55 What can we do to improve the use of superforecasting? 01:02:54 Forecasts about COVID-19 01:11:43 How do you convince others of your ability as a superforecaster? 01:13:55 Expanding the kinds of questions we do forecasting on 01:15:49 How to utilize subject experts and superforecasters 01:17:54 Where to find and follow Robert This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
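As a concrete companion to the forecasting discussion, the snippet below shows the Brier score, a standard way of scoring probabilistic forecasts against outcomes, and a simple way of pooling several forecasters' probabilities. The forecasters and numbers are made up for illustration; they are not from the episode.

```python
# Sketch of how probabilistic forecasts are commonly scored and pooled.
# The probabilities and outcomes below are invented for illustration.
import numpy as np

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% earns 0.25."""
    p, y = np.asarray(probabilities), np.asarray(outcomes)
    return float(np.mean((p - y) ** 2))

# Three forecasters' probabilities for five yes/no questions, plus the outcomes.
forecasts = np.array([
    [0.9, 0.2, 0.7, 0.1, 0.6],
    [0.8, 0.3, 0.6, 0.2, 0.7],
    [0.6, 0.5, 0.5, 0.4, 0.5],
])
outcomes = np.array([1, 0, 1, 0, 1])

for i, f in enumerate(forecasts):
    print(f"forecaster {i}: Brier = {brier_score(f, outcomes):.3f}")

# A simple aggregate: average the probabilities across forecasters.
# Aggregation platforms often also "extremize" the mean (push it away from 0.5),
# since independent forecasters each hold only part of the evidence.
pooled = forecasts.mean(axis=0)
print(f"pooled forecast Brier = {brier_score(pooled, outcomes):.3f}")
```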
Just a year ago we released a two part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing.  Topics discussed in this episode include: -Rohin's and Buck's optimism and pessimism about different approaches to aligned AI -Traditional arguments for AI as an x-risk -Modeling agents as expected utility maximizers -Ambitious value learning and specification learning/narrow value learning -Agency and optimization -Robustness -Scaling to superhuman abilities -Universality -Impact regularization -Causal models, oracles, and decision theory -Discontinuous and continuous takeoff scenarios -Probability of AI-induced existential risk -Timelines for AGI -Information hazards You can find the page for this podcast here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/ Timestamps:  0:00 Intro 3:48 Traditional arguments for AI as an existential risk 5:40 What is AI alignment? 7:30 Back to a basic analysis of AI as an existential risk 18:25 Can we model agents in ways other than as expected utility maximizers? 19:34 Is it skillful to try and model human preferences as a utility function? 27:09 Suggestions for alternatives to modeling humans with utility functions 40:30 Agency and optimization 45:55 Embedded decision theory 48:30 More on value learning 49:58 What is robustness and why does it matter? 01:13:00 Scaling to superhuman abilities 01:26:13 Universality 01:33:40 Impact regularization 01:40:34 Causal models, oracles, and decision theory 01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios 01:53:18 What is the probability of AI-induced existential risk? 02:00:53 Likelihood of continuous and discontinuous take off scenarios 02:08:08 What would you both do if you had more power and resources? 02:12:38 AI timelines 02:14:00 Information hazards 02:19:19 Where to follow Buck and Rohin and learn more This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk. Topics discussed in this episode include: -The importance of taking expected value calculations seriously -The need for making accurate predictions -The difficulty of taking probabilities seriously -Human psychological bias around estimating and acting on risk -The massive online prediction solicitation and aggregation engine, Metaculus -The risks and benefits of synthetic biology in the 21st Century You can find the page for this podcast here: https://futureoflife.org/2020/04/08/lessons-from-covid-19-with-emilia-javorsky-and-anthony-aguirre/ Timestamps:  0:00 Intro  2:35 How has COVID-19 demonstrated weakness in human systems and risk preparedness  4:50 The importance of expected value calculations and considering risks over timescales  10:50 The importance of being able to make accurate predictions  14:15 The difficulty of trusting probabilities and acting on low probability high cost risks 21:22 Taking expected value calculations seriously  24:03 The lack of transparency, explanation, and context around how probabilities are estimated and shared 28:00 Diffusion of responsibility and other human psychological weaknesses in thinking about risk 38:19 What Metaculus is and its relevance to COVID-19  45:57 What is the accuracy of predictions on Metaculus and what has it said about COVID-19? 50:31 Lessons for existential risk from COVID-19  58:42 The risk of synthetic bio enabled pandemics in the 21st century  01:17:35 The extent to which COVID-19 poses challenges to democratic institutions This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
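To make the expected-value point above concrete, here is a minimal sketch comparing a frequent, moderate risk with a rare, catastrophic one. Every number in it is an illustrative assumption, not a figure from the episode.

```python
# A minimal expected-value sketch of why low-probability, high-cost risks
# deserve attention. All numbers are illustrative assumptions.

def expected_loss(annual_probability, cost, years=10):
    """Expected loss over a time horizon, assuming independent annual risk."""
    p_at_least_once = 1 - (1 - annual_probability) ** years
    return p_at_least_once * cost

# A frequent, moderate disruption vs. a rare, civilization-scale catastrophe.
common_risk = expected_loss(annual_probability=0.10, cost=1e9)     # $1B event
rare_risk   = expected_loss(annual_probability=0.001, cost=1e13)   # $10T event

print(f"common, moderate risk: ~${common_risk / 1e9:.1f}B expected loss over 10 years")
print(f"rare, catastrophic risk: ~${rare_risk / 1e9:.1f}B expected loss over 10 years")
# Even at a 0.1% annual probability, the catastrophic risk dominates in
# expectation -- the kind of calculation the episode argues we tend to neglect.
```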
Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time. Topics discussed in this episode include: -An overview of Toby's new book -What it means to be standing at the precipice and how we got here -Useful arguments for why existential risk matters -The risks themselves and their likelihoods -What we can do to safeguard humanity's potential You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/ Timestamps:  0:00 Intro  03:35 What the book is about  05:17 What does it mean for us to be standing at the precipice?  06:22 Historical cases of global catastrophic and existential risk in the real world 10:38 The development of humanity’s wisdom and power over time   15:53 Reaching existential escape velocity and humanity’s continued evolution 22:30 On effective altruism and writing the book for a general audience  25:53 Defining “existential risk”  28:19 What is compelling or important about humanity’s potential or future persons? 32:43 Various and broadly appealing arguments for why existential risk matters 50:46 Short overview of natural existential risks 54:33 Anthropogenic risks 58:35 The risks of engineered pandemics  01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity  01:09:43 How and where to follow Toby and pick up his book This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life and where we draw lines around the acceptable and unacceptable uses of this technology will set precedents and grounds for future international AI collaboration and governance. Such regulation efforts or lack thereof will also shape the kinds of weapons technologies that proliferate in the 21st century. On this episode of the AI Alignment Podcast, Paul Scharre joins us to discuss autonomous weapons, their potential benefits and risks, and the ongoing debate around the regulation of their development and use.  Topics discussed in this episode include: -What autonomous weapons are and how they may be used -The debate around acceptable and unacceptable uses of autonomous weapons -Degrees and kinds of ways of integrating human decision making in autonomous weapons  -Risks and benefits of autonomous weapons -Whether there is an arms race for autonomous weapons -How autonomous weapons issues may matter for AI alignment and long-term AI safety You can find the page for this podcast here: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/ Timestamps:  0:00 Intro 3:50 Why care about autonomous weapons? 4:31 What are autonomous weapons?  06:47 What does “autonomy” mean?  09:13 Will we see autonomous weapons in civilian contexts?  11:29 How do we draw lines of acceptable and unacceptable uses of autonomous weapons?  24:34 Defining and exploring human “in the loop,” “on the loop,” and “out of loop”  31:14 The possibility of generating international lethal laws of robotics 36:15 Whether autonomous weapons will sanitize war and psychologically distance humans in detrimental ways 44:57 Are persons studying the psychological aspects of autonomous weapons use?  47:05 Risks of the accidental escalation of war and conflict  52:26 Is there an arms race for autonomous weapons?  01:00:10 Further clarifying what autonomous weapons are 01:05:33 Does the successful regulation of autonomous weapons matter for long-term AI alignment considerations? 01:09:25 Does Paul see AI as an existential risk? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally. Topics discussed in this episode include: -What the Windfall Clause is and how it might function -The need for such a mechanism given AGI-generated economic windfall -Problems the Windfall Clause would help to remedy -The mechanism for distributing windfall profit and the function for defining such profit -The legal permissibility of the Windfall Clause -Objections and alternatives to the Windfall Clause You can find the page for this podcast here: https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/ Timestamps: 0:00 Intro 2:13 What is the Windfall Clause? 4:51 Why do we need a Windfall Clause? 06:01 When we might reach windfall profit and what that profit looks like 08:01 Motivations for the Windfall Clause and its ability to help with job loss 11:51 How the Windfall Clause improves allocation of economic windfall 16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems 18:45 The Windfall Clause as assisting with general norm setting 20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk 23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation 25:03 The windfall function and desiderata for guiding its formation 26:56 How the Windfall Clause is different from a new taxation scheme 30:20 Developing the mechanism for distributing the windfall 32:56 The legal permissibility of the Windfall Clause in the United States 40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands 43:28 Historical precedents for the Windfall Clause 44:45 Objections to the Windfall Clause 57:54 Alternatives to the Windfall Clause 01:02:51 Final thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
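To make the idea of a windfall function more tangible, the sketch below computes a firm's hypothetical obligation from marginal brackets defined relative to gross world product. The thresholds and rates are invented for illustration; they are not the figures proposed in the Windfall Clause report or discussed in the episode.

```python
# Hedged sketch of a "windfall function": firms commit to donating a share of
# profits only above thresholds defined relative to gross world product (GWP).
# The brackets, rates, and GWP figure below are assumptions for illustration.

GWP = 100e12  # rough order-of-magnitude assumption for world GDP, in dollars

# (lower bound as fraction of GWP, upper bound, marginal donation rate)
BRACKETS = [
    (0.00, 0.01, 0.00),   # below 1% of GWP: no obligation
    (0.01, 0.10, 0.20),   # 1-10% of GWP: 20% of marginal profit
    (0.10, 1.00, 0.50),   # above 10% of GWP: 50% of marginal profit
]

def windfall_obligation(profit):
    """Sum the obligation accrued in each marginal bracket the profit reaches."""
    owed = 0.0
    for lo, hi, rate in BRACKETS:
        lo_abs, hi_abs = lo * GWP, hi * GWP
        if profit > lo_abs:
            owed += rate * (min(profit, hi_abs) - lo_abs)
    return owed

# Example: a firm earning 5% of GWP in annual profit.
profit = 0.05 * GWP
print(f"profit: ${profit / 1e12:.1f}T, obligation: ${windfall_obligation(profit) / 1e12:.2f}T")
```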
From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy related solutions become an attractive area of consideration. But just what can anyone do in the present day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and the importance of AGI-risk sensitive persons' involvement in present day AI policy discourse.  Topics discussed in this episode include: -The importance of current AI policy work for long-term AI risk -Where we currently stand in the process of forming AI policy -Why persons worried about existential risk should care about present day AI policy -AI and the global community -The rationality and irrationality around AI race narratives You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/ Timestamps:  0:00 Intro 4:58 Why it’s important to work on AI policy  12:08 Our historical position in the process of AI policy 21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?  33:46 AI policy and shorter-term global catastrophic and existential risks 38:18 The Brussels and Sacramento effects 41:23 Why is racing on AI technology bad?  48:45 The rationality of racing to AGI  58:22 Where is AI policy currently? This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity. Topics discussed in this episode include: - Views on the nature of reality - Quantum mechanics and the implications of quantum uncertainty - Identity, information and description - Continuum of objectivity/subjectivity You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/31/fli-podcast-identity-information-the-nature-of-reality-with-anthony-aguirre/ Timestamps: 3:35 - General history of views on fundamental reality 9:45 - Quantum uncertainty and observation as interaction 24:43 - The universe as constituted of information 29:26 - What is information and what does the view of reality as information have to say about objects and identity 37:14 - Identity as on a continuum of objectivity and subjectivity 46:09 - What makes something more or less objective? 58:25 - Emergence in physical reality and identity 1:15:35 - Questions about the philosophy of identity in the 21st century 1:27:13 - Differing views on identity changing human desires 1:33:28 - How the reality as information perspective informs questions of identity 1:39:25 - Concluding thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you? Questions like these –– those that explore the nature of personal identity and challenge our commonly held intuitions about it –– are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI enabled bio-engineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and re-program our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction? Topics discussed in this episode include: -Identity from epistemic, ontological, and phenomenological perspectives -Identity formation in biological evolution -Open, closed, and empty individualism -The moral relevance of views on identity -Identity in the world today and on the path to superintelligence and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/ Timestamps:  0:00 - Intro 6:33 - What is identity? 9:52 - Ontological aspects of identity 12:50 - Epistemological and phenomenological aspects of identity 18:21 - Biological evolution of identity 26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers 31:23 - Moral relevance of identity 34:20 - Religion as codifying views on identity 37:50 - Different views on identity 53:16 - The hard problem and the binding problem 56:52 - The problem of causal efficacy, and the palette problem 1:00:12 - Navigating views of identity towards truth 1:08:34 - The relationship between identity and the self model 1:10:43 - The ethical implications of different views on identity 1:21:11 - The consequences of different views on identity on preference weighting 1:26:34 - Identity and AI alignment 1:37:50 - Nationalism and AI alignment 1:42:09 - Cryonics, species divergence, immortality, uploads, and merging. 1:50:28 - Future scenarios from Life 3.0 1:58:35 - The role of identity in the AI itself This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Topics discussed include: -Max and Yuval's views and intuitions about consciousness -How they ground and think about morality -Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk -The function of myths and stories in human society -How emerging science, technology, and global paradigms challenge the foundations of many of our stories -Technological risks of the 21st century You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/ Timestamps: 0:00 Intro 3:14 Grounding morality and the need for a science of consciousness 11:45 The effective altruism community and its main cause areas 13:05 Global health 14:44 Animal suffering and factory farming 17:38 Existential risk and the ethics of the long-term future 23:07 Nuclear war as a neglected global risk 24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence 28:37 On creating new stories for the challenges of the 21st century 32:33 The risks of big data and AI-enabled human hacking and monitoring 47:40 What does it mean to be human and what should we want to want? 52:29 On positive global visions for the future 59:29 Goodbyes and appreciations 01:00:20 Outro and supporting the Future of Life Institute Podcast This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
As 2019 is coming to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that will possibly lead to the extinction or the permanent and drastic curtailing of the potential of Earth-originating intelligent life. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end of the year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond. Topics discussed include: -Introductions to the FLI team and our work -Motivations for our projects and existential risk mitigation efforts -The goals and outcomes of our work -Our favorite projects at FLI in 2019 -Optimistic directions for projects in 2020 -Reasons for existential hope going into 2020 and beyond You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/27/existential-hope-in-2020-and-beyond-with-the-fli-team/ Timestamps: 0:00 Intro 1:30 Meeting the Future of Life Institute team 18:30 Motivations for our projects and work at FLI 30:04 What we strive to result from our work at FLI 44:44 Favorite accomplishments of FLI in 2019 01:06:20 Project directions we are most excited about for 2020 01:19:43 Reasons for existential hope in 2020 and beyond 01:38:30 Outro
Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how this has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Topics discussed in this episode include: -Theoretical and empirical AI safety research -Jan's and DeepMind's approaches to AI safety -Jan's work and thoughts on recursive reward modeling -AI safety benchmarking at DeepMind -The potential modularity of AGI -Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities -Joining the DeepMind safety team You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/ Timestamps: 0:00 Intro 2:15 Jan's intellectual journey in computer science to AI safety 7:35 Transitioning from theoretical to empirical research 11:25 Jan's and DeepMind's approach to AI safety 17:23 Recursive reward modeling 29:26 Experimenting with recursive reward modeling 32:42 How recursive reward modeling serves AI safety 34:55 Pessimism about recursive reward modeling 38:35 How this research direction fits in the safety landscape 42:10 Can deep reinforcement learning get us to AGI? 42:50 How modular will AGI be? 44:25 Efforts at DeepMind for AI safety benchmarking 49:30 Differences between the AI safety and mainstream AI communities 55:15 Most exciting piece of empirical safety work in the next 5 years 56:35 Joining the DeepMind safety team
We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. Topics discussed include: -The psychology of existential risk, longtermism, effective altruism, and speciesism -Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" -Various works and studies Stefan Schubert has co-authored in these spaces -How this enables us to be more altruistic You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/02/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/ Timestamps: 0:00 Intro 2:31 Stefan's academic and intellectual journey 5:20 How large is this field? 7:49 Why study the psychology of X-risk and EA? 16:54 What does a better understanding of psychology here enable? 21:10 What are the cognitive limitations psychology helps to elucidate? 23:12 Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" 34:45 Messaging on existential risk 37:30 Further areas of study 43:29 Speciesism 49:18 Further studies and work by Stefan
In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.
Comments (7)

Marion Grau

What is with the demographics of the people interviewed? White male circle jerk? Few women and fewer POC.

Jul 2nd

masoud hajian

as great as usual

Apr 11th

Salar Basiri

great insightful conversation, thanks for sharing!

Mar 18th

Marco Gorelli

her best advice is to buy organic? wtf?

Oct 13th

ForexTraderNYC

The interviewer has amazing questioning skills: impressive, very open and concise. However, the interviewee (the Gonzalez fella) could be less monotone, show some enthusiasm, and be concise: some pauses, less jargon... I'm so harsh! Haha. Honestly, it's constructive criticism; I'm a perfectionist.

Jul 25th