Radio Bostrom

Author: Team Radio Bostrom
Description

Audio narrations of academic papers by Nick Bostrom.
28 Episodes
Nick Bostrom’s latest book, Deep Utopia: Life and Meaning in a Solved World, will be published on 27th March, 2024. It’s available to pre-order now: https://nickbostrom.com/deep-utopia/

The publisher describes the book as follows:

A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought?

Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near-magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable.

Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?

Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.
By Nick Bostrom, Thomas Douglas & Anders Sandberg.

Abstract: In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.

Read the full paper: https://nickbostrom.com/papers/unilateralist.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:00) Intro
(01:20) 1. Introduction
(10:02) 2. The Unilateralist's Curse: A Model
(11:31) 3. Lifting the Curse
(13:54) 3.1. The Collective Deliberation Model
(15:21) 3.2. The Meta-rationality Model
(18:15) 3.3. The Moral Deference Model
(33:24) 4. Discussion
(37:53) 5. Concluding Thoughts
(41:04) Outro & credits
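The model in section 2 is simple enough to simulate: each of n altruistic agents observes the initiative's true value plus independent noise and acts if their own estimate is positive, and the initiative is undertaken if anyone acts. A minimal sketch, assuming Gaussian noise; the function name and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_undertaken(true_value, n_agents, noise_sd=1.0, trials=100_000):
    """Probability that at least one of n agents, each acting on an
    independent noisy estimate of the initiative's value, judges it
    positive and so undertakes it unilaterally."""
    estimates = true_value + rng.normal(0.0, noise_sd, size=(trials, n_agents))
    return (estimates > 0).any(axis=1).mean()

# A mildly harmful initiative (true value -0.5): a lone agent rarely acts,
# but as the group grows, the chance that *someone* errs on the optimistic
# side rises -- the unilateralist's curse.
for n in (1, 2, 5, 10):
    print(n, p_undertaken(true_value=-0.5, n_agents=n))
```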
By Nick Bostrom.

Abstract: Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.

Read the full paper: https://nickbostrom.com/ethics/dignity
More episodes at: https://radiobostrom.com/

Outline:
(00:02) Introduction
(00:21) Abstract
(01:57) Transhumanists vs. bioconservatives
(06:42) Two fears about the posthuman
(19:44) Is human dignity incompatible with posthuman dignity?
(29:03) Why we need posthuman dignity
(34:38) Outro & credits
By Nick Bostrom.

Abstract: Rarely does philosophy produce empirical predictions. The Doomsday argument is an important exception. From seemingly trivial premises it seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody's first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no simple flaw in the argument has been identified.

Read the full paper: https://anthropic-principle.com/q=anthropic_principle/doomsday_argument/
More episodes at: https://radiobostrom.com/
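The argument's engine is a Bayesian update on one's birth rank under the self-sampling assumption. A toy worked example, with an even prior and population totals chosen purely for illustration:

```python
# Two hypotheses about how many humans will ever live, with an
# illustrative 50/50 prior between them.
doom_soon_total = 200e9    # 200 billion humans, ever
doom_late_total = 200e12   # 200 trillion humans, ever
prior = 0.5

# Your birth rank: roughly the 100-billionth human (a common estimate).
# The likelihood below only applies because this rank is possible
# under both hypotheses.
rank = 100e9
assert rank <= doom_soon_total <= doom_late_total

# Self-sampling assumption: treat your rank as a uniform random draw
# from all humans who will ever exist, so P(rank | total) = 1 / total.
like_soon = 1 / doom_soon_total
like_late = 1 / doom_late_total

posterior_soon = like_soon * prior / (like_soon * prior + like_late * prior)
print(posterior_soon)  # ~0.999: the update strongly favours "doom soon"
```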
By Nick Bostrom and Carl Shulman. Draft version 1.10.

Abstract: AIs with moral status and political rights? We'll need a modus vivendi, and it’s becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.

Read the full paper: https://nickbostrom.com/propositions.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:00) Introduction
(00:36) Disclaimer
(01:07) Consciousness and metaphysics
(06:48) Respecting AI interests
(21:41) Security and stability
(32:04) AI-empowered social organization
(38:07) Satisfying multiple values
(42:23) Mental malleability, persuasion, and lock-in
(47:20) Epistemology
(53:36) Status of existing AI systems
(59:52) Recommendations regarding current practices and AI systems
(01:07:08) Impact paths and modes of advocacy
(01:11:11) Closing credits
By Nick Bostrom. Draft version 0.9.

Abstract: New theoretical ideas for a big expedition in metaethics.

Read the full paper: https://nickbostrom.com/papers/mountethics.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:17) Metametaethics/preamble
(02:48) Genealogy
(09:41) Metaethics
(21:30) Value representors
(26:56) Moral motivation
(30:02) The weak
(33:25) Hedonism
(41:38) Hierarchical norm structure and higher morality
(55:30) Questions for future research
By Nick Bostrom.

Abstract: Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.

Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”.

Read the full paper: https://nickbostrom.com/views/transhumanist.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:25) 1 GENERAL QUESTIONS ABOUT TRANSHUMANISM
(00:31) 1.1 What is transhumanism?
(05:48) 1.2 What is a posthuman?
(10:11) 1.3 What is a transhuman?
(12:57) 2 TECHNOLOGIES AND PROJECTIONS
(13:02) 2.1 Biotechnology, genetic engineering, stem cells, and cloning – what are they and what are they good for?
(19:51) 2.2 What is molecular nanotechnology?
(31:24) 2.3 What is superintelligence?
(39:58) 2.4 What is virtual reality?
(44:52) 2.5 What is cryonics? Isn’t the probability of success too small?
(49:52) 2.6 What is uploading?
(57:26) 2.7 What is the singularity?
(01:00:26) 3 SOCIETY AND POLITICS
(01:00:34) 3.1 Will new technologies only benefit the rich and powerful?
(01:03:50) 3.2 Do transhumanists advocate eugenics?
(01:10:17) 3.3 Aren’t these future technologies very risky? Could they even cause our extinction?
(01:19:57) 3.4 If these technologies are so dangerous, should they be banned? What can be done to reduce the risks?
(01:27:47) 3.5 Shouldn’t we concentrate on current problems such as improving the situation of the poor, rather than putting our efforts into planning for the “far” future?
(01:31:17) 3.6 Will extended life worsen overpopulation problems?
(01:40:53) 3.7 Is there any ethical standard by which transhumanists judge “improvement of the human condition”?
(01:45:25) 3.8 What kind of society would posthumans live in?
(01:48:43) 3.9 Will posthumans or superintelligent machines pose a threat to humans who aren’t augmented?
(01:53:38) 4 TRANSHUMANISM AND NATURE
(01:53:44) 4.1 Why do transhumanists want to live longer?
(01:56:51) 4.2 Isn’t this tampering with nature?
(02:00:01) 4.3 Will transhuman technologies make us inhuman?
(02:01:48) 4.4 Isn’t death part of the natural order of things?
(02:07:28) 4.5 Are transhumanist technologies environmentally sound?
(02:09:49) 5 TRANSHUMANISM AS A PHILOSOPHICAL AND CULTURAL VIEWPOINT
(02:09:56) 5.1 What are the philosophical and cultural antecedents of transhumanism?
(02:27:36) 5.2 What currents are there within transhumanism? Is extropianism the same as transhumanism?
(02:33:07) 5.3 How does transhumanism relate to religion?
(02:35:59) 5.4 Won’t things like uploading, cryonics, and AI fail because they can’t preserve or create the soul?
(02:38:02) 5.5 What kind of transhumanist art is there?
(02:41:17) 6 PRACTICALITIES
(02:41:21) 6.1 What are the reasons to expect all these changes?
(02:44:46) 6.2 Won’t these developments take thousands or millions of years?
(02:48:10) 6.3 What if it doesn’t work?
(02:49:45) 6.4 How can I use transhumanism in my own life?
(02:51:27) 6.5 How could I become a posthuman?
(02:53:23) 6.6 Won’t it be boring to live forever in a perfect world?
(02:58:16) 6.7 How can I get involved and contribute?
(03:00:41) ACKNOWLEDGEMENTS AND DOCUMENT HISTORY
By Nick Bostrom.

Abstract: Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.

Read the full paper: https://nickbostrom.com/fut/evolution
More episodes at: https://radiobostrom.com/

Outline:
(00:16) Abstract
(01:06) 1. The Panglossian view
(07:50) 2. Two dystopian "upward" evolutionary scenarios
(17:35) 3. Ours is an evolutionary disequilibrium
(24:55) 4. Costly signaling and flamboyant display?
(30:07) 5. Two senses of outcompeted
(32:31) 6. Could we control our own evolution?
(35:30) 7. Preventing non-eudaemonic agents from arising
(39:56) 8. Modifying the fitness function
(43:34) 9. Policies for evolutionary steering
(49:48) 10. Detour
(52:44) 11. Only a singleton could control evolution
(59:22) 12. Conclusion
By Nick Bostrom.

Abstract: The purpose of this paper, boldly stated, is to propose a new type of philosophy, a philosophy whose aim is prediction. The pace of technological progress is increasing very rapidly: it looks as if we are witnessing an exponential growth, the growth-rate being proportional to the size already obtained, with scientific knowledge doubling every 10 to 20 years since the second world war, and with computer processor speed doubling every 18 months or so. It is argued that this technological development makes urgent many empirical questions which a philosopher could be well-suited to help answering. I try to cover a broad range of interesting problems and approaches, which means that I won't go at all deeply into any of them; I only try to say enough to show what some of the problems are, how one can begin to work with them, and why philosophy is relevant. My hope is that this will whet your appetite to deal with these questions, or at least increase general awareness that they are worthy tasks for first-class intellects, including ones which might belong to philosophers.

Read the full paper: https://nickbostrom.com/old/predict
More episodes at: https://radiobostrom.com/

Outline:
(00:19) Abstract
(01:52) 1. The Polymath
(08:41) 2. The Carter-Leslie Doomsday Argument and the Anthropic Principle
(15:28) 3. The Fermi Paradox
(43:49) 4. Superintelligence
(58:27) 5. Uploading, Cyberspace and Cosmology
(01:16:14) 6. Attractors and Values
(01:28:41) 7. Transhumanism
By Nick Bostrom. Translated by Jill Drouillard.

Abstract: The good life: just how good could it be? A vision of the future from the future.

Read the full paper: https://www.nickbostrom.com/translations/utopie.pdf
More episodes at: https://radiobostrom.com/
By Nick Bostrom.

Abstract: This note introduces the concept of a "singleton" and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.

Read the full paper: https://nickbostrom.com/fut/singleton
More episodes at: https://radiobostrom.com/

Outline:
(00:18) Abstract
(00:32) 1. Definition
(01:35) 2. Examples and Elaboration
(05:42) 3. Advantages with a Singleton
(08:02) 4. Disadvantages with a Singleton
(10:14) 5. The Singleton Hypothesis
By Carl Shulman and Nick Bostrom.

Abstract: Human capital is an important determinant of individual and aggregate economic outcomes, and a major input to scientific progress. It has been suggested that advances in genomics may open up new avenues to enhance human intellectual abilities genetically, complementing environmental interventions such as education and nutrition. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant (but likely not drastic) impacts over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology – stem cell-derived gametes – which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.

Read the full paper: https://nickbostrom.com/papers/embryo.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:28) Abstract
(01:42) Policy Implications
(03:27) From carrier-screening to cognitive enhancement
(07:42) Impact of cognitive ability
(11:20) How much cognitive enhancement from embryo selection?
(13:56) (Text Resumes)
(19:16) Stem-cell derived gametes could produce much larger effects
(21:55) Rate of adoption and public opinion
(24:35) (Text Resumes)
(27:17) Total impacts on human capital
(32:30) (Text Resumes)
(35:32) Conclusions
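The "how much enhancement?" question is essentially an order-statistics calculation: selecting the best of n embryos on a noisy polygenic predictor yields an expected gain proportional to the expected maximum of n draws, discounted by predictor accuracy. A Monte Carlo sketch under purely illustrative assumptions; the predictor accuracy and trait scaling below are placeholders, not the paper's figures:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_gain(n_embryos, predictor_r2=0.5, trait_sd=15.0, trials=200_000):
    """Expected trait gain from selecting the embryo with the highest
    *predicted* score out of n, when the predictor captures only
    `predictor_r2` of the true-score variance."""
    true = rng.normal(size=(trials, n_embryos))
    noise = rng.normal(size=(trials, n_embryos))
    predicted = np.sqrt(predictor_r2) * true + np.sqrt(1 - predictor_r2) * noise
    idx = predicted.argmax(axis=1)[:, None]          # best embryo per trial
    best_true = np.take_along_axis(true, idx, axis=1)
    return trait_sd * best_true.mean()

# Gains grow with n, but only slowly (roughly like the expected maximum
# of n normal draws): most of the benefit comes from the first few embryos.
for n in (2, 10, 100):
    print(n, round(expected_gain(n), 2))
```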
By Nick Bostrom.

Abstract: The good life: just how good could it be? A vision of the future from the future.

Read the full paper: https://nickbostrom.com/utopia
More episodes at: https://radiobostrom.com/
By Nick Bostrom.

Abstract: Technological revolutions are among the most important things that happen to humanity. Ethical assessment in the incipient stages of a potential technological revolution faces several difficulties, including the unpredictability of their long-term impacts, the problematic role of human agency in bringing them about, and the fact that technological revolutions rewrite not only the material conditions of our existence but also reshape culture and even – perhaps – human nature. This essay explores some of these difficulties and the challenges they pose for a rational assessment of the ethical and policy issues associated with anticipated technological revolutions.

Read the full paper: https://nickbostrom.com/revolutions.pdf
More episodes at: https://radiobostrom.com/

Outline:
(01:22) 1. Introduction
(06:28) 2. ELSI research, and public concerns about science and technology
(16:29) 3. Unpredictability
(32:50) 4. Strategic considerations in S&T policy
(47:37) 5. Limiting the scope of our deliberations?
(01:04:59) 6. Expanding the scope of our deliberations?
By Nick Bostrom and Julian Savulescu.

Abstract: Are we good enough? If not, how may we improve ourselves? Must we restrict ourselves to traditional methods like study and training? Or should we also use science to enhance some of our mental and physical capacities more directly?

Over the last decade, human enhancement has grown into a major topic of debate in applied ethics. Interest has been stimulated by advances in the biomedical sciences, advances which to many suggest that it will become increasingly feasible to use medicine and technology to reshape, manipulate, and enhance many aspects of human biology even in healthy individuals. To the extent that such interventions are on the horizon (or already available), there is an obvious practical dimension to these debates. This practical dimension is underscored by an outcrop of think tanks and activist organizations devoted to the biopolitics of enhancement.

Read the full paper: https://nickbostrom.com/ethics/human-enhancement-ethics.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:20) 1. Background
(08:46) 2. Enhancement in General
(24:14) 3. Enhancements of Certain Kinds
(43:48) 4. Enhancement as Practical Challenge
(47:48) 5. Conclusion
By Nick Bostrom and Matthew van der Merwe.

Abstract: Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?

Read the full paper: https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet

Links:
- The Vulnerable World Hypothesis (2019) (original academic paper)
- The Vulnerable World Hypothesis (2019) (narration by Radio Bostrom)

Notes: This article is an adaptation of Bostrom's academic paper "The Vulnerable World Hypothesis" (2019). The article was first published in Aeon Magazine. The narration was provided by Curio. We are grateful to Aeon and Curio for granting us permission to re-use the audio. Curio are offering Radio Bostrom listeners a 25% discount on their annual subscription.
By Nick Bostrom.

Abstract: Within a utilitarian context, one can perhaps try to explicate [crucial considerations] as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal, and we will see some examples of this. Now if you stop limiting your view to some utilitarian context, then you might want to retreat to these earlier more informal formulations, because one of the things that could be questioned is utilitarianism itself. But for most of this talk we will be thinking about that component.

Read the full paper: https://www.effectivealtruism.org/articles/crucial-considerations-and-wise-philanthropy-nick-bostrom
More episodes at: https://radiobostrom.com/

Outline:
(00:14) What is a crucial consideration?
(04:27) Should I vote in the national election?
(08:18) Should we favor more funding for x-risk tech research?
(14:32) Crucial considerations and utilitarianism
(18:52) Evaluation Functions
(19:03) Some tentative signposts
(20:35) (Text resumes)
(27:28) Possible areas with additional crucial considerations
(30:03) Some partial remedies
By Nick Bostrom.

Abstract: With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.

Read the full paper: https://nickbostrom.com/astronomical/waste
More episodes at: https://radiobostrom.com/
By Nick Bostrom.

Abstract: This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

Read the full paper: https://www.simulation-argument.com/simulation.pdf
More episodes at: https://radiobostrom.com/

Outline:
(00:19) Abstract
(01:11) Section 1. Introduction
(04:08) Section 2. The Assumption of Substrate-independence
(06:32) Section 3. The Technological Limits of Computation
(15:53) Section 4. The Core of the Simulation Argument
(16:58) Section 5. A Bland Indifference Principle
(22:57) Section 6. Interpretation
(35:22) Section 7. Conclusion
(36:53) Acknowledgements
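The core of the argument (section 4) turns on a single fraction. In a paraphrase of the paper's notation: let f_p be the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations run by such a civilization, and H̄ the average number of pre-posthuman individuals per civilization. The fraction of all observers with human-type experiences who are simulated is then, as a reconstructed sketch of the published formula:

```latex
f_{\mathrm{sim}} = \frac{f_p \,\bar{N}\, \bar{H}}{f_p \,\bar{N}\, \bar{H} + \bar{H}}
                 = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

Unless the product f_p N̄ is very small, f_sim is close to one. The trilemma simply enumerates the cases: almost no civilizations reach posthumanity (proposition 1), posthuman civilizations run almost no ancestor-simulations (proposition 2), or almost all human-type observers are simulated (proposition 3).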
By Nick Bostrom and Eliezer Yudkowsky.

Abstract: The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Read the full paper: https://nickbostrom.com/ethics/artificial-intelligence.pdf
More episodes at: https://radiobostrom.com/

Outline:
(01:19) Ethics in Machine Learning and Other Domain-Specific AI Algorithms
(07:23) Artificial General Intelligence
(17:01) Machines with Moral Status
(28:32) Minds with Exotic Properties
(42:45) Superintelligence
(56:39) Conclusion
(57:59) Author biographies