By Nick Bostrom.

Abstract:
The good life: just how good could it be? A vision of the future from the future.

Read the full paper:
https://nickbostrom.com/utopia

More episodes at:
https://radiobostrom.com/
By Nick Bostrom.

Draft version 0.9.

Abstract:
New theoretical ideas for a big expedition in metaethics.

Read the full paper:
https://nickbostrom.com/papers/mountethics.pdf

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:17) Metametaethics/preamble
(02:48) Genealogy
(09:41) Metaethics
(21:30) Value representors
(26:56) Moral motivation
(30:02) The weak
(33:25) Hedonism
(41:38) Hierarchical norm structure and higher morality
(55:30) Questions for future research
By Nick Bostrom.

Abstract:
Within a utilitarian context, one can perhaps try to explicate [crucial considerations] as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal, and we will see some examples of this. Now if you stop limiting your view to some utilitarian context, then you might want to retreat to these earlier, more informal formulations, because one of the things that could be questioned is utilitarianism itself. But for most of this talk we will be thinking about that component.

Read the full paper:
https://www.effectivealtruism.org/articles/crucial-considerations-and-wise-philanthropy-nick-bostrom

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:14) What is a crucial consideration?
(04:27) Should I vote in the national election?
(08:18) Should we favor more funding for x-risk tech research?
(14:32) Crucial considerations and utilitarianism
(18:52) Evaluation Functions
(19:03) Some tentative signposts
(20:35) (Text resumes)
(27:28) Possible areas with additional crucial considerations
(30:03) Some partial remedies
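The definition above is quantitative at its core: a plan has an expected value under a fixed evaluation standard, and a crucial consideration is whatever revises that value radically, possibly flipping its sign. A minimal sketch of that structure, with purely illustrative numbers and names of our own (not from the talk):

```python
# Illustrative sketch (all numbers hypothetical): a crucial consideration
# radically changes the expected value of pursuing a high-level subgoal,
# here flipping its sign.

def expected_value(outcomes):
    """Expected value of a plan, given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Plan: pursue some subgoal. Naive model of its outcomes.
naive = [(0.9, +100.0), (0.1, -50.0)]
print(expected_value(naive))      # +85.0: pursue the subgoal

# A crucial consideration revises the outcome model: the subgoal
# turns out, say, to enable a catastrophic downside.
revised = [(0.9, +100.0), (0.1, -2000.0)]
print(expected_value(revised))    # -110.0: the plan's sign has flipped
```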
By Nick Bostrom.

Abstract:
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

Read the full paper:
https://www.simulation-argument.com/simulation.pdf

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:19) Abstract
(01:11) Section 1. Introduction
(04:08) Section 2. The Assumption of Substrate-independence
(06:32) Section 3. The Technological Limits of Computation
(15:53) Section 4. The Core of the Simulation Argument
(16:58) Section 5. A Bland Indifference Principle
(22:57) Section 6. Interpretation
(35:22) Section 7. Conclusion
(36:53) Acknowledgements
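The trilemma rests on a simple fraction derived in Section 4 of the paper: the proportion of all observers with human-type experiences who live in simulations. A sketch of that fraction (variable names loosely follow the paper; the values plugged in are illustrative only):

```python
# Sketch of the fraction at the core of the simulation argument
# (Section 4 of the paper). Variable names loosely follow the paper;
# the values plugged in below are purely illustrative.

def fraction_simulated(f_p, n_bar):
    """Fraction of observers with human-type experiences who are simulated.

    f_p:   fraction of human-level civilizations that survive to reach
           a posthuman stage
    n_bar: average number of ancestor-simulations a posthuman
           civilization runs, each re-running roughly one full
           pre-posthuman population's worth of experiences
    """
    return (f_p * n_bar) / (f_p * n_bar + 1)

# Unless f_p is tiny (proposition 1) or n_bar is tiny (proposition 2),
# the fraction is close to 1 (proposition 3).
print(fraction_simulated(f_p=0.01, n_bar=1_000_000))   # ~0.9999
print(fraction_simulated(f_p=1e-12, n_bar=1_000_000))  # ~1e-6
```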
By Nick Bostrom, Anders Sandberg, and Matthew van der Merwe.

This is an updated version of The Wisdom of Nature, first published in the book Human Enhancement (Oxford University Press, 2009).

Abstract:
Human beings are a marvel of evolved complexity. When we try to enhance poorly-understood complex evolved systems, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. A recognition of this reality can manifest as a vaguely normative intuition, to the effect that it is “hubristic” to try to improve on nature, or that biomedical therapy is ok while enhancement is morally suspect. We suggest that one root of these moral intuitions may be fundamentally prudential rather than ethical. More importantly, we develop a practical heuristic, the “evolutionary optimality challenge”, for evaluating the plausibility that specific candidate biomedical interventions would be safe and effective. This heuristic recognizes the grain of truth contained in “nature knows best” attitudes while providing criteria for identifying the special cases where it may be feasible, with present or near-future technology, to enhance human nature.

Read the full paper:
https://www.nickbostrom.com/evolutionary-optimality.pdf

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:31) Abstract
(01:58) Introduction
(07:22) The Evolutionary Optimality Challenge
(11:13) Altered tradeoffs
(12:18) Evolutionary incapacity
(13:33) Value discordance
(14:47) Altered tradeoffs
(17:50) Changes in resources
(23:24) Changes in demands
(28:44) Evolutionary incapacity
(30:54) Fundamental inability
(32:54) Local optima
(34:17) Example: the appendix
(36:37) Example: the ε4 allele
(37:52) Example: the sickle-cell allele
(42:33) Lags
(46:26) Example: lactase persistence
(47:18) Value discordance
(49:05) Example: contraceptives
(50:55) Good for the individual
(55:22) Example: happiness
(56:40) Good for society
(58:18) Example: compassion
(01:00:03) The heuristic
(01:00:30) Current ignorance prevents us from forming any plausible idea about the evolutionary factors at play
(01:01:43) We come up with a plausible idea about the relevant evolutionary factors, and they suggest that the intervention would be harmful
(01:02:31) We come up with several different plausible ideas about the relevant evolutionary factors
(01:03:26) We develop a plausible idea about the relevant evolutionary factors, and they imply we wouldn’t have evolved the enhanced capacity even if it were beneficial
(01:08:23) Conclusion
(01:09:11) References
(01:09:18) Thanks to
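The outline shows the shape of the heuristic: confronted with a proposed enhancement, ask why evolution has not already produced it, and sort the candidate answers into altered tradeoffs, evolutionary incapacity, or value discordance. A toy sketch of that screening step; the three categories are the paper's, but the screening logic and all names here are our simplified illustration:

```python
# Toy sketch of the "evolutionary optimality challenge" as a screening
# step. The three explanation categories are the paper's; the screening
# logic and all names here are our simplified illustration.

from enum import Enum, auto

class EocExplanation(Enum):
    ALTERED_TRADEOFFS = auto()        # our environment/resources differ from ancestral ones
    EVOLUTIONARY_INCAPACITY = auto()  # evolution could not reach the design (e.g. local optima, lags)
    VALUE_DISCORDANCE = auto()        # what evolution "optimized for" differs from what we value
    NONE_FOUND = auto()               # no good answer to "why hasn't evolution done this already?"

def eoc_screen(explanation: EocExplanation) -> str:
    """If we cannot explain why evolution has not already made the
    enhancement, suspect hidden costs: interventions on poorly
    understood evolved systems often fail or backfire."""
    if explanation is EocExplanation.NONE_FOUND:
        return "suspect: expect the intervention to fail or backfire"
    return "passes the EOC: evaluate safety and efficacy as usual"

print(eoc_screen(EocExplanation.ALTERED_TRADEOFFS))
```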
By Nick Bostrom.

Abstract:
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.

Read the full paper:
https://nickbostrom.com/ethics/dignity

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:02) Introduction
(00:21) Abstract
(01:57) Transhumanists vs. bioconservatives
(06:42) Two fears about the posthuman
(19:44) Is human dignity incompatible with posthuman dignity?
(29:03) Why we need posthuman dignity
(34:38) Outro & credits
By Nick Bostrom.

Abstract:
With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.

Read the full paper:
https://nickbostrom.com/astronomical/waste

More episodes at:
https://radiobostrom.com/
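The "extremely large" cost follows from multiplying a few big numbers together. A back-of-the-envelope sketch of the argument's structure, with every parameter an illustrative assumption of ours rather than the paper's own estimate:

```python
# Back-of-the-envelope sketch of the astronomical-waste argument.
# Every parameter below is an illustrative assumption, not the
# paper's own estimate.

STARS_ACCESSIBLE = 1e22        # stars in the accessible universe (assumed)
LIVES_PER_STAR_YEAR = 1e6      # happy lives sustainable per star per year (assumed)

# Potential good forgone for each year colonization is delayed:
lost_per_year_of_delay = STARS_ACCESSIBLE * LIVES_PER_STAR_YEAR
print(f"{lost_per_year_of_delay:.0e} life-years lost per year of delay")

# The utilitarian lesson: a tiny gain in the probability that
# colonization happens at all outweighs a large gain in speed.
TOTAL_FUTURE_LIFE_YEARS = 1e40  # value of the whole future if we succeed (assumed)
P_SUCCESS_GAIN = 1e-9           # one-in-a-billion increase in success odds (assumed)
print(f"{P_SUCCESS_GAIN * TOTAL_FUTURE_LIFE_YEARS:.0e} expected life-years "
      "from the safety gain")
```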
By Nick Bostrom & Carl Shulman.

Draft version 1.10.

AIs with moral status and political rights? We'll need a modus vivendi, and it’s becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.

Read the full paper:
https://nickbostrom.com/propositions.pdf

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:00) Introduction
(00:36) Disclaimer
(01:07) Consciousness and metaphysics
(06:48) Respecting AI interests
(21:41) Security and stability
(32:04) AI-empowered social organization
(38:07) Satisfying multiple values
(42:23) Mental malleability, persuasion, and lock-in
(47:20) Epistemology
(53:36) Status of existing AI systems
(59:52) Recommendations regarding current practices and AI systems
(01:07:08) Impact paths and modes of advocacy
(01:11:11) Closing credits
By Nick Bostrom.

Abstract:
Technological revolutions are among the most important things that happen to humanity. This paper discusses some of the ethical and policy issues raised by anticipated technological revolutions, such as nanotechnology.

Read the full paper:
https://nickbostrom.com/revolutions.pdf

More episodes at:
https://radiobostrom.com

---

Outline:
(01:22) 1. Introduction
(06:28) 2. ELSI research, and public concerns about science and technology
(16:29) 3. Unpredictability
(32:50) 4. Strategic considerations in S&T policy
(47:37) 5. Limiting the scope of our deliberations?
(01:04:59) 6. Expanding the scope of our deliberations?
By Nick Bostrom.

Abstract:
Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto.

Read the full paper:
https://nickbostrom.com/fable/dragon

More episodes at:
https://radiobostrom.com/
By Nick Bostrom.

Abstract:
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

Read the full paper:
https://existential-risk.org/concept

More episodes at:
https://radiobostrom.com/

---

Outline:
(00:15) Abstract
(01:10) Policy Implications
(02:43) The maxipok rule
(10:45) Qualitative risk categories
(17:00) Magnitude of expected loss in existential catastrophe
(26:48) Maxipok
(29:13) Classification of existential risk
(31:19) Human extinction
(34:18) Permanent stagnation
(42:00) Flawed realisation
(45:37) Subsequent ruination
(48:44) Capability and value
(54:16) Some other ethical perspectives
(01:01:57) Existential risk and normative uncertainty
(01:06:07) Keeping our options alive
(01:14:38) Outlook
(01:15:15) Barriers to thought and action
(01:24:36) Grounds for optimism
(01:31:40) Author Information
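The claim that "even relatively small reductions in net existential risk have enormous expected value" is, on the aggregative view the paper discusses, a one-line multiplication. A sketch with numbers chosen by us for illustration (the paper works through its own estimates):

```python
# Illustrative arithmetic behind the maxipok rule. All numbers are
# assumptions for illustration; the paper develops its own estimates.

FUTURE_VALUE = 1e35      # value of humanity's entire future, in life-equivalents (assumed)
RISK_REDUCTION = 1e-10   # reduction in net existential risk achieved (assumed)

ev_from_risk_reduction = RISK_REDUCTION * FUTURE_VALUE   # 1e25 life-equivalents
ev_from_direct_benefit = 1e7   # e.g. saving ten million lives with certainty (assumed)

# Even this minuscule risk reduction dwarfs the huge direct benefit:
print(ev_from_risk_reduction / ev_from_direct_benefit)   # 1e18
```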
By Nick Bostrom and Matthew van der Merwe.

Abstract:
Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?

Read the full paper:
https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet

Links:
- The Vulnerable World Hypothesis (2019) (original academic paper)
- The Vulnerable World Hypothesis (2019) (narration by Radio Bostrom)

Notes:
This article is an adaptation of Bostrom's academic paper "The Vulnerable World Hypothesis" (2019). The article was first published in Aeon Magazine. The narration was provided by Curio. We are grateful to Aeon and Curio for granting us permission to re-use the audio. Curio are offering Radio Bostrom listeners a 25% discount on their annual subscription.
This series presents an introductory selection of Bostrom's work. We have narrated many more of Nick Bostrom's papers. You can find them at radiobostrom.com.

This episode includes:
Part 1. An abridged version of Nick Bostrom's biography.
Part 2. An abridged version of a text in which he summarises the motivation and scope of his work.
Part 3. Bostrom's answer to the 2008 Edge.org question: What have you changed your mind about?

Further reading:
- New Yorker profile of Nick Bostrom
- Aeon Magazine profile of Nick Bostrom

---

Outline:
(01:43) Part 1. Biography
(03:14) Part 2. Bostrom in his own words
(07:05) Part 3. What have you changed your mind about?
(11:34) Outro