Future Matters Reader

Future Matters Reader releases audio versions of most of the writings summarized in the Future Matters newsletter.

Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck

Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky. https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding Note: Footnotes in the original article have been omitted.

03-20
19:26

Larks — A Windfall Clause for CEO could worsen AI race dynamics

In this post, Larks argues that the proposal to make AI firms promise to donate a large fraction of profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast, aggravating race dynamics and in turn increasing existential risk. https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics

03-20
14:35

Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public

This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Georgiadis measures changes in participants' awareness of AGI risks after they consume various media interventions. Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk Original paper: https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf Note: Some tables in the summary have been omitted in this audio version.

03-20
07:15

Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role

Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than arguments that stress the overwhelming importance of the future. https://philpapers.org/archive/SHUHMS.pdf Note: Tables, notes and references in the original article have been omitted.

03-20
57:30

Elika Somani — Advice on communicating in and around the biosecurity policy community

"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy." https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy Note: Some footnotes in the original article have been omitted.

03-14
13:06

Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill

The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.

03-14
07:10

Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill

The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.

03-14
05:32

Hayden Wilkinson — Global priorities research: Why, how, and what have we learned?

The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.) https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/

03-13
44:42

Kelsey Piper — What should be kept off-limits in a virology lab?

New rules around gain-of-function research make progress in striking a balance between reward and catastrophic risk. https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident

03-13
07:49

Ezra Klein — This changes everything

"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough." https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html

03-13
10:42

Victoria Krakovna — Near-term motivation for AGI alignment

Victoria Krakovna makes the point that you don't have to be a longtermist to care about AI alignment.

03-11
04:39

Anthropic — Core views on AI safety: when, why, what, and how

Anthropic shares a summary of their views about AI progress and its associated risks, as well as their approach to AI safety. https://www.anthropic.com/index/core-views-on-ai-safety Note: Some footnotes in the original article have been omitted.

03-11
38:11

Noah Smith — LLMs are not going to destroy the human race

Noah Smith argues that, although AGI might eventually kill humanity, large language models are not AGI, may not even be a step toward AGI, and could not plausibly cause human extinction. https://noahpinion.substack.com/p/llms-are-not-going-to-destroy-the

03-08
16:46

Andy Greenberg — A privacy hero's final wish: an institute to redirect AI's future

Peter Eckersley did groundbreaking work to encrypt the web. After his sudden death, a new organization he founded is carrying out his vision to steer artificial intelligence toward “human flourishing.” https://www.wired.com/story/peter-eckersley-ai-objectives-institute/

03-08
12:58

Noy & Zhang — Experimental evidence on the productivity effects of generative artificial intelligence

A working paper by Shakked Noy and Whitney Zhang examines the effects of ChatGPT on productivity and labor markets. https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf Note: Some tables and footnotes in the original article have been omitted.

03-04
24:58

Robin Hanson — AI risk, again

Robin Hanson restates his views on AI risk. https://www.overcomingbias.com/p/ai-risk-again

03-04
08:29

Williams & Kane — Preventing the Misuse of DNA Synthesis

In an Institute for Progress report, Bridget Williams and Rowan Kane make five policy recommendations to mitigate risks of catastrophic pandemics from synthetic biology. https://progress.institute/preventing-the-misuse-of-dna-synthesis/

03-04
41:09

Kevin Collier — What is consciousness? ChatGPT and advanced AI might redefine our answer

Kevin Collier on how ChatGPT and advanced AI might redefine our understanding of consciousness. https://www.nbcnews.com/tech/tech-news/chatgpt-ai-consciousness-rcna71777

03-02
07:28

Landgrebe, Barnes & Hobbhahn — Reflection Mechanisms as an Alignment Target: Attitudes on “near-term” AI

Eric Landgrebe, Beth Barnes and Marius Hobbhahn discuss a survey asking 1,000 participants what values should be put into powerful AIs. https://www.lesswrong.com/posts/4iAkmnhhqNZe8JzrS/reflection-mechanisms-as-an-alignment-target-attitudes-on Note: Some tables in the original article have been omitted.

03-02
14:28

Risto Uuk — The EU AI Act Newsletter #24

Risto Uuk published the EU AI Act Newsletter #24. https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-24

03-02
06:14
