80,000 Hours - Narrations
Author: 80000 Hours
© 2026 All rights reserved
Description
Narrations of articles from 80000hours.org.
Expect evidence-based career advice and research on the world’s most pressing problems. For interviews and discussions, subscribe to The 80,000 Hours Podcast.
Some articles are narrated by the authors, while others are read by AI.
242 Episodes
What if the next decade doesn't gradually ease us into an AI future, but catapults us through a century's worth of change in less than ten years? This article explores three interconnected 'explosions' that could unfold once AI becomes capable enough to improve itself, revealing how feedback loops — where progress in one area accelerates progress everywhere else — could transform the pace of civilisation faster than most people realise.

Narrated by AI.

Outline:
(02:10) 1. The intelligence explosion
(02:14) Algorithmic feedback loops
(06:26) Hardware feedback loops
(09:19) Where could this end up?
(11:07) 2. The technological explosion
(13:33) 3. The industrial explosion
(13:38) Robotic worker feedback loops
(17:36) A few common counterarguments
(20:08) Two views of the future of advanced AI

The original text contained 28 footnotes which were omitted from this narration.

---
First published:
February 4th, 2026
Source:
https://80000hours.org/articles/how-ai-driven-feedback-loops-could-make-things-very-crazy-very-fast
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are worth over $1 trillion, and almost six billion people live in countries without free and fair elections. This is a problem in its own right. There is still substantial distribution of power, though: global income inequality is falling, over two billion people live in electoral democracies, no country accounts for more than a quarter of global GDP, and no company earns as much as 1% of it.

But in the future, advanced AI could enable much more extreme power concentration than we've seen so far. Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leading to much less economic and political power for the vast majority of people; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they're built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

Narrated by a human.

Outline:
(00:00) Introduction
(02:15) Summary
(07:02) Section 1: Why might AI-enabled power concentration be a pressing problem?
(45:02) Section 2: What are the top arguments against working on this problem?
(56:36) Section 3: What can you do to help?

---
First published:
April 24th, 2025
Source:
https://80000hours.org/problem-profiles/extreme-power-concentration
---
The US government may be the single most important actor for shaping how AI develops. If you want to improve the trajectory of AI and reduce catastrophic risks, you could have an outsized impact by working on US policy. But the US policy ecosystem is huge and confusing. And policy shaping AI is made by specific people in specific places — so where you work matters enormously.

Narrated by AI.

Outline:
(01:09) Part 1: How to find the most impactful places to work on AI policy
(01:31) Prioritise building career capital
(03:37) Work backwards from the most important issues
(05:32) Find levers of influence
(07:07) Prepare for 'policy windows'
(10:21) Consider personal fit
(11:44) Part 2: Our best guess at the most impactful places (right now)
(11:58) 1. Executive Office of the President
(18:43) 2. Federal departments and agencies
(24:25) 3. Congress
(32:28) 4. State governments
(37:47) 5. Think tanks and advocacy organisations
(41:04) Conclusion
(41:49) Want one-on-one advice on pursuing this path?
(42:10) Learn more about how and why to pursue a career in US AI policy
(42:16) Top recommendations
(43:11) Further reading

The original text contained 45 footnotes which were omitted from this narration.

---
First published:
November 17th, 2025
Source:
https://80000hours.org/articles/the-us-ai-policy-landscape-where-to-have-the-biggest-impact
---
Narrated by TYPE III AUDIO.
The arrival of AGI could "compress a century of progress in a decade", forcing humanity to make decisions with higher stakes than we've ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead. We'd be excited to see some more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful. That said, this is not a mature area. There's significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks. So our guess is that, at this stage, it'd be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. If the field proves promising, they could pave the way for more people to get involved later.

Narrated by the author.

Outline:
(00:00) Introduction
(00:17) Summary
(01:48) Section 1: Why advancing AI decision-making tools might matter a lot
(04:55) AI tools could help us make much better decisions
(10:00) We might be able to differentially speed up the rollout of AI decision-making tools
(12:14) Section 2: What are the arguments against working to advance AI decision-making tools?
(25:15) Section 3: How to work in this area
(28:46) Want one-on-one advice?
(29:43) Thank you for listening

---
Source:
https://80000hours.org/problem-profiles/ai-enhanced-decision-making
This path involves founding new organisations to tackle bottlenecks in the problems we think are most pressing. In particular, it involves identifying an idea, testing it, and then building an organisation by investing in strategy, hiring, management, culture, and so on, with the aim that the organisation can continue to function well without you in the long term. Our focus is on nonprofit models since they have the greatest need right now, but for-profits can also be a route to impact.

Narrated by AI.

Outline:
(01:33) Recommended
(01:42) Review status
(01:48) Why might founding a new project be high impact?
(04:35) What does it take to succeed?
(05:05) A good enough idea
(06:11) You need to be able to convince donors
(09:21) An idea that really motivates you
(10:45) Leadership potential
(11:20) Generalist skills
(12:07) Enough knowledge of the area
(12:41) Good judgement
(13:02) The ability, willingness, and resilience to work on something that might not work out
(13:48) Examples of people pursuing this path
(13:54) Helen Toner
(14:31) Holden Karnofsky
(15:05) Next steps if you already have an idea
(19:14) Next steps if you don't have an idea yet
(22:49) Lists of ideas
(24:16) Podcast episodes with founders
(24:52) Want one-on-one advice on pursuing this path?
(25:11) Learn about other high-impact careers

The original text contained 2 footnotes which were omitted from this narration.

---
First published:
November 10th, 2021
Last updated:
August 11th, 2025
Source:
https://80000hours.org/career-reviews/founder-impactful-organisations
---
Narrated by TYPE III AUDIO.
The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about the dangers of AI.

Narrated by AI.

Outline:
(01:42) Summary
(02:34) Our overall view
(05:16) Why are risks from power-seeking AI a pressing world problem?
(06:45) 1. Humans will likely build advanced AI systems with long-term goals
(11:23) 2. AIs with long-term goals may be inclined to seek power and aim to disempower humanity
(12:26) We don't know how to reliably control the behaviour of AI systems
(15:56) There's good reason to think that AIs may seek power to pursue their own goals
(19:40) Advanced AI systems seeking power might be motivated to disempower humanity
(22:43) 3. These power-seeking AI systems could successfully disempower humanity and cause an existential catastrophe
(23:16) The path to disempowerment
(27:56) Why this would be an existential catastrophe
(29:35) How likely is an existential catastrophe from power-seeking AI?
(32:17) 4. People might create power-seeking AI systems without enough safeguards, despite the risks
(32:59) People may think AI systems are safe, when they in fact are not
(36:13) People may dismiss the risks or feel incentivised to downplay them
(38:59) 5. Work on this problem is neglected and tractable
(40:30) Technical safety approaches
(46:19) Governance and policy approaches
(48:32) What are the arguments against working on this problem?
(48:51) Maybe advanced AI systems won't pursue their own goals; they'll just be tools controlled by humans.
(51:07) Even if AI systems develop their own goals, they might not seek power to achieve them.
(54:49) If this argument is right, why aren't all capable humans dangerously power-seeking?
(57:11) Maybe we won't build AIs that are smarter than humans, so we don't have to worry about them taking over.
(58:32) We might solve these problems by default anyway when trying to make AI systems useful.
(01:00:54) Powerful AI systems of the future will be so different that work today isn't useful.
(01:02:56) The problem might be extremely difficult to solve.
(01:03:38) Couldn't we just unplug an AI that's pursuing dangerous goals?
(01:05:30) Couldn't we just sandbox any potentially dangerous AI until we know it's safe?
(01:07:15) A truly intelligent system would know not to do harmful things.
(01:08:33) How you can help
(01:10:47) Want one-on-one advice on pursuing this path?
(01:11:22) Learn more
(01:14:08) Acknowledgements
(01:14:25) Notes and references

The original text contained 84 footnotes which were omitted from this narration.

---
First published:
July 17th, 2025
Source:
https://80000hours.org/problem-profiles/risks-from-power-seeking-ai
---
Narrated by TYPE III AUDIO.
Being an early employee at a startup is similar to being a startup founder, except (i) the impact and financial return are usually lower, (ii) the risk is lower, and (iii) the personal demands are lower. It's a promising path if you'd like to found a startup but don't have a good idea and co-founder, or want a less demanding option.

Narrated by AI.

Outline:
(00:13) Summary
(00:56) Key facts on fit
(01:03) Next steps
(01:38) Sometimes recommended -- highly competitive
(01:49) Review status
(01:54) What is this path?
(02:57) Why do this job?
(05:47) What are some points against this path?
(08:12) How can I find a good startup job?
(09:44) How can you find a startup that's focused on doing good?
(10:18) What are the key differences compared to founding a startup?
(11:54) Who should join a startup rather than found?
(12:35) Want one-on-one advice on pursuing this path?
(12:56) Further reading

---
First published:
July 22nd, 2015
Last updated:
June 27th, 2025
Source:
https://80000hours.org/career-reviews/startup-early-employee
---
Narrated by TYPE III AUDIO.
On July 16, 1945, humanity had a disturbing first: scientists tested a technology — nuclear weapons — that could cause the destruction of civilisation. Since the attacks on Hiroshima and Nagasaki, humanity hasn't launched any more nuclear strikes. In part, this is because our institutions developed international norms that, while imperfect, managed to prevent more nuclear attacks.

We expect advanced AI will speed up technological advances, with some expecting a century of scientific progress in a decade. Faster scientific progress could have enormous benefits, from cures for deadly diseases to space exploration. Yet this breakneck pace could lower the barriers to creating devastating new weapons while outpacing our ability to build the safeguards needed to control them. Without proper controls, a country, group, or individual could use AI-created weapons of mass destruction to cause a global catastrophe.

Advanced AI systems may dramatically accelerate scientific progress, potentially compressing decades of research into just a few years. This rapid advancement could enable the development of devastating new weapons of mass destruction — including enhanced bioweapons and entirely new categories of dangerous technologies — faster than we can build adequate safeguards. Without proper controls, state and non-state actors could use AI-developed [...]

Narrated by AI.

Outline:
(01:09) Summary
(01:59) Advanced AI could accelerate scientific progress
(04:56) What kinds of weapons could advanced AI create?
(05:01) Bioweapons
(07:27) Cyberweapons
(08:22) New dangerous technologies
(09:56) These weapons would pose global catastrophic risks
(12:33) There are promising approaches to reducing these risks
(12:53) Governance and policy approaches
(14:36) Technical approaches to reduce misuse risks

The original text contained 5 footnotes which were omitted from this narration.

---
First published:
June 24th, 2025
Source:
https://80000hours.org/problem-profiles/catastrophic-ai-misuse
---
Narrated by TYPE III AUDIO.
The recording may not reflect the most recent changes to this article.

Outline:
(00:32) What are these lists based on?
(03:11) Why did you make this page?
(04:27) Why don't you list more familiar global issues?
(06:08) Isn't it inappropriate to rank different issues?
(07:25) Should I just take your word for it that these are the most pressing problems in the world?
(08:16) How do you think your list is most likely to be wrong?
(10:18) Why do you have a list if you don't think it's right?
(11:45) Do you think everyone should work on your top list of world problems?
(14:10) I want to help tackle one of these global issues. What should I do?
(16:11) I'm not motivated to work on any of these issues. What should I do?
(17:25) My values are pretty different — how can I figure out my own list of world problems?
(19:46) How do I figure out which world problem is the best fit for me to work on?

---
First published:
August 15th, 2018
Source:
https://80000hours.org/problem-profiles
About half of people are worried they'll lose their job to AI. And they're right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and make accurate medical diagnoses. And over the next five years, it's set to continue to improve rapidly. Eventually, mass automation and falling wages are a real possibility.

Narrated by the author.

Outline:
(00:00) Introduction
(04:17) 1: What people misunderstand about automation
(08:56) 1.1: What would 'full automation' mean for wages?
(11:19) 2: Four types of skills most likely to increase in value
(12:42) 2.1: Skills AI won't easily be able to perform
(21:41) 2.2: Skills that are needed for AI deployment
(24:56) 2.3: Skills where we could use far more of what they produce
(26:25) 2.4: Skills that are difficult for others to learn
(28:05) 3.1: Skills using AI to solve real problems
(29:22) 3.2: Personal effectiveness
(31:59) 3.3: Leadership skills
(36:25) 3.4: Communications and taste
(37:23) 3.5: Getting things done in government
(38:24) 3.6: Complex physical skills
(38:57) 4: Skills with a more uncertain future
(39:18) 4.1: Routine knowledge work: writing, admin, analysis, advice
(43:22) 4.2: Coding, maths, data science, and applied STEM
(45:31) 4.3: Visual creation
(46:05) 4.4: More predictable manual jobs
(46:46) 5: Some closing thoughts on career strategy
(46:54) 5.1: Look for ways to leapfrog entry-level white collar jobs
(48:44) 5.2: Be cautious about starting long training periods, like PhDs and medicine
(49:52) 5.3: Make yourself more resilient to change
(50:16) 5.4: Ride the wave
(50:37) Take action
(50:58) Thank you for listening

---
First published:
June 15th, 2025
Source:
https://80000hours.org/ai/guide/skills-ai-makes-valuable
Advanced AI technology may enable its creators, or others who control it, to attempt and achieve unprecedented societal power grabs. Under certain circumstances, they could use these systems to take control of whole economies, militaries, and governments. This kind of power grab by a single person or small group would pose a major threat to the rest of humanity.

Narrated by AI.

Outline:
(00:10) Summary
(00:46) Why is this a pressing problem?
(03:56) What can be done to mitigate these risks?
(05:44) Learn more

The original text contained 2 footnotes which were omitted from this narration.

---
First published:
April 24th, 2025
Source:
https://80000hours.org/problem-profiles/ai-enabled-power-grabs
---
Narrated by TYPE III AUDIO.
New tax data lets us accurately estimate which 11 jobs and industries are the highest paying on average, and for top performers.

Narrated by AI.

Outline:
(02:14) List of the highest paying jobs according to tax data
(03:51) Which jobs are included in the table?
(05:52) How do the highest-earning people make their money?
(07:07) Which high-paying jobs are missing? How to make it into the top 1% as a blue collar worker.
(09:07) What about narrower categories of job?
(11:28) What about other countries?
(12:03) What about capital gains?
(15:30) What about lifetime income?
(18:00) So which career paths are highest paid overall?
(19:57) In which job should you earn to give?
(22:28) You might also be interested in

The original text contained 10 footnotes which were omitted from this narration.

---
First published:
May 10th, 2017
Last updated:
April 13th, 2025
Source:
https://80000hours.org/articles/highest-paying-jobs
---
Narrated by TYPE III AUDIO.
The proliferation of advanced AI systems may lead to the gradual disempowerment of humanity, even if efforts to prevent them from becoming power-seeking or scheming are successful. Humanity may be incentivised to hand over increasing amounts of control to AIs, giving them power over the economy, politics, culture, and more. Over time, humanity's interests may be sidelined and our control over the future undermined, potentially constituting an existential catastrophe. There's disagreement over how serious a problem this is and how it relates to other concerns about AI alignment [...]

Narrated by AI.

Outline:
(00:09) Summary
(01:10) Why might gradual disempowerment be an especially pressing problem?
(06:48) How pressing is this issue?
(07:21) What are the arguments against this being a pressing problem?
(08:14) What can you do to help?
(10:17) Key organisations in this space
(10:50) Learn more
(11:58) Explore other pressing world problems

The original text contained 1 footnote which was omitted from this narration.

---
First published:
April 4th, 2025
Source:
https://80000hours.org/problem-profiles/gradual-disempowerment
---
Narrated by TYPE III AUDIO.
I'm writing a new guide to careers to help artificial general intelligence (AGI) go well. Here's a summary of the bottom lines that'll be in the guide as it stands. Stay tuned to hear our full reasoning and updates as our views evolve. In short: the chance of an AGI-driven technological explosion before 2030 — creating one of the most pivotal periods in history — is high enough to act on.

Narrated by AI.

Outline:
(01:58) Get the full guide in your inbox as it's released
(02:11) Why AGI could be here by 2030
(04:28) AGI could lead to 100 years of technological progress in under 10
(07:49) What might happen next?
(11:11) What needs to be done?
(13:25) What can you do to help?
(13:28) There are hundreds of jobs
(14:49) Mid-career advice
(16:26) Early-career advice
(17:59) Should you work on this issue?
(20:10) How should you plan your career given AGI might arrive soon?
(20:16) Given the urgency, should you drop everything to try to work on AI right away?
(22:41) If you're still uncertain about what to do
(23:38) Next steps
(24:07) Get notified when we publish new articles in this series

---
First published:
March 14th, 2025
Source:
https://80000hours.org/ai/guide/summary
---
Narrated by TYPE III AUDIO.
In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:

OpenAI's Sam Altman: Shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI"
Anthropic's Dario Amodei: Stated in January "I'm more confident than I've ever been that we're close to powerful capabilities… in the next 2-3 years"
Google DeepMind's Demis Hassabis: Changed from "as soon as 10 years" in autumn to "probably three to five years away" by January.

What explains the shift? Is it just hype? Or could we really have Artificial General Intelligence (AGI) soon?

There's no single point at which a system becomes 'AGI,' and the term gets used in many different ways. More fundamentally, we can classify AI systems based on the (i) strength and (ii) breadth of their capabilities. 'Narrow' AI demonstrates strong performance at a small range of tasks (e.g. chess-playing AI). Most technologies have very narrow applications. 'General' AI is supposed to have strong capabilities in a wide range of domains, in the same way that humans can learn to do a wide range of jobs. But there's no single point at which narrow becomes general – it's just a spectrum.

Narrated by the author.

Outline:
(00:00) Introduction
(00:33) The case for AGI by 2030
(04:04) The article in a nutshell
(05:46) Section One: what's driven recent AI progress?
(05:52) How we got here: the deep learning era
(07:45) Where are we now: the four key drivers
(08:57) 1: Scaling pretraining
(12:14) Algorithmic efficiency
(14:22) How much further can pretraining scale?
(16:15) 2: Training the models to reason
(22:06) How far can scaling reasoning continue?
(25:01) 3. Increasing how long models think
(28:00) 4: Building better agents
(33:38) How far can agent improvements continue?
(35:57) Section Two: how good will AI become by 2030?
(37:40) Trend extrapolation of AI capabilities
(39:56) What jobs would these systems help with?
(40:48) Software engineering
(42:10) Scientific research
(43:19) AI research
(44:28) What's the case against this?
(49:16) Additional resources on the skeptical view
(49:48) When do the 'experts' expect AGI?
(51:04) Section Three: why the next 5 years are crucial
(52:08) Bottlenecks around 2030
(56:00) Two potential futures for AI
(58:02) Conclusion
(59:24) Thanks for listening

---
First published:
March 20th, 2025
Source:
https://80000hours.org/ai/guide/when-will-agi-arrive
At some point in the 21st century, an unwinnable war may be fought. A modern great power war could see nuclear weapons, bioweapons, autonomous weapons, and other destructive new technologies deployed on an unprecedented scale. It would probably be the most destructive event in history, shattering our world. It could even threaten us with extinction. We've come perilously close to just this kind of catastrophe before.

Narrated by the author.

---
First published:
June 19th, 2023
Source:
https://80000hours.org/problem-profiles/great-power-conflict
The recording may not reflect the most recent changes to this article.

Some of the deadliest events in history have been pandemics. For comparisons, see Luke Muehlhauser's survey of the deadliest events in history, also cited in our full profile on reducing global catastrophic biological risks. Note three of the top 10 are pandemics.

Global catastrophic biological risks (GCBRs) are risks of severe pandemics that are serious enough to threaten the future of humanity. For reasons we discuss below, we think the chances of such a biological catastrophe are uncomfortably high. There are also a number of practical options for reducing these risks. So we think working to reduce GCBRs is a promising way to safeguard the future of humanity right now.

Note: This page gives a broad overview of the problem area and links to many resources on the topic. For an in-depth review of the issue and additional details about work in this area, see our full report on global catastrophic biological risks. (That report was published in March 2020 and largely written prior to the COVID-19 pandemic — but we think its conclusions still stand today.)

---
First published:
April 22nd, 2020
Source:
https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics
The Khmer Rouge ruled Cambodia for just four years, yet in that time they murdered about one-quarter of Cambodia's population. Even short-lived totalitarian regimes can inflict enormous harm.

Narrated by AI.

Outline:
(03:48) Summary
(04:18) Our overall view
(07:12) Why might the risk of stable totalitarianism be an especially pressing problem?
(07:36) Could totalitarianism be an existential risk?
(08:45) Is any of this remotely plausible?
(12:03) Will totalitarian regimes arise in future?
(13:06) Could a totalitarian regime dominate the world?
(13:29) Domination by force
(16:38) Controlling a powerful government
(19:24) Could a totalitarian regime last forever?
(23:11) The chance of stable totalitarianism
(27:50) Preventing long-term totalitarianism in particular seems pretty neglected
(30:13) Why might you choose not to work on this problem?
(31:06) What can you do to help?
(31:33) AI governance
(33:10) Researching risks of global coordination
(34:30) Working on defensive technologies
(36:39) Protecting democratic institutions
(38:15) Learn more about risks of stable totalitarianism
(39:27) Explore other pressing world problems

The original text contained 13 footnotes which were omitted from this narration.

---
First published:
June 19th, 2024
Last updated:
October 29th, 2024
Source:
https://80000hours.org/problem-profiles/risks-of-stable-totalitarianism
---
Narrated by TYPE III AUDIO.
This is Part Four of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Three: Infohazards. One of the most prominently discussed catastrophic risks from AI is the potential for an AI-enabled bioweapon. But discussions of future technologies are necessarily speculative.

Narrated by AI.

Outline:
(01:23) Who we spoke to and why they're anonymous
(03:06) Expert 1: A unique policy window to mitigate harms
(05:44) Expert 2: Restricting digital to physical capabilities
(06:17) Expert 3: A plausible emerging threat
(07:27) Expert 4: Reducing 'meta' knowledge barriers
(09:04) Expert 5: Large language models are not a threat, but other AI tools may be
(11:04) Expert 6: Using AI tools to help detect threats
(12:39) Expert 7: Enabling both lone and state actors
(13:50) Expert 8: Accelerating risk and mitigation approaches
(14:49) Expert 9: New technologies and overhype
(16:03) Expert 10: Collapsing timelines and missing datasets
(16:47) Expert 11: Early days and space to contribute
(18:29) Expert 12: Renewed urgency
(19:01) Learn more

---
First published:
October 11th, 2024
Source:
https://80000hours.org/articles/anonymous-answers-could-advances-in-ai-supercharge-biorisk
---
Narrated by TYPE III AUDIO.
This is Part Three of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions and Part Two: Fighting pandemics. In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.

Narrated by AI.

Outline:
(01:17) Who we spoke to and why they're anonymous
(03:06) Expert 1: Strike the right balance
(04:22) Expert 2: Help researchers use information responsibly
(05:45) Expert 3: Limit secrecy to improve problem solving
(07:43) Expert 4: Keep the goal in mind when sharing information
(08:46) Expert 5: Discuss threats more openly to reduce risks
(11:49) Expert 6: Communicate based on the audience
(12:31) Expert 7: Consider the effects of withholding information
(14:44) Expert 8: Handle information sensitively to support diplomacy
(16:01) Expert 9: Share information more widely (but responsibly)
(18:55) Expert 10: Consider the purpose of sharing and manage credibility
(20:15) Expert 11: Disclose information publicly only when absolutely necessary
(21:49) Learn more

---
First published:
September 26th, 2024
Source:
https://80000hours.org/articles/anonymous-answers-how-can-we-manage-infohazards-in-biosecurity
---
Narrated by TYPE III AUDIO.



