EA Forum Podcast (Curated & popular)
Author: EA Forum Team
© 2025 All rights reserved
Description
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
501 Episodes
Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes:

- Candidates generally have a terrible time. Work tests are often unpleasant (and the best candidates have to complete many of them), there are hundreds or thousands of candidates for each role, and generally, people can't get the jobs they've been told are the best path to impact.
- Organizations are often somewhat to moderately unhappy with their candidate pools. Organizations really struggle to find the talent they want, despite the number of candidates who apply.
- Organizations can't find or retain the recruiting talent they want. It's extremely hard to find people to do recruitment in this space. Talented recruiters rarely want to stay in their roles.

I think the first two points need more discussion, but I haven't seen much discussion about the last. I think this is a major issue: recruitment is probably the most important function for a growing organization, and a skilled recruiter has a fairly large counterfactual impact for the organization they support. So why is it [...]

Outline:
(01:33) Recruitment is high leverage and high impact
(03:33) Organizations struggle to hire recruiters
(07:52) Many of the people applying to recruitment roles emphasize their experience in recruitment. This isn't the background organizations need
(08:44) Almost no one is appropriately obsessed with hiring
(10:29) The state of evidence on hiring practices is bad
(13:22) Retaining strong recruiters is really hard
(14:51) Why might this be less important than I think?
(16:40) I'm trying to find people interested in this kind of approach to hiring. If this is you, please reach out.

---
First published:
November 3rd, 2025
Source:
https://forum.effectivealtruism.org/posts/HLktkw5LXeqSLCchH/recruitment-is-extremely-important-and-impactful-some-people
---
Narrated by TYPE III AUDIO.
We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food—that is more than all the humans who have ever walked on the face of the Earth.[1] When faced with such a magnitude of suffering, it can feel overwhelming and hard to know how to help. One of the most impactful things you can do to help animals is to donate to effective animal charities—even a small donation can have a big impact. Our goal is to help you do the most good for animals by providing you with effective giving opportunities that greatly reduce their suffering. Following our comprehensive charity evaluations, we are pleased to announce our Recommended Charities!

Charities awarded the status in 2025:
- Animal Welfare Observatory
- Shrimp Welfare Project
- Sociedade Vegetariana Brasileira
- The Humane League
- Wild Animal Initiative

Charities retaining the status from 2024:
- Aquatic Life Institute
- Çiftlik Hayvanlarını Koruma Derneği
- Dansk Vegetarisk Forening
- Good Food Fund
- Sinergia Animal

The Humane League (working globally), Shrimp Welfare Project (in Central and South America, Southeast Asia, and India), and Wild Animal Initiative (global) have continued to work on the most important issues for animals [...]

Outline:
(03:54) Charities Recommended in 2025
(03:59) Animal Welfare Observatory
(05:44) Shrimp Welfare Project
(07:38) Sociedade Vegetariana Brasileira
(09:41) The Humane League
(11:22) Wild Animal Initiative
(13:15) Charities Recommended in 2024
(13:20) Aquatic Life Institute
(15:25) Çiftlik Hayvanlarını Koruma Derneği
(17:34) Dansk Vegetarisk Forening
(19:18) The Good Food Fund
(21:19) Sinergia Animal
(23:20) Support our Recommended Charities

The original text contained 2 footnotes which were omitted from this narration.

---
First published:
November 4th, 2025
Source:
https://forum.effectivealtruism.org/posts/waL3iwczrjNt8PreZ/announcing-ace-s-2025-charity-recommendations
---
Narrated by TYPE III AUDIO.
(Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I'll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I'm speaking only for myself and not for Open Phil or Anthropic.)

On my time at Open Philanthropy

I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new "Worldview Investigations" team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...]

Outline:
(00:51) On my time at Open Philanthropy
(08:11) On going to Anthropic

---
First published:
November 3rd, 2025
Source:
https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic
---
Narrated by TYPE III AUDIO.
This is the latest in a series of essays on AI Scaling. You can find the others on my site.

Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains come from allowing LLMs to productively use longer chains of thought, letting them think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you might have thought, and more of it will come from inference scaling (which has different effects on the world). That lengthens timelines and affects strategies for AI governance and safety.

The current era of improving AI capabilities using reinforcement learning (from verifiable rewards) involves two key types of scaling:

1. Scaling the amount of compute used for RL during training
2. Scaling [...]

Outline:
(09:12) How do these compare to pre-training scaling?
(13:42) Conclusion

---
First published:
October 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/TysuCdgwDnQjH3LyY/how-well-does-rl-scale
---
Narrated by TYPE III AUDIO.
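The essay's core claim (that RL training mostly buys longer usable chains of thought, with only a small gain at a fixed answer length) can be illustrated with a toy model. A minimal sketch in Python; every functional form and constant below is invented for illustration, not taken from the essay:

```python
import math

# Toy model (all constants hypothetical). Assumption: RL training compute
# mainly raises the chain-of-thought length the model can use productively,
# with only a small direct improvement at a fixed answer length.

def usable_cot_tokens(train_compute):
    # Each ~10,000x of training compute lets the model productively
    # think for ~10x more tokens (an assumed, illustrative relationship).
    return 10 ** (math.log10(train_compute) / 4)

def accuracy(train_compute, cot_tokens):
    fixed_length_gain = 0.01 * math.log10(train_compute)  # small direct effect
    thinking_gain = 0.08 * math.log10(1 + cot_tokens)     # inference scaling
    return min(1.0, 0.2 + fixed_length_gain + thinking_gain)

for c in (1e20, 1e22, 1e24):
    cot = usable_cot_tokens(c)
    print(f"compute={c:.0e}  usable CoT≈{cot:,.0f} tokens  "
          f"acc≈{accuracy(c, cot):.2f}  acc@100 tokens≈{accuracy(c, 100):.2f}")
```

Under these invented constants, a 10,000x increase in RL training compute adds roughly 12 points of accuracy, about two-thirds of which comes from the longer chains of thought it unlocks: the qualitative pattern the essay describes.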
TL;DR: I took the 🔸10% Pledge in 2016 and haven't kept to it consistently. I've decided not to pay the backlog donations, and instead to recommit fresh from today, with simple systems to keep me on track. Sharing this for transparency and in the hope it may be helpful to others.

Why I'm posting

In 2016, as a university student, I took the Giving What We Can 10% Pledge. I made my pledge publicly, and my social media profiles show the 🔸10% Pledge badge. For integrity's sake, I want to be equally public that I fell short—and [...]

Outline:
(00:29) Why I'm posting
(00:52) What happened
(01:53) Going forward

---
First published:
October 28th, 2025
Source:
https://forum.effectivealtruism.org/posts/3vcpERphsumgEzqeB/recommitting-to-giving-a-personal-update
---
Narrated by TYPE III AUDIO.
Many thanks to @Felix_Werdermann 🔸, @Engin Arıkan, and @Ana Barreiro for your feedback and comments on this, and for the encouragement from many people to finally write this up into an EA Forum post.

For years, much of the career advice in the Effective Altruism community has implicitly (or explicitly) suggested that impact = working at an EA nonprofit. That narrative made sense when the community and its talent pool were smaller. But as EA grows, it's worth reassessing whether we're overconcentrating on nonprofit careers, a trend that may be limiting our community's impact and leaving higher-leverage opportunities on the table.

Why Now?

As the EA movement has grown, it has attracted far more talent than the nonprofit sector can realistically absorb. This creates an urgent need to develop alternative pathways for talented, mission-aligned people. Under the current status quo, many end up feeling frustrated after going through multiple [...]

Outline:
(00:51) Why Now?
(02:06) Important Caveats
(03:20) The argument for roles outside of non-profits
(03:25) Institutions Dwarf Nonprofit Capacity
(05:19) Salaries Are Covered Outside the Movement
(06:12) Counterfactual Impact Is Often Greater
(06:57) A Healthier Distribution of Talent
(07:54) Why This Might Be Wrong Advice
(08:24) The Challenges of External Roles
(10:13) Why These Risks Still Seem Worth Taking
(10:57) Why Steering Everyone Toward Nonprofits Might Hurt the EA Community
(11:03) Nonprofit Roles Are Saturated
(11:27) Nonprofits Have Low Absorbency
(12:24) Too Many Advising Channels, One Bottlenecked Funnel
(12:49) Nonprofits are not a good fit for everyone, and many people may be a much better fit for roles in other sectors
(13:31) Important Final Caveats

---
First published:
October 16th, 2025
Source:
https://forum.effectivealtruism.org/posts/FAmCmCavZ5vTzbRcM/why-many-eas-may-have-more-impact-outside-of-nonprofits-in
---
Narrated by TYPE III AUDIO.
Summary

As part of our ongoing work to study how to best frame EA, we experimentally tested different phrases and sentences that CEA were considering using on effectivealtruism.org.

Doing Good Better taglines

We observed a consistent pattern where taglines that included the phrase 'do[ing] good better' received less support from respondents and inspired less interest in learning about EA. We replicated these results in a second experiment, where we confirmed that taglines referring to "do[ing] good better" performed less well than those referring to "do[ing] the most good".

Nouns and sentences

Nouns: The effect of using different nouns to refer to EA was small, but referring to EA as a 'philosophy' or 'movement' inspired the most curiosity compared to options including 'project' and 'research field'.

Sentences: "Find the most effective ways to do good with your time, money, and career" and "Effective altruism asks the question of how we [...]

Outline:
(00:12) Summary
(01:23) Method
(02:18) Taglines (Study 1)
(03:40) Doing Good Better replication (Study 2)
(05:23) Sentences (Study 1)
(06:45) Nouns (Study 1)
(07:41) Effectiveness focus
(07:55) Conclusion
(08:56) Acknowledgments

---
First published:
October 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/Y6zMpdwkkAQ8rF56w/framing-ea-doing-good-better-did-worse
---
Narrated by TYPE III AUDIO.
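For readers curious how one tagline "performing less well" than another is typically established, here is a minimal sketch of a two-proportion z-test. The counts below are invented for illustration and are not the study's data:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts of respondents who supported each tagline
# (invented numbers, not from the CEA study).
support_a, n_a = 410, 1000  # "do the most good"
support_b, n_b = 355, 1000  # "doing good better"

p_a, p_b = support_a / n_a, support_b / n_b
p_pool = (support_a + support_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"support {p_a:.1%} vs {p_b:.1%}  z={z:.2f}  p={p_value:.4f}")
```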
This is a link post. In Ugandan villages where non-governmental organisations (NGOs) hired away the existing government health worker, infant mortality went up. This happened in 39%[1] of villages that already had a government worker. The NGO arrived with funding and good intentions, but the likelihood that villagers received care from any health worker declined by ~23%.

Brain Misallocation

"Brain drain", the movement of people from poorer countries to wealthier ones, has been extensively discussed for decades.[2] But there's a different dynamic that gets far less attention: "brain misallocation". In many low- and middle-income countries (LMICs), the brightest talents are being incentivised towards organisations that don't utilise their potential for national development. They're learning how to get grants from multilateral alphabet organisations rather than build businesses or make good policy. This isn't about talent leaving the country. It's about talent being misdirected and mistrained within it.

Examples

Nick Laing [...]

Outline:
(00:36) Brain Misallocation
(01:16) Examples
(05:37) The Incentive Trap
(07:48) When Help Becomes Harm
(08:48) Conclusion

---
First published:
October 23rd, 2025
Source:
https://forum.effectivealtruism.org/posts/6rmdyddEateJFWb4L/the-charity-trap-brain-misallocation
Linkpost URL:https://gdea.substack.com/p/the-charity-trap-brain-misallocation
---
Narrated by TYPE III AUDIO.
This is a link post. Biological risks are more severe than has been widely appreciated. Recent discussions of mirror bacteria highlight an extreme scenario: a single organism that could infect and kill humans, plants, and animals, exhibits environmental persistence in soil or dust, and might be capable of spreading worldwide within several months. In the worst-case scenario, this could pose an existential risk to humanity, especially if the responses/countermeasures were inadequate.
Less severe pandemic pathogens could still cause hundreds of millions (or billions) of casualties if they were engineered to cause harm. Preventing such catastrophes should be a top priority for humanity. However, if prevention fails, it would also be prudent to have a backup plan.
One way of doing this would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc.), enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc.), analyze the [...]

Outline:
(04:20) PPE
(09:56) Biohardening
(14:36) Detection
(17:00) Expression of interest and acknowledgements

The original text contained 34 footnotes which were omitted from this narration.

---
First published:
October 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/33t5jPzxEcFXLCPjq/the-four-pillars-a-hypothesis-for-countering-catastrophic
Linkpost URL:https://defensesindepth.bio/the-four-pillars-a-hypothesis-for-countering-catastrophic-biological-risk/
---
Narrated by TYPE III AUDIO.
I’ve used the phrase “entertainment for EAs” a bunch to describe a failure mode that I’m trying to avoid with my career. Maybe it’d be useful for other people working in meta-EA, so I’m sharing it here as a quick draft amnesty post. There's a motivational issue in meta-work where it's easy to start treating the existing EA community as stakeholders. The real stakeholders in my work (and meta-work in general) are the ultimate beneficiaries — the minds (animal, human, digital?) that could benefit from work I help to initiate. But those beneficiaries aren’t present to me — they aren’t my friends, they don’t work in the same building as me. To keep your eyes on the real prize takes constant work. When that work slips, you could end up working on ‘entertainment for EAs’, i.e. something which gets great feedback from EAs, but only hazily, if [...] ---
First published:
October 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/AkSDhiPuvnRNbjXAf/entertainment-for-eas
---
Narrated by TYPE III AUDIO.
All quotes are from their blog post "Why we chose to invest another $100 million in cash transfers"; highlights are my own:

"Today, we're announcing a new $100 million USD commitment over the next four years to expand our partnership with GiveDirectly and help empower an additional 185,000 people living in extreme poverty. We're also funding new research, and pilot variants, to further understand how we can maximize the impact of each dollar."

This is on top of another $50 million USD they gave to GiveDirectly before:

"We started partnering with GiveDirectly in 2021. Since then, we've donated $50 million USD to support their work across Malawi, through direct cash transfers to those living in extreme poverty. We've already reached more than 85,000 people, helping to provide life changing resources and the dignity of choice."

For context, the Cash for Poverty Relief program by GiveDirectly [...]

Outline:
(01:24) About their founding-to-give model
(02:15) Other Engagement

---
First published:
October 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/ktFpWLkvRAAygbbtH/canva-to-donate-usd100m-over-4-years-to-givedirectly
---
Narrated by TYPE III AUDIO.
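A quick back-of-the-envelope division of the quoted figures gives a rough per-person scale (ignoring overheads, transfer timing, and the household-vs-individual distinction):

```python
# Figures as quoted in the post.
new_commitment, new_people = 100_000_000, 185_000  # new 4-year commitment
prior_giving, prior_people = 50_000_000, 85_000    # donated since 2021

print(f"new commitment: ~${new_commitment / new_people:,.0f} per person reached")
print(f"prior giving:   ~${prior_giving / prior_people:,.0f} per person reached")
```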
I have some claim to be an "old hand" EA:[1]

- I was in the room when the creation of Giving What We Can was announced (although I vacillated about joining for quite a while)
- I first went to EA Global in 2015
- I worked on a not-very-successful EA project for a while

But I have not really been much involved in the community since about 2020. The interesting thing about this is that my withdrawal from the community has nothing to do with disagreements, personal conflicts, or FTX. I still pretty much agree with most "orthodox EA" positions, and I think that both the idea of EA and the movement remain straightforwardly good and relevant. Hence why I describe the process as "senescence": intellectually and philosophically I am still on board and I still donate, I just… don't particularly want to participate beyond that.

Boredom

I won't sugar-coat [...]

Outline:
(01:00) Boredom
(04:05) What do I have to offer?

---
First published:
October 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/rJqQGD2z2DaupCbZE/my-ea-senescence
---
Narrated by TYPE III AUDIO.
TL;DR: EA is a community where time tracking is already very common, and yet most people I talk to don't track their time, because:

- It's too much work (when using Toggl, Clockify, ...)
- It's not accurate enough (when using RescueTime, Rize, ...)

I built https://donethat.ai, which solves both of these with AI, as part of AIM's Founding to Give program. It's live on Product Hunt today; please support it.

You should probably track your time

I'd argue that for most people, your time is your most valuable resource.[1] Even though your day has 24 hours, eight of those are already used up for sleep, and another eight probably go to social life, gym, food prep and eating, life admin, and commute, leaving at most eight hours to have impact. Oliver Burkeman argues in his recent book Meditations for Mortals that eight is still too high: most high-impact work gets done in four hours [...]

Outline:
(00:11) TLDR
(00:40) You should probably track your time
(02:21) It just got easier

---
First published:
October 14th, 2025
Source:
https://forum.effectivealtruism.org/posts/wt8gKaH9usKy3LQmK/you-should-probably-track-your-time-and-it-just-got-easier
---
Narrated by TYPE III AUDIO.
The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that this is the case (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now. Before looking at the forecasts, you might first ask yourself: What probability would I put on authoritarian capture? And at what probability of authoritarian capture would I think that more concern and effort is warranted?

Forecasts[1]

- The US won't be a democracy by 2030: 25% - Metaculus
- Will Trump 2.0 be the end of Democracy as we know it?: 48% - Manifold
- If Trump is elected, will the US still be a liberal democracy at the end of his term? (V-DEM): 61% [...]

Outline:
(00:45) Forecasts
(01:50) Quotes from experts & commentators
(03:20) Some relevant research

---
First published:
October 8th, 2025
Source:
https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/experts-and-markets-think-authoritarian-capture-of-the-us
---
Narrated by TYPE III AUDIO.
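If you wanted to collapse the quoted markets into one rough number, a common aggregation is the geometric mean of odds. A toy sketch; note the three questions have different resolution criteria, so pooling them is only illustrative:

```python
from math import prod

# Probabilities as quoted in the post. The V-DEM question asks whether the
# US will STILL be a liberal democracy (61%), so it is flipped here to
# express the probability of authoritarian capture.
probs = [0.25, 0.48, 1 - 0.61]

odds = [p / (1 - p) for p in probs]
pooled_odds = prod(odds) ** (1 / len(odds))  # geometric mean of odds
pooled = pooled_odds / (1 + pooled_odds)

print(f"pooled estimate of authoritarian capture: {pooled:.0%}")
```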
or: Maximizing Good Within Your Personal Constraints

Note: The specific numbers and examples below are approximations meant to illustrate the framework. Your actual calculations will vary based on your situation, values, and cause area. The goal isn't precision—it's to start thinking explicitly about impact per unit of sacrifice rather than assuming certain actions are inherently virtuous.

You're at an EA meetup. Two people are discussing their impact:

Alice: "I went vegan, buy only secondhand, bike everywhere, and donate 5% of my nonprofit salary to animal charities."

Bob: "I work in finance, eat whatever, and donate 40% of my income to animal charities."

Who gets more social approval? Alice. Who prevents more animal suffering? Bob—by orders of magnitude. Alice's choices improve welfare for hundreds of animal-years annually through diet change and her $2,500 donation. Bob's $80,000 donation improves tens of thousands of animal-years through corporate campaigns. Yet Alice is [...]

Outline:
(00:11) or Maximizing Good Within Your Personal Constraints
(01:31) The Personal Constraint Framework
(02:26) Return on Sacrifice (RoS): The Core Metric
(03:05) Case Studies: Where Good Intentions Go Wrong
(03:10) Career: The Counterfactual Question
(04:32) Environmental Action: Personal vs. Systemic
(05:13) Information and Influence
(05:45) Truth vs. Reach
(06:17) The Uncomfortable Truth About Offsets
(07:43) When Personal Practice Actually Matters
(08:22) Your Personal Impact Portfolio
(09:38) The Reallocation Exercise
(10:40) Addressing the Predictable Objections
(11:41) The Call to Action
(12:10) The Bottom Line

---
First published:
September 10th, 2025
Source:
https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible
---
Narrated by TYPE III AUDIO.
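The Alice/Bob comparison reduces to a ratio of impact to personal cost. A minimal sketch of the post's Return on Sacrifice idea; the sacrifice units and welfare figures below are arbitrary stand-ins for the post's rough orders of magnitude:

```python
def return_on_sacrifice(animal_years_improved, sacrifice_units):
    # RoS: impact produced per unit of personal cost.
    return animal_years_improved / sacrifice_units

# Alice: vegan, secondhand-only, bikes everywhere, donates $2,500/yr.
# High lifestyle cost (arbitrary units), hundreds of animal-years helped.
alice = return_on_sacrifice(500, 10)

# Bob: keeps his finance job and diet, donates $80,000/yr.
# Low lifestyle cost, tens of thousands of animal-years helped.
bob = return_on_sacrifice(30_000, 4)

print(f"Alice RoS: {alice:,.0f} animal-years per unit of sacrifice")
print(f"Bob RoS:   {bob:,.0f} animal-years per unit of sacrifice")
```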
This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes.

Summary

The EA movement stands at a crossroads. In light of AI's very rapid progress, and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think we should refocus much more on "classic" cause areas like global health and animal welfare. I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding our cause area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk). These additional [...]

Outline:
(00:20) Summary
(02:38) Three possible futures for the EA movement
(07:07) Reason #1: Neglected cause areas
(10:49) Reason #2: EA is currently intellectually adrift
(13:08) Reason #3: The benefits of EA mindset for AI safety and biorisk
(14:53) This isn't particularly Will-idiosyncratic
(15:57) Some related issues
(16:10) Principles-first EA
(17:30) Cultivating vs growing EA
(21:27) PR mentality
(24:48) What I'm not saying
(28:31) What to do?
(29:00) Local groups
(31:26) Online
(35:18) Conferences
(36:05) Conclusion

---
First published:
October 10th, 2025
Source:
https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi
---
Narrated by TYPE III AUDIO.
Here's a talk I gave at an EA university group organizers' retreat recently, which I've been strongly encouraged to share on the forum. I'd like to make it clear I don't recommend or endorse everything discussed in this talk (one example in particular, which hopefully will be self-evident), but I do think serious shifts in how we engage with ethics and EA would be quite beneficial for the world.

Part 1: Taking ethics seriously

To set context for this talk, I want to go through an Our World in Data-style birds-eye view of how things are trending across key issues often discussed in EA. This is to help get better intuitions for questions like "How well will the future go by default?" and "Is the world on track to eventually solve the most pressing problems?" - which can inform high-level strategy questions like "Should we generally be doing more [...]

Outline:
(00:32) Part 1: Taking ethics seriously
(04:26) Incentive shifts and moral progress
(05:07) What is incentivized by society?
(07:08) Heroic Responsibility
(11:30) Excerpts from Strangers Drowning
(14:37) Opening our eyes to what is unbearable
(18:07) Increasing effectiveness vs. increasing altruism
(20:20) Cognitive dissonance
(21:27) Paragons of moral courage
(23:15) The monk who set himself on fire to protect Buddhism, and didn't flinch an inch
(27:46) What do I most deeply want to honour in this life?
(29:43) Moral Courage and defending EA
(31:55) Acknowledging opportunity cost and grappling with guilt
(33:33) Part 2: Enjoying the process
(33:38) Celebrating what's really beautiful - what our hearts care about
(42:08) Enjoying effective altruism
(44:43) Training our minds to cultivate the qualities we endorse
(46:54) Meditation isn't a silver bullet
(52:35) The timeless words of MLK

---
First published:
October 4th, 2025
Source:
https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process
---
Narrated by TYPE III AUDIO.
TL;DR - AIM's applicants skew towards global health & development. We've recommended four new animal welfare charities and have the capacity to launch all four, but we expect to struggle to find the talent to do so. If you've considered moving into animal welfare work, applying to Charity Entrepreneurship to launch a new charity in the space could be of huge counterfactual value.

Part 1: Why you should launch an animal welfare charity

Our existing animal charities have had a lot of impact—improving the lives of over 1 billion animals worldwide—from Shrimp Welfare Project securing corporate commitments globally and featuring on the Daily Show, to FarmKind's recent success coordinating a $2 million fundraiser for the animal movement on the Dwarkesh podcast, not to mention the progress of the 40-person army at the Fish Welfare Initiative, Scale Welfare's direct hands-on work at fish farms, and Animal Policy [...]

Outline:
(00:37) Part 1: Why you should launch an animal welfare charity
(02:07) A few notes on counterfactual founder value
(05:57) Part 2 - The Charity Entrepreneurship Program & Our Latest Animal Welfare Ideas
(06:04) What is the Charity Entrepreneurship Incubation Program?
(06:47) Our recommended animal welfare ideas for 2026
(07:10) 1. Driving supermarket commitments to shift diets away from meat
(07:58) 2. Securing scale-up funding for the alternative protein industry
(08:51) 3. Cage-free farming in the Middle East
(09:30) 4. Preventing painful injuries in laying hens
(10:02) Applications close on October 5th: Apply here.

---
First published:
September 29th, 2025
Source:
https://forum.effectivealtruism.org/posts/aeky2EWd32bjjPJqf/charity-entrepreneurship-is-bottlenecked-by-a-lack-of-great
---
Narrated by TYPE III AUDIO.
Summary: Consumers rejected genetically modified crops, and I expect they will do the same for cultivated meat. The meat lobby will fight to discredit the new technology, and as consumers are already primed to believe it's unnatural, it won't be difficult to persuade them.

When I hear people talk about cultivated meat (i.e. lab-grown meat) and how it will replace traditional animal agriculture, I find it depressingly reminiscent of the techno-optimists of the 1980s and '90s speculating about how genetic modification would solve all our food problems. The optimism of the time was understandable: in 1994 the first GMO product was introduced to supermarkets, and the benefits of the technology promised incredible rewards. GMOs were predicted to bring about the end of world hunger, all while requiring less water, fewer pesticides, and less land. Today, thirty years later, GM foods in the EU are so regulated that they are [...]

Outline:
(01:56) Why did GMOs fail to be widely adopted?
(02:44) A Bad First Impression
(05:54) Unpopular Corporate Concentration
(07:22) Cultivated Meat IS GMO
(08:45) What timeline are we in?
(10:24) What can be done to prevent cultivated meat from becoming irrelevant?
(10:30) Expect incredible opposition
(11:46) Be ready to tell a clear story about the benefits
(13:17) A proactive PR effort
(15:01) First impressions matter
(17:16) Labeling
(19:35) Be ready to discuss concerns about unnaturalness
(21:56) Limitations of the comparison
(23:07) Conclusion

---
First published:
September 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/rMQA9w7ZM7ioZpaN6/cultivated-meat-a-wakeup-call-for-optimists
---
Narrated by TYPE III AUDIO.
Note: I am the web programme director at 80,000 Hours, and the view expressed here currently helps shape the web team's strategy. However, this shouldn't be taken to be expressing something on behalf of 80k as a whole, and writing and posting this memo was not undertaken as an 80k project.

80,000 Hours, where I work, has made helping people make AI go well[1] its focus. As part of this work, I think my team should continue to:

- Talk about / teach ideas and thinking styles that have historically been central to effective altruism (e.g. via our career guide, cause analysis content, and podcasts)
- Encourage people to get involved in the EA community explicitly and via linking to content.

I wrote this memo for the MCF (Meta Coordination Forum), because I wasn't sure this was intuitive to others. I think talking about EA ideas and encouraging people to get [...]

Outline:
(01:21) 1. The effort to make AGI go well needs people who are flexible and equipped to make their own good decisions
(02:10) Counterargument: Agendas are starting to take shape, so this is less true than it used to be.
(02:43) 2. Making AGI go well calls for a movement that thinks in explicitly moral terms
(03:59) Counterargument: movements can be morally good without being explicitly moral, and being morally good is what's important.
(04:41) 3. EA is (A) at least somewhat able to equip people to flexibly make good decisions, (B) explicitly morally focused
(04:52) (A) EA is at least somewhat able to equip people to flexibly make good decisions
(06:04) (B) EA is explicitly morally focused
(06:49) Counterargument: A different flexible & explicitly moral movement could be better for trying to make AGI go well.
(07:49) Appendix: What are the relevant alternatives?
(12:13) Appendix 2: Anon notes from others

---
First published:
September 25th, 2025
Source:
https://forum.effectivealtruism.org/posts/oPue7R3outxZaTXzp/why-i-think-capacity-building-to-make-agi-go-well-should
---
Narrated by TYPE III AUDIO.



