EA Forum Podcast (Curated & popular)
Author: EA Forum Team
© 2025 All rights reserved
Description
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
496 Episodes
Many thanks to @Felix_Werdermann 🔸, @Engin Arıkan, and @Ana Barreiro for your feedback and comments on this, and for the encouragement from many people to finally write this up into an EA Forum post. For years, much of the career advice in the Effective Altruism community has implicitly (or explicitly) suggested that impact = working at an EA nonprofit. That narrative made sense when the community and its talent pool were smaller. But as EA grows, it's worth reassessing whether we're overconcentrating on nonprofit careers, a trend that may be limiting our community's impact and leaving higher-leverage opportunities on the table. Why Now? As the EA movement has grown, it has attracted far more talent than the nonprofit sector can realistically absorb. This creates an urgent need to develop alternative pathways for talented, mission-aligned people. Under the current status quo, many end up feeling frustrated after going through multiple [...]

---
Outline:
(00:51) Why Now?
(02:06) Important Caveats
(03:20) The argument for roles outside of non-profits
(03:25) Institutions Dwarf Nonprofit Capacity
(05:19) Salaries Are Covered Outside the Movement
(06:12) Counterfactual Impact Is Often Greater
(06:57) A Healthier Distribution of Talent
(07:54) Why This Might Be Wrong Advice
(08:24) The Challenges of External Roles
(10:13) Why These Risks Still Seem Worth Taking
(10:57) Why Steering Everyone Toward Nonprofits Might Hurt the EA Community
(11:03) Nonprofit Roles Are Saturated
(11:27) Nonprofits Have Low Absorbency
(12:24) Too Many Advising Channels, One Bottlenecked Funnel
(12:49) Nonprofits are not a good fit for everyone, and they may be a much better fit for roles in other sectors.
(13:31) Important Final Caveats
---
First published:
October 16th, 2025
Source:
https://forum.effectivealtruism.org/posts/FAmCmCavZ5vTzbRcM/why-many-eas-may-have-more-impact-outside-of-nonprofits-in
---
Narrated by TYPE III AUDIO.
Summary
As part of our ongoing work to study how to best frame EA, we experimentally tested different phrases and sentences that CEA were considering using on effectivealtruism.org.

Doing Good Better taglines
We observed a consistent pattern where taglines that included the phrase 'do[ing] good better' received less support from respondents and inspired less interest in learning about EA. We replicated these results in a second experiment, where we confirmed that taglines referring to "do[ing] good better" performed less well than those referring to "do[ing] the most good".

Nouns and sentences
Nouns: The effect of using different nouns to refer to EA was small, but referring to EA as a 'philosophy' or 'movement' inspired the most curiosity compared to options including 'project' and 'research field'.
Sentences: "Find the most effective ways to do good with your time, money, and career" and "Effective altruism asks the question of how we [...]

---
Outline:
(00:12) Summary
(01:23) Method
(02:18) Taglines (Study 1)
(03:40) Doing Good Better replication (Study 2)
(05:23) Sentences (Study 1)
(06:45) Nouns (Study 1)
(07:41) Effectiveness focus
(07:55) Conclusion
(08:56) Acknowledgments
---
First published:
October 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/Y6zMpdwkkAQ8rF56w/framing-ea-doing-good-better-did-worse
---
Narrated by TYPE III AUDIO.
This is a link post. In Ugandan villages where non-governmental organisations (NGOs) hired away the existing government health worker, infant mortality went up. This happened in 39%[1] of villages that already had a government worker. The NGO arrived with funding and good intentions, but the likelihood that villagers received care from any health worker declined by ~23%.

Brain Misallocation
“Brain drain”, the movement of people from poorer countries to wealthier ones, has been extensively discussed for decades[2]. But there's a different dynamic that gets far less attention: “brain misallocation”. In many low- and middle-income countries (LMICs), the brightest talents are being incentivised towards organisations that don't utilise their potential for national development. They're learning how to get grants from multilateral alphabet organisations rather than how to build businesses or make good policy. This isn't about talent leaving the country. It's about talent being misdirected and mistrained within it.

Examples
Nick Laing [...]

---
Outline:
(00:36) Brain Misallocation
(01:16) Examples
(05:37) The Incentive Trap
(07:48) When Help Becomes Harm
(08:48) Conclusion
---
First published:
October 23rd, 2025
Source:
https://forum.effectivealtruism.org/posts/6rmdyddEateJFWb4L/the-charity-trap-brain-misallocation
Linkpost URL:https://gdea.substack.com/p/the-charity-trap-brain-misallocation
---
Narrated by TYPE III AUDIO.
This is a link post. Biological risks are more severe than has been widely appreciated. Recent discussions of mirror bacteria highlight an extreme scenario: a single organism that could infect and kill humans, plants, and animals, persist in the environment in soil or dust, and potentially spread worldwide within several months. In the worst-case scenario, this could pose an existential risk to humanity, especially if the responses/countermeasures were inadequate.
Less severe pandemic pathogens could still cause hundreds of millions (or billions) of casualties if they were engineered to cause harm. Preventing such catastrophes should be a top priority for humanity. However, if prevention fails, it would also be prudent to have a backup plan.
One way of doing this would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc.), enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc.), analyze the [...]

---
Outline:
(04:20) PPE
(09:56) Biohardening
(14:36) Detection
(17:00) Expression of interest and acknowledgements

The original text contained 34 footnotes, which were omitted from this narration.
---
First published:
October 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/33t5jPzxEcFXLCPjq/the-four-pillars-a-hypothesis-for-countering-catastrophic
Linkpost URL:https://defensesindepth.bio/the-four-pillars-a-hypothesis-for-countering-catastrophic-biological-risk/
---
Narrated by TYPE III AUDIO.
I’ve used the phrase “entertainment for EAs” a bunch to describe a failure mode that I’m trying to avoid with my career. Maybe it’d be useful for other people working in meta-EA, so I’m sharing it here as a quick draft amnesty post. There's a motivational issue in meta-work where it's easy to start treating the existing EA community as stakeholders. The real stakeholders in my work (and meta-work in general) are the ultimate beneficiaries — the minds (animal, human, digital?) that could benefit from work I help to initiate. But those beneficiaries aren’t present to me — they aren’t my friends, and they don’t work in the same building as me. Keeping your eyes on the real prize takes constant work. When that work slips, you could end up working on ‘entertainment for EAs’, i.e. something which gets great feedback from EAs, but only hazily, if [...]
---
First published:
October 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/AkSDhiPuvnRNbjXAf/entertainment-for-eas
---
Narrated by TYPE III AUDIO.
All quotes are from their blog post "Why we chose to invest another $100 million in cash transfers"; highlights are my own:

Today, we're announcing a new $100 million USD commitment over the next four years to expand our partnership with GiveDirectly and help empower an additional 185,000 people living in extreme poverty. We're also funding new research, and pilot variants, to further understand how we can maximize the impact of each dollar.

This is on top of another $50 million USD they gave to GiveDirectly before:

We started partnering with GiveDirectly in 2021. Since then, we've donated $50 million USD to support their work across Malawi, through direct cash transfers to those living in extreme poverty. We've already reached more than 85,000 people, helping to provide life-changing resources and the dignity of choice.

For context, the Cash for Poverty Relief program by GiveDirectly [...]

---
Outline:
(01:24) About their founding-to-give model
(02:15) Other Engagement
---
First published:
October 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/ktFpWLkvRAAygbbtH/canva-to-donate-usd100m-over-4-years-to-givedirectly
---
Narrated by TYPE III AUDIO.
I have some claim to be an “old hand” EA:[1]
- I was in the room when the creation of Giving What We Can was announced (although I vacillated about joining for quite a while)
- I first went to EA Global in 2015
- I worked on a not-very-successful EA project for a while

But I have not really been much involved in the community since about 2020. The interesting thing about this is that my withdrawal from the community has nothing to do with disagreements, personal conflicts, or FTX. I still pretty much agree with most “orthodox EA” positions, and I think that both the idea of EA and the movement remain straightforwardly good and relevant. Hence I describe the process as “senescence”: intellectually and philosophically I am still on board and I still donate, I just… don’t particularly want to participate beyond that.

Boredom
I won’t sugar-coat [...]

---
Outline:
(01:00) Boredom
(04:05) What do I have to offer?
---
First published:
October 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/rJqQGD2z2DaupCbZE/my-ea-senescence
---
Narrated by TYPE III AUDIO.
TLDR
EA is a community where time tracking is already very common, and yet most people I talk to don't track their time because:
- It's too much work (when using Toggl, Clockify, ...)
- It's not accurate enough (when using RescueTime, Rize, ...)
I built https://donethat.ai, which solves both of these with AI, as part of AIM's Founding to Give program. It's live on Product Hunt today; please support it.

You should probably track your time
I'd argue that for most people, your time is your most valuable resource.[1] Even though your day has 24 hours, eight of those are already used up by sleep, and another eight probably by social life, gym, food prep and eating, life admin, and commute, leaving at most eight hours to have impact. Oliver Burkeman argues in his recent book Meditations for Mortals that eight is still too high: most high-impact work gets done in four hours [...]

---
Outline:
(00:11) TLDR
(00:40) You should probably track your time
(02:21) It just got easier
---
First published:
October 14th, 2025
Source:
https://forum.effectivealtruism.org/posts/wt8gKaH9usKy3LQmK/you-should-probably-track-your-time-and-it-just-got-easier
---
Narrated by TYPE III AUDIO.
The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that this is the case (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now. Before looking at the forecasts, you might first ask yourself: What probability would I put on authoritarian capture? And: At what probability of authoritarian capture would I think that more concern and effort is warranted?

Forecasts[1]
- The US won't be a democracy by 2030: 25% - Metaculus
- Will Trump 2.0 be the end of Democracy as we know it?: 48% - Manifold
- If Trump is elected, will the US still be a liberal democracy at the end of his term? (V-DEM): 61% [...]

---
Outline:
(00:45) Forecasts
(01:50) Quotes from experts & commentators
(03:20) Some relevant research
---
First published:
October 8th, 2025
Source:
https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/experts-and-markets-think-authoritarian-capture-of-the-us
---
Narrated by TYPE III AUDIO.
or Maximizing Good Within Your Personal Constraints

Note: The specific numbers and examples below are approximations meant to illustrate the framework. Your actual calculations will vary based on your situation, values, and cause area. The goal isn't precision—it's to start thinking explicitly about impact per unit of sacrifice rather than assuming certain actions are inherently virtuous.

You're at an EA meetup. Two people are discussing their impact:

Alice: "I went vegan, buy only secondhand, bike everywhere, and donate 5% of my nonprofit salary to animal charities."
Bob: "I work in finance, eat whatever, and donate 40% of my income to animal charities."

Who gets more social approval? Alice. Who prevents more animal suffering? Bob—by orders of magnitude. Alice's choices improve welfare for hundreds of animal-years annually through diet change and her $2,500 donation. Bob's $80,000 donation improves tens of thousands of animal-years through corporate campaigns. Yet Alice is [...]

---
Outline:
(00:11) or Maximizing Good Within Your Personal Constraints
(01:31) The Personal Constraint Framework
(02:26) Return on Sacrifice (RoS): The Core Metric
(03:05) Case Studies: Where Good Intentions Go Wrong
(03:10) Career: The Counterfactual Question
(04:32) Environmental Action: Personal vs. Systemic
(05:13) Information and Influence
(05:45) Truth vs. Reach
(06:17) The Uncomfortable Truth About Offsets
(07:43) When Personal Practice Actually Matters
(08:22) Your Personal Impact Portfolio
(09:38) The Reallocation Exercise
(10:40) Addressing the Predictable Objections
(11:41) The Call to Action
(12:10) The Bottom Line
---
First published:
September 10th, 2025
Source:
https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible
---
Narrated by TYPE III AUDIO.
This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes.

Summary
The EA movement stands at a crossroads. In light of AI's very rapid progress, and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think we should refocus much more on “classic” cause areas like global health and animal welfare. I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding our cause area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk). These additional [...]

---
Outline:
(00:20) Summary
(02:38) Three possible futures for the EA movement
(07:07) Reason #1: Neglected cause areas
(10:49) Reason #2: EA is currently intellectually adrift
(13:08) Reason #3: The benefits of EA mindset for AI safety and biorisk
(14:53) This isn't particularly Will-idiosyncratic
(15:57) Some related issues
(16:10) Principles-first EA
(17:30) Cultivating vs growing EA
(21:27) PR mentality
(24:48) What I'm not saying
(28:31) What to do?
(29:00) Local groups
(31:26) Online
(35:18) Conferences
(36:05) Conclusion
---
First published:
October 10th, 2025
Source:
https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi
---
Narrated by TYPE III AUDIO.
Here's a talk I gave at an EA university group organizers’ retreat recently, which I've been strongly encouraged to share on the forum. I'd like to make it clear I don't recommend or endorse everything discussed in this talk (one example in particular, which hopefully will be self-evident), but I do think serious shifts in how we engage with ethics and EA would be quite beneficial for the world. Part 1: Taking ethics seriously To set context for this talk, I want to go through an Our World in Data style birds-eye view of how things are trending across key issues often discussed in EA. This is to help get better intuitions for questions like “How well will the future go by default?” and “Is the world on track to eventually solve the most pressing problems?” - which can inform high-level strategy questions like “Should we generally be doing more [...]

---
Outline:
(00:32) Part 1: Taking ethics seriously
(04:26) Incentive shifts and moral progress
(05:07) What is incentivized by society?
(07:08) Heroic Responsibility
(11:30) Excerpts from Strangers Drowning
(14:37) Opening our eyes to what is unbearable
(18:07) Increasing effectiveness vs. increasing altruism
(20:20) Cognitive dissonance
(21:27) Paragons of moral courage
(23:15) The monk who set himself on fire to protect Buddhism, and didn't flinch an inch
(27:46) What do I most deeply want to honour in this life?
(29:43) Moral Courage and defending EA
(31:55) Acknowledging opportunity cost and grappling with guilt
(33:33) Part 2: Enjoying the process
(33:38) Celebrating what's really beautiful - what our hearts care about
(42:08) Enjoying effective altruism
(44:43) Training our minds to cultivate the qualities we endorse
(46:54) Meditation isn't a silver bullet
(52:35) The timeless words of MLK
---
First published:
October 4th, 2025
Source:
https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process
---
Narrated by TYPE III AUDIO.
TL;DR - AIM's applicants skew towards global health & development. We’ve recommended four new animal welfare charities and have the capacity to launch all four, but expect to struggle to find the talent to do so. If you’ve considered moving into animal welfare work, applying to Charity Entrepreneurship to launch a new charity in the space could be of huge counterfactual value.

Part 1: Why you should launch an animal welfare charity
Our existing animal charities have had a lot of impact, improving the lives of over 1 billion animals worldwide: from Shrimp Welfare Project securing corporate commitments globally and featuring on the Daily Show, to FarmKind's recent success coordinating a $2 million fundraiser for the animal movement on the Dwarkesh podcast, not to mention the progress of the 40-person army at the Fish Welfare Initiative, Scale Welfare's direct hands-on work at fish farms, and Animal Policy [...]

---
Outline:
(00:37) Part 1: Why you should launch an animal welfare charity
(02:07) A few notes on counterfactual founder value
(05:57) Part 2 - The Charity Entrepreneurship Program & Our Latest Animal Welfare Ideas
(06:04) What is the Charity Entrepreneurship Incubation Program?
(06:47) Our recommended animal welfare ideas for 2026
(07:10) 1. Driving supermarket commitments to shift diets away from meat
(07:58) 2. Securing scale-up funding for the alternative protein industry
(08:51) 3. Cage-free farming in the Middle East
(09:30) 4. Preventing painful injuries in laying hens
(10:02) Applications close on October 5th: Apply here.
---
First published:
September 29th, 2025
Source:
https://forum.effectivealtruism.org/posts/aeky2EWd32bjjPJqf/charity-entrepreneurship-is-bottlenecked-by-a-lack-of-great
---
Narrated by TYPE III AUDIO.
Summary: Consumers rejected genetically modified crops, and I expect they will do the same for cultivated meat. The meat lobby will fight to discredit the new technology, and as consumers are already primed to believe it's unnatural, it won’t be difficult to persuade them. When I hear people talk about cultivated meat (i.e. lab-grown meat) and how it will replace traditional animal agriculture, I find it depressingly reminiscent of the techno-optimists of the 1980s and ‘90s speculating about how genetic modification would solve all our food problems. The optimism of the time was understandable: in 1994 the first GMO product was introduced to supermarkets, and the technology promised incredible rewards. GMOs were predicted to bring about the end of world hunger, all while requiring less water, pesticides, and land. Today, thirty years later, in the EU GM foods are so regulated that they are [...]

---
Outline:
(01:56) Why did GMOs fail to be widely adopted?
(02:44) A Bad First Impression
(05:54) Unpopular Corporate Concentration
(07:22) Cultivated Meat IS GMO
(08:45) What timeline are we in?
(10:24) What can be done to prevent cultivated meat from becoming irrelevant?
(10:30) Expect incredible opposition
(11:46) Be ready to tell a clear story about the benefits.
(13:17) A proactive PR Effort
(15:01) First impressions matter
(17:16) Labeling
(19:35) Be ready to discuss concerns about unnaturalness
(21:56) Limitations of the comparison
(23:07) Conclusion
---
First published:
September 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/rMQA9w7ZM7ioZpaN6/cultivated-meat-a-wakeup-call-for-optimists
---
Narrated by TYPE III AUDIO.
Note: I am the web programme director at 80,000 Hours, and the view expressed here currently helps shape the web team's strategy. However, this shouldn't be taken as expressing something on behalf of 80k as a whole, and writing and posting this memo was not undertaken as an 80k project.

80,000 Hours, where I work, has made helping people make AI go well[1] its focus. As part of this work, I think my team should continue to:
- Talk about / teach ideas and thinking styles that have historically been central to effective altruism (e.g. via our career guide, cause analysis content, and podcasts)
- Encourage people to get involved in the EA community explicitly and via linking to content.

I wrote this memo for the MCF (Meta Coordination Forum) because I wasn't sure this was intuitive to others. I think talking about EA ideas and encouraging people to get [...]

---
Outline:
(01:21) 1. The effort to make AGI go well needs people who are flexible and equipped to make their own good decisions
(02:10) Counterargument: Agendas are starting to take shape, so this is less true than it used to be.
(02:43) 2. Making AGI go well calls for a movement that thinks in explicitly moral terms
(03:59) Counterargument: movements can be morally good without being explicitly moral, and being morally good is what's important.
(04:41) 3. EA is (A) at least somewhat able to equip people to flexibly make good decisions, (B) explicitly morally focused.
(04:52) (A) EA is at least somewhat able to equip people to flexibly make good decisions
(06:04) (B) EA is explicitly morally focused
(06:49) Counterargument: A different flexible & explicitly moral movement could be better for trying to make AGI go well.
(07:49) Appendix: What are the relevant alternatives?
(12:13) Appendix 2: anon notes from others
---
First published:
September 25th, 2025
Source:
https://forum.effectivealtruism.org/posts/oPue7R3outxZaTXzp/why-i-think-capacity-building-to-make-agi-go-well-should
---
Narrated by TYPE III AUDIO.
Intro and summary
“How many chickens spared from cages is worth not being with my parents as they get older?!” - Me, exasperated (September 18, 2021)

This post is about something I haven’t seen discussed on the EA Forum but often talk about with my friends in their mid-30s. It's about something I wish I'd understood better ten years ago: if you are ~25 and debating whether to move to an EA Hub, you are probably underestimating how much the calculus will change when you’re ~35, largely related to having kids and aging parents. Since this is underappreciated, moving to an EA Hub, and building a life there, can lead to tougher decisions later that can sneak up on you. If you’re living in an EA hub, or thinking about moving, this post explores reasons you might want to head home as you get older, different ways [...]

---
Outline:
(00:11) Intro and summary
(01:49) Why move to an EA Hub in the first place?
(02:57) How things change as you get older
(05:33) Why YOU might be more likely to feel the pull to head home
(06:49) How did I decide? How should you decide?
(08:38) Consolation prize - moving to a Hub isn't all or nothing
(09:38) Conclusion
---
First published:
September 23rd, 2025
Source:
https://forum.effectivealtruism.org/posts/ZEWE6K74dmzv7kXHP/moving-to-a-hub-getting-older-and-heading-home
---
Narrated by TYPE III AUDIO.
It's been several years since I was an EA student group organiser, so please forgive any part of this post which feels out of touch (& correct me in comments!) Wow, student group organising is hard. A few structural things that make it hard to be an organiser:
- You maybe haven’t had a job before, or have only had kind of informal jobs. So, you might not have learned a lot of stuff about how to accomplish things at work.
- You’re probably trying to do a degree at the same time, which is hard enough on its own!
- You don’t have the structure and benefits provided by a regular 9-5 job at an organisation, like:
  - A manager
  - An office
  - Operational support
  - People you can ask for help & advice
  - A network
- You have, at most, a year or so to skill up before you might be responsible [...]
---
First published:
September 12th, 2025
Source:
https://forum.effectivealtruism.org/posts/zMBFSesYeyfDp6Fj4/student-group-organising-is-hard-and-important
---
Narrated by TYPE III AUDIO.
Hi, have you been rejected from all the 80K-listed EA jobs you’ve applied for? It sucks, right? Welcome to the club. What might be comforting is that you (and I) are not alone. EA job listings are extremely competitive, and in the classic EA career path, you just get rejected over and over. Many others have written about their rejection experience, here, here, and here. Even if it is quite normal for very smart, hardworking, proactive, and highly motivated EAs to get rejected from high-impact positions, it still sucks. It sucks because we sincerely want to make the world a radically better place. We’ve read everything, planned accordingly, gone through fellowships, rejected other options, and worked very hard just to get the following message: "Thank you for your interest in [Insert EA Org Name]... we have decided to move forward with other candidates for this role... we're unfortunately [...]

---
Outline:
(06:13) A note on AI timelines
(08:51) Time to go forward
---
First published:
September 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/pzbtpZvL2bYfssdkr/rejected-from-all-the-ea-jobs-you-applied-for-what-to-do-now
---
Narrated by TYPE III AUDIO.
Early work on “GiveWell for AI Safety”

Intro
EA was founded on the principle of cost-effectiveness. We should fund projects that do more with less and, more generally, spend resources as efficiently as possible. And yet, while much interest, funding, and resources in EA have shifted towards AI safety, it's rare to see any cost-effectiveness calculations. The focus on AI safety is based on vague philosophical arguments that the future could be very large and valuable, and thus whatever is done towards this end is worth orders of magnitude more than most short-term effects. Even if AI safety is the most important problem, you should still strive to optimize how resources are spent to achieve maximum impact, since resources are limited. Global health organizations and animal welfare organizations work hard to measure cost-effectiveness, evaluate charities, make sure effects are counterfactual, run RCTs, estimate moral weights, and scope out interventions [...]

---
Outline:
(00:11) Early work on GiveWell for AI Safety
(00:16) Intro
(02:43) Step 1: Gathering data
(03:00) Viewer minutes
(03:35) Costs and revenue
(04:49) Results
(05:08) Step 2: Quality-adjusting
(05:40) Quality of Audience (Qa)
(06:58) Fidelity of Message (Qf)
(08:05) Alignment of Message (Qm)
(08:53) Results
(09:37) Observations
(12:37) How to help
(13:36) Appendix: Examples of Data Collection
(13:42) Rob Miles
(14:18) AI Species (Drew Spartz)
(14:56) Rational Animations
(15:32) AI in Context
(15:52) Cognitive Revolution
---
First published:
September 12th, 2025
Source:
https://forum.effectivealtruism.org/posts/SBsGCwkoAemPawfJz/how-cost-effective-are-ai-safety-youtubers
---
Narrated by TYPE III AUDIO.
There's a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs. I think a third avenue is under-investigated: marginally improving the effectiveness of ineffective capital. That's to say, improving outcomes, only somewhat, for the pool of money that is not at all EA-aligned. This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy. Say you have the option to work for the foundation of one of two donors:
- Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets” he says “how many”.
- Donor B has a much larger [...]

---
Outline:
(01:34) Most money is not EA money
(04:32) How much money is there?
(05:49) Effective Everything?
---
First published:
September 8th, 2025
Source:
https://forum.effectivealtruism.org/posts/o5LBbv9bfNjKxFeHm/marginally-more-effective-altruism
---
Narrated by TYPE III AUDIO.