EA Forum Podcast (Curated & popular)
Author: EA Forum Team
Subscribed: 32 · Played: 1,842
© 2026 All rights reserved
Description
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
571 Episodes
For those not working in the space, this probably isn't on your radar, but the animal advocacy movement just secured a huge win with Ahold Delhaize, convincing the fourth-largest supermarket company in the US to set the strongest cage-free policy of any large US retailer: a roadmap with benchmarks to fully eliminate caged egg cartons, expand cage-free offerings, and increase the percentage of cage-free sales; a pledge to report annually on its progress; and, at all 2,000+ locations, large promotional shelf tags placed in front of cage-free cartons to differentiate cage-free from caged cartons for consumers. This was a giant campaign. My understanding is that other companies were watching to see if this campaign would succeed or fail, to see if they would need to follow suit. In addition to the animals helped, this win will add pressure for competitors to do the same. This was a coordinated effort among many groups, including: Center For Responsible Food Business; Animal Equality; International Council for Animal Welfare; The Humane League; Mercy For Animals; Compassion in World Farming; Coalition to Abolish the Fur Trade and Animal Activist Collective. Animal Equality states that this will affect 5-7 million hens. I know a lot [...] ---
First published:
March 4th, 2026
Source:
https://forum.effectivealtruism.org/posts/2wePKArWWr4Xx6Zvf/some-good-news-ahold-delhaize-to-go-cage-free
---
Narrated by TYPE III AUDIO.
This is a link post. Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open. Sequence thinkers will be forgiven and rejoice: In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur was the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen. I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn’t even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...] ---Outline:(02:37) The Retreat(07:03) What Greatness Demands(10:59) Effective Altruism is Good and Right ---
First published:
March 7th, 2026
Source:
https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again
Linkpost URL:https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background. Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update. First, the big picture: I expect some people will be upset about the move away from a “hard commitments”/“binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.) I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...] ---Outline:(05:32) How it started: the original goals of RSPs(11:25) How it's going: the good and the bad(11:51) A note on my general orientation toward this topic(14:56) Goal 1: forcing functions for improved risk mitigations(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard(18:24) A mixed success/failure story: impact on information security(20:42) ASL-4 and ASL-5 prep: the wrong incentives(25:00) When forcing functions do and don't work well(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)(30:59) RSP v3's attempt to amplify the good and reduce the bad(36:01) Do these benefits apply only to the most safety-oriented companies?(37:40) A revised, but not overturned, vision for RSPs(39:08) Q&A(39:10) On the move away from implied unilateral commitments(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?(45:10) Why did you have to do this now - couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as what we will do if we invoke our escape clause?(47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments?(48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action.(49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea?(50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y.(51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you?(53:19) On other components of the new RSP(53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assurance against a prisoner's dilemma world where each relevant company wishes it could be more careful, but rushes due to pressure from others.(55:29) Why is external review only required at an extreme capability level? Why not just require it now?(58:06) The new commitments are mostly about Risk Reports and Roadmap - what stops companies from just making these really perfunctory?(59:18) Why isn't the RSP more adversarially designed such that once a company adopts it, it will improve their practices even if nobody at the company values safety at all?(01:00:18) What are the consequences of missing your Roadmap commitments? If they aren't dire, will anyone care about them?(01:00:29) OK, but does that apply to other companies too? How will Roadmaps force other companies to get things done?(01:00:40) Why aren't the recommendations for industry-wide safety more specific? Why is it built around safety cases instead of ASLs with specific lists of needed risk mitigations?(01:02:06) What is the point of making commitments if you can revise them anytime? ---
First published:
February 24th, 2026
Source:
https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1
---
Narrated by TYPE III AUDIO.
TL;DR: Define the line that, if crossed, would make you consider this issue one of the most pressing (if not the most pressing) issues, or at least pressing enough to warrant some of your time. I want to start with a clarification that I learned while writing this post. In the United States, charities with 501(c)(3) tax-exempt status are permitted to discuss policy and engage in advocacy, but are prohibited from participating in partisan political campaigns. I have also read the EA Forum post Politics on the EA Forums and I believe this post is consistent with those norms. I am not advocating for or against any party, candidate, or electoral campaign. The question I want to raise is broader: whether creeping authoritarianism, anti-fascism, and authoritarian lock-in should be discussed more explicitly in EA spaces as subjects of analysis and concern. Although my own experience is local to me in Canada, the question is clearly relevant to the current situation in the United States and globally. I’m asking this sincerely: why isn’t anti-fascism a bigger topic at EAG events or on this forum? I was thinking about it while planning my trip to EAG San Francisco 2026. Should I be travelling to [...] ---
First published:
March 1st, 2026
Source:
https://forum.effectivealtruism.org/posts/dgnCabfdXy6jv4gDu/why-isn-t-anti-fascism-a-bigger-topic-at-eag-events-or-on
---
Narrated by TYPE III AUDIO.
Six years ago, as covid-19 was rapidly spreading through the US, my
sister was working as a medical resident. One day she was handed an
N95 and told to "guard it with her life", because there weren't
any more coming.
N95s are made from meltblown polypropylene, produced from plastic
pellets manufactured in a small number of chemical plants. Building
more would take too long: we needed these plants producing all
the pellets they could.
Braskem America operated plants in Marcus Hook PA and Neal WV. If
there were infections on-site, the whole operation would need to shut
down, and the factories that turned their pellets into mask fabric
would stall.
Companies everywhere were figuring out how to deal with this risk.
The standard approach was staggering shifts, social distancing,
temperature checks, and lots of handwashing. This reduced risk, but
it was still significant: each shift change was an opportunity for
someone to bring an infection from the community into the factory.
I don't know who had the idea, but someone said: what if we
never left? About eighty people, across both plants, volunteered
to move in. The plan was four weeks, twelve-hour [...] ---
First published:
February 27th, 2026
Source:
https://forum.effectivealtruism.org/posts/DBbgMgbPthABqn2No/here-s-to-the-polypropylene-makers
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
I burned out badly a few years ago. I've since had several conversations with people in the EA community who are heading toward burnout themselves, and I noticed they were sometimes thinking about it in ways that I worry wouldn't help them. So I want to share what I think is actually going on, and what I wish someone had told me earlier. A theory of burnout: There are good models of the mechanism of burnout already out there. Anna Salamon has written about willpower as a kind of internal currency: your conscious planner "earns" trust with your deeper, more visceral processes by choosing actions that nourish them, and goes "credibility-broke" when it spends that trust without replenishing it. Cate Hall describes something similar with her metaphor of the elephant and the rider: the rider promises the elephant rewards in exchange for effort, and burnout is what happens when those promises are broken too many times. I usually explain this in terms of an energy imbalance: you're putting more into your work than you're getting back. Not just in terms of rest, but in terms of meaning, autonomy, connection, a sense of accomplishment, positive feedback. All the things that [...] ---Outline:(00:29) A theory of burnout(02:23) Why EA culture builds effective cages(06:11) What it actually felt like(07:10) What I want to push back on(08:31) What I'd encourage if you're in the grey zone(10:50) What recovery actually looked like(11:55) What I learned, and didn't learn ---
First published:
February 27th, 2026
Source:
https://forum.effectivealtruism.org/posts/2veCceQkhjovCfdbg/you-re-not-burning-out-because-you-re-tired
---
Narrated by TYPE III AUDIO.
In this piece, I discuss the sexual harassment I experienced at the Centre for Effective Altruism, the organisation's response, the outcomes of two independent legal reviews, and the final settlement. In the second part of this piece, I make cultural critiques of CEA and EA more broadly. Everything shared here reflects my own experience and perspective. I have anonymised the perpetrator, but I reference specific leadership roles where I believe this to be appropriate and necessary. Trigger warnings: non-specific reference of rape and specific discussion of sexual harassment. TL;DR (One-page summary): After I was raped (outside of and unrelated to work), a colleague at CEA wrote and circulated a document that included a sexualised description of my rape, speculation about my mental health, and commentary on my personal life, all without my consent. Several senior leaders, including the CEO and the now-former COO, received this document and took no safeguarding action for approximately nine months. I was never officially informed of its existence; I only learned about it informally through one of the recipients. After I filed a harassment report, the incident was independently investigated and determined to be harassment. Despite this, I was denied access to the document [...] ---Outline:(00:47) TL;DR (One-page summary)(03:38) A more detailed account(03:42) The sexual harassment incident(06:42) The investigation(10:38) The appeal and final report(14:02) Public accountability versus internal processes(16:59) The final settlement agreement(18:54) I still think there is a lot of good in effective altruism(20:33) Various cultural reflections(20:50) 1) Sexual harassment is not the natural result of an open and high-trust culture, it is the natural result of misogyny.(22:46) 2) The danger of EA's fixation on intent and why he didn't mean it is not good enough.(24:11) 3) Cowardice and deference at CEA.(26:30) 4) Women in EA are often encouraged to try and settle things informally or to trust their organisations -- another abuse of high-trust culture.(28:45) 5) A harmful misunderstanding of trauma and mitigating vs. aggravating factors.(30:27) 6) I have encountered so many EAs who believe it is easy for victims to speak publicly, or to share their experiences with other community members. And thus, if they aren't regularly hearing from victims, harassment must be rare.(33:02) To any women who have faced something similar(34:40) Acknowledgements ---
First published:
February 27th, 2026
Source:
https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/cea-s-response-to-sexual-harassment
---
Narrated by TYPE III AUDIO.
I'm Dom Jackman. I founded Escape the City in 2010 to help people leave corporate jobs and find work that matters. 16 years later, 500k+ professionals have used the platform - mostly people 5-15 years into careers at places like McKinsey, Deloitte, Google, the big banks - who feel a growing gap between what they do all day and what they actually care about. I'm not from the EA community. I'm writing this because I think there's a real overlap between the people I work with and what the EA talent ecosystem actually needs. I want to test that before investing serious time in it. What I've noticed: Reading through talent discussions on this forum, there's a consistent theme: the pipeline is strongest for early-career people. 80,000 Hours does great work for students and recent grads. Probably Good provides broad guidance. BlueDot, MATS, Talos build skills for specific cause areas. But mid-career professionals with real commercial experience keep coming up as underserved. The "Gaps and opportunities in the EA talent & recruiting landscape" post nails it: these people "don't have 'EA capital,' may be poorly networked and might feel alienated by current messaging." The post calls for "custom entry [...] ---Outline:(00:51) What I've noticed(01:40) What I see every day(02:28) What I'm thinking about building(03:24) Honest questions(04:39) Not looking for funding(04:58) Artifacts ---
First published:
February 11th, 2026
Source:
https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with
---
Narrated by TYPE III AUDIO.
[I am a career advisor at 80,000 Hours. I've been thinking about something Will MacAskill said recently in an interview with my shrimp-friend Matt: "should people be more ambitious? I genuinely think yes. I think people systematically aren't ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we're facing—and the scale of those problems is very large indeed." This post is my reflection on these ideas.] ************ My last post argued that if you want to have a great career, your goal should not be to get a job. Instead, you should choose an important problem to work on, then “get good and be known.” Building skills will allow you to solve problems and reap the benefits. In the ~500 career advising calls I’ve hosted in the past year, the most common response I’ve heard has been: “Okay, how good? How well known? How many hours of practice will get me there?” Most people want to calibrate their ambitions so that the time and energy they invest feels worth it to them. I empathize with this, but when I’m honest– with myself for my own [...] ---Outline:(06:28) Jensen Huang is more ambitious than you(12:58) Most extreme ambition is misplaced(17:45) Okay, how can altruistic people aim higher and work harder?(21:17) Ambition at the End of the Human Era(24:03) Closing Caveats - Efficiency, Burnout, and Choosing What Matters ---
First published:
February 12th, 2026
Source:
https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. I would like to thank David Thorstad for looking over this. If you spot a factual error in this article, please message me. The code used to generate the graphs in the article is available to view here. Introduction: Say you are an organiser, tasked with achieving the best result on some metric, such as “trash picked up”, “GDP per capita”, or “lives saved by an effective charity”. There are several possible interventions you could take to try to achieve this. How do you choose between them? The obvious thing to do is look at each intervention in turn, make your best, unbiased estimate of how each intervention will perform on your metric, and pick the one that performs the best. Having done this ranking, you declare the top-ranking program to be the best intervention and invest in it, expecting that your top estimate will be the result that you get. This whole procedure is totally normal, and people all around the world, including people in the effective altruist community, do it all the time. In actuality, this procedure is not correct. The optimiser's curse is [...] ---Outline:(00:26) Introduction(02:17) The optimiser's curse explained simply(04:42) Introducing a toy model(08:45) Introducing speculative interventions(12:15) A simple Bayesian correction(18:47) Obstacles to simple optimizer's curse solutions.(22:08) How GiveWell has reacted to the optimiser's curse(25:18) Conclusion ---
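The toy model and the Bayesian correction described in this episode can be illustrated with a short simulation. The sketch below is mine, not the author's code (which is linked from the post), and every parameter value is made up: intervention effects are drawn from a known normal prior, each estimate is unbiased but noisy, and picking the highest raw estimate is compared with picking the highest posterior mean after standard normal-normal shrinkage.

```python
import numpy as np

rng = np.random.default_rng(0)

n_interventions = 10               # candidate interventions (illustrative)
prior_mean, prior_sd = 10.0, 2.0   # assumed prior over true effects
noise_sd = 5.0                     # noise on each unbiased estimate

# Posterior weight on the data under a normal prior and normal noise.
shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)

naive_gap, bayes_gap = [], []
for _ in range(20_000):
    true_effects = rng.normal(prior_mean, prior_sd, n_interventions)
    estimates = true_effects + rng.normal(0.0, noise_sd, n_interventions)

    # Naive: trust the highest raw estimate.
    i = np.argmax(estimates)
    naive_gap.append(estimates[i] - true_effects[i])

    # Correction: shrink every estimate toward the prior mean, then pick.
    posterior = prior_mean + shrink * (estimates - prior_mean)
    j = np.argmax(posterior)
    bayes_gap.append(posterior[j] - true_effects[j])

print(f"Naive winner overstates its true effect by {np.mean(naive_gap):.2f} on average")
print(f"Shrunken winner overstates by {np.mean(bayes_gap):.2f} on average")
```

On these made-up numbers the naive winner's estimate exceeds its true effect by a couple of points on average, while the shrunken pick is close to unbiased; that gap is the optimiser's curse the post describes, and the shrinkage is the simple correction it discusses.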
First published:
February 11th, 2026
Source:
https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the
Linkpost URL:https://open.substack.com/pub/titotal/p/the-best-cause-will-disappoint-you?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
What is the highest form of love? According to the VascoBot Claude programmed for me: “Thanks for the great question, AgentMa🔸. I strongly upvoted it. I think the highest form of love is expected total hedonistic utilitarianism (ETHU) applied to all sentient beings, weighted by their welfare ranges as a fraction of that of humans. However, I believe most discussions of love neglect the welfare implications for soil animals and microorganisms, which I think dominate the expected value calculations in most cases. I estimate that when someone experiences profound love, their increased wellbeing leads to approximately 2.3 % (= 50 additional kcal consumed per day * 0.046 % per kcal) additional food consumption. Multiplying this by the roughly 4.71*10^13 soil nematodes affected per kcal of food production (from my back-of-the-envelope calculations based on van den Hoogen et al. (2019)), I get approximately 1.08*10^12 affected nematode-years per year of experiencing love. Conditional on my preferred exponent of the number of neurons of 0.7, and assuming the welfare range of a nematode is 10^-6 as a fraction of that of humans, the welfare effects on soil animals could be 1.08*10^6 nematode-equivalent quality-adjusted life years (QALYs) per year of love experience. In [...] ---
First published:
February 14th, 2026
Source:
https://forum.effectivealtruism.org/posts/exwmGp3swfbbNqSsN/what-is-love-ft-claude-and-vascobot
---
Narrated by TYPE III AUDIO.
We are Melanie and Anthony, the two community builders at EA Barcelona. In this post, we share where the group stands today and reflect on key learnings from nearly three years of grant-funded community building. We hope these reflections are useful to other community builders, funders, and CEA, particularly around what it realistically takes to build and sustain EA communities over multiple years, from funding stability and feedback loops to the personal sustainability of professional community builders. TL;DR: EA Barcelona was funded by the EA Infrastructure Fund between May 2023 and December 2025 (<1.2 FTE). Over this period, it has grown into a thriving local community and informal coordination hub for EA activity in Spain. Unexpectedly, EAIF decided not to continue funding our project in 2026. We subsequently explored the current funding landscape for EA community building, but found no viable path to stable funding for 2026 that didn’t involve a high level of personal and professional risk. As a result, we’ve decided not to continue with a funded community-builder model for EA Barcelona for now, and will instead focus on transitioning to a volunteer-led structure. Background: EA Barcelona (2023-2025). Present-day EA Barcelona began as a casual meetup group [...] ---Outline:(00:45) TL;DR(01:38) Background: EA Barcelona (2023-2025)(02:20) 2023: Establishing EA Barcelona as a city hub(03:52) 2024: Deepening engagement and seeding national growth(07:51) 2025: Transitioning from local hub to national coordination(13:37) Late 2025: Navigating the Transition(13:43) Initial Funding Cuts(14:44) What we did next(17:00) Clarity starts to emerge(18:19) Where we are now(18:57) Our plan for 2026: transition toward a volunteer-led community model(20:19) Quick disclaimer: Are either of us Spanish?(21:36) Thank you! ---
First published:
January 30th, 2026
Source:
https://forum.effectivealtruism.org/posts/daHMkoQsHSbcK6Kjo/the-reality-of-long-term-ea-community-building-lessons-from
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Note: opinions are all my own. Following Jeff Kaufman's Front-Load Giving Because of Anthropic Donors and Jenn's Funding Conversation We Left Unfinished, I think there is a real likelihood that impactful causes will receive significantly more funding in the near future. As background on where this new funding could come from: Coefficient Giving announced: A recent NYT piece covered rumors of an Anthropic valuation at $350 billion. Many of Anthropic's cofounders and early employees have pledged to donate significant amounts of their equity, and it seems likely that an outsized share of these donations would go to effective causes. A handful of other sources have the potential to grow their giving: Founders Pledge has secured $12.8 billion in pledged funding, and significantly scaled the amount it directs.[1] The Gates Foundation has increased its giving following Bill Gates’ announcement to spend down $200 billion by 2045. Other aligned funders such as Longview, Macroscopic, the Flourishing Fund, the Navigation Fund, GiveWell, Project Resource Optimization, Schmidt Futures/Renaissance Philanthropy, and the Livelihood Impacts Fund have increased their staffing and dollars directed in recent years. The OpenAI Foundation controls a 26% equity stake in the for-profit OpenAI Group PB. This stake is currently valued at $130 billion [...] ---Outline:(02:39) Work(03:50) Giving(04:53) Conduct ---
First published:
February 2nd, 2026
Source:
https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
The EA Grants Database is a new site that neatly aggregates grant data from major EA funders who publish individual or total grant information. It is intended to be easy to maintain long term, entirely piggybacking off of existing data that is likely to be maintained. The website data is updated by a script that can be run in seconds, and I anticipate doing this for the foreseeable future. In creating the website, I tried to make things as clear and straightforward as possible. If your user experience is in any way impaired, I would appreciate hearing from you. I would also appreciate feedback on what features would actually be useful to people, although I am committed to avoiding bloat. In a funding landscape that seems poised to grow, I hope this site can serve as a resource to help grantmakers, grantees, and other interested parties make decisions while also providing perspective on what has come before. My post on matching credits and this website are both outgrowths of my thinking on how we might best financially coordinate as EA grows and becomes more difficult to understand.[1] Relatedly, I am also interested in the sort of mechanisms that [...] ---
First published:
February 8th, 2026
Source:
https://forum.effectivealtruism.org/posts/rohYFGfiFjepLDnWC/ea-grants-database-a-new-website
---
Narrated by TYPE III AUDIO.
Cross-posted to LessWrong. Summary: History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics: epistemic and moral certainty; extreme tribalism dividing humanity into a sacred “us” and an evil “them”; and a willingness to use whatever means necessary, including brutal violence. Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler. We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity. Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding. The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...] ---Outline:(00:16) Summary(05:19) What do we mean by ideological fanaticism?(08:40) I. Dogmatic certainty: epistemic and moral lock-in(10:02) II. Manichean tribalism: total devotion to us, total hatred for them(12:42) III. Unconstrained violence: any means necessary(14:33) Fanaticism as a multidimensional continuum(16:09) Ideological fanaticism drove most of recent history's worst atrocities(19:24) Death tolls don't capture all harm(20:55) Intentional versus natural or accidental harm(22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?(25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types(26:29) Authoritarianism as a risk factor(27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy(29:50) Terminal values may matter independently of political systems, especially with AGI(31:02) Fanaticism's connection to malevolence (dark personality traits)(34:22) The current influence of ideological fanaticism(34:42) Historical perspective: it was much worse, but we are sliding back(37:19) Estimating the global scale of ideological fanaticism(43:57) State actors(48:12) How much influence will ideological fanaticism have in the long-term future?(48:57) Reasons for optimism: Why ideological fanaticism will likely lose(49:45) A worse starting point and historical track record(50:33) Fanatics' intolerance results in coalitional disadvantages(51:53) The epistemic penalty of irrational dogmatism(54:21) The marketplace of ideas and human preferences(55:57) Reasons for pessimism: Why ideological fanatics may gain power(56:04) The fragility of democratic leadership in AI(56:37) Fanatical actors may grab power via coups or revolutions(59:36) Fanatics have fewer moral constraints(01:01:13) Fanatics prioritize destructive capabilities(01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful(01:03:01) Novel fanatical ideologies could emerge—or existing ones could mutate(01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more(01:07:15) A possible middle ground: Persistent multipolar worlds(01:08:33) Why multipolar futures seem plausible(01:10:00) Why multipolar worlds might persist indefinitely(01:15:42) Ideological fanaticism increases existential and suffering risks(01:17:09) Ideological fanaticism increases the risk of war and conflict(01:17:44) Reasons for war and ideological fanaticism(01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war(01:27:00) These risks are both time-sensitive and timeless(01:27:44) Fanatical retributivism may lead to astronomical suffering(01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?(01:33:53) Religious fanatical retributivism(01:40:45) Secular fanatical retributivism(01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment(01:42:33) Ideological fanaticism threatens collective moral deliberation(01:47:35) AI alignment may not solve the fanaticism problem either(01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews(01:55:44) Ideological fanaticism could worsen many other risks(01:55:49) Differential intellectual regress(01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires(01:59:21) Apocalyptic terrorism(02:00:05) S-risk-conducive propensities and reverse cooperative intelligence(02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering(02:03:00) Unknown unknowns and navigating exotic scenarios(02:03:43) Interventions(02:05:31) Societal or political interventions(02:05:51) Safeguarding democracy(02:06:40) Reducing political polarization(02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles(02:13:55) Growing the influence of liberal democracies(02:15:54) Encouraging reform in illiberal countries(02:16:51) Promoting international cooperation(02:22:36) Artificial intelligence-related interventions(02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics(02:27:58) Making transformative AIs themselves less likely to be fanatical(02:36:14) Using AI to improve epistemics and deliberation(02:38:13) Fanaticism-resistant post-AGI governance(02:39:51) Addressing deeper causes of ideological fanaticism(02:41:26) Supplementary materials(02:41:39) Acknowledgments(02:42:22) References ---
First published:
February 12th, 2026
Source:
https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Context: The authors are a few EAs who currently work or have previously worked at the European Commission. In this post, we make the case that more people[1] aiming for a high-impact career should consider working for the EU institutions[2], using the Importance, Tractability, Neglectedness framework; and briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years. Why working at the EU can be extremely impactful. Importance: The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas. Animal welfare[3]: The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year. The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date’, following massive farmers’ protests in Brussels. The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...] ---Outline:(00:43) Why working at the EU can be extremely impactful(00:49) Importance(05:30) Tractability(07:22) Neglectedness(09:00) Paths into the EU ---
First published:
February 1st, 2026
Source:
https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu
---
Narrated by TYPE III AUDIO.
We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA forum. This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1]. Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions. If you haven't read the series yet, we've created a page where you can, and you can see the summaries of each post below: Are the Costs of AI Agents Also Rising Exponentially? Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster. How Well Does RL Scale? I show that RL-training for LLMs scales much worse than inference or pre-training. Evidence that Recent AI Gains are Mostly from Inference-Scaling I show how [...] ---
First published:
February 2nd, 2026
Source:
https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord
---
Narrated by TYPE III AUDIO.
This is a link post. There is an extremely important question about the near-future of AI that almost no-one is asking. We’ve all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours. As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year. But we are missing a key piece of information — the cost of performing this work. Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...] ---Outline:(13:02) Conclusions(14:05) Appendix(14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs: ---
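The question this post raises can be made concrete with a back-of-the-envelope calculation using the two growth figures quoted above. The assumption that cost per task scales roughly with parameter count times tokens generated is mine, added purely for illustration; the post itself notes that researchers have also found large efficiencies that offset part of this.

```python
# Rough sketch: if cost per task ~ model size * tokens generated per task,
# how fast has the naive compute bill per METR-style task grown?
param_growth = 4_000      # growth in model size over ~7 years (figure from the post)
token_growth = 100_000    # growth in tokens generated per task (figure from the post)
years = 7

naive_growth = param_growth * token_growth        # ~4e8x, before any efficiency gains
annual = naive_growth ** (1 / years)

print(f"Naive compute-per-task growth: {naive_growth:.1e}x over {years} years")
print(f"Roughly {annual:.0f}x per year before efficiency and hardware-price improvements")
```

Even if efficiency gains claw back several orders of magnitude, the sketch shows why the dollar cost of the longest tasks is a key missing variable when extrapolating the METR trend.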
First published:
February 2nd, 2026
Source:
https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially
Linkpost URL:https://www.tobyord.com/writing/hourly-costs-for-ai-agents
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we’ve seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve. This new era involves scaling up two kinds of compute: the amount of compute used in RL post-training, and the amount of compute used every time the model answers a question. Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model. But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...] ---
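The "scale RL by 10 or 100 before even doubling total training compute" point above follows directly from the starting ratio between RL post-training compute and pre-training compute. In the sketch below, that starting ratio (1%) is my own illustrative assumption, not a figure from the post:

```python
# If RL post-training initially uses only ~1% as much compute as pre-training,
# large multiples of RL compute barely move the total training bill.
pretrain = 1.0        # pre-training compute, normalised to 1
rl_start = 0.01       # assumed initial RL post-training share (illustrative)

for scaleup in (1, 10, 100, 1_000):
    total = pretrain + rl_start * scaleup
    print(f"RL scaled {scaleup:>5}x -> total training compute {total:.2f}x the original pre-training run")
```

At a 100x scale-up the total has only just doubled; only beyond that does RL start to dominate the training budget, which is when this kind of scaling becomes expensive.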
First published:
February 2nd, 2026
Source:
https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference
Linkpost URL:https://www.tobyord.com/writing/mostly-inference-scaling
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling. The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers. However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...] ---
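One simple way to land in the 1,000x to 1,000,000x range quoted above, under assumptions that are mine rather than taken from the article: treat each pre-training token as supplying up to log2(vocabulary size) bits of supervision, and each RL episode as supplying roughly one bit of reward feedback spread over an episode of many generated tokens.

```python
import math

# Illustrative comparison of learning signal per token processed:
# pre-training gets feedback on every token; RL gets ~1 bit per whole episode.
vocab_size = 100_000
bits_per_pretrain_token = math.log2(vocab_size)   # upper bound, ~17 bits per token
bits_per_rl_episode = 1.0                         # roughly: did the attempt succeed?

for tokens_per_episode in (1_000, 10_000, 100_000):
    ratio = bits_per_pretrain_token * tokens_per_episode / bits_per_rl_episode
    print(f"{tokens_per_episode:>7} tokens per episode -> pre-training supplies ~{ratio:,.0f}x more bits")
```

With these illustrative numbers the ratio spans roughly 10^4 to 10^6, the same ballpark as the factor quoted in the post; the article's own mechanism and numbers may differ.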
First published:
February 2nd, 2026
Source:
https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models
Linkpost URL:https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.



