EA Forum Podcast (Curated & popular)

Author: EA Forum Team

Subscribed: 19 · Played: 916

Description

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.

If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
275 Episodes

This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless. Content warning: description of animal death. I live in a small, one-bedroom flat in central London. Sometime in the summer of 2023, I started noticing moths around my flat. I didn’t pay much attention to it, since they seemed pretty harmless: they obviously weren’t food moths, since they were localised in my bedroom, and they didn’t seem to be chewing holes in any of my clothes — months went by and no holes appeared. [1] The larvae only seemed to be in my carpet. Eventually, their numbers started increasing, so I decided to do something about it. I Googled humane and nonlethal ways to deal with moth infestations, but found nothing. There were lots of sources [...]
The original text contained 3 footnotes which were omitted from this narration.
First published: March 25th, 2024
Source: https://forum.effectivealtruism.org/posts/Ax5PwjqtrunQJgjsA/killing-the-moths
Narrated by TYPE III AUDIO.

I've been writing a few posts critical of EA over at my blog. They might be of interest to people here:
- Unflattering aspects of Effective Altruism
- Alternative Visions of Effective Altruism
- Auftragstaktik
- Hurdles of using forecasting as a tool for making sense of AI progress
- Brief thoughts on CEA's stewardship of the EA Forum
- Why are we not harder, better, faster, stronger?
...and there are a few smaller pieces on my blog as well. I appreciate comments and perspectives anywhere, but prefer them over at the individual posts at my blog, since I have a large number of disagreements with the EA Forum's approach to moderation, curation, aesthetics or cost-effectiveness.
First published: March 15th, 2024
Source: https://forum.effectivealtruism.org/posts/coWvsGuJPyiqBdrhC/unflattering-aspects-of-effective-altruism
Narrated by TYPE III AUDIO.

Satisfaction with the EA community. Reported satisfaction, from 1 (Very dissatisfied) to 10 (Very satisfied), in December 2023/January 2024 was lower than when we last measured it shortly after the FTX crisis at the end of 2022 (6.77 vs. 6.99, respectively). However, December 2023/January 2024 satisfaction ratings were higher than what people recalled their satisfaction being “shortly after the FTX collapse” (and their recalled level of satisfaction was lower than what we measured their satisfaction as being at the end of 2022). We think it's plausible that satisfaction reached a nadir at some point later than December 2022, but may have improved since that point, while still being lower than pre-FTX. Reasons for dissatisfaction with EA: A number of factors were cited a similar number of times by respondents as Very important reasons for dissatisfaction, among those who provided a reason: Cause prioritization (22%), Leadership (20%), Justice, Equity, Inclusion and [...]
Outline:
(04:15) Community satisfaction over time
(08:12) Reasons for dissatisfaction with the EA community
(13:07) Changes in EA engagement
(14:00) Changes in EA-related behaviors
(15:57) Perception of issues in the EA community
(16:14) Leadership vacuum
(16:46) Desire for more community change following FTX
(17:13) Trust in EA-related organizations
(18:53) Appendix
(18:56) Effect sizes for satisfaction over time
(20:12) Email vs non-email referrers
(21:30) Acknowledgments
The original text contained 7 footnotes which were omitted from this narration.
First published: March 20th, 2024
Source: https://forum.effectivealtruism.org/posts/aF6nh4LW6sSbgMLzL/updates-on-community-health-survey-results
Narrated by TYPE III AUDIO.

TLDR: we think the top limiting factor for new charities has shifted from founder talent to early-stage funding. We have historically written about limiting factors and how they affect our thinking about the highest impact areas. For new charities, over the past 4-5 years, fairly consistently, the limiting factor has been people; specifically, the fairly rare founder profile that we look for and think has the best chance at founding a field-leading charity. However, we think over the last 12 months this picture has changed in some important ways: Firstly, we have started founding more charities: After founding ~5 charities a year in 2021 and 2022, we founded 8 charities in 2023, and we think there are good odds we will be able to found ~10-12 charities in 2024. This is a pretty large change. We have not changed our standards for [...]
First published: March 19th, 2024
Source: https://forum.effectivealtruism.org/posts/AXhC4JhWFfsjBB4CA/the-current-limiting-factor-for-new-charities
Narrated by TYPE III AUDIO.

I've written before about trying to bring US private foundations into EA as major funders. I got some helpful feedback and haven't really pursued it further. I study US private foundations as a researcher and recently conducted a qualitative data collection among staff at 20 very large US private foundations ($100m+ assets). The subject of the study isn't directly EA related (it focused mostly on how they use accounting/effectiveness information and accountability), but it got me thinking a lot! Some interesting observations that I am going to explore further, in future forum posts (if y'all think it's interesting) and future research papers: Trust-based philanthropy (TBP), a funder movement that's only been around since 2020, has had a HUGE impact on very large private foundations. All 20 indicated that they had already integrated TBP into their grantmaking, or were in the process of doing so. I can't emphasize enough how influential TBP has been. [...]
First published: March 15th, 2024
Source: https://forum.effectivealtruism.org/posts/Pnv6PRyeCPZknsbEw/the-lack-of-ea-in-us-private-foundations
Narrated by TYPE III AUDIO.

This is a draft amnesty post.
Summary
It seems plausible that fetuses can suffer from 12 weeks of age, and quite reasonable that they can suffer from 24 weeks of age. Some late-term abortion procedures seem like they might cause a fetus excruciating suffering. Over 35,000 of these procedures occur each year in the US alone. Further research on interventions to reduce this suffering, such as mandating fetal anesthesia for late-term abortions, would be valuable.
Background
Most people agree that a fetus has the capacity to suffer at some point. If a fetus has the capacity to suffer, then we ought to reduce that suffering when possible. Fetal anesthesia is standard practice for fetal surgery,[1] but I am unaware of it ever being used during late-term abortions. If the fetus can suffer, these procedures likely cause the fetus extreme pain. I think the cultural environment EAs usually [...]
Outline:
(00:40) Background
(01:28) Surgical Abortion Procedures
(01:34) LI (Labor Induction)
(02:18) D&E (Dilation and Evacuation)
(02:38) When Can a Fetus Suffer?
(03:46) Scale in US and UK
(03:50) 2021 UK
(04:05) 2020 USA
(05:02) Interventions
The original text contained 11 footnotes which were omitted from this narration.
First published: March 17th, 2024
Source: https://forum.effectivealtruism.org/posts/vhKZ7hyzmcrWuBwDL/the-scale-of-fetal-suffering-in-late-term-abortions
Narrated by TYPE III AUDIO.

I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes.
Current Orthodoxy
I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:
- neartermist - incl. animal welfare
- neartermist - human-only
- longtermism / x-risk
We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we [...]
Outline:
(00:35) Current Orthodoxy
(01:17) The Problem
(02:43) A Proposed Solution
(04:26) Implications
The original text contained 3 footnotes which were omitted from this narration.
First published: March 18th, 2024
Source: https://forum.effectivealtruism.org/posts/dmEwQZSbPsYhFay2G/ea-worldviews-need-rethinking
Narrated by TYPE III AUDIO.

In 2022, Aquatic Life Institute (ALI) led the charge in Banding Together to Ban Octopus Farming. In 2024, we are ecstatic to see these efforts come to fruition in Washington State. This landmark achievement underscores our collective commitment to rejecting the introduction of additional animals into the seafood system and positions Washington State as a true pioneer in aquatic animal welfare legislation. In light of this success, ALI is joining forces with various organizations to advocate for similar bans across the United States and utilizing these monumental examples as leverage in continuous European endeavors.
2022
Aquatic Life Institute (ALI) and members of the Aquatic Animal Alliance (AAA) comment on the Environmental Impact of Nueva Pescanova before the Government of the Canary Islands: General Directorate of Fisheries and the General Directorate for the Fight against Climate Change and the Environment. Allowing this industrial octopus farm to operate [...]
Outline:
(00:45) 2022
(01:44) 2023
(03:37) 2024
(04:50) March 14, 2024
First published: March 15th, 2024
Source: https://forum.effectivealtruism.org/posts/AD8QchabkrygXkdgm/we-did-it-victory-for-octopus-in-washington-state
Narrated by TYPE III AUDIO.

Maternal Health Initiative (MHI) was founded out of Charity Entrepreneurship (AIM)'s 2022 Incubation Program and has since piloted two interventions integrating postpartum (post-birth) contraceptive counselling into routine care appointments in Ghana. We concluded this pilot work in December 2023. A stronger understanding of the context and impact of postpartum family planning work, on the back of our pilot results, has led us to conclude that our intervention is not among the most cost-effective interventions available. We’ve therefore decided to shut down and redirect our funding to other organisations. This article summarises MHI's work, our assessment of the value of postpartum family planning programming, and our decision to shut down MHI as an organisation in light of our results. We also share some lessons learned. An in-depth report expanding on the same themes is available on our website. We encourage you to skip to the sections that are [...]
Outline:
(01:40) Why we chose to pursue postpartum family planning
(01:45) Why family planning?
(02:28) Why postpartum (post-birth)?
(03:49) MHI: An overview of our work
(06:16) Pilot: Design
(08:54) Pilot: Results
(08:58) Sample Population
(09:36) Implementation
(10:19) Changes in Contraceptive Uptake
(11:13) Conclusions From Our Pilot Results
(13:13) Why We No Longer Believe Postpartum Family Planning Is Among The Most Cost-Effective Interventions
(13:35) Evidence of Limited Effects on Unintended Pregnancies
(16:06) The Prevalence and Impact of Postpartum Insusceptibility
(17:07) Short-Spaced Pregnancies
(17:48) Theory of Change
(18:14) Other Factors
(18:28) Broader Thoughts on Family Planning
(18:45) Concerns
(20:43) Reasons We Still Believe In The Importance Of Family Planning Work
(22:53) Choosing to Shut Down
(23:43) Considering a Pivot
(26:47) Proceeding to Shut Down
(27:38) Lessons
(33:39) Conclusions
First published: March 15th, 2024
Source: https://forum.effectivealtruism.org/posts/MWSwSXNmsSBaEKtKw/maternal-health-initiative-is-shutting-down
Narrated by TYPE III AUDIO.

by Anemone Franz and Tessa Alexanian
80,000 Hours ranks preventing catastrophic pandemics as one of the most pressing problems in the world, and we have advised many of our readers to work in biosecurity to have high-impact careers. But biosecurity is a complex field, and while the threat is undoubtedly large, there's a lot of disagreement about how best to conceptualise and mitigate the risks. We wanted to get a better sense of how the people thinking about these threats every day perceive the risks. So we decided to talk to more than a dozen biosecurity experts to better understand their views. To make them feel comfortable speaking candidly, we granted the experts we spoke to anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below. We think, though [...]
Outline:
(02:47) Expert 1: Failures of imagination and appeals to authority
(04:54) Expert 2: Exaggerating small-scale risks and Western chauvinism
(06:32) Expert 3: Useless reports and the world beyond effective altruism
(07:09) Expert 4: Overconfidence in threat models
(08:05) Expert 5: Interventions that don’t address the largest threats
(09:01) Expert 6: Over-reliance on past experience and overconfidence about the future
(10:59) Expert 7: Resistance from governments and antagonising expert communities
(13:50) Expert 8: Preparing for the “Big One” looks like preparing for the smaller ones
(14:31) Expert 9: ChatGPT is not dangerous
(16:52) Expert 10: The barriers to bioweapons are not that high
(18:24) Expert 11: We don’t need silver bullets
(18:57) Expert 12: Three failure modes
(19:32) Expert 13: Groupthink and failing to be scope sensitive
(20:47) Expert 14: COVID-19 does not equal biological weapons
First published: February 29th, 2024
Source: https://forum.effectivealtruism.org/posts/NGFkW4Qxww9jGESrk/what-are-the-biggest-misconceptions-about-biosecurity-and
Linkpost URL: https://80000hours.org/articles/anonymous-misconceptions-about-biosecurity/
Narrated by TYPE III AUDIO.

This might be one of the best pieces of introductory content to the concepts of effective giving that GWWC has produced in recent years! I hit the streets of London to engage with everyday people about their views on charity, giving back, and where they thought they stood on the global income scale. This video was made to engage people with some of the core concepts of income inequality and charity effectiveness in the hope of getting more people interested in giving effectively. If you enjoy it, I'd really appreciate a like, comment, or share on YouTube to help us reach more people! There's a blog post and transcript of the video available too. Big thanks to Suzy Sheperd for directing and editing this project and to Julian Jamison and Habiba Banu for being interviewed!
First published: March 13th, 2024
Source: https://forum.effectivealtruism.org/posts/tX2MqRfZtz7TqYCQi/new-video-you-re-richer-than-you-realise
Linkpost URL: https://www.youtube.com/watch?v=ekIRVhbpiQw
Narrated by TYPE III AUDIO.

Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]
Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI. In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf
Abstract. We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the “skeptical” group (mainly “superforecasters”) predicted a 0.12% chance. Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that [...]
Outline:
(02:13) Extended Executive Summary
(02:44) Methods
(03:53) Results: What drives (and doesn’t drive) disagreement over AI risk
(04:32) Hypothesis #1 - Disagreements about AI risk persist due to lack of engagement among participants, low quality of participants, or because the skeptic and concerned groups did not understand each other's arguments
(05:11) Hypothesis #2 - Disagreements about AI risk are explained by different short-term expectations (e.g. about AI capabilities, AI policy, or other factors that could be observed by 2030)
(07:53) Hypothesis #3 - Disagreements about AI risk are explained by different long-term expectations
(10:35) Hypothesis #4 - These groups have fundamental worldview disagreements that go beyond the discussion about AI
(11:31) Results: Forecasting methodology
(12:15) Broader scientific implications
(13:09) Directions for further research
The original text contained 10 footnotes which were omitted from this narration.
First published: March 11th, 2024
Source: https://forum.effectivealtruism.org/posts/orhjaZ3AJMHzDzckZ/results-from-an-adversarial-collaboration-on-ai-risk-fri
Linkpost URL: https://forecastingresearch.org/s/AIcollaboration.pdf
Narrated by TYPE III AUDIO.

Topic of the post: I list potential things to work on other than keeping AI under human control.
Motivation
The EA community has long been worried about AI safety. Most of the efforts going into AI safety are focused on making sure humans are able to control AI. Regardless of whether we succeed at this, I think there's a lot of additional value on the line. First of all, if we succeed at keeping AI under human control, there are still a lot of things that can go wrong. My perception is that this has recently gotten more attention, for example here, here, here, and at least indirectly here (I haven’t read all these posts, and have chosen them to illustrate that others have made this point purely based on how easily I could find them). Why controlling AI doesn’t solve everything is not the main topic of this [...]
Outline:
(00:14) Motivation
(04:03) Other works and where this list fits in
(05:44) The list
(05:47) Making AI-powered humanity wiser, making AIs wiser
(06:35) More detailed definition
(08:15) Preparing AIs for reasoning about philosophically complex topics with high stakes
(09:25) Improve epistemics during an early AI period
(10:35) Metacognition for areas where it is better for you to avoid information
(13:26) Improve decision theoretical reasoning and anthropic beliefs
(17:34) Compatibility of earth-originating AI with other intelligent life
(19:16) More detailed definition
(20:55) How to make progress on this
(24:46) Surrogate goals and other Safe Pareto improvements
(25:54) More detailed definition
(27:28) How to make progress on this
(28:39) AI personality profiling and avoiding the worst AI personality traits
(30:20) How to make progress on this
(31:46) Avoiding harm from how we train AIs: Niceness, near miss, and sign flip
(32:10) More detailed definition
(35:07) How to make progress on this
(37:55) Reducing human malevolence
(38:32) How to make progress on this
(39:14) Hot take: I want more surveys
(40:35) Acknowledgements
First published: March 3rd, 2024
Source: https://forum.effectivealtruism.org/posts/wE7KPnjZHBjxLKNno/ai-things-that-are-perhaps-as-important-as-human-controlled
Narrated by TYPE III AUDIO.

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.
How factory farmers block progress — and what we can do about it
Most people agree that farmed animals deserve better legal protections: 84% of Europeans, 61-80% of Americans, 70% of Brazilians, 51-66% of Chinese, and 52% of Indians agree with some version of that statement. Yet almost all farmed animals globally still lack even the most basic protections. America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power. Fully 89% of Europeans think it's important that animals not be kept in individual cages. Yet the European Commission just implicitly sided with the 8% who don’t by shelving [...]
First published: February 28th, 2024
Source: https://forum.effectivealtruism.org/posts/BvXkG3PLfdmvoECFb/this-is-why-we-can-t-have-nice-laws
Narrated by TYPE III AUDIO.

This post shares our journey in starting an Effective Altruism (EA) charity/project focused on Mediterranean fish welfare, the challenges we faced, our key learnings, and the reasons behind our decision to conclude the project. Actual research results are published in a Literature review and article.
Key points
The key points of this post are summarized as follows:
- We launched a project with the goal of enhancing fish welfare in Mediterranean aquaculture.
- We chose to limit our project to gathering information and decided against continuing our advocacy efforts after our initial six months.
- Our strategy, which focused on farmer-friendly outreach, was not effective in engaging farmers.
- The rationale behind our decision is the recognition that existing organizations are already performing excellent work, and we believe that funders should support these established organizations instead of starting a new one.
- The support and resources from the Effective Altruism (EA) and [...]
Outline:
(00:24) Key points
(01:41) Personal/Project background
(02:52) Why work on Mediterranean fish welfare?
(06:46) Project plans and initial work
(10:32) Initial work
(13:12) Farmer outreach
(19:52) Wrapping up the project
(21:33) Other takeaways from starting a project
(23:48) Resources for launching a new charity
The original text contained 1 footnote which was omitted from this narration.
First published: February 26th, 2024
Source: https://forum.effectivealtruism.org/posts/z59wybc56FCAysrAe/how-we-started-our-own-ea-charity-and-why-we-decided-to-wrap
Narrated by TYPE III AUDIO.

Epistemic status: the stories here are all as true as possible from memory, but my memory is so-so.
[Image: An AI made this]
This is going to be big
It's late Summer 2017. I am on a walk in the Mendip Hills. It's warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We’ve travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don’t know yet. He seems smart, and cheerful. He tells me that he is an AI researcher at Google DeepMind. He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it to. I ask him if [...]
Outline:
(00:16) This is going to be big
(01:21) This is going to be bad
(02:44) It's a long way off though
(03:50) This is fine
(05:10) It's probably something else in your life
(06:15) No-one in my org puts money in their pension
(07:16) Doom-vibes
(08:45) Maths might help
(10:28) A problem shared is…
(12:36) Hope
First published: February 16th, 2024
Source: https://forum.effectivealtruism.org/posts/YScdhSQBhkxpfcF3t/no-one-in-my-org-puts-money-in-their-pension
Linkpost URL: https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their
Narrated by TYPE III AUDIO.

Open Philanthropy strives to help others as much as we can with the resources available to us. To find the best opportunities to help others, we rely heavily on scientific and social scientific research. In some cases, we would find it helpful to have more research in order to evaluate a particular grant or cause area. Below, we’ve listed a set of social scientific questions for which we are actively seeking more evidence.[1] We believe the answers to these questions have the potential to impact our grantmaking. (See also our list of research topics for animal welfare.) If you know of any research that touches on these questions, we would welcome hearing from you. At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research any of these, please [...]
Outline:
(00:59) Land Use Reform
(05:53) Health
(15:39) Migration
(17:21) Education
(19:27) Science and Metascience
(29:18) Global Development
(31:26) Other
The original text contained 4 footnotes which were omitted from this narration.
First published: February 15th, 2024
Source: https://forum.effectivealtruism.org/posts/3Y7c7MXf3BzgruTWv/social-science-research-we-d-like-to-see-on-global-health
Linkpost URL: https://www.openphilanthropy.org/research/social-science-research-topics-for-global-health-and-wellbeing/
Narrated by TYPE III AUDIO.

Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who's worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.” In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?” This is how I begin the cover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren’t part of it. Whether you're new to the topic [...]
First published: February 12th, 2024
Source: https://forum.effectivealtruism.org/posts/pRAjLTQxWJJkygWwK/my-cover-story-in-jacobin-on-ai-capitalism-and-the-x-risk
Linkpost URL: https://jacobin.com/2024/01/can-humanity-survive-ai
Narrated by TYPE III AUDIO.

A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we had left the EA community, and explained why. We were partly thinking through what might cause us to drift, in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we’ve come. Although the emails hadn’t led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We’re also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord [...]
The original text contained 3 footnotes which were omitted from this narration.
First published: February 12th, 2024
Source: https://forum.effectivealtruism.org/posts/zEMvHK9Qa4pczWbJg/on-being-an-ea-for-decades
Narrated by TYPE III AUDIO.

A lot of great projects have started in informal ways: a startup in someone's garage, or a scrappy project run by volunteers. Sometimes people jump into these and are happy they did so. But I’ve also seen people caught off-guard by arrangements that weren’t what they expected, especially early in their careers. I’ve been there, when I was a new graduate interning at a religious center that came with room, board, and $200 a month. I remember my horror when my dentist checkup cost most of a month's income, or when I found out that my nine-month internship came with zero vacation days. It was an overall positive experience for me (after we worked out the vacation thing), but it's better to go in clear-eyed. First, I’ve listed a bunch of things to consider. These are drawn from several different situations I’ve heard about, both inside and outside EA. [...]
Outline:
(01:11) Things to consider
(08:06) Advice from a few EAs
First published: February 12th, 2024
Source: https://forum.effectivealtruism.org/posts/RXFcmrf7E5fLhb43e/things-to-check-about-a-job-or-internship
Narrated by TYPE III AUDIO.