EA Forum Podcast (Curated & popular)


Author: EA Forum Team

Subscribed: 28 · Played: 1,394

Description

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.

If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
328 Episodes
TL;DR: Screwworm Free Future is a new group seeking support to advance work on eradicating the New World Screwworm in South America. The New World Screwworm (C. hominivorax, literally "man-eater") causes extreme suffering to hundreds of millions of wild and domestic animals every year. To date we’ve held private meetings with government officials, experts from the private sector, academics, and animal advocates. We believe that work on the NWS is valuable and we want to continue our research and begin lobbying. Our analysis suggests we could prevent about 100 animals from experiencing an excruciating death per dollar donated, though this estimate has extreme uncertainty. The screwworm “wall” in Panama has recently been breached, creating both an urgent need and an opportunity to address this problem. We are seeking $15,000 to fund a part-time lead and could absorb up to $100,000 to build a full-time team, which would include a [...]

Outline:
(00:07) TL;DR
(02:13) What's the deal with the New World Screwworm?
(06:01) What we've learnt so far
(08:46) Future plans
(12:14) Relevant EA discussions on Screwworms

The original text contained 16 footnotes which were omitted from this narration.

First published: December 30th, 2024
Source: https://forum.effectivealtruism.org/posts/d2HJ3eysBdPoiZBnJ/launching-screwworm-free-future-funding-and-support-request

Narrated by TYPE III AUDIO.
Summary: There's a near consensus that EA needs funding diversification, but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs, and this simply isn’t possible/worth it at this time.

Lots of people want funding diversification. It has been two years since the FTX collapse, and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes (“it sure would be great if funding in EA were more diversified”), to organizations trying to get a certain percentage of their budgets from non-OP sources or saying they want to diversify their funding base (1, 2, 3, 4, 5, 6, 7, 8), to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification (9). Everyone seems to agree: other people should be giving more money to the EA projects.

The Math. Of course, I [...]

Outline:
(00:07) Summary
(00:29) Lots of people want funding diversification
(01:10) The Math
(03:46) Weighted Average
(05:02) Making a lot of money to donate is difficult
(09:17) Solutions
(09:21) 1. Get more funders
(10:34) 2. Spend Less
(12:48) 3. Splitting up Open Philanthropy into Several Organizations
(13:51) 4. More For-Profit EA Work/EA Organizations Charging for Their Work
(16:22) 5. Acceptance
(16:58) My Personal Solution
(17:25) Conclusion
(18:01) Footnote 1: I was approached at several EAGs, including a few weeks ago in Boston, to donate to certain organizations specifically because they want to get a certain X% (30, 50, etc.) from non-OP sources, but I’m sure I can find organizations who are very public about this
(18:20) Footnote 2

First published: December 27th, 2024
Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is

Narrated by TYPE III AUDIO.
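To make the episode's "pretty basic math" concrete, here is a minimal back-of-the-envelope sketch. It assumes only the ~90% figure quoted above; the function name and the 50% target share are illustrative, not taken from the post.

```python
# Back-of-the-envelope: if Open Phil (OP) supplies ~90% of EA funding, how much
# would non-OP giving need to grow (with OP giving held fixed) for OP's share
# of the total to fall to a given target?

def required_non_op_growth(op_share: float, target_op_share: float) -> float:
    """Factor by which non-OP funding must be multiplied so that OP's share
    of total funding falls from op_share to target_op_share."""
    non_op_share = 1.0 - op_share
    # Solve op_share / (op_share + k * non_op_share) = target_op_share for k.
    return op_share * (1.0 - target_op_share) / (target_op_share * non_op_share)

# With OP at 90%, halving its share to 50% requires non-OP funding to grow 9x.
print(required_non_op_growth(0.90, 0.50))  # 9.0
```

On these assumptions, merely halving Open Phil's share would take roughly a ninefold increase in everyone else's giving, which is the sense in which diversification "just isn't possible" at current funding levels.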
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Progress for factory-farmed animals is far too slow. But it is happening. Practices that once seemed permanent — like battery cages and the killing of male chicks — are now on a slow path to extinction. Animals who were once ignored — like fish and even shrimp — are now finally seeing reforms, by the billions. It's easy to gloss over such numbers. So, as you read the wins below, I encourage you to consider each of these animals as an individual. A hen no longer confined to a cage, a chick no longer macerated alive, a fish no longer dying a prolonged death. I also encourage you to reflect on the role you and [...]

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/okEwGpNJnE5Ed9bnW/ten-big-wins-in-2024-for-farmed-animals

Narrated by TYPE III AUDIO.
This is a link post. The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace. However, there's a more recent dynamic that's created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1]

Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures. But they’ve recently stopped funding several categories of work (my own categories, not theirs):

- Many Republican-leaning think tanks, such as the Foundation for American Innovation.
- “Post-alignment” causes such as digital sentience or regulation of explosive growth.
- The rationality community, including LessWrong, Lightcone, SPARC, CFAR, MIRI.
- High school outreach, such as Non-trivial.

In addition, they are currently not funding (or not fully funding):

- Many non-US think tanks, who don’t want to appear influenced by an American organisation (there's now probably more [...]

The original text contained 2 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: December 21st, 2024
Source: https://forum.effectivealtruism.org/posts/s9dyyge6uLG5ScwEp/it-looks-like-there-are-some-good-funding-opportunities-in

Narrated by TYPE III AUDIO.
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.

Abstract: We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]

Outline:
(00:24) Abstract
(02:20) Twitter thread
(05:43) Blog post
(07:43) Experimental setup
(12:04) Further analyses
(15:47) Caveats
(17:19) Conclusion
(18:00) Acknowledgements
(18:11) Career opportunities at Anthropic
(18:43) Career opportunities at Redwood Research

The original text contained 2 footnotes which were omitted from this narration. The original text contained 8 images which were described by AI.

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/RHqdSMscX25u7byQF/alignment-faking-in-large-language-models

Narrated by TYPE III AUDIO.
Summary: My sense is some EAs act like/hope they will be assigned the perfect impactful career by some combination of 80,000 Hours recommendations (and similar) and ‘perceived consensus views in EA’. But your life is full of specific factors, many impactful jobs haven’t yet been spotted by other EAs, and career advice is importantly iterative. Instead of simply deferring, I recommend a combination of:

- Your own hard work figuring out your path to impact.
- (Still) Integrating expert advice.
- Support from the community, and close connections who know your context.

Thank you to Alex Rahl-Kaplan, Alix Pham, Caitlin Borke, Claude, Matt Reardon, and Michelle Hutchinson for the thoughtful feedback that made this post better. Claude also kindly offered to take the blame for all the mistakes I might have made.

Introduction. Question: How do you figure out how to do the most good with your career? Answer: [...]

Outline:
(00:03) Summary
(01:06) Introduction
(02:58) Why there isn’t an EA sorting hat
(03:24) 1. Your life is full of specific factors to incorporate (aka personal fit)
(05:04) 2. EA-branded jobs are scarce and many impactful jobs aren’t on EA job boards
(05:59) 3. You need to have your own internal model of how to do good
(07:00) 4. Career advice isn’t once-and-done; it's iterative
(07:55) Why do we expect a sorting hat?
(08:12) 1. Choosing an impactful career is hard, deferring is tempting
(08:48) 2. The 80,000 elephants in the room
(09:41) 3. GiveWell and other charity recommendations
(10:33) What are we supposed to do instead?
(10:56) 1. Your own hard work
(11:20) 2. Advice from experts
(12:10) 3. Support from community
(13:09) Final thoughts

The original text contained 8 footnotes which were omitted from this narration.

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/5zzbzbYZcocoLnLif/there-is-no-sorting-hat-in-ea

Narrated by TYPE III AUDIO.
Summary:
- Nigeria's official population (~220-230 million) may be significantly inflated and could be closer to 170 million.
- This overcount is likely driven by political and financial incentives for states.
- I'm unsure of the implications if this is accurate. If states have uniformly inflated populations, then the distribution of resources could still be divided evenly. Nigeria would still be the biggest country in Africa, and companies/governments/NGOs would have similar cost-benefit analyses for working and investing there.
- This is a very shallow investigation.

Why did I bother looking into this? The text below sparked an investigation into Nigeria's population claims. It was slightly hidden in the 4th section of one of Yaw's excellent Substack posts. Yaw went on to explain his reasoning for thinking the population was much lower than current estimates. Nigeria is a large country with no deep shared history among the different tribes. Due [...]

Outline:
(00:04) Summary
(00:49) Why did I bother looking into this?
(03:20) Other Sources
(09:55) Potential Data Sources
(10:05) National Identification Numbers
(11:46) Tech usage
(12:15) Sim Cards
(14:37) UN Population Estimates and Projections
(18:13) Incentives for not caring
(19:08) International Organisations
(19:42) Private Sector
(20:07) Implications
(20:25) International Standing
(20:45) GDP
(21:04) Development Indicators
(21:34) Domestic Politics
(21:53) International Aid
(22:10) Future Research
(22:15) Nigeria
(23:27) Other Countries

The original text contained 9 footnotes which were omitted from this narration. The original text contained 2 images which were described by AI.

First published: November 22nd, 2024
Source: https://forum.effectivealtruism.org/posts/824rsHCXuqTmBb8se/nigeria-s-missing-50-million-people

Narrated by TYPE III AUDIO.
Summary: This post shares my personal experience with CEA's Community Health team, focusing on how they helped me navigate a difficult situation in 2021. I aim to provide others with a concrete example of when and how to reach out to Community Health, supplementing the information on their website with a first-hand account. I also share why their work has helped me remain engaged with the EA community. Further, I try to highlight why a centralised Community Health team is crucial for identifying patterns of concerning behaviour.

Introduction: The Community Health team at the Centre for Effective Altruism has been an important source of support throughout my EA journey. As stated on their website, they “aim to strengthen the effective altruism community's ability to fulfil its potential for impact, and to address problems that could prevent that.” I don’t know the details of their day-to-day, but I understand that [...]

Outline:
(00:05) Summary
(00:41) Introduction
(01:32) My goals with this post are:
(02:05) My experience in 2021
(05:17) Three personal takeaways
(07:22) What is the team like now?

First published: December 16th, 2024
Source: https://forum.effectivealtruism.org/posts/aTmzt4TbTx7hiSAN8/my-experience-with-the-community-health-team-at-cea

Narrated by TYPE III AUDIO.
This is a link post. Gwern recently wrote a very interesting thread about Chinese AI strategy and the downsides of US AI racing. It's both quite short and hard to excerpt, so here is almost the entire thing:

Hsu is a long-time China hawk and has been talking up the scientific & technological capabilities of the CCP for a long time, saying they were going to surpass the West any moment now, so I found this interesting when Hsu explains that:

- the scientific culture of China is 'mafia' like (Hsu's term, not mine) and focused on legible easily-cited incremental research, and is against making any daring research leaps or controversial breakthroughs... but is capable of extremely high quality world-class followup and large scientific investments given a clear objective target and government marching orders
- there is no interest or investment in an AI arms race, in part [...]

First published: November 25th, 2024
Source: https://forum.effectivealtruism.org/posts/Kz8WpQkCckN9JNHCN/gwern-on-creating-your-own-ai-race-and-china-s-fast-follower

Narrated by TYPE III AUDIO.
This is a link post. Science just released an article, with an accompanying technical report, about a neglected source of biological risk. From the abstract of the technical report:

This report describes the technical feasibility of creating mirror bacteria and the potentially serious and wide-ranging risks that they could pose to humans, other animals, plants, and the environment... In a mirror bacterium, all of the chiral molecules of existing bacteria—proteins, nucleic acids, and metabolites—are replaced by their mirror images. Mirror bacteria could not evolve from existing life, but their creation will become increasingly feasible as science advances. Interactions between organisms often depend on chirality, and so interactions between natural organisms and mirror bacteria would be profoundly different from those between natural organisms. Most importantly, immune defenses and predation typically rely on interactions between chiral molecules that could often fail to detect or kill mirror bacteria due to their reversed [...]

First published: December 12th, 2024
Source: https://forum.effectivealtruism.org/posts/9pkjXwe2nFun32hR2/technical-report-on-mirror-bacteria-feasibility-and-risks

Narrated by TYPE III AUDIO.
We’re thinking about changing our narrator's voice. There are three new voices on the shortlist. They’re all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agreeable. But thousands of listening hours are at stake, so we thought it’d be worth giving listeners an opportunity to vote—just in case there’s a strong collective preference.

Listen and vote. Please listen here: https://files.type3.audio/ea-forum-poll/

And vote here: https://forms.gle/m7Ffk3EGorUn4XU46

It’ll take 1-10 minutes, depending on how much of the sample you decide to listen to. We'll collect votes until Monday, December 16th. Thanks!

Outline:
(00:47) Listen and vote
(01:11) Other feedback?

The original text contained 1 footnote which was omitted from this narration.

First published: December 10th, 2024
Source: https://forum.effectivealtruism.org/posts/Bhd5GMyyGbusB22Hp/ea-forum-audio-help-us-choose-the-new-voice

Narrated by TYPE III AUDIO.
Allan and I recorded this podcast on Tuesday 10th December, based on the questions in this AMA. I used Claude to edit the transcript, but I've read over it for accuracy.
Summary: It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year, and I'm changing my mind on some things. I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions.

Within x-risk: AI is the most important source of risk. There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising. Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.

In the rest of this post, I will explain: Why I prioritize x-risk over animal-focused [...]

Outline:
(00:04) Summary
(01:30) I don't like donating to x-risk
(03:56) Cause prioritization
(04:00) S-risk research and animal-focused longtermism
(05:52) X-risk vs. global priorities research
(07:01) Prioritization within x-risk
(08:08) AI safety technical research vs. policy
(11:36) Quantitative model on research vs. policy
(14:20) Man versus man conflicts within AI policy
(15:13) Parallel safety/capabilities vs. slowing AI
(22:56) Freedom vs. regulation
(24:24) Slow nuanced regulation vs. fast coarse regulation
(27:02) Working with vs. against AI companies
(32:49) Political diplomacy vs. advocacy
(33:38) Conflicts that aren't man vs. man but nonetheless require an answer
(33:55) Pause vs. Responsible Scaling Policy (RSP)
(35:28) Policy research vs. policy advocacy
(36:42) Advocacy directed at policy-makers vs. the general public
(37:32) Organizations
(39:36) Important disclaimers
(40:56) AI Policy Institute
(42:03) AI Safety and Governance Fund
(43:29) AI Standards Lab
(43:59) Campaign for AI Safety
(44:30) Centre for Enabling EA Learning and Research (CEEALAR)
(45:13) Center for AI Policy
(47:27) Center for AI Safety
(49:06) Center for Human-Compatible AI
(49:32) Center for Long-Term Resilience
(55:52) Center for Security and Emerging Technology (CSET)
(57:33) Centre for Long-Term Policy
(58:12) Centre for the Governance of AI
(59:07) CivAI
(01:00:05) Control AI
(01:02:08) Existential Risk Observatory
(01:03:33) Future of Life Institute (FLI)
(01:03:50) Future Society
(01:06:27) Horizon Institute for Public Service
(01:09:36) Institute for AI Policy and Strategy
(01:11:00) Lightcone Infrastructure
(01:12:30) Machine Intelligence Research Institute (MIRI)
(01:15:22) Manifund
(01:16:28) Model Evaluation and Threat Research (METR)
(01:17:45) Palisade Research
(01:19:10) PauseAI Global
(01:21:59) PauseAI US
(01:23:09) Sentinel rapid emergency response team
(01:24:52) Simon Institute for Longterm Governance
(01:25:44) Stop AI
(01:27:42) Where I'm donating
(01:28:57) Prioritization within my top five
(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)

The original text contained 58 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: November 19th, 2024
Source: https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024

Narrated by TYPE III AUDIO.
I recently wrote up some EA Forum-related strategy docs for a CEA team retreat, which meant I spent a bunch of time reflecting on the Forum and why I think it's worth my time to work on it. Since it's Thanksgiving here in the US, I wanted to share some of the gratitude that I felt. 🙂

I strongly believe in the principles of EA. I’ve been doing effective giving for about a decade now. But before joining CEA in 2021, I had barely used the Forum, and I had no other people in my life who identified with EA in the slightest. Most of the people that I know, have worked with, or have interacted with are not EA. When I bring up EA to people in my personal life, they are usually not that interested, or are quite cynical about the idea, or they just want [...]

First published: November 28th, 2024
Source: https://forum.effectivealtruism.org/posts/f2c2to4KpW59GRoyj/i-m-grateful-for-you

Narrated by TYPE III AUDIO.
Crossposted from Otherwise.

My husband and I were donating about 50% of our income until two years ago, when he took a significant pay cut to work at a nonprofit. We planned to cut our donation percentage at that time, but then FTX collapsed. In the time since, we’ve decided to keep donating half, although the absolute amount is a lot smaller. In a sense this is nothing special, because it was remarkably good luck that we were ever able to afford to donate at this rate at all. But I’ll spell out our process over time, in case it helps others realize they can also afford to donate more than they thought.

How we got here. Getting interested in donation: In my teens and early twenties, I thought it was really unfair that my family had plenty of stuff while other people (especially in low-income countries) [...]

Outline:
(00:41) How we got here
(00:45) Getting interested in donation
(01:09) Early years with Jeff
(02:18) When we earned less
(03:17) Earning to give
(04:15) Both at nonprofits
(04:55) EA funding declines
(05:33) Currently
(05:51) Avoiding spending creep
(07:19) Becoming older and more boring
(08:44) Habits and commitment mechanisms

The original text contained 2 images which were described by AI.

First published: December 4th, 2024
Source: https://forum.effectivealtruism.org/posts/mEQTxDGp4MxMSZA74/still-donating-half

Narrated by TYPE III AUDIO.
This is a link post. 80,000 Hours recently updated our problem profile on factory farming, and we now rank it among the most pressing problems in the world. We're sharing the summary of the article here, and there's much more detail at the link. The author, Benjamin Hilton, published the article with us before moving on to a new role outside of 80k back in July, so he may have limited ability to engage with comments. But we welcome feedback and may incorporate it into future updates.

Summary: History is littered with moral mistakes — things that once were common, but we now consider clearly morally wrong, for example: human sacrifice, gladiatorial combat, public executions, witch hunts, and slavery. In my opinion, there's one clear candidate for the biggest moral mistake that humanity is currently making: factory farming. The rough argument is: There are trillions of farmed animals, making [...]

The original text contained 1 footnote which was omitted from this narration.

First published: October 29th, 2024
Source: https://forum.effectivealtruism.org/posts/goTRwb49riDvXGdy8/factory-farming-as-a-pressing-world-problem

Narrated by TYPE III AUDIO.
Hey everyone, I’m the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs on his writing, and experiments, and EA. And I said that my guess was that the best approach to making a high-impact TV show was something like: you make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1950-70s, it's an Open Phil-like org.

So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (That's not especially close to the story, and not claiming it's [...]

First published: November 21st, 2024
Source: https://forum.effectivealtruism.org/posts/HjKpghhowBRLat4Hq/bequest-an-ea-ish-tv-show-that-didn-t-make-it

Narrated by TYPE III AUDIO.
Introduction: The Giving What We Can research team is excited to share the results of our 2024 round of evaluations of charity evaluators and grantmakers! In this round, we completed three evaluations that will inform our donation recommendations for the 2024 giving season. As with our 2023 round, there are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement to a landscape in which there were no independent evaluations of evaluators’ work. In this post, we share the key takeaways from each of our 2024 evaluations and link to the full reports. We also include an update explaining our decision to remove The Humane League from our list of recommended programs. Our website has now been updated to reflect the new fund and charity recommendations that came out of these evaluations. Please also see our website for more context on [...]

Outline:
(00:10) Introduction
(01:13) Key takeaways from each of our 2024 evaluations
(01:36) Global health and wellbeing
(01:41) Founders Pledge Global Health and Development Fund (FP GHDF)
(04:07) Animal welfare
(04:10) Animal Charity Evaluators’ Movement Grants (ACE MG)
(06:08) Animal Charity Evaluators’ Charity Evaluation Program
(08:35) Additional recommendation updates
(08:39) The Humane League's corporate campaigns program
(11:29) Conclusion

The original text contained 2 footnotes which were omitted from this narration.

First published: November 27th, 2024
Source: https://forum.effectivealtruism.org/posts/NhpAHDQq6iWhk7SEs/gwwc-s-2024-evaluations-of-evaluators-1

Narrated by TYPE III AUDIO.
This post summarizes the main findings of a new meta-analysis from the Humane and Sustainable Food Lab. We analyze the most rigorous randomized controlled trials (RCTs) that aim to reduce consumption of meat and animal products (MAP). We conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing MAP consumption. By contrast, reducing consumption of red and processed meat (RPM) appears to be an easier target. However, if RPM reductions lead to more consumption of other MAP like chicken and fish, this is likely bad for animal welfare and doesn’t ameliorate zoonotic outbreak or land and water pollution. We also find that many promising approaches await rigorous evaluation. This post updates a post from a year ago. We first summarize the current paper, and then describe how the project and its findings have evolved.

What is a rigorous RCT? There is [...]

Outline:
(01:09) What is a rigorous RCT?
(02:15) The main theoretical approaches
(04:45) Results: consistently small effects
(07:22) Where do we go from here?
(09:00) How has this project changed over time?

The original text contained 2 images which were described by AI.

First published: November 25th, 2024
Source: https://forum.effectivealtruism.org/posts/i5wnzz4uAgeF3ZRc5/research-report-meaningfully-reducing-consumption-of-meat

Narrated by TYPE III AUDIO.
TL;DR: In late summer 2023, I realized my mental health was the biggest barrier to achieving my goals. Over the next six months, I made it a priority, working with a therapist on CBT, adjusting medication timing, and developing healthier habits (like meditating and wellness routines). This resulted in a noticeable improvement in my clarity, productivity, and overall well-being, which positively impacted my work and leadership. This post is part of an effort to post more :)

Background and Context: Late summer 2023: During a performance review, I realized my mental health was my biggest barrier to growth. I had been diagnosed with depression years earlier, and medication helped significantly, but I was still having breakthrough symptoms. It was preventing me from achieving ambitious goals and handling important work decisions (like what my team should prioritize in the following quarter). I made improving mental health a top [...]

Outline:
(00:03) TL;DR
(00:40) Background and Context
(01:38) Nuance and Disclaimer
(02:21) Some ways progress has affected my work
(03:33) What Made a Difference (in my rough guess at level of influence)
(03:44) Therapy (with the right therapist)
(05:22) Changing the time of taking medication
(05:56) Meditation and Habit Tracking
(07:13) 80k podcasts on mental health
(07:33) Physical Routines
(07:37) Exercise
(08:09) Sleep
(08:40) Final Thoughts

First published: November 21st, 2024
Source: https://forum.effectivealtruism.org/posts/9j35Qj5hAMs9hgAnd/how-i-improved-my-wellbeing

Narrated by TYPE III AUDIO.