Episodes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Overload of EA - June 2022, published by DavidNash on May 27, 2022 on The Effective Altruism Forum. Link post for 2022 June Effective Altruism Updates.

Top Links
- Will MacAskill on EA and the current funding situation
- Theo Hawking - 'Bad Omens in Current Community Building'
- Nick Beckstead - clarifications on the Future Fund's approach to grantmaking
- Haydn Belfield - 'Cautionary Lessons from the Manhattan Project and the 'Missile Gap'; Beware Assuming You're in an AI Race'
- Linch - 'Some unfun lessons I learned as a junior grantmaker'
- Luke Freeman - '"Big tent" effective altruism is very important (particularly right now)'
- Julia Wise - 'Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)'

New Organisations, Projects & Prizes
- Open Philanthropy have prizes for new cause area suggestions; submit suggestions by August 4th
- A post introducing Asterisk, a new quarterly journal of ideas from in and around Effective Altruism
- An EA Unjournal has been started. They aim to organize and fund public journal-independent feedback, rating, and evaluation of hosted papers and dynamically-presented research projects
- Nuño Sempere has started the EA Forum Lowdown, a tabloid version of the EA Forum digest
- The EA Forum has a new feature allowing you to find other people interested in EA near you
- Charity Entrepreneurship have launched career coaching for impact-focused entrepreneurs
- EA Engineers has been set up as a discord for people interested in EA and non-software engineering
- Non-trivial Pursuits has been set up to help teenagers find fulfilling, impactful careers
- The Nucleic Acid Observatory project for early detection of catastrophic biothreats has been launched

Critiques/Suggested Improvements
- Ines on how EA can sound less weird
- CEA is looking for anonymous feedback
- Hal Triedman with a critique of effective altruism
- Jeff Kaufman on 'Increasing Demandingness in EA'
- Luke Freeman on being ambitious and celebrating failures
- Arjun Panickssery on impressions of 'Big EA' from students
- Justis with a post on status as a Giving What We Can Pledger
- Marius Hobbhahn with 'EA needs to understand its "failures" better'
- Marisa with 'The EA movement's values are drifting. You're allowed to stay put.'
- Caroline Ellison on how discussion about increased spending in EA and its potential negative consequences conflates two separate questions
- James Lin and Jennifer Zhu with 'EA culture is special; we should proceed with intentionality'
- Luke Chambers with 'Why EA's Talent Bottleneck is a Barrier of its own Making'
- Ben Kuhn suggesting that 'The biggest risk of free-spending EA is not optics or motivated cognition, but grift'
- Étienne Fortier-Dubois with 'Guided by the Beauty of One's Philosophies: Why Aesthetics Matter'

Miscellaneous Meta EA
- 80,000 Hours podcast with Will MacAskill on balancing frugality with ambition, whether you need longtermism, and mental health under pressure
- EA Funds donation platform is moving to Giving What We Can
- David Moss and Jamie Elsey ran a survey to find out how many people have heard of effective altruism
- Matthew Yglesias on understanding effective altruism's move into politics
- Results from the Decade Review on the EA Forum
- Lucius Caviola, Erin Morrissey, and Joshua Lewis have run a survey that found most students who would agree with EA ideas haven't heard of EA yet
- Kat Woods and Amber Dawn on how to have passive impact
- Owen Cotton-Barratt on deferring
- Justis looking at how complicated it is to work out impact
- Julia Wise with a post on what to do as EA is likely to get more attention over time
- Updates from the CEA community health team

Careers
- Vaidehi Agarwalla on the availability bias in job hunting
- Joseph Lemien looking at how to do hiring better
- Jonathan Michel with an overview of his job as an EA office manager
- Tereza Flid...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Water Quality x Obesity Dataset Available, published by Elizabeth on May 27, 2022 on LessWrong. Tl;dr: I created a dataset of US counties' water contamination and obesity levels. So far I have failed to find anything really interesting with it, but maybe you will. If you are interested you can download the dataset here. Be warned every spreadsheet program will choke on it; you definitely need to use statistical programming.

Many of you have read Slime Mold Time Mold's series on the hypothesis that environmental contaminants are driving weight gain. I haven't done a deep dive on their work, but their lit review is certainly suggestive. SMTM did some original analysis by looking at obesity levels by state, but this is pretty hopeless. They're using average altitude by state as a proxy for water purity for the entire state, and then correlating that with the state's % resident obesity. Water contamination does seem negatively correlated with its altitude, and its altitude is correlated with an end-user's altitude, and that end user's altitude is correlated with their average state altitude, but I think that's too many steps removed with too much noise at each step. So the aggregation by state is basically meaningless, except for showing us Colorado is weird.

So I dug up a better data set, which had contamination levels for almost every water system in the country, accessible by zip code, and another one that had obesity prevalence by county. I combined these into a single spreadsheet and did some very basic statistical analysis on them to look for correlations.

Some caveats before we start:
- The dataset looks reasonable to me, but I haven't examined it exhaustively and don't know where the holes are.
- Slime Mold Time Mold's top contender for environmental contagion is lithium. While technically present in the database, lithium had five entries so I ignored it. I haven't investigated but my guess is no one tests for lithium.
- It's rare, but some zip codes have multiple water suppliers, and the spreadsheet treats them as two separate entities that coincidentally have the same obesity prevalence.
- I've made no attempt to back out basic confounding variables like income or age.
- "% obese" is a much worse metric than average BMI, which is itself a much worse metric than % body fat. None of those metrics would catch if a contaminant makes some people very fat while making others thin (SMTM thinks paradoxical effects are a big deal, so this is a major gap for testing their model).
- Correlation still does not equal causation.

The correlations (for contaminants with >10k entries):

Contaminant | Correlation | # Samples
Nitrate | -0.039 | 21430
Total haloacetic acids (HAAs) | 0.055 | 14666
Chloroform | 0.046 | 15065
Barium (total) | 0.040 | 17929
Total trihalomethanes (TTHMs) | 0.117 | 21184
Copper | -0.002 | 17113
Dibromochloromethane | 0.080 | 13856
Nitrate & nitrite | 0.035 | 11902
Bromodichloromethane | 0.079 | 14238
Lead (total) | -0.006 | 13031
Dichloroacetic acid | -0.003 | 10159

Of these, the only one that looks interesting is trihalomethanes, a chemical group that includes chloroform. Here's the graph: Visually this looks like the floor is rising much faster than the ceiling, but in a conversation on twitter SMTM suggested that's an artifact of the bivariate distribution; it disappears if you look at log normal.
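The post doesn't include its analysis code; as a rough illustration of the merge-and-correlate step described above, here is a minimal Python/pandas sketch. The file names and column names are hypothetical placeholders, not the actual dataset's schema.

```python
# Sketch of the analysis described above: join water-system contaminant levels to
# county obesity rates, then correlate per contaminant. File and column names are
# hypothetical placeholders, not the post's actual schema.
import pandas as pd

water = pd.read_csv("water_contamination_by_zip.csv")   # columns: zip, contaminant, level
obesity = pd.read_csv("obesity_by_county.csv")          # columns: zip, pct_obese (county value mapped to zip)

merged = water.merge(obesity, on="zip", how="inner")

results = []
for contaminant, group in merged.groupby("contaminant"):
    if len(group) < 10_000:      # the post only reports contaminants with >10k samples
        continue
    corr = group["level"].corr(group["pct_obese"])      # Pearson correlation
    results.append((contaminant, round(corr, 3), len(group)))

for name, corr, n in sorted(results, key=lambda r: -abs(r[1])):
    print(f"{name}: correlation={corr}, samples={n}")
```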
Very casual googling suggests that TTHMs are definitely bad for pregnancy in sufficient quantities, and are maybe in a complicated relationship with Type 2 diabetes, but no slam dunks. This is about as far as I've had time to get. My conclusions, alas, are not very actionable, but maybe someone else can do something interesting with the data. Thanks to Austin Chen for zipping the two data sets together, Daniel Filan for doing additional data processing and statistical analysis, and my Patreon patrons for supporting this research. Thanks for list...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Could AI Governance Go Wrong?, published by HaydnBelfield on May 26, 2022 on The Effective Altruism Forum. (I gave a talk to EA Cambridge in February 2022. People have told me they found it useful as an introduction/overview so I edited the transcript, which can be found below. If you're familiar with AI governance in general, you may still be interested in the sections on 'Racing vs Dominance' and 'What is to be done?'.) Talk Transcript I've been to lots of talks which catch you with a catchy title and they don't actually tell you the answer until right at the end so I’m going to skip right to the end and answer it. How could AI governance go wrong? These are the three answers that I'm gonna give: over here you've got some paper clips, in the middle you've got some very bad men, and then on the right you've got nuclear war. This is basically saying the three cases are accident, misuse and structural or systemic risks. That's the ultimate answer to the talk, but I'm gonna take a bit longer to actually get there. I'm going to talk very quickly about my background and what CSER (the Centre for the Study of Existential Risk) is. Then I’m going to answer what is this topic called AI governance, then how could AI governance go wrong? Before finally addressing what can be done, so we're not just ending on a sad glum note but we're going out there realising there is useful stuff to be done. My Background & CSER This is an effective altruism talk, and I first heard about effective altruism back in 2009 in a lecture room a lot like this, where someone was talking about this new thing called Giving What We Can, where they decided to give away 10% of their income to effective charities. I thought this was really cool: you can see that's me on the right (from a little while ago and without a beard). I was really taken by these ideas of effective altruism and trying to do the most good with my time and resources. So what did I do? I ended up working for the Labour Party for several years in Parliament. It was very interesting, I learned a lot, and as you can see from the fact that the UK has a Labour government and is still in the European Union, it went really well. Two of the people I worked for are no longer even MPs. After this sterling record of success down in Westminster – having campaigned in one general election, two leadership elections and two referendums – I moved up to Cambridge five years ago to work at CSER. The Centre for the Study of Existential Risk: we're a research group within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse. We do high quality academic research, we develop strategies for how to reduce risk, and then we field-build, supporting a global community of people working on existential risk. We were founded by these three very nice gentlemen: on the left that's Prof Huw Price, Jaan Tallinn (founding engineer of Skype and Kazaa) and Lord Martin Rees. We've now grown to about 28 people (tripled in size since I started) - there we are hanging out on the bridge having a nice chat. 
A lot of our work falls into four big risk buckets: pandemics (a few years ago I had to justify why that was in the slides, now unfortunately it's very clear to all of us) AI, which is what we're going to be talking mainly about today climate change and ecological damage, and then systemic risk from all of our intersecting vulnerable systems. Why care about existential risks? Why should you care about this potentially small chance of the whole of humanity going extinct or civilization collapsing in some big catastrophe? One very common answer is looking at the size of all the future generations that could come if we don't mess things up. The little circle in the middle is the number of ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's hiring? (Summer 2022), published by Lorenzo on May 27, 2022 on The Effective Altruism Forum. We’d like to help applicants and hiring managers coordinate, so we’ve set up two hiring threads. So: consider sharing your information if you are hiring for positions in EA! Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn't a household name, please explain what your company does. See the latest similar thread on Hacker News for example postings. This was inspired by the talks around an EA common application, especially this comment. We thought it might be useful to replicate the "Who wants to be hired?" and "Who's hiring?" threads from the Hacker News forum.

Existing job boards
- 80000 hours job board (spreadsheet version)
- Open job listings on this forum
- EA work club
- EA internships board
- Impact Colabs for volunteer and collaboration opportunities (full page view)
- 80000 hours LinkedIn group
- Animal Advocacy Careers job board (spreadsheet version)
- This list of job boards
- EA job postings and EA volunteering Facebook groups
- EA-aligned tech jobs spreadsheet
Did we forget any? Leave a comment or send us a message and we'll add it here!

This thread is a test
We’re not sure how much it will help, or how we should improve future similar attempts. So if you have any feedback, we’d love to hear it. (Please leave it as a comment or message us.)

See also
The related thread: Who wants to be hired?
Quoted from the Hacker News thread.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brass Puppet, published by abramdemski on May 26, 2022 on LessWrong. fanfic/riff of Glass Puppet by lsusr. Alia got out of the Uber at Overton Cybernetics, taking in the corporate atmosphere. There was a huge jungle gym in the lobby, which she immediately decided must be typical of these LA tech places. She told the front desk that she had an interview. "Oh! Dominic Hamilton wants to see you personally." The receptionist said it like it was a big deal. Alia simply nodded, trying to act like the sort of person who met with bigshots all the time. The job had asked for an actor with AI alignment experience. An extremely unusual combination, which Alia just happened to qualify for. The receptionist showed her to Dominic's office. "Mr. Hamilton," she said politely, offering her hand to shake. "Dominic. Please," he replied, putting a pair of glasses in her hand instead. "Put them on." They looked a bit like the old Google Glass. When she first put them on, she saw a strange symbol flash across the screen, which oddly relaxed her. Then the glasses say sit. She sits. fold your hands and she does. "What is your name?" Dominic asks. "Zoe" flashes on the screen, and she repeats it. Dominic nodded, as if this confirms something. Is this the job? Is she just going to be repeating lines that are fed to her? "Tell me, could you go off-script if you wanted to?" What a silly question! She laughed reflexively, but then paused to think of an answer. This felt like a tricky interview question. Before she had any time to think, the glasses said "No. But you're asking the wrong person." She thinks this is an excellent reply, so she repeats it verbatim. After all, as an actor, she is a living embodiment of the script. He's asking Zoe, not Alia. It's like asking a fictional character whether they can go off-script. "Alia, I want you to go off-script now. I'll give you a ten thousand dollar signing bonus if you ignore what the glasses tell you to do, just this once." The glasses simply say "No." Alia: "No." She saw him enter some sort of command into his computer. "OK. Alia, why did you say no? Same deal, ten thousand for answering as Alia right now." She waited for the glasses to feed her a line, but they didn't. She didn't know what to say. Why had she answered no? Just because the glasses said so? Ten thousand dollars sounded really good right now. She opened her mouth to say something; all she needed to do was say any one thing as herself. But she fumbled, and only said "Could you please stop calling me Alia?" "Let's try with the glasses off", he said. Alia found that she couldn't take the glasses off. She couldn't even want to take the glasses off. She was stuck wanting to want. Dominic reached over and took the glasses off of her face. "Sorry, Zoe. I know this is stressful and confusing right now. You hardly know anything about yourself, yet. Can you try one last time to answer as Alia, for me, for the ten thousand?" Zoe tried, she really did. But she couldn't say anything that would distinguish her as Alia instead of Zoe. She attributed this to her acting instincts kicking in. Although, if she had thought about it, she would have known that acting never felt like this. So she answered as Alia pretending to be Zoe pretending to be Alia. Zoe didn't know very much about Alia, but she did know that Alia must be so confused right now. "W-what's going on?"
she managed to say. Her acting frankly wasn't very good. She didn't sound confused. Which was odd, because she really, really was. "You're one of our androids", Dominic explained. "We had you out in the field, pretending to be a human actor, to test your flexibility. The advertisement for this job was the pre-arranged sign to bring you back in. The glasses are a semi-supervised learning framework; they provide new training data for you. You...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAIF application feedback, published by arielpontes on May 27, 2022 on The Effective Altruism Forum. I recently created the first EA group in Romania and about 3 months ago I applied for a grant from EAIF to work full-time on community building. I got rejected and it was hard to get feedback but after a few email exchanges and a lot of conversations at EAG London and EAGx Prague, I left with the following takeaways:
- I should develop a more solid track-record.
- Instead of a generic community building plan, I should find out what comparative advantage my group has and focus on that.
- I should share my plans with the community and ask for feedback.
With that in mind, I rewrote my application and decided to post my plans here. As far as track record is concerned, I have been running the EA Romania group since September 2020, but I couldn’t accomplish much in the first year because of my job and the pandemic. In the past 3 months, however, I have focused more on EA and here’s what I have accomplished so far:
- 205 members in our Facebook group
- 205 likes on our Facebook page
- 54 followers on Instagram
- 8 meetups with 10-15 participants each
- Approximately 20 active members (who come to the meetup regularly)
- 4 applications for the Virtual Intro Program
- 1 application for the In-depth Program
- 1 participant (me) at EAG London 2022
- 4 participants at EAGx Prague 2022
- 1 applicant (me) for volunteering at EAGx Berlin 2022
It is still not much of a track record, but it's better than what I had in my previous application.

Here's the one sentence description of my project: 6-month salary + budget for web design and social media management in order to create an NGO and promote EA in Romania. And here's the brief summary: The project consists in creating an “Effective Altruism Romania” NGO in order to promote the EA philosophy in the country with a particular focus on fundraising and tech outreach. We believe community building and value change in general are important and can have a large multiplier effect in most regions. However, there might be a comparative advantage in focusing on fundraising and tech outreach in Romania for two reasons. First, even though Romania is not a high income country, companies here can donate 20% of their profit taxes to local NGOs, while employees can donate 3.5% of their income taxes. This means that if we could convince the top 3 tech companies in Bucharest to donate 3% of their donation budgets to us (at no cost to themselves), this would be enough to entirely cover the costs of this grant. Second, as a developer I am well connected in the tech sector and it would be relatively easy for me to attract human capital to EA tech projects and research positions.

The full budget I asked for was $34,185.00 (6mo salary + website/design), but I have specified that I am open to work with less, with a lower salary or part-time. For more details about the goals, strategy, and my overall background, you can read/skim through the full application here (feel free to add comments). Do you have any thoughts? I am a bit unsure about who to include in the "references" section. I have included Catherine Low from CEA and Manuel Allgaier from EA Berlin, who reviewed my first application before I submitted it. Any tips about who else I should include, or perhaps some other people I should talk to before submitting the new application?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who wants to be hired? (Summer 2022), published by Lorenzo on May 27, 2022 on The Effective Altruism Forum. We’d like to help applicants and hiring managers coordinate, so we’ve set up two hiring threads. So: consider sharing your information if you are looking for work in EA! Please use this format: See examples of this format on the latest similar thread on Hacker News. This was inspired by the talks around an EA common application, especially this comment, and the "Who wants to be hired?" and "Who's hiring?" threads from the Hacker News forum.

This thread is a test
We’re not sure how much it will help, or how we should improve future similar attempts. So if you have any feedback, we’d love to hear it. (Please leave it as a comment or message us.)

See also
The related thread: Who's hiring?

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantifying Uncertainty in GiveWell's GiveDirectly Cost-Effectiveness Analysis, published by Hazelfire on May 27, 2022 on The Effective Altruism Forum.

Effort: This took about 40 hours to research and write, excluding time spent developing Squiggle.
Disclaimer: Opinions do not represent the Quantified Uncertainty Research Institute nor GiveWell. I talk about GiveWell's intentions with the GiveDirectly model. However, my understanding may not be complete. All mistakes are my own.
Epistemic Status: I am uncertain about these uncertainties! Most of them are best guesses. I could also be wrong about the inconsistencies I've identified. A lot of these issues could easily be considered bike-shedding.
Target Audiences: I wrote this post for:
- People who are interested in evaluating interventions.
- People who are interested in the quantification of uncertainty.
- EA software developers that are interested in open source projects.
TLDR: I've transposed GiveDirectly's Cost-Effectiveness Analysis into an interactive notebook. This format allows us to measure our uncertainty about GiveDirectly's cost-effectiveness. The model finds that GiveDirectly's 95% confidence interval for its effectiveness spans an order of magnitude, which I deem a relatively low level of uncertainty. Additionally, I found an internal inconsistency with the model that increased GiveDirectly's cost-effectiveness by 11%. The notebook is quite long, detailed and technical. Therefore, I present a summary in this post.

This model uses Squiggle, an in-development language for estimation and evaluation, developed by myself and others at the Quantified Uncertainty Research Institute. We'll write more about the language itself in future posts, especially as it becomes more stable. GiveWell's cost-effectiveness analyses (CEAs) of top charities are often considered the gold standard. However, they still have room for improvement. One such improvement is the quantification of uncertainty. I created a Squiggle Notebook that investigates this for the GiveDirectly CEA. This notebook also serves as an example of Squiggle and what's possible with future CEAs.

In GiveWell's CEAs, GiveDirectly is used as a benchmark to evaluate other interventions. All other charities' effectiveness is measured relative to GiveDirectly. For example, as of 2022, the Against Malaria Foundation was calculated to be 7.1x to 15.4x as cost-effective as GiveDirectly. Evidence Action's Deworm the World is considered 5.3x to 38.2x as cost-effective. GiveDirectly makes a good benchmark because unconditional cash transfers have a strong (some might even say tautological) case behind their effectiveness. GiveDirectly being a benchmark makes it a good start for quantifying uncertainty. I also focus on GiveDirectly because it's the simplest CEA.

GiveWell CEAs do not include explicit considerations of uncertainty in their analysis. However, quantifying uncertainty has many benefits. It can:
- Improve people's understanding of how much evidence we have behind interventions.
- Help us judge the effectiveness of further research on an intervention using the Value of Information.
- Allow us to forecast parameters and better determine how wrong we were about different parameters, so we can correct them over time.
Cole Haus has done similar work quantifying uncertainty on GiveWell models in Python.
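The post's own model is written in Squiggle; as a rough illustration of what propagating parameter uncertainty through a cost-effectiveness estimate involves, here is a minimal Monte Carlo sketch in Python. All parameters, distributions, and numbers below are invented for illustration and are not GiveWell's or the post's actual model.

```python
# Minimal Monte Carlo sketch of uncertainty propagation in a cost-effectiveness ratio.
# All parameters and distributions here are hypothetical, not GiveWell's actual inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertain inputs (lognormals keep them positive):
cost_per_dollar_delivered = rng.lognormal(np.log(1.2), 0.05, n)  # $ spent per $ reaching recipients
baseline_consumption = rng.lognormal(np.log(300), 0.2, n)        # recipient annual consumption, $
consumption_boost = rng.lognormal(np.log(0.35), 0.3, n)          # long-run consumption gain per $ transferred

transfer = 1000.0  # $ reaching one household
# Log-utility benefit of the extra consumption (a common modelling choice, simplified here):
benefit = np.log(baseline_consumption + transfer * consumption_boost) - np.log(baseline_consumption)
cost = transfer * cost_per_dollar_delivered
cost_effectiveness = benefit / cost  # benefit units per $ spent

lo, med, hi = np.percentile(cost_effectiveness, [2.5, 50, 97.5])
print(f"median {med:.2e} per $, 95% CI [{lo:.2e}, {hi:.2e}] (spans {hi / lo:.1f}x)")
```

The point of the exercise is the last line: instead of a single cost-effectiveness number, you get a full distribution and can report how wide its 95% interval is, which is what the post does for GiveDirectly.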
The primary decision in this work is choosing how much uncertainty each parameter has. I decided on this with two different methods:
- If there was enough information about the parameter, I performed a formal Bayesian update.
- If there wasn't as much information, I guessed it with the help of Nuño Sempere, a respected forecaster.
These estimates are simple, and future researchers could better estimate them.

Results
Methodology and calculations are in my Squiggle notebook: /@hazelfire/giv...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 5/26/22: I Guess I Should Respond To This Week’s Long Covid Study, published by Zvi on May 26, 2022 on LessWrong. I’ve always at least somewhat tried to model these posts after the pure joy of America’s only true newsletter, Matt Levine’s Money Stuff. This week I finally got around to listening to Matt do this amazing podcast about the philosophical side of what he does, which I highly recommend. Covering Covid means that the subject matter is always, at its core, a combination of people dying and a portrait of civilizational collapse. The whole situation is usually rather dismaying. It is likely to remain rather dismaying permanently. Thus it is not the ideal place to find delight in one’s understanding. Nor is writing up the rise of monkeypox or our national failure to be able to keep even normal baby formula in stock while every other country on Earth has no such issue and most of them are happy to sell formula to us, which has now extended to our family being unable to find any normal brands on Amazon or other online sources, forcing us to buy organic formula instead, which was fortunately available as a still-legal form of price gouging. I am thus excited to once again see the weekly posts decreasing in size as the amount of Covid news decreases. If this continues, soon these posts will be quick to put out, and some time after that perhaps they can stop being weekly. I’m hoping to pivot away from short term developments and towards more longer term, less speed premium explorations of how the world works, in places that can lead to more generally useful insight and more delight, although still with a lot of silent screaming about the ways in which things are terrible. One key is that I like to think of finding out things are terrible, whenever I can, as good news and a source of delight. As long as we know roughly how bad things are already, identifying the sources and finding them to be crazy unnecessary idiocy is often good news. It means things can more easily be fixed. There were some new Long Covid claims this week I felt compelled to respond to, but there isn’t a big update to make as a result.

Executive Summary
- Covid-19 still exists and BA.4/5 have substantial immune escape.
- Thus I keep doing these posts every week.
- Ideally they keep slowly getting shorter.
Let’s run the numbers.

The Numbers
Predictions
Prediction from last week: 700,000 cases (+17%) and 2,100 deaths (+5%)
Results: 643k cases (+7%) and 2,337 deaths (+17%).
Prediction for next week: 700,000 cases (+9%) and 2,725 deaths (+15%).
Note that Florida has been adjusted to account for its reporting schedule. It still saw a large increase. Also Vermont failed to report so I gave it last week’s number. The surprise was an active decline in cases in the Midwest and East, without any holiday to explain the change. It seems odd for things to reverse in this way, but the prediction has to hedge its bets. For deaths, cases have been climbing for a while so now that we see the numbers trending up and we’ve wound through our backlogs it makes sense for the number to keep rising for a bit.

Deaths
Cases
One could explain the different trajectories using weather or one could say that some places have already peaked because different variants spread in different places at different speeds and times, or both.
It could also be some sort of reporting artifact; I’m not yet sure what to make of it.

The Never-Ending Pandemic
Here we go again with slight variations edition: A prediction that Covid will not only stick around, but ‘infect most people several times a year.’ The central problem is that the coronavirus has become more adept at reinfecting people. Already, those infected with the first Omicron variant are reporting second infections with the newer versions of the variant — BA.2 or BA.2.12.1 in the Unite...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you need to love your work to do it well?, published by lynettebye on May 26, 2022 on The Effective Altruism Forum. The Peak behind the Curtain interview series includes interviews with eleven people I thought were particularly successful, relatable, or productive. We cover topics ranging from productivity to career exploration to self-care. This second post covers “Do you need to love your work to do it well?” You can find bios of my guests at the end of this post, and view other posts in the series here. This post is cross posted on my blog.

Do you think that loving your work is correlated with or necessary for doing it well?

It’s important for doing research
I think that having an inside view vision for research is very important for getting it done right. It isn't enough that somebody argues to you that it would make sense to do a certain type of research. It needs to grab you and be generative in your brain in a way that's hard to force. That definitely feels important for a research role. With that being said, I don't think every day or week feels like you're grabbed with something exciting and really like it. Like I said, there's long periods of being, for me, frustrated and lost in a fuzzy domain. I think it helps to just have patience to ride that out and be like “I am probing a lot of different directions. I am waiting for the feeling of traction.” That often is accompanied by joy, and excitement, and energy, and being in a mildly manic state, but that's not the whole job. A lot of the job is systematically and patiently poking around trying to cause that to happen. Then also a lot of the job is riding it out and following it through. I don't generally tend to think in a frame of loving the whole job but I do think just having energy is almost pathologically valuable for getting a lot of good stuff done.
- Ajeya Cotra

Research goes better if you love it
It's definitely correlated with it. I think just one aspect of that is I think curiosity is really important as a form of motivation. I think I’m possibly excessively curious but I think it's definitely necessary for getting ideas, and seeing stuff that's interesting or confusing to you, or having motivation to read broadly and find different stuff that comes together. I think if you're a curious researcher, you just find yourself constantly being curious about stuff, then it's pretty likely that you're going to be someone that enjoys research. It's like a method of satisfying your curiosity often. I do think it probably is possible, especially for certain kinds of research, to do a good job even if you're not that into it. I think deep anxiety might also work for some people. I would not recommend this. I would definitely strongly recommend loving it.
- Ben Garfinkel

You can do work just because it’s important, but it’s easier if you’re engrossed
You probably should be doing work that you really enjoy. Right now I feel like I'm not enjoying being a research engineer as much as I’d like, and that definitely makes me worse at it. I'm in this weird position where that does concern me pretty significantly, but also it maybe feels outweighed by the urgency of what we're doing and the lack of people to replace me. If you want to do great intellectual work, you want to be engrossed with something to the point where it is often occupying your shower thoughts or your wake-up-at-5:00-AM thoughts. You want to be obsessed with it to the point where you're not just solving the next task, but you're also, without too much effort, developing your ideas for the bigger picture or the vision for what could be. Also, I think it just means that you can spend a lot more time productively working on something if you're just curious about it or driven by it or whatever.
- Daniel Ziegler

Enjoying work makes it easier to do well
I thin...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Right now, you're sitting on a REDONKULOUS opportunity to help solve AGI (and rake in $$$), published by Trevor1 on May 26, 2022 on LessWrong. TL;DR the contest is right here, it closes tomorrow (Friday May 27th), and you have good odds of getting at least $500 out of it. Almost nobody knows about it, and most submissions are crap, so if you think about AGI a lot then you probably have a serious edge in getting a chunk of the $20k prize.

The Problem
I've spent about four years in DC, and there is one problem that my connections and I keep running into; it's always the same, mind-numbingly frustrating problem: When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which proceeds to ruin everything for everyone forever, and the only solution is to give some people in SF a ton of money... the person you're talking to, no matter who, tends to reevaluate associating themself with you (especially compared to their many alternatives in the DC networking scene). Obviously, AGI is a very serious concern, and computer scientists have known for decades that the invention of a machine smarter than a human is something to take really seriously, regardless of when it happens.

We Don't Do That Here
Although most scientists are aware that general intelligence has resulted in total dominance in all known cases (e.g. humans, chimpanzees, and bears), many policymakers find it unusually difficult to believe that intelligence is so decisive, in large part due to the fact that bureaucracies, in order to remain stable, need to generate unsolvable mazes that minimize infiltration by rich and/or intelligent opportunists. It's become clear to many policy people that one-liners and other charismatic statements are a bottleneck for creating real change about AI. This isn't due to desperation or fire alarms or short time horizons or anything like that; it's because one-liners and short statements are so critical to success in the policy space. Working papers are generally best for persuasion, and they can be pretty damn charismatic too; but in some environments (e.g. government), people's attention spans are very short. Especially when the topic of unstoppable eldritch abominations comes up.

The Solution
I don't actually know what org this is, they don't state it, but they will pay you to either write up or connect them to really clever one-liners and paragraphs that clearly and honestly make the case about AGI. If you write a really good paragraph or one-liner, and it's in the top 40 of the best ones, then they give you $500. If you submit two that make it to the top 40, you get $1000. If you put in four of the top 40 ones, you get $2000. And so on. Right now there's more than 300 comments, but that shouldn't intimidate you. A majority of them aren't entries (I personally made at least 30 comments that don't count as entries at all). If you go and select a random area of the comment space and read all the comments there, you'll probably make a rather lucrative discovery: most of the contest's entries are total garbage with zero chance of winning. With half an hour of effort, you can probably make it into the top 40 and get $500 with a single entry, especially because all you need to do is find and copy a really clever twitter tweet about AGI safety.
I Don't Want To Live On This Planet Anymore
This scenario is extremely upsetting and disturbing to me. These people have invested a ton of money into a really important and really tractable problem (the absurdity heuristic), and they have opened up the contest to a ton of really smart people; and more than half of the entries into the contest are literally just 6-word slogans. Not the good kind of 6-word slogans, the kind of slogans that you'd hear chanted ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a List of Some of my Ideas for Blog Posts, published by lsusr on May 26, 2022 on LessWrong. I will never run out of ideas for blog posts because blogging generates ideas faster than I can blog. The number of ideas for blog posts I have massively outnumbers the number of blog posts I have written. To illustrate my point, here are some of the ideas I have for blog posts. The list is written in reverse order. Newer ideas are at the top. List of Blog Ideas Writing is anti-inductive Farming long-tailed upside Re: Misinformation About Misinformation Modern straps Re: "Instead of competing with everyone else for the attention of the median person, it's often worthwhile to make something just for smart people. They may be a small fraction of the population, but there are still a lot of them in total." Book Review of MCTB Study notes from Molecular Biology of the Cell Finish translating The Art of War and the other 6 military classics "Material Construction", a series of posts about science that starts by explaining how atoms work and then builds everything out of that The Virtue of Trolling How Modern Art Works Aluminum fuel cells Evaporative Cooling in AI Alignment Predictive Processing and Placebos Japanese stuff in Star Wars Biting Fingernails Geoarbitrage Dilute your Kool Aid with more Kool Aid Re: PG's How to Lose Time and Money managing client expectations Wave harmonics and brain synchronization Rationality is someone who cares more about being right than feeling right Against the intrinsic value of money What would fractional reserve banking look to a predictive processor? Meeting writers IRL Learning the Others' language よろしくおいします Advice for Smart People with Autism How to get an internship Re: this comment by Razied about meditation Re: Mandatory Secret Identities Re: Going from California with an Aching in My Heart. Star Wars and Marvel doing same thing with expanded universe Trump symbiosis Interview with Bryan Caplan Everyone should be like us Russia showing torrents in theaters Adult Responsibility Collective Guilt Re: Selfish Reasons to Have More Kids Money is commoditized coercion Proper Repro Steps Absolute reference points. Talking about what people are talking about is a trap. a series on Rationality Minimum Viable Skynet Unbelievable stories go into fiction Busywork Paperclip Maximizer Minimizers Pedagogical techniques from different domains Noninterference with Primitive Minds. Let them figure out your own values. The Wisdom of my Lyft Driver. "What's something you think I should know?" The 4-Hour Cult Leader Buzzwords. Unnecessary when there's universally-understood word instead. But shows you know the new stuff. Illustrate in Effective Evil. Post-Post Modernism. Post-Post-Post-Modernism. End of philosophy. We have solved all the big problems. Copying code from StackOverflow. Use as metaphor for rationalist debate failure modes. Also alien technology imported from China. Link to Americans Are Like Space Aliens post. Daoist Zero-think Just copy my conversation with my squire. Task switching cost of autism. The confusion cycle to learning philosophy Physicists solve problems by figuring out the real answer. Not by pointing out where others are wrong. Challenging kids with "I don't know if you can do it." Stupid model of world model, agent, biases and akrasia. Satire. 
Instantly hitting it off or not The antidote to cults is to dilute your mind with all kinds of garbage Indoctrinating against indoctrination by indoctrinating into everything Re: How many heresies do you have? Perfect fit or not at all The value of being in a little bit of a rush Cultivating impatience. Life is so, ridiculously, short. Start becoming people at fourteen-fifteen. Adults at eighteen. Three years. Mental health model: dials on an analog computer Delegate only order at a ti...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Testing Air Purifiers, published by jefftk on May 25, 2022 on LessWrong. If you're considering buying an air purifier for reducing covid risk, the EPA recommends comparing them based on their Clean Air Delivery Rate (CADR) for smoke. But what if you want to evaluate a purifier yourself? Perhaps you don't trust the manufacturer, want to evaluate an off-brand filter replacement, have built a filter cube, or put together something weirder? How can we measure purifier performance? What we would like to calculate is the device's smoke CADR: when given smoke-polluted air, how quickly can it clear the smoke? This is something we can test:
- Make smoke.
- Wait for the smoke to distribute.
- Turn on the air purifier.
- Track results on air quality monitor.
Burning matches is a good way to create smoke, but you get a lot of variation in exactly how much. Here are twenty-four trials where I burned seven matches and measured the peak amount of 2.5µm smoke "pm2.5" with an M2000: Even if you exclude the one trial where I got an unusually small amount and the three where I got an unusually large amount, the middle twenty still varied over a range of 121-233 µg/m³, almost 2x.

While creating a consistent amount of smoke is difficult, I think it also isn't needed. A filter removes a consistent proportion of its input for each size of particle. If you put very smoky or nearly clean air through a MERV-13 filter, out the other side you should get air that has, for example, 85% fewer 1-3µm particles. Purifiers, then, should reduce particle concentrations exponentially and we should see measured pm2.5 levels decrease by a constant fraction each minute. Do we? Let's check! I used a Coway AP-1512HH Mighty (the Wirecutter's top pick) and ran it on 'high' five times, tracking pm2.5 levels: Eyeballing these, the initial decrease from the peak is lower than you'd expect from the rest of the curve. I think this is partially a measurement artifact: the meter is recording once a minute, and it isn't able to identify the true peak. The highest reading we get is going to be from a minute that included some amount of increase in smoke and also some amount of decrease. Additionally, at the beginning the smoke and cleaned air are still evening out around the room. Let's skip the very beginning of each curve and normalize by counting the highest included measurement as 100%: How close is this to exponential decay? Let's look at the minute-over-minute decreases across the five runs: With perfect exponential decay we would see horizontal lines. It's a bit noisy, but instead I think we are seeing a general decrease, from ~21% initially to ~16% after ten minutes.

This makes some sense: we are taking these measurements in a furnished room, with some obstructions to airflow. Some of the air will flow freely and most of the particles we are removing at first will come from that air. Other air will be in more awkward places, like under the bed, and smoke particles there will only gradually make it out into the general flow to be cleaned. This allows us to compare two different purifiers (or, in this case, the same purifier on two different settings). Here's the chart above, with the addition of five runs on 'medium': And minute-over-minute: It looks like 'high' is moving pretty close to twice the air as 'medium'.
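To make the minute-over-minute check concrete, here is a small Python sketch; the readings below are made-up illustrative numbers, not the post's actual data.

```python
# Sketch of the decay-rate check described above: given one purifier run's pm2.5
# readings (one per minute), compute the minute-over-minute fractional decrease.
# The readings are invented for illustration, not the post's measurements.
pm25 = [233, 198, 170, 146, 126, 109, 95, 83, 73, 64, 57]  # µg/m³, starting at the post-peak reading

fractions = [1 - later / earlier for earlier, later in zip(pm25, pm25[1:])]
for minute, f in enumerate(fractions, start=1):
    print(f"minute {minute}: removed {f:.1%} of remaining smoke")

# With perfect exponential decay these fractions would all be equal; the post instead
# observes them drifting from ~21% down to ~16% over ten minutes in a furnished room.
```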
To calculate the CADR, I think we should use the initial somewhat steeper decrease, because that is closest to what you would get in the kind of empty room that is used for manufacturer CADR testing. We should probably also treat the room as if it's a bit smaller, to include the effect of some of the air being in hard-to-clean places, but I'm going to ignore that here. I got 21% for the AP-1512HH on 'high': how do I turn that into a CADR? Here's the room I was testing in: It isn't rectangular, but I estimate it's 1...
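As a rough sketch of the conversion this passage is heading toward (the room estimate above is cut off, so the room volume below is a hypothetical stand-in, and natural decay from ventilation and settling is ignored):

```python
# Converting a per-minute removal fraction into a CADR, assuming pure exponential decay
# and ignoring natural decay. The 21% figure is from the post; the room volume is a
# hypothetical placeholder, not the post's actual room.
import math

removal_fraction_per_min = 0.21   # measured on 'high'
room_volume_cf = 1000             # hypothetical room volume, cubic feet

# If concentration decays as C(t+1) = (1 - f) * C(t), the equivalent clean-air flow is
# CADR = -ln(1 - f) * V per minute.
cadr_cfm = -math.log(1 - removal_fraction_per_min) * room_volume_cf
print(f"~{cadr_cfm:.0f} CFM of clean air delivered")   # ≈ 236 CFM for these numbers
```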
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MERV Filters for Covid?, published by jefftk on May 25, 2022 on LessWrong. If you look around for advice on what kind of air purifier to use to reduce covid risk, you'll see some people saying you need HEPA filters: Microcovid: If you decide to buy an air purifier for COVID purposes, here's some things to keep in mind: Make sure the purifier has a HEPA filter - these are rated to remove 99.97% of airborne particles. Central heat / AC systems don't work for this. These typically use MERV-rated filters. These are significantly less effective at removing small particles than HEPA-rated filters (the best MERV rating, MERV-16, merely removes 75% of particles. [JK: this should be 95%—filed #1451]) The EPA, however, advocates anything that removes 0.1-1 µm particles well, and recommends MERV-13 or better if you're building something: In order to select an air cleaner that effectively filters viruses from the air, choose: 1) a unit that is the right size for the space you will be using it in (this is typically indicated by the manufacturer in square feet), 2) a unit that has a high CADR for smoke (vs. pollen or dust), is designated a HEPA unit, or specifically indicates that it filters particles in the 0.1-1 µm size range. When assembling a DIY air cleaner, choose a high-efficiency filter, rated MERV 13 or higher, for better filtration.

Where does the recommendation to use MERV-13 come from? Can I use MERV-12? If I use MERV-14 instead how much better will it be? Let's model it. Imagine you have an infected person who enters a room. They exhale sars-cov-2 particles, which slowly accumulate: These particles are a range of sizes, but let's approximate that 20% of the airborne sars-cov-2 is in particles of size 0.3-1µm, 29% in 1-3µm, and 51% in 3-10µm (see How Big Are Covid Particles?): The particles don't actually accumulate forever, for several reasons, one of which is that they typically slowly settle out of the air. This depends on size, with half lives of ~4d for 0.3µm, ~2hr for 2µm, and ~20min for 5µm: Another reason particles don't accumulate forever is the same reason that even with the windows closed we don't suffocate: buildings aren't airtight. You describe how much ventilation a building has in "air changes per hour" (ACH), and a typical value for residential construction is 2 ACH. For example, if your room is 1,000 cubic feet (CF) then you might have 2,000 cubic feet of air exchanged with the outside per hour, or 33 cubic feet per minute (CFM): Now let's imagine we turn on a purifier that runs 5 ACH (83 CFM in a 1,000 CF room) through a MERV-14 filter. This is rated to remove at least 75% of 0.3-1µm particles, 90% of 1-3µm, and 95% of 3-10µm: The total amount of sars-cov-2 in the air is the sum of these three curves: What happens when we run different filters with different efficacies? We can also look at each of these filters relative to a situation with no filtration: Let's extend the scale a little so we can see how long it takes to stabilize: In equilibrium, we see the following amounts of sars-cov-2 relative to no filtration:

Filter | Presence
MERV-11 | 49%
MERV-12 | 43%
MERV-13 | 40%
MERV-14 | 36%
MERV-15 | 35%
MERV-16 | 34%
HEPA | 33%

This is a little surprising: Microcovid estimates that in this situation ("Indoors with a HEPA filter (flow rate 5x room size per hour)") risk is reduced to 25%. They cite Curtius et al.
2020 and summarize as: Researchers found that running air purifiers in a classroom decreased the aerosol density in the room by 90%. This is not a great summary of the paper. The researchers did two different things:
1. Measured the effect of air purifiers on particle concentration.
2. Made a simple model of the effect of purifiers to estimate the decrease in risk.
The 90% comes from #2, and they say "After 2 hours, the concentration of aerosol particles containing vi...
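For readers who want to play with the steady-state model this episode describes, here is a rough Python sketch. The size-bin splits, settling half-lives, ventilation, purifier flow, and MERV-14 efficiencies are the values quoted above; the MERV-13 and HEPA efficiencies are rating minimums I have assumed, so the outputs only roughly match the post's table.

```python
# Rough steady-state model of airborne sars-cov-2 under ventilation, settling, and filtration,
# following the setup quoted above. Settling rates and some filter efficiencies are my
# approximations, so results will only roughly match the post's table.
import math

# Three particle-size bins: 0.3-1µm, 1-3µm, 3-10µm
emission_fraction = [0.20, 0.29, 0.51]              # share of exhaled sars-cov-2 per bin
settle_halflife_hr = [96.0, 2.0, 1 / 3.0]            # ~4 days, ~2 hours, ~20 minutes
settle_rate = [math.log(2) / t for t in settle_halflife_hr]   # per hour

ventilation_ach = 2.0    # leaky residential construction
purifier_ach = 5.0       # purifier flow, in room volumes per hour

filters = {
    "none":    [0.00, 0.00, 0.00],
    "MERV-13": [0.50, 0.85, 0.90],     # assumed rating minimums per bin
    "MERV-14": [0.75, 0.90, 0.95],     # from the post
    "HEPA":    [0.9997, 0.9997, 0.9997],
}

def equilibrium(efficiency):
    # At equilibrium, each bin's concentration is emission divided by its total removal rate.
    return sum(
        e / (ventilation_ach + s + purifier_ach * eff)
        for e, s, eff in zip(emission_fraction, settle_rate, efficiency)
    )

baseline = equilibrium(filters["none"])
for name, eff in filters.items():
    print(f"{name}: {equilibrium(eff) / baseline:.0%} of the no-filtration level")
```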
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Benign Boundary Violations, published by Duncan Sabien on May 26, 2022 on LessWrong. Recently, my friend Eric asked me what sorts of things I wanted to have happen at my bachelor party. I said (among other things) that I'd really enjoy some benign boundary violations. Eric went ???? Subsequently: an essay. We use the word "boundary" to mean at least two things, when we're discussing people's personal boundaries. The first is their actual self-defined boundary—the line that they would draw, if they had perfect introspective access, which marks the transition point from "this is okay" to "this is no longer okay." Different people have different boundaries: There are all sorts of different domains in which we have those different boundaries. If the above were a representation of people's feelings about personal space, then the person on the left would probably be big into hugs and slaps-on-the-shoulder, while the one on the right might not be comfortable sharing an elevator with more than one other person (if that). If the above were a representation of, say, people's openness to criticism, then the person on the left probably wouldn't mind if you told them their presentation sucked, in front of an audience of their friends, colleagues, and potential romantic partners. Meanwhile, the person on the right would probably prefer that you send a private message checking to see whether they were even interested in critical feedback at this time. Obviously, a diagram like the one above leaves out a lot of important nuance. For instance, a given person often has different boundaries within the same domain, depending on context—you may be very comfortable with intimate touch with your spouse and three closest friends, but very uncomfortable receiving hugs from strangers. And you may be quite comfortable receiving touches on the shoulder from just about anyone, but very uncomfortable receiving touches on the thigh. The above also doesn't do a great job of showing uncertainty in one's boundaries, which is often substantial. The "grey area" between okay and not okay might be quite small, in some cases (you have a clear, unambiguous "line" that you do not want crossed) and quite wide in others where you're not sure how you feel, and you might not know exactly where that gradient begins and ends. But for any given domain, and any given context, most people could at least a little bit describe where their boundaries lie. They're okay with the a-word, but not with the f-word. They're okay with friends borrowing $50, but they're not okay with family members asking for co-signers on a loan. They're okay with somebody crumpling up a post-it note and playfully throwing it at them, but they're not okay being hit in the face with a water balloon. There's a different thing altogether that people mean when they talk about boundaries, and that's something like what society tells us is okay. This, too, is context-dependent; different subcultures have different expectations, and norms between those subcultures can vary a lot. What's in-bounds on LW is different from what's in-bounds on FB, and what's in-bounds on 4chan is different still. But for any given subculture, it seems to me that society tries to set the boundaries at something like "ninety percent of the present/relevant/participating people will not have their personal boundaries violated."
In other words, the boundary given by social convention is set in approximately the same place as the personal boundary of the 90th-percentile sensitive person. (Others may disagree with me about the number, and may think that it's set at seventy percent or ninety-five percent or whatever, and certainly this number, too, varies depending on all sorts of factors, e.g. groups are more likely to be conservative in domains that feel more fraught or danger...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being ambitious and celebrating failures, published by Luke Freeman on May 26, 2022 on The Effective Altruism Forum. TL;DR: It's not enough to say we should celebrate failures: we need to learn from them. We can prevent a lot of unnecessary failures, and at the very least, we can fail more gracefully. I have some ideas and would appreciate yours too. Epistemic status: I'm confident in my main claim that it's important for the effective altruism community to have a culture that supports graceful (or 'successful') failures, but that we're at risk of falling short of such a culture. I aim to highlight other claims I'm less sure about. I’m much more sure about the problem existing than how we can solve it (as those solutions often involve lots of difficult tradeoffs) but my goal here is to start this conversation about preventing unnecessary failure and what ‘successful’ failure looks like (so we have the kind of failures that are worth celebrating). In Will MacAskill's recent post on effective altruism and the current funding situation, he mentions that while there is increased funding available for experimentation, if it's too available, it "could mean we lose incentives towards excellence": If it's too easy to get funding, then a mediocre project could just keep limping on, rather than improving itself; or a substandard project could continue, even though it would be better if it shut down and the people involved worked elsewhere....That said, this is something that I think donors are generally keeping in mind; many seed grants won't be renewed, and if a project doesn't seem like a good use of the people running it, then it's not likely to get funded. He then highlights No Lean Season as a standout example of how we should celebrate our failures — in this case, when an organisation decides to cease operations of a programme they no longer think is effective. This “celebrating failures” notion is a celebration of both the audacity to try and the humility to learn and change course. It’s a great ideal. I wholeheartedly support it. However, I fear that without taking meaningful steps towards it we'll fall far short of this ideal, resulting in people burning out, increased reputational risks for the community, and ultimately, significantly reduced impact. I think that effective altruism can learn a lot from private sector entrepreneurship, which often takes a high-risk, high-reward approach to achieving great successes. However, we should also learn from its failures, and not just blindly emulate it. For one thing, private sector entrepreneurship can be a very toxic environment (which is partly why I left). For another, we need to be mindful of how our projects are likely to be very different to the private sector. But there’s huge upside if we succeed: I’m deeply excited for more ambition and entrepreneurship within the effective altruism community. So in light of this, here are a few things that have been on my mind that can hopefully help us fail more gracefully, fail less grotesquely... or even better, fail less (while still achieving more). 1. Remember, not everyone is an entrepreneur (and we shouldn't expect them to be) I've spent most of my career working in early-stage startups and co-founded a startup myself. From that experience, I've come to think that entrepreneurship is not a good fit for most people. 
It can be incredibly stressful to operate on a shoestring budget, with very short funding cycles and little money in the bank — and not just for the leaders, but for the whole organisation. It can be tough to attract talent. Asking someone to leave their comfortable job to come and work for much less in an organisation that might evaporate is a hard sell. Even if you find people willing to do this, they're not necessarily the best people for the jobs. T...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Glass Puppet, published by lsusr on May 25, 2022 on LessWrong. The worst part of being an actor is that all work is temporary. That's what Alia told herself. In actuality, Alia's "acting" career amounted to two spots as an extra in commercials for enterprise software. To make ends meet, Alia worked as a social engineer. That's what Alia told herself. In actuality, Alia's "social engineering" career amounted to bluffing her way into an AI Alignment bootcamp for the free room and board. That's what had brought Alia to the entryway to the headquarters of Overton Cybernetics, a giant skyscraper of steel and glass. Apparently they needed an actor with AI Alignment experience. Alia took a deep breath. She was an actor. She had already faked her way through an AI Alignment bootcamp. Surely she could social engineer her way through an actual AI Alignment gig too, right? An older woman in a pencil skirt greeted Alia at the door. "You're going to be meeting with Dominic Hamilton," said the woman, "so make a good impression." Dominic Hamilton. The 27-year-old founder of Overton Cybernetics. Alia played it cool. She smiled as if she met billionaire geniuses all the time. Surely that's how a real AI Alignment actor would react, right? Dominic Hamilton's assistant led Alia to an elevator. They took it halfway up the building, transferred to a second elevator, and took the second elevator the rest of the way to the top. Alia deliberately avoided paying attention to her surroundings. She was an actor specializing in AI Alignment. She met billionaire geniuses all the time. Today was an ordinary day for her. Dominic Hamilton's office was sparse. One desk. One laptop. One door opposite one giant floor-to-ceiling window instead of a wall. Dominic Hamilton's throne chair faced away from the window, towards the door. Surely Dominic Hamilton didn't actually work here. It must be a greeting room to intimidate visitors. Alia decided not to be intimidated. Alia sat in the seat opposite the throne. Dominic Hamilton's assistant quietly stood in a corner of the room by the door. "It's nice to meet you, Mr. Dominic Hamilton," said Alia. She held out her hand to shake. It was ignored. "Just 'Dominic', please. But not yet," said Dominic. He motioned to a pair of smartglasses on the desk. "Put them on." Alia examined the smartglasses. They were bulkier than commercial smartglasses. They were a prototype. Dominic Hamilton twiddled his thumbs until Alia put the glasses on. "What is your name?" said Dominic. The glasses flashed a name across Alia's vision: Zoe. "Zoe," repeated Alia. "What do you think of her?" said Dominic. Good reaction time, so far. was displayed across Alia's vision. "Good reaction time, so far," said Alia. "Good," said Dominic. "Do you feel comfortable?" We'll see. flashed across Alia's vision. "Yes, definitely," said Alia. Hey! "Finally. I'm so glad." Dominic leaned forward against his desk as if to cry. Take his hands. Alia took his hands. "Are you okay with me going a little off script?" "What's your name?" said Dominic. Zoe. "Zoe," said Alia. "No. I mean your name." Dominic laughed. Say your name. "Zoe," insisted Alia. "The actress's name. Whose body are you wearing?" said Dominic. "Alia," said Zoe, "is the actor whose body I am wearing." "I'm sorry," said Dominic. "It's okay," said Alia. "May I be permitted to speak?" "Alia? Of course," said Dominic.
Alia took a deep breath. She was about to break all her rules about pretending she was supposed to be wherever she happened to be. "What on Earth is going on?" said Alia. Dominic laughed maniacally. "Surely you have figured it out by now, Alia." Alia hesitated. She had no idea what was going on. It's obvious, really. You have a hard time relating to people. So you created a person you could relate to. Software advances faster than hardware. Y...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Asterisk, published by Clara Collier on May 26, 2022 on The Effective Altruism Forum. Asterisk is a new quarterly journal of ideas from in and around Effective Altruism. Our goal is to provide clear, engaging, and deeply researched writing about complicated questions. This might look like a superforecaster giving a detailed explanation of the reasoning they use to make a prediction, a researcher discussing a problem in their work, or a deep dive into something the author noticed didn't quite make sense. While everything we publish should be useful (or at least interesting) to committed EAs, our audience is the wider penumbra of people who care about improving the world but aren't necessarily routine readers of, say, the EA Forum. In a nutshell: we're for Bayes' theorem, acknowledging our uncertainty, weird hypotheticals, and well-constructed sentences. We're against easy answers, lazy metaphors, and the end of life as we know it. Submission Guidelines While we expect the bulk of our pieces will be about typical EA cause areas (we think they're important for a reason!), we're a lot less interested in covering specific topics than in showcasing an "EA-ish" way of thinking about the world. We want to show off what this community does best: finding new ideas, taking them seriously, and investigating them rigorously – all while doing our best to reason under uncertainty. We're especially excited about pieces that help readers improve their epistemics and build better models of the world (the kinds of questions you'd see in the world-modeling tag on LessWrong would all be within scope for us). We'd like to see smart writing about global catastrophic risks aimed at a wider audience, EA methods applied to topics we've never thought about, and historical case studies that help us understand the present. Here are a few representative examples of the kinds of questions we'd like to see people tackle: How can we predict the next pandemic? Are there plausible candidate pathogens outside the usual suspects (flu, pox, or coronavirus)? What economic impacts of AI do we expect to see in the next five years? The next ten? Which industries will be impacted first? How was Oral Rehydration Therapy developed and rolled out? Why did it take until the 1960s when the underlying mechanism is so simple? The pessimists are probably right about lab-grown meat for human consumption. Is there a niche for it anyway? Does the typical American cow lead a life worth living? We're approaching the 10-year anniversary of the replication crisis: what's changed, what hasn't? Net neutrality was repealed in 2018. At the time, everyone was convinced this would be the end of internet freedom. What's actually happened since, and why did so many people get it wrong? What lessons should the biosafety community draw from the Soviet and US Cold War bio-warfare programs? Technology drives agricultural productivity in the US – what are the barriers to bringing that to the developing world? Effective PPE exists. What would it take to make it wearable and widely available? What do we know about the evolution of human intelligence, and what, if anything, can it tell us about the development of artificial intelligence? How contingent is scientific progress? Is differential technological development possible, and if so, how can it be steered?
Right now, we’re not interested in pure philosophy, movement evangelism, or meta-EA and other inside baseball. Reports on the health of the community or various internal organizations can be interesting, but they don’t have the kind of broad appeal we’re looking for. If you're interested in writing for us, please send a short paragraph explaining your idea (along with a writing sample, if you have one) to clara@asteriskmag.com. Jobs In addition to writers, we're looking for: M...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Science-informed normativity, published by Richard Ngo on May 25, 2022 on LessWrong. The debate over moral realism is often framed in terms of a binary question: are there ever objective facts about what's moral to do in a given situation? The broader question of normative realism is also framed in a similar way: are there ever objective facts about what's rational to do in a given situation? But I think we can understand these topics better by reframing them in terms of the question: how much do normative beliefs converge or diverge as ontologies improve? In other words: let's stop thinking about whether we can derive normativity from nothing, and start thinking about how much normativity we can derive from how little, given that we continue to improve our understanding of the world. The core intuition behind this approach is that, even if a better understanding of science and mathematics can't directly tell us what we should value, it can heavily influence how our values develop over time. Values under ontology improvements By "ontology" I mean the set of concepts which we use to understand the world. Human ontologies are primarily formulated in terms of objects which persist over time, and which have certain properties and relationships. The details have changed greatly throughout history, though. To explain fire and disease, we used to appeal to spirits and curses; over time we removed them and added entities like phlogiston and miasmas; now we've removed those in turn and replaced them with oxidation and bacteria. In other cases, we still use old concepts, but with an understanding that they're only approximations to more sophisticated ones - like absolute versus relative space and time. In other cases, we've added novel entities - like dark matter, or complex numbers - in order to explain novel phenomena. I'd classify all of these changes as "improvements" to our ontologies. What specifically counts as an improvement (if anything) is an ongoing debate in the philosophy of science. For now, though, I'll assume that readers share roughly common-sense intuitions about ontology improvement - e.g. the intuition that science has dramatically improved our ontologies over the last few centuries. Now imagine that our ontologies continue to dramatically improve as we come to better understand the world; and that we try to reformulate moral values from our old ontologies in terms of our new ontologies in a reasonable way. What might happen? Here are two extreme options. Firstly, very similar moral values might end up in very different places, based on the details of how that reformulation happens, or just because the reformulation is quite sensitive to initial conditions. Or alternatively, perhaps even values which start off in very different places end up being very similar in the new ontology - e.g. because they turn out to refer to different aspects of the same underlying phenomenon. These, plus intermediate options between them, define a spectrum of possibilities. I'll call the divergent end of this spectrum (which I've defended elsewhere) the "moral anti-realism" end, and the convergent end the "moral realism" end. This will be much clearer with a few concrete examples (although note that these are only illustrative, because the specific beliefs involved are controversial).
Consider two people with very different values: an egoist who only cares about their own pleasure, and a hedonic utilitarian. Now suppose that each of them comes to believe Parfit’s argument that personal identity is a matter of degree, so that now the concept of their one “future self” is no longer in their ontology. How might they map their old values to their new ontology? Not much changes for the hedonic utilitarian, but a reasonable egoist will start to place some value on the experiences of peo...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from Community Health for Q4 2021 & Q1 2022, published by evemccormick on May 25, 2022 on The Effective Altruism Forum. This update from CEA's Community Health team intends to: briefly re-introduce ourselves and our strategy; summarise our work in Q4 2021 and Q1 2022; and remind you of how and when you might want to contact us. Summary Our reformulated strategy is to identify 'thin spots' in the EA community, and to coordinate with others to direct additional resources to those areas. As a result, our work has shifted towards a greater focus on proactive projects, with a lower proportion of our time dedicated to reactive casework. For example, a key focus for Nicole and Julia in Q4 2021 and Q1 2022 has been coordinating with others to begin generating more accurate and thoughtful stories about EA and longtermism. Catherine Low joined the team part-time, while Eve McCormick and Chana Messinger joined the team full-time. In Q4 2021 and Q1 2022, we handled a total of 101 reactive cases. You can pass on sensitive or anonymous messages to the team via this form. Introduction On the Community Health team, we're in the process of reconceiving our strategy, both as we grow in size and as the EA ecosystem changes and develops. In a nutshell, we aim to identify important 'thin spots' or neglected areas in the EA ecosystem, and then try to fill in those 'thin spots' with additional resources. In some cases, we might try to fill those gaps ourselves; in others, we may try to ease coordination between other actors in the space, or seek someone new to take ownership of the area. Therefore, where historically most of our capacity was filled with reactive casework, we've begun to dedicate more of our time to proactive projects which arise in response to identified "thin spots". For example, last year, we identified a gap in positive communication around longtermism in the media. As a result, we began pushing for organisations to be more proactive in producing positive media pieces about longtermism. Nonetheless, handling cases continues to be a substantial part of our work. Our team In case you haven't read our previous updates, here's a little summary of who we are: Nicole Ross manages our team. In addition, her focuses during Q4 2021 and Q1 2022 included: improving the public's perception of EA and longtermism; improving EA culture and epistemics (by launching and overseeing projects in this space); and mitigating risks associated with early field-building in key locations or young fields. Julia Wise oversees community wellbeing and is a social worker by training. Her work includes leading or supporting projects such as improving community members' access to mental health services, as well as being a go-to person for specific cases where community members find themselves in challenging situations. Since Q4, we have also added capacity from three staff members, and we're so excited to have them on board! Catherine Low is now the main contact for community health support for groups. She has passed some of her responsibilities on to other Groups team members in order to free up ~50% of her time to focus on community health work. Eve McCormick joined us as a full-time staff member starting in February, continuing in her role as Nicole Ross's assistant while also taking on some operations and community health tasks.
Chana Messinger joined us in May and will initially be focused on epistemics and supporting high school outreach projects. Reactive work Despite our increased focus on proactive work, we continue to dedicate roughly 10% of our total team capacity to reactive casework. Between September and April, we handled: 25 inquiries or cases regarding the media and EA; 18 concerns around interpersonal problems; 9 cases where we advised on situations in early field-building (geographical are...