The Nonlinear Library

Author: The Nonlinear Fund

Subscribed: 25 · Played: 6,508

Description

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
4,973 Episodes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: S-Risks: Fates Worse Than Extinction, published by A.G.G. Liu on May 4, 2024 on The Effective Altruism Forum. Cross-posted from LessWrong In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance. The script for this video was a winning submission to the Rational Animations Script Writing contest (https://forum.effectivealtruism.org/posts/p8aMnG67pzYWxFj5r/rational-animations-script-writing-contest). The first author of this post, Allen Liu, was the primary script writer with the second author (Writer) and other members of the Rational Animations writing team giving significant feedback. Outside reviewers, including authors of several of the cited sources, provided input as well. Production credits are at the end of the video. You can find the script of the video below. Is there anything worse than humanity being driven extinct? When considering the long term future, we often come across the concept of "existential risks" or "x-risks": dangers that could effectively end humanity's future with all its potential. But these are not the worst possible dangers that we could face. Risks of astronomical suffering, or "s-risks", hold even worse outcomes than extinction, such as the creation of an incredibly large number of beings suffering terribly. Some researchers argue that taking action today to avoid these most extreme dangers may turn out to be crucial for the future of the universe. Before we dive into s-risks, let's make sure we understand risks in general. As Swedish philosopher Nick Bostrom explains in his 2013 paper "Existential Risk Prevention as Global Priority",[1] one way of categorizing risks is to classify them according to their "scope" and their "severity". A risk's "scope" refers to how large a population the risk affects, while its "severity" refers to how much that population is affected. To use Bostrom's examples, a car crash may be fatal to the victim themselves and devastating to their friends and family, but not even noticed by most of the world. So the scope of the car crash is small, though its severity is high for those few people. Conversely, some tragedies could have a wide scope but be comparatively less severe. If a famous painting were destroyed in a fire, it could negatively affect millions or billions of people in the present and future who would have wanted to see that painting in person, but the impact on those people's lives would be much smaller. In his paper, Bostrom analyzes risks which have both a wide scope and an extreme severity, including so-called "existential risks" or "x-risks". Human extinction would be such a risk: affecting the lives of everyone who would have otherwise existed from that point on and forever preventing all the joy, value and fulfillment they ever could have produced or experienced. Some other such risks might include humanity's scientific and moral progress permanently stalling or reversing, or us squandering some resource that could have helped us immensely in the future. S-risk researchers take Bostrom's categories a step further. 
If x-risks are catastrophic because they affect everyone who would otherwise exist and prevent all their value from being realized, then an even more harmful type of risk would be one that affects more beings than would otherwise exist and that makes their lives worse than non-existence: in other words, a risk with an even broader scope and even higher severity than a typical existential risk, or a fate worse than extinction. David Althaus and Lukas Gloor, in their article from 2016 titled "Reducing Risks of Astronomical Suffering: A Neglected Priority"...
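The scope/severity framing above lends itself to a compact illustration. The following minimal sketch is mine, not the authors'; the ordinal scales and the scores assigned to each example are illustrative assumptions, used only to show how s-risks sit beyond x-risks on both axes.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    scope: int     # illustrative ordinal: 1 = a few people ... 4 = more beings than would otherwise exist
    severity: int  # illustrative ordinal: 1 = mild ... 4 = worse than non-existence

examples = [
    Risk("car crash", scope=1, severity=3),
    Risk("famous painting destroyed", scope=3, severity=1),
    Risk("human extinction (x-risk)", scope=3, severity=3),
    Risk("astronomical suffering (s-risk)", scope=4, severity=4),
]

# Sorting by (scope, severity) reproduces the post's ordering: s-risks rank worst on both axes.
for r in sorted(examples, key=lambda r: (r.scope, r.severity)):
    print(f"{r.name:32s} scope={r.scope} severity={r.severity}")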
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing AI-Powered Audiobooks of Rational Fiction Classics, published by Askwho on May 4, 2024 on LessWrong. (ElevenLabs reading of this post:) I'm excited to share a project I've been working on that I think many in the LessWrong community will appreciate - converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, under the name "Askwho Casts AI". The keystone of this project is an audiobook version of Planecrash (AKA Project Lawful), the epic glowfic authored by Eliezer Yudkowsky and Lintamande. Given the scope and scale of this work, with its large cast of characters, I'm using ElevenLabs to give each character their own distinct voice. Converting this story into an audiobook has been a labor of love, and I hope that anyone who has bounced off the text before will find this a more accessible version. Alongside Planecrash, I'm also working on audiobook versions of two other rational fiction favorites: Luminosity by Alicorn (to be followed by its sequel Radiance), and Animorphs: The Reckoning by Duncan Sabien. I'm also putting out a feed where I convert any articles I find interesting, a lot of which are in the Rat Sphere. My goal with this project is to make some of my personal favorite rational stories more accessible by allowing people to enjoy them in audiobook format. I know how powerful these stories can be, and I want to help bring them to a wider audience and to make them easier for existing fans to re-experience. I wanted to share this here on LessWrong to connect with others who might find value in these audiobooks. If you're a fan of any of these stories, I'd love to get your thoughts and feedback! And if you know other aspiring rationalists who might enjoy them, please help spread the word. What other classic works of rational fiction would you love to see converted into AI audiobooks? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Now THIS is forecasting: understanding Epoch's Direct Approach, published by Elliot Mckernon on May 4, 2024 on LessWrong. Happy May the 4th from Convergence Analysis! Cross-posted on the EA Forum. As part of Convergence Analysis's scenario research, we've been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which its authors use neural scaling laws to make quantitative predictions about when AI will reach human-level performance and become transformative. The report has a corresponding blog post, an interactive model, and a Python notebook. We found this approach really interesting, but also hard to understand intuitively. While trying to follow how the authors derive a forecast from their assumptions, we wrote a breakdown that may be useful to others thinking about AI timelines and forecasting. In what follows, we set out our interpretation of Epoch's 'Direct Approach' to forecasting the arrival of transformative AI (TAI). We're eager to see how closely our understanding of this matches others'. We've also fiddled with Epoch's interactive model and include some findings on its sensitivity to plausible changes in parameters. The Epoch team recently attempted to replicate DeepMind's influential Chinchilla scaling law, an important quantitative input to Epoch's forecasting model, but found inconsistencies in DeepMind's presented data. We'll summarise these findings and explore how an improved model might affect Epoch's forecasting results. This is where the fun begins (the assumptions) The goal of Epoch's Direct Approach is to quantitatively predict the progress of AI capabilities. The approach is 'direct' in the sense that it uses observed scaling laws and empirical measurements to directly predict performance improvements as computing power increases. This stands in contrast to indirect techniques, which instead seek to estimate a proxy for performance. A notable example is Ajeya Cotra's Biological Anchors model, which approximates AI performance improvements by appealing to analogies between AIs and human brains. Both of these approaches are discussed and compared, along with expert surveys and other forecasting models, in Zershaaneh Qureshi's recent post, Timelines to Transformative AI: an investigation. In their blog post, Epoch summarises the Direct Approach as follows: The Direct Approach is our name for the idea of forecasting AI timelines by directly extrapolating and interpreting the loss of machine learning models as described by scaling laws. Let's start with scaling laws. Generally, these are just numerical relationships between two quantities, but in machine learning they specifically refer to the various relationships between a model's size, the amount of data it was trained with, its cost of training, and its performance. These relationships seem to fit simple mathematical trends, and so we can use them to make predictions: if we make the model twice as big - give it twice as much 'compute' - how much will its performance improve? Does the answer change if we use less training data? And so on. 
If we combine these relationships with projections of how much compute AI developers will have access to at certain times in the future, we can build a model which predicts when AI will cross certain performance thresholds. Epoch, like Convergence, is interested in when we'll see the emergence of transformative AI (TAI): AI powerful enough to revolutionise our society at a scale comparable to the agricultural and industrial revolutions. To understand why Convergence is especially interested in that milestone, see our recent post 'Transformative AI and Scenario Planning for AI X-risk'. Specifically, Epoch uses an empirically measured scaling ...
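To make the scaling-law extrapolation described above concrete, here is a minimal sketch, which is not Epoch's actual model. It uses the parametric Chinchilla form L(N, D) = E + A/N^alpha + B/D^beta with the coefficients reported in the original DeepMind paper (which, as noted above, Epoch's replication has called into question), and the compute-optimal assumptions in the comments are likewise rough simplifications.

def chinchilla_loss(n_params: float, n_tokens: float,
                    E: float = 1.69, A: float = 406.4, B: float = 410.7,
                    alpha: float = 0.34, beta: float = 0.28) -> float:
    """Parametric Chinchilla fit: predicted training loss from parameter count and token count."""
    return E + A / n_params**alpha + B / n_tokens**beta

def loss_at_compute(flops: float) -> float:
    """Assume compute-optimal allocation: C ~ 6*N*D with D ~ 20*N (both rough rules of thumb)."""
    n_params = (flops / (6 * 20)) ** 0.5
    n_tokens = 20 * n_params
    return chinchilla_loss(n_params, n_tokens)

# Each additional order of magnitude of compute buys a smaller absolute improvement in predicted loss:
for flops in (1e24, 1e25, 1e26):
    print(f"{flops:.0e} FLOP -> predicted loss {loss_at_compute(flops):.3f}")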
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare is now enshrined in the Belgian Constitution, published by Bob Jacobs on May 4, 2024 on The Effective Altruism Forum. A while back, I wrote a quicktake about how the Belgian Senate voted to enshrine animal welfare in the Constitution. It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the Constitution. Today we were present as a supermajority of the Senate came out in favor of our proposed constitutional amendment. [...] It's a very good day for Belgian animals, but I do want to note that: 1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal welfare laws and lawsuits will have an easier time. And, 2. It still needs to pass the Chamber of Representatives. If there's interest, I will make a full post about it once it passes the Chamber. It is now my great pleasure to announce to you that a supermajority of the Chamber also voted in favor of enshrining animal welfare in the Constitution. Article 7a of the Belgian Constitution now reads as follows: In the exercise of their respective powers, the Federal State, the Communities and the Regions shall ensure the protection and welfare of animals as sentient beings. This inclusion of animals as sentient beings is notable, as it represents the fourth major revision of the Constitution in favor of individual rights. Previous revisions have addressed universal suffrage, gender equality, and the rights of people with disabilities. TL;DR: The significance of this inclusion extends beyond symbolic value. It will have tangible effects on animal protection in Belgium: 1. Fundamental Value: Animal welfare is now recognized as a fundamental value of Belgian society. In cases where a constitutional right conflicts with animal protection, the latter will hold greater legal weight than before and must be seriously considered. For example, this recognition may facilitate the implementation of a country-wide ban on slaughter without anesthesia, as both freedom of religion and animal welfare are now constitutionally protected. 2. Legislative Guidance: The inclusion of animal welfare will encourage legislative and executive bodies to prioritize laws aimed at improving animal protection while rejecting those that may undermine it. Regressive measures serving certain interests (e.g. purely financial interests) will face increased scrutiny as they are weighed against the constitutional protection of animal welfare. 3. Legal Precedent: In legal cases involving animals, whether criminal or civil, judges will be influenced by the values enshrined in the Constitution. This awareness may lead to greater consideration of animal interests in judicial decisions. Legal importance: In the hierarchy of Belgian legal norms, the Constitution is at the very top. This means that lower regulations (the laws of the federal and regional parliament(s), the regulations of local governments, and executive orders) must comply with the Constitution. If different rights must be weighed against one another, the one that is enshrined in the Constitution is deemed more important. Previously, religious freedom was in the Constitution and animal welfare was not, meaning the former carried more weight.
Article 19 of the Constitution merely states that the exercise of worship is free unless crimes (criminal violations of law) are committed in the course of that exercise. There have been many attempts to ban unanesthetized slaughter; in some regions they were successful, in others not, and in all of them they led to fierce legal debate and lengthy proceedings. Enshrining animal welfare in the Constitution will finally ensure a full victory for the animals. (The exercise of other fundamental rights besides religious freedom can also have a negative impact on ani...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My hour of memoryless lucidity, published by Eric Neyman on May 4, 2024 on LessWrong. Yesterday, I had a coronectomy: the top halves of my bottom wisdom teeth were surgically removed. It was my first time being sedated, and I didn't know what to expect. While I was unconscious during the surgery, the hour after surgery turned out to be a fascinating experience, because I was completely lucid but had almost zero short-term memory. My girlfriend, who had kindly agreed to accompany me to the surgery, was with me during that hour. And so - apparently against the advice of the nurses - I spent that whole hour talking to her and asking her questions. The biggest reason I find my experience fascinating is that it has mostly answered a question that I've had about myself for quite a long time: how deterministic am I? In computer science, we say that an algorithm is deterministic if it's not random: if it always behaves the same way when it's in the same state. In this case, my "state" was my environment (lying drugged on a bed with my IV in and my girlfriend sitting next to me) plus the contents of my memory. Normally, I don't ask the same question over and over again because the contents of my memory change when I ask the question the first time: after I get an answer, the answer is in my memory, so I don't need to ask the question again. But for that hour, the information I processed came in one ear and out the other in a matter of minutes. And so it was a natural test of whether my memory is the only thing keeping me from saying the same things on loop forever, or whether I'm more random/spontaneous than that.[1] And as it turns out, I'm pretty deterministic! According to my girlfriend, I spent a lot of that hour cycling between the same few questions on loop: "How did the surgery go?" (it went well), "Did they just do a coronectomy or did they take out my whole teeth?" (just a coronectomy), "Is my IV still in?" (yes), "how long was the surgery?" (an hour and a half), "what time is it?", and "how long have you been here?". (The length of that cycle is also interesting, because it gives an estimate of how long I was able to retain memories for - apparently about two minutes.) (Toward the end of that hour, I remember asking, "I know I've already asked this twice, but did they just do a coronectomy?" (The answer: "actually you've asked that much more than twice, and yes, it was just a coronectomy.)) Those weren't my only questions, though. About five minutes into that hour, I apparently asked my girlfriend for two 2-digit numbers to multiply, to check how cognitively impaired I was. She gave me 27*69, and said that I had no trouble doing the multiplication in the obvious way (27*7*10 - 27), except that I kept having to ask her to remind me what the numbers were. Interestingly, I asked her for two 2-digit numbers again toward the end of that hour, having no memory that I had already done this. She told me that she had already given me two numbers, and asked whether I wanted the same numbers again. I said yes (so I could compare my performance). The second time, I was able to do the multiplication pretty quickly without needing to ask for the numbers to be repeated. 
Also, about 20 minutes into the hour, I asked my girlfriend to give me the letters to that day's New York Times Spelling Bee, which is a puzzle where you're given seven letters and try to form words using the letters. (The letters were W, A, M, O, R, T, and Y.) I found the pangram - the word that uses every letter at least once[2] - in about 30 seconds, which is about average for me, except that yesterday I was holding the letters in my head instead of looking at them on a screen. I also got most of the way to the "genius" rank - a little better than I normally do - and my girlfriend got us the rest of the way ther...
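As an aside, the pangram check described above is simple to state precisely. A quick sketch follows; it is mine rather than the post's, and it ignores the real game's centre-letter requirement, which the post doesn't mention.

letters = set("WAMORTY")  # the seven letters given in the post

def is_valid(word: str) -> bool:
    """A Spelling Bee word may only use the day's letters (repeats allowed)."""
    return bool(word) and set(word.upper()) <= letters

def is_pangram(word: str) -> bool:
    """A pangram additionally uses all seven letters at least once."""
    return is_valid(word) and set(word.upper()) == letters

# Illustrative candidates; a real solver would scan a dictionary file.
for candidate in ("tomato", "mayor", "motorway"):
    print(f"{candidate}: valid={is_valid(candidate)}, pangram={is_pangram(candidate)}")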
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to ESPR & PAIR, Rationality and AI Camps for Ages 16-21, published by Anna Gajdova on May 4, 2024 on LessWrong. TLDR - Apply now to ESPR and PAIR. ESPR welcomes students between 16-19 years. PAIR is for students between 16-21 years. The FABRIC team is running two immersive summer workshops for mathematically talented students this year. The Program on AI and Reasoning (PAIR) is for students with an interest in artificial intelligence, cognition, and minds in general. We will study how current AI systems work, mathematical theories about human minds, and how the two relate. Alumni of the previous PAIR described the content as a blend of AI, mathematics and introspection, but also highlighted that a large part of the experience is informal conversations and small-group activities. See the curriculum details. For students who are 16-21 years old. July 29th - August 8th in Somerset, United Kingdom. The European Summer Program on Rationality (ESPR) is for students with a desire to understand themselves and the world, and an interest in applied rationality. The curriculum covers a wide range of topics, from game theory, cryptography, and mathematical logic, to AI, styles of communication, and cognitive science. The goal of the program is to help students hone rigorous, quantitative skills as they acquire a toolbox of useful concepts and practical techniques applicable in all walks of life. See the content details. For students who are 16-19 years old. August 15th - August 25th in Oxford, United Kingdom. We encourage all LessWrong readers interested in these topics who are within the respective age windows to apply! Both programs are free for accepted students, and travel scholarships are available. Apply to both camps here. The application deadline is Sunday, May 19th. If you know people within the age window who might enjoy these camps, please send them the link to the FABRIC website, which has an overview of all our camps. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "AI Safety for Fleshy Humans", an AI Safety explainer by Nicky Case, published by habryka on May 3, 2024 on LessWrong. Nicky Case, of "The Evolution of Trust" and "We Become What We Behold" fame (two quite popular online explainers/mini-games), has written an intro explainer to AI Safety! It looks pretty good to me, though just the first part is out, which isn't super in-depth. I particularly appreciate Nicky clearly thinking about the topic themselves, and I kind of like some of their "logic vs. intuition" frame, even though I think that aspect is less core to my model of how things will go. It's clear that a lot of love has gone into this, and I think having more intro-level explainers for AI-risk stuff is quite valuable. === The AI debate is actually 100 debates in a trenchcoat. Will artificial intelligence (AI) help us cure all disease, and build a post-scarcity world full of flourishing lives? Or will AI help tyrants surveil and manipulate us further? Are the main risks of AI from accidents, abuse by bad actors, or a rogue AI itself becoming a bad actor? Is this all just hype? Why can AI imitate any artist's style in a minute, yet get confused drawing more than 3 objects? Why is it hard to make AI robustly serve humane values, or robustly serve any goal? What if an AI learns to be more humane than us? What if an AI learns humanity's inhumanity, our prejudices and cruelty? Are we headed for utopia, dystopia, extinction, a fate worse than extinction, or - the most shocking outcome of all - nothing changes? Also: will an AI take my job? ...and many more questions. Alas, to understand AI with nuance, we must understand lots of technical detail... but that detail is scattered across hundreds of articles, buried six-feet-deep in jargon. So, I present to you: This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety* - explained in a friendly, accessible, and slightly opinionated way! (* Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don't mean, so I'm just using "AI Safety" as a catch-all.) This series will also have comics starring a Robot Catboy Maid. Like so: [...] The Core Ideas of AI & AI Safety: In my opinion, the main problems in AI and AI Safety come down to two core conflicts: Note: What "Logic" and "Intuition" are will be explained more rigorously in Part One. For now: Logic is step-by-step cognition, like solving math problems. Intuition is all-at-once recognition, like seeing if a picture is of a cat. "Intuition and Logic" roughly map onto "System 1 and 2" from cognitive science.[1][2] As you can tell by the "scare" "quotes" on "versus", these divisions ain't really so divided after all... Here's how these conflicts repeat over this 3-part series: Part 1: The past, present, and possible futures. Skipping over a lot of detail, the history of AI is a tale of Logic vs Intuition: Before 2000: AI was all logic, no intuition. This was why, in 1997, AI could beat the world champion at chess... yet no AIs could reliably recognize cats in pictures.[3] (Safety concern: Without intuition, AI can't understand common sense or humane values. Thus, AI might achieve goals in logically-correct but undesirable ways.) After 2000: AI could do "intuition", but had very poor logic.
This is why generative AIs (as of this writing, May 2024) can dream up whole landscapes in any artist's style... yet get confused drawing more than 3 objects. (Safety concern: Without logic, we can't verify what's happening in an AI's "intuition". That intuition could be biased, subtly-but-dangerously wrong, or fail bizarrely in new scenarios.) Current Day: We still don't know how to unify logic & i...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Lament to EA, published by kta on May 3, 2024 on The Effective Altruism Forum. I am dealing with repetitive strain injury and don't foresee being able to really respond to any comments (I'm surprised with myself that I wrote all of this without twitching forearms, lol!). I'm a little hesitant to post this, but I thought I should be vulnerable. Honestly, I'm relieved that I finally get to share my voice. I know some people may want me to discuss this privately - but that might not be helpful to me, as I know (by personal and indirect experience) that the very people who were meant to help have tried to silence some community issues in EA. And to be honest, the fear of criticizing EA is something I have disliked about EA - I've been behind the scenes enough to know that despite being well-intentioned, criticizing EA (especially openly) can privately get you excluded from opportunities and circles, often even silently. This is an internal battle I've had with EA for a while (years). Still, I thought that by sharing my experiences I could add to the ongoing discourse in the community. Appreciation and disillusionment: I want to start by saying I have many lovely friends and colleagues in the movement whom I deeply respect. You know who you are. :) My thoughts here are not generalized to the whole movement itself - just some problems I feel most have failed to recognize enough, stemming from specific experiences. I think more effort should be made to address these issues, or at least to consider them as the movement is built. I joined EA in university (five years ago), thrilled to see an actual movement work on problems I thought were important in a way that I thought was important. I dove in, thinking I had finally found the group of people I had so wanted to find since grade school - a bunch of cool, intelligent, kind, altruistic nerds and geeks! And for a while, it was good. I met my ex-partner there (which was good for a while) and some good friends. I'm happy thinking I made an impact over the past few years and learned so much about myself and how to be more mature and intelligent. I also have a lot of gratitude for this movement for teaching me so much and for shaping who I am today. However, throughout the years, I became more disillusioned and saddened due to systemic issues within the movement - how it was structured in a way that allowed a lot of negative things to happen, despite how much people really brainstormed and tried for them not to. I've experienced many degrading things, directly because of EA, that I wish on no one. (Some of these experiences I mainly wish to keep private out of respect for some.) Despite my efforts to enter the community and work hard, I burned out physically, professionally, and personally. And it's taken such a toll on me that for a while I did not fully recognize who I was anymore. I definitely think a lot of me was consumed by hurt and negativity, and I'm working on that. I've actually distanced myself from my local group for the longest time because I felt a select few of them (not all!) were toxic and mean - I mainly stayed there to protect and support someone, but unfortunately was betrayed multiple times by them, and I wish I had left earlier. (I send them love and light now.)
So, while I've had many great, eye-opening experiences and have made many amazing friends through EA, I don't think my positive feelings are enough anymore for me to fully stay in it for a while. Instead, I will focus on my specific cause area and research field. (I acknowledge it might tie into EA sometimes and I accept that). This has not been an easy realization, nor one reached hastily, but after considerable reflection on the negative impacts these issues have had on my well-being. The following are non-exhaustive. Specific challenges When it has been uncomfort...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Key takeaways from our EA and alignment research surveys, published by Cameron Berg on May 3, 2024 on LessWrong. Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project - as well as the ~375 EAs + alignment researchers who provided the data that made this project possible. Background Last month, AE Studio launched two surveys: one for alignment researchers, and another for the broader EA community. We got some surprisingly interesting results, and we're excited to share them here. We set out to better explore and compare various population-level dynamics within and across both groups. We examined everything from demographics and personality traits to community views on specific EA/alignment-related topics. We took on this project because it seemed to be largely unexplored and rife with potentially-very-high-value insights. In this post, we'll present what we think are the most important findings from this project. Meanwhile, we're also sharing and publicly releasing a tool we built for analyzing both datasets. The tool has some handy features, including customizable filtering of the datasets, distribution comparisons within and across the datasets, automatic classification/regression experiments, LLM-powered custom queries, and more. We're excited for the wider community to use the tool to explore these questions further in whatever manner they desire. There are many open questions we haven't tackled here related to the current psychological and intellectual make-up of both communities that we hope others will leverage the dataset to explore further. (Note: if you want to see all results, navigate to the tool, select the analysis type of interest, and click 'Select All.' If you have additional questions not covered by the existing analyses, the GPT-4 integration at the bottom of the page should ideally help answer them. The code running the tool and the raw anonymized data are both also publicly available.) We incentivized participation by offering to donate $40 per eligible[1] respondent - strong participation in both surveys enabled us to donate over $10,000 to both AI safety orgs as well as a number of different high impact organizations (see here[2] for the exact breakdown across the two surveys). Thanks again to all of those who participated in both surveys! Three miscellaneous points on the goals and structure of this post before diving in: 1. Our goal here is to share the most impactful takeaways rather than simply regurgitating every conceivable result. This is largely why we are also releasing the data analysis tool, where anyone interested can explore the dataset and the results at whatever level of detail they please. 2. This post collectively represents what we at AE found to be the most relevant and interesting findings from these experiments. We sorted the TL;DR below by perceived importance of findings. We are personally excited about pursuing neglected approaches to alignment, but we have attempted to be as deliberate as possible throughout this write-up in striking the balance between presenting the results as straightforwardly as possible and sharing our views about implications of certain results where we thought it was appropriate. 3. 
This project was descriptive and exploratory in nature. Our goal was to cast a wide psychometric net in order to get a broad sense of the psychological and intellectual make-up of both communities. We used standard frequentist statistical analyses to probe for significance where appropriate, but we definitely still think it is important for ourselves and others to perform follow-up experiments to those presented here with a more tightly controlled scope to replicate and further sharpen t...
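As an illustration of the kind of frequentist group comparison described above, here is a minimal sketch that compares one survey item's distribution across the two samples with a non-parametric test. The file and column names are hypothetical; this is not AE Studio's actual analysis code.

import pandas as pd
from scipy.stats import mannwhitneyu

def compare_item(alignment_df: pd.DataFrame, ea_df: pd.DataFrame, item: str) -> None:
    """Compare the distribution of a shared survey item between the two respondent groups."""
    a = alignment_df[item].dropna()
    b = ea_df[item].dropna()
    stat, p = mannwhitneyu(a, b, alternative="two-sided")  # no normality assumption
    print(f"{item}: alignment median={a.median():.2f}, EA median={b.median():.2f}, U={stat:.0f}, p={p:.4g}")

# Hypothetical usage with the released anonymized data:
# compare_item(pd.read_csv("alignment_survey.csv"), pd.read_csv("ea_survey.csv"), "p_doom")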
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #62: Too Soon to Tell, published by Zvi on May 3, 2024 on LessWrong. What is the mysterious impressive new 'gpt2-chatbot' from the Arena? Is it GPT-4.5? A refinement of GPT-4? A variation on GPT-2 somehow? A new architecture? Q-star? Someone else's model? Could be anything. It is so weird that this is how someone chose to present that model. There was also a lot of additional talk this week about California's proposed SB 1047. I wrote an additional post extensively breaking that bill down, explaining how it would work in practice, addressing misconceptions about it and suggesting fixes for its biggest problems along with other improvements. For those interested, I recommend reading at least the sections 'What Do I Think The Law Would Actually Do?' and 'What are the Biggest Misconceptions?' As usual, lots of other things happened as well. Table of Contents: 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Do your paperwork for you. Sweet. 4. Language Models Don't Offer Mundane Utility. Because it is not yet good at it. 5. GPT-2 Soon to Tell. What is this mysterious new model? 6. Fun With Image Generation. Certified made by humans. 7. Deepfaketown and Botpocalypse Soon. A located picture is a real picture. 8. They Took Our Jobs. Because we wouldn't let other humans take them first? 9. Get Involved. It's protest time. Against AI, that is. 10. In Other AI News. Incremental upgrades, benchmark concerns. 11. Quiet Speculations. Misconceptions cause warnings of AI winter. 12. The Quest for Sane Regulation. Big tech lobbies to avoid regulations, who knew? 13. The Week in Audio. Lots of Sam Altman, plus some others. 14. Rhetorical Innovation. The few people who weren't focused on SB 1047. 15. Open Weights Are Unsafe And Nothing Can Fix This. Tech for this got cheaper. 16. Aligning a Smarter Than Human Intelligence is Difficult. Dot by dot thinking. 17. The Lighter Side. There must be some mistake. Language Models Offer Mundane Utility: Write automatic police reports based on body camera footage. It seems it only uses the audio? Not using the video seems to be giving up a lot of information. Even so, law enforcement seems impressed; one notes an 82% reduction in time spent writing reports, even with proofreading requirements. Axon says it did a double-blind study to compare its AI reports with ones from regular officers. And it says that Draft One results were "equal to or better than" regular police reports. As with self-driving cars, that is not obviously sufficient. Eliminate 2.2 million unnecessary words in the Ohio administrative code, out of a total of 17.4 million. The AI identified candidate language, which humans reviewed. Sounds great, but let's make sure we keep that human in the loop. Diagnose your medical condition? Link has a one-minute video of a doctor asking questions and correctly diagnosing a patient. Ate-a-Pi: This is why AI will replace doctor. Sherjil Ozair: diagnosis any%. Akhil Bagaria: This is the entire premise of the TV show House. The first AI attempt listed only does 'the easy part' of putting all the final information together. Kiaran Ritchie then shows that yes, ChatGPT can figure out what questions to ask, solving the problem with eight requests over two steps, followed by a solution.
There are still steps where the AI is getting extra information, but they do not seem like the 'hard steps' to me. Is Sam Altman subtweeting me? Sam Altman: Learning how to say something in 30 seconds that takes most people 5 minutes is a big unlock. (and imo a surprisingly learnable skill. If you struggle with this, consider asking a friend who is good at it to listen to you say something and then rephrase it back to you as concisely as they can a few dozen times. I have seen this work really well!) Interesting DM: "For what it's worth this...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mechanistic Interpretability Workshop Happening at ICML 2024!, published by Neel Nanda on May 3, 2024 on The AI Alignment Forum. Announcing the first academic Mechanistic Interpretability workshop, held at ICML 2024! We'd love to get papers submitted if any of you have relevant projects! Deadline May 29, max 4 or max 8 pages. We welcome anything that brings us closer to a principled understanding of model internals, even if it's not "traditional" mech interp. Check out our website for example topics! There's $1750 in best paper prizes. We also welcome less standard submissions, like open source software, models or datasets, negative results, distillations, or position pieces. And if anyone is attending ICML, you'd be very welcome at the workshop! We have a great speaker line-up: Chris Olah, Jacob Steinhardt, David Bau and Asma Ghandeharioun. And a panel discussion, hands-on tutorial, and social. I'm excited to meet more people into mech interp! And if you know anyone who might be interested in attending/submitting, please pass this on. Twitter thread, Website Thanks to my great co-organisers: Fazl Barez, Lawrence Chan, Kayo Yin, Mor Geva, Atticus Geiger and Max Tegmark Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On John Woolman (Thing of Things), published by Aaron Gertler on May 2, 2024 on The Effective Altruism Forum. My favorite EA blogger tells the story of an early abolitionist. The subtitle, "somewhat in favor of guilt", is better than any summary I'd write. John Woolman would probably be mad at me for writing a post about his life. He never thought his life mattered. Partially, he hated the process of traveling: the harshness of life on the road; being away from his family; the risk of bringing home smallpox, which terrified him. But mostly it was the task being asked of Woolman that filled him with grief. Woolman was naturally "gentle, self-deprecating, and humble in his address", but he felt called to harshly condemn slaveowning Quakers. All he wanted was to be able to have friendly conversations with people who were nice to him. But instead, he felt, God had called him to be an Old Testament prophet, thundering about God's judgment and the need for repentance. I don't think you get John Woolman without the scrupulosity. If someone is the kind of person who sacrifices money, time with his family, approval from his community, his health - in order to do a thankless, painful task that goes against all of his instincts for how to interact with other people, with no sign of success - a task that, if it advanced abolition only in Pennsylvania by even a single year, prevented nearly 7,000 years of enslavement, and by any reasonable estimate prevented thousands or tens of thousands more - well, someone like that is going to be extra about the non-celebration of Christmas. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ask me questions here about my 80,000 hours podcast on preventing neonatal deaths with Kangaroo Mother Care, published by deanspears on May 2, 2024 on The Effective Altruism Forum. I was interviewed in yesterday's 80,000 hours podcast: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. As I say in the podcast, there's good evidence that this is a cost-effective way to save lives. Many peer-reviewed articles show that Kangaroo Mother Care is effective. The 80k link has many further links to the articles and data behind the podcast. You can see GiveWell's write up of their support for our project at this link. This partnership with a large government medical college is able to reach many babies. And with more funding, we could achieve more. Anyone can support this project by donating, at riceinstitute.org, to a 501(c)3 public charity. If you have any questions, please feel free to ask below! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Which skincare products are evidence-based?, published by Vanessa Kosoy on May 2, 2024 on LessWrong. The beauty industry offers a large variety of skincare products (marketed mostly at women), differing both in alleged function and (substantially) in price. However, it's pretty hard to test for yourself how much any of these products help. The feedback loop for things like "getting fewer wrinkles" is very long. So, which of these products are actually useful and which are mostly a waste of money? Are more expensive products actually better, or do they just have better branding? How can I find out? I would guess that sunscreen is definitely helpful, and using some moisturizers for face and body is probably helpful. But what about night cream? Eye cream? So-called "anti-aging"? Exfoliants? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Q&A on Proposed SB 1047, published by Zvi on May 2, 2024 on LessWrong. Previously: On the Proposed California SB 1047. Text of the bill is here. It focuses on safety requirements for highly capable AI models. This is written as an FAQ, tackling all questions or points I saw raised. The Safe & Secure AI Innovation Act also has a description page. Why Are We Here Again? There have been many highly vocal and forceful objections to SB 1047 this week, in reaction to a (disputed and seemingly incorrect) claim that the bill has been 'fast tracked.' The bill continues to have a substantial chance of becoming law according to Manifold, where the market has not moved on recent events. The bill has been referred to two policy committees, one of which put out this 38-page analysis. The purpose of this post is to gather and analyze all objections that came to my attention in any way, including all responses to my request for them on Twitter, and to suggest concrete changes that address some real concerns that were identified. 1. Some are helpful critiques pointing to potential problems, or good questions where we should ensure that my current understanding is correct. In several cases, I suggest concrete changes to the bill as a result. Two are important to fix weaknesses, one is a clear improvement, and the others are free actions for clarity. 2. Some are based on what I strongly believe is a failure to understand how the law works, both in theory and in practice, or a failure to carefully read the bill, or both. 3. Some are pointing out a fundamental conflict. They want people to have the ability to freely train and release the weights of highly capable future models. Then they notice that it will become impossible to do this while adhering to ordinary safety requirements. They seem to therefore propose not having safety requirements. 4. Some are alarmist rhetoric that has little tether to what is in the bill, or how any of this works. I am deeply disappointed in some of those using or sharing such rhetoric. Throughout such objections, there is little or no acknowledgement of the risks that the bill attempts to mitigate, suggestions of alternative ways to do that, or reasons to believe that such risks are insubstantial even absent required mitigation. To be fair to such objectors, many of them have previously stated that they believe that future, more capable AI poses little catastrophic risk. I get making mistakes; indeed, it would be surprising if this post contained none of its own. Understanding even a relatively short bill like SB 1047 requires close reading. If you thoughtlessly forward anything that sounds bad (or good) about such a bill, you are going to make mistakes, some of which are going to look dumb. What is the Story So Far? If you have not previously done so, I recommend reading my previous coverage of the bill when it was proposed, although note that the text has been slightly updated since then. In the first half of that post, I did an RTFB (Read the Bill). I read it again for this post. The core bill mechanism is that if you want to train a 'covered model,' meaning training on 10^26 flops or getting performance similar to or greater than what that would buy you in 2024, then various safety requirements attach. If you fail in your duties you can be fined; if you purposefully lie about it, that is under penalty of perjury.
I concluded this was a good faith effort to put forth a helpful bill. As the bill deals with complex issues, it contains both potential loopholes on the safety side, and potential issues of inadvertent overreach, unexpected consequences or misinterpretation on the restriction side. In the second half, I responded to Dean Ball's criticisms of the bill, which he called 'California's Effort to Strangle AI.' 1. In the section What Is a Covered Model,...
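For readers skimming, the 'covered model' trigger described above reduces to a simple disjunction. A minimal sketch of that reading follows; it is my paraphrase of the post's summary, not the bill's legal text.

COVERED_FLOP_THRESHOLD = 1e26  # training compute threshold as summarized in the post

def is_covered_model(training_flops: float, matches_2024_threshold_capability: bool = False) -> bool:
    """Covered if trained on >= 1e26 FLOP, or if performance matches what that compute bought in 2024."""
    return training_flops >= COVERED_FLOP_THRESHOLD or matches_2024_threshold_capability

print(is_covered_model(3e25))                                           # False: below both prongs
print(is_covered_model(3e25, matches_2024_threshold_capability=True))   # True: capability prong applies
print(is_covered_model(2e26))                                           # True: compute prong applies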
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please stop publishing ideas/insights/research about AI, published by Tamsin Leake on May 2, 2024 on LessWrong. Basically all ideas/insights/research about AI is potentially exfohazardous. At least, it's pretty hard to know when some ideas/insights/research will actually make things better; especially in a world where building an aligned superintelligence (let's call this work "alignment") is quite harder than building any superintelligence (let's call this work "capabilities"), and there's a lot more people trying to do the latter than the former, and they have a lot more material resources. Ideas about AI, let alone insights about AI, let alone research results about AI, should be kept to private communication between trusted alignment researchers. On lesswrong, we should focus on teaching people the rationality skills which could help them figure out insights that help them build any superintelligence, but are more likely to first give them insights that help them realize that that is a bad idea. For example, OpenAI has demonstrated that they're just gonna cheerfully head towards doom. If you give OpenAI, say, interpretability insights, they'll just use them to work towards doom faster; what you need is to either give OpenAI enough rationality to slow down (even just a bit), or at least not give them anything. To be clear, I don't think people working at OpenAI know that they're working towards doom; a much more likely hypothesis is that they've memed themselves into not thinking very hard about the consequences of their work, and to erroneously feel vaguely optimistic about those due to cognitive biases such as wishful thinking. It's very rare that any research purely helps alignment, because any alignment design is a fragile target that is just a few changes away from unaligned. There is no alignment plan which fails harmlessly if you fuck up implementing it, and people tend to fuck things up unless they try really hard not to (and often even if they do), and people don't tend to try really hard not to. This applies doubly so to work that aims to make AI understandable or helpful, rather than aligned - a helpful AI will help anyone, and the world has more people trying to build any superintelligence (let's call those "capabilities researchers") than people trying to build aligned superintelligence (let's call those "alignment researchers"). Worse yet: if focusing on alignment is correlated with higher rationality and thus with better ability for one to figure out what they need to solve their problems, then alignment researchers are more likely to already have the ideas/insights/research they need than capabilities researchers, and thus publishing ideas/insights/research about AI is more likely to differentially help capabilities researchers. Note that this is another relative statement; I'm not saying "alignment researchers have everything they need", I'm saying "in general you should expect them to need less outside ideas/insights/research on AI than capabilities researchers". Alignment is a differential problem. We don't need alignment researchers to succeed as fast as possible; what we really need is for alignment researchers to succeed before capabilities researchers. Don't ask yourself "does this help alignment?", ask yourself "does this help alignment more than capabilities?". "But superintelligence is so far away!" 
- even if this was true (it isn't) then it wouldn't particularly matter. There is nothing that makes differentially helping capabilities "fine if superintelligence is sufficiently far away". Differentially helping capabilities is just generally bad. "But I'm only bringing up something that's already out there!" - something "already being out there" isn't really a binary thing. Bringing attention to a concept that's "already out there" is an ex...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An explanation of evil in an organized world, published by KatjaGrace on May 2, 2024 on LessWrong. A classic problem with Christianity is the so-called 'problem of evil' - that friction between the hypothesis that the world's creator is arbitrarily good and powerful, and a large fraction of actual observations of the world. Coming up with solutions to the problem of evil is a compelling endeavor if you are really rooting for a particular bottom line re Christianity, or I guess if you enjoy making up faux-valid arguments for wrong conclusions. At any rate, I think about this more than you might guess. And I think I've solved it! Or at least, I thought of a new solution which seems better than the others I've heard. (Though I mostly haven't heard them since high school.) The world (much like anything) has different levels of organization. People are made of cells; cells are made of molecules; molecules are made of atoms; atoms are made of subatomic particles, for instance. You can't actually make a person (of the usual kind) without including atoms, and you can't make a whole bunch of atoms in a particular structure without having made a person. These are logical facts, just like you can't draw a triangle without drawing corners, and you can't draw three corners connected by three lines without drawing a triangle. In particular, even God can't. (This is already established I think - for instance, I think it is agreed that God cannot make a rock so big that God cannot lift it, and that this is not a threat to God's omnipotence.) So God can't make the atoms be arranged one way and the humans be arranged another contradictory way. If God has opinions about what is good at different levels of organization, and they don't coincide, then he has to make trade-offs. If he cares about some level aside from the human level, then at the human level, things are going to have to be a bit suboptimal sometimes. Or perhaps entirely unrelated to what would be optimal, all the time. We usually assume God only cares about the human level. But if we take for granted that he made the world maximally good, then we might infer that he also cares about at least one other level. And I think if we look at the world with this in mind, it's pretty clear where that level is. If there's one thing God really makes sure happens, it's 'the laws of physics'. Though presumably laws are just what you see when God cares. To be 'fundamental' is to matter so much that the universe runs on the clockwork of your needs being met. There isn't a law of nothing bad ever happening to anyone's child; there's a law of energy being conserved in particle interactions. God cares about particle interactions. What's more, God cares so much about what happens to sub-atomic particles that he actually never, to our knowledge, compromises on that front. God will let anything go down at the human level rather than let one neutron go astray. What should we infer from this? That the majority of moral value is found at the level of fundamental physics (following Brian Tomasik and then going further). Happily we don't need to worry about this, because God has it under control. 
We might however wonder what we can infer from this about the moral value of other levels that are less important yet logically intertwined with and thus beyond the reach of God, but might still be more valuable than the one we usually focus on. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I am no longer thinking about/working on AI safety, published by Jack Koch on May 2, 2024 on The AI Alignment Forum. Here's a description of a future which I understand Rationalists and Effective Altruists in general would endorse as an (if not the) ideal outcome of the labors of humanity: no suffering, minimal pain/displeasure, maximal 'happiness' (preferably for an astronomical number of intelligent, sentient minds/beings). (Because we obviously want the best future experiences possible, for ourselves and future beings.) Here's a thought experiment. If you (anyone - everyone, really) could definitely stop suffering now (if not this second then reasonably soon, say within ~5-10 years) by some means, is there any valid reason for not doing so and continuing to suffer? Is there any reason for continuing to do anything else other than stop suffering (besides providing for food and shelter to that end)? Now, what if you were to learn there really is a way to accomplish this, with method(s) developed over the course of thousands of human years and lifetimes, the fruits of which have been verified in the experiences of thousands of humans, each of whom attained a total and forevermore cessation of their own suffering? Knowing this, what possible reason could you give to justify continuing to suffer, for yourself, for your communities, for humanity? Why/how this preempts the priority of AI work on the present EA agenda I can only imagine one kind of possible world in which it makes more sense to work on AI safety now and then stop suffering thereafter. The sooner TAI is likely to arrive and the more likely it is that its arrival will be catastrophic without further intervention and (crucially) the more likely it is that the safety problem actually will be solved with further effort, the more reasonable it becomes to make AI safe first and then stop suffering. To see this, consider a world in which TAI will arrive in 10 years, it will certainly result in human extinction unless and only unless we do X, and it is certainly possible (even easy) to accomplish X in the next 10 years. Presuming living without suffering is clearly preferable to not suffering by not living, it is not prima facie irrational to spend the next 10 years ensuring humanity's continued survival and then stop suffering. On the other hand, the more likely it is that either 1) we cannot or will not solve the safety problem in time or 2) the safety problem will be solved without further effort/intervention (possibly by never having been much of a problem to begin with), the more it makes sense to prioritize not suffering now, regardless of the outcome. Now, it's not that I think 2) is particularly likely, so it more or less comes down to how tractable you believe the problem is and how likely your (individual or collective) efforts are to move the needle further in the right direction on safe AI. These considerations have led me to believe the following: CLAIM. It is possible, if not likely, that the way to eliminate the most future suffering in expectation is to stop suffering and then help others do the same, directly, now - not by trying to move the needle on beneficial/safe AI. 
In summary, given your preference, ceteris paribus, to not suffer, the only valid reason I can imagine for not immediately working directly towards the end of your own suffering and instead focusing on AI safety is a belief that you will gain more (in terms of not suffering) after the arrival of TAI upon which you intervened than you will lose in the meantime by suffering until its arrival, in expectation. This is even presuming a strict either/or choice for the purpose of illustration; why couldn't you work on not suffering while continuing to work towards safe AI as your "day job"? Personally, the years I spent working on AI...
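To make the tradeoff described in this post concrete, here is a minimal numerical sketch of the expected-value comparison, assuming purely illustrative parameter values (the time until TAI, the counterfactual risk reduction attributable to one's safety work, and the suffering-free years at stake); none of these numbers come from the post itself, and the model is only one way of formalizing the author's argument.

```python
# Hedged sketch of the expected-value comparison in the post, with purely
# illustrative parameters (none of these numbers appear in the original).

years_until_tai = 10        # assumed time until transformative AI arrives
delta_p_catastrophe = 0.01  # assumed counterfactual reduction in catastrophe risk
                            # attributable to your own safety work
suffering_free_years_after_tai = 60  # assumed suffering-free years you would enjoy
                                     # after TAI, if catastrophe is averted

# Cost of deferring: you keep suffering until TAI arrives.
expected_loss = years_until_tai

# Gain from intervening: extra suffering-free years gained in expectation,
# because your work made a non-catastrophic outcome that much more likely.
expected_gain = delta_p_catastrophe * suffering_free_years_after_tai

print("Work on safety first" if expected_gain > expected_loss
      else "Stop suffering now")
# With these illustrative inputs: 0.6 expected years gained vs 10 years lost,
# so the comparison favors "stop suffering now" -- but only under these
# assumed numbers, which is exactly the tractability question the post raises.
```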
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Intentional Stance, LLMs Edition, published by Eleni Angelou on May 1, 2024 on LessWrong. In memoriam of Daniel C. Dennett. tl;dr: I sketch out what it means to apply Dennett's Intentional Stance to LLMs. I argue that the intentional vocabulary is already ubiquitous in experimentation with these systems; therefore, what is missing is the theoretical framework to justify this usage. I aim to make up for that and explain why the intentional stance is the best available explanatory tool for LLM behavior. Choosing Between Stances: Why choose the intentional stance? It seems natural to employ intentional vocabulary and ascribe cognitive states to AI models, starting from the field's terminology, most prominently by calling it "machine learning" (Hagendorff 2023). This is very much unlike how other computer programs are treated. When programmers write software, they typically understand it in terms of what they designed it to execute (design stance) or simply make sense of it considering its physical properties, such as the materials it was made of or the various electrical signals processed in its circuitry (physical stance). As I note, it is not that we cannot use Dennett's other two stances (Dennett 1989) to talk about these systems. It is rather that neither of them constitutes the best explanatory framework for interacting with LLMs. To illustrate this, consider the reverse example. It is possible to apply the intentional stance to a hammer, although this does not generate any new information or optimally explain the behavior of the tool. What seems to be apt for making sense of how hammers operate instead is the design stance. This is just as applicable to other tool-like computer programs. To use a typical program, there is no need to posit intentional states. Unlike with LLMs, users do not engage in human-like conversation with the software. More precisely, the reason why neither the design nor the physical stance is sufficient to explain and predict the behavior of LLMs is that state-of-the-art LLM outputs are in practice indistinguishable from those of human agents (Y. Zhou et al. 2022). It is possible to think about LLMs as trained systems or as consisting of graphics cards and neural network layers, but these hardly make any difference when one attempts to prompt them and make them helpful for conversation and problem-solving. What is more, machine learning systems like LLMs are not programmed to execute a task but are rather trained to find the policy that will execute the task. In other words, developers are not directly coding the information required to solve the problem they are using the AI for: they train the system to find the solution on its own. This requires the model to possess all the necessary concepts. In that sense, dealing with LLMs is more akin to studying a biological organism that is under development, or perhaps raising a child, and less like building a tool the use of which is well-understood prior to the system's interaction with its environment. The LLM can learn from feedback and "change its mind" about the optimal policy to go about its task, which is not the case for a standard piece of software. Moreover, LLMs seem to possess concepts. Consequently, there is a distinction to be drawn between tool-like and agent-like programs. Judging on a behavioral basis, LLMs fall into the second category.
This conclusion renders the intentional stance (Dennett 1989) practically indispensable for the evaluation of LLMs on a behavioral basis. Folk Psychology for LLMs: What kind of folk psychology should we apply to LLMs? Do they have beliefs, desires, and goals? LLMs acquire "beliefs" from their training distribution, since they do not memorize or copy any text from it when outputting their results - at least no more than human writers and speakers do. They must, as a result, ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ACX Covid Origins Post convinced readers, published by ErnestScribbler on May 1, 2024 on LessWrong. ACX recently posted about the Rootclaim Covid origins debate, coming out in favor of zoonosis. Did the post change the minds of those who read it, or not? Did it change their judgment in favor of zoonosis (as was probably the goal of the post), or conversely did it make them think Lab Leak was more likely (as the "Don't debate conspiracy theorists" theory claims)? I analyzed the ACX survey to find out, by comparing responses before and after the post came out. The ACX survey asked readers whether they think the origin of Covid is more likely natural or Lab Leak. The ACX survey went out March 26th and was open until about April 10th. The Covid origins post came out March 28th, and the highlights on April 9th. So we can compare people who responded before the origins post came out to those who responded after[1]. We should be careful, though, since those who filled out the survey earlier could be different from those who filled it out later, and this could create a correlation that isn't causal. I used a Regression Discontinuity Design on the time of the response to see if there was a break in the trend of responses right at the time the Covid post went up. Figuratively, this compares respondents "right before" the post to those "right after", which helps assuage the worries about confounding. I find that the post made readers more likely to think that the origin was indeed zoonosis, and this is highly significant. Here are the results, in charts. Analysis: Here is the number of responses over time, with the timings of the posts highlighted. We'll mostly just need the timing of the Covid origins post, which is around response 4,002. I'm assuming that readers who responded to the survey after the post went up had read the post before responding. This is the post engagement data[1], which shows that within a few days of posting, most views of the post had already taken place. The ACX Survey asked respondents what they thought about Covid origins. I subtracted 3 from the questionnaire response, to analyze a centered scale, for convenience. Here are the sliding-window averages over 1,000 responses. There are some fluctuations, but quite clearly there is a break in the trend at the time of the post, with readers starting to give scores more towards zoonosis. It looks like the post lowered responses by about 0.5 points (this takes time to transition in the chart, because of the sliding window). There's not enough data to eyeball anything about the Comment Highlights post. Another way to look at the same data is to use not a sliding window but a cumulative sum, where the local slope is the average response. I detrended this, so that it has 0 slope before the Covid post, again just for convenience. We very clearly see the break in the trend, and the slope comes out to -0.52 points, similar to before. This is almost half a standard deviation, which is a pretty large effect. Needless to say, it is extremely statistically significant. In fact, this effect made the Covid origins question the most highly correlated with response order of all survey questions. As a placebo test, I also checked whether this effect exists for other responses, even ones correlated with Covid origins before the post, like views on Abortion or Political Spectrum. I found nothing that looks nearly this clear.
The effects, if any, are much smaller and not highly significant. I was curious whether the post also had a polarizing effect, where readers became more likely to hold a stronger view after the post, i.e. Lab Leak proponents becoming more certain of Lab Leak, and zoonosis proponents becoming more certain of zoonosis. I don't find much support for this. The sliding-window standard deviation of responses does not increase after the post. I'm not sur...
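For readers who want to reproduce this kind of before/after comparison, here is a minimal sketch in Python. It assumes a CSV of survey responses in arrival order with hypothetical column names (`covid_origins` on the centered scale, `abortion_views` for the placebo check); the cutoff near response 4,002 and the 1,000-response window come from the description above, but the code itself is not the author's and is only one simple way to estimate the discontinuity.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: survey responses in the order they arrived, with a
# centered `covid_origins` column (column names and file path are assumptions).
CUTOFF = 4002  # approximate response index when the Covid origins post went up
df = pd.read_csv("acx_survey.csv")
df["order"] = range(len(df))
df["after_post"] = (df["order"] >= CUTOFF).astype(int)

# Sliding-window average over 1,000 responses, to eyeball the break in the trend.
df["rolling_mean"] = df["covid_origins"].rolling(window=1000).mean()
print(df["rolling_mean"].iloc[CUTOFF - 1], df["rolling_mean"].iloc[CUTOFF + 1500])

# Regression-discontinuity-style fit: a linear trend in response order plus a
# jump at the cutoff. The coefficient on `after_post` estimates the break
# (analogous to the roughly -0.5-point effect reported above).
rd_fit = smf.ols("covid_origins ~ order + after_post", data=df).fit()
print(rd_fit.params["after_post"], rd_fit.pvalues["after_post"])

# Placebo test: run the same fit on an unrelated question and check that no
# comparable jump appears at the cutoff.
placebo_fit = smf.ols("abortion_views ~ order + after_post", data=df).fit()
print(placebo_fit.params["after_post"], placebo_fit.pvalues["after_post"])
```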