The Nonlinear Library: LessWrong Top Posts


Author: The Nonlinear Fund


Description

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
493 Episodes
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eight Short Studies On Excuses, published by Scott Alexander on LessWrong.

The Clumsy Game-Player
You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking in the bonuses of cooperation, when your partner unexpectedly presses the "defect" button. "Uh, sorry," says your partner. "My finger slipped." "I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it." "Well," says your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation." "True," you respond, "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse." "How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn." You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game. After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."

The Lazy Student
You are a perfectly utilitarian school teacher, who attaches exactly the same weight to others' welfare as to your own. You have to have the reports of all fifty students in your class ready by the time midterm grades go out on January 1st. You don't want to have to work during Christmas vacation, so you set a deadline that all reports must be in by December 15th or you won't grade them and the students will fail the class. Oh, and your class is Economics 101, and as part of a class project all your students have to behave as selfish utility-maximizing agents for the year. It costs your students 0 utility to turn in the report on time, but they gain +1 utility by turning it in late (they enjoy procrastinating). It costs you 0 utility to grade a report turned in before December 15th, but -30 utility to grade one after December 15th. And students get 0 utility from having their reports graded on time, but get -100 utility from having a report marked incomplete and failing the class. If you say "There's no penalty for turning in your report after deadline," then the students will procrastinate and turn in their reports late, for a total of +50 utility (+1 per student times fifty students). You will have to grade all fifty reports during Christmas break, for a total of -1500 utility (-30 per report times fifty reports). Total utility is -1450. So instead you say "If you don't turn in your report on time, I won't grade it." 
All students calculate the cost of being late, which is +1 utility from procrastinating and -100 from failing the class, and turn in their reports on time. You get all reports graded before Christmas, no students fail the class, and total utility loss is zero. Yay! Or else - one student comes to you the day after deadline and says "Sorry, I was really tired yesterday, so I really didn't want to come all the way here to hand in my report. I expect you'll grade my report anyway, because I know you to be a perfect utilitarian, an...
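The deadline arithmetic above is simple enough to check mechanically. Here is a minimal sketch (mine, not Scott's) that reproduces the teacher's utility totals for the two policies, using only the numbers given in the story:

```python
# Illustrative sketch of the Lazy Student utility arithmetic.
# All numbers come from the episode description above.

N_STUDENTS = 50
U_PROCRASTINATE = 1   # each student gains +1 by turning the report in late
U_FAIL = -100         # a student loses 100 if the report is marked incomplete
U_GRADE_LATE = -30    # the teacher loses 30 per report graded over Christmas

def total_utility(penalty_enforced: bool) -> int:
    """Total utility summed over the teacher and all fifty students."""
    if not penalty_enforced:
        # Every student procrastinates; the teacher grades everything late.
        return N_STUDENTS * U_PROCRASTINATE + N_STUDENTS * U_GRADE_LATE
    # Being late would cost a student +1 - 100 < 0, so everyone is on time,
    # nobody fails, and the teacher grades nothing over the break.
    return 0

print(total_utility(penalty_enforced=False))  # -1450, as in the text
print(total_utility(penalty_enforced=True))   # 0
```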
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making Vaccine, published by johnswentworth on LessWrong.

Back in December, I asked how hard it would be to make a vaccine for oneself. Several people pointed to radvac. It was a best-case scenario: an open-source vaccine design, made for self-experimenters, dead simple to make with readily-available materials, well-explained reasoning about the design, and with the name of one of the world’s more competent biologists (who I already knew of beforehand) stamped on the whitepaper. My girlfriend and I made a batch a week ago and took our first booster yesterday. This post talks a bit about the process, a bit about our plan, and a bit about motivations. Bear in mind that we may have made mistakes - if something seems off, leave a comment.

The Process
All of the materials and equipment to make the vaccine cost us about $1000. We did not need any special licenses or anything like that. I do have a little wetlab experience from my undergrad days, but the skills required were pretty minimal.

[Photo caption: One vial of custom peptide - that little pile of white powder at the bottom.]

The large majority of the cost (about $850) was the peptides. These are the main active ingredients of the vaccine: short segments of proteins from the COVID virus. They’re all <25 amino acids, so far too small to have any likely function as proteins (for comparison, COVID’s spike protein has 1273 amino acids). They’re just meant to be recognized by the immune system: the immune system learns to recognize these sequences, and that’s what provides immunity.

[Photo caption: Each of six peptides came in two vials of 4.5 mg each. These are the half we haven't dissolved; we keep them in the freezer as backups.]

The peptides were custom synthesized. There are companies which synthesize any (short) peptide sequence you want - you can find dozens of them online. The cheapest options suffice for the vaccine - the peptides don’t need to be “purified” (this just means removing partial sequences), they don’t need any special modifications, and very small amounts suffice. The minimum order size from the company we used would have been sufficient for around 250 doses. We bought twice that much (9 mg of each peptide), because it only cost ~$50 extra to get 2x the peptides and extras are nice in case of mistakes. The only unusual hiccup was an email about customs restrictions on COVID-related peptides. Apparently the company was not allowed to send us 9 mg in one vial, but could send us two vials of 4.5 mg each for each peptide. This didn’t require any effort on my part, other than saying “yes, two vials is fine, thank you”. Kudos to their customer service for handling it.

[Photo caption: Equipment - stir plate, beakers, microcentrifuge tubes, 10 and 50 mL vials, pipette (0.1-1 mL range), and pipette tips. It's all available on Amazon.]

[Photo caption: Other materials - these are sold as supplements. We also need such rare and costly ingredients as vinegar and deionized water. Also all available on Amazon.]

Besides the peptides, all the other materials and equipment were on Amazon, food grade, in quantities far larger than we are ever likely to use. Peptide synthesis and delivery was the slowest; everything else showed up within ~3 days of ordering (it’s Amazon, after all). 
The actual preparation process involves three main high-level steps:
1. Prepare solutions of each component - basically dissolve everything separately, then stick it in the freezer until it’s needed.
2. Circularize two of the peptides. Concretely, this means adding a few grains of activated charcoal to the tube and gently shaking it for three hours. Then, back in the freezer.
3. When it’s time for a batch, take everything out of the freezer and mix it together. Prepping a batch mostly just involves pipetting things into a beaker on a stir plate, sometimes drop-by-drop. Finally, a dose goes into a microcentrifuge tube....
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Textbooks on Every Subject, published by lukeprog on LessWrong.

For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient! I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks. But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful. What if we could compile a list of the best textbooks on every subject? That would be extremely useful. Let's do it. There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules:
1. Post the title of your favorite textbook on a given subject.
2. You must have read at least two other textbooks on that same subject.
3. You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them.
Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting. I'll start the list with three of my own recommendations...

Subject: History of Western Philosophy
Recommendation: The Great Conversation, 6th edition, by Norman Melchert
Reason: The most popular history of western philosophy is Bertrand Russell's A History of Western Philosophy, which is exciting but also polemical and inaccurate. More accurate but dry and dull is Frederick Copleston's 11-volume A History of Philosophy. Anthony Kenny's recent 4-volume history, collected into one book as A New History of Western Philosophy, is both exciting and accurate, but perhaps too long (1000 pages) and technical for a first read on the history of philosophy. Melchert's textbook, The Great Conversation, is accurate but also the easiest to read, and has the clearest explanations of the important positions and debates, though of course it has its weaknesses (it spends too many pages on ancient Greek mythology but barely mentions Gottlob Frege, the father of analytic philosophy and of the philosophy of language). Melchert's history is also the only one to seriously cover the dominant mode of Anglophone philosophy done today: naturalism (what Melchert calls "physical realism"). Be sure to get the 6th edition, which has major improvements over the 5th edition. 
Subject: Cognitive Science
Recommendation: Cognitive Science, by Jose Luis Bermudez
Reason: Jose Luis Bermudez's Cognitive Science: An Introduction to the Science of Mind does an excellent job setting the historical and conceptual context for cognitive science, and draws fairly from all the fields involved in this heavily interdisciplinary science. Bermudez does a good job of making himself invisible, and the explanations here are some of the clearest available. In contrast, Paul Thagard's Mind: Introduction to Cognitive Science skips the context and jumps right into a systematic comparison (by explanatory merit) of the leading theories of mental representation: logic, rules, concepts, analogies, images, and neural networks. The book is o...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Preface, published by Eliezer Yudkowsky on LessWrong.

You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I’m fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then.

It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.” Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

A third huge mistake I made was to focus too much on rational belief, too little on rational action. The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence. That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he’s rewritten a bit of it).

My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream. Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt. 
Despite my mistakes, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.) To be able to look backwards and say that you’ve “failed” implies that you had goals. So what was it that I was trying to do? Th...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalism before the Sequences, published by Eric Raymond on LessWrong. I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed. My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique. My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he had even sent me a book manuscript to review that covered some of the Sequences topics. My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well. Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism. Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice. Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly. When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation. Eliezer and I were not unique. We know directly of a few others with experiences like ours. 
There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined. One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already." Around the time Nancy and I first met, some years before Eliezer Yudk...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Schelling fences on slippery slopes, published by Scott Alexander on LessWrong. Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien: "Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial." And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want." This post is about some of the replies you might give the alien. Abandoning the Power of Choice This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points. For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot. I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument. The Legend of Murder-Gandhi Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse. But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer. Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again. Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals. Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95% Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody. What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? 
Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight. Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original? Maybe Gandhi's best...
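The pill-by-pill slide above lends itself to a tiny simulation. The sketch below is my illustration, not from the post; it iterates the stated rule that each Gandhi will accept pills until he is 5 percentage points less pacifist than whoever is currently deciding. Because the decider keeps changing, the acceptable floor ratchets all the way down:

```python
# Toy simulation of the Murder-Gandhi slippery slope: each pill pays
# $1 million and cuts reluctance to murder by 1 percentage point, and
# each successive Gandhi is comfortable ending up at most 5 points
# below wherever he currently stands.
pacifism = 100        # percent of original reluctance to murder
earnings = 0
comfort_margin = 5    # each Gandhi tolerates Gandhis 5% more murderous

while pacifism > 0:
    target = pacifism - comfort_margin  # what the current Gandhi will accept
    while pacifism > max(target, 0):
        pacifism -= 1                   # take one 1%-pill
        earnings += 1_000_000
    # The agent deciding about the next batch is the *new*, less pacifist
    # Gandhi, so the target ratchets down and never stabilizes.

print(pacifism, earnings)  # 0 100000000: rampaging through Delhi, $100M richer
```

A Schelling fence corresponds to fixing the floor at 95% once, in advance, regardless of which Gandhi happens to be deciding later.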
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diseased thinking: dissolving questions about disease, published by Scott Alexander on LessWrong.

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity -- George Will, townhall.com

Sandy is a morbidly obese woman looking for advice. Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while? Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass. Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down. When she tells each of her friends about the opinions of the others, things really start to heat up. Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet. Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead. Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma. Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people. Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband. The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

What is Disease? 
In Disguised Queries , Eliezer demonstrates how a word refers to a cluster of objects related upon multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes, and another "blegg", to designate the blue eggs. Both words are useful because they "carve reality at the joints" - they refer to two completely separate classes of things which it's practically useful to keep in separate cat...
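For readers who want the blegg/rube setup above in concrete form, here is a toy sketch (mine, not from either post): category words compress several correlated features, and asking whether a borderline object is "really" a blegg is usually a disguised query about one specific feature, such as which metal is inside.

```python
# Toy sketch of the blegg/rube cluster structure: a category word stands
# for a bundle of correlated features, and the practically useful question
# is usually about one underlying feature, not the word itself.

from dataclasses import dataclass

@dataclass
class Thing:
    color: str       # "red" or "blue"
    texture: str     # "smooth" or "furry"
    opacity: str     # "translucent" or "opaque"
    shape: str       # "cube" or "egg"
    metal: str       # "vanadium" or "palladium"

def blegg_score(t: Thing) -> float:
    """Fraction of blegg-typical surface features this thing has."""
    typical = [t.color == "blue", t.texture == "furry",
               t.opacity == "opaque", t.shape == "egg"]
    return sum(typical) / len(typical)

def route_to_palladium_bin(t: Thing) -> bool:
    # The sorter never needs the word "blegg"; the disguised query is the metal.
    return t.metal == "palladium"

odd_one = Thing("blue", "smooth", "opaque", "egg", "palladium")
print(blegg_score(odd_one))             # 0.75: a noncentral blegg
print(route_to_palladium_bin(odd_one))  # True: the answer that actually matters
```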
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generalizing From One Example, published by Scott Alexander on LessWrong.

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do." -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example: There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like? Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed. The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery[1] to three percent of people completely unable to form mental images[2].

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's. He kind of took this idea and ran with it. He interpreted certain passages in George Berkeley's biography to mean that Berkeley was an eidetic imager, and that this was why the idea of the universe as sense-perception held such interest to him. He also suggested that experience of consciousness and qualia were as variable as imaging, and that philosophers who deny their existence (Ryle? Dennett? Behaviorists?) were simply people whose mind lacked the ability to easily experience qualia. In general, he believed philosophy of mind was littered with examples of philosophers taking their own mental experiences and building theories on them, and other philosophers with different mental experiences critiquing them and wondering why they disagreed.

The formal typical mind fallacy is about serious matters of mental structure. But I've also run into something similar with something more like the psyche than the mind: a tendency to generalize from our personalities and behaviors. For example, I'm about as introverted a person as you're ever likely to meet - anyone more introverted than I am doesn't communicate with anyone. All through elementary and middle school, I suspected that the other children were out to get me. They kept on grabbing me when I was busy with something and trying to drag me off to do some rough activity with them and their friends. 
When I protested, they counter-protested and told me I really needed to stop whatever I was doing and come join them. I figured they were bullies who were trying to annoy me, and found ways to hide from them and scare them off. Eventually I realized that it was a double misunderstanding. They figured I must be like them, and the only thing keeping me from playing their fun games was that I was too shy. I figured they must be like me, and that the only re...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reason as memetic immune disorder, published by PhilGoetz on LessWrong.

A prophet is without dishonor in his hometown

I'm reading the book "The Year of Living Biblically," by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays; like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God. You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do. I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says... How do we explain the blindness of people to a religion they grew up with?

Cultural immunity
Europe has lived with Christianity for nearly 2000 years. European culture has co-evolved with Christianity. Culturally, memetically, it's developed a tolerance for Christianity. These new Christian converts, in Uganda, Papua New Guinea, and other remote parts of the world, were being exposed to Christian memes for the first time, and had no immunity to them. The history of religions sometimes resembles the history of viruses. Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them. They both grew more sedate over time. (Christianity was pacifist at the start, as it arose in a conquered people. When the Romans adopted it, it didn't make them any more militaristic than they already were.) The mechanism isn't the same as for diseases, which can't be too virulent or they kill their hosts. Religions don't generally kill their hosts. I suspect that, over time, individual selection favors those who are less zealous. The point is that a culture develops antibodies for the particular religions it co-exists with - attitudes and practices that make them less virulent. I have a theory that "radical Islam" is not native Islam, but Westernized Islam. 
Over half of 75 Muslim terrorists studied by Bergen & Pandey 2005 in the New York Times had gone to a Western college. (Only 9% had attended madrassas.) A very small percentage of all Muslims have received a Western college education. When someone lives all their life in a Muslim country, they're not likely to be hit with the urge to travel abroad and blow something up. But when someone from an Islamic nation goes to Europe for college, and co...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain is not the unit of Effort, published by alkjash on LessWrong. This is a linkpost.

(Content warning: self-harm, parts of this post may be actively counterproductive for readers with certain mental illnesses or idiosyncrasies.)

What doesn't kill you makes you stronger. ~ Kelly Clarkson.
No pain, no gain. ~ Exercise motto.
The more bitterness you swallow, the higher you'll go. ~ Chinese proverb.

I noticed recently that, at least in my social bubble, pain is the unit of effort. In other words, how hard you are trying is explicitly measured by how much suffering you put yourself through. In this post, I will share some anecdotes of how damaging and pervasive this belief is, and propose some counterbalancing ideas that might help rectify this problem.

I. Anecdotes

1. As a child, I spent most of my evenings studying mathematics under some amount of supervision from my mother. While studying, if I expressed discomfort or fatigue, my mother would bring me a snack or drink and tell me to stretch or take a break. I think she took it as a sign that I was trying my best. If on the other hand I was smiling or joyful for extended periods of time, she took that as a sign that I had effort to spare and increased the hours I was supposed to study each day. To this day there's a gremlin on my shoulder that whispers, "If you're happy, you're not trying your best."

2. A close friend who played sports in school reports that training can be harrowing. He told me that players who fell behind the pack during daily jogs would be singled out and publicly humiliated. One time the coach screamed at my friend for falling behind the asthmatic boy who was alternating between running and using his inhaler. Another time, my friend internalized "no pain, no gain" to the point of losing his toenails.

3. In high school and college, I was surrounded by overachievers constantly making (what seemed to me) incomprehensibly bad life choices. My classmates would sign up for eight classes per semester when the recommended number is five, jigsaw extracurricular activities into their calendar like a dynamic programming knapsack-solver, and then proceed to have loud public complaining contests about which libraries are most comfortable to study at past 2am and how many pages they have left to write for the essay due in three hours. Only later did I learn to ask: what incentives were they responding to?

4. A while ago I became a connoisseur of Chinese webnovels. Among those written for a male audience, there is a surprisingly diverse set of character traits represented among the main characters. Doubtless many are womanizing murderhobos with no redeeming qualities, but others are classical heroes with big hearts, or sarcastic antiheroes who actually grow up a little, or ambitious empire-builders with grand plans to pave the universe with Confucian order, or down-on-their-luck starving artists who just want to bring happiness to the world through song. If there is a single common virtue shared by all these protagonists, it is their superhuman pain tolerance. 
Protagonists routinely and often voluntarily dunk themselves in vats of lava, have all their bones broken, shattered, and reforged, get trapped inside alternate dimensions of freezing cold for millennia (which conveniently only takes a day in the outside world), and overdose on level-up pills right up to the brink of death, all in the name of becoming stronger. Oftentimes the defining difference between the protagonist and the antagonist is that the antagonist did not have enough pain tolerance and allowed the (unbearable physical) suffering in his life to drive him mad. 5. I have a close friend who often asks for my perspective on personal problems. A pattern arose in a couple of our conversations: alkjash: I feel like you're not ac...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bets, Bonds, and Kindergarteners, published by jefftk on LessWrong.

Bets and bonds are tools for handling different epistemic states and levels of trust. Which makes them a great fit for negotiating with small children!

A few weeks ago Anna (4y) wanted to play with some packing material. It looked very messy to me, I didn't expect she would clean it up, and I didn't want to fight with her about cleaning it up. I considered saying no, but after thinking about how things like this are handled in the real world I had an idea. If you want to do a hazardous activity, and we think you might go bankrupt and not clean up, we make you post a bond. This money is held in escrow to fund the cleanup if you disappear. I explained how this worked, and she went and got a dollar. When she was done playing, she cleaned it up without complaint and got her dollar back. If she hadn't cleaned it up, I would have, and kept the dollar.

Some situations are more complicated, and call for bets. I wanted to go to a park, but Lily (6y) didn't want to go to that park because the last time we had been there there'd been lots of bees. I remembered that had been a summer with unusually many bees, and it no longer being that summer or, in fact, summer at all, I was not worried. Since I was so confident, I offered my $1 to her $0.10 that we would not run into bees at the park. This seemed fair to her, and when there were no bees she was happy to pay up.

Over time, they've learned that my being willing to bet, especially at large odds, is pretty informative, and often all I need to do is offer. Lily was having a rough morning, crying by herself about a project not working out. I suggested some things that might be fun to do together, and she rejected them angrily. I told her that often when people are feeling that way, going outside can help a lot, and when she didn't seem to believe me I offered to bet. Once she heard the 10:1 odds I was offering her I think she just started expecting that I was right, and she decided we should go ride bikes. (She didn't actually cheer up when we got outside: she cheered up as soon as she made this decision.)

I do think there is some risk with this approach that the child will have a bad time just to get the money, or say they are having a bad time and they are actually not, but this isn't something we've run into. Another risk, if we were to wager large amounts, would be that the child would end up less happy than if I hadn't interacted with them at all. I handle this by making sure not to offer a bet I think they would regret losing, and while this is not a courtesy I expect people to make later in life, I think it's appropriate at their ages.

Comment via: facebook

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
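As an editorial aside: the stakes in these bets encode probabilities. A minimal sketch of that arithmetic (mine, not from the post): offering $1 against Lily's $0.10 is only better than not betting if the offerer's win probability exceeds 1.00/1.10, roughly 91%, which is exactly what makes the offer informative to the child.

```python
def break_even_probability(my_stake: float, their_stake: float) -> float:
    """Minimum probability of winning at which offering my_stake against
    their_stake has non-negative expected value for the offerer."""
    # EV = p * their_stake - (1 - p) * my_stake >= 0
    # solves to p >= my_stake / (my_stake + their_stake)
    return my_stake / (my_stake + their_stake)

# The bee bet: $1 that there will be no bees, against $0.10.
p = break_even_probability(1.00, 0.10)
print(f"{p:.3f}")  # 0.909 -> the offerer must think P(no bees) > ~91%
```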
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on the Singularity Institute (SI), published by HoldenKarnofsky on LessWrong.

This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them. September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements.

The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.) I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.

Summary of my views
- The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. (More)
- SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. (More)
- A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). (More)
- My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.)
- I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. (More)

There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Humans are not automatically strategic, published by AnnaSalamon on LessWrong.

Reply to: A "Failure to Evaluate Return-on-Time" Fallacy

Lionhearted writes: [A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped. A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995.... I’m curious as to why.

Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.) Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.

To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
(a) Ask ourselves what we’re trying to achieve;
(b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
(d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
(f) Focus most of the energy that isn’t going into systematic exploration, on the methods that work best;
(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
.... or carry out any number of other useful techniques. 
Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals. Why? Most basically, because humans are only just on the cusp o...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-Aging: State of the Art, published by JackH on LessWrong.

Aging is a problem that ought to be solved, and most Less Wrongers recognize this. However, few members of the community seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work. Today, there are over 130 longevity biotechnology companies and over 50 anti-aging drugs in clinical trials in humans. The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans. Whether we live to see anti-aging therapies that keep us alive indefinitely (i.e. whether we make it to longevity escape velocity) depends on how much traction and funding the field gets in coming decades. In this post, I summarise the state of the art of the anti-aging field (also known as longevity biotechnology, rejuvenation biotechnology, translational biogerontology or geroscience). If you feel you already possess the necessary background on aging, feel free to skip to Part V.

Part I: Why is Aging a problem?
Aging is the biggest killer worldwide, and also the largest source of morbidity. Aging kills 100,000 people per day; more than twice the sum of all other causes of death. This equates to 37 million people - a population the size of Canada - dying per year of aging. In developed countries, 9 out of 10 deaths are due to aging. Aging also accounts for more than 30% of all disability-adjusted life years lost (DALYs); more than any other single cause. Deaths due to aging are not usually quick and painless, but preceded by 10-15 years of chronic illnesses such as cancer, type 2 diabetes and Alzheimer’s disease. Quality of life typically deteriorates in older age, and the highest rates of depression worldwide are among the elderly. To give a relevant example of the effects of aging, consider that aging is primarily responsible for almost all COVID-19 deaths. This is observable in the strong association of COVID-19 mortality with age.

[Figure in the original post: COVID-19 mortality by age; the middle panel shows the death rate rising exponentially with age.]

The death rate from COVID-19 increases exponentially with age (above, middle). This is not a coincidence - it is because biological aging weakens the immune system and results in a much higher chance of death from COVID-19. On a side note, waning immunity with age also increases cancer risk, as another example of how aging is associated with chronic illness. The mortality rate doubling time for COVID-19 is close to the all-cause mortality rate doubling time, suggesting that people who die of COVID-19 are really dying of aging. Without aging, COVID-19 would not be a global pandemic, since the death rate in individuals below 30 years old is extremely low.

Part II: What does a world without aging look like?
For those who have broken free of the pro-aging trance and recognise aging as a problem, there is the further challenge of imagining a world without aging. 
The prominent ‘black mirror’ portrayals of immortality as a curse or hubristic may distort our model of what a world with anti-aging actually looks like. The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, t...
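The "mortality rate doubling time" claim above can be made concrete with a Gompertz-style model, under which the death rate grows exponentially with age. The sketch below is my illustration, not the post's analysis; the two rates in the example are placeholder numbers chosen only to show the computation (adult all-cause mortality is commonly cited as doubling roughly every 8 years):

```python
import math

# Under a Gompertz-style model, rate(age) = A * exp(G * age),
# so the rate doubles every ln(2) / G years of age.

def doubling_time(age1: float, rate1: float, age2: float, rate2: float) -> float:
    """Years of age over which the death rate doubles, assuming
    exponential growth between the two observed (age, rate) points."""
    G = math.log(rate2 / rate1) / (age2 - age1)
    return math.log(2) / G

# Placeholder example: a rate 8x higher at age 80 than at age 56
# implies a doubling time of 24 / log2(8) = 8 years.
print(doubling_time(56, 0.01, 80, 0.08))  # ~8.0
```

Comparing the doubling time fitted to COVID-19 deaths against the all-cause figure is what grounds the post's "dying of aging" observation.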
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The noncentral fallacy - the worst argument in the world?, published by Scott Alexander on LessWrong.

Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations

David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process. If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member." Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway? It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy.

Before we get to those, let's look at a simpler example. Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!" Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail. But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them. The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.

This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful. As soon as you do that you've fallen into their trap. Your argument is no longer about whether you should build a statue, it's about whether King was a criminal. Since he was, you have now lost the argument. Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used. Now I want to list some of these cases. Many will be political[1], for which I apologize, but it's hard to separate out a bad argument from its specific instantiations. None of these examples are meant to imply that the position they support is wrong (and in fact I myself hold some of them). 
They only show that certain arguments for the position are flawed, such as: "Abortion is murder!" The archetypal murder is Charles Manson breaking into your house and shooting you. This sort of murder is bad for a number of reasons: you prefer not to die, you have various thoughts and hopes and dreams that would be snuffed out, your family and friends would be heartbroken, and the rest of society has to live in fear until Manson gets caught. If you define murder as "killing another human being", then abortion is technically ...
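The structure of the noncentral fallacy can be made concrete with a toy model: a word applies technically whenever the bare definition holds, but the emotional reaction tracks similarity to the category's archetype. Here is a minimal Python sketch; the feature sets and the overlap measure are invented for illustration and are not from the post.

```python
# Toy model of the noncentral fallacy (features invented for illustration).
ARCHETYPAL_CRIMINAL = {"breaks the law", "driven by greed",
                       "preys on the innocent", "weakens society"}

def technically_criminal(features: set) -> bool:
    # The bare definition: a criminal is someone who breaks the law.
    return "breaks the law" in features

def centrality(features: set) -> float:
    # Similarity to the archetype: fraction of archetypal features shared.
    return len(features & ARCHETYPAL_CRIMINAL) / len(ARCHETYPAL_CRIMINAL)

mugger = {"breaks the law", "driven by greed",
          "preys on the innocent", "weakens society"}
king = {"breaks the law", "nonviolent", "protests injustice"}

for name, feats in [("mugger", mugger), ("King", king)]:
    print(f"{name}: criminal={technically_criminal(feats)}, "
          f"centrality={centrality(feats):.2f}")
# Both satisfy the definition, but only the mugger is a central member;
# the features that justify the emotional reaction are absent for King.
```

On this toy measure, the fallacy is precisely the move from criminal=True to the emotional reaction that only a high centrality score would warrant.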
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dying Outside, published by HalFinney on LessWrong. A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease." Man: "That's terrible! How long have I got?" Doctor: "Ten." Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?" The doctor looks at his watch. "Nine." Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years. There are, however, two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside. The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option either to go onto invasive mechanical respiration, which involves a tracheotomy and a breathing machine, or to die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation while they are still in relatively good health, even though this provides the best odds for a successful transition. With mechanical respiration, survival with ALS can be indefinitely extended. And the great majority of people living on respirators say that their quality of life is good and they are happy with their decision. (There may be a selection effect here.) It seems, then, that calling ALS a fatal disease is an oversimplification. ALS takes away your body, but it does not take away your mind, and if you are determined and fortunate, it does not have to take away your life. There are a number of practical and financial obstacles to successfully surviving on a ventilator, foremost among them the great load on caregivers. No doubt this contributes to the high rates of choosing death. But it seems that much of the objection is philosophical. People are not happy about being kept alive by machines. And they assume that their quality of life would be poor, without the ability to move and participate in their usual activities. This is despite the fact that most people on respirators describe their quality of life as acceptable to good. As we have seen in other contexts, people are surprisingly poor predictors of how they will react to changed circumstances. This seems to be such a case, contributing to the high death rates for ALS patients. I hope that when the time comes, I will choose life. 
ALS kills only motor neurons, which carry signals to the muscles. The senses are intact. And most patients retain at least some vestige of control over a few muscles, which with modern technology can offer a surprisingly effective mode of communication. Stephen Hawking, the world's longest-surviving ALS patient at over 40 years since diagnosis, is said to be able to type at ten words per minute by twitching a cheek muscle. I hope to be able to read, browse ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There's no such thing as a tree (phylogenetically), published by eukaryote on LessWrong. This is a linkpost. [Crossposted from Eukaryote Writes Blog.] So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet. “Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either the common ancestor of a maple and a mulberry tree was not a tree, or the common ancestor of a stinging nettle and a strawberry plant was a tree. And this is true for most trees or non-trees that you can think of. I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined. [Figure: Partial phylogenetic tree of various plants. Tan is definitely, 100% trees; yellow is tree-like; green is 100% not a tree. Sourced mostly from Wikipedia.] I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon for suggestions on improving accessibility of the graph. Why do trees keep happening? First, what is a tree? It’s a big, long-lived, self-supporting plant with leaves and wood. Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice versa. The confusing part is a tree evolving into a dandelion. (Or vice versa.) Wood, as you may have guessed by now, is also not a clear phyletic category. But it’s a reasonable category – a lignin-dense structure, usually one that grows from the exterior and that forms a pretty readily identifiable material when separated from the tree. (...Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.) All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these. Botanists don’t seem to think it could only have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant become treelike. Of plants native to the Canary Islands, wood independently evolved at least 38 times! One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth toward the outside, which causes a plant to thicken, is “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes (in roots). This paper addresses the question. 
I don’t understand a lot of the finer genetic details, but my impression of its thesis is that: analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to those for primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants. Dendronization – Evolving into a tree-like morphology. (In the style of “carciniz...
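The monophyly claim above has a crisp computational form: collect every taxon labeled "tree", find their most recent common ancestor, and check whether every leaf under that ancestor is also a tree. Here is a minimal Python sketch; the topology and labels below are a made-up toy, not a real phylogeny.

```python
# Toy monophyly check; child -> parent links and labels are illustrative only.
PARENT = {
    "maple": "rosids", "mulberry": "rosales", "strawberry": "rosales",
    "nettle": "rosales", "rosales": "rosids", "rosids": "root",
}
IS_TREE = {"maple": True, "mulberry": True, "strawberry": False, "nettle": False}

def ancestors(taxon):
    """Path from a taxon up to the root, inclusive."""
    path = [taxon]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def mrca(taxa):
    """Most recent common ancestor: first shared node on the upward path."""
    paths = [ancestors(t) for t in taxa]
    common = set(paths[0]).intersection(*paths[1:])
    return next(node for node in paths[0] if node in common)

def leaves_under(node):
    """All leaf taxa that descend from the given node."""
    return [leaf for leaf in IS_TREE if node in ancestors(leaf)]

trees = [t for t, is_tree in IS_TREE.items() if is_tree]
clade = leaves_under(mrca(trees))
print("Smallest clade containing all trees:", clade)
print("Trees monophyletic on this toy topology?", all(IS_TREE[t] for t in clade))
```

On this invented topology the smallest clade containing the maple and the mulberry also contains the nettle and the strawberry, which is exactly the either/or the post describes.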
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intellectual Hipsters and Meta-Contrarianism, published by Scott Alexander on LessWrong. Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool? As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things. The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects became a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche. This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hover on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice. In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.1 If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well. Pretending To Be Wise Let's go back to Less Wrong's long-running discussion on death. Ask any five-year-old child, and ey can tell you that death is bad. Death is bad because it kills you. 
There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is. But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than th...
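The countersignaling pattern can be illustrated with a toy payoff model. The essay gives no formal model; the numbers, the three tiers, and the "context informativeness" parameter below are all invented for illustration.

```python
# Toy countersignaling model (all numbers invented for illustration).
# Flashy signaling costs c and proves "not LOW", but pools you with MID types.
# Without a signal, observers guess HIGH vs LOW from context (accent, manners,
# reputation), which is informative with probability p.
STATUS = {"LOW": 0.0, "MID": 1.0, "HIGH": 2.0}

def payoff_high(signal: bool, p: float, c: float = 0.3) -> float:
    if signal:
        # Signaling proves "not LOW" but pools you with MID types.
        return (STATUS["MID"] + STATUS["HIGH"]) / 2 - c
    # No signal: with probability p context reveals HIGH, else mistaken for LOW.
    return p * STATUS["HIGH"] + (1 - p) * STATUS["LOW"]

for p in (0.3, 0.6, 0.9):
    best = "skip the signal" if payoff_high(False, p) > payoff_high(True, p) else "signal"
    print(f"context informative with p={p}: HIGH types should {best}")
```

When context already reveals status (high p, like old money among their own circles), not signaling dominates; when it does not (new money), signaling pays. That is the pattern the essay describes in hipsters, wealth, and courtship.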
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 100 Tips for a Better Life, published by Ideopunk on LessWrong. (Cross-posted from my blog) The other day I made an advice thread based on Jacobian’s from last year! If you know a source for one of these, shout and I’ll edit it in. Possessions 1. If you want to find out about people’s opinions on a product, google the product name plus "reddit". You’ll get real people arguing, as compared to the SEO’d Google results. 2. Some banks charge you $20 a month for an account, others charge you $0. If you’re with one of the former, have a good explanation for what those $20 are buying. 3. Things you use for a significant fraction of your life (bed: 1/3rd, office-chair: 1/4th) are worth investing in. 4. “Where is the good knife?” If you’re looking for your good X, you have bad Xs. Throw those out. 5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking. 6. Establish clear rules about when to throw out old junk. Once clear rules are established, junk will probably cease to be a problem. This is because any rule would be superior to our implicit rules (“keep this broken stereo for five years in case I learn how to fix it”). 7. Don’t buy CDs for people. They have Spotify. Buy them merch from a band they like instead. It’s more personal and the band gets more money. 8. When buying things, time and money trade off against each other. If you’re low on money, take more time to find deals. If you’re low on time, stop looking for great deals and just buy things quickly online. Cooking 9. Steeping minutes: Green at 3, black at 4, herbal at 5. Good tea is that simple! 10. Food actually can be cheap, healthy, tasty, and relatively quick to prepare. All it requires is a few hours one day to prepare many meals for the week. 11. Cooking pollutes the air. Opening windows for a few minutes after cooking can dramatically improve air quality. 12. Food taste can be made much more exciting through simple seasoning. It’s also an opportunity for expression. Buy a few herbs and spices and experiment away. 13. When googling a recipe, precede it with ‘best’. You’ll find better recipes. Productivity 14. Advanced search features are a fast way to create tighter search statements. For example: img html will return inferior results compared to: img html -w3 (the leading minus excludes a term from the results). 15. You can automate mundane computer tasks with Autohotkey (or AppleScript). If you keep doing a sequence “so simple a computer can do it”, make the computer do it. 16. Learn keyboard shortcuts. They’re easy to learn and you’ll get tasks done faster and easier. 17. Done is better than perfect. 18. Keep your desk and workspace bare. Treat every object as an imposition upon your attention, because it is. A workspace is not a place for storing things. It is a place for accomplishing things. 19. Reward yourself after completing challenges, even badly. Body 20. The 20-20-20 rule: Every 20 minutes of screenwork, look at a spot 20 feet away for 20 seconds. This will reduce eye strain and is easy to remember (or program reminders for). 21. Exercise (weightlifting) not only creates muscle mass, it also improves skeletal structure. Lift! 22. Exercise is the most important lifestyle intervention you can do. Even the bare minimum (15 minutes a week) has a huge impact. Start small. 23. (~This is not medical advice~). 
Don’t waste money on multivitamins; they don’t work. Vitamin D supplementation does seem to work, which is important because deficiency is common. 24. Phones have gotten heavier in the last decade, and they’re actually pretty hard on your wrists! Use a computer when it’s an alternative, or at least try to prop up your phone. Success 25. History remembers those who got to market first. Getting your creation out into the world is more important than getting it perfect. 26. Are you...
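Tip 20 suggests programming reminders. A minimal sketch of such a reminder, using only the Python standard library, might look like the following; the terminal bell "\a" may be silent depending on terminal settings, and the loop runs until interrupted.

```python
# Minimal 20-20-20 reminder sketch (stop with Ctrl-C).
import time

INTERVAL = 20 * 60  # 20 minutes of screen work
BREAK = 20          # 20 seconds of looking away

while True:
    time.sleep(INTERVAL)
    print("\a20-20-20: look at something about 20 feet away for 20 seconds.")
    time.sleep(BREAK)
    print("Back to work.")
```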
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taboo "Outside View", published by Daniel Kokotajlo on LessWrong. No one has ever seen an AGI takeoff, so any attempt to understand it must use these outside view considerations. - [Redacted for privacy] What? That’s exactly backwards. If we had lots of experience with past AGI takeoffs, using the outside view to predict the next one would be a lot more effective. - My reaction Two years ago I wrote a deep-dive summary of Superforecasting and the associated scientific literature. I learned about the “Outside view” / “Inside view” distinction, and the evidence supporting it. At the time I was excited about the concept and wrote: “...I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.” Now that I have more experience, I think the concept is doing more harm than good in our community. The term is easily abused and its meaning has expanded too much. I recommend we permanently taboo “Outside view,” i.e. stop using the word and use more precise, less confused concepts instead. This post explains why. What does “Outside view” mean now? Over the past two years I’ve noticed people (including myself!) do lots of different things in the name of the Outside View. I’ve compiled the following lists based on fuzzy memory of hundreds of conversations with dozens of people: Big List O’ Things People Describe As Outside View: Reference class forecasting, the practice of computing a probability of an event by looking at the frequency with which similar events occurred in similar situations. Also called comparison class forecasting. [EDIT: Eliezer rightly points out that sometimes reasoning by analogy is undeservedly called reference class forecasting; reference classes are supposed to be held to a much higher standard, in which your sample size is larger and the analogy is especially tight.] Trend extrapolation, e.g. “AGI implies insane GWP growth; let’s forecast AGI timelines by extrapolating GWP trends.” Foxy aggregation, the practice of using multiple methods to compute an answer and then making your final forecast be some intuition-weighted average of those methods. Bias correction, in others or in oneself, e.g. “There’s a selection effect in our community for people who think AI is a big deal, and one reason to think AI is a big deal is if you have short timelines, so I’m going to bump my timelines estimate longer to correct for this.” Deference to wisdom of the many, e.g. expert surveys, or appeals to the efficient market hypothesis, or to conventional wisdom in some fairly large group of people such as the EA community or Western academia. Anti-weirdness heuristic, e.g. “How sure are we about all this AI stuff? It’s pretty wild, it sounds like science fiction or doomsday cult material.” Priors, e.g. “This sort of thing seems like a really rare, surprising sort of event; I guess I’m saying the prior is low / the outside view says it’s unlikely.” Note that I’ve heard this said even in cases where the prior is not generated by a reference class, but rather from raw intuition. Ajeya’s timelines model (transcript of interview, link to model), and probably many more I don’t remember Big List O’ Things People Describe As Inside View: Having a gears-level model, e.g. 
“Language data contains enough structure to learn human-level general intelligence with the right architecture and training setup; GPT-3 + recent theory papers indicate that this should be possible with X more data and compute.” Having any model at all, e.g. “I model AI progress as a function of compute and clock time, with the probability distribution over how much compute is needed shifting 2 OOMs lower each decade.” Deference to wisdom of the few, e.g. “the people I trust most on this matter seem to think.” Intuition-based-on-deta...
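Two of the listed techniques reduce to short calculations: reference class forecasting is just a frequency over similar past cases, and foxy aggregation is an intuition-weighted average of several methods' answers. Here is a Python sketch with hypothetical numbers; the method names and weights stand in for the forecaster's intuitions, and nothing below comes from the post.

```python
# Sketch of reference class forecasting and foxy aggregation (numbers invented).
def reference_class_forecast(outcomes: list[bool]) -> float:
    """Probability estimate = frequency of the event among similar past cases."""
    return sum(outcomes) / len(outcomes)

def foxy_aggregate(estimates: dict[str, float], weights: dict[str, float]) -> float:
    """Intuition-weighted average of several methods' estimates."""
    total = sum(weights.values())
    return sum(estimates[m] * weights[m] / total for m in estimates)

# Hypothetical reference class: 3 of 12 comparable past projects hit their date.
base_rate = reference_class_forecast([True] * 3 + [False] * 9)

estimates = {"reference class": base_rate,
             "trend extrapolation": 0.40,
             "gears-level model": 0.55}
weights = {"reference class": 0.5,
           "trend extrapolation": 0.3,
           "gears-level model": 0.2}
print(f"base rate: {base_rate:.2f}")
print(f"aggregated forecast: {foxy_aggregate(estimates, weights):.2f}")
```

The point of the post survives the sketch: each of these is a distinct, precise operation, and lumping them all under "outside view" obscures which one is actually being performed.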