Astral Codex Ten Podcast

Author: Jeremiah

Subscribed: 391 · Played: 50,262

Description

The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
1093 Episodes
I. Eliezer Yudkowsky’s Machine Intelligence Research Institute is the original AI safety org. But the original isn’t always the best - how is Mesopotamia doing these days? As money, brainpower, and prestige pour into the field, MIRI remains what it always was - a group of loosely-organized weird people, one of whom cannot be convinced to stop wearing a sparkly top hat in public. So when I was doing AI grantmaking last year, I asked them - why should I fund you instead of the guys with the army of bright-eyed Harvard grads, or the guys who just got Geoffrey Hinton as their celebrity spokesperson? What do you have that they don’t? MIRI answered: moral clarity. Most people in AI safety (including me) are uncertain and confused and looking for least-bad incremental solutions. We think AI will probably be an exciting and transformative technology, but there’s some chance, 5 or 15 or 30 percent, that it might turn against humanity in a catastrophic way. Or, if it doesn’t, that there will be something less catastrophic but still bad - maybe humanity gradually fading into the background, the same way kings and nobles faded into the background during the modern era. This is scary, but AI is coming whether we like it or not, and probably there are also potential risks from delaying too hard. We’re not sure exactly what to do, but for now we want to build a firm foundation for reacting to any future threat. That means keeping AI companies honest and transparent, helping responsible companies like Anthropic stay in the race, and investing in understanding AI goal structures and the ways that AIs interpret our commands. Then at some point in the future, we’ll be close enough to the actually-scary AI that we can understand the threat model more clearly, get more popular buy-in, and decide what to do next. MIRI thinks this is pathetic - like trying to protect against an asteroid impact by wearing a hard hat. 
They’re kind of cagey about their own probability of AI wiping out humanity, but it seems to be somewhere around 95 - 99%. They think plausibly-achievable gains in company responsibility, regulation quality, and AI scholarship are orders of magnitude too weak to seriously address the problem, and they don’t expect enough of a “warning shot” that they feel comfortable kicking the can down the road until everything becomes clear and action is easy. They suggest banning all AI capabilities research immediately, to be restarted only in some distant future when the situation looks more promising. Both sides honestly believe their position and don’t want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, government, and other actors that prefer normal clean-shaven interlocutors who don’t emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but might be ready to rise up against AI if someone presented the case in a clear and unambivalent way. Now Yudkowsky and his co-author, MIRI president Nate Soares, have reached new heights of unambivalence with their new book, If Anyone Builds It, Everyone Dies (release date September 16, currently available for preorder). https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] If you’ve been following this blog for long, you probably know at least a bit about pharmaceutical research. You might know a bit about the sort of subtle measures pharmaceutical companies take to influence doctors’ prescribing habits, or how it takes billions of dollars on average to bring a new medication to market, or something about the perverse incentives which determine the FDA’s standards for accepting or rejecting a new drug. You might have some idea what kinds of hoops a company has to jump through to conduct actual research which meets legal guidelines for patient safety and autonomy. You may be less familiar, though, with how the sausage is actually made. How do pharmaceutical companies actually go through the process of testing a drug on human participants? I’m going to be focusing here on a research subject’s view of what are known as Phase I clinical trials, the stage in which prospective drugs are tested for safety and tolerability. This is where researchers aim to answer questions like “Does this drug have any dangerous side effects?” “Through what pathways is it removed from a patient’s body?” and “Can we actually give people enough of this drug that it’s useful for anything?” This comes before the stage where researchers test how good a drug is at actually treating any sort of disease, when patients who’re suffering from the target ailments are given the option to receive it as an experimental treatment. In Phase I clinical trials, the participants are healthy volunteers who’re participating in research for money. 
There are almost no cases in which volunteer participation is driven by motivations other than money, because the relationship between research participants and clinicians overwhelmingly tends to be characterized by mutual guarded distrust. This distrust is baked into the process, both on a cultural level among the participants, and by the clinics’ own incentives. All of what follows is drawn from my own experiences, and experiences that other participants in clinical pharmaceutical research have shared with me, because for reasons which should become clear over the course of this review, research which systematically explores the behaviors and motives of clinical research participants is generally not feasible to conduct. https://www.astralcodexten.com/p/your-review-participation-in-phase
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-september-2025
"You made him lower than the angels for a short time..." God: …and the math results we’re seeing are nothing short of incredible. This Terry Tao guy - Iblis: Let me stop you right there. I agree humans can, in controlled situations, provide correct answers to math problems. I deny that they truly understand math. I had a conversation with one of the humans recently, which I’ll bring up here for the viewers … give me one moment … https://www.astralcodexten.com/p/what-is-man-that-thou-art-mindful
Open Letter To The NIH

2025-09-02 · 05:17

You can sign the letter here. The Trump administration has been retaliating against its critics, and people and groups with business before the administration have started laundering criticism through other sources with less need for goodwill. So I have been asked to share an open letter, which needs signatures from scientists, doctors, and healthcare professionals. The authors tell me (THIS IS NOT THE CONTENTS OF THE LETTER, IT’S THEIR EXPLANATION, TO ME, OF WHAT THE LETTER IS FOR): The NIH has spent at least $5 billion less than Congress has appropriated to it, which is bad because medical research is good and we want more of it. In May, NIH Director Jay Bhattacharya told a room full of people that he would spend all the money by the end of the fiscal year. That is good news, because any money not spent by that point will disappear. The bad news is the fiscal year ends on September 30th and according to the American Association of Medical Colleges, “the true shortfall far exceeds $5 billion.” Our open letter requests that Dr. Bhattacharya do what he said he would and spend all the money by September 30th. We as the originators of the letter do not want to be named publicly because we are concerned about being the focal point for blame and retaliation. We would rather be members of a large crowd of signatories than be singled out as individuals to make an example of. Based on our understanding of current administration norms, we do not expect retaliation against private individuals who sign this letter. We are looking for signatures from scientists, doctors, and healthcare professionals. So if that is you, please sign here. If you want to help support the letter more broadly, email nihfundingletter@gmail.com. Our stretch goal is to have a thousand people sign the letter within the next two weeks. 
To hammer home (since many people failed to understand it) that this is not the contents of the letter, I am including the actual contents below: We, the undersigned scientists, doctors, and public health stakeholders, commend your commitment to spend all funds allocated to the NIH, as reported in The Washington Post. At the same time, we are concerned by reports that U.S. institutions received nearly $5 billion less in NIH awards over the past year. With less than one month to the end of the fiscal year, we submit this urgent request to ensure that your commitment is upheld. If you anticipate that all appropriated funds cannot be spent in time, we request a public disclosure of the barriers preventing the achievement of this crucial responsibility. We present this request in the spirit of the broad, bipartisan consensus in favor of spending appropriated NIH funds. In their July letter to the Office of Management and Budget, fourteen Republican senators, led by Senators Collins, Britt, and McConnell, forcefully argued that suspension of NIH funds “could threaten Americans' ability to access better treatments and limit our nation's leadership in biomedical science.” The case for investment in medical research transcends political divides as it serves our collective national interest. The return on investment from research is compelling. Synthesizing the empirical literature, economist Matt Clancy estimates that each public and private R&D dollar yields roughly $5.50 in GDP—and about $11 when broader benefits are counted. Every dollar of NIH funding not deployed represents lost opportunities for breakthrough treatments, missed chances to train the next generation of scientists, and diminished returns on America's innovation ecosystem. Spending these funds is also a competitiveness imperative as China attempts to transform itself from a low-end manufacturer to a high-tech research and innovation juggernaut. 
In 2024, the Chinese government increased its spending on science and technology by 10%, and the nation’s total expenditure on research and development increased by 50% in nominal terms between 2020 and 2024. As China’s number of clinical trials and new drug candidates begins to outpace the U.S., America cannot afford to allow biomedical research funding to go unspent. We respectfully ask that you ensure that NIH will obligate all FY25 funds by September 30, 2025, and, if that is not possible, that you address the scientific community to explain why and what must be done to ensure all appropriated funds are spent in FY26. We stand ready to support your efforts to preserve this vital national investment. https://readscottalexander.com/posts/acx-open-letter-to-the-nih
AI psychosis (NYT, PsychologyToday) is an apparent phenomenon where people go crazy after talking to chatbots too much. There are some high-profile anecdotes, but still many unanswered questions. For example, how common is it really? Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already? Isn’t psychosis supposed to be a biological disease? Wouldn’t that make chatbot-induced psychosis the same kind of category error as chatbot-induced diabetes? I don’t have all the answers, so think of this post as an exploration of possible analogies and precedents rather than a strongly-held thesis. Also, I might have one answer - I think the yearly incidence of AI psychosis is somewhere around 1 in 10,000 (for a loose definition) to 1 in 100,000 (for a strict definition). I’ll talk about how I got those numbers at the end. But first: I. Lenin Was A Mushroom https://www.astralcodexten.com/p/in-search-of-ai-psychosis
Your Review: Ollantay

2025-08-24 · 32:18

Finalist #9 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] Ollantay is a three-act play written in Quechua, an indigenous language of the South American Andes. It was first performed in Peru around 1775. Since the mid-1800s it’s been performed more often, and nowadays it’s pretty easy to find some company in Peru doing it. If nothing else, it’s popular in Peruvian high schools as a way to get students to connect with Quechua history. It’s not a particularly long play; a full performance of Ollantay takes around an hour.1 Also, nobody knows where Ollantay was written, when it was written, or who wrote it. And its first documented performance led directly to upwards of a hundred thousand deaths. Macbeth has killed at most fifty people,2 and yet it routinely tops listicles of “deadliest plays”. I’m here to propose that Ollantay take its place. https://www.astralcodexten.com/p/your-review-ollantay
[original post here] #1: Isn’t it possible that embryos are alive, or have personhood, or are moral patients? Most IVF involves getting many embryos, then throwing out the ones that the couple doesn’t need to implant. If destroying embryos were wrong, then IVF would be unethical - and embryo selection, which might encourage more people to do IVF, or to maximize the number of embryos they get from IVF, would be extra unethical. I think a default position would be that if you believe humans are more valuable than cows, and cows more valuable than bugs - presumably because humans are more conscious/intelligent/complex/thoughtful/have more hopes and dreams/experience more emotions - then in that case embryos, which have less of a brain and nervous system even than bugs, should be less valuable still. One reason to abandon this default position would be if you believe in souls or some other nonphysical basis for personhood. Then maybe the soul would enter the embryo at conception. I think even here, it’s hard to figure out exactly what you’re saying - the soul clearly isn’t doing very much, in the sense of experiencing things, while it’s in the embryo. But it seems like God is probably pretty attached to souls, and maybe you don’t want to mess with them while He’s watching. In any case, all I can say is that this isn’t my metaphysics. But most people in the comments took a different tack, arguing that we should give embryos special status (compared to cows and bugs) because they had the potential to grow into a person. https://www.astralcodexten.com/p/my-responses-to-three-concerns-from
Finalist #8 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] I. The Men Are Not Alright Sometimes I’m convinced there’s a note taped to my back that says, “PLEASE SPILL YOUR SOUL UPON THIS WOMAN.” I am not a therapist, nor in any way certified to deal with emotional distress, yet my presence seems to cause people to regurgitate their traumas. This quirk of mine becomes especially obvious when dating. Many of my dates turn into pseudo-therapy sessions, with men sharing emotional traumas they’ve kept bottled up for years. One moment I’m learning about his cat named Daisy, and then half a latte later, I’m hearing a detailed account of his third suicide attempt, complete with a critique of the food in the psychiatric ward. This repeated pattern in my dating life has taught me three things: 1) I am terrible at small talk. 2) Most men are not accustomed to genuine questions about their well-being, and will often respond with a desperate upwelling of emotion. 3) The men are not alright. This is a review of dating men in the Bay Area. But more than that, it’s an attempt to explain those unofficial therapy sessions to people who never get to hear them. It’s a review of the various forms of neglect and abuse society inflicts upon men, and the inevitable consequences to their happiness and romantic partnerships. https://www.astralcodexten.com/p/your-review-dating-men-in-the-bay
A guest post by David Schneider-Joseph. The “amyloid hypothesis” says that Alzheimer’s is caused by accumulation of the peptide amyloid-β. It’s the leading model in academia, but a favorite target for science journalists, contrarian bloggers, and neuroscience public intellectuals, who point out problems like: 1) Some of the research establishing amyloid's role turned out to be fraudulent. 2) The level of amyloid in the brain doesn’t correlate very well with the level of cognitive impairment across Alzheimer’s patients. 3) Several strains of mice that were genetically programmed to have extra amyloid did eventually develop cognitive impairments, but it took much higher amyloid levels than humans have, and on further investigation the impairments didn't really look like Alzheimer’s. 4) Some infectious agents, like the gingivitis bacterium and the herpesviruses, seem to play a role in at least some Alzheimer’s cases . . . and amyloid is one of the body's responses to injury or infection, so it might be a harmless byproduct of these infections or whatever else the real disease is. 5) Anti-amyloid drugs (like Aduhelm) don't reverse the disease, and only slow progression a relatively small amount. Opponents call the amyloid hypothesis zombie science, propped up only by pharmaceutical companies hoping to sell off a few more anti-amyloid me-too drugs before it collapses. Meanwhile, mainstream scientists . . . continue to believe it without really offering any public defense. Scott was so surprised by the size of the gap between official and unofficial opinion that he asked if someone from the orthodox camp would speak out in its favor. I am David Schneider-Joseph, an engineer formerly with SpaceX and Google, now working in AI safety. Alzheimer’s isn’t my field, but I got very interested in it, spent six months studying the literature, and came away believing the amyloid hypothesis was basically completely solid. I thought I’d share that understanding with current skeptics. 
https://www.astralcodexten.com/p/in-defense-of-the-amyloid-hypothesis
[Original post: Should Strong Gods Bet On GDP?] 1: Comments About The Theory 2: Comments About Specific Communities 3: Other Comments Comments About The Theory Darwin writes: I think you may (*may*, I'm not sure) be vastly underestimating how many people are in some form of nontraditional tight-knit community. Notice that many of the communities you list are things you've directly personally encountered through your online interests or social circle. Most people have never heard of libertarian homesteaders or rationalist dating sites, perhaps you have also never heard of the things most other people belong to. For my part, I have been part of a foam combat ('boffer') organization since college. You may want to say 'that's not a community, that's just a hobby', but the people in this sport form a strong community with tight bonds outside the game itself. Not only do I go to practices twice a week, I have 2 D&D games and 1 board game night every week with mostly members of the community, members of the community are my friends that I go out to movies and dinners with, play video games with voice chat on Discord with, talk to online in Discord servers and web forums and group chats, go to parties with and gossip about with other community members. Aside from attending over a dozen weddings of community members (mostly to other community members), I've served as best man for 2 members and wedding officiant for 2 other members. The sport itself has houses, guilds, and fighting units, all with their own ethos, credos, goals, activities, and hierarchies; it has knighthoods and squireships, it has awards for arts and crafts and community service. The sport has regular camping events that end up looking like temporary compounds of hundreds to thousand+ members, lasting from a weekend to a week. 
We may not have a singular God or Invisible Hand we all worship, but we have strong community norms towards things like inclusion, creating positive experiences, some modernized gender-neutral version of chivalry, creating safe spaces, etc. If you didn't know me very very well, you might know that 'oh yeah, he does some kind of sword fighting thing on the weekends I think?', and not know there's a large and strong community there. I wonder how many other things are like this - I think 'oh yeah, they play softball on the weekends, oh yeah, they belong to a knitting circle, oh yeah, they go to a lot of concerts, oh yeah, they volunteer at some kind of community center', and have no idea that there's a strong close-knit community surrounding those things that remains largely invisible to outsiders. https://www.astralcodexten.com/p/highlights-from-the-comments-on-liberalism
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] My dad only actually enjoys about ten foods, nine of them beige. His bread? White. His pizza? Cheese. His meat? Turkey breast. And his side dish? Mashed potatoes. As a child I hated mashed potatoes, despite his evangelization of them. I too was a picky eater growing up, but I would occasionally attempt to see what he saw in his beloved spuds. Whenever I tried a bite, the texture disgusted me: a gritty gruel of salty flakes coated with the oleic pall of margarine. The flavor reminded me of stale Pringles. I checked back once every couple years, but was repulsed by them every time. I lobbied my parents for pasta or frozen tater tots or any other side I actually liked. Family dinners were often dichotomous, the same protein supplemented by two different carbs. “You are not my son,” my father would joke as he continued to put away his potato slop. “Maybe you’re not my father,” I’d shoot back when he shunned the rest of the family’s rice pilaf. Our starch preferences seemed irreconcilable. As I entered my teen years, my palate expanded. After I’d tried and enjoyed brussels sprouts and sushi and escargot, my hatred of one of the most basic and inoffensive of all foods seemed silly. One day at a nice restaurant, I decided to give mashed potatoes one more try. Upon taking my first bite, I realized three things: 1) Mashed potatoes are good. 2) Whatever my dad had been eating at home was not mashed potatoes. 3) My world is built on lies. https://www.astralcodexten.com/p/your-review-my-fathers-instant-mashed
Slightly contra Fukuyama on liberal communities Francis Fukuyama is on Substack; last month he wrote Liberalism Needs Community. As always, read the whole thing and don’t trust my summary, but the key point is: According to R. R. Reno, editor of the magazine First Things, the liberal project of the past three generations has sought to weaken the “strong Gods” of populism, nationalism, and religion that were held to be the drivers of the bloody conflicts of the early 20th century. Those gods are now returning, and are present in the politics of both the progressive left and far right—particularly the right, which is characterized today by demands for strong national identities or religious foundations for national communities. However, there is a cogent liberal response to the charge that liberalism undermines community. The problem is that, just as in the 1930s, that response has not been adequately articulated by the defenders of liberalism. Liberalism is not intrinsically opposed to community; indeed, there is a version of liberalism that encourages the flourishing of strong community and human virtue. That community emerges through the development of a strong and well-organized civil society, where individuals freely choose to bond with other like-minded individuals to seek common ends. People are free to follow “strong Gods”; the only caveat is that there is no single strong god that binds the entire society together. In other words - yes, part of the good life is participation in a tight-knit community with strong values. Liberalism’s shared values are comparatively weak, and its knitting comparatively loose. But that’s no argument against the liberal project. Its goal isn’t to become this kind of community itself, but to be the platform where communities like this can grow up. So in a liberal democracy, Christians can have their church, Jews their synagogue, Communists their commune, and so on. 
Everyone gets the tight-knit community they want - which beats illiberalism, where (at most) one group gets the community they want and everyone else gets persecuted. On a theoretical level, this is a great answer. On a practical level - is it really working? Are we really a nation dotted with tight-knit communities of strong values? The average person has a church they don’t attend and a political philosophy that mainly cashes out in Twitter dunks. Otherwise they just consume whatever slop the current year’s version of capitalism chooses to throw at them. It’s worth surveying the exceptions that prove the rule: https://www.astralcodexten.com/p/should-strong-gods-bet-on-gdp
Your Review: Joan of Arc

2025-08-07 · 02:18:10

Finalist #6 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] When the prefect of Alexandria’s daughter converted to Christianity, nothing in particular happened - it wasn’t as though the laws outlawing the cult would be enforced against her. She was smart, she was pretty (beautiful, even) and she had connections. So long as she kept quiet, Catherine could have a comfortable life. She didn’t keep quiet. https://www.astralcodexten.com/p/your-review-joan-of-arc  
[see footnote 4 for conflicts of interest] In 2021, Genomic Prediction announced the first polygenically selected baby. When a couple uses IVF, they may get as many as ten embryos. If they only want one child, which one do they implant? In the early days, doctors would just eyeball them and choose whichever looked healthiest. Later, they started testing for some of the most severe and easiest-to-detect genetic disorders like Down Syndrome and cystic fibrosis1. The final step was polygenic selection - genotyping each embryo and implanting the one with the best genes overall. Best in what sense? Genomic Prediction claimed the ability to forecast health outcomes from diabetes to schizophrenia. For example, although the average person has a 30% chance of getting type II diabetes, if you genetically test five embryos and select the one with the lowest predicted risk, they’ll only have a 20% chance2. Since you’re taking the healthiest of many embryos, you should expect a child conceived via this method to be significantly healthier than one born naturally. Polygenic selection straddles the line between disease prevention and human enhancement. In 2023, Orchid Health entered the field. Unlike Genomic Prediction, which tested only the most important genetic variants, Orchid offers whole genome sequencing, which can detect the de novo3 mutations involved in autism, developmental disorders, and certain other genetic diseases. Critics accused GP and Orchid of offering “designer babies”, but this was only true in the weakest sense - customers couldn’t “design” a baby for anything other than slightly lower risk of genetic disease. These companies refused to offer selection on “traits” - the industry term for the really controversial stuff like height, IQ, or eye color. Still, these were trivial extensions of their technology, and everybody knew it was just a matter of time before someone took the plunge. Last month, a startup called Nucleus took the plunge. 
https://www.astralcodexten.com/p/suddenly-trait-based-embryo-selection
My Heart Of Hearts

2025-08-0310:45

I promised some people longer responses: Thomas Cotter asks why people think “consistency” is an important moral value. After all, he says, the Nazis and Soviets were “consistent” with their evil beliefs. I’m not so sure of his examples - the Soviets massacred workers striking for better conditions, and the Nazis were so bad at race science that they banned IQ tests after Jews outscored Aryans - but I’m sure if he looked harder he could find some evil person who was superficially consistent with themselves. Hen Mazzig on Twitter is suspicious that lots of people oppose the massacres in Gaza without having objected equally strenuously to various other things. Again, he’s bad at examples - most of the things he names are less bad than the massacres in Gaza - but I’m sure if he looked harder he could find some thing which was worse than Gaza and which not quite as many people had protested. Therefore, people who object to the massacres in Gaza must be motivated by anti-Semitism. An r/TrueUnpopularOpinion poster argues that No One Actually Cares About Gaza; Your Anger Is Performative. They say that (almost) nobody can actually sustain strong emotions about the deaths of some hard-to-pin-down number of people they don’t know, and so probably people who claim to care are virtue-signaling or luxury-believing or one of those things. Since 2/3 of these are about Gaza, we’ll start there. And since there’s so much virtue-signaling and luxury-believing going around these days, I assure you that what I am about to share is my absolute most honest and deepest opinion, the one I hold in my heart of hearts.  https://www.astralcodexten.com/p/my-heart-of-hearts  
Jul 26, 2025 · Finalist #5 in the Review Contest [This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] Introduction The Astral Codex Ten (ACX) Commentariat is defined as the 24,485 individuals other than Scott who have contributed to the corpus of work of Scott’s blog posts, chiefly by leaving comments at the bottom of those posts. It is well understood (by the Commentariat themselves) that they are the best comments section anywhere on the internet, and have been for some time. This review takes it as a given that the ACX Commentariat outclasses all of its pale imitators across the web, so I won’t compare the ACX Commentariat to e.g. reddit. The real question is whether our glory days are behind us – specifically whether the ACX Commentariat of today has lost its edge compared to the SSC Commentariat of pre-2021. A couple of years ago Scott asked, Why Do I Suck? This was a largely tongue-in-cheek springboard to discuss a substantive criticism he regularly received - that his earlier writing was better than his writing now. How far back do we need to go before his writing was ‘good’? Accounts seemed to differ; Scott said that the feedback he got was of two sorts: 1) “I loved your articles from about 2013 - 2016 so much! Why don’t you write articles like that any more?”, which dates the decline to 2016; and 2) “Do you feel like you’ve shifted to less ambitious forms of writing with the new Substack?”, which dates the decline to 2021. Quite a few people responded in the comments that Scott’s writing hadn’t changed, but it was the experience of being a commenter which had worsened. 
For example, David Friedman, a prolific commenter on the blog in the SSC era, writes:

A lot of what I liked about SSC was the commenting community, and I find the comments here less interesting than they were on SSC, fewer interesting arguments, which is probably why I spend more time on [an alternative forum] than on ACX.

Similarly, kfix, who seems to be a long-time lurker (from as early as 2016) who has become more active in the ACX era, writes:

I would definitely agree that the commenting community here is 'worse' than at SSC along the lines you describe, along with the also unwelcome hurt feelings post whenever Scott makes an offhand joke about a political/cultural topic.

And of course, this position wasn’t unanimous. Verbamundi Consulting is a true lurker who has only ever made one post on the blog - this one:

Ok, I've been lurking for a while, but I have to say: I don't think you suck… You have a good variety of topics, your commenting community remains excellent, and you're one of the few bloggers I continue to follow.

The ACX Commentariat is unusual in that it styles itself as a major reason to come and read Scott’s writing - Scott offers up some insights on an issue, the comments section engages in unusually open and unusually respectful discussion of the theme, and the total becomes greater than the sum of the parts. Therefore, if the Commentariat has declined in quality, it may disproportionately affect people’s experience of Scott’s posts. The joint value of each Scott-plus-Commentariat offering declines if the Commentariat are not pulling their weight, even if Scott himself remains just as good as ever.

In Why Do I Suck? Scott suggests that there is weak to no evidence of a decline in his writing quality, so I propose this review as something of a companion piece: is the (alleged) problem with the blog, in fact, staring back at us from the mirror?
My personal view aligns with Verbamundi Consulting and many other commenters - I’ve enjoyed participating in both the SSC and ACX comments, and I haven’t noticed any decline in Commentariat quality. So I was extremely surprised to find that the data totally contradicted my anecdotal experience, and indicated a very clear drop-off in a number of markers of quality at almost exactly the points Scott mentioned in Why Do I Suck? - one in mid-2016 and one in early 2021, during the switch from SSC to ACX.

https://readscottalexander.com/posts/acx-your-review-the-astral-codex-ten
We’re running another ACX Grants round! If you already know what this is and just want to apply for a grant, use the form here (should take 15 - 30 minutes), deadline August 15. If you already know what this is and want to help as a funder, VC, partner charity, evaluator, or friendly professional, click the link for the relevant form, same deadline. Otherwise see below for more information.

What is ACX Grants?

ACX Grants is a microgrants program that helps fund ACX readers’ charitable or scientific projects. Click the links to see the 2022 and 2024 cohorts. The program is conducted in partnership with Manifund, a charity spinoff of Manifold Markets, who handle the administrative/infrastructure side of things.

How much money is involved?

I plan to contribute $200K. I expect (but cannot guarantee) an additional $800K from other donors, for a total of about $1 million. Most grants will probably be between $5,000 and $50,000, with a rare few up to $100,000. Depending on how much external donor interest there is, we will probably give between 10 and 50 grants.

What’s the catch?

There’s no catch, but this year we plan to experiment with replacing some grants with SAFEs, and others with convertible grants. That means that if you’re a startup, we (ACX Grants as a nonprofit institution, not me personally) get some claim to future equity if you succeed. If you’re not a startup, you’ll sign an agreement saying that if your project ever becomes a startup, we’ll get the equity claim. We’re still working on the exact details of this agreement, but we intend to have pretty standard terms and err in the favorable-to-you direction; obviously we’ll show you the final agreement before you sign anything. We’re doing this because some of our previous grantees became valuable companies, and it seems foolish to leave that money on the table when we could be capturing it and reinvesting it in future grants rounds. Please don’t let this affect your decision to apply.
Our top priority remains charity, and we’ll continue to select grantees based on their philanthropic value and not on their likelihood of making us money. If you’re not a startup and don’t plan to become one, none of this should affect you. And if you have a good reason not to want to sign these agreements - including “I’m not savvy enough to know what this means and it makes me nervous” - then we’re happy to opt you out of them.

What’s the timeline?

We’d like to have grants awarded by October 1 and money in your hands by November 1. This is a goal, not a promise.

What will the application process be like?

You fill out a form that should take 15 - 30 minutes. If we have questions, an evaluator might email or call you, in a way that hopefully won’t take more than another 15 - 30 minutes of your time to answer. If you win a grant, Manifund will send you the money, probably by bank wire. Every few years, we might ask you to fill out another 15 - 30 minute form letting us know how your project is doing.

What kind of projects might you fund?

There are already lots of good charities that help people directly at scale, for example Against Malaria Foundation (which distributes malaria-preventing bed nets) and GiveDirectly (which gives money directly to very poor people in Africa). These are hard to beat. We’re most interested in charities that pursue novel ways to change complex systems, either through technological breakthroughs, new social institutions, or targeted political change. Among the projects we’ve funded in the past were:

Development of oxfendazole, a drug for treating parasitic worms in developing countries

A platform that lets people create prediction markets on topics of their choice

A trip to Nigeria for college students researching lead poisoning prevention

A group of lawyers who sue factory farms under animal cruelty laws

Development of software that helps the FDA run better drug trials
A startup building anti-mosquito drones to fight tropical disease

A guide for would-be parents on which IVF clinics have the highest rate of successful implantation

A university lab working on artificial kidneys

You can read the full list here and here, and the most recent updates from each project here.

Is there anything good about winning an ACX Grant other than getting money?

You’ll get my support, which is mostly useful in getting me to blog about your project. For example, I can put out updates or requests for help on Open Threads. I can also try to help connect you to people I know. Some people who won ACX Grants last year were able to leverage the attention to attract larger grantmakers or VCs.

You can try to pitch me guest posts about your project. This could be a description of what you’re doing and why, or just a narrative about your experience and what you learned from it. Warning: I’m terrible to pitch guest posts to, I almost never go through with this, and I’m very nitpicky when I do. Still, you can try.

We’re working on gathering a network of friendly professionals who agree to provide pro bono or heavily discounted support (eg legal, accounting, business advice, cloud compute) to ACX grantees. We’ve only just begun this process and it might not actually materialize.

There are occasional virtual and physical meetups of ACX grantees; these don’t always result in Important Professional Connections, but are pretty interesting.

What if I want those nonfinancial benefits for my project, but don’t need money?

Apply for a grant of $1. But we’re pretty nervous about giving very-low-cost grants because it’s too easy to accept all of them and dilute our signaling value; for this reason, it might be harder to get a grant of $1 than a grant of $5,000, and we expect these to make up only 0 - 10% of our cohort. You might be better off coming up with some expansion of your project that takes $5,000 and applying for that.
What are the tax implications of an ACX Grant?

Consult your accountant, especially if you live outside the US. If you live inside the US, we think it’s ordinary taxable income. If you’re an individual, you’ll have to pay taxes on it at your usual tax rate. If you’re a 501(c), you’ll get your normal level of tax exemption.

I want to fund you, how can I help?

For bureaucratic reasons, we’re currently looking for donations mostly in the $5,000+ range. If that’s you, fill out the Funder Application Form. If we’ve already talked about this over email, you don’t need to fill out the form, but we encourage you to do so anyway so we know more about your interests and needs.

What’s the story behind why you have $200K to spend on grants every year, but are still asking for more funding?

Some generous readers sent me crypto during the crypto boom, or advised me on buying crypto, or asked to purchase NFTs of my post for crypto. Some of the crypto went up. Then I reinvested it into AI stocks, and those went up too. I think of this as unearned money and want to give some of it back to the community, hence this grants program. I have a lot of it, but not an unlimited amount. At the current rate, I can probably afford another ~5 ACX Grants rounds. When it runs out, I’ll just be a normal person with normal amounts of money (Substack is great, but not great enough for me to afford this level of donation consistently). My hope is that I can keep making these medium-sized donations, other people can add more to the pot, and we’ll be able to drag this out at least five more rounds, after which point maybe we’ll come up with another plan.

I’m a VC, how can I help?

Some of our applicants are potentially-profitable startups, and we decide they’re a better match for VC funding than for our grants. If you’re willing to look these over and get in touch with any that seem interesting, fill out the VC Application Form.
It will ask for more information on what kind of opportunities you’re interested in funding.

I’m a philanthropist or work at a philanthropic foundation; how can I help?

Some of our applicants are good projects, but not a good match for us, and we want to shop them around to other philanthropists and charities who might have different strengths or be able to work with larger amounts of money. If that’s you, please fill out the Partner Charity Application Form.

I’m good at evaluating grants, or an expert in some specific field; how can I help?

If you have experience as a grantmaker or VC, or you’re an expert in some technical field, you might be able to help us evaluate proposals. Fill out the Evaluator Application Form. By default we expect you’ll want us to send you one or two grants in your area of expertise, but if you want a challenge you can request more. If we’ve already talked about this over email, you don’t need to fill out the form, but we encourage you to do so anyway so I know more about your interests and needs. We expect to get more volunteers than we need, and most people who fill in the evaluator form won’t get contacted unless we need someone from their specific field.

I’m a professional who wants to do pro bono work for cool charities, how can I help?

Fill out the Friendly Professional Application Form. If we get enough applicants, we’ll compile them into a directory for our grantees.

I participated in the Impact Certificate Market last year, did you forget about me?

Yes, until Austin Chen reminded me last month… No! Request final oracular funding by filling in the Impact Applicant Form.

Sorry, I forgot, where do I go to apply for a grant again?

See form here. Please apply by 11:59 PM on August 15th.

https://www.astralcodexten.com/p/apply-for-an-acx-grant-2025
[previously in series: 1, 2, 3, 4, 5, 6]

It is eerily silent in San Francisco tonight. Since Mayor Lurie's crackdown, the usual drug hawkers, catcallers, and street beggars are nowhere to be seen. Still, your luck can’t last forever, and just before you reach your destination a man with bloodshot eyes lurches towards you. You recognize him and sigh.

"Go away!" you shout.

"Hey man," says Mark Zuckerberg, grabbing your wrist. "You wanna come build superintelligence at Meta? I'll give you five million, all cash."

"I said go away!"

"Ten million plus a Lambo," he counters.

"I don't even know anything about AI!" you say.

"I'll pay you fifty million to learn."

“F@$k off!”
[This is one of the finalists in the 2025 review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] https://www.astralcodexten.com/p/your-review-islamic-geometric-patterns