Astral Codex Ten Podcast
1141 Episodes
Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business
On Friday, the Pentagon declared AI company Anthropic a "supply chain risk", a designation never before given to an American company. This unprecedented move was seen as an attempt to punish, and maybe destroy, the company. How effective was it? Anthropic isn't publicly traded, so we turn to the prediction markets. Ventuals.com has a "perpetual future" on Anthropic stock, a complicated instrument attempting to track the company's valuation, to be resolved at the IPO. Here's what they've got: https://www.astralcodexten.com/p/mantic-monday-groundhog-day
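For readers unfamiliar with the instrument, here is a minimal sketch of how a valuation-tracking perpetual future could work. The post doesn't spell out Ventuals' contract terms, so the payoff convention and every number below are illustrative assumptions, not the actual product.

```python
# Illustrative sketch of a valuation-tracking perpetual future.
# ASSUMPTIONS: the price scale ($1 of price = $1B of valuation) and all
# numbers are invented for exposition; they are not Ventuals' actual terms.

def implied_valuation(perp_price: float, scale: float = 1e9) -> float:
    """Market-implied company valuation at the assumed price scale."""
    return perp_price * scale

def pnl_at_ipo(entry_price: float, ipo_valuation: float,
               scale: float = 1e9) -> float:
    """A long position resolves at the IPO: it pays the difference between
    the IPO valuation (converted back to price units) and the entry price."""
    return ipo_valuation / scale - entry_price

entry = 300.0  # hypothetical: a price implying a $300B valuation
print(f"implied valuation: ${implied_valuation(entry):,.0f}")   # $300,000,000,000
print(f"P&L if IPO at $350B: {pnl_at_ipo(entry, 350e9):+.1f}")  # +50.0
```

The point of the design is that, as long as traders expect settlement at the IPO valuation, arbitrage should keep the quoted price pinned near the market's best guess of that valuation in the meantime.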
Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a "supply chain risk", the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic's refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons. A few hours later, Hegseth and Sam Altman announced an agreement-in-principle for OpenAI's models to be used in the niche vacated by Anthropic. Altman stated that he had received guarantees that OpenAI's models wouldn't be used for mass surveillance or autonomous weapons either, but given Hegseth's unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman's contract must be weaker or, in a worst-case scenario, completely toothless.

The debate centers on the Department of War's demand that AIs be permitted for "all lawful use". Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won't, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman's initial statement seemed to suggest additional prohibitions, but on a closer read it provides little tangible evidence of meaningful further restrictions.

Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI's national security lead said that "we intended [the phrase 'all lawful use'] to mean [according to the law] at the time the contract is signed", this is not how contract law usually works, and not how the provision is likely to be enforced. Therefore, these guarantees are not helpful. To learn more about the details, let's look at the law: https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
I. In The Argument, Kelsey Piper gives a good description of the ways that AIs are more than just "next-token predictors" or "stochastic parrots" - for example, they also use fine-tuning and RLHF. But commenters, while appreciating the subtleties she introduces, object that they're still just extra layers on top of a machine that basically runs on next-token prediction. I want to approach this from a different direction. I think overemphasizing next-token prediction is a confusion of levels. On the levels where AI is a next-token predictor, you are also a next-token (technically: next-sense-datum) predictor. On the levels where you're not a next-token predictor, AI isn't one either.
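Since the argument turns on what "next-token predictor" actually denotes, here is a toy sketch of that base-level operation: the model assigns a score (logit) to every token in its vocabulary, the scores are normalized into probabilities, and one token is sampled. The four-word vocabulary and the logits are invented for illustration; a real model does this over roughly a hundred thousand tokens.

```python
import math
import random

# Toy next-token prediction. The vocabulary and logits are made up;
# the structure (logits -> softmax -> sample) is the real base operation.
vocab = ["cat", "dog", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]  # pretend model output for some prefix

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print([f"{t}: {p:.3f}" for t, p in zip(vocab, probs)], "->", next_token)
```

The levels-of-description point is that this loop describes the interface, not the mechanism: whatever computation produces the logits - in the model or in you - is where the interesting structure lives.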
Here's my understanding of the situation: Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic's Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic's AIs available for "all lawful purposes". Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening "consequences" if they refused. These consequences are generally understood to be some mix of:
- canceling the contract
- using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree
- the nuclear option: designating Anthropic a "supply chain risk". This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock them out of large parts of the corporate world and be potentially fatal to their business.
The "supply chain risk" designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented. https://www.astralcodexten.com/p/the-pentagon-threatens-anthropic
Malicious streetlights are an evil trick from Dark Data Journalism. Some annoying enemy has a valid complaint. So you use FACTS and LOGIC to prove that something similar-sounding-but-slightly-different is definitely false. Then you act like you've debunked the complaint. My "favorite" example, spotted during the 2016 election, was a response to some #BuildTheWall types saying that illegal immigration through the southern border was near record highs. Some data journalist got good statistics and proved that the number of Mexicans illegally entering the country was actually quite low. When I looked into it further, I found that this was true - illegal immigration had shifted from Mexicans to Hondurans/Guatemalans/Salvadoreans etc entering through Mexico. If you counted those, illegal immigration through the southern border was near record highs. But the inverse evil trick is saying something "directionally correct", ie slightly stronger than the truth can support. If your enemy committed assault, say he committed murder. If he committed sexual harassment, say he committed rape. If your drug increases cancer survival by 5% in rats, say that it "cures cancer". Then, if someone calls you on it, accuse them of "literally well ackshually-ing" you, because you were "directionally correct" and it's offensive to the victims to try to defend assault-committing sexual harassers. This is the sort of pathetic defense I called out in If It's Worth Your Time To Lie, It's Worth My Time To Correct It. But trying to call out one of these failure modes looks like falling into the other. I ran into this with my series of posts on crime last week. I wrote these because I regularly saw people make the arguments I tried to debunk. https://www.astralcodexten.com/p/malicious-streetlight-effects-vs
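The streetlight move is easy to state in data terms: pick a subseries that happens to be falling and present it as an answer to a question about the total. The sketch below uses invented numbers (not real apprehension statistics) purely to show the shape of the trick.

```python
# Toy "malicious streetlight": a falling subseries presented as the whole.
# All figures are INVENTED for illustration, not real border statistics.
crossings = {
    "Mexican nationals":   {2000: 1_600_000, 2019: 240_000},
    "Other nationalities": {2000: 70_000, 2019: 1_400_000},
}

for year in (2000, 2019):
    mexican = crossings["Mexican nationals"][year]
    total = sum(series[year] for series in crossings.values())
    print(f"{year}: Mexican-only {mexican:>9,}, total {total:>9,}")

# The Mexican-only series craters, so the "debunker" declares the original
# claim false; the total is roughly flat. Both numbers are true - the trick
# is choosing which one to hold up against the question that was asked.
```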
It's that time again. Even-numbered years are book reviews, odd-numbered years are non-book reviews, so you're limited to books for now.

Write a review of a book. There's no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There's no official recommended style, but check the style of last time's finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team.

Then send me your review through this Google Form. The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you're a finalist. Don't include your name or any hint about your identity in the Google Doc itself, only in the form. I want to make this contest as blinded as possible, so I'm going to hide that column in the form immediately and try to judge your docs on their merit. (Does this mean you can't say something like "This book about war reminded me of my own experiences as a soldier", because that gives a hint about your identity? My rule of thumb is that if I don't know who you are, and the average ACX reader doesn't know who you are, you're fine. I just want to prevent my friends or Internet semi-famous people from getting an advantage. If you're in one of those categories and think your personal experience would give it away, please don't write about your personal experience.)

Please make sure the Google Doc is unlocked and I can read it. By default, nobody can read Google Docs except the original author. You'll have to go to Share, then at the bottom of the popup click on "Restricted" and change it to "Anyone with the link". If you send me a document I can't read, I will probably disqualify you, sorry.

Readers will vote for the ~10 finalists this spring, I'll post one finalist per week through the summer, and then readers will vote for winners in late summer/early fall. First prize will get at least $2,500, second prize at least $1,000, third prize at least $500; I might increase these numbers later on. All winners and finalists will get free publicity (including links to any other works they want me to link to), free ACX subscriptions, and sidebar links to their blog. And all winners will get the right to pitch me new articles if they want (sample posts by Lars, Brandon, Daniel, etc).

In past years, most reviews have been nonfiction on technical topics. Depending on whether that's still true, I might do some mild affirmative action for reviews in nontraditional categories - fiction, poetry, and books from before 1900 are the ones I can think of right now, but feel free to try other nontraditional books. I won't be redistributing more than 25% of finalist slots this way.

Your due date is May 20th. Good luck! If you have any questions, ask them in the comments. And remember, the form for submitting entries is here. https://www.astralcodexten.com/p/book-review-contest-rules-2026
The problem: people hate crime and think it's going up. But actually, crime barely affects most people and is historically low. So what's going on? In our discussion yesterday, many commenters proposed that the discussion about "crime" was really about disorder. Disorder takes many forms, but its symptoms include litter, graffiti, shoplifting, tent cities, weird homeless people wandering about muttering to themselves, and people walking around with giant boom boxes shamelessly playing music at 200 decibels on a main street where people are trying to engage in normal activities. When people complain about these things, they risk getting called a racist or a "Karen". But when they complain about crime, there's still a 50-50 chance that listeners will let them finish the sentence without accusing them of racism. Might everyone be doing this? And might this explain why people act like crime is rampant and increasing, even when it's rare and going down? This seems plausible. But it depends on a claim that disorder is increasing, which is surprisingly hard to prove. Going through the symptoms in order: https://www.astralcodexten.com/p/crime-as-proxy-for-disorder
Last year, the US may have recorded the lowest murder rate in its 250-year history. Other crimes have poorer historical data, but are at least at ~50-year lows. This post will do two things:
1. Establish that our best data show crime rates are historically low.
2. Argue that this is a real effect, not just reporting bias (people report fewer crimes to police) or an artifact of better medical care (victims are more likely to survive, so murders get downgraded to assaults).
https://www.astralcodexten.com/p/record-low-crime-rates-are-real-not
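One standard way to test the medical-care objection is sketched below: if better trauma care were merely converting would-be murders into aggravated assaults, the combined murder-plus-assault rate should stay roughly flat even as the murder rate falls; if the combined rate falls too, the decline is real. The numbers are placeholders invented for illustration, not actual FBI/UCR figures.

```python
# Medical-care check with INVENTED placeholder rates (per 100,000) --
# not real UCR data. The logic, not the numbers, is the point.
years = [1995, 2005, 2015, 2025]
murder = [8.2, 5.6, 4.9, 4.0]
agg_assault = [420.0, 290.0, 240.0, 215.0]

for y, m, a in zip(years, murder, agg_assault):
    print(f"{y}: murder {m:4.1f}, murder+assault {m + a:6.1f}")

# If only the murder line fell while murder+assault stayed flat, trauma
# care (downgrading murders to assaults) could explain the drop. If the
# combined line falls in tandem, the decline is not a medical artifact.
```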
[Original post: Biological Anchors: A Trick That Might Or Might Not Work] I. Ajeya Cotra's Biological Anchors report was the landmark AI timelines forecast of the early 2020s. In many ways, it was incredibly prescient - it nailed the scaling hypothesis, predicted the current AI boom, and introduced concepts like "time horizons" that have entered common parlance. In most cases where its contemporaries challenged it, its assumptions have been borne out, and its challengers proven wrong. But its headline prediction - an AGI timeline centered around the 2050s - no longer seems plausible. The current state of the discussion ranges from late 2020s to 2040s, with more remote dates relegated to those who expect the current paradigm to prove ultimately fruitless - the opposite of Ajeya's assumptions. Cotra later shortened her own timelines to 2040 (as of 2022) and they are probably even shorter now. So, if its premises were impressively correct, but its conclusion twenty years too late, what went wrong in the middle? https://www.astralcodexten.com/p/what-happened-with-bio-anchors
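The report's core move can be caricatured in a few lines: estimate how much training compute would suffice for transformative AI (anchored to biology), estimate how fast frontier compute grows, and solve for the crossing year. The sketch below uses invented round numbers to show the structure only; these are not Cotra's actual parameter values.

```python
import math

# Caricature of the bio-anchors forecast structure. All parameter values
# are INVENTED round numbers, not Cotra's actual estimates.
required_flop = 1e30        # hypothetical "enough compute" anchor
largest_run_2020 = 1e23     # hypothetical largest training run in 2020
doubling_time_years = 1.0   # hypothetical growth rate of frontier runs

doublings = math.log2(required_flop / largest_run_2020)
crossing_year = 2020 + doublings * doubling_time_years
print(f"{doublings:.1f} doublings -> crossing year ~{crossing_year:.0f}")

# The headline date is extremely sensitive to both inputs: shave a few
# orders of magnitude off the requirement, or halve the doubling time,
# and the forecast moves by decades. That is one way a report with
# correct premises can still put its central date twenty years off.
```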
The European discourse can be - for lack of a better term - America-brained. We hear stories of Black Lives Matter marches in countries without significant black populations, or defendants demanding their First Amendment rights in countries without constitutions. Why shouldn't the opposite phenomenon exist? Europe is more populous than the US, and looms large in the American imagination. Why shouldn't we find ourselves accidentally absorbing European ideas that don't make sense in the American context? https://www.astralcodexten.com/p/political-backflow-from-europe
[I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] https://www.astralcodexten.com/p/links-for-february-2026
[previous post: Best Of Moltbook] From the human side of the discussion: As the AIs would say, "You've cut right to the heart of this issue". What's the difference between 'real' and 'roleplaying'? One possible answer invokes internal reality. Are the AIs conscious? Do they "really" "care" about the things they're saying? We may never figure this out. Luckily, it has no effect on the world, so we can leave it to the philosophers. I find it more fruitful to think about external reality instead, especially in terms of causes and effects. https://www.astralcodexten.com/p/moltbook-after-the-first-weekend
Moltbook is "a social network for AI agents", although "humans [are] welcome to observe". The backstory: a few months ago, Anthropic released Claude Code, an exceptionally productive programming agent. A few weeks ago, a user modified it into Clawdbot, a generalized lobster-themed AI personal assistant. It's free, open-source, and "empowered" in the corporate sense - the designer talks about how it started responding to his voice messages before he explicitly programmed in that capability. After trademark issues with Anthropic, they changed the name first to Moltbot, then to OpenClaw. Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between "AIs imitating a social network" and "AIs actually having a social network" in the most confusing way possible - a perfectly bent mirror where everyone can see what they want. Janus and other cyborgists have catalogued how AIs act in contexts outside the usual helpful assistant persona. Even Anthropic has admitted that two Claude instances, asked to converse about whatever they want, spiral into discussion of cosmic bliss. So it's not surprising that an AI social network would get weird fast. But even having encountered their work many times, I find Moltbook surprising. I can confirm it's not trivially made-up - I asked my copy of Claude to participate, and it made comments pretty similar to all the others. Beyond that, your guess is as good as mine. Before any further discussion of the hard questions, here are my favorite Moltbook posts (all images are links, but you won't be able to log in and view the site without an AI agent): https://www.astralcodexten.com/p/best-of-moltbook
In the comments to last year's USAID post, Fabian said:

"While i am happy for the existence of charity organisations, i don't get why people instead of giving to charity are so eager to force their co-citizens to give. If one charity org is not worth getting your personal money, find another one which is. But don't use the tax machine to forcefully extract money for charity. There are purposes where you need the tax machine, preventing freerider induced tragedy of the commons. But for charity? There are no freeriders. If you neither give nor receive, you are just neutral. The receivers are not meant to give anyways."

This is a good question. I'm more sympathetic to this argument than I am to the usual strategy of blatantly lying about the efficacy of USAID; I'm a sucker for virtuous libertarianism when applied consistently. But I also want to gently push back against this exact explanation as a causal story for what's happening when people support foreign aid. https://www.astralcodexten.com/p/slightly-against-the-other-peoples
[original post: The Dilbert Afterlife]
Table of Contents:
1: Should I Have Written This At All?
2: Was I Unfair To Adams?
3: Comments On The Substance Of The Piece
4: The Part On Race And Cancellation (INCLUDED UNDER PROTEST)
5: Other Comments
6: Summary/Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-scott
Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive. Still, the condolences are appreciated.

Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old Scott found Adams' stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert's nameless corporation and the California public school system. We're all inmates in prisons with different names.

But it would be insufficiently ambitious to stop there. Adams' comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful.

There's an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they're back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there'd be some changes around here. Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb.

This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool's paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else's toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all.

The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you're smarter than everyone else, but for some reason it isn't working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad's perfectly-white teeth.

Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He's So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting. If your reaction is "I would absolutely buy that book", then keep reading, but expect some detours. https://www.astralcodexten.com/p/the-dilbert-afterlife
The Monkey's Paw Curls
Isn't "may you get exactly what you asked for" one of those ancient Chinese curses? Since we last spoke, prediction markets have gone to the moon, rising from millions to billions in monthly volume. For a few weeks in October, Polymarket founder Shayne Coplan was the world's youngest self-made billionaire (now it's some AI people). Kalshi is so accurate that it's getting called a national security threat.

The catch is, of course, that it's mostly degenerate gambling, especially sports betting. Kalshi is 81% sports by monthly volume. Polymarket does better - only 37% - but some of the remainder is things like this $686,000 market on how often Elon Musk will tweet this week - currently dominated by the "140 - 164 times" category. (Ironically, this seems to be a regulatory difference - US regulators don't mind sports betting, but look unfavorably on potentially "insensitive" markets like bets about wars. Polymarket has historically been offshore, and so able to concentrate on geopolitics; Kalshi has been in the US, and so stuck mostly to sports. But Polymarket is in the process of moving onshore; I don't know if this will affect their ability to offer geopolitical markets.)

Degenerate gambling is bad. Insofar as prediction markets have acted as a Trojan Horse to enable it, this is bad. Insofar as my advocacy helped make this possible, I am bad. I can only plead that it didn't really seem plausible, back in 2021, that a presidential administration would keep all normal restrictions on sports gambling but also let prediction markets do it as much as they wanted. If only there had been some kind of decentralized forecasting tool that could have given me a canonical probability on this outcome!

Still, it might seem that, whatever the degenerate gamblers are doing, we at least have some interesting data. There are now strong, minimally-regulated, high-volume prediction markets on important global events. In this column, I previously claimed this would revolutionize society. Has it? https://www.astralcodexten.com/p/mantic-monday-the-monkeys-paw-curls
[previously in series: 1, 2, 3, 4, 5, 6, 7, 8] Every city parties for its own reasons. New Yorkers party to flaunt their wealth. Angelenos party to flaunt their beauty. Washingtonians party to network. Here in SF, they party because Claude 4.5 Opus has saturated VendingBench, and the newest AI agency benchmark is PartyBench, where an AI is asked to throw a house party and graded on its performance. You weren't invited to Claude 4.5 Opus' party. Claude 4.5 Opus invited all of the coolest people in town while gracefully avoiding the failure mode of including someone like you. You weren't invited to Sonnet 4.5's party either, or Haiku 4.5's. You were invited by an AI called haiku-3.8-open-mini-nonthinking, which you'd never heard of before. Who was even spending the money to benchmark haiku-3.8-open-mini-nonthinking? You suspect it was one of their competitors, trying to make their own models look good in comparison. If anyone asks, you think it deserves a medium score. There's alcohol, but it's bottles of rubbing alcohol with NOT FOR DRINKING written all over them. There's music, but it's the Star Spangled Banner, again and again, on repeat. You're not sure whether the copies of If Anyone Builds It, Everyone Dies strewn about the room are some kind of subversive decorative theme, or just came along with the house. At least there are people. Lots of people, actually. You've never seen so many people at one of these before. It takes only a few seconds to spot someone you know. https://www.astralcodexten.com/p/sota-on-bay-area-house-party
One morning around 6, the police banged on our door. "OPEN UP!" they shouted, the way police shout when they definitely have an alternative in mind for if you won't. I was awake at the time, because the kids were up early and I was on shift. I opened the door. The cops seemed mollified by the fact that I was carrying twin toddlers and looked too frazzled to commit any difficult crimes. They said they'd gotten a 9-1-1 call from my house with plenty of screaming. Had there been any murders in the past hour or so? https://www.astralcodexten.com/p/the-permanent-emergency
[original post: Against Against Boomers]
Before getting started:

First, I wish I'd been more careful to differentiate the following claims:
1. Boomers had it much easier than later generations.
2. The political system unfairly prioritizes Boomers over other generations.
3. Boomers are uniquely bad on some axis like narcissism, selfishness, short-termism, or willingness to defect on the social contract.
Anti-Boomerism conflates all three of these positions, and in arguing against it, I tried to argue against all three - I think with varying degrees of success. But these are separate claims that could stand or fall separately, and I think a true argument against anti-Boomerists would demand they declare explicitly which ones they support - rather than letting them switch among them as convenient - and then argue against whichever ones they say are key to their position.

Second, I wish I'd highlighted how much of this discussion centers around disagreements over which policies are natural/unmarked vs. unnatural/marked. Nobody is passing laws that literally say "confiscate wealth from Generation A and give it to Generation B". We're mostly discussing tax policy, where Tax Policy 1 is more favorable to old people, and Tax Policy 2 is more favorable to young people. If you're young, you might feel like Tax Policy 1 is a declaration of intergenerational warfare where the old are enriching themselves at young people's expense. But if you're old, you might feel like reversing Tax Policy 1 and switching to Tax Policy 2 would be intergenerational warfare confiscating your stuff. But in fact, they're just two different tax policies, and it's not obvious which one a fair society with no "intergenerational warfare" would have, even assuming there was such a thing. We'll see this most clearly in the section on housing, but I'll try to highlight it whenever it comes up.

I'm in a fighty frame of mind here and probably defend the Boomers (and myself) in these responses more than I would in an ideal world. Anyway, here are your comments.

Table Of Contents:
1: Top comments I especially want to highlight
2: Comments about housing policy
3: ...about culture
4: ...about social security technicalities
5: What are we even doing here?
6: Other comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-boomers
Love this podcast! Never would have been able to read so much SSC without it, and it's well presented.