LessWrong (30+ Karma)
“If you can get your ship into orbit, you’re halfway to anywhere.” - Robert Heinlein. This generalizes. 1. Spaceflight is hard. Putting a rocket on the moon is one of the most impressive feats humans have ever achieved. The International Space Station, a living space and research facility in Low Earth Orbit, has been continuously inhabited for over two decades now, and that's awesome. It is a testament to the hard work and brilliance of a lot of people over a lot of time that this is possible. [Image: Look ma, no selfie stick] And we’re not done yet. There are footprints on the moon today, but there are also robots leaving tracks on Mars, and satellites have taken photos of the rings of Saturn and the craters of Pluto. Voyager has stood waaay back and taken a family picture of the solar system. Give us a rocket and a place to send it, and we’ll go anywhere we’re curious about. There's a funny property of spaceflight though, which is that half the work happens very close to home and is surprisingly similar no matter where you’re going. When you boil spaceflight down into the very abstract basics, it [...] ---Outline:(00:17) 1.(03:30) 2.(06:02) 3. ---
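For a rough sense of the scale behind the 'halfway' claim (approximate textbook delta-v figures, added here for context and not taken from the post): reaching low Earth orbit costs about \(\Delta v \approx 9.4\) km/s once gravity and drag losses are included, while going from LEO onto a lunar transfer costs roughly \(3.1\) km/s and onto a Mars transfer roughly \(3.6\) km/s. Since the rocket equation \(\Delta v = v_e \ln(m_0/m_1)\) makes propellant mass grow exponentially with \(\Delta v\), that first nine-ish km/s close to home dominates the budget for almost any destination.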
First published:
November 6th, 2025
Source:
https://www.lesswrong.com/posts/mB4o2LnLrZHesNRC2/halfway-to-anywhere
---
Narrated by TYPE III AUDIO.
WARNING: This post contains spoilers for Harry Potter and the Methods of Rationality, and I will not warn about them further. Also some anecdotes from slutcon which are not particularly NSFW, but it's still slutcon. A girl I was seeing once asked me to guess what Hogwarts house she was trying to channel with her outfit. Her answer turned out to be Slytherin, because the tiny gem in the necklace she was wearing was green. Never mind that nothing else she wore was green, including the gems in her earrings, and that colors of all the other Hogwarts houses appeared in her outfit (several more prominently), and that the gem itself was barely visible enough to tell the color at all, and that she wasn’t doing anything to draw attention to the necklace specifically.[1] I wonder, sometimes, just how many of the “subtle social signals” people think they’re sending are like that case - i.e. basically a game with themselves, which has utterly zero signal for anyone else. Subtlety to the point of invisibility clearly happens more than zero percent of the time; I have seen at least that one unambiguous case, so existence is at least established. [...] ---Outline:(01:21) Just How Subtle Is The Signal?(01:25) Hints in Writing(02:34) Hints in Flirting(04:26) Hints in Outfit(05:46) Oblivious?(06:29) Emotional Investment? The original text contained 2 footnotes which were omitted from this narration. ---
First published:
November 6th, 2025
Source:
https://www.lesswrong.com/posts/5axNFSfaxgnTKPrpy/people-seem-funny-in-the-head-about-subtle-signals
---
Narrated by TYPE III AUDIO.
I spent 3 recent Sundays writing my mainline AI scenario. Since I only spent 3 days on it, it's not very well-researched (especially in the areas where I'm not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day version to normalize doing so. There are also some details that I no longer fully endorse, because the act of doing this exercise spurred me to look into things in more detail, and I have updated my views slightly[1] — an ode to the value of this exercise. Nevertheless, this scenario still represents a very central story of how I think the future of AI will go. I found this exercise extremely useful and I hope others will carve out a few days to attempt it. At the bottom there are: (1) my tips on how to do this scenario-writing exercise, and (2) a list of open questions that I think are particularly important (which I made my intuitive guesses about to write this scenario but feel very uncertain about). Summary 2026-2028: The Deployment Race Era AI companies are all focused on monetizing, doing RL on LLMs [...] ---Outline:(01:10) Summary(07:36) A 2032 Takeoff Story(07:40) Jan-Jun 2026: AI Products Galore(09:23) Jul-Dec 2026: The AI National Champions of China(11:45) Jan-Jun 2027: The Deployment Race Era(13:50) Jul-Dec 2027: China's Domestic DUV(16:31) Jan-Jun 2028: Robotics Foundation Models(20:34) Jul-Dec 2028: The 1% AI Economy(27:26) Jan-Jun 2029: National AI Grand Strategies(30:28) Jul-Dec 2029: Early Household Robots(32:25) Jan-Jun 2030: Where are the Superhuman Coders?(34:25) Jul-Dec 2030: Scaling AI bureaucracies(36:08) Jan-Jun 2031: Domestic EUV(37:45) Jul-Dec 2031: The Magnificent Four(38:54) Jan-Jun 2032: China is a Robot Playground(41:12) Jul-Dec 2032: The 10% Automation Economy(43:47) Jan 2033: Superhuman Coder and the Paradigm Shift(45:00) Branch 1: Brain-like algorithms(45:05) Feb 2033, Branch 1: Full research automation(46:21) Apr 2033, Branch 1: Brain-like algorithms(47:26) Summer 2033, Branch 1: One-month slowdown to teach the AIs to 'love humans'(48:28) Rest of Time, Branch 1: A Toy Story Ending for Humanity(50:42) Branch 2: Online learning(50:47) Early 2033, Branch 2: Online learning(51:47) Late 2033, Branch 2: US-China talks(52:43) Early 2034, Branch 2: China outproducing the US, pulls into lead around the SAR milestone(54:25) Late 2034, Branch 2: Sabotage(55:31) Early 2035, Branch 2: China gets hyper-cheap ASI(57:50) Late 2035, Branch 2: China's ASI wins(58:56) Rest of time, Branch 2: China's Space Endowment(01:01:32) On scenario writing(01:05:01) My tips on scenario writing(01:10:29) Top open questions The original text contained 5 footnotes which were omitted from this narration. ---
First published:
November 6th, 2025
Source:
https://www.lesswrong.com/posts/yHvzscCiS7KbPkSzf/a-2032-takeoff-story
---
Narrated by TYPE III AUDIO.
Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company.
They also will be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.
These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.
To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful to not de facto punish Anthropic for doing a good thing and create perverse incentives.
To others, these actions by Anthropic are utterly ludicrous and deserving of [...] ---Outline:(01:31) What Anthropic Is Doing(09:54) Releasing The Weights Is Not A Viable Option(11:35) Providing Reliable Inference Can Be Surprisingly Expensive(14:22) The Interviews Are Influenced Heavily By Context(19:58) Others Don't Understand And Think This Is All Deeply Silly ---
First published:
November 5th, 2025
Source:
https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation
---
Narrated by TYPE III AUDIO.
Crosspost from my blog.
In the classic Prisoner's Dilemma (https://www.lesswrong.com/w/prisoner-s-dilemma), there are two agents with the same beliefs and decision theory, but with different values. To get the best available outcome, they have to help each other out (even if they don't intrinsically care about the other's values); and they have to do so even though, if the one does not help the other, there's no way for the other to respond with a punishment afterward.
A classic line of reasoning, from the perspective of one of the prisoners, goes something like this: My collaborator and I each care only about ourselves. So it seems logical that we will defect against each other. However, there's some kind of symmetry at play here. If you abstract away the details of which specific prisoner I am, really I'm in exactly the same situation as my collaborator. So it's almost as though our decisions are logically bound to each other: Either my reasoning leads to me defecting, and therefore his reasoning also leads to him defecting, or else likewise I cooperate and he cooperates. We will make "the same choice" as each other, i.e. the symmetric / conjugate choice.
[...] ---
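To make the contrast above concrete, here is a minimal sketch in Python with made-up payoff numbers (they are not from the post; any payoffs with the standard ordering temptation > reward > punishment > sucker would do):

```python
# Made-up Prisoner's Dilemma payoffs with the standard ordering 5 > 3 > 1 > 0.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_best_response(their_move):
    """Treat the other's move as fixed: defecting dominates either way."""
    return max(("C", "D"), key=lambda my: PAYOFF[(my, their_move)])

def symmetric_choice():
    """Treat the two decisions as logically bound: whatever I pick, my counterpart
    picks too, so I am really choosing between the (C, C) and (D, D) outcomes."""
    return max(("C", "D"), key=lambda move: PAYOFF[(move, move)])

print(causal_best_response("C"), causal_best_response("D"))  # D D  (dominance reasoning)
print(symmetric_choice())                                    # C    (the "same choice" reasoning)
```

The symmetry argument in the excerpt amounts to saying that when the two decision procedures are effectively the same computation, the second comparison is the relevant one.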
First published:
November 5th, 2025
Source:
https://www.lesswrong.com/posts/abYDvsqmJkchzSY8q/meta-agentic-prisoner-s-dilemmas
---
Narrated by TYPE III AUDIO.
For those relatively new to AI safety, AISafety.com helps them navigate the space, providing lists of things like self-study courses, funders, communities, etc. But while the previous version of the site basically just threw a bunch of resources at the user, we've now redesigned it to be more accessible, making it more likely that people take further steps towards entering the field. The old site: [screenshot] The new one: [screenshot] The new homepage does a better job of directing people to the resource pages most relevant to them, while minimising overwhelm. We're considering going a step further in the future and integrating a chatbot to help direct people to the exact resources they need, given their goals, skillset, location, etc. We'd love to hear any feedback on this idea. In user research we also found that of those who regularly use AISafety.com, many are only aware of one or two of the resource pages (there are 10!). When we showed these people the other pages, they often found them useful. So we're hoping the new site will improve discoverability by making the various pages more obvious in the top navigation. On that note, here's the complete list [...] ---
First published:
November 5th, 2025
Source:
https://www.lesswrong.com/posts/ciw6DCdywoXk7yrdw/new-homepage-for-ai-safety-resources-aisafety-com-redesign
---
Narrated by TYPE III AUDIO.
Or: "Who, what, when, where?" -> "Why?" In "What's hard about this? What can I do about that?", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it. And then, systematically brainstorm ideas for dealing with those difficult things. Then, the problem becomes easy. But, there is a secret subskill necessary for this to work. The first few people I pitched "What's hard about this and what can I do about that?" to happened to already have the subskill, so I didn't notice for awhile. The subskill is "being a useful kind of 'concrete.'" Often, people who are ostensibly problem-solving, will say things that are either vague, or concrete but in a way that doesn't help. (This doesn't just apply to "why is this hard?", it's more general). Here's some examples of vague things: "I need to eat better." "I'm stuck on this math problem." "I'm not someone who really has ideas." "This task fell through the cracks." Here are some examples of somewhat-concrete-but-not-that-helpful things you might say, about each of those, if you were trying to ask "what's hard about that?" "I love sugar too much." [...] ---Outline:(00:09) Or: Who, what, when, where? - Why?(04:02) Noticing the empty space(05:57) Problem solutions also need a Who/What/Where/When, and maybe also How? ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/aHinbhZBA3q3rDTeR/being-usefully-concrete
---
Narrated by TYPE III AUDIO.
We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers and which outcomes would result from them. You can read our paper here: ai-scenarios.com Attempts to predict scenarios with fast AI progress should be more tractable than most forecasting attempts. This is because a single factor (namely, access to AI capabilities) overwhelmingly determines geopolitical outcomes. This becomes even more the case once AI has mostly automated the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the advantage of the leader in an ASI race can only grow as time goes on, until their AI systems can produce a decisive strategic advantage (DSA) over all actors. In this model, superpowers are likely to engage in a heavily state-sponsored (footnote: “Could be entirely a national project, or helped by private actors; either way, countries will invest heavily at scales only possible with state involvement, and fully back research efforts, e.g. by providing nation-state level security.”) race to ASI, which will culminate in one of three [...] ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/4E7cyTFd9nsT4o4d4/modeling-the-geopolitics-of-ai-development
---
Narrated by TYPE III AUDIO.
[Crossposted on Windows In Theory] “Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives. Almost everyone lived on the modern equivalent of $400 to $600 a year, just above the subsistence level … Then -- just a couple of hundred years ago, maybe 10 generations -- people started getting richer. And richer and richer still. Per capita income, at least in the West, began to grow at the unprecedented rate of about three quarters of a percent per year. A couple of decades later, the same thing was happening around the world.” Steven Landsburg METR has published very influential work by Kwa and West et al. on measuring AI's ability to complete long tasks. Its main result is the following remarkable graph: On the X axis is the release date of flagship LLMs. On the Y axis is the following measure of their capabilities: take software-engineering tasks that these models can succeed in solving 50% of the time, and [...] ---Outline:(02:44) Factors impacting the intercept(04:32) Factors impacting the slope/shape(09:04) Sigmoidal relationship(10:39) Decreasing costs(11:58) Implications for GDP growth(18:17) Intuition from METR tasks(20:25) AI as increasing population(23:03) Substitution and automation effects ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/QQAWu7D6TceHwqhjm/thoughts-by-a-non-economist-on-ai-and-economics
---
Narrated by TYPE III AUDIO.
Meta: Heroic responsibility is a standard concept on LessWrong. I was surprised to find that we don't have a post explaining it to people not already deep in the cultural context, so I wrote this one. Suppose I decide to start a business - specifically a car dealership. One day there's a problem: we sold a car with a bad thingamabob. The customer calls up the sales department, which hands it off to the legal department, which hands it off to the garage, which can't find a replacement part so they hand it back to the legal department, which then hands it back off to the finance department, which goes back to the garage. It's a big ol' hot potato. It's not really any specific person's job to handle this sort of problem, and nobody wants to deal with it. One of the earliest lessons of entrepreneurship is: as the business owner/manager, this sort of thing is my job. When it's not any other specific person's job, it's mine. Because if it doesn't get done, it's my business which will lose money. I can delegate it, I can make it somebody else's job, but I'm still the one [...] ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/i3TjwwuEwkrXb9Ts6/heroic-responsibility
---
Narrated by TYPE III AUDIO.
New Things Have Come To Light
The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call The Battle of the Board.
The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman's initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”
Liv: Lots of people dismiss Sam's behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he's building the machine god.
Toucan: From Ilya's deposition—
• Ilya plotted over a year with Mira to remove Sam
• Dario wanted Greg fired and himself in charge of all research
• Mira told Ilya that Sam pitted her against Daniela
• Ilya wrote a 52-page memo to get Sam fired and a separate doc on Greg
This Really Was Primarily A Lying And Management Problem
Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up [...] ---Outline:(00:12) New Things Have Come To Light(01:09) This Really Was Primarily A Lying And Management Problem(03:23) Ilya Tells Us How It Went Down And Why He Tried To Do It(06:17) If You Come At The King(07:31) Enter The Scapegoats(08:13) And In Summary ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/iRBhXJSNkDeohm69d/openai-the-battle-of-the-board-ilya-s-testimony
---
Narrated by TYPE III AUDIO.
Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to). But some problems are illegible (obscure or hard to understand, or in a common cognitive blind spot), meaning there is a high risk that leaders and policymakers will decide to deploy or allow deployment even if they are not solved. (Of course, this is a spectrum, but I am simplifying it to a binary for ease of exposition.) From an x-risk perspective, working on highly legible safety problems has low or even negative expected value. Similar to working on AI capabilities, it brings forward the date by which AGI/ASI will be deployed, leaving less time to solve the illegible x-safety problems. In contrast, working on the illegible problems (including by trying to make them more legible) does not have this issue and therefore has a much higher expected value (all else being equal, such as tractability). Note that according to this logic, success in making an illegible problem highly legible is almost as good as solving [...] The original text contained 2 footnotes which were omitted from this narration. ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/PMc65HgRFvBimEpmJ/legible-vs-illegible-ai-safety-problems
---
Narrated by TYPE III AUDIO.
Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah
You’re absolutely right to start reading this post! What a perfectly rational decision!
Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt. Models often praise the user's beliefs (sycophancy) or satisfy inappropriate requests which are wrapped within special text (jailbreaking). Normally, we fix these problems with Supervised Finetuning (SFT) on static datasets showing the model how to respond in each context. While SFT is effective, static datasets get stale: they can enforce outdated guidelines (specification staleness) or be sourced from older, less intelligent models (capability staleness).
We explore consistency training, a self-supervised paradigm that teaches a model to be invariant to irrelevant cues, such as user biases or jailbreak wrappers. Consistency training generates fresh data using the model's own abilities: instead of curating static target data for each context, the model supervises itself. The training targets are simply the model's own responses to the same prompts without the cue of the user information or jailbreak wrapper!
Basically, we optimize the model to react as if that cue were not present. Consistency [...] ---Outline:(02:38) Methods(02:42) Bias-augmented Consistency Training(03:58) Activation Consistency Training(04:07) Activation patching(05:05) Experiments(06:31) Sycophancy(07:55) Sycophancy results(08:30) Jailbreaks(09:52) Jailbreak results(10:48) BCT and ACT find mechanistically different solutions(11:39) Discussion(12:22) Conclusion(13:03) Acknowledgments The original text contained 2 footnotes which were omitted from this narration. ---
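As a rough illustration of the data-generation step described above, here is a minimal sketch (not the authors' code); `generate`, `add_cue`, and the fine-tuning step are hypothetical stand-ins for whatever model stack is used:

```python
# Sketch of the consistency-training data loop described in the post summary:
# the target for a cue-wrapped prompt is the model's own response to the clean prompt.
from typing import Callable, List, Tuple

def build_consistency_dataset(
    clean_prompts: List[str],
    add_cue: Callable[[str], str],    # e.g. wraps the prompt in a jailbreak template or adds a stated user bias
    generate: Callable[[str], str],   # the *current* model's sampling function, so targets never go stale
) -> List[Tuple[str, str]]:
    """Return (cue-wrapped prompt, response to the clean prompt) pairs."""
    dataset = []
    for prompt in clean_prompts:
        clean_response = generate(prompt)   # self-generated target
        wrapped_prompt = add_cue(prompt)    # same request plus the irrelevant cue
        dataset.append((wrapped_prompt, clean_response))
    return dataset

# Ordinary supervised fine-tuning on these pairs then teaches the model to answer the
# wrapped prompt exactly as it would have answered the clean one:
#   fine_tune(model, build_consistency_dataset(prompts, add_cue, model.generate))
```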
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/DLrQ2jjijqpX78mHJ/gdm-consistency-training-helps-limit-sycophancy-and
---
Narrated by TYPE III AUDIO.
Crosspost from my blog.
Let these always be remembered: those who suffer, those who experience injustice, those who are silenced, those who are dispossessed, those who are aggressed upon, those who lose what they love, and those whose thriving is thwarted.
May I not let hate into my heart;
and May I not let my care for the aggressor prevent me from protecting what I love.
May I always reach out a hand in peace;
and May I never hold it out as they sever my wrist.
May I seek symmetry, to take synchronized steps back from the brink;
and May I not pretend symmetry where there is none.
May I forgive when I expect forgiveness in return;
and May I not forgive when I do not expect forgiveness in return.
When there is time to say all that needs to be said, May I recount and denounce the crimes of my side;
when there is not time, May I not be a prop in a libelous morality play.
May I fulfill my moral obligations;
and May I not give in to threats that enforce double standards.
May I present my [...] ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/34mDRmAbfkaMfoAcR/a-prayer-for-engaging-in-conflict
---
Narrated by TYPE III AUDIO.
Over the decade I've spent working on AI safety, I've felt an overall trend of divergence; research partnerships starting out with a sense of a common project, then slowly drifting apart over time. It has been frequently said that AI safety is a pre-paradigmatic field. This (with, perhaps, other contributing factors) means researchers have to optimize for their own personal sense of progress, based on their own research taste. In my experience, the tails come apart; eventually, two researchers are going to have some deep disagreement in matters of taste, which sends them down different paths. Until the spring of this year, that is. At the Agent Foundations conference at CMU,[1] something seemed to shift, subtly at first. After I gave a talk -- roughly the same talk I had been giving for the past year -- I had an excited discussion about it with Scott Garrabrant. Looking back, it wasn't so different from previous chats we had had, but the impact was different; it felt more concrete, more actionable, something that really touched my research rather than remaining hypothetical. In the subsequent weeks, discussions with my usual circle of colleagues[2] took on a different character -- somehow [...] The original text contained 3 footnotes which were omitted from this narration. ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/4gosqCbFhtLGPojMX/research-reflections
---
Narrated by TYPE III AUDIO.
This is a link post. Eliezer Yudkowsky did not exactly suggest that you should eat bear fat covered with honey and sprinkled with salt flakes. What he actually said was that an alien, looking from the outside at evolution, would predict that you would want to eat bear fat covered with honey and sprinkled with salt flakes. Still, I decided to buy a jar of bear fat online, and make a treat for the people at Inkhaven. It was surprisingly good. My post discusses how that happened, and a bit about the implications for Eliezer's thesis. Let me know if you want to try some; I can prepare some for you if you happen to be at Lighthaven before we run out of bear fat, and before I leave toward the end of November. ---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/2pKiXR6X7wdt8eFX5/i-ate-bear-fat-with-honey-and-salt-flakes-to-prove-a-point
Linkpost URL:https://signoregalilei.com/2025/11/03/i-ate-bear-fat-to-prove-a-point/
---
Narrated by TYPE III AUDIO.
Audio note: this article contains 61 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Jaynes’ Widget Problem[1]: How Do We Update On An Expected Value? Mr A manages a widget factory. The factory produces widgets of three colors - red, yellow, green - and part of Mr A's job is to decide how many widgets to paint each color. He wants to match today's color mix to the mix of orders the factory will receive today, so he needs to make predictions about how many of today's orders will be for red vs yellow vs green widgets. The factory will receive some unknown number of orders for each color throughout the day - \(N_r\) red, \(N_y\) yellow, and \(N_g\) green orders. For simplicity, we will assume that Mr A starts out with a prior distribution \(P[N_r, N_y, N_g]\) under which: Number of orders for each color is independent of the other colors, i.e. \(P[N_r, N_y, N_g] = P[N_r]P[N_y]P[N_g]\) Number of orders for each color is uniform between 0 and 100: \(P[N_i = n_i] = \frac{1}{100}\, I[0 \leq n_i < 100]\)[2] … and then [...] ---Outline:(00:24) Jaynes' Widget Problem : How Do We Update On An Expected Value?(03:20) Enter Maxent(06:02) Some Special Cases To Check Our Intuition(06:35) No Information(07:27) Bayes Updates(09:27) Relative Entropy and Priors(13:20) Recap The original text contained 2 footnotes which were omitted from this narration. ---
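For listeners following by audio, here is a standard way to write the maxent-relative-to-a-prior (minimum relative entropy) update that the post's title points to, sketched in the usual notation (the post's own notation and derivation may differ): choose \(Q^\ast = \arg\min_Q D_{\mathrm{KL}}(Q \,\|\, P)\) subject to \(\mathbb{E}_Q[f(X)] = c\), which has the exponential-tilting form \(Q^\ast(x) \propto P(x)\, e^{\lambda f(x)}\) with \(\lambda\) chosen so the constraint holds; taking \(f = I[X \in E]\) and \(c = 1\) forces all probability mass onto \(E\) (the limit \(\lambda \to \infty\)), recovering the ordinary Bayes update \(Q^\ast = P(\cdot \mid E)\).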
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/qEWWrADpDR8oGzwpf/the-zen-of-maxent-as-a-generalization-of-bayes-updates
---
Narrated by TYPE III AUDIO.
I was recently saddened to see that Seb Krier – who's a lead on the Google DeepMind governance team – created a simple website apparently endorsing the idea that Ricardian comparative advantage will provide humans with jobs in the time of ASI. The argument that comparative advantage means advanced AI is automatically safe is pretty old and has been addressed multiple times. For the record, I think this is a bad argument, and it's not useful to think about AI risk through comparative advantage. [Image: Seb Krier's web app, allowing labor allocation by dragging and dropping humans or AIs into fields of work.] The Argument The law of comparative advantage says that two sides of a trade can both profit from each other. Both can be better off in the end, even if one side is less productive at everything compared to the other side. The naive idea some people have is: humans are going to be less productive than AI, but because of this law humans will remain important, will keep their jobs and get paid. Things will be fine, and this is a key reason why we shouldn't worry so much about AI risk. Even if you're less productive [...] ---
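For readers who want the textbook law spelled out before the post argues against leaning on it, here is a small worked example in Python with entirely hypothetical productivity numbers (not from the article):

```python
# Hypothetical output-per-hour numbers, purely to illustrate the textbook law;
# they say nothing about whether the argument survives under ASI.
ai    = {"coding": 10.0, "data_entry": 8.0}   # absolutely better at both tasks
human = {"coding": 1.0,  "data_entry": 4.0}   # absolutely worse at both tasks

# Opportunity cost of one unit of data entry, measured in forgone coding.
ai_cost    = ai["coding"] / ai["data_entry"]        # 1.25 units of coding
human_cost = human["coding"] / human["data_entry"]  # 0.25 units of coding
# human_cost < ai_cost, so the human holds the *comparative* advantage in data entry.

HOURS = 8  # each party works an 8-hour day

# No trade: each splits the day evenly between the two tasks.
no_trade = {
    task: HOURS / 2 * ai[task] + HOURS / 2 * human[task]
    for task in ("coding", "data_entry")
}

# Specialization: the human does only data entry; the AI spends just enough hours on
# data entry to keep that output unchanged, and codes the rest of the day.
human_data_entry = HOURS * human["data_entry"]
ai_hours_on_data = max(0.0, (no_trade["data_entry"] - human_data_entry) / ai["data_entry"])
specialized = {
    "coding": (HOURS - ai_hours_on_data) * ai["coding"],
    "data_entry": human_data_entry + ai_hours_on_data * ai["data_entry"],
}

print(no_trade)     # {'coding': 44.0, 'data_entry': 48.0}
print(specialized)  # data entry unchanged at 48.0, coding rises from 44.0 to 60.0
```

Total output rises under specialization even though the AI is absolutely better at both tasks, so both sides can gain from trade; whether that accounting says anything useful about human employment under ASI is exactly what the post disputes.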
First published:
November 3rd, 2025
Source:
https://www.lesswrong.com/posts/tBr4AtpPmwhgfG4Mw/comparative-advantage-and-ai
---
Narrated by TYPE III AUDIO.
It's been a long time coming that I spin off Crime into its own roundup series.
This is only about Ordinary Decent Crime. High crimes are not covered here.
Table of Contents
Perception Versus Reality.
The Case Violent Crime is Up Actually.
Threats of Punishment.
Property Crime Enforcement is Broken.
The Problem of Disorder.
Extreme Speeding as Disorder.
Enforcement and the Lack Thereof.
Talking Under The Streetlamp.
The Fall of Extralegal and Illegible Enforcement.
In America You Can Usually Just Keep Their Money.
Police.
Probation.
Genetic Databases.
Marijuana.
The Economics of Fentanyl.
Jails.
Criminals.
Causes of Crime.
Causes of Violence.
Homelessness.
Yay Trivial Inconveniences.
San Francisco.
Closing Down San Francisco.
A San Francisco Dispute.
Cleaning Up San Francisco.
Portland.
Those Who Do Not Help Themselves.
Solving for the Equilibrium (1).
Solving for the Equilibrium (2).
Lead.
Law & Order.
Look Out.
Perception Versus Reality
A lot of the impact of crime is based on the perception of crime.
The [...] ---Outline:(00:20) Perception Versus Reality(05:00) The Case Violent Crime is Up Actually(06:10) Threats of Punishment(07:03) Property Crime Enforcement is Broken(12:13) The Problem of Disorder(14:39) Extreme Speeding as Disorder(15:57) Enforcement and the Lack Thereof(20:24) Talking Under The Streetlamp(23:54) The Fall of Extralegal and Illegible Enforcement(25:18) In America You Can Usually Just Keep Their Money(27:29) Police(37:31) Probation(40:55) Genetic Databases(43:04) Marijuana(48:28) The Economics of Fentanyl(50:59) Jails(55:03) Criminals(55:39) Causes of Crime(56:16) Causes of Violence(57:35) Homelessness(58:27) Yay Trivial Inconveniences(59:08) San Francisco(01:04:07) Closing Down San Francisco(01:05:30) A San Francisco Dispute(01:09:13) Cleaning Up San Francisco(01:13:05) Portland(01:13:15) Those Who Do Not Help Themselves(01:15:15) Solving for the Equilibrium (1)(01:20:15) Solving for the Equilibrium (2)(01:20:43) Lead(01:22:18) Law & Order(01:22:58) Look Out ---
First published:
November 3rd, 2025
Source:
https://www.lesswrong.com/posts/tt9JKubsa8jsCsfD5/crime-and-punishment-1-1
---
Narrated by TYPE III AUDIO.
Once upon a time in the medium-small town of Skewers, Washington, there lived a 52-year-old man by the name of Mr. Humman, who considered himself a top-tier chess-player. Now, Mr. Humman was not generally considered the strongest player in town; if you asked the other inhabitants of Skewers, most of them would've named Mr. Neumann as their town's chess champion. But Mr. Humman did not see things that way himself. On Humman's theory, he was really quite good at the Ethiopian opening and variation in chess, while Neumann was more of an all-rounder; a jack of all trades, and therefore, of logical necessity, master of none. There were certain tiers of ability in the town chess club, and Humman and Neumann were both in the top tier, according to Mr. Humman, and that was all you could really say about it, according to Mr. Humman. Humman did not often play against Neumann directly; they had not played in a few years, in fact. If you asked Humman why not, he might have said that it was more gracious to give younger players the chance to play him, rather than the top-tier chess-players being too exclusive among themselves. But in [...] ---
First published:
November 3rd, 2025
Source:
https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect
---
Narrated by TYPE III AUDIO.