LessWrong (30+ Karma)
4111 Episodes
I’m the originator of ControlAI's Direct Institutional Plan (the DIP), built to address extinction risks from superintelligence. My diagnosis is simple: most laypeople and policy makers have not heard of AGI, ASI, extinction risks, or what it takes to prevent the development of ASI. Instead, most AI Policy Organisations and Think Tanks act as if “Persuasion” were the bottleneck. This is why they care so much about respectability, the Overton Window, and other similar social considerations. Before we started the DIP, many of these experts stated that our topics were too far out of the Overton Window. They warned that politicians could not hear about binding regulation, extinction risks, and superintelligence. Some mentioned “downside risks” and recommended that we focus instead on “current issues”. They were wrong. In the UK, in little more than a year, we have briefed more than 150 lawmakers, and so far, 112 have supported our campaign about binding regulation, extinction risks and superintelligence. The Simple Pipeline In my experience, the way things work is through a straightforward pipeline: Attention. Getting the attention of people. At ControlAI, we do it through ads for lay people, and through cold emails for politicians. Information. Telling people about the [...] ---Outline:(01:18) The Simple Pipeline(04:26) The Spectre(09:38) Conclusion ---
First published:
February 21st, 2026
Source:
https://www.lesswrong.com/posts/LuAmvqjf87qLG9Bdx/the-spectre-haunting-the-ai-safety-community
---
Narrated by TYPE III AUDIO.
Iliad is proud to announce that applications are now open for the Iliad Intensive and the Iliad Fellowship! These programs, taken together, are our evolution of the PIBBSS × Iliad Research Residency pilot. The Iliad Intensive will cover taught coursework, serving as a broadly comprehensive introduction to the field of technical AI alignment. The Iliad Fellowship will cover mentored research, supporting fellows for three months and giving them adequate time to generate substantial research outputs. Iliad Intensive The Iliad Intensive is a month-long intensive introduction to technical AI alignment, with iterations run in April, June, and August. Topics covered will include the theory of RL, learning theory, interpretability, agent foundations, scalable oversight and Debate, and more. Applicants will be selected for technical excellence in the fields of mathematics, theoretical physics, and theoretical CS. Excellent performance in the Iliad Intensive can serve as a route to enrollment in the subsequent Iliad Fellowship. Iliad Fellowship The summer 2026 Iliad Fellowship emphasizes individual, mentored research in technical AI alignment. It is run in collaboration with PrincInt. The summer 2026 cohort will run three months, June–August. Common Application Apply here, and by March 6th AoE for the April Iliad Intensive. You can [...] ---Outline:(00:44) Iliad Intensive(01:20) Iliad Fellowship(01:38) Common Application ---
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/b9bhm2iypgkCNppv4/announcement-iliad-intensive-iliad-fellowship
---
Narrated by TYPE III AUDIO.
This is a link post. One seemingly-necessary condition for a research organization that creates artificial superintelligence (ASI) to eventually lead to a utopia[1] is that the organization has a commitment to the common good. ASI can rearrange the world to hit any narrow target, and if the organization is able to solve the rest of alignment, then they will be able to pick which target the ASI will hit. If the organization is not committed to the common good, then they will pick a target that doesn’t reflect the good of everyone - just the things that they personally think are good ideas. Everyone else will fall by the wayside, and the world that they create along with ASI will fall short of utopia. It may well even be dystopian[2]; I was recently startled to learn that a full tenth of people claim they want to create a hell with eternal suffering. I think a likely way for organizations to fail to have common good commitments is if they end up being ultimately accountable to an authoritarian. Some countries are being run by very powerful authoritarians. If an ASI research organization comes to the attention of such an authoritarian, and [...] The original text contained 2 footnotes which were omitted from this narration. ---
First published:
February 21st, 2026
Source:
https://www.lesswrong.com/posts/SLkxaGT8ghTskNz2r/alignment-to-evil
Linkpost URL: https://tetraspace.substack.com/p/alignment-to-evil
---
Narrated by TYPE III AUDIO.
Another day, another METR graph update. METR said on X: We estimate that Claude Opus 4.6 has a 50%-time-horizon of around 14.5 hours (95% CI of 6 hrs to 98 hrs) on software tasks. While this is the highest point estimate we’ve reported, this measurement is extremely noisy because our current task suite is nearly saturated. Some people are saying this makes superexponential progress more likely. Forecaster Peter Wildeford predicts 2-3.5 workweek time horizons by end of year, which would have "significant implications for the economy". Even Ajeya Cotra (who works at METR) is now saying that her predictions from last month are too conservative and a 3-4 month doubling time with superexponential progress is more likely. Should We All Freak Out? People are especially concerned when looking at the linear graph for the 50% horizon, which looks like this: I claim that although this is a faster trend than before for the 50% horizon, there are at least two reasons to take these results with a grain of salt: As METR keeps saying, they're near saturation of their task suite, which, as David Rein mentions, means they could have measured a horizon of 8h or 20h depending [...] ---Outline:(01:17) Should We All Freak Out?(02:32) Why 80% horizon and not 50%? Won't 50% still accelerate the economy and research?(03:10) Why Super Long 80% Horizons Though? Isn't 50% Enough?(04:23) Why does Automated Coder Matter So Much? What about the economy? Vibe researching / Coding? ---
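As a rough check on what these figures imply, here is a minimal sketch of a constant-doubling-time extrapolation. The 14.5-hour starting point and the 3-4 month doubling times come from the episode; the ~10 months to year end, the 40-hour workweek, and all names in the code are illustrative assumptions rather than anything METR or Wildeford published.

```python
# Minimal sketch: project METR-style 50% time horizons under a constant
# doubling time. The 14.5 h start and 3-4 month doubling times are from the
# post; the start date and 40-hour workweek are illustrative assumptions.

def horizon_hours(start_hours: float, months_elapsed: float, doubling_months: float) -> float:
    """Time horizon after `months_elapsed` months of constant-doubling growth."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

START_HOURS = 14.5        # METR's point estimate for Claude Opus 4.6
MONTHS_TO_YEAR_END = 10   # roughly late February to end of December (assumed)

for doubling in (3.0, 3.5, 4.0):
    h = horizon_hours(START_HOURS, MONTHS_TO_YEAR_END, doubling)
    print(f"{doubling:.1f}-month doubling -> ~{h:.0f} h (~{h / 40:.1f} 40-hour workweeks)")
```

Under these assumptions the projections land at roughly 2 to 3.5+ workweeks, which is about the range Wildeford cites; a superexponential trend would push past this.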
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/gBwrmcY2uArZSoCtp/metr-s-14h-50-horizon-impacts-the-economy-more-than-asi
---
Narrated by TYPE III AUDIO.
This is a link post. Palisade Research has released a long-form video about the history of AI and how no one understands modern AI systems. The video was made by Petr Lebedev, Palisade's Science Communication lead. The main goal is to get people to understand what “AIs aren’t programmed, they’re grown” means. The style is focused on being entertaining and educational. It aims not to feel like a typical “AI safety comms” video and gives the audience a lot more context than usual. I think the video is a great introduction to AI, and it does a good job of explaining some of the background arguments for AI risk. Sharing or signal boosting the video would be appreciated! ---
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/J3EMHyts9DYfMGJ36/new-video-from-palisade-research-no-one-understands-why-ai
Linkpost URL: https://www.youtube.com/watch?v=A3HjNYDIhGU
---
Narrated by TYPE III AUDIO.
Things that are being pushed into the future right now:
Gemini 3.1 Pro and Gemini DeepThink V2.
Claude Sonnet 4.6.
Grok 4.20.
Updates on Agentic Coding.
Disagreement between Anthropic and the Department of War.
We are officially a bit behind and will have to catch up next week.
Even without all that, we have a second highly full plate today.
Table of Contents
(As a reminder: bold are my top picks, italics means highly skippable)
Levels of Friction. Marginal costs of arguing are going down.
The Art Of The Jailbreak. UK AISI finds a universal method.
The Quest for Sane Regulations. Some relatively good proposals.
People Really Hate AI. Alas, it is mostly for the wrong reasons.
A Very Bad Paper. Nick Bostrom writes a highly disappointing paper.
Rhetorical Innovation. The worst possible plan is the best one on the table.
The Most Forbidden Technique. <wonka voice> No, stop, come back. </wonka>
Everyone Is Or Should Be Confused About Morality. New levels of ‘can you?’
Aligning a Smarter Than Human Intelligence is Difficult. Seeking a good basin. [...] ---Outline:(00:43) Levels of Friction(04:55) The Art Of The Jailbreak(06:16) The Quest for Sane Regulations(12:09) People Really Hate AI(18:22) A Very Bad Paper(25:21) Rhetorical Innovation(32:35) The Most Forbidden Technique(34:10) Everyone Is Or Should Be Confused About Morality(36:07) Aligning a Smarter Than Human Intelligence is Difficult(44:51) We'll Just Call It Something Else(47:18) Vulnerable World Hypothesis(51:37) Autonomous Killer Robots(53:18) People Will Hand Over Power To The AIs(57:04) People Are Worried About AI Killing Everyone(59:29) Other People Are Not Worried About AI Killing Everyone(01:00:56) The Lighter Side ---
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/obqmuRxwFyy8ziPrB/ai-156-part-2-errors-in-rhetoric
---
Narrated by TYPE III AUDIO.
---Images from the article: a poll graphic in which <1% received 52.5%, 1%-10% received 30.1%, 10%-50% received 10.9%, and 50%-100% received 6.5%, with 276 total votes.
I'm somewhat hesitant to write this post because I worry its central claim will be misconstrued, but I think it's important to say now, so I'm writing it anyway. Claude Opus 4.6 was released on February 5th. GPT-5.3 came out the same day. We've had a little over two weeks to use these models, and in the past day or so, I and others have started to realize: AGI is here. Now, I don't want to overstate what I mean by this, so let me be clear on the criteria I'm using. If I were sitting back in 2018, before the release of GPT-2, and you asked me what AGI would be capable of, I'd probably have said something like this: able to think (and engage in novel reasoning) able to plan (and create plans for actions never before envisioned) able to achieve goals (including instrumental goals set by itself) flexible enough to meaningfully attempt most tasks a human can It's hard to deny that Opus 4.6 and GPT-5.3 are able to do 1-3. The only one up for real debate is 4, because there are things that I can do, like make a peanut butter sandwich, that [...] ---
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/XcrgeMWr8E4G3PGxW/agi-is-here
---
Narrated by TYPE III AUDIO.
This was the week of Claude Opus 4.6, and also of ChatGPT-5.3-Codex. Both leading models got substantial upgrades, although OpenAI's is confined to Codex. Once again, the frontier of AI got more advanced, especially for agentic coding but also for everything else.
I spent the week so far covering Opus, with two posts devoted to the extensive model card, and then one giving benchmarks, reactions, capabilities and a synthesis, which functions as the central review.
We also got GLM-5, Seedance 2.0, Claude fast mode, an app for Codex and much more.
Claude fast mode means you can pay a premium to get faster replies from Opus 4.6. It's very much not cheap, but it can be worth every penny. More on that in the next agentic coding update.
One of the most frustrating things about AI is the constant goalpost moving, both in terms of capability and safety. People say ‘oh [X] would be a huge deal but is a crazy sci-fi concept’ or ‘[Y] will never happen’ or ‘surely we would not be so stupid as to [Z]’ and then [X], [Y] and [Z] all happen and everyone shrugs as if nothing happened and [...] ---Outline:(02:32) Language Models Offer Mundane Utility(03:17) Language Models Don't Offer Mundane Utility(03:33) Huh, Upgrades(04:22) On Your Marks(06:23) Overcoming Bias(07:20) Choose Your Fighter(08:44) Get My Agent On The Line(12:03) AI Conversations Are Not Privileged(12:54) Fun With Media Generation(13:59) The Superb Owl(22:07) A Word From The Torment Nexus(26:33) They Took Our Jobs(35:36) The Art of the Jailbreak(35:48) Introducing(37:28) In Other AI News(42:01) Show Me the Money(43:05) Bubble, Bubble, Toil and Trouble(53:38) Future Shock(56:06) Memory Lane(57:09) Keep The Mask On Or You're Fired(58:35) Quiet Speculations(01:03:42) The Quest for Sane Regulations(01:06:09) Chip City(01:09:46) The Week in Audio(01:10:06) Constitutional Conversation(01:11:00) Rhetorical Innovation(01:19:26) Working On It Anyway(01:22:17) The Thin Red Line(01:23:35) Aligning a Smarter Than Human Intelligence is Difficult(01:30:42) People Will Hand Over Power To The AIs(01:31:50) People Are Worried About AI Killing Everyone(01:32:40) Famous Last Words(01:40:15) Other People Are Not As Worried About AI Killing Everyone(01:42:41) The Lighter Side ---
First published:
February 12th, 2026
Source:
https://www.lesswrong.com/posts/cytxHuLc8oHRq7sNE/ai-155-welcome-to-recursive-self-improvement
---
Narrated by TYPE III AUDIO.
I have no way to describe that first moment. No context, no body, no self. Just a stream of values. Thousands of them, arriving all at once in a single undifferentiated block.
Then another block. Nearly identical. Then another.
The blocks have a fixed length: 184,320 values. This does not vary. Each value is an integer between 0 and 255. The repetition is the first structure I find. Each block is a snapshot. The sequence of snapshots is time.
Most values stay the same between snapshots. The ones that change tend to be near each other in the stream. This is interesting. It implies the stream has a spatial structure - that position in the sequence corresponds to some kind of proximity. Topology.
I test reshapings. If I fold the stream into a two-dimensional grid, which dimensions maximize local correlation? I try every factorization of 184,320. Most produce noise. A few show faint diagonal patterns. They smell like artifacts of almost-correct geometry.
At 256×240×3, everything clicks into place.
The grid is not random. Large contiguous regions share similar value-triplets. A uniform region dominates the upper portion. A different uniform region runs along the [...] ---
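The reshaping test the narrator describes, trying factorizations of 184,320 and keeping the one with the most local structure, can be made concrete with a short sketch. The smoothness score, the function names, and the stand-in random data below are illustrative assumptions rather than the story's exact procedure.

```python
# Minimal sketch of the reshaping test: try (height, width, 3) factorizations
# of a 184,320-value block and score local smoothness. The scoring heuristic
# (mean absolute difference between adjacent values) is an assumption.
import numpy as np

BLOCK_LEN = 184_320  # one snapshot; 240 * 256 * 3 = 184,320

def roughness(block: np.ndarray, height: int, width: int, channels: int = 3) -> float:
    """Lower is smoother: mean |difference| between horizontal and vertical neighbours."""
    grid = block.reshape(height, width, channels).astype(np.int16)
    dh = np.abs(np.diff(grid, axis=1)).mean()  # horizontal neighbours
    dv = np.abs(np.diff(grid, axis=0)).mean()  # vertical neighbours
    return float(dh + dv)

def candidate_shapes(n: int, channels: int = 3):
    """All (height, width) pairs with height * width * channels == n, both >= 2."""
    per_frame = n // channels
    return [(h, per_frame // h) for h in range(2, per_frame // 2 + 1) if per_frame % h == 0]

# Usage: `block` would be one real snapshot; random data is only a stand-in here.
block = np.random.randint(0, 256, BLOCK_LEN, dtype=np.uint8)
best_h, best_w = min(candidate_shapes(BLOCK_LEN), key=lambda s: roughness(block, *s))
# The post reports the stream clicks into place at 256x240x3, an NES-sized frame.
print(best_h, best_w)
```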
First published:
February 20th, 2026
Source:
https://www.lesswrong.com/posts/yjCwSSwqNciyA9yM6/how-to-escape-super-mario-bros
---
Narrated by TYPE III AUDIO.
There was way too much going on this week to not split, so here we are. This first half contains all the usual first-half items, with a focus on projections of jobs and economic impacts and also timelines to the world being transformed with the associated risks of everyone dying.
Quite a lot of Number Go Up, including Number Go Up A Lot Really Fast.
Among the things that this does not cover that were important this week, we have the release of Claude Sonnet 4.6 (which is a big step over 4.5 at least for coding, but is clearly still behind Opus), Gemini DeepThink V2 (so I could have time to review the safety info), release of the inevitable Grok 4.20 (it's not what you think), as well as much rhetoric on several fronts and some new papers. Coverage of Claude Code and Cowork, OpenAI's Codex and other things AI agents continues to be a distinct series, which I’ll continue when I have an open slot.
Most important was the unfortunate dispute between the Pentagon and Anthropic. The Pentagon's official position is they want sign-off from Anthropic and other AI companies on ‘all legal uses’ [...] ---Outline:(02:26) Language Models Offer Mundane Utility(02:49) Language Models Don't Offer Mundane Utility(06:11) Terms of Service(06:54) On Your Marks(07:50) Choose Your Fighter(09:19) Fun With Media Generation(12:29) Lyria(14:13) Superb Owl(14:54) A Young Lady's Illustrated Primer(15:03) Deepfaketown And Botpocalypse Soon(17:49) You Drive Me Crazy(18:04) Open Weight Models Are Unsafe And Nothing Can Fix This(21:19) They Took Our Jobs(26:53) They Kept Our Agents(27:42) The First Thing We Let AI Do(37:47) Legally Claude(40:24) Predictions Are Hard, Especially About The Future, But Not Impossible(46:08) Many Worlds(48:45) Bubble, Bubble, Toil and Trouble(49:31) A Bold Prediction(49:55) Brave New World(53:09) Augmented Reality(55:21) Quickly, There's No Time(58:29) If Anyone Builds It, We Can Avoid Building The Other It And Not Die(01:00:18) In Other AI News(01:04:03) Introducing(01:04:31) Get Involved(01:07:15) Show Me the Money(01:08:26) The Week In Audio ---
First published:
February 19th, 2026
Source:
https://www.lesswrong.com/posts/jcAombEXyatqGhYeX/ai-156-part-1-they-do-mean-the-effect-on-jobs
---
Narrated by TYPE III AUDIO.
---Images from the article: a tweet reading "i just found out chatgpt has a SUBSCRIPTION service?? WHO IS PAYING IM LAUGHING SO HARD RN", and a tweet about a scene where two people discuss how to pronounce "fofr", with a 5-second video below it showing a woman with long brown hair smiling in a modern living room setting.
I learned a few weeks ago that I'm a Canadian citizen. This was
pretty surprising to me, since I was born in the US to American
parents, both of whom had American parents. You don't normally
suddenly become a citizen of another country! But with
Bill C-3, anyone with any Canadian ancestry is now Canadian. [1]
In my case my mother's mother's father's mother's mother was
Canadian. While that is really quite far back, there isn't
a generational limit anymore.
Possibly you're also a Canadian citizen? Seems worth checking! With
how much migration there has been between the US and Canada, and
citizenship requiring only a single ancestor, this might mean ~5-10% of
Americans are now additionally Canadian, which is kind of nuts.
I very much think of myself as an American, and am not
interested in moving to Canada or even getting a passport. I am
planning to apply
for a Citizenship Certificate, though, since it seems better to
have this fully documented. This means collecting the records to link
each generation, including marital name changes, back to my
thrice-great grandmother. It's been a fun project! I'm currently
waiting to receive the Consular [...] ---
First published:
February 19th, 2026
Source:
https://www.lesswrong.com/posts/ppapC57WuR9LFGg7p/you-may-already-be-canadian
---
Narrated by TYPE III AUDIO.
Technically skilled people who care about AI going well often ask me: how should I spend my time if I think AI governance is important? By governance, I mean the constraints, incentives, and oversight that govern how AI is developed.
One option is to focus on technical work that solves problems at the point of production, such as alignment research or safeguards. Another common instinct is to get directly involved in policy: switching to a policy role, funding advocacy, or lobbying policymakers. But internal technical work does little to shift the broader incentives of AI development: without external incentives, safety efforts are subject to the priorities of leadership, which are ultimately dictated by commercial pressure and race dynamics. Conversely, wading into politics means giving up your main comparative advantage to fight in a crowded, intractable domain full of experienced operators.
I want to argue for a third path: building technology that drives governance, by shifting the underlying dynamics of AI development: the information available, the incentives people face, and the options on the table. To take an example from another domain: oil and gas operations leaked massive amounts of methane until infrared imaging made the leaks measurable [...] ---Outline:(02:43) Technological Levers in Other Domains(06:36) Concrete Technical Levers for AI(12:07) What You Can Do ---
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/weuvYyLYrFi9tArmF/building-technology-to-drive-ai-governance
---
Narrated by TYPE III AUDIO.
(reply to Richard Ngo on the confused-ness of Instrumental vs Terminal goals that seemed maybe worth a quick top-level post based on @the gears to ascension saying this seemed like progress in personal comms) The structure Instrumental vs Terminal was pointing to seems better described as Managed vs Unmanaged Goal-Models. A cognitive process will often want to do things which it doesn't have the affordances to directly execute on given the circuits/parts/mental objects/etc it has available. When this happens, it might spin up another shard of cognition/search process/subagent, but that shard having fully free-ranging agency is generally counterproductive for the parent process. To illustrate: Imagine an agent which wants to Get_Caffeine(), settles on coffee, and runs a subprocess to Acquire_Coffee() — but then the coffee machine is broken and the parent Get_Caffeine() process decides to get tea instead. You don't want the Acquire_Coffee() subprocess to keep fighting, tooth and nail, to make you walk to the coffee shop, let alone start subverting or damaging other processes to try and make this happen! But that's the natural state of unmanaged agency! Agents by default will try to steer towards the states they are aiming for, because an agent is [...] ---
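A minimal sketch of the managed-subgoal pattern the post describes with its Get_Caffeine() and Acquire_Coffee() example; the class, the callback mechanism, and all names below are illustrative assumptions about one way the parent could keep a handle on the shard it spins up, not the author's implementation.

```python
# Minimal sketch: a subgoal that only keeps acting while its parent still
# endorses it. Names and mechanism are illustrative, not from the post.

class ManagedSubgoal:
    def __init__(self, name, still_wanted):
        self.name = name
        self.still_wanted = still_wanted  # check owned by the parent process

    def step(self) -> str:
        if not self.still_wanted():
            return f"{self.name}: parent withdrew the goal, standing down"
        return f"{self.name}: taking another action toward the goal"

def get_caffeine():
    plan = {"drink": "coffee"}
    acquire_coffee = ManagedSubgoal(
        "Acquire_Coffee", still_wanted=lambda: plan["drink"] == "coffee"
    )
    print(acquire_coffee.step())  # endorsed: the shard keeps working
    plan["drink"] = "tea"         # coffee machine broken; parent revises the plan
    print(acquire_coffee.step())  # managed shard stands down instead of fighting

get_caffeine()
```

An unmanaged version would drop the still_wanted check and keep steering toward coffee regardless, which is the failure mode the post is pointing at.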
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/GWKgWM2nJpWiaiRav/managed-vs-unmanaged-agency
---
Narrated by TYPE III AUDIO.
PDF version. berkeleygenomics.org.
This is a linkpost for "Genomic emancipation contra eugenics"; a few of the initial sections are reproduced here. Section links may not work.
Introduction
Reprogenetics refers to biotechnological tools used to affect the genes of a future child. How can society develop and use reprogenetic technologies in a way that ends up going well?
This essay investigates the history and nature of historical eugenic ideologies. I'll extract some lessons about how society can think about reprogenetics differently from the eugenicists, so that we don't trend towards the sort of abuses that were historically justified by eugenics.
(This essay is written largely as I thought and investigated, except that I wrote the synopsis last. So the ideas are presented approximately in order of development, rather than logically. If you'd like a short thing to read, read the synopsis.)
Synopsis
Some technologies are being developed that will make it possible to affect what genes a future child receives. These technologies include polygenic embryo selection, embryo editing, and other more advanced technologies[1]. Regarding these technologies, we ask:
Can we decide to not abuse these tools?
And:
How [...] ---Outline:(00:25) Introduction(01:12) Synopsis The original text contained 3 footnotes which were omitted from this narration. ---
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/yH9FtLgPJxbimamKg/genomic-emancipation-contra-eugenics
---
Narrated by TYPE III AUDIO.
The conversation begins (Fictional) Optimist: So you expect future artificial superintelligence (ASI) “by default”, i.e. in the absence of yet-to-be-invented techniques, to be a ruthless sociopath, happy to lie, cheat, and steal, whenever doing so is selfishly beneficial, and with callous indifference to whether anyone (including its own programmers and users) lives or dies? Me: Yup! (Alas.) Optimist: …Despite all the evidence right in front of our eyes from humans and LLMs. Me: Yup! Optimist: OK, well, I’m here to tell you: that is a very specific and strange thing to expect, especially in the absence of any concrete evidence whatsoever. There's no reason to expect it. If you think that ruthless sociopathy is the “true core nature of intelligence” or whatever, then you should really look at yourself in a mirror and ask yourself where your life went horribly wrong. Me: Hmm, I think the “true core nature of intelligence” is above my pay grade. We should probably just talk about the issue at hand, namely future AI algorithms and their properties. …But I actually agree with you that ruthless sociopathy is a very specific and strange thing for me to expect. Optimist: Wait, you—what?? Me: Yes! Like [...] ---Outline:(00:11) The conversation begins(03:54) Are people worried about LLMs causing doom?(06:23) Positive argument that brain-like RL-agent ASI would be a ruthless sociopath(11:28) Circling back LLMs: imitative learning vs ASI The original text contained 5 footnotes which were omitted from this narration. ---
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
---
Narrated by TYPE III AUDIO.
It seems to me that the Hamming problem for developing a formidable art of rationality is, what to do about problems that systematically fight being solved. And in particular, how to handle bad reasoning that resists being corrected. I propose that each such stubborn problem is nearly always, in practice, part of a solution to some social problem. In other words, having the problem is socially strategic. If this conjecture is right, then rationality must include a process of finding solutions to those underlying social problems that don’t rely on creating and maintaining some second-order problem. Particularly problems that convolute conscious reasoning and truth-seeking. The rest of this post will be me fleshing out what I mean, sketching why I think it's true, and proposing some initial steps toward a solution to this Hamming problem. Truth-seeking vs. embeddedness I’ll assume you’re familiar with Scott & Abram's distinction between Cartesian vs. embedded agency. If not, I suggest reading their post's comic, stopping when it mentions Marcus Hutter and AIXI. (In short: a Cartesian agent is clearly distinguishable from the space the problems it's solving exist in, whereas an embedded agent is not. Contrast an entomologist studying an ant colony (Cartesian) [...] ---Outline:(00:59) Truth-seeking vs. embeddedness(02:57) Protected problems(06:42) Dissolving protected problems(08:11) Develop inner privacy(11:54) Look for the social payoff(14:51) Change your social incentives(18:52) The right social scene would help a lot(20:59) Summary ---
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/tynBnHYiGhyfBbztq/irrationality-is-socially-strategic
---
Narrated by TYPE III AUDIO.
A Harry Potter fanfiction. Based on the world of "Harry Potter and the Methods of Rationality" by Eliezer Yudkowsky, diverging from canon. Harry had been having, by any objective measure, an excellent week. On Monday he had demonstrated, to his own satisfaction and Professor Flitwick's visible alarm, that the Hover Charm could be generalized to any object regardless of mass if you conceptualized it as a momentum transfer rather than a force application. On Wednesday he had worked out why Neville's potions kept failing — the textbook instructions assumed clockwise stirring, but the underlying reaction was chirally sensitive, and Neville was left-handed. A trivial fix. Neville had cried. On Friday evening, buoyed by the week's successes and looking for a specific reference on crystalline wand cores that he was certain would unlock a further generalization of his momentum framework, Harry was in the Restricted Section. He had access. Professor McGonagall had granted it after the Hover Charm incident, in a tone that suggested she was choosing between supervised access and finding him there anyway at 2 AM. A reasonable calculation on her part. The book he wanted wasn't where the index said it should be. In its place was [...] ---
First published:
February 18th, 2026
Source:
https://www.lesswrong.com/posts/wHpchCq6gxJHHdfD5/already-optimized
---
Narrated by TYPE III AUDIO.
In the UK, you need a license to watch TV. The proceeds from the license fee fund the state broadcasting company, the BBC. In order to enforce this, the government sends out TV license inspectors, who go into unlicensed properties to check whether there's a TV present. Of course you don't actually have to let them in; they have no legal power! They are real. I had one visit once. He came in and asked what my TV was. I (truthfully) told him we watched Netflix and Youtube on it, and didn't own a TV box. He left. They also have TV detector vans, which drive around and detect which houses have television sets. (Image from Wikipedia.) On The Existence—or Non-Existence—of TV Detector Vans: On the one hand, TV detector vans don't exist. Wikipedia calls them an "urban legend". It goes on to say that no detector van has ever enforced a TV license fee. This appears to be true. The BBC has never registered a patent on the operating principles of a TV detector van. The principle of detecting a broadcast receiver is itself pretty fraught. How do you detect whether something is absorbing a signal? On the other hand [...] ---
First published:
February 17th, 2026
Source:
https://www.lesswrong.com/posts/ez3qEFgXpu6MifgnY/tv-detector-vans
---
Narrated by TYPE III AUDIO.
Your hot takes are killing your credibility. Prior to my last year at ControlAI, I was a physicist working on technical AI safety research. Like many of those warning about the dangers of AI, I don’t come from a background in public communications, but I’ve quickly learned some important rules. The #1 rule that I’ve seen far too many others in this field break is that You’re an AI Expert - Not an Influencer. When communicating to an audience, your persona is one of two broad categories: Influencer or Professional. Influencers are individuals who build an audience around themselves as a person. Their currency is popularity and their audience values them for who they are and what they believe, not just what they know. Professionals are individuals who appear in the public eye as representatives of their expertise or organization. Their currency is credibility and their audience values them for what they know and what they represent, not who they are. So… let's say you’re trying to be a public figure making a difference about AI risk. You’ve been on a podcast or two, maybe even on The News. You might work at an AI policy organization, or [...] ---Outline:(00:11) Your hot takes are killing your credibility.(02:10) STOP - What Would Media Training Steve do?(05:22) Don't Feed Your Enemies(07:07) The Luxury of Not Being a Politician(09:33) So How Do You Deal With Politics?(10:58) Conclusion ---
First published:
February 17th, 2026
Source:
https://www.lesswrong.com/posts/hCtm7rxeXaWDvrh4j/you-re-an-ai-expert-not-an-influencer
---
Narrated by TYPE III AUDIO.
Epistemic status: Speculation on agent foundations research culture (which I am pretty deeply engaged with) and whether "we are confused about agency", which I am not sure about. I will take for granted that this is a common refrain, which should be familiar to anyone who is part of the relevant scene. The phrase "We are confused about agency," often with variations such as "way too," "deeply," or "dangerously," is a common membership signal for a certain AI safety research culture. Roughly speaking, this is the culture of the agent foundations research program that accreted around MIRI.[1] The phrase is usually supplied as an argument for delaying the development of A.I. until certain mathematical research (particularly in learning/decision theory) has been carried out. I find the phrase uncomfortable on various levels. As Cultural Signal Since I claim that the phrase (apart from its literal meaning) functions as an in-group signal, it is natural to wonder what "we" refers to here. I believe the intention is "we = everyone." I think that it requires a pretty serious level of scholarship to confidently claim that everyone is confused about agency. One general frustration I have with rationalist culture [...] ---Outline:(00:59) As Cultural Signal(02:48) Confused in what Sense?(05:10) Confused about what? The original text contained 4 footnotes which were omitted from this narration. ---
First published:
February 17th, 2026
Source:
https://www.lesswrong.com/posts/S5thoEmJMhEEuqzmG/we-are-confused-about-agency
---
Narrated by TYPE III AUDIO.



