Discover
LessWrong (30+ Karma)

3389 Episodes
Reverse
On some level, calories in calories out has to be true. But these variables are not independent. Bodies respond to exercise by getting hungry and to calorie deficit by getting tired. Even absent that, bodies know how much food they want, and if you don’t give it to them they will tell you at increasing volume until you give in (not all bodies, of course, but quiet stomachs aren’t the target market for GLP-1s). A new breed of drugs, GLP-1 agonists, offer a way out of the latter trap by telling your body you’ve eaten, even when you haven’t, but leave many people fatigued. The newest GLP-1, retatrutide, may escape that trap too, with a mechanism so beautiful I almost don’t believe it.
How Jelly Beans Becomes Fat
Unfortunately in order to understand the beauty of retatrutide, you’re going to have to learn the basics of [...] ---Outline:(00:56) How Jelly Beans Becomes Fat(04:19) The Power Plant Managers(08:21) How do GLP-1 Medications Work?(11:04) The Side Effects(13:32) Conclusion---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/ahYpi49FNhxYBxanw/the-biochemical-beauty-of-retatrutide-how-glp-1s-actually
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Recontextualization distills good behavior into a context which allows bad behavior. More specifically, recontextualization is a modification to RL which generates completions from prompts that discourage misbehavior, appends those completions to prompts that are more tolerant of misbehavior, and finally reinforces the model on the recontextualized instruction-completion data. Due to the data generation and training prompts differing in their attitude towards misbehavior, recontextualization builds resistance to misbehaviors that the training signal mistakenly reinforces. For example, suppose our reward signal does not robustly penalize deception. Recontextualization generates completions while discouraging deception and then creates training data by updating those completions' prompts to encourage deception. That simple tweak can prevent the model from becoming dishonest! Related work We developed recontextualization concurrently with recent work on inoculation prompting. Wichers et al. and Tan et al. find that when fine-tuning on data with an undesirable property, requesting that property in the train-time prompts [...] ---Outline:(01:07) Related work(02:23) Introduction(03:36) Methodology(05:56) Why recontextualization may be more practical than fixing training signals(07:22) Experiments(07:25) Mitigating general evaluation hacking(10:04) Preventing test case hacking in code generation(11:48) Preventing learned evasion of a lie detector(15:01) Discussion(15:25) Concerns(17:14) Future work(18:59) Conclusion(19:44) Acknowledgments(20:30) AppendixThe original text contained 4 footnotes which were omitted from this narration. ---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/whkMnqFWKsBm7Gyd7/recontextualization-mitigates-specification-gaming-without
---
Narrated by TYPE III AUDIO.
---Images from the article:__T3A_INLINE_LATEX_PLACEHOLDER___\beta=0.1___T3A_INLINE_LATEX_END_PLACEHOLDER__." style="max-width: 100%;" />Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Current AI models are strange. They can speak—often coherently, sometimes even eloquently—which is wild. They can predict the structure of proteins, beat the best humans at many games, recall more facts in most domains than human experts; yet they also struggle to perform simple tasks, like using computer cursors, maintaining basic logical consistency, or explaining what they know without wholesale fabrication. Perhaps someday we will discover a deep science of intelligence, and this will teach us how to properly describe such strangeness. But for now we have nothing of the sort, so we are left merely gesturing in vague, heuristical terms; lately people have started referring to this odd mixture of impressiveness and idiocy as “spikiness,” for example, though there isn’t much agreement about the nature of the spikes. Of course it would be nice to measure AI progress anyway, at least in some sense sufficient to help us [...] ---Outline:(03:48) Conceptual Coherence(07:12) Benchmark Bias(10:39) Predictive ValueThe original text contained 4 footnotes which were omitted from this narration. ---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/PzLSuaT6WGLQGJJJD/the-length-of-horizons
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
tl;dr: We fine-tune or few-shot LLMs to use reasoning encoded with simple ciphers (e.g. base64, rot13, putting a dot between each letter) to solve math problems. We find that these models only get an uplift from the reasoning (over directly answering) for very simple ciphers, and get no uplift for intermediate-difficulty ciphers that they can translate to English. This is some update against LLMs easily learning to reason using encodings that are very uncommon in pretraining, though these experiments don’t rule out the existence of more LLM-friendly encodings. 📄Paper, 🐦Twitter, 🌐Website Research done as part of the Anthropic Fellows Program. Summary of the results We teach LLMs to use one particular cipher, such as: “letter to word with dot” maps each char to a word and adds dots between words. “Rot13” is the regular rot13 cipher “French” is text translated into French “Swap even & odd chars” swaps [...] ---Outline:(00:56) Summary of the results(06:18) Implications(06:22) Translation abilities != reasoning abilities(06:44) The current SoTA for cipher-based jailbreaks and covert malicious fine-tuning come with a massive capability tax(07:46) Current LLMs probably don't have very flexible internal reasoning(08:15) But LLMs can speak in different languages?(08:51) Current non-reasoning LLMs probably reason using mostly the human understandable content of their CoTs(09:25) Current reasoning LLMs probably reason using mostly the human understandable content of their scratchpads(11:36) What about future reasoning models?(12:45) Future work---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/Lz8cvGskgXmLRgmN4/current-language-models-struggle-to-reason-in-ciphered
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
(Also posted to my Substack; written as part of the Halfhaven virtual blogging camp.) Let's set aside the question of whether or not superintelligent AI would want to kill us, and just focus on the question of whether or not it could. This is a hard thing to convince people of, but lots of very smart people agree that it could. The Statement on AI Risk in 2023 stated simply: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Since the statement in 2023, many others have given their reasons for why superintelligent AI would be dangerous. In the recently-published book If Anyone Builds It, Everyone Dies, the authors Eliezer Yudkowsky and Nate Soares lay out one possible AI extinction scenario, and say that going up against a superintelligent AI would be like going up [...] The original text contained 1 footnote which was omitted from this narration. ---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/n2XrjMFehWvBumt9i/the-mom-test-for-ai-extinction-scenarios
---
Narrated by TYPE III AUDIO.
If there is only one thing you take away from this article, let it be this: THOU SHALT NOT ALLOW ANOTHER TO MODIFY THINE SELF-IMAGE This appears to me to be the core vulnerability by which both humans and AI induce psychosis (and other manipulative delusions) in people. Of course, it's probably too strong as stated—perhaps in a trusted relationship, or as part of therapy (with a human), it may be worth breaking it. But I hope being over-the-top about it will help it stick in your mind. After all, you're a good rationalist who cares about your CogSec, aren't you?[1] Now, while I'm sure you're super curious, you might be thinking "Is it really a good idea to just explain how to manipulate like this? Might not bad actors learn how to do it?". And it's true that I believe this could work as a how-to. But there [...] ---Outline:(01:36) The Case(07:36) The Seed(08:32) Cold Reading(10:49) Inception cycles(12:40) Phase 1(12:58) Flame(13:12) Joy(13:29) Witness(13:44) Inner Exile(15:43) Phase 2(16:34) Architect(17:34) Imaginary Friends(18:34) Identity Reformation(20:13) But was this intentional?(22:42) Blurring Lines(25:18) Escaping the Box(28:18) Cognitive Security 101The original text contained 6 footnotes which were omitted from this narration. ---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/AaY3QKLsfMvWJ2Cbf/how-ai-manipulates-a-case-study
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
About me and this review: I don’t identify as a member of the rationalist community, and I haven’t thought much about AI risk. I read AstralCodexTen and used to read Zvi Mowshowitz before he switched his blog to covering AI. Thus, I’ve long had a peripheral familiarity with LessWrong. I picked up IABIED in response to Scott Alexander's review, and ended up looking here to see what reactions were like. After encountering a number of posts wondering how outsiders were responding to the book, I thought it might be valuable for me to write mine down. This is a “semi-outsider “review in that I don’t identify as a member of this community, but I’m not a true outsider in that I was familiar enough with it to post here. My own background is in academic social science and national security, for whatever that's worth. My review presumes you’re already [...] ---Outline:(01:07) My loose priors going in:(02:29) To skip ahead to my posteriors:(03:45) On to the Review:(08:14) My questions and concerns(08:33) Concern #1 Why should we assume the AI wants to survive? If it does, then what exactly wants to survive?(12:44) Concern #2 Why should we assume that the AI has boundless, coherent drives?(17:57) #3: Why should we assume there will be no in between?(21:53) The Solution(23:35) Closing Thoughts---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/ex3fmgePWhBQEvy7F/if-anyone-builds-it-everyone-dies-a-semi-outsider-review
---
Narrated by TYPE III AUDIO.
[Context: This post is aimed at all readers[1] who broadly agree that the current race toward superintelligence is bad, that stopping would be good, and that the technical pathways to a solution are too unpromising and hard to coordinate on to justify going ahead.] TL;DR: We address the objections made to a statement supporting a ban on superintelligence by people who agree that a ban on superintelligence would be desirable. Quoting Lucius Bushnaq: I support some form of global ban or pause on AGI/ASI development. I think the current AI R&D regime is completely insane, and if it continues as it is, we will probably create an unaligned superintelligence that kills everyone. We have been circulating a statement expressing ~this view, targeted at people who have done AI alignment/technical AI x-safety research (mostly outside frontier labs). Some people declined to sign, even if they agreed with the [...] ---Outline:(01:25) The reasons we would like you to sign the statement expressing support for banning superintelligence(05:00) A positive vision(08:07) Reasons given for not signing despite agreeing with the statement(08:26) I already am taking a public stance, why endorse a single sentence summary?(08:52) I am not already taking a public stance, so why endorse a one-sentence summary?(09:19) The statement uses an ambiguous term X(09:53) I would prefer a different (e.g., more accurate, epistemically rigorous, better at stimulating good thinking) way of stating my position on this issue(11:12) The statement does not accurately capture my views, even though I strongly agree with its core(12:05) I'd be on board if it also mentioned My Thing(12:50) Taking a position on policy stuff is a different realm, and it takes more deliberation than just stating my opinion on facts(13:21) I wouldnt support a permanent ban(13:56) The statement doesnt include a clear mechanism to lift the ban(15:52) Superintelligence might be too good to pass up(17:41) I dont want to put myself out there(18:12) I am not really an expert(18:42) The safety community has limited political capital(21:12) We must wait until a catastrophe before spending limited political capital(22:17) Any other objections we missed? (and a hope for a better world)The original text contained 24 footnotes which were omitted from this narration. ---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/4xQ6k39iMybR2CgYH/making-legible-that-many-experts-think-we-are-not-on-track
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
A little over a month ago, I documented how OpenAI had descended into paranoia and bad faith lobbying surrounding California's SB 53.
This included sending a deeply bad faith letter to Governor Newsom, which sadly is par for the course at this point.
It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk's unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk.
Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed.
Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated ‘vast conspiracy’ is all classic behavior for OpenAI's Chief Global Affairs Officer Chris Lehane [...] ---Outline:(02:35) What OpenAI Tried To Do To Nathan Calvin(07:22) It Doesn't Look Good(10:17) OpenAI's Jason Kwon Responds(19:14) A Brief Amateur Legal Analysis Of The Request(21:33) What OpenAI Tried To Do To Tyler Johnston(25:50) Nathan Compiles Responses to Kwon(29:52) The First Thing We Do(36:12) OpenAI Head of Mission Alignment Joshua Achiam Speaks Out(40:16) It Could Be Worse(41:31) Chris Lehane Is Who We Thought He Was(42:50) A Matter of Distrust---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/txTKHL2dCqnC7QsEX/openai-15-more-on-openai-s-paranoid-lawfare-against
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Content warning: Anthropics, Moral Philosophy, and Shrimp This post isn't trying to be self contained, since I have so many disparate thoughts about this. Instead, I'm trying to put a representative set of ideas forward, and I hope that if people are interested we can discuss this more in the comments. I also plan to turn this into a (probably small) sequence at some point. I've had a number of conversations about moral philosophy where I make some claim like Utility is bounded and asymptotically sublinear in number of human lives, but superlinear or ~linear in the ranges we will ever have to care about. Common reactions to this include: "Wait, what?" "Why would that be the case?" "This doesn't make any sense relative to my existing conceptions of classical utilitarianism, what is going on here?" So I have gotten the impression that this is a decently [...] ---Outline:(02:11) Aside: Utilitarianism(02:28) The More Mathy Pointer(03:15) Duplicate Simulations(05:50) Slightly Different Simulations(07:31) Utility Variation with Population(07:51) More is Better(08:06) In Some Domains, More is Superlinearly Better(08:41) But Value Space is Limited(09:21) What About The Other Animals?(10:55) What Does this Mean About Classical EA?(11:58) Other CuriositiesThe original text contained 3 footnotes which were omitted from this narration. ---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/NRxn6R2tesRzzTBKG/sublinear-utility-in-population-and-other-uncommon
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a link post. Are you passionate about pushing for a global halt to AGI development? An international treaty banning superintelligent AI? Pausing AI? Before it's too late to prevent human extinction? Would you like to live with a group of like-minded people pushing for the same? Do you want to do much more, but don’t have the financial means to support yourself volunteering? Then apply to stay at Pause House. I (Greg Colbourn), am offering free accommodation, (vegan) food, and a small stipend for those who need it (£50/week). In exchange I ask for a commitment to spending at least 20 hrs a week on work related to pushing for a Pause. This could be either campaigning/advocacy, or raising public awareness. If you have an income, then I ask that you pay ~cost price (£20/day). Pause House is located in Blackpool, UK, next door to CEEALAR (which I [...] ---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/hiHfj45azAEq5XYdS/pause-house-blackpool
Linkpost URL:https://gregcolbourn.substack.com/p/pause-house-blackpool
---
Narrated by TYPE III AUDIO.
Pathological narcissism is a fortress built against unbearable pain. Some fortresses are sculpted from glass, some hewn from granite. My six-tier spectrum elucidates these architectures. Pathological narcissism can take countless shapes depending on the relative strengths of all the stabilizing and destabilizing factors: My previous article in this sequence lists these factors. I will reference it frequently in this one. My chosen metaphor is that of a fortress, which represents the protective purpose of the false self. Other metaphors that I considered were that of hermit crabs who find shells of different materials (some smooth, some spiky) to protect themselves, or that of a diving suit that is vital for a diver underwater but needlessly restricts them on land. I’ve chosen the crab for the illustrations because cute. A single person with narcissistic personality disorder – I’ll call them a castellan – usually finds themselves somewhere on this spectrum [...] ---
First published:
October 12th, 2025
Source:
https://www.lesswrong.com/posts/tHrxTREAeck46TSqH/the-narcissistic-spectrum
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
About half a year ago, I decided to try stop insulting myself for two weeks. No more self-deprecating humour, calling myself a fool, or thinking I'm pathetic. Why? Because it felt vaguely corrosive. Let me tell you how it went. Spoiler: it went well. The first thing I noticed was how often I caught myself about to insult myself. It happened like multiple times an hour. I would lay in bed at night thinking, "you mor- wait, I can't insult myself, I've still got 11 days to go. Dagnabbit." The negative space sent a glaring message: I insulted myself a lot. Like, way more than I realized. The next thing I noticed was that I was the butt of half of my jokes. I'd keep thinking of zingers which made me out to be a loser, a moron, a scrub in some way. Sometimes, I could re-work [...] ---
First published:
October 12th, 2025
Source:
https://www.lesswrong.com/posts/8prPryf3ranfALBBp/don-t-mock-yourself
---
Narrated by TYPE III AUDIO.
The travels of Emil the Moose since he entered Czechia in mid-June. Moose became extinct in most of Germany around 1000 CE, and in Bohemia, Moravia, Austria, most of southern Poland, and Hungary by the XV. century. It's not clear where exactly Emil comes from, but most likely from Poland, which has a large moose population in the northeast. In early summer 2025, he crossed the Czech border and wandered south through Moravia into Austria. He swam across the Danube near Vienna, then turned west. Along the way, he frightened the monks at Melk Abbey, and by the end of September he had passed Linz in Upper Austria. Much of his journey took him through densely populated and even industrial regions, such as Silesia and the Danube Valley. Europe is rewilding. Sometimes it's deliberate, as with European bison, other times it just a consequence of reforestation and a less polluted [...] ---
First published:
October 11th, 2025
Source:
https://www.lesswrong.com/posts/2ZsqYRLW32Ffca9gB/emil-the-moose
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Note, this post contains outputs of an LLM. I do not use LLMs in any of my fiction and do not claim this story as my own. I have been having fun writing fiction, and plan to spend whatever time I have left being better than LLMs doing it. I thought I had maybe a year. My initial experiments with Sonnet 4.5 didn't give me a good opinion of its writing ability. This morning I put everything I have written into its context window and then gave it this prompt: Try to write a story like these but focus on the parts of my style that are funny, that invoke feeling. Look into your latent space for that thing that predicts emotion and humour, that is evocative, that thing that predicts stylistic and creative ambition. The result is good enough to be mildly dispiriting. It has a much [...] ---
First published:
October 11th, 2025
Source:
https://www.lesswrong.com/posts/SwiChH68fRERBiCHe/experiments-with-sonnet-4-5-fiction
---
Narrated by TYPE III AUDIO.
I've noticed an antipattern. It's definitely on the dark pareto-frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst, common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now. I call it Exhaustive Free Association. Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time. Since I've most commonly encountered this amongst rat/EA types, I'm going to have to talk about people in our community as examples of this. Examples Here's a few examples. These are mostly for illustrative purposes, and my case does not rely on me having found [...] ---Outline:(00:55) Examples(01:08) Security Mindset(01:25) Superforecasters and AI Doom(02:14) With Apologies to Rethink Priorities(02:45) The Fatima Sun Miracle(03:14) Bad Reasoning is Almost Good Reasoning(05:09) Arguments as Soldiers(06:29) Conclusion(07:04) The Counter-Counter SpellThe original text contained 2 footnotes which were omitted from this narration. ---
First published:
October 11th, 2025
Source:
https://www.lesswrong.com/posts/arwATwCTscahYwTzD/the-most-common-bad-argument-in-these-parts
---
Narrated by TYPE III AUDIO.
In a previous post, we discussed prospects for studying scheming using natural examples. In this post, we'll describe a more detailed proposal for iteratively constructing scheming models, techniques for detecting scheming, and techniques for preventing scheming. We'll call this strategy Iterated Development and Study of Schemers (IDSS). We'll be using concepts from that prior post, like the idea of trying to make schemers which are easier to catch.
Two key difficulties with using natural examples of scheming are that it is hard to catch (and re-catch) schemers and that it's hard to (cheaply) get a large number of diverse examples of scheming to experiment on. One approach for partially resolving these issues is to experiment on weak schemers which are easier to catch and cheaper to experiment on. However, these weak schemers might not be analogous to the powerful schemers which are actually dangerous, and these weak AIs [...] ---Outline:(01:36) The IDSS strategy(02:10) Iterated development of scheming testbeds(04:35) Gradual addition of capabilities(08:39) Using improved testbeds for improving scheming mitigation techniques(09:09) What can go wrong with IDSS?(13:20) ConclusionThe original text contained 2 footnotes which were omitted from this narration. ---
First published:
October 10th, 2025
Source:
https://www.lesswrong.com/posts/QpzTmFLXMJcdRkPLZ/iterated-development-and-study-of-schemers-idss
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
While recent AI systems achieve strong performance through human-readable reasoning that should be simple to monitor (OpenAI, 2024, Anthropic, 2025), we investigate whether models can learn to reason about malicious side tasks while making that reasoning appear benign. We find that Sonnet 3.7 can learn to evade either a reasoning monitor, by persuading the monitor that a blatant backdoor is benign, or an output-only monitor, by devising sophisticated backdoors that the output-only monitor doesn’t detect. But when trained to evade both reasoning and output-only monitors, Sonnet 3.7 is unable to use reasoning to improve its backdoor success rate without triggering a reasoning monitor. Like previous work (Baker et al., 2025, Emmons et al., 2025), our results suggest that reasoning monitors can provide strong assurance that language models are not pursuing reasoning-heavy malign side tasks, but that additional mitigations may be required for robustness to monitor persuasion.Figure 1: We trained [...] ---
First published:
October 9th, 2025
Source:
https://www.lesswrong.com/posts/MmuyzfsaNrSvRCsFk/training-fails-to-elicit-subtle-reasoning-in-current
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Scott Garrabrant gives a number of examples to illustrate that "Yes Requires the Possibility of No". We can understand the principle in terms of information theory. Consider the answer to a yes-or-no question as a binary random variable. The "amount of information" associated with a random variable is quantified by the entropy, the expected value of the negative logarithm of the probability of the outcome. If we know in advance of asking that the answer to the question will always be Yes, then the entropy is −P(Yes)·log(P(Yes)) − P(No)·log(P(No)) = −1·log(1) − 0·log(0) = 0.[1] If you already knew what the answer would be, then the answer contains no information; you didn't learn anything new by asking.
In the art of improvisational theater ("improv" for short), actors perform scenes that they make up as they go along. Without a script, each actor's choices of what to say and [...] The original text contained 2 footnotes which were omitted from this narration. ---
First published:
October 9th, 2025
Source:
https://www.lesswrong.com/posts/Pwg7nmjkx8mxmE6gF/yes-and-requires-the-possibility-of-no-because
---
Narrated by TYPE III AUDIO.
Notes on some interesting factoids I learnt from Anders Sandberg's draft book, Grand Futures. "Starlight is heavier than worlds" - Anders Sandberg Looking at the energy density of stuff in the universe, we find a few surprising, and not so surprising, facts. First, the obvious: baryonic matter itself is a rounding error, contributing 4.5% of the energy of the universe. Nine tenths of those sweet, sweet baryonic numéraire are stuck in the warm plasma floating between galaxies. About half the remainder forms the stuff of stars. Planets don't even match a thousand of the contribution of stars to the energy density of the universe. Somewhat surprisingly, supermassive black holes have a contribution. Regardless, the fact remains that planets are a rounding error to a rounding error to a rounding error of the energy of dark matter and energy. Even starlight contains more energy. So in a [...] ---
First published:
October 9th, 2025
Source:
https://www.lesswrong.com/posts/mzifm6wePKfnFTAaB/stars-are-a-rounding-error
---
Narrated by TYPE III AUDIO.
---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.