LessWrong posts by zvi


Author: zvi


Description

Audio narrations of LessWrong posts by zvi
433 Episodes
Consider this largely a follow-up to Friday's post about a statement aimed at creating common knowledge around it being unwise to build superintelligence any time soon. Mainly, there was a great question asked, so I took a few-hour shot at writing out my answer. I then close with a few other follow-ups on issues related to the statement.

A Great Question To Disentangle

There are some confusing wires potentially crossed here, but the intent is great.

Scott Alexander: I think removing a 10% chance of humanity going permanently extinct is worth another 25-50 years of having to deal with the normal human problems the normal way.

Sriram Krishnan: Scott what are verifiable empirical things (model capabilities / incidents / etc) that would make you shift that probability up or down over next 18 months?

I went through three steps interpreting [...]

Outline:
(00:30) A Great Question To Disentangle
(02:20) Scott Alexander Gives a Fast Answer
(04:53) Question 1: What events would most shift your p(doom | ASI) in the next 18 months?
(13:01) Question 1a: What would get this risk down to acceptable levels?
(14:48) Question 2: What would shift the amount that stopping us from creating superintelligence for a potentially extended period would reduce p(doom)?
(17:01) Question 3: What would shift your timelines to ASI (or to sufficiently advanced AI, or 'to crazy')?
(20:01) Bonus Question 1: Why Do We Keep Having To Point Out That Building Superintelligence At The First Possible Moment Is Not A Good Idea?
(22:44) Bonus Question 2: What Would a Treaty On Prevention of Artificial Superintelligence Look Like?

First published: October 27th, 2025
Source: https://www.lesswrong.com/posts/mXtYM3yTzdsnFq3MA/asking-some-of-the-right-questions

Narrated by TYPE III AUDIO.

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where 'it' is superintelligence, and 'dies' is that probably everyone on the planet literally dies. We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon. Thus, the Statement on Superintelligence from FLI, which I have signed.

Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties [...]

Outline:
(02:02) A Brief History Of Prior Statements
(03:51) This Third Statement
(05:08) Who Signed It
(07:27) Pushback Against the Statement
(09:05) Responses To The Pushback
(12:32) Avoid Negative Polarization But Speak The Truth As You See It

First published: October 24th, 2025
Source: https://www.lesswrong.com/posts/QzY6ucxy8Aki2wJtF/new-statement-calls-for-not-building-superintelligence-for

Narrated by TYPE III AUDIO.
The big release this week was OpenAI giving us a new browser, called Atlas. The idea of Atlas is that it is Chrome, except with ChatGPT integrated throughout to let you enter agent mode and chat with web pages and edit or autocomplete text, and that it will watch everything you do and take notes to be more useful to you later. From the consumer standpoint, does the above sound like a good trade to you? A safe place to put your trust? How about if it also involves (at least for now) giving up many existing Chrome features? From OpenAI's perspective, a lot of that could have been done via a Chrome extension, but by making a browser some things get easier, and more importantly OpenAI gets to go after browser market share and avoid dependence on Google. I'm going to stick with using Claude [...]

Outline:
(02:01) Language Models Offer Mundane Utility
(03:07) Language Models Don't Offer Mundane Utility
(04:52) Huh, Upgrades
(05:24) On Your Marks
(10:15) Language Barrier
(12:50) From ChatGPT, a Chinese answer to the question about which qualities children should have:
(13:30) ChatGPT in English on the same question:
(15:17) Choose Your Fighter
(17:19) Get My Agent On The Line
(18:54) Fun With Media Generation
(23:09) Copyright Confrontation
(25:19) You Drive Me Crazy
(35:31) They Took Our Jobs
(44:06) A Young Lady's Illustrated Primer
(44:42) Get Involved
(45:33) Introducing
(47:15) In Other AI News
(48:30) Show Me the Money
(51:03) So You've Decided To Become Evil
(53:18) Quiet Speculations
(56:11) People Really Do Not Like AI
(57:18) The Quest for Sane Regulations
(01:00:55) Alex Bores Launches Campaign For Congress
(01:03:33) Chip City
(01:10:17) The Week in Audio
(01:13:00) Rhetorical Innovation
(01:16:17) Don't Take The Bait
(01:27:29) Do You Feel In Charge?
(01:29:30) Tis The Season Of Evil
(01:34:45) People Are Worried About AI Killing Everyone
(01:36:00) The Lighter Side

First published: October 23rd, 2025
Source: https://www.lesswrong.com/posts/qC3M3x2FwiG2Qm7Jj/ai-139-the-overreach-machines

Narrated by TYPE III AUDIO.
Some podcasts are self-recommending on the 'yep, I'm going to be breaking this one down' level. This was very clearly one of those. So here we go. As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. If I am quoting directly I use quote marks, otherwise assume paraphrases. Rather than worry about timestamps, I'll use YouTube's section titles, as it's not that hard to find things via the transcript as needed. This was a fun one in many places, interesting throughout, frustrating in similar places to where other recent Dwarkesh interviews have been frustrating. It gave me a lot of ideas, some of which might even be good.

AGI Is Still a Decade Away

Andrej calls this the 'decade of agents' contrary to (among [...]

Outline:
(00:58) AGI Is Still a Decade Away
(12:13) LLM Cognitive Deficits
(15:31) RL Is Terrible
(17:24) How Do Humans Learn?
(24:17) AGI Will Blend Into 2% GDP Growth
(29:28) ASI
(37:38) Evolution of Intelligence and Culture
(38:57) Why Self Driving Took So Long
(42:06) Future Of Education
(48:27) Reactions

First published: October 21st, 2025
Source: https://www.lesswrong.com/posts/ZBoJaebKFEzxuhNGZ/on-dwarkesh-patel-s-podcast-with-andrej-karpathy

Narrated by TYPE III AUDIO.
We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an 'AI bubble.' Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, 'if the music is playing you have to keep dancing.' So even with lots of people newly thinking there is a bubble, the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment. So, what's the case we're in a bubble? What's the case we're not?

My Answer In Brief

People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what [...]

Outline:
(01:17) My Answer In Brief
(02:18) Time Sensitive Point of Order: Alex Bores Launches Campaign For Congress, If You Care About AI Existential Risk Consider Donating
(04:09) So They're Saying There's a Bubble
(05:04) AI Is Atlas And People Worry It Might Shrug
(05:35) Can A Bubble Be Common Knowledge?
(08:33) Steamrollers, Picks and Shovels
(09:27) What Can Go Up Must Sometimes Go Down
(11:36) What Can Go Up Quite A Lot Can Go Even More Down
(13:17) Step Two Remains Important
(15:00) Oops We Might Do It Again
(15:49) Derek Thompson Breaks Down The Arguments
(17:47) AI Revenues Are Probably Going To Go Up A Lot
(20:19) True Costs That Matter Are Absolute Not Relative
(21:05) We Are Spending a Lot But Also Not a Lot
(22:46) Valuations Are High But Not Super High
(23:59) Official GPU Depreciation Schedules Seem Pretty Reasonable To Me
(29:14) The Bubble Case Seems Weak
(30:53) What It Would Mean If Prices Did Go Down

First published: October 20th, 2025
Source: https://www.lesswrong.com/posts/rkiBknhWh3D83Kdr3/bubble-bubble-toil-and-trouble

Narrated by TYPE III AUDIO.
As usual when things split, Part 1 is mostly about capabilities, and Part 2 is mostly about a mix of policy and alignment.

Table of Contents

The Quest for Sane Regulations. The GAIN Act and some state bills.
People Really Dislike AI. They would support radical, ill-advised steps.
Chip City. Are we taking care of business?
The Week in Audio. Hinton talks to Jon Stewart, Klein to Yudkowsky.
Rhetorical Innovation. How to lose the moral high ground.
Water Water Everywhere. AI has many big issues. Water isn't one of them.
Read Jack Clark's Speech From The Curve. It was a sincere, excellent speech.
How One Other Person Responded To This Thoughtful Essay. Some aim to divide.
A Better Way To Disagree. Others aim to work together and make things better.
Voice Versus Exit. The age old [...]

Outline:
(00:20) The Quest for Sane Regulations
(05:56) People Really Dislike AI
(12:22) Chip City
(13:12) The Week in Audio
(13:24) Rhetorical Innovation
(20:53) Water Water Everywhere
(23:57) Read Jack Clark's Speech From The Curve
(28:26) How One Other Person Responded To This Thoughtful Essay
(38:43) A Better Way To Disagree
(59:39) Voice Versus Exit
(01:03:51) The Dose Makes The Poison
(01:06:44) Aligning a Smarter Than Human Intelligence is Difficult
(01:10:08) You Get What You Actually Trained For
(01:15:54) Messages From Janusworld
(01:18:37) People Are Worried About AI Killing Everyone
(01:22:40) The Lighter Side

The original text contained 1 footnote which was omitted from this narration.

First published: October 17th, 2025
Source: https://www.lesswrong.com/posts/gCuJ5DabY9oLNDs9B/ai-138-part-2-watch-out-for-documents

Narrated by TYPE III AUDIO.
Well, one person says 'demand,' another says 'give the thumbs up to' or 'welcome our new overlords.' Why quibble? Surely we're all making way too big a deal out of this idea of OpenAI 'treating adults like adults.' Everything will be fine. Right? Why not focus on all the other cool stuff happening? Claude Haiku 4.5 and Veo 3.1? Walmart joining ChatGPT instant checkout? Hey, come back. Alas, the mass of things once again got out of hand this week, so we're splitting the update into two parts.

Table of Contents

Earlier This Week. OpenAI does paranoid lawfare, China escalates bigly.
Language Models Offer Mundane Utility. Help do your taxes, of course.
Language Models Don't Offer Mundane Utility. Beware the false positive.
Huh, Upgrades. Claude Haiku 4.5, Walmart on ChatGPT instant checkout.
We Patched The Torment Nexus, Turn It [...]

Outline:
(00:51) Earlier This Week
(02:25) Language Models Offer Mundane Utility
(05:44) Language Models Don't Offer Mundane Utility
(11:47) Huh, Upgrades
(14:55) We Patched The Torment Nexus, Turn It Back On
(27:49) On Your Marks
(34:33) Choose Your Fighter
(35:50) Deepfaketown and Botpocalypse Soon
(36:15) Fun With Media Generation
(40:32) Copyright Confrontation
(41:04) AIs Are Often Absurd Sycophants
(44:32) They Took Our Jobs
(54:04) Find Out If You Are Worried About AI Killing Everyone
(55:42) A Young Lady's Illustrated Primer
(01:02:59) AI Diffusion Prospects
(01:11:13) The Art of the Jailbreak
(01:14:38) Get Involved
(01:14:57) Introducing
(01:16:48) In Other AI News
(01:18:22) Show Me the Money
(01:20:45) Quiet Speculations

First published: October 16th, 2025
Source: https://www.lesswrong.com/posts/n4xxSKwwP3SYaRqBS/ai-138-part-1-the-people-demand-erotic-sycophants

Narrated by TYPE III AUDIO.
It is increasingly strange compiling the monthly roundup, because life comes at us fast. I look at various things I've written, and it feels like they are from a different time. Remember that whole debate over free speech? Yeah, that was a few weeks ago. Many such cases. Gives one a chance to reflect. In any case, here we go.

Table of Contents

Don't Provide Bad Training Data.
Maybe Don't Say Maybe.
Throwing Good Parties Means Throwing Parties.
Air Travel Gets Worse.
Bad News.
You Do Not Need To Constantly Acknowledge That There Is Bad News.
Prediction Market Madness.
No Reply Necessary.
While I Cannot Condone This.
Antisocial Media.
Government Working.
Tylenol Does Not Cause Autism.
Jones Act Watch.
For Science!
Work Smart And Hard.
So Emotional. [...]

Outline:
(00:37) Don't Provide Bad Training Data
(02:25) Maybe Don't Say Maybe
(03:00) Throwing Good Parties Means Throwing Parties
(05:06) Air Travel Gets Worse
(06:16) Bad News
(09:03) You Do Not Need To Constantly Acknowledge That There Is Bad News
(12:52) Prediction Market Madness
(16:00) No Reply Necessary
(19:49) While I Cannot Condone This
(23:42) Antisocial Media
(29:22) Government Working
(47:12) Tylenol Does Not Cause Autism
(51:12) Jones Act Watch
(52:13) For Science!
(54:29) Work Smart And Hard
(55:34) So Emotional
(59:17) Where Credit Is Due
(01:01:30) Good News, Everyone
(01:03:37) I Love New York
(01:08:14) For Your Entertainment
(01:17:14) Gamers Gonna Game Game Game Game Game
(01:19:53) I Was Promised Flying Self-Driving Cars
(01:24:33) Sports Go Sports
(01:29:06) Opportunity Knocks

First published: October 15th, 2025
Source: https://www.lesswrong.com/posts/XLbwizZeuhoHpdAvE/monthly-roundup-35-october-2025

Narrated by TYPE III AUDIO.
What is going on with, and what should we do about, the Chinese declaring extraterritorial export controls on rare earth metals, which threaten to go way beyond semiconductors and also beyond rare earths into things like lithium and also antitrust investigations? China also took other actions well beyond only rare earths, including going after Qualcomm, lithium and everything else that seemed like it might hurt, as if they are confident that a cornered Trump will fold and they believe they have escalation dominance and are willing to use it. China has now issued reassurances that it will allow all civilian uses of rare earths and not to worry, but it seems obvious that America cannot accept a Chinese declaration of extraterritorial control over entire world supply chains, even if China swears it will only narrowly use that power. In response, Trump has threatened massive tariffs and cancelled [...]

Outline:
(01:20) Was This Provoked?
(02:43) What Is China Doing?
(03:37) How Is America Responding?
(05:58) How Is China Responding To America's Response?
(07:14) What To Make Of China's Attempted Reassurances?
(08:15) How Should We Respond From Here?
(09:37) It Looks Like China Overplayed Its Hand
(10:36) We Need To Mitigate China's Leverage Across The Board
(13:11) What About The Chip Export Controls?
(14:01) This May Be A Sign Of Weakness
(15:06) What Next?

First published: October 14th, 2025
Source: https://www.lesswrong.com/posts/rTP5oZ3CDJR429Dw7/trade-escalation-supply-chain-vulnerabilities-and-rare-earth

Narrated by TYPE III AUDIO.
A little over a month ago, I documented how OpenAI had descended into paranoia and bad faith lobbying surrounding California's SB 53. This included sending a deeply bad faith letter to Governor Newsom, which sadly is par for the course at this point. It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk's unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk. Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed. Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated 'vast conspiracy' is all classic behavior for OpenAI's Chief Global Affairs Officer Chris Lehane [...]

Outline:
(02:35) What OpenAI Tried To Do To Nathan Calvin
(07:22) It Doesn't Look Good
(10:17) OpenAI's Jason Kwon Responds
(19:14) A Brief Amateur Legal Analysis Of The Request
(21:33) What OpenAI Tried To Do To Tyler Johnston
(25:50) Nathan Compiles Responses to Kwon
(29:52) The First Thing We Do
(36:12) OpenAI Head of Mission Alignment Joshua Achiam Speaks Out
(40:16) It Could Be Worse
(41:31) Chris Lehane Is Who We Thought He Was
(42:50) A Matter of Distrust

First published: October 13th, 2025
Source: https://www.lesswrong.com/posts/txTKHL2dCqnC7QsEX/openai-15-more-on-openai-s-paranoid-lawfare-against

Narrated by TYPE III AUDIO.
The 2025 State of AI Report is out, with lots of fun slides and a full video presentation. They've been consistently solid, providing a kind of outside general view. I'm skipping over stuff my regular readers already know that doesn't bear repeating.

Qwen The Fine Tune King For Now

Nathan Benaich: Once a "Llama rip-off," @Alibaba_Qwen now powers 40% of all new fine-tunes on @huggingface. China's open-weights ecosystem has overtaken Meta's, with Llama riding off into the sunset…for now.

I highlight this because the 'for now' is important to understand, and to note that it's Qwen not DeepSeek. As in, models come and models go, and especially in the open model world people will switch on you on a dime. Stop worrying about lock-ins and mystical 'tech stacks.'

Rise Of The Machines

Robots now reason too. "Chain-of-Action" planning brings structured thought to the [...]

Outline:
(00:39) Qwen The Fine Tune King For Now
(01:19) Rise Of The Machines
(01:40) Model Context Protocol Wins Out
(02:06) Benchmarks Are Increasingly Not So Useful
(02:56) The DeepSeek Moment Was An Overreaction
(03:40) Capitalism Is Hard To Pin Down
(04:30) The Odds Are Against Us And The Situation Is Grim
(06:35) They Grade Last Year's Predictions
(09:35) Their Predictions for 2026

First published: October 10th, 2025
Source: https://www.lesswrong.com/posts/AvKjYYYHC93JzuFCM/2025-state-of-ai-report-and-predictions

Narrated by TYPE III AUDIO.
OpenAI is making deals and shipping products. They locked in their $500 billion valuation and then got 10% of AMD in exchange for buying a ton of chips. They gave us the ability to 'chat with apps' inside of ChatGPT. They walked back their insane Sora copyright and account deletion policies and are buying $50 million in marketing. They've really got a lot going on right now. Of course, everyone else also has a lot going on right now. It's AI. I spent the last weekend at a great AI conference at Lighthaven called The Curve. The other big news that came out this morning is that China is asserting sweeping extraterritorial control over rare earth metals. This is likely China's biggest card short of full trade war or worse, and it is being played in a hugely escalatory way that America obviously can't accept. Presumably this [...]

Outline:
(01:33) Language Models Offer Mundane Utility
(03:39) Language Models Don't Offer Mundane Utility
(05:13) Huh, Upgrades
(08:21) Chat With Apps
(11:32) On Your Marks
(12:51) Choose Your Fighter
(13:47) Fun With Media Generation
(24:03) Deepfaketown and Botpocalypse Soon
(27:46) You Drive Me Crazy
(31:33) They Took Our Jobs
(34:35) The Art of the Jailbreak
(34:55) Get Involved
(38:31) Introducing
(42:18) In Other AI News
(43:38) Get To Work
(45:45) Show Me the Money
(50:24) Quiet Speculations
(52:15) The Quest for Sane Regulations
(01:02:31) Chip City
(01:06:53) The Race to Maximize Rope Market Share
(01:08:49) The Week in Audio
(01:09:28) Rhetorical Innovation
(01:10:13) Paranoia Paranoia Everybody's Coming To Test Me
(01:13:10) Aligning a Smarter Than Human Intelligence is Difficult
(01:14:12) Free Petri Dish
(01:15:45) Unhobbling The Unhobbling Department
(01:17:46) Serious People Are Worried About Synthetic Bio Risks
(01:19:09) Messages From Janusworld
(01:28:54) People Are Worried About AI Killing Everyone
(01:32:14) Other People Are Excited About AI Killing Everyone
(01:40:48) So You've Decided To Become Evil
(01:45:25) The Lighter Side

First published: October 9th, 2025
Source: https://www.lesswrong.com/posts/YfvqLidW5BGpiFrNF/ai-137-an-openai-app-for-that

Narrated by TYPE III AUDIO.
It's been about a year since the last one of these. Given the long cycle, I have done my best to check for changes but things may have changed on any given topic by the time you read this.

The NEPA Problem

NEPA is a constant thorn in the side of anyone attempting to do anything. A certain kind of person responds with: "Good." That kind of person does not want humans to do physical things in the world. They like the world as it is, or as it used to be. They do not want humans messing with it further. They often also think humans are bad, and should stop existing entirely. Or believe humans deserve to suffer or do penance. Or do not trust people to make good decisions and safeguard what matters. To them [...]

Outline:
(00:21) The NEPA Problem
(01:53) The Full NEPA Solution
(02:43) The Other Full NEPA Solution
(03:59) Meanwhile
(06:06) Yay Nuclear Power
(14:22) Yay Solar and Wind Power
(18:11) Yay Grid Storage
(18:42) Yay Transmission Lines
(19:40) American Energy is Cheap
(20:48) Geoengineering
(22:29) NEPA Standard Procedure is a Doom Loop
(25:34) Categorical Exemptions
(26:13) Teachers Against Transportation
(32:41) Also CEQA
(36:36) It Can Always Be Worse
(39:35) How We Got Into This Mess
(41:00) Categorical Exclusions
(47:23) A Green Bargain
(54:18) Costs Versus Benefits
(55:27) A Call for Proposals
(56:18) The Men of Two Studies
(59:55) A Modest Proposal

First published: October 8th, 2025
Source: https://www.lesswrong.com/posts/HqAJyxhdJcEfhH2nW/nepa-permitting-and-energy-roundup-2

Narrated by TYPE III AUDIO.
The odds are against you and the situation is grim. Your scrappy band is the only one facing down a growing wave of powerful inhuman entities with alien minds and mysterious goals. The government is denying that anything could possibly be happening and actively working to shut down the few people trying things that might help. Your thoughts, no matter what you think could not harm you, inevitably choose the form of the destructor. You knew it was going to get bad, but this is so much worse. You have an idea. You'll cross the streams. Because there is a very small chance that you will survive. You're in love with this plan. You're excited to be a part of it. Welcome to the always excellent Lighthaven venue for The Curve, Season 2, a conference I had the pleasure to attend this past weekend. Where [...]

Outline:
(02:53) Overall Impressions
(03:36) The Inside View
(08:16) Track Trouble
(15:42) Let's Talk
(15:45) Jagged Alliance
(18:39) More Teachers' Dirty Looks
(21:16) The View Inside The White House
(22:33) Assume The Future AIs Be Scheming
(23:29) Interlude
(23:53) Eyes On The Mission
(24:44) Putting The Code Into Practice
(25:25) Missing It
(25:54) Clark Talks About The Frontier
(27:04) Other Perspectives
(27:08) Deepfates
(32:13) Anton
(32:50) Jack Clark
(33:43) Roon
(34:43) Nathan Lambert
(37:49) The Food

First published: October 7th, 2025
Source: https://www.lesswrong.com/posts/A9fxfCfEAoouJshhZ/bending-the-curve

Narrated by TYPE III AUDIO.
Some amazing things are going on, not all of which involve mRNA, although please please those of you with the ability to do so, do your part to ensure that stays funded, either via investment or grants. As for mRNA, please do what you can to help save it, so we can keep getting more headlines like 'a new cancer vaccine just wiped out tumors' even if it is sufficiently early that the sentence this time inevitably concludes 'IN MICE.'

Heart Disease

Wait, what, you're saying we might soon 'mostly defeat' heart disease?

Cremieux: It's hard to oversell how big a discovery this is. Heart disease is America's #1 cause of death. With a combination of two drugs administered once every six months, it might be mostly defeated. Just think about that how big this is! You will know your great-grandkids! [...]

Outline:
(00:34) Heart Disease
(02:16) Alcohol
(05:21) Legal Reforms
(06:24) Embryo Selection and Gene Editing
(15:02) GLP-1s Work
(18:57) There Is No Catch Other Than Availability And Price
(22:54) The Societal Impact of GLP-1s
(25:40) I'm Not a Yo-Yo
(26:24) Back Problems Are An Underrated Reason To Lose Weight
(27:18) No One Knows Much About Nutrition
(29:23) Ketogenic Diets Can Go Very Wrong
(36:01) Weight Loss The Hard Way
(36:36) We Ask Far Too Much Of Satiety
(39:28) Supplements
(41:07) A Cure For Some Depression
(42:16) Assisted Suicide
(43:17) Bioethics Delenda Est
(45:05) Ignorance Can Be Bliss
(45:48) Living Forever In Your Apartment
(46:51) In Other Health News

First published: October 6th, 2025
Source: https://www.lesswrong.com/posts/DqK7TaH2drLxGiKcb/medical-roundup-5

Narrated by TYPE III AUDIO.
OpenAI gave us two very different Sora releases. Here is the official announcement. The part where they gave us a new and improved video generator? Great, love it. The part where they gave us a new social network dedicated purely to short form AI videos? Not great, Bob. Don't be evil. OpenAI is claiming they are making their social network with an endless scroll of 10-second AI videos the Actively Good, pro-human version of The Big Bright Screen Slop Machine, that helps you achieve your goals and can be easily customized and favors connection and so on. I am deeply skeptical. They also took a bold copyright stance, with that stance being, well, not quite 'f*** you,' but kind of close? You are welcome to start flagging individual videos. Or you can complain to them more generally about your characters and they say they can [...]

Outline:
(01:42) Act 1: Sora 2, The Producer of Ten Second Videos
(03:08) Big Talk
(06:00) Failures of Physics
(06:45) Fun With Media Generation
(09:19) Act 2: Copyright Confrontation
(09:25) Imitation Learning
(12:50) OpenAI In Copyright Infringement
(14:25) OpenAI Gives Copyright Holders The Finger
(22:36) State Of The Copyright Game
(23:44) Not Safe For Pliny
(25:46) Act 3: The Big Bright Screen Slop Machine
(25:58) The Vibes Are Off
(27:30) I Have An Idea, Let's Moral Panic
(33:08) OpenAI's Own Live Look At Their New App
(34:11) When Everyone Is Creative No One Will Be
(36:51) Nobody Wants This
(39:25) Okay Maybe Somebody Wants This?
(40:22) Sora Sora What's It Good For?
(42:31) The Vibes Are Even Worse
(43:35) No I Mean How Is It Good For OpenAI?
(46:15) Don't Worry It's The Good Version
(49:03) Optimize For Long Term User Satisfaction
(50:59) Encourage Users To Control Their Feed
(56:20) Prioritize Creation
(57:09) Help Users Achieve Their Long Term Goals
(58:10) Prioritize Connection
(58:52) Alignment Test
(59:27) One Strike
(01:00:50) Don't Be Evil
(01:05:29) No Escape

First published: October 3rd, 2025
Source: https://www.lesswrong.com/posts/DKXa42nu2SDnWWmeW/sora-and-the-big-bright-screen-slop-machine

Narrated by TYPE III AUDIO.
The big headline this week was the song, which was the release of Claude Sonnet 4.5. I covered this in two parts, first the System Card and Alignment, and then a second post on capabilities. It is a very good model, likely the current best model for most coding tasks, most agentic and computer use tasks, and quick or back-and-forth chat conversations. GPT-5 still has a role to play as well. There was also the dance, also known as Sora, both the new and improved 10-second AI video generator Sora and also the new OpenAI social network Sora. I will be covering that tomorrow. The video generator itself seems amazingly great. The social network sounds like a dystopian nightmare and I like to think Nobody Wants This, although I do not yet have access nor am I a typical customer of such products. The copyright decisions being [...]

Outline:
(02:52) Language Models Offer Mundane Utility
(04:24) Language Models Don't Offer Mundane Utility
(06:53) Huh, Upgrades
(09:46) On Your Marks
(18:02) Choose Your Fighter
(22:36) Copyright Confrontation
(25:35) Fun With Media Generation
(27:02) Deepfaketown and Botpocalypse Soon
(34:06) You Drive Me Crazy
(35:12) Parental Controls
(35:59) They Took Our Jobs
(42:23) The Art of the Jailbreak
(43:25) Introducing
(49:16) In Other AI News
(50:49) Show Me the Money
(55:19) Quiet Speculations
(01:03:15) The Quest for Sane Regulations
(01:05:11) Chip City
(01:07:15) The Week in Audio
(01:07:55) If Anyone Builds It, Everyone Dies
(01:12:55) Rhetorical Innovation
(01:16:36) Messages From Janusworld
(01:22:54) Aligning a Smarter Than Human Intelligence is Difficult
(01:23:40) Other People Are Not As Worried About AI Killing Everyone
(01:26:18) The Lighter Side

First published: October 2nd, 2025
Source: https://www.lesswrong.com/posts/QZE2Hztfvk7xKrzBk/ai-136-a-song-and-dance

Narrated by TYPE III AUDIO.
A few weeks ago, Anthropic announced Claude Opus 4.1 and promised larger announcements within a few weeks. Claude Sonnet 4.5 is the larger announcement. Yesterday I covered the model card and related alignment concerns. Today's post covers the capabilities side. We don't currently have a new Opus, but Mike Krieger confirmed one is being worked on for release later this year. For Opus 4.5, my request is to give us a second version that gets minimal or no RL, isn't great at coding, doesn't use tools well except web search, doesn't work as an agent or for computer use and so on, and if you ask it for those things it suggests you hand your task off to its technical friend or does so on your behalf. I do my best to include all substantive reactions I've seen, positive and negative, because right after model [...]

Outline:
(01:14) Big Talk
(02:53) The Big Takeaways
(04:55) On Your Marks
(09:25) Huh, Upgrades
(13:08) The System Prompt
(20:31) Positive Reactions Curated By Anthropic
(23:13) Other Systematic Positive Reactions
(27:24) Anecdotal Positive Reactions
(32:02) Anecdotal Negative Reactions
(40:57) Claude Enters Its Non-Sycophantic Era
(42:28) So Emotional
(48:25) Early Days

First published: October 1st, 2025
Source: https://www.lesswrong.com/posts/spQh5JfWXqTE5x5Wi/claude-sonnet-4-5-is-a-very-good-model

Narrated by TYPE III AUDIO.
Claude Sonnet 4.5 was released yesterday. Anthropic credibly describes it as the best coding, agentic and computer use model in the world. At least while I learn more, I am defaulting to it as my new primary model for queries short of GPT-5-Pro level.

I'll cover the system card and alignment concerns first, then cover capabilities and reactions tomorrow once everyone has had another day to play with the new model.

It was great to recently see the collaboration between OpenAI and Anthropic where they evaluated each other's models. I would love to see this incorporated into model cards going forward, where GPT-5 was included in Anthropic's system cards as a comparison point, and Claude was included in OpenAI's.

Basic Alignment Facts About Sonnet 4.5

Anthropic: Overall, we find that Claude Sonnet 4.5 has a substantially improved safety profile compared to previous Claude models. [...]

---

Outline:
- (01:36) Basic Alignment Facts About Sonnet 4.5
- (03:54) 2.1: Single Turn Tests and 2.2: Ambiguous Context Evaluations
- (05:01) 2.3 and 2.4: Multi-Turn Testing
- (07:00) 2.5: Bias
- (08:56) 3: Honesty
- (10:26) 4: Agentic Safety
- (10:41) 4.1: Malicious Agentic Coding
- (13:01) 4.2: Prompt Injections Within Agentic Systems
- (15:05) 5: Cyber Capabilities
- (17:35) 5.3: Responsible Scaling Policy (RSP) Cyber Tests
- (22:15) 6: Reward Hacking
- (26:47) 7: Alignment
- (28:11) Situational Awareness
- (33:38) Test Design
- (36:32) Evaluation Awareness
- (42:57) 7.4: Evidence From Training And Early Use
- (43:55) 7.5: Risk Area Discussions
- (45:26) It's Sabotage
- (50:48) Interpretability Investigations
- (58:35) 8: Model Welfare Assessment
- (58:54) 9: RSP (Responsible Scaling Policy) Evaluations
- (59:51) Keep Sonnet Safe

---

First published: September 30th, 2025
Source: https://www.lesswrong.com/posts/4yn8B8p2YiouxLABy/claude-sonnet-4-5-system-card-and-alignment

---

Narrated by TYPE III AUDIO.
This seems like a good opportunity to do some of my classic detailed podcast coverage. The conventions are:

- This is not complete; points I did not find of note are skipped.
- The main part of each point is descriptive of what is said, by default paraphrased. For direct quotes I will use quote marks; by default this is Sutton.
- Nested statements are my own commentary.
- Timestamps are approximate and from his hosted copy, not the YouTube version. In this case I didn't bother because the section divisions in the transcript should make this very easy to follow without them.
- Full transcript of the episode is here if you want to verify exactly what was said.

Well, that was the plan. This turned largely into me quoting Sutton and then expressing my mind boggling. A lot of what was interesting [...]

---

Outline:
- (01:21) Sutton Says LLMs Are Not Intelligent And Don't Do Anything
- (13:51) Humans Do Imitation Learning
- (19:35) The Experimental Paradigm
- (23:45) Current Architectures Generalize Poorly Out Of Distribution
- (28:36) Surprises In The AI Field
- (30:01) Will The Bitter Lesson Apply After AGI?
- (34:18) Succession To AI

---

First published: September 29th, 2025
Source: https://www.lesswrong.com/posts/fpcRpBKBZavumySoe/on-dwarkesh-patel-s-podcast-with-richard-sutton

---

Narrated by TYPE III AUDIO.