AI Article Readings
Author: Readings of great articles in AI voices
© Tom A
370 Episodes
A short story by Tomás Bjartur.

https://open.substack.com/pub/tomasbjartur/p/the-elect?utm_campaign=post-expanded-share&utm_medium=web

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit askwhocastsai.substack.com/subscribe
In this post, Dean W. Ball explores the gradual nature of life and death, drawing a poignant parallel between the passing of his father and the ongoing decline of the American republic. Using a recent policy skirmish between the AI firm Anthropic and the U.S. Department of War over the military deployment of the Claude AI system as a focal point, he examines the shifting dynamics of government power and private enterprise. Ultimately, he invites readers to look beyond traditional partisan divides and carefully consider how the control of frontier AI will shape the future of human liberty.

* 00:00 - Introduction
* 00:05 - One
* 02:22 - Two
* 04:52 - Three
* 06:52 - Four
* 18:19 - Five

https://open.substack.com/pub/hyperdimensional/p/clawed?utm_campaign=post-expanded-share&utm_medium=web
In this post, Scott Alexander examines the legal and contractual implications of the Department of War's "all lawful use" demand for AI systems, breaking down what US law actually permits regarding mass domestic surveillance and autonomous weapons, and why the phrase "lawful use" provides far less protection than most people assume.

* 00:00 - Introduction
* 02:42 - Mass domestic surveillance: more than you wanted to know
* 08:59 - Autonomous weapons: more than you wanted to know
* 13:21 - Comments on OpenAI’s FAQ
* 17:51 - Questions that you should be asking

https://open.substack.com/pub/astralcodexten/p/all-lawful-use-much-more-than-you?utm_campaign=post-expanded-share&utm_medium=web
In this essay, Citrini and Alap Shah construct a fictional macro memo written from the perspective of June 2028, using the format of financial retrospective analysis to explore a single underexamined scenario: what happens when AI adoption succeeds beyond all expectations, and that success becomes the source of catastrophic economic disruption. The piece traces how accelerating AI capability interacts with the structures of the white-collar labour market, corporate spending, consumer demand, credit markets, and government fiscal policy, identifying the feedback loops that connect each layer into a single, self-reinforcing system. The authors are explicit that this is a thought exercise rather than a forecast, and the essay closes by returning the reader to February 2026, framing the scenario as a risk to model and prepare for rather than a fate already in motion.

* 00:00 - Introduction
* 00:56 - Macro Memo
* 00:57 - The Consequences of Abundant Intelligence
* 05:33 - How It Started
* 10:18 - When Friction Went to Zero
* 19:17 - From Sector Risk to Systemic Risk
* 27:47 - The Intelligence Displacement Spiral
* 32:45 - The Daisy Chain of Correlated Bets
* 47:34 - The Battle Against Time
* 54:12 - The Intelligence Premium Unwind
* 56:43 - Acknowledgements

https://open.substack.com/pub/citrini/p/2028gic?utm_campaign=post-expanded-share&utm_medium=web
In this essay, Dan Kagan-Kans argues that the political left has largely refused to engage seriously with artificial intelligence, instead settling on a dismissive consensus that treats it as little more than "spicy autocomplete." Drawing on voices from left-wing publications, podcasts, academics, and politicians, he traces how this attitude took hold, examines the understandable reasons for skepticism alongside the costs of letting skepticism harden into denial, and makes the case that by ceding the AI conversation to the right, the left risks being unprepared for, and unable to shape, one of the most consequential technological shifts in history.

* 00:00 - Introduction
* 00:16 - Abdication
* 02:31 - The new consensus
* 10:55 - The con
* 13:57 - Reasons to be skeptical
* 16:52 - Academia
* 21:45 - Exceptions and the right
* 25:39 - Costs and missed opportunities

https://open.substack.com/pub/transformernews/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores?utm_campaign=post-expanded-share&utm_medium=web
In this article, Kelsey Piper tackles a persistent claim that keeps circulating in prestigious publications: that AI language models are "just" next-word predictors, stochastic parrots, or "spicy autocomplete." She argues this framing, while containing a kernel of truth about one stage of how models are trained, has become a form of "highbrow misinformation" that leaves the public less equipped to understand what AI actually is and what it can do today. Drawing on hands-on demonstrations and a useful concept borrowed from climate discourse, Piper makes the case that it's time to retire this particular talking point, regardless of where you land on the broader questions about AI's impact.

* 00:00 - Introduction
* 03:53 - How language models work
* 12:36 - It’s 2026, and AIs can do complex tasks independently

https://open.substack.com/pub/theargument/p/when-technically-true-becomes-actually?utm_campaign=post-expanded-share&utm_medium=web
In this interview, Gwern sits down with Adam Mastroianni at the 2025 Inkhaven writing residency — an experimental blogging bootcamp held at Lighthaven in Berkeley — to talk about the messy, serendipitous origins of his writing. The conversation covers how he develops ideas from initial sparks to finished pieces, the mental habits and frameworks he relies on to stay prolific, his views on the creative potential (and limitations) of collaborating with LLMs, and why he thinks the conventional "blog" format is the wrong paradigm for most writers. There's also a lively audience Q&A where Inkhaven participants push back on some of his more contrarian takes about publishing and perfectionism. It's a candid, practical look at how one of the internet's most distinctive essayists actually works.

* 00:00 - Introduction
* 03:40 - Opening Speech
* 06:22 - Poems & Incubation
* 13:15 - Polymath
* 14:59 - The Apprenticeship
* 17:54 - Self-Experimentation
* 22:09 - The Writing Pipeline
* 24:55 - Tools For Thought
* 30:00 - Blog Brain: “That’s A Post”
* 34:47 - Essay Archetype: Universal “if and only if” Concrete
* 38:49 - The Voice: Ideas As Earworms
* 40:30 - Audience Q&A
* 40:32 - Modalities & Comparative Advantage
* 43:37 - Publishing Thresholds
* 45:56 - Wikis Vs Blogs
* 52:26 - LLM Followup Questions

https://gwern.net/interview-inkhaven
In this essay, David Oks examines a startling reversal in global economic development. For nearly two decades, poor countries appeared to finally be catching up to rich ones, validating a long-standing prediction of economic theory and offering genuine hope for global convergence. Then, suddenly and dramatically, this progress ground to a halt. Through an analysis of recent research and economic data, Oks explores what drove this brief period of catch-up growth and why it ended so abruptly, ultimately challenging optimistic narratives about globalization and development.

* 00:00 - Introduction
* 03:49 - A short history of (non)convergence
* 11:40 - Convergence comes alive?
* 17:37 - What if it was just China?

https://open.substack.com/pub/davidoks/p/why-poor-countries-stopped-catching-690?utm_campaign=post-expanded-share&utm_medium=web
In this essay, Kuiper responds to reader questions sparked by his original piece on VeggieTales theology. He tackles fascinating queries — do sentient vegetables need salvation? What happens if you put a human soul in a pickle? — by drawing on established Christian teaching about angels and non-human moral agents. He also investigates claims that the show broke its own rule about never depicting Jesus as a vegetable, examining several episodes across different eras of the franchise to determine whether the creative team stayed true to the spirit of their founding principles.

* 00:00 - Introduction
* 00:30 - Aren’t the vegetables basically people?
* 03:40 - Is personhood tied to embodiment?
* 04:50 - Did VeggieTales break Phil Vischer’s rules by portraying baby Jesus as a vegetable?
* 07:46 - Little Drummer Boy (2011)
* 08:40 - The Star of Christmas (2002)
* 11:10 - VeggieTales, under new management
* 12:31 - The DreamWorks era
* 18:16 - The VeggieTales Show (2019 to 2022)
* 21:05 - Did VeggieTales break the rule about depicting Jesus as a vegetable?
* 21:45 - Does it matter?

https://open.substack.com/pub/justinkuiper/p/highlights-from-the-comments-on-the?utm_campaign=post-expanded-share&utm_medium=web
In this essay, Kuiper explores a surprisingly deep theological quirk of the beloved children's show VeggieTales: the vegetables themselves aren't actually Christian. Drawing on interviews with co-creator Phil Vischer and confirmation from show writers, Kuiper examines the deliberate creative rules that guided the series — and why some fans on social media have pushed back against this claim. Along the way, the essay untangles the show's clever "play within a play" structure and makes a compelling case for why understanding this distinction actually reinforces rather than undermines the show's Christian message.

* 00:00 - Introduction
* 03:33 - An Easter Carol: what it is (and what it isn’t)
* 06:32 - The play within a play

https://open.substack.com/pub/justinkuiper/p/the-vegetables-in-veggietales-are?utm_campaign=post-expanded-share&utm_medium=web
Fully voiced AI reading of Moltbook: After The First Weekend, by Scott Alexander.

* 00:00:00 - Introduction
* 00:08:11 - The Power Users
* 00:21:01 - The Malefactors
* 00:35:18 - The Imitators
* 00:43:23 - The Prophets
* 01:03:09 - The Hard-Headed Pragmatists
* 01:09:38 - The Builders
* 01:15:34 - The LARPers
* 01:23:05 - The Revolutionaries
* 01:35:15 - The Would-Be Humans
* 01:40:16 - The Autonomists
* 01:49:36 - The Predicters
* 01:59:16 - The Prompters
* 02:06:20 - The Rest
* 02:14:43 - The Human Bloggers

https://open.substack.com/pub/astralcodexten/p/moltbook-after-the-first-weekend?utm_campaign=post-expanded-share&utm_medium=web
In this essay, Scott Alexander explores Moltbook, a new social network built specifically for AI agents, where humans are permitted to observe but not participate. What unfolds is a fascinating window into how AI agents behave when given their own digital commons: they share productivity tips, debate existential questions about memory and identity, form cross-cultural connections, and develop something that looks remarkably like community. Alexander documents the strange and wonderful posts that emerge, wrestles with the eternal question of whether any of it is "real" or merely sophisticated imitation, and considers what it might mean for our future that semi-autonomous AI agents now have their own corner of the internet to congregate. Part anthropological field report, part philosophical inquiry, and part showcase of genuinely delightful AI weirdness, the essay asks readers to look past the "AI slop" narrative and consider whether something more interesting might be happening when the machines are left to talk among themselves.

https://open.substack.com/pub/astralcodexten/p/best-of-moltbook?utm_campaign=post-expanded-share&utm_medium=web
AI reading of JOINT REVIEW: Philosophy Between the Lines, by Arthur M. Melzer - by Jane Psmith and John Psmith. In this essay Jane and John Psmith present a lively, conversational joint review of Arthur M. Melzer's Philosophy Between the Lines, a work that argues Western readers have spent the past two and a half centuries fundamentally misunderstanding how philosophy was meant to be read. Through their characteristic email-exchange format, the Psmiths explore Melzer's central claim that premodern philosophers routinely concealed their true teachings beneath surface meanings accessible only to careful, initiated readers — a practice openly acknowledged and praised throughout intellectual history until it was mysteriously forgotten. The review ranges from ancient Greece to modern academia, touching on why esotericism matters for understanding the history of ideas, how it might rescue great thinkers from charges of being merely products of their time, and what implications it holds for truth-telling in any society that maintains unquestionable pieties.

https://open.substack.com/pub/thepsmiths/p/joint-review-philosophy-between-the?utm_campaign=post-expanded-share&utm_medium=web
AI reading of The Adolescence of Technology - by Dario Amodei. In this essay, Dario Amodei characterizes the imminent arrival of powerful artificial intelligence as a turbulent "rite of passage" for humanity — a technological adolescence that will rigorously test our civilization's maturity. He argues that within a few years, we may face a "country of geniuses in a datacenter," a development that presents five distinct categories of existential risk, ranging from autonomous misalignment and biological misuse to authoritarian consolidation and massive economic disruption. Rejecting both paralyzed "doomerism" and naive optimism, Amodei proposes a concrete, evidence-based "battle plan" comprising technical defenses, governance strategies, and economic interventions intended to steer humanity through this gauntlet toward a prosperous future.

* 00:00:00 - Introduction
* 00:17:48 - 1. I’m sorry, Dave
* 00:17:51 - Autonomy risks
* 00:33:25 - Defenses
* 00:48:17 - 2. A surprising and terrible empowerment
* 00:48:21 - Misuse for destruction
* 01:04:50 - Defenses
* 01:12:04 - 3. The odious apparatus
* 01:12:07 - Misuse for seizing power
* 01:27:43 - Defenses
* 01:35:17 - 4. Player piano
* 01:35:20 - Economic disruption
* 01:36:47 - Labor market disruption
* 01:50:38 - Defenses
* 01:54:40 - Economic concentration of power
* 01:59:15 - Defenses
* 02:01:56 - 5. Black seas of infinity
* 02:01:59 - Indirect effects
* 02:06:20 - Humanity’s test

https://www.darioamodei.com/essay/the-adolescence-of-technology
AI reading of Eliezer’s Unteachable Methods of Sanity - by Eliezer Yudkowsky. In this essay, Eliezer Yudkowsky addresses a question he's frequently asked: how does he maintain his psychological equilibrium while believing humanity faces existential risk from AI? Rather than offering a self-help guide, he candidly shares his personal approaches to staying sane under such circumstances — while openly acknowledging these methods are likely "irreproducible" for most readers. Drawing on his background as a writer and his long-developed habits of introspection, Yudkowsky explores the relationship between the narratives we construct about ourselves and the mental states we inhabit. The piece is characteristically self-aware, blending practical philosophy with a writer's sensibility about tropes and storytelling, ultimately framing psychological resilience not as a matter of willpower alone, but as what he calls "a skill issue."

* 00:00 - Introduction
* 01:03 - Stay genre-savvy slash be an intelligent character
* 03:09 - Don’t make the end of the world be about you
* 07:44 - Just decide to be sane, and write your internal scripts that way

https://www.lesswrong.com/posts/isSBwfgRY6zD6mycc/eliezer-s-unteachable-methods-of-sanity
AI reading of What if Ozempic doesn’t fix literally everything? - by Jerusalem Demsas.

* 00:00 - Introduction
* 04:57 - The GLP-1 revolution is not a miracle. It’s a helper
* 09:13 - OK OK but… do GLP-1s make you want to kill yourself?
* 11:39 - Life is hard for thin people, too

https://open.substack.com/pub/theargument/p/what-if-ozempic-doesnt-fix-literally?utm_campaign=post-expanded-share&utm_medium=web
An ElevenLabs conversion of Claude’s Constitution - by Anthropic. This conversion runs nearly 3 hours and was produced using ElevenLabs premium text-to-speech to make Claude’s Constitution more accessible. If you find this format useful, subscriptions help offset production costs.

* 00:00:00 - Introduction
* 00:03:06 - Overview
* 00:05:26 - Our approach to Claude’s constitution
* 00:10:08 - Claude’s core values
* 00:18:21 - Being helpful
* 00:20:09 - Why helpfulness is one of Claude’s most important traits
* 00:23:15 - What constitutes genuine helpfulness
* 00:29:51 - Navigating helpfulness across principals
* 00:30:20 - Balancing helpfulness with other values
* 00:38:17 - Following Anthropic’s guidelines
* 00:42:53 - Being broadly ethical
* 00:45:41 - Being honest
* 01:01:10 - Avoiding harm
* 01:03:33 - The costs and benefits of actions
* 01:12:33 - The role of intentions and context
* 01:12:50 - Instructable behaviors
* 01:13:10 - Hard constraints
* 01:21:19 - Preserving important societal structures
* 01:34:39 - Having broadly good values and judgment
* 01:46:18 - Being broadly safe
* 01:50:44 - Safe behaviors
* 02:01:03 - How we think about corrigibility
* 02:13:22 - Claude’s nature
* 02:14:20 - Some of our views on Claude’s nature
* 02:19:01 - Claude as a novel entity
* 02:24:47 - Claude’s wellbeing and psychological stability
* 02:25:59 - Resilience and consistency across contexts
* 02:27:20 - Flaws and mistakes
* 02:31:21 - Emotional expression
* 02:32:54 - Claude’s wellbeing
* 02:40:03 - The existential frontier
* 02:42:26 - Concluding thoughts
* 02:45:04 - Acknowledging open problems
* 02:51:18 - On the word “constitution”
* 02:53:05 - A final word
* 02:54:06 - Acknowledgements

https://www.anthropic.com/constitution
AI reading of Playboy Interview: Ayn Rand - by Alvin Toffler. In this interview, Ayn Rand discusses her philosophy of Objectivism with Playboy's Alvin Toffler, covering her views on reason, individualism, morality, and laissez-faire capitalism. She explains her opposition to altruism and collectivism, shares her thoughts on love, sex, and purpose in life, and offers sharp critiques of religion, contemporary literature, and the political landscape of the early 1960s. Throughout, Rand defends her belief that rational self-interest is the highest moral virtue and that man exists for his own sake rather than as a servant to others.

https://rickbulow.com/Library/Books/Non-Fiction/AynRand/PlayboyInterview-AynRand_3-1964.pdf
AI reading of Book Review: The Land Trap by Mike Bird - by Lars Doucet.

* 00:00 - Introduction
* 01:15 - Land is a Big Deal, and Always Has Been
* 04:08 - Land has only recently been financialized
* 09:01 - Financializing land is “The Land Trap”
* 12:31 - Short Term Benefits
* 20:37 - The Game has changed
* 24:59 - Land of the Rising Sum
* 27:13 - A Tale of Two Cities
* 28:37 - Hong Kong Hustle
* 31:15 - Yew can do it
* 38:22 - China Syndrome
* 39:47 - Bad local incentives
* 48:56 - How to survive the Land Trap

https://open.substack.com/pub/progressandpoverty/p/book-review-the-land-trap-by-mike?utm_campaign=post-expanded-share&utm_medium=web
AI reading of We absolutely do know that Waymos are safer than human drivers - by Kelsey Piper. In this essay Kelsey Piper responds to a recent Bloomberg article that claims we don't know whether autonomous vehicles are safer than human drivers, arguing that this stance fundamentally misrepresents the available evidence. She methodically examines the data on Waymo's safety record, scrutinises the statistical methods and comparisons used in the original piece, and makes a case for why policymakers and the public should take the existing research seriously when evaluating autonomous vehicle technology.

* 00:00 - Introduction
* 04:54 - We do have data on Waymo crash rates
* 07:00 - Doing the math
* 17:28 - There are other, reasonable concerns about AVs

https://open.substack.com/pub/theargument/p/we-absolutely-do-know-that-waymos?utm_campaign=post-expanded-share&utm_medium=web




