AI Article Readings

Author: Readings of great articles in AI voices

Subscribed: 10
Played: 1,203

Description

Readings of great articles in AI voices

askwhocastsai.substack.com
380 Episodes
Tension - By Max Harms

2026-04-07 23:57

In this article Max explores the craft of writing through the lens of tension, arguing that the ability to create, sustain, and release anticipation is one of the most powerful tools a writer has for keeping readers engaged, while also emphasizing that tension alone is not enough without underlying quality. Using a mix of examples from film, television, nonfiction, and classic literature, he examines how effective storytelling balances setup and payoff, builds empathy, and carefully controls the reader’s curiosity, while also showing that even works which seem to “break the rules” often do so deliberately in service of a broader artistic goal.

* 00:00 - Introduction
* 07:07 - Tension Done Well
* 15:06 - Tension in Nonfiction
* 16:57 - Chesterton’s Fence

https://open.substack.com/pub/raelifin/p/tension?utm_campaign=post-expanded-share&utm_medium=web

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit askwhocastsai.substack.com/subscribe

The Scary Bridge - by Moridinamael

In this post Moridinamael uses a short allegorical scene, a town hall debate about a dangerous bridge, to illustrate how technical arguments about real, mechanistic risks get flattened into emotional narratives by both allies and opponents, and how onlookers end up judging the validity of claims based on the perceived confidence and composure of the speakers rather than the substance of what they're actually saying.

https://www.lesswrong.com/posts/jbPgRMiEqnJbwtsim/the-scary-bridge

Consider chilling out in 2028 - by Valentine

In this post Valentine argues that if the AI risk landscape in early 2028 looks functionally the same as it does today, perpetually escalating alarm without correspondingly escalating real-world evidence of doom, the rationalist community should treat that as a serious signal to pause, reconsider its decades-long strategy of frightening people into action, and explore whether unexamined psychological and emotional dynamics might be distorting collective threat perception more than anyone currently appreciates.

* 00:00 - Introduction
* 02:22 - Inner cries for help
* 08:59 - Scaring people
* 12:13 - A shared positive vision
* 18:14 - Maybe it’ll be okay
* 22:06 - Come 2028…

https://www.lesswrong.com/posts/D4eZF6FAZhrW4KaGG/consider-chilling-out-in-2028

Against the Luddites - By SE Gyges

In this post, SE Gyges argues that the contemporary rehabilitation of the Luddites as thoughtful technology critics is historically dishonest. Drawing on labor history, Marx and Engels, and modern post-capitalist thinkers, the piece makes the case that Luddism was a reactionary defense of guild privilege and male craft monopoly, not a progressive workers' movement, and that better intellectual traditions exist for anyone serious about the politics of automation.

* 00:00 - Introduction
* 01:28 - An Elite Movement
* 03:05 - The Exclusion of Women
* 06:01 - Marx and Engels Saw Through It
* 09:01 - Restoration, Never Revolution
* 11:57 - Conclusion

https://open.substack.com/pub/verysane/p/against-the-luddites?utm_campaign=post-expanded-share&utm_medium=web

dark ilan - By Ozy Brennan

https://open.substack.com/pub/ozybrennan/p/dark-ilan?utm_campaign=post-expanded-share&utm_medium=web

This post by Corvin is a pastiche of the ACX Bay Area House Party series.

https://open.substack.com/pub/ravenstales/p/every-acx-house-party?utm_campaign=post-expanded-share&utm_medium=web

Every Debate On Pausing AI - By Scott Alexander

https://open.substack.com/pub/astralcodexten/p/every-debate-on-pausing-ai?utm_campaign=post-expanded-share&utm_medium=web

“Full Cast” AI reading of Being John Rawls - By Scott Alexander

* 00:00 - Introduction
* 03:23 - II
* 11:27 - III
* 20:21 - IV
* 25:00 - V
* 31:17 - VI

https://open.substack.com/pub/astralcodexten/p/being-john-rawls?utm_campaign=post-expanded-share&utm_medium=web

In this article, SE Gyges argues that the widely cited “stochastic parrots” critique of large language models is not only outdated but actively harmful to serious discussion of AI. The piece examines how the argument misunderstands modern AI systems, ignores advances like multimodal training and reinforcement learning, and rests on a narrow definition of “meaning.” By walking through both empirical evidence and conceptual flaws in the original claim, Gyges contends that dismissing LLMs as mere parrots prevents society from grappling with the real ethical and political challenges posed by systems that demonstrably do work.

* 00:00 - Introduction
* 02:31 - Even If True, The Argument Is Irrelevant
* 03:32 - The Argument Doesn’t Apply to Any Major Model Since 2023
* 06:45 - The Argument Was Already Obsolete When Published
* 08:05 - The Argument Is Empirically False
* 08:19 - The Octopus Test
* 12:05 - The Platonic Representation Hypothesis
* 13:24 - Form Carries Meaning
* 15:48 - The Argument Is Badly Constructed
* 16:07 - Parrots Are Amazing, Actually
* 16:56 - The Definition of Meaning Is Circular
* 19:28 - Conclusion

https://open.substack.com/pub/verysane/p/polly-wants-a-better-argument?utm_campaign=post-expanded-share&utm_medium=web

In this article David Oks takes a familiar story about technology and jobs, the idea that ATMs automated banking without destroying teller work, and turns it on its head, arguing that the real disruption came later from the smartphone era. Using the history of bank branches, bank tellers, and mobile banking, he explores a broader point about technological change: that the biggest effects often come not when a new tool replaces part of a job, but when it creates an entirely new way of doing things that makes the old role far less necessary.

* 00:00 - Introduction
* 07:17 - ATMs didn’t kill bank teller jobs
* 20:32 - But iPhones actually did
* 26:25 - Automating a job is much harder than making it irrelevant

https://open.substack.com/pub/davidoks/p/why-the-atm-didnt-kill-bank-teller?utm_campaign=post-expanded-share&utm_medium=web

A short story by Tomás Bjartur

https://open.substack.com/pub/tomasbjartur/p/the-elect?utm_campaign=post-expanded-share&utm_medium=web

In this post Dean W. Ball explores the gradual nature of life and death, drawing a poignant parallel between the passing of his father and the ongoing decline of the American republic. Using a recent policy skirmish between the AI firm Anthropic and the U.S. Department of War over the military deployment of the Claude AI system as a focal point, he examines the shifting dynamics of government power and private enterprise. Ultimately, he invites readers to look beyond traditional partisan divides and carefully consider how the control of frontier AI will shape the future of human liberty.

* 00:00 - Introduction
* 00:05 - One
* 02:22 - Two
* 04:52 - Three
* 06:52 - Four
* 18:19 - Five

https://open.substack.com/pub/hyperdimensional/p/clawed?utm_campaign=post-expanded-share&utm_medium=web

In this post, Scott Alexander examines the legal and contractual implications of the Department of War's "all lawful use" demand for AI systems, breaking down what US law actually permits regarding mass domestic surveillance and autonomous weapons, and why the phrase "lawful use" provides far less protection than most people assume.

* 00:00 - Introduction
* 02:42 - Mass domestic surveillance: more than you wanted to know
* 08:59 - Autonomous weapons: more than you wanted to know
* 13:21 - Comments on OpenAI’s FAQ
* 17:51 - Questions that you should be asking

https://open.substack.com/pub/astralcodexten/p/all-lawful-use-much-more-than-you?utm_campaign=post-expanded-share&utm_medium=web

In this essay, Citrini and Alap Shah construct a fictional macro memo written from the perspective of June 2028, using the format of financial retrospective analysis to explore a single underexamined scenario: what happens when AI adoption succeeds beyond all expectations, and that success becomes the source of catastrophic economic disruption. The piece traces how accelerating AI capability interacts with the structures of the white-collar labour market, corporate spending, consumer demand, credit markets, and government fiscal policy, identifying the feedback loops that connect each layer into a single, self-reinforcing system. The authors are explicit that this is a thought exercise rather than a forecast, and the essay closes by returning the reader to February 2026, framing the scenario as a risk to model and prepare for rather than a fate already in motion.

* 00:00 - Introduction
* 00:56 - Macro Memo
* 00:57 - The Consequences of Abundant Intelligence
* 05:33 - How It Started
* 10:18 - When Friction Went to Zero
* 19:17 - From Sector Risk to Systemic Risk
* 27:47 - The Intelligence Displacement Spiral
* 32:45 - The Daisy Chain of Correlated Bets
* 47:34 - The Battle Against Time
* 54:12 - The Intelligence Premium Unwind
* 56:43 - Acknowledgements

https://open.substack.com/pub/citrini/p/2028gic?utm_campaign=post-expanded-share&utm_medium=web

In this essay, Dan Kagan-Kans argues that the political left has largely refused to engage seriously with artificial intelligence, instead settling on a dismissive consensus that treats it as little more than "spicy autocomplete." Drawing on voices from left-wing publications, podcasts, academics, and politicians, he traces how this attitude took hold, examines the understandable reasons for skepticism alongside the costs of letting skepticism harden into denial, and makes the case that by ceding the AI conversation to the right, the left risks being unprepared for, and unable to shape, one of the most consequential technological shifts in history.

* 00:00 - Introduction
* 00:16 - Abdication
* 02:31 - The new consensus
* 10:55 - The con
* 13:57 - Reasons to be skeptical
* 16:52 - Academia
* 21:45 - Exceptions and the right
* 25:39 - Costs and missed opportunities

https://open.substack.com/pub/transformernews/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores?utm_campaign=post-expanded-share&utm_medium=web

In this article, Kelsey Piper tackles a persistent claim that keeps circulating in prestigious publications: that AI language models are "just" next-word predictors, stochastic parrots, or "spicy autocomplete." She argues this framing, while containing a kernel of truth about one stage of how models are trained, has become a form of "highbrow misinformation" that leaves the public less equipped to understand what AI actually is and what it can do today. Drawing on hands-on demonstrations and a useful concept borrowed from climate discourse, Piper makes the case that it's time to retire this particular talking point, regardless of where you land on the broader questions about AI's impact.

* 00:00 - Introduction
* 03:53 - How language models work
* 12:36 - It’s 2026, and AIs can do complex tasks independently

https://open.substack.com/pub/theargument/p/when-technically-true-becomes-actually?utm_campaign=post-expanded-share&utm_medium=web

In this interview, Gwern sits down with Adam Mastroianni at the 2025 Inkhaven writing residency, an experimental blogging bootcamp held at Lighthaven in Berkeley, to talk about the messy, serendipitous origins of his writing. The conversation covers how he develops ideas from initial sparks to finished pieces, the mental habits and frameworks he relies on to stay prolific, his views on the creative potential (and limitations) of collaborating with LLMs, and why he thinks the conventional "blog" format is the wrong paradigm for most writers. There's also a lively audience Q&A where Inkhaven participants push back on some of his more contrarian takes about publishing and perfectionism. It's a candid, practical look at how one of the internet's most distinctive essayists actually works.

* 00:00 - Introduction
* 03:40 - Opening Speech
* 06:22 - Poems & Incubation
* 13:15 - Polymath
* 14:59 - The Apprenticeship
* 17:54 - Self-Experimentation
* 22:09 - The Writing Pipeline
* 24:55 - Tools For Thought
* 30:00 - Blog Brain: “That’s A Post”
* 34:47 - Essay Archetype: Universal “if and only if” Concrete
* 38:49 - The Voice: Ideas As Earworms
* 40:30 - Audience Q&A
* 40:32 - Modalities & Comparative Advantage
* 43:37 - Publishing Thresholds
* 45:56 - Wikis Vs Blogs
* 52:26 - LLM Followup Questions

https://gwern.net/interview-inkhaven

In this essay, David Oks examines a startling reversal in global economic development. For nearly two decades, poor countries appeared to finally be catching up to rich ones, validating a long-standing prediction of economic theory and offering genuine hope for global convergence. Then, suddenly and dramatically, this progress ground to a halt. Through an analysis of recent research and economic data, Oks explores what drove this brief period of catch-up growth and why it ended so abruptly, ultimately challenging optimistic narratives about globalization and development.

* 00:00 - Introduction
* 03:49 - A short history of (non)convergence
* 11:40 - Convergence comes alive?
* 17:37 - What if it was just China?

https://open.substack.com/pub/davidoks/p/why-poor-countries-stopped-catching-690?utm_campaign=post-expanded-share&utm_medium=web

In this essay, Kuiper responds to reader questions sparked by his original piece on VeggieTales theology. He tackles fascinating queries—do sentient vegetables need salvation? What happens if you put a human soul in a pickle?—by drawing on established Christian teaching about angels and non-human moral agents. He also investigates claims that the show broke its own rule about never depicting Jesus as a vegetable, examining several episodes across different eras of the franchise to determine whether the creative team stayed true to the spirit of their founding principles.

* 00:00 - Introduction
* 00:30 - Aren’t the vegetables basically people?
* 03:40 - Is personhood tied to embodiment?
* 04:50 - Did VeggieTales break Phil Vischer’s rules by portraying baby Jesus as a vegetable?
* 07:46 - Little Drummer Boy (2011)
* 08:40 - The Star of Christmas (2002)
* 11:10 - VeggieTales, under new management
* 12:31 - The DreamWorks era
* 18:16 - The VeggieTales Show (2019 to 2022)
* 21:05 - Did VeggieTales break the rule about depicting Jesus as a vegetable?
* 21:45 - Does it matter?

https://open.substack.com/pub/justinkuiper/p/highlights-from-the-comments-on-the?utm_campaign=post-expanded-share&utm_medium=web

In this essay, Kuiper explores a surprisingly deep theological quirk of the beloved children's show VeggieTales: the vegetables themselves aren't actually Christian. Drawing on interviews with co-creator Phil Vischer and confirmation from show writers, Kuiper examines the deliberate creative rules that guided the series, and why some fans on social media have pushed back against this claim. Along the way, the essay untangles the show's clever "play within a play" structure and makes a compelling case for why understanding this distinction actually reinforces rather than undermines the show's Christian message.

* 00:00 - Introduction
* 03:33 - An Easter Carol: what it is (and what it isn’t)
* 06:32 - The play within a play

https://open.substack.com/pub/justinkuiper/p/the-vegetables-in-veggietales-are?utm_campaign=post-expanded-share&utm_medium=web