Stay Human, from the Artificiality Institute
Author: Helen and Dave Edwards
© Helen and Dave Edwards
Description
Exploring how AI changes the way we think, who we become, and what it means to be human. We believe AI shouldn't just be safe or efficient—it should be worth it. Through story-based research, education, and community, we help people choose the relationship they want with machines—so they remain the authors of their own minds.
111 Episodes
In this conversation, we explore the cultural foundations of artificial intelligence with Nina Beguš, Assistant Professor at UC Berkeley and author of "Artificial Humanities: A Fictional Perspective on Language in AI." Nina makes a compelling case for an entirely new field—one that brings humanistic insights into the very creation of technology rather than treating the humanities as a critical afterthought or ethical guardrail.

Nina's work emerged from recognizing patterns everywhere she looked: the same fictional scripts appearing in technology products, films, and Silicon Valley's imagination. When Siri launched as a feminized virtual assistant designed to build rapport, Nina immediately asked "why is it a woman?" and began tracing how deeply fiction shapes our technological reality—not as metaphor but as blueprint.

Key themes we explore:
- The Pygmalion Template: How an ancient myth—male creator produces idealized woman, projects desire onto creation—persistently shapes virtual assistants and AI interfaces
- From Marble to Cockney to LLMs: Tracing the evolution from Ovid through Shaw's "Pygmalion" to the "ELIZA effect" named after Eliza Doolittle
- Language No Longer Uniquely Human: The profound implications of machines using language eloquently without consciousness
- Monolingual AI at Global Scale: How tokenization creates structural monolingualism beyond just favoring English
- Writers Responding to AI: Nina's project gathering sixteen writers to reflect on what happens when language is no longer exclusively human
- Planetary Ontology: Collaborative work seeing human/nature/technology as sitting "in the same continuum of this planet"

Nina Beguš is a Researcher and Lecturer at the Center for Science, Technology, Medicine & Society at the University of California, Berkeley. She graduated with a Ph.D. in comparative literature from Harvard University. During her time at the Berggruen Institute and ToftH, she helped implement novel humanities-based consulting techniques for big tech companies.

https://www.ninabegus.com
In this conversation, we explore the nature of intelligence and life itself with Blaise Agüera y Arcas, VP and Fellow at Google and head of the Paradigms of Intelligence Lab. Blaise discusses his ambitious new book "What Is Intelligence?"—a work that bridges evolutionary biology, complexity science, artificial life, and AI to argue that intelligence fundamentally arises from computation, symbiosis, and the recursive modeling of minds.

Blaise describes himself as "an inch deep with a few deeper wells" across disciplines, drawing from sources as diverse as Nick Lane's work on energetics, Darwin's evolution, and anarcho-communist Peter Kropotkin's 1910 treatise on mutual aid. This intellectual breadth allows him to see connections others miss—like recognizing that the urgent questions raised by modern AI models exhibiting general intelligence without any "magical discovery" demand we fundamentally rethink what intelligence means across all substrates.

Key themes we explore:
- Symbiogenesis, Not Just Symbiosis: Why the distinction matters—when mutualism creates something new that reproduces as a unit, with individuals no longer viable alone
- Humans as Existing Cyborgs: How the steam engine represents our "mitochondrion," enabling 7 of 8 billion people to exist by metabolizing energy on our behalf
- The Endless Frontier of Intelligence: Why energy budgets increasingly shift toward thought as systems scale—and why this demand is "bottomless"
- Theory of Mind as Foundation: How recursive modeling of others' minds enables social coordination and represents the mathematical basis for multi-agent learning
- Artificial Life's Emergence: Why massive parallel computation will finally allow artificial life research to flourish
- Categories as Approximations: Moving beyond both essentialist categorization and postmodern rejection toward understanding statistical descriptions with limits
- Planetary Consciousness as Survival: Why modeling the entire ecological system isn't "woo-woo" but literally what we need for collective agency

Blaise Agüera y Arcas is a VP and Fellow at Google, where he is the CTO of Technology & Society and founder of Paradigms of Intelligence (Pi). Pi is an organization working on basic research in AI and related fields, especially the foundations of neural computing, active inference, sociality, evolution, and Artificial Life. A frequent public speaker, he has given multiple TED talks and keynoted NeurIPS. He has also authored numerous papers, essays, op-eds, and chapters, as well as two previous books, Who Are We Now? and Ubi Sunt. His most recent book, What Is Life?, is part 1 of the larger book What Is Intelligence?, forthcoming from Antikythera and MIT Press in September 2025.
In this conversation, we explore the psychology of conviction with Steve Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University and advisor to the Artificiality Institute. Returning to the podcast for a third time, Steve discusses his new book "The Cost of Conviction," which examines a fundamental tension in how humans make decisions—between carefully weighing consequences versus following deeply held sacred values that demand certain actions regardless of outcomes.

Steve's work challenges the dominant assumption in decision research that people primarily act as consequentialists, calculating costs and benefits to maximize utility. Instead, he reveals how many of our most important decisions bypass consequences entirely, guided by sacred values—rules about appropriate action handed down through families and communities that define who we are and signal membership in our social groups. These aren't carefully derived from first principles as philosophical deontology suggests, but rather adopted beliefs about right and wrong that make us members in good standing of our communities.

Key themes we explore:
- Sacred Values as Uber Heuristics: Why treating certain actions as absolutely right or wrong, independent of consequences, represents perhaps the most powerful shortcut for decision-making—simpler even than most heuristics because it allows us to ignore outcomes entirely
- Conviction Without Compromise: How framing issues through sacred values makes them feel less tractable, generates more outrage when violated, and increases willingness to take action—producing the absolutist convictions that drive both heroic stands and intractable conflicts
- Dynamic Sacred Values: How values that define communities aren't fixed but emerge and shift based on what distinguishes groups from each other—explaining why tariffs or transgender rights suddenly become hotly contested "sacred" issues that weren't previously central
- AI's Polarization Problem: The observation that attitudes toward AI have taken on sacred value characteristics, with absolutist believers that it will save the world racing against those convinced it represents fundamental evil—both positions simpler than engaging with genuine complexity and uncertainty

The conversation reveals Steve's core thesis: we rely on sacred values too much when we should be more consequentialist. Sacred values simplify decisions in ways that produce conviction and community cohesion, but at the cost of making us intransigent, uncompromising, and absolutist. When we shift to genuinely considering consequences, we become more humble about our knowledge limitations and hopefully more open to alternative perspectives.

Yet the discussion also surfaces important nuances. Sacred values serve crucial functions—they may have consequentialist origins in cultural experience even if individuals apply them without consequence calculation. They provide the kind of universal moral stance that makes someone trustworthy in ways that preferences over specific outcomes cannot. And expressing certainty about complex issues where genuine experts admit uncertainty often signals ignorance rather than knowledge.

About Steve Sloman: Steve Sloman is Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, where his research examines reasoning, decision-making, and the cognitive foundations of community. Author of "The Knowledge Illusion" (with Philip Fernbach) and now "The Cost of Conviction," Steve's work explores how our reliance on others' knowledge shapes everything from individual decisions to political polarization. As an advisor to the Artificiality Institute, he helps bridge cognitive science insights with questions about human-AI collaboration and co-evolution.
In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.

Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.

Key themes we explore:
- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges
- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition
- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI
- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether
- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities
- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential

Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.

Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."
We enjoyed giving a virtual keynote for the Autonomous Summit on December 4, 2025, titled Becoming Synthetic: What AI Is Doing To Us, Not Just For Us. We talked about our research on how to maintain human agency & cognitive sovereignty, the philosophical question of what it means to be human, and our new(ish) approach to creating better AI tools called unDesign.

unDesign is not the absence of design, nor is it anti-design. It's design oriented differently. The history of design has been a project of reducing uncertainty. Making things legible. Signaling affordances. Good design means you never have to wonder what to do. unDesign inverts this and uses "uns" as design material. The unknown. The unpredictable. The unplanned. These aren't bugs. They're the medium where value actually lives. Because uncertainty is the condition of genuine encounter. unDesign doesn't design outcomes—it designs the space where outcomes can emerge.

You can watch the full keynote below. Check it out!
In this conversation recorded on the 1,000th day since ChatGPT's launch, we explore education, creativity, and transformation with Tess Posner, founding CEO of AI4ALL. For nearly a decade—long before the current AI surge—Tess has led efforts to broaden access to AI education, starting from a 2016 summer camp at Stanford that demonstrated how exposure to hands-on AI projects could inspire high school students, particularly young women, to pursue careers in the field.

What began as exposing students to "the magic" of AI possibilities has evolved into something more complex: helping young people navigate a moment of radical uncertainty while developing both technical capabilities and critical thinking about implications. As Tess observes, we're recording at a time when universities are simultaneously banning ChatGPT and embracing it, when the job market for graduates is sobering, and when the entire structure of work is being "reinvented from the ground up."

Key themes we explore:
- Living the Questions: How Tess's team adopted Rilke's concept of "living the questions" as their guiding principle for navigating unprecedented change—recognizing that answers won't come easily and that cultivating wisdom matters more than chasing certainty
- The Diverse Pain Point: Why students from varied backgrounds gravitate toward different AI applications—from predicting droughts for farm worker families to detecting Alzheimer's based on personal experience—and how this diversity of lived experience shapes what problems get attention
- Project-Based Learning as Anchor: How hands-on making and building creates the kind of applied learning that both reveals AI's possibilities and exposes its limitations, while fostering the critical thinking skills that pure consumption of AI outputs cannot develop
- The Educational Reckoning: Why this moment is forcing fundamental questions about the purpose of schooling—moving beyond detection tools and honor codes toward reimagining how learning happens when instant answers are always available
- The Worst Job Market in Decades: Sobering realities facing graduates alongside surprising opportunities—some companies doubling down on "AI native" early career talent while others fundamentally restructure work around managing AI agents rather than doing tasks directly
- Music and the Soul Question: Tess's personal wrestling with AI-generated music that can mimic human emotional expression so convincingly it gets stuck in your head—forcing questions about whether something deeper than output quality matters in art

The conversation reveals someone committed to equity and access while refusing easy optimism about technology's trajectory. Tess acknowledges that "nobody really knows" what the future of work looks like or how education should adapt, yet maintains that the response cannot be paralysis. Instead, AI4ALL's approach emphasizes building community, developing genuine technical skills, and threading ethical considerations through every project—equipping students not with certainty but with agency.

About Tess Posner: Tess Posner is founding and interim CEO of AI4ALL, a nonprofit working to increase diversity and inclusion in AI education, research, development, and policy. Since 2017, she has led the organization's expansion from a single summer program at Stanford to a nationwide initiative serving students from over 150 universities. A graduate of St. John's College with its Great Books curriculum, Tess is also an accomplished musician who brings both technical expertise and humanistic perspective to questions about AI's role in creativity and human flourishing.

Our Theme Music:
Solid State (Reprise)
Written & performed by Jonathan Coulton
License: Perpetual, worldwide licence for podcast theme usage granted to Artificiality Institute by songwriter and publisher
In this conversation, we explore the philosophical art of embracing uncertainty with Eric Schwitzgebel, Professor of Philosophy at UC Riverside and author of "The Weirdness of the World." Eric's work celebrates what he calls "the philosophy of opening"—not rushing to close off possibilities, but instead revealing how many more viable alternatives exist than we typically recognize. As he observes, learning that the world is less comprehensible than you thought, that more possibilities remain open, constitutes a valuable form of knowledge in itself.

The conversation centers on one of Eric's most provocative arguments: that if we take mainstream scientific theories of consciousness seriously and apply them consistently, the United States might qualify as a conscious entity. Not in some fascist "absorb yourself into the group mind" sense, but perhaps at the level of a rabbit—possessing massive internal information processing, sophisticated environmental responsiveness, self-monitoring capabilities, and all the neural substrate you could want (just distributed across individual skulls rather than contained in one).

Key themes we explore:
- The United States Consciousness Thought Experiment: How standard materialist theories that attribute consciousness to animals based on information processing and behavioral complexity would, if applied consistently, suggest large-scale collective entities might be conscious too—and why every attempt to wiggle out of this conclusion commits you to other forms of weirdness
- Philosophy of Opening vs. Closing: Eric's distinction between philosophical work that narrows possibilities to find definitive answers versus work that reveals previously unconsidered alternatives, expanding rather than contracting the space of viable theories
- The AI Consciousness Crisis Ahead: Why we'll face social decisions about how to treat AI systems before we have scientific consensus on whether they're conscious—with respectable theories supporting radically different conclusions and people's investments (emotional, religious, economic) driving which theories they embrace
- Mimicry and Mistrust: Why we're justified in being more skeptical about AI consciousness than human consciousness—not because similarity proves anything definitively, but because AI systems trained to mimic human linguistic patterns raise the same concerns as parrots saying "hoist the flag"
- The Design Policy of the Excluded Middle: Eric's recommendation (which he doubts the world will follow) to avoid creating systems whose moral status we cannot determine—because making mistakes in either direction could be catastrophic at scale
- Strange Intelligence Over Superintelligence: Why the linear conception of AI as "subhuman, then human, then superhuman" fundamentally misunderstands what's likely to emerge—we should expect radically different cognitive architectures with cross-cutting capacities and incapacities rather than human-like minds that are simply "better"

About Eric Schwitzgebel: Eric Schwitzgebel is Professor of Philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His work spans consciousness, introspection, and the ethics of artificial intelligence. Author of "The Weirdness of the World" and a forthcoming book on AI consciousness and moral status, Eric maintains an active blog (The Splintered Mind) where he explores philosophical questions with clarity and wit. His scholarship consistently challenges comfortable assumptions while remaining remarkably accessible to readers beyond academic philosophy.
In this conversation, we explore the challenges of building more inclusive AI systems with John Pasmore, founder and CEO of Latimer AI and advisor to the Artificiality Institute. Latimer represents a fundamentally different approach to large language models—one built from the ground up to address the systematic gaps in how AI systems represent Black and Brown cultures, histories, and perspectives that have been largely absent from mainstream training data.

John brings a practical founder's perspective to questions that often remain abstract in AI discourse. With over 400 educational institutions now using Latimer, he's witnessing firsthand how students, faculty, and administrators are navigating the integration of AI into learning—from universities licensing 40+ different LLMs to schools still grappling with whether AI represents a cheating risk or a pedagogical opportunity.

Key themes we explore:
- The Data Gap: Why mainstream LLMs reflect a narrow "Western culture bias" and what's missing when AI claims to "know everything"—from 15 million unscanned pages in Howard University's library to oral traditions across thousands of indigenous tribes.
- Critical Thinking vs. Convenience: How universities are struggling to preserve deep learning and intellectual rigor when AI makes it trivially easy to get instant answers, and whether requiring students to bring their prompts to class represents a viable path forward.
- The GPS Analogy: John's insight that AI's effect on cognitive skills mirrors what happened with navigation—we've gained efficiency but lost the embodied knowledge that comes from building mental maps through direct experience.
- Multiple Models, Multiple Perspectives: Why the future likely involves domain-specific and culturally-situated LLMs rather than a single "universal" system, and how this parallels the reality that different cultures tell different stories about the same events.
- Excavating Hidden Knowledge: Latimer's ambitious project to digitize and make accessible vast archives of cultural material—from church records to small museum collections—that never made it onto the internet and therefore don't exist in mainstream AI systems.
- An eBay for Data: John's vision for creating a marketplace where content owners can license their data to AI companies, establishing both proper compensation and a mechanism for filling the systematic gaps in training corpora.

The conversation shows that AI bias goes beyond removing offensive outputs. We need to rethink which data sources we treat as authoritative and whose perspectives shape these influential systems. When AI presents itself as an oracle that has "read everything on the internet," it claims omniscience while excluding vast amounts of human knowledge and experience.

The discussion raises questions about expertise and process in an era of instant answers—in debugging code, navigating cities, or writing essays. John notes that we may be "working against evolution" by preserving slower, more effortful learning when our brains naturally seek efficiency. But what do we lose when we eliminate the struggle that builds deeper understanding?

About John Pasmore: John Pasmore is founder and CEO of Latimer AI, a large language model built to provide accurate historical information and bias-free interaction for Black and Brown audiences and anyone who values precision in their data. Previously a partner at TRS Capital and Movita Organics, John serves on the Board of Directors of Outward Bound USA and holds degrees in Business Administration from SUNY and Computer Science from Columbia University. He is also an advisor to the Artificiality Institute.
In this conversation, we explore how humans can better navigate the AI era with De Kai, pioneering researcher who built the web's first machine translation systems and whose work spawned Google Translate. Drawing on four decades of AI research experience, De Kai offers a different framework for understanding our relationship with artificial intelligence—moving beyond outdated metaphors toward more constructive approaches.

De Kai's perspective was shaped by observing how AI technologies are being deployed in ways that decrease rather than increase human understanding. While AI has tremendous potential to help people communicate across cultural and linguistic differences—as his translation work demonstrated—current implementations often amplify polarization and misunderstanding instead.

Key themes we explore:
- Beyond Machine Metaphors: Why thinking of AI as "tools" or "machines" is dangerously outdated—AI systems are fundamentally artificial psychological entities that learn, adapt, and influence human behavior in ways no coffee maker ever could
- The Parenting Framework: De Kai's central insight that we're all currently "parenting" roughly 100 artificial intelligences daily through our smartphones, tablets, and devices—AIs that are watching, learning, and imitating our attitudes, behaviors, and belief systems
- System One vs. System Two Intelligence: How current large language models operate primarily through "artificial autism"—brilliant pattern matching without the reflective, critical thinking capacities that characterize mature human intelligence
- Translation as Understanding: Moving beyond simple language translation toward what De Kai calls a "translation mindset"—using AI to help humans understand different cultural framings and perspectives rather than enforcing singular universal truths
- The Reframing Superpower: How AI's capacity for rapid perspective-shifting and metaphorical reasoning represents one of humanity's best hopes for breaking out of polarized narratives and finding common ground
- Social Fabric Transformation: Understanding how 800 billion artificial minds embedded in our social networks are already reshaping how cultures and civilizations evolve—often in ways that decrease rather than increase mutual understanding

Drawing on insights from developmental psychology and complex systems, De Kai's "Raising AI" framework emphasizes conscious human responsibility in shaping how these artificial minds develop. Rather than viewing this as an overwhelming burden, he frames it as an opportunity for humans to become more intentional about the values and behaviors they model—both for AI systems and for each other.

About De Kai: De Kai is Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s International Computer Science Institute. He is Independent Director of AI ethics think tank The Future Society, and was one of eight inaugural members of Google’s AI ethics council. De Kai invented and built the world’s first global-scale online language translator that spawned Google Translate, Yahoo Translate, and Microsoft Bing Translator. For his pioneering contributions in AI, natural language processing, and machine learning, De Kai was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows and by Debrett’s as one of the 100 most influential figures of Hong Kong.
In this conversation, we sit down with Adam Cutler, Distinguished Designer at IBM and pioneer in human-centered AI design, to explore how generative AI is reshaping creativity, reliance, and human experience. Adam reflects on the parallels between today’s AI moment and past technology shifts—from the rise of Web 2.0 to the early days of the internet—and why we may be living through a “mini singularity.” We discuss the risks of over-reliance, the importance of intentional design, and the opportunities for AI to augment curiosity, creativity, and community. As always, a conversation with Adam provides a thoughtful and caring view of possible futures with AI, and it's heartening to spend time with someone so central to the future of AI who consistently thinks about humans first.

Adam will be speaking (again) at the Artificiality Summit in Bend, Oregon on Oct 23-25, 2025. More info: https://artificialityinstitute.org/summit
In this conversation, we explore the shifts in human experience with Christine Rosen, senior fellow at the American Enterprise Institute and author of "The Extinction of Experience: Being Human in a Disembodied World." As a member of the "hybrid generation" of Gen X, Christine (like us) brings the perspective of having lived through the transition from an analog to a digital world and witnessed firsthand what we've gained and lost in the process.

Christine frames our current moment through the lens of what naturalist Robert Michael Pyle called "the extinction of experience"—the idea that when something disappears from our environment, subsequent generations don't even know to mourn its absence. Drawing on over 20 years of studying technology's impact on human behavior, she argues that we're experiencing a mass migration from direct to mediated experience, often without recognizing the qualitative differences between them.

Key themes we explore:
- The Archaeology of Lost Skills: How the abandonment of handwriting reveals the broader pattern of discarding embodied cognition—the physical practices that shape how we think, remember, and process the world around us
- Mediation as Default: Why our increasing reliance on screens to understand experience is fundamentally different from direct engagement, and how this shift affects our ability to read emotions, tolerate friction, and navigate uncomfortable social situations
- The Machine Logic of Relationships: How technology companies treat our emotions "like the law used to treat wives as property"—as something to be controlled, optimized, and made efficient rather than experienced in their full complexity
- Embodied Resistance: Why skills like cursive handwriting, face-to-face conversation, and the ability to sit with uncomfortable emotions aren't nostalgic indulgences but essential human capacities that require active preservation
- The Keyboard Metaphor: How our technological interfaces—with their control buttons, delete keys, and escape commands—are reshaping our expectations for human relationships and emotional experiences

Christine challenges the Silicon Valley orthodoxy that frames every technological advancement as inevitable progress, instead advocating for what she calls "defending the human." This isn't a Luddite rejection of technology but a call for conscious choice about what we preserve, what we abandon, and what we allow machines to optimize out of existence.

The conversation reveals how seemingly small decisions—choosing to handwrite a letter, putting phones in the center of the table during dinner, or learning to read cursive—become acts of resistance against a broader cultural shift toward treating humans as inefficient machines in need of optimization. As Christine observes, we're creating a world where the people designing our technological future live with "human nannies and human tutors and human massage therapists" while prescribing AI substitutes for everyone else.

What emerges is both a warning and a manifesto: that preserving human experience requires actively choosing friction, inefficiency, and the irreducible messiness of being embodied creatures in a physical world. Christine's work serves as an essential field guide for navigating the tension between technological capability and human flourishing—showing us how to embrace useful innovations while defending the experiences that make us most fully human.

About Christine Rosen: Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on the intersection of technology, culture, and society. Previously the managing editor of The New Republic and founding editor of The Hedgehog Review, her writing has appeared in The Atlantic, The New York Times, The Wall Street Journal, and numerous other publications. "The Extinction of Experience" represents over two decades of research into how digital technologies are reshaping human behavior and social relationships.
Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit

In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Key themes we explore:

Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context

Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning

Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication

The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it

Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it

Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation

Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.

About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics. With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.

This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.
At the Artificiality Summit in October 2024, Steve Sloman, professor at Brown University and author of The Knowledge Illusion and The Cost of Conviction, catalyzed a conversation about how we perceive knowledge in ourselves, others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve challenged us to think about the dynamics of knowledge and understanding in an AI-driven world and about the evolving landscape of narratives, and to ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow?

Bio: Steven Sloman has taught at Brown since 1992. He studies higher-level cognition. He is a Fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book Causal Models: How We Think about the World and Its Alternatives, a 2017 book The Knowledge Illusion: Why We Never Think Alone co-authored with Phil Fernbach, and the forthcoming Righteousness: How Humans Decide from MIT Press. He has been Editor-in-Chief of the journal Cognition, Chair of the Brown University faculty, and created Brown’s concentration in Behavioral Decision Sciences.
At the Artificiality Summit 2024, Jamer Hunt, professor at the Parsons School of Design and author of Not to Scale, catalyzed our opening discussion on the concept of scale. This session explored how different scales—whether individual, organizational, community, societal, or even temporal—shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer guided us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session set the stage for a thought-provoking conference.

Bio: Jamer Hunt collaboratively designs open and adaptable frameworks for participation that respond to emergent cultural conditions—in education, organizations, exhibitions, and for the public. He is the Vice Provost for Transdisciplinary Initiatives at The New School (2016-present), where he was founding director of the graduate program in Transdisciplinary Design at Parsons School of Design (2009-2015). He is the author of Not to Scale: How the Small Becomes Large, the Large Becomes Unthinkable, and the Unthinkable Becomes Possible (Grand Central Publishing, March 2020), a book that repositions scale as a practice-based framework for analyzing broken systems and navigating complexity. He has published over twenty articles on the poetics and politics of design, including for Fast Company and the Huffington Post, and he is co-author, with Meredith Davis, of Visual Communication Design (Bloomsbury, 2017).
In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition.

What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

Key themes we explore:

- The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate
- Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems
- AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners
- The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data
- Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope. Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment.

The conversation concludes with Avriel's ongoing research into how algorithmic systems shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them.

For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

About Dr. Avriel Epps: Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard’s School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA. Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation.

Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness. In the Fall of 2025, she will begin her tenure as Assistant Professor of Fair and Responsible Data Science at Rutgers University.

Links:
- Dr. Epps' official website: https://www.avrielepps.com
- AI for Abolition: https://www.ai4.org
- A Kids Book About AI Bias details: https://www.avrielepps.com/book
In this wide-ranging conversation, we explore the implications of planetary-scale computation with Benjamin Bratton, Director of the Antikythera program at the Berggruen Institute and Professor at UC San Diego. Benjamin describes his interdisciplinary work as appearing like a "platypus" to others—an odd creature combining seemingly incompatible parts that somehow works as a coherent whole.

At the heart of our discussion is Benjamin's framework for understanding how computational technology literally evolves, not metaphorically but through the same mechanisms that drive biological evolution: scaffolding, symbiogenesis, niche construction, and what he calls "allopoiesis"—the process by which organisms transform their external environment to capture more energy and information.

Key themes we explore:

Computational Evolution: How artificial computation has become the primary mechanism for human "allopoietic virtuosity"—our ability to reshape our environment to sustain larger populations

The Embodiment Question: Moving beyond anthropomorphic assumptions about AI embodiment to imagine synthetic intelligence with radically different spatial capabilities and sensory arrangements

Agentic Multiplication: How the explosion of AI agents (potentially reaching hundreds of billions) will fundamentally alter human agency and subjectivity, creating "parasocial relationships with ourselves"

Planetary Intelligence: Understanding Earth itself as having evolved a computational sensory layer through satellites, fiber optic networks, and distributed sensing systems

The Paradox of Intelligence: Whether complex intelligence is ultimately evolutionarily adaptive, given that our computational enlightenment has revealed our own role in potentially destroying the substrate we depend on

Benjamin challenges us to think beyond conventional categories of life, intelligence, and technology, arguing that these distinctions are converging into something more fundamental. As he puts it: "Agency precedes subjectivity"—we've been transforming our world at terraforming scales long before we were conscious of doing so.

The conversation culminates in what Benjamin calls "the paradox of intelligence": What are the preconditions necessary to ensure that complex intelligence remains evolutionarily adaptive rather than self-destructive? As he notes, we became aware of our terraforming-scale agency precisely at the moment we discovered it might be destroying the substrate we depend on. It's a question that becomes increasingly urgent as we stand at the threshold of what could be either a viable planetary civilization or civilizational collapse—what Benjamin sees as requiring us to fundamentally rethink "what planetary scale computation is for."

About Benjamin Bratton: Benjamin Bratton is a philosopher of technology, Professor of Philosophy of Technology and Speculative Design at UC San Diego, and Director of Antikythera, a think tank researching planetary computation at the Berggruen Institute. Beginning in 2024, he also serves as Visiting Faculty Researcher at Google's Paradigms of Intelligence group, conducting fundamental research on the artificialization of intelligence.

His influential book The Stack: On Software and Sovereignty (MIT Press, 2015) develops a comprehensive framework for understanding planetary computation through six modular layers: Earth, Cloud, City, Address, Interface, and User. Other recent works include Accept All Cookies (Berggruen Press), written in conjunction with his co-curation of "The Next Earth: Computation, Crisis, Cosmology" at the 2025 Venice Architecture Biennale, and The Terraforming (Strelka), a manifesto arguing for embracing anthropogenic artificiality to compose a planet sustaining diverse life.
In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity.

Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in-person conversations like these! Learn more about the Summit at www.artificiality.world/summit.

We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?

David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While mathematically rigorous in its foundation, our conversation explores these complex ideas in accessible terms.

At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent).

Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.

Our conversation ventures into:

How AI might help us understand meaning in ways we cannot perceive ourselves

What a mathematically rigorous definition of meaning could mean for AI alignment

How contexts shape our understanding of what's meaningful

The distinction between causal information and mere correlation

We finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here.

For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers.

About David Wolpert: David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, Wolpert held positions at NASA and Stanford. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.

Thanks again to Jonathan Coulton for our music.
In this remarkable conversation, Michael Levin (Tufts University) and Blaise Agüera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.

Michael’s "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.

Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.

The conversation unfolds around several interwoven questions:

- How does genuine agency emerge from simple rule-following components?
- Why might intelligence be more fundamental than life itself?
- How do we recognize cognition in systems that operate unlike human intelligence?
- What constitutes the difference between patterns and the physical substrates expressing them?
- How might symbiosis between humans and synthetic intelligence reshape both?

Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own.
As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."

The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve, not as opposing forces but as variations on a universal principle of information processing across different substrates.

For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.

Links:
Our article on these two papers
Michael Levin’s Self-Sorting Algorithm
Blaise Agüera y Arcas’s Computational Life

------

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
In this episode, we welcome Maggie Jackson, whose latest book, Uncertain, has become essential reading for navigating today’s complex world. Known for her groundbreaking work on attention and distraction, Maggie now turns her focus to uncertainty—not as a problem to be solved, but as a skill to be cultivated.

Note: Uncertain won an Artificiality Book Award in 2024—check out our review here: https://www.artificiality.world/artificiality-book-awards-2024/

In the interview, we explore the neuroscience of uncertainty, the cultural biases that make us crave certainty, and why our discomfort with the unknown may be holding us back. Maggie unpacks the two core types of uncertainty—what we can’t know and what we don’t yet know—and explains why understanding this distinction is crucial for thinking well in the digital age.

Our conversation also explores the implications of AI—as technology increasingly mediates our reality, how do we remain critical thinkers? How do we resist the illusion of certainty in a world of algorithmically generated answers?

Maggie’s insights challenge us to reframe uncertainty—not as fear, but as an opportunity for discovery, adaptability, and even creativity. If you’ve ever felt overwhelmed by ambiguity or pressured to always have the “right” answer, this episode offers a refreshing perspective on why being uncertain might be one of our greatest human strengths.

Links:
Maggie: https://www.maggie-jackson.com/
Uncertain: https://www.prometheusbooks.com/9781633889194/uncertain/

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
In this episode, we talk with Greg Epstein—humanist chaplain at Harvard and MIT, bestselling author, and a leading voice on the intersection of technology, ethics, and belief systems. Greg’s latest book, Tech Agnostic, offers a provocative argument: Silicon Valley isn’t just a powerful industry—it has become the dominant religion of our time.

Note: Tech Agnostic won an Artificiality Book Award in 2024—check out our review here.

In this interview, we explore the deep parallels between big tech and organized religion, from sacred texts and prophets to digital congregations and AI-driven eschatology. The conversation explores digital Puritanism, the "unwitting worshipers" of tech's altars, and the theological implications of AI doomerism.

But this isn’t just a critique—it’s a call for a Reformation. Greg lays out a path toward a more humane and ethical future for technology, one that resists unchecked power and prioritizes human values over digital dogma.

Join us for a thought-provoking conversation on faith, fear, and the future of being human in an age where technology defines what we believe in.

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.




