
Artificiality: Being with AI

Author: Helen and Dave Edwards


Description

Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, including a Pro membership that provides leaders with advanced research, actionable intelligence, and insights for applying AI. Learn more at www.artificiality.world.
104 Episodes
De Kai: Raising AI

2025-09-21 · 54:57

In this conversation, we explore how humans can better navigate the AI era with De Kai, a pioneering researcher who built the web's first machine translation systems and whose work spawned Google Translate. Drawing on four decades of AI research experience, De Kai offers a different framework for understanding our relationship with artificial intelligence—moving beyond outdated metaphors toward more constructive approaches.

De Kai's perspective was shaped by observing how AI technologies are being deployed in ways that decrease rather than increase human understanding. While AI has tremendous potential to help people communicate across cultural and linguistic differences—as his translation work demonstrated—current implementations often amplify polarization and misunderstanding instead.

Key themes we explore:
- Beyond Machine Metaphors: Why thinking of AI as "tools" or "machines" is dangerously outdated—AI systems are fundamentally artificial psychological entities that learn, adapt, and influence human behavior in ways no coffee maker ever could
- The Parenting Framework: De Kai's central insight that we're all currently "parenting" roughly 100 artificial intelligences daily through our smartphones, tablets, and devices—AIs that are watching, learning, and imitating our attitudes, behaviors, and belief systems
- System One vs. System Two Intelligence: How current large language models operate primarily through "artificial autism"—brilliant pattern matching without the reflective, critical thinking capacities that characterize mature human intelligence
- Translation as Understanding: Moving beyond simple language translation toward what De Kai calls a "translation mindset"—using AI to help humans understand different cultural framings and perspectives rather than enforcing singular universal truths
- The Reframing Superpower: How AI's capacity for rapid perspective-shifting and metaphorical reasoning represents one of humanity's best hopes for breaking out of polarized narratives and finding common ground
- Social Fabric Transformation: Understanding how 800 billion artificial minds embedded in our social networks are already reshaping how cultures and civilizations evolve—often in ways that decrease rather than increase mutual understanding

Drawing on insights from developmental psychology and complex systems, De Kai's "Raising AI" framework emphasizes conscious human responsibility in shaping how these artificial minds develop. Rather than viewing this as an overwhelming burden, he frames it as an opportunity for humans to become more intentional about the values and behaviors they model—both for AI systems and for each other.

About De Kai: De Kai is Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He is Independent Director of AI ethics think tank The Future Society, and was one of eight inaugural members of Google's AI ethics council. De Kai invented and built the world's first global-scale online language translator that spawned Google Translate, Yahoo Translate, and Microsoft Bing Translator. For his pioneering contributions in AI, natural language processing, and machine learning, De Kai was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows and by Debrett's as one of the 100 most influential figures of Hong Kong.
In this conversation, we sit down with Adam Cutler, Distinguished Designer at IBM and pioneer in human-centered AI design, to explore how generative AI is reshaping creativity, reliance, and human experience. Adam reflects on the parallels between today's AI moment and past technology shifts—from the rise of Web 2.0 to the early days of the internet—and why we may be living through a "mini singularity." We discuss the risks of over-reliance, the importance of intentional design, and the opportunities for AI to augment curiosity, creativity, and community. As always, a conversation with Adam provides a thoughtful and caring view of possible futures with AI. And it's heartening to spend time with someone so central to the future of AI who consistently thinks about humans first.

Adam will be speaking (again) at the Artificiality Summit in Bend, Oregon on Oct 23-25, 2025. More info: https://artificialityinstitute.org/summit
In this episode, we bring you a lecture from the Artificiality Summit in October 2024 given by Joscha Bach. Joscha is a cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and philosophy of mind. His research aims to bridge cognitive science and AI by studying how human intelligence and consciousness can be modeled computationally.

In his lecture, Joscha explores the nature of intelligence, consciousness, and reality. Drawing from philosophy, neuroscience, and artificial intelligence, Joscha examines how minds emerge, how consciousness functions as the "conductor" of our mental orchestra, and why software and self-organization may hold the key to understanding life itself. He also reflects on animism, the possibility of machine consciousness, and the cultural meaning of large language models. A provocative talk that blends science, philosophy, and speculation on the future of minds—both human and artificial.
In this conversation, we explore the shifts in human experience with Christine Rosen, senior fellow at the American Enterprise Institute and author of "The Extinction of Experience: Being Human in a Disembodied World." As a member of the "hybrid generation" of Gen X, Christine (like us) brings the perspective of having lived through the transition from an analog to a digital world and witnessed firsthand what we've gained and lost in the process.

Christine frames our current moment through the lens of what naturalist Robert Michael Pyle called "the extinction of experience"—the idea that when something disappears from our environment, subsequent generations don't even know to mourn its absence. Drawing on over 20 years of studying technology's impact on human behavior, she argues that we're experiencing a mass migration from direct to mediated experience, often without recognizing the qualitative differences between them.

Key themes we explore:
- The Archaeology of Lost Skills: How the abandonment of handwriting reveals the broader pattern of discarding embodied cognition—the physical practices that shape how we think, remember, and process the world around us
- Mediation as Default: Why our increasing reliance on screens to understand experience is fundamentally different from direct engagement, and how this shift affects our ability to read emotions, tolerate friction, and navigate uncomfortable social situations
- The Machine Logic of Relationships: How technology companies treat our emotions "like the law used to treat wives as property"—as something to be controlled, optimized, and made efficient rather than experienced in their full complexity
- Embodied Resistance: Why skills like cursive handwriting, face-to-face conversation, and the ability to sit with uncomfortable emotions aren't nostalgic indulgences but essential human capacities that require active preservation
- The Keyboard Metaphor: How our technological interfaces—with their control buttons, delete keys, and escape commands—are reshaping our expectations for human relationships and emotional experiences

Christine challenges the Silicon Valley orthodoxy that frames every technological advancement as inevitable progress, instead advocating for what she calls "defending the human." This isn't a Luddite rejection of technology but a call for conscious choice about what we preserve, what we abandon, and what we allow machines to optimize out of existence.

The conversation reveals how seemingly small decisions—choosing to handwrite a letter, putting phones in the center of the table during dinner, or learning to read cursive—become acts of resistance against a broader cultural shift toward treating humans as inefficient machines in need of optimization. As Christine observes, we're creating a world where the people designing our technological future live with "human nannies and human tutors and human massage therapists" while prescribing AI substitutes for everyone else.

What emerges is both a warning and a manifesto: that preserving human experience requires actively choosing friction, inefficiency, and the irreducible messiness of being embodied creatures in a physical world.
Christine's work serves as an essential field guide for navigating the tension between technological capability and human flourishing—showing us how to embrace useful innovations while defending the experiences that make us most fully human.

About Christine Rosen: Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on the intersection of technology, culture, and society. Previously the managing editor of The New Republic and founding editor of The Hedgehog Review, she has written for The Atlantic, The New York Times, The Wall Street Journal, and numerous other publications. "The Extinction of Experience" represents over two decades of research into how digital technologies are reshaping human behavior and social relationships.
Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit

In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Key themes we explore:
- Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
- Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
- Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
- The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
- Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
- Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation

Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.

About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics.
With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.

This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.
At the Artificiality Summit in October 2024, Steve Sloman, professor at Brown University and author of The Knowledge Illusion and The Cost of Conviction, catalyzed a conversation about how we perceive knowledge in ourselves, others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve challenged us to think about the dynamics of knowledge and understanding in an AI-driven world and the evolving landscape of narratives, and to ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow?

Bio: Steven Sloman has taught at Brown since 1992. He studies higher-level cognition. He is a Fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book Causal Models: How We Think about the World and Its Alternatives, a 2017 book The Knowledge Illusion: Why We Never Think Alone co-authored with Phil Fernbach, and the forthcoming Righteousness: How Humans Decide from MIT Press. He has been Editor-in-Chief of the journal Cognition, Chair of the Brown University faculty, and created Brown's concentration in Behavioral Decision Sciences.
At the Artificiality Summit 2024, Jamer Hunt, professor at the Parsons School of Design and author of Not to Scale, catalyzed our opening discussion on the concept of scale. This session explored how different scales—whether individual, organizational, community, societal, or even temporal—shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer guided us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session set the stage for a thought-provoking conference.

Bio: Jamer Hunt collaboratively designs open and adaptable frameworks for participation that respond to emergent cultural conditions—in education, organizations, exhibitions, and for the public. He is the Vice Provost for Transdisciplinary Initiatives at The New School (2016-present), where he was founding director of the graduate program in Transdisciplinary Design at Parsons School of Design (2009-2015). He is the author of Not to Scale: How the Small Becomes Large, the Large Becomes Unthinkable, and the Unthinkable Becomes Possible (Grand Central Publishing, March 2020), a book that repositions scale as a practice-based framework for analyzing broken systems and navigating complexity. He has published over twenty articles on the poetics and politics of design, including for Fast Company and the Huffington Post, and he is co-author, with Meredith Davis, of Visual Communication Design (Bloomsbury, 2017).
In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition.

What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

Key themes we explore:
- The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate
- Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems
- AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners
- The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data
- Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope. Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment.

The conversation concludes with Avriel's ongoing research into how algorithmic systems shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them.

For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

About Dr. Avriel Epps: Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard's School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA. Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation.

Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness. In the Fall of 2025, she will begin her tenure as Assistant Professor of Fair and Responsible Data Science at Rutgers University.

Links:
- Dr. Epps' official website: https://www.avrielepps.com
- AI for Abolition: https://www.ai4.org
- A Kids Book About AI Bias details: https://www.avrielepps.com/book
In this wide-ranging conversation, we explore the implications of planetary-scale computation with Benjamin Bratton, Director of the Antikythera program at the Berggruen Institute and Professor at UC San Diego. Benjamin describes his interdisciplinary work as appearing like a "platypus" to others—an odd creature combining seemingly incompatible parts that somehow works as a coherent whole.

At the heart of our discussion is Benjamin's framework for understanding how computational technology literally evolves, not metaphorically but through the same mechanisms that drive biological evolution: scaffolding, symbiogenesis, niche construction, and what he calls "allopoiesis"—the process by which organisms transform their external environment to capture more energy and information.

Key themes we explore:
- Computational Evolution: How artificial computation has become the primary mechanism for human "allopoietic virtuosity"—our ability to reshape our environment to sustain larger populations
- The Embodiment Question: Moving beyond anthropomorphic assumptions about AI embodiment to imagine synthetic intelligence with radically different spatial capabilities and sensory arrangements
- Agentic Multiplication: How the explosion of AI agents (potentially reaching hundreds of billions) will fundamentally alter human agency and subjectivity, creating "parasocial relationships with ourselves"
- Planetary Intelligence: Understanding Earth itself as having evolved a computational sensory layer through satellites, fiber optic networks, and distributed sensing systems
- The Paradox of Intelligence: Whether complex intelligence is ultimately evolutionarily adaptive, given that our computational enlightenment has revealed our own role in potentially destroying the substrate we depend on

Benjamin challenges us to think beyond conventional categories of life, intelligence, and technology, arguing that these distinctions are converging into something more fundamental. As he puts it: "Agency precedes subjectivity"—we've been transforming our world at terraforming scales long before we were conscious of doing so.

The conversation culminates in what Benjamin calls "the paradox of intelligence": What are the preconditions necessary to ensure that complex intelligence remains evolutionarily adaptive rather than self-destructive? As he notes, we became aware of our terraforming-scale agency precisely at the moment we discovered it might be destroying the substrate we depend on. It's a question that becomes increasingly urgent as we stand at the threshold of what could be either a viable planetary civilization or civilizational collapse—what Benjamin sees as requiring us to fundamentally rethink "what planetary scale computation is for."

About Benjamin Bratton: Benjamin Bratton is a philosopher of technology, Professor of Philosophy of Technology and Speculative Design at UC San Diego, and Director of Antikythera, a think tank researching planetary computation at the Berggruen Institute. Beginning in 2024, he also serves as Visiting Faculty Researcher at Google's Paradigms of Intelligence group, conducting fundamental research on the artificialization of intelligence.

His influential book The Stack: On Software and Sovereignty (MIT Press, 2015) develops a comprehensive framework for understanding planetary computation through six modular layers: Earth, Cloud, City, Address, Interface, and User.
Other recent works include Accept All Cookies (Berggruen Press), written in conjunction with his co-curation of "The Next Earth: Computation, Crisis, Cosmology" at the 2025 Venice Architecture Biennale, and The Terraforming (Strelka), a manifesto arguing for embracing anthropogenic artificiality to compose a planet sustaining diverse life.
In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity.

Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in-person conversations like these! Learn more about the Summit at www.artificiality.world/summit.

We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?

David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While mathematically rigorous in its foundation, our conversation explores these complex ideas in accessible terms.

At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent).

Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.

Our conversation ventures into:
- How AI might help us understand meaning in ways we cannot perceive ourselves
- What a mathematically rigorous definition of meaning could mean for AI alignment
- How contexts shape our understanding of what's meaningful
- The distinction between causal information and mere correlation

We finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here.

For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers.

About David Wolpert: David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, Wolpert held positions at NASA, Stanford, and the Santa Fe Institute as a professor. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.

Thanks again to Jonathan Coulton for our music.
In this remarkable conversation, Michael Levin (Tufts University) and Blaise Agüera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.

Michael's "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.

Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.

The conversation unfolds around several interwoven questions:
- How does genuine agency emerge from simple rule-following components?
- Why might intelligence be more fundamental than life itself?
- How do we recognize cognition in systems that operate unlike human intelligence?
- What constitutes the difference between patterns and the physical substrates expressing them?
- How might symbiosis between humans and synthetic intelligence reshape both?

Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own. As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."

The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve, not as opposing forces but as variations on a universal principle of information processing across different substrates.

For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.

Links:
- Our article on these two papers
- Michael Levin's Self-Sorting Algorithm
- Blaise Agüera y Arcas's Computational Life

------

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
In this episode, we welcome Maggie Jackson, whose latest book, Uncertain, has become essential reading for navigating today's complex world. Known for her groundbreaking work on attention and distraction, Maggie now turns her focus to uncertainty—not as a problem to be solved, but as a skill to be cultivated.

Note: Uncertain won an Artificiality Book Award in 2024—check out our review here: https://www.artificiality.world/artificiality-book-awards-2024/

In the interview, we explore the neuroscience of uncertainty, the cultural biases that make us crave certainty, and why our discomfort with the unknown may be holding us back. Maggie unpacks the two core types of uncertainty—what we can't know and what we don't yet know—and explains why understanding this distinction is crucial for thinking well in the digital age.

Our conversation also explores the implications of AI—as technology increasingly mediates our reality, how do we remain critical thinkers? How do we resist the illusion of certainty in a world of algorithmically generated answers?

Maggie's insights challenge us to reframe uncertainty—not as fear, but as an opportunity for discovery, adaptability, and even creativity. If you've ever felt overwhelmed by ambiguity or pressured to always have the "right" answer, this episode offers a refreshing perspective on why being uncertain might be one of our greatest human strengths.

Links:
- Maggie: https://www.maggie-jackson.com/
- Uncertain: https://www.prometheusbooks.com/9781633889194/uncertain/

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
In this episode, we talk with Greg Epstein—humanist chaplain at Harvard and MIT, bestselling author, and a leading voice on the intersection of technology, ethics, and belief systems. Greg's latest book, Tech Agnostic, offers a provocative argument: Silicon Valley isn't just a powerful industry—it has become the dominant religion of our time.

Note: Tech Agnostic won an Artificiality Book Award in 2024—check out our review here.

In this interview, we explore the deep parallels between big tech and organized religion, from sacred texts and prophets to digital congregations and AI-driven eschatology. The conversation explores digital Puritanism, the "unwitting worshipers" of tech's altars, and the theological implications of AI doomerism.

But this isn't just a critique—it's a call for a Reformation. Greg lays out a path toward a more humane and ethical future for technology, one that resists unchecked power and prioritizes human values over digital dogma.

Join us for a thought-provoking conversation on faith, fear, and the future of being human in an age where technology defines what we believe in.

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
In this episode, we sit down with the ever-innovative Chris Messina—creator of the hashtag, top product hunter on Product Hunt, and trusted advisor to startups navigating product development and market strategy.

Recording from Ciel Media's new studio in Berkeley, we explore the evolving landscape of generative AI and the widening gap between its immense potential and real-world usability. Chris introduces a compelling framework, distinguishing AI as a *tool* versus a *medium*, which helps explain the stark divide in how different users engage with these technologies.

Our conversation examines key challenges: How do we build trust in AI? Why is transparency in computational reasoning critical? And how might community collaboration shape the next generation of AI products? Drawing from his deep experience in social media and emerging tech, Chris offers striking parallels between early internet adoption and today's AI revolution, suggesting that meaningful integration will require both time and a generational shift in thinking.

What makes this discussion particularly valuable is Chris's vision for the future of AI interaction—where technology moves beyond query-response models to become a truly collaborative medium, transforming how we create, problem-solve, and communicate.

Links:
- Chris: https://chrismessina.me
- Ciel Media: https://cielcreativespace.com
D. Graham Burnett will tell you his day job is as a professor of science history at Princeton University. He is also co-founder of the Strother School of Radical Attention and has been associated with the Friends of Attention since 2018. But none of those positions adequately describe Graham.

His bio says that he "works at the intersection of historical inquiry and artistic practice." He writes, he performs, he makes things. He describes himself as an attention activist. Perhaps most importantly for us, Graham helps you see the world differently—and more clearly.

Graham has powerful views on the effect of technology on our attention. We often riff on his idea that technology has fracked our attention into little commoditizable bits. His work has highly influenced our concern about what might happen if the same extractive practices of the attention economy are applied to the future AI-powered intimacy economy.

We were thrilled to have Graham on the pod for a wide-ranging conversation about attention, intimacy, and much more.

Links:
- https://dgrahamburnett.net
- https://www.schoolofattention.org
- https://www.friendsofattention.net

---

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email: https://www.artificiality.world

Thanks to Jonathan Coulton for our music.
At the Artificiality Summit 2024, Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute, gave a lecture about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He sketched out a space of possibilities—freedom of embodiment—which facilitates imagining a hopeful future of "synthbiosis", in which AI is just one of a wide range of new bodies and minds.

Bio: Michael Levin, Distinguished Professor in the Biology department and Vannevar Bush Chair, serves as director of the Tufts Center for Regenerative and Developmental Biology. Recent honors include the Scientist of Vision award and the Distinguished Scholar Award. His group's focus is on understanding the biophysical mechanisms that implement decision-making during complex pattern regulation, and harnessing endogenous bioelectric dynamics toward rational control of growth and form. The lab's current main directions are:
- Understanding how somatic cells form bioelectrical networks for storing and recalling pattern memories that guide morphogenesis;
- Creating next-generation AI tools for helping scientists understand top-down control of pattern regulation (a new bioinformatics of shape); and
- Using these insights to enable new capabilities in regenerative medicine and engineering.

www.artificiality.world/summit
Our opening keynote from the Imagining Summit held in October 2024 in Bend, Oregon. Join us for the next Artificiality Summit on October 23-25, 2025! Read about the 2024 Summit here: https://www.artificiality.world/the-imagining-summit-we-imagined-and-hoped-and-we-cant-wait-for-next-year-2/ And join us for the 2025 Summit here: https://www.artificiality.world/summit/
First:
- Apologies for the audio! We had a production error…

What's new:
- DeepSeek has created breakthroughs in both how AI systems are trained (making it much more affordable) and how they run in real-world use (making them faster and more efficient).

Details
- FP8 Training: Working With Less Precise Numbers
  - Traditional AI training requires extremely precise numbers
  - DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86)
  - Cut memory and computation needs significantly with minimal impact
  - Like teaching someone math using rounded numbers instead of carrying every decimal place
- Learning from Other AIs (Distillation)
  - Traditional approach: AI learns everything from scratch by studying massive amounts of data
  - DeepSeek's approach: Use existing AI models as teachers
  - Like having experienced programmers mentor new developers
- Trial & Error Learning (for their R1 model)
  - Started with some basic "tutoring" from advanced models
  - Then let it practice solving problems on its own
  - When it found good solutions, these were fed back into training
  - Led to "Aha moments" where R1 discovered better ways to solve problems
  - Finally, polished its ability to explain its thinking clearly to humans
- Smart Team Management (Mixture of Experts) (see the sketch after this briefing)
  - Instead of one massive system that does everything, built a team of specialists
  - Like running a software company with: 256 specialists who focus on different areas, 1 generalist who helps with everything, and a smart project manager who assigns work efficiently
  - For each task, only need 8 specialists plus the generalist
  - More efficient than having everyone work on everything
- Efficient Memory Management (Multi-head Latent Attention)
  - Traditional AI is like keeping complete transcripts of every conversation
  - DeepSeek's approach is like taking smart meeting minutes
  - Captures key information in compressed format
  - Similar to how JPEG compresses images
- Looking Ahead (Multi-Token Prediction)
  - Traditional AI reads one word at a time
  - DeepSeek looks ahead and predicts two words at once
  - Like a skilled reader who can read ahead while maintaining comprehension

Why This Matters
- Cost Revolution: Training costs of $5.6M (vs hundreds of millions) suggest a future where AI development isn't limited to tech giants.
- Working Around Constraints: Shows how limitations can drive innovation—DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least that's the best conclusion at the moment).

What's Interesting
- Efficiency vs Power: Challenges the assumption that advancing AI requires ever-increasing computing power - sometimes smarter engineering beats raw force.
- Self-Teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.
- AI Teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.
- IP for Free: If DeepSeek can be such a fast follower through distillation, what's the advantage of OpenAI, Google, or another company to release a novel model?
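To make the Mixture of Experts idea above concrete, here is a minimal, purely illustrative Python sketch of top-k expert routing (the "8 of 256 specialists" pattern described in the briefing). It is not DeepSeek's implementation: the random-projection router, the expert count, and the top-k value are stand-ins for what, in a real model, are learned parameters inside each transformer layer.

```python
import numpy as np

def route_tokens(token_vectors, num_experts=256, top_k=8, seed=0):
    """Toy top-k routing: score each token against every expert and keep
    only the top_k experts per token. A real MoE router is a learned layer;
    the random projection here is just a placeholder for illustration."""
    rng = np.random.default_rng(seed)
    dim = token_vectors.shape[1]
    # Stand-in for a learned router weight matrix (dim x num_experts).
    router = rng.standard_normal((dim, num_experts))
    scores = token_vectors @ router                        # (tokens, experts)
    top_experts = np.argsort(scores, axis=1)[:, -top_k:]   # top_k expert ids per token
    # Softmax over the selected experts' scores to get mixing weights.
    selected = np.take_along_axis(scores, top_experts, axis=1)
    weights = np.exp(selected - selected.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return top_experts, weights

# Example: 4 tokens with 16-dimensional embeddings each.
tokens = np.random.default_rng(1).standard_normal((4, 16))
experts, weights = route_tokens(tokens)
print(experts.shape, weights.shape)  # (4, 8) (4, 8): 8 experts active per token
```

The point of the pattern is that only the selected experts' parameters are used for any given token, which is why a very large model built this way can be comparatively cheap to run.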
We're excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, 'The Cleaners,' about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals.

We invited Hans and Moritz to the podcast to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants—and the people who keep them living on.

We found the film to be quite powerful. At times inspiring and at others disturbing and distressing. Can a generative ghost help people through their grief or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefitting their users or extracting from them?

Eternal You is a powerful and important film. We highly recommend taking the time to watch it—and allowing for time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care. Bravo.

------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
Briefing: How AI Affects Critical Thinking and Cognitive Offloading

What This Paper Highlights
- The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading—delegating mental tasks to AI systems.
- Key finding: Frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
- Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.

Why This Is Important
- Shaping Minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, it has profound implications for education, work, and citizenship.
- Generational Divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
- Education and Policy: There's an urgent need for strategies to balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.

What's Curious and Interesting
- Cognitive Shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
- AI's Double-Edged Sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
- Education as a Buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.

What This Tells Us About the Future
- Critical Thinking at Risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
- Educational Reforms Needed: Active learning strategies and media literacy are essential to counterbalance AI's convenience, teaching people how to engage critically even when AI offers "easy answers."
- Shifting Cognitive Norms: As AI takes over more routine tasks, we may need to redefine what skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.

Paper: AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich
https://www.mdpi.com/2075-4698/15/1/6