
Friction

Author: Philosophy


Description

On this podcast, I interview philosophers and other academics on fascinating philosophical and philosophy-adjacent topics.

fric.substack.com
95 Episodes
What are intuitions, and are they indispensable to our knowledge?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Marc Moffett is associate professor at the University of Texas at El Paso, and his work has focused on epistemology, philosophy of language, philosophy of mind, and metaphysics.
Check out his book with Cambridge Elements, "The Indispensability of Intuitions"!
https://www.cambridge.org/core/books/indispensability-of-intuitions/6F7C18793C39B08507716DD934E4C6A2
https://a.co/d/0bsB4MX1

2. Book Summary
Marc A. Moffett’s The Indispensability of Intuitions argues that rational intuitions are not mystical or mysterious, but rather a ubiquitous and essential feature of human cognition. Defending a stance called “moderate dogmatism,” Moffett contends that intuitions serve as basic sources of evidence alongside perception and introspection. He posits that rejecting the role of intuitions would undermine our knowledge on a massive scale, rendering them epistemically indispensable for almost all knowledge, whether a priori or a posteriori.

A central part of Moffett’s argument involves rejecting the prevalent idea that the epistemic weight of intuitions (and other “seemings”) relies on a specific “presentational phenomenology” or conscious “feel”. Through thought experiments involving “Cartesian zombies,” he demonstrates that phenomenological properties are not what confer epistemic justification. Instead, he introduces the Attitudinal Theory of Presentationality (ATP), which characterizes presentational states by a unique cognitive posture—specifically, an involuntary “apprehending-as-actual” of certain contents. This non-phenomenological approach successfully addresses skepticism, such as Timothy Williamson’s “Absent Intuition Challenge,” by showing that intuitions can rationally guide our doxastic inclinations without requiring a distinct, introspectively obvious phenomenology.

Building on this non-phenomenological foundation, Moffett demonstrates the widespread payoff of his theory by linking intuitions directly to concept application. He explains that philosophical thought experiments, such as the famous Gettier cases, rely on these concept-application intuitions to guide our judgments. Furthermore, Moffett expands his scope to argue that acts of explicit inference, as well as the higher-level presentational contents of normal perceptual experiences, fundamentally rely on the application of concepts, and therefore on intuitions. Consequently, intuitions are not just tools for abstract philosophy, but are intimately integrated into nearly all of our everyday cognitive functioning.

3. Interview Chapters
00:00 - Introduction
00:54 - What are intuitions?
03:06 - Absent intuition worry
06:55 - John Bengson
08:22 - Terminological dispute
12:20 - Methodological worry
14:53 - Moderate dogmatism
18:38 - Foundationalism
23:10 - Internalism
26:39 - Blindsight
30:10 - Zombie argument
36:52 - Rejoinder
43:09 - Non-phenomenal presentational dogmatism
45:48 - Upshot
47:47 - Another rejoinder
51:48 - Indispensability
55:46 - Are intuitions needed?
59:47 - Intuitions as content-determining
1:02:07 - Animal concepts
1:06:10 - Inferences
1:08:39 - Inference without reckoning
1:10:59 - Philosophy without intuitions?
1:14:14 - Ethics
1:17:29 - Perceptual experience
1:23:54 - Value of philosophy
1:27:32 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
If Wittgenstein is right, the mystery of “private experience” doesn’t point to hidden inner objects or an incommunicable language of sensation, but to a philosophical picture that makes our ordinary talk about pain and perception look far more puzzling than it is.

My links: https://linktr.ee/frictionphilosophy

1. Guest
Michael Hymers is Munro Professor of Metaphysics at Dalhousie University, Canada, and his work has focused primarily on Wittgenstein, 20th-century philosophy, epistemology, and philosophy of language.
Check out his book with Cambridge Elements, "Wittgenstein on Private Language, Sensation and Perception"!
https://www.cambridge.org/core/books/wittgenstein-on-private-language-sensation-and-perception/BC7058BF509740A839271C98B084F176
https://a.co/d/05nGUE5I

2. Book Summary
Michael Hymers argues that Ludwig Wittgenstein’s discussion of private language in Philosophical Investigations §§243–315 is best read not as “the” Private Language Argument (centered on the diary passage at §258), and not as an attempt to prove that language is intrinsically social. Instead, the book presents Wittgenstein’s treatment as a cluster of arguments, examples, and reminders whose central target is a picture: the temptation to treat sensations and perceptual experiences as private objects located in a private “phenomenal space,” and to model sensation-words on an “object-and-name” scheme. Hymers frames this as continuous with Wittgenstein’s earlier work (including The Big Typescript) and with his shift away from assumptions carried over from the Tractatus Logico-Philosophicus about how naming works. Methodologically, the book emphasizes Wittgenstein’s therapeutic/clarificatory aim: dissolving philosophical confusion by giving an overview of our “grammar,” rather than issuing deep theses or scientific-style explanations.

A large part of the book (roughly its middle sections) explains why the “private object in phenomenal space” picture is unstable, and why it makes the very idea of a private sensation-language look deceptively natural. Hymers traces Wittgenstein’s doubts to the earlier critique of sense-data and of treating visual or tactile “space” as if it worked like physical space—where measurement, re-identification, and objecthood behave very differently. He then distinguishes “ordinary” privacy (e.g., the mundane fact that pains are my pains in the sense that I’m the one who manifests them) from stronger “superprivacy,” and separates epistemic privacy (who can know) from ontological privacy (what sort of thing a pain is). Against the idea that first-person authority rests on privileged inner access to private objects, Hymers highlights Wittgenstein’s alternative: first-person present-tense psychological utterances (“I am in pain,” etc.) function paradigmatically as expressions or avowals rather than as reports based on observation, so their asymmetry with third-person claims is grammatical, not a deliverance of a private epistemic channel.

In the latter half, Hymers organizes the interpretive landscape around several “waves” of reading Wittgenstein’s anti–private-language materials—moving from verification/memory worries, to problems about private ostensive definition, to rule-following, and finally to broader “stage-setting” concerns (what has to be in place for something to count as naming, attending, or grasping a rule at all). Key thought experiments are used to pry us away from the object-and-name model: the “human manometer” shows that even if a diary-sign ‘S’ correlates with a bodily measure, it can become pointless to insist on a hidden inner act of correctly identifying the sensation—suggesting that the “misidentification” knob is ornamental if sensations are treated as detached inner objects. And the “beetle in a box” at PI §293 is presented as the most explicit pressure against thinking that sensation-words get their meaning by privately baptizing inner items: if the term belongs to a shared practice, the private “thing in the box” is not what gives it its role, and treating sensations as if they were objects is precisely the misleading picture doing the damage. The epilogue’s upshot is not behaviorism or the denial of experience, but a diagnostic: the philosophical “problem” is generated by a grammatical fiction that holds us captive, and Wittgenstein’s aim is to restore clarity about how our sensation- and perception-talk actually works.

3. Interview Chapters
00:00 - Introduction
01:06 - Overview of element
03:39 - Methodology
09:31 - Interpreting Wittgenstein
13:57 - Private language
18:01 - First wave: skepticism
22:17 - Second wave: definition
27:22 - Third wave: social
34:10 - Wittgenstein on Kripke
37:22 - Fourth wave: stage-setting
49:23 - Pains and sensations
52:52 - Problem for private languages
54:23 - Difference from second wave
56:46 - Objections
1:01:31 - Avoiding behaviorism
1:07:00 - Inverted spectrum
1:14:17 - Infallibility
1:17:07 - Objection
1:21:55 - Upshots
1:25:15 - Value of philosophy
1:26:33 - Conclusion
Are gender and sexuality really two neat boxes, or are they better understood as positions in a multidimensional space where people can differ by degree rather than kind?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Kevin Richardson is Associate Professor of Philosophy at Duke University, and his work has focused on metaphysics, language, and social reality.
Check out his book, "The End of Binaries: How Gender and Sexuality Come in Degrees"!
https://academic.oup.com/book/61709
https://a.co/d/04PYhWSf

2. Book Summary
Kevin Richardson’s The End of Binaries: How Gender and Sexuality Come in Degrees argues that many contemporary fights over gender and sexuality are fueled by an overly rigid “binary” picture—one that treats people as cleanly classifiable into just two genders (male/female) and two orientations (straight/gay). The book begins by emphasizing the real-world stakes of this picture—how the gender binary is defended not only by conservatives but also, in some contexts, by “gender critical” feminists, and how those defenses show up in social practices and legislation. Against this background, Richardson proposes a different organizing framework: instead of asking which category someone belongs to, we should think of gender and sexual orientation more like “where you live” in a space—something that can be described coarsely (city/state) or very precisely (GPS coordinates), depending on the conversational purpose.

The core metaphysical proposal is the “spatial theory.” On this view, we should distinguish gender itself from gender categories: gender is an underlying space of features, while categories like man, woman, and non-binary are socially recognized regions within that space; likewise for sexual orientation and sexual-orientation categories. Thinking spatially makes it straightforward to explain “in-between” and hard-to-classify cases: indeterminacy arises because people often use the same terms to organize overlapping regions, and scalar variation is fundamental—one can be a man (or gay/straight) to a greater or lesser degree, rather than only “all-or-nothing.” The book also uses this framework to explain why crisp definitions of gender/orientation categories are so elusive: categories are structured around prototypes (central examples) rather than necessary-and-sufficient conditions, and our difficulty in defining them is compared to the difficulty of verbally specifying an exact geometric shape.

Building on the same model, Richardson argues that sexual orientation categories are constructed by communities organizing social life around certain regions of sexual-orientation space and “conferring” category-status by resemblance to prototypes; the result is that our standard labels can be much coarser than the underlying reality they’re trying to track. He also connects the metaphysics to language and politics: disputes like “Trans women are women” are treated as negotiations over which gender “perspectives” (bundles of norms) a community will coordinate on, so meaning-talk and social-world-making are tightly linked. In the concluding “Binary Abolition” discussion, the book rejects both (i) simply eliminating all categories and (ii) replacing binaries with hyper-granular “micro-categories,” recommending instead a positive project of spatial abolition: learning to think and talk in ways that reflect the underlying spaces, with more context-sensitive and purpose-sensitive ways of “locating” ourselves socially—just as we do when describing physical location.

3. Interview Chapters
00:00 - Introduction
00:42 - Overview of book
05:01 - Semantics vs. ontology
10:18 - Descriptive vs. prescriptive
14:50 - Gender binaries
20:47 - Biological binaries
25:07 - Gender norms
32:47 - Linguistic constraints
37:15 - Social accounts
47:07 - Haggling usage
53:07 - Spatial theory of gender
59:38 - Simplicity vs. informativeness
1:07:12 - Gender kinds
1:12:53 - Vagueness
1:23:14 - Abolitionism
1:27:15 - Social issues
1:34:47 - Making progress
1:41:01 - Value of philosophy
1:44:50 - Conclusion
1. Guest
Daniel Nicholson is Assistant Professor of Philosophy at George Mason University, and his work has focused on the philosophy of science, in particular biology and the life sciences.
Check out his book with Cambridge Elements, "What is Life? Revisited"!
https://www.cambridge.org/core/elements/abs/what-is-life-revisited/E6B3EA136720CF50C9480ADB8F41A6F4
https://a.co/d/5aBcmau

2. Book Summary
Daniel Nicholson’s What Is Life? Revisited reassesses Erwin Schrödinger’s famous 1944 book What Is Life?—a work that’s widely cited but, Nicholson argues, rarely engaged with carefully—and asks how well Schrödinger’s core ideas have held up. Nicholson reconstructs Schrödinger’s main argument, then evaluates it via two extended critiques (of the “order-from-order” and “order-from-disorder” principles), before turning to the book’s historical influence on molecular biology and (using archival sources) Schrödinger’s deeper motivations for writing it.

On Nicholson’s reconstruction, Schrödinger’s central move is to contrast the statistical “order-from-disorder” explanations common in physics and chemistry with a distinctively biological “order-from-order” picture: biological regularities, he thinks, depend on microscopic structural order in hereditary material being amplified into macroscopic organismic order. He proposes that genes must be extraordinarily stable because they are solid-state structures—an “aperiodic crystal” whose nonrepetitive organization can encode a “meaningful design” rather than a simple periodic pattern. On this basis, Schrödinger treats the organism as a kind of “clockwork” mechanism and even suggests that biology may involve “other laws of physics” (not a rejection of physics, but new non-statistical principles suited to living matter). He also offers his influential thermodynamics discussion: organisms avoid equilibrium by importing free energy—his famous (if controversial) talk of feeding on “negative entropy.”

Nicholson’s bottom line is that Schrödinger’s emphasis on rigidity, specificity, and a gene-centered “order-from-order” program powerfully shaped molecular biology’s self-image—helping to normalize an engineering-style, deterministic picture of the cell (e.g., “molecular machines,” wiring-diagram thinking, and circuit-like pathway depictions). But Nicholson argues that much of this inherited picture is increasingly in tension with experimental work that foregrounds stochasticity, dynamical flexibility, and non-classical self-organizing processes—pushing researchers toward more statistical (rather than purely mechanical) explanatory strategies. Finally, Nicholson contends that to understand why Schrödinger framed biology this way, we should see What Is Life? as part of Schrödinger’s broader fight against the orthodox (Copenhagen) interpretation of quantum mechanics: his biological proposals were, in effect, entangled with an attempt to defend a more deterministic worldview and to oppose Bohr-inspired extensions of quantum indeterminacy into biology. The payoff of rereading Schrödinger now, Nicholson suggests, isn’t that the book is straightforwardly right, but that it clarifies how we arrived at our current image of the cell—and how that image may be due for revision.

3. Interview Chapters
00:00 - Introduction
00:32 - Background
03:26 - Why did he write it?
08:19 - Biological order
14:08 - Order from disorder
17:37 - Not applicable to life
20:27 - Hereditary substance
22:58 - Gene-centric view
31:35 - Entropy
39:12 - Negative entropy
41:24 - New laws
48:51 - Modern developments
51:26 - Determinism and free will
1:03:09 - Helpful aspects
1:04:42 - Lessons to learn
1:13:11 - Value of philosophy
1:20:20 - Conclusion
What is error, and what is scientific error? Douglas Allchin explores the various types of scientific errors, how to identify them, and how to do science in light of them.

My links: https://linktr.ee/frictionphilosophy

1. Author
Douglas Allchin is an AAAS Fellow and Resident Fellow at the Minnesota Center for the Philosophy of Science, and his work has primarily focused on the history and philosophy of science.
Check out his book, "Toward a Philosophy of Error in Science"!
https://global.oup.com/academic/product/toward-a-philosophy-of-error-in-science-9780197827673
https://a.co/d/iobiDIc

2. Book Summary
Douglas Allchin’s Toward a Philosophy of Error in Science argues that scientific error shouldn’t be treated as an embarrassing sideshow to “real” science, but as something integral to how science actually learns and progresses. Instead of assuming that good methods straightforwardly yield reliable knowledge, Allchin urges a systematic “philosophy of error” that tracks how a claim can be justified at one time and later become unjustified—i.e., how changes in evidence, framing, and reasoning can overturn what once looked reasonable.

The book develops an “inventory” of error types across three layers of scientific justification. At the observational layer, errors can stem from material contamination, instrument problems, sampling and measurement misframing (like small samples, proxies, or confounders), and observer effects and biases. At the conceptual layer, mistakes arise in inference and interpretation—overgeneralization, faulty assumptions, confirmation bias, and culturally inflected biases, alongside a meta-risk Allchin calls “epistemic hubris” (the idea that these pitfalls only happen to other scientists). At the social layer, scientific discourse and institutions can also entrench errors (through weak vetting, communal biases, or distorted incentives), even though—ideally—organized skepticism and reciprocal criticism are supposed to help filter mistakes.

Finally, Allchin focuses on how errors are actually found and remedied: they don’t “announce themselves,” and there’s no single “error-correction method”—correction can be slow, uneven, and sometimes driven by contingencies rather than a tidy mechanism. Against the comforting slogan that science is simply “self-correcting,” he argues we should be more explicit about when and how peer review and replication succeed or fail, and then manage error more deliberately. A key payoff is rethinking what counts as epistemic progress: “negative knowledge” (learning what’s not the case, and why) is still genuine knowledge, and improving reliability often means actively probing for hidden sources of error rather than only accumulating confirming evidence.

3. Interview Chapters
00:00 - Introduction
00:56 - Overview of book
02:12 - Error
09:08 - Uncertainty
11:42 - Epistemology
13:33 - Vagueness
17:38 - First layer of error: raw data
29:30 - Second layer of error: conceptual
50:25 - Third layer of error: social
1:10:46 - Recognizing error
1:22:34 - Resolving error
1:26:10 - Humans and history
1:29:18 - Useful biases
1:36:03 - Negative knowledge
1:41:49 - Pessimistic meta-induction
1:47:42 - Value of philosophy
1:50:23 - Conclusion
What is discrimination, and what makes it wrongful?

My links: https://linktr.ee/frictionphilosophy

1. Author
Kasper Lippert-Rasmussen is professor of political theory at the University of Aarhus, Denmark. His work has focused primarily on applied and normative ethical issues.
Check out his Cambridge Element, “Wrongful Discrimination”!
https://www.cambridge.org/core/elements/wrongful-discrimination/6E0371A0B8D60E14E657153706F6F3EC
https://a.co/d/fjqivMb

2. Book Summary
Lippert-Rasmussen’s Wrongful Discrimination asks what “discrimination” is and, more importantly, what makes it wrongful when it is. He starts by distinguishing mere (generic) discrimination—just differentiating—from “group discrimination,” where people are treated differently because they’re seen as members of socially salient groups (race, gender, religion, etc.). He then maps key varieties of group discrimination (especially direct vs. indirect, plus structural patterns), and stresses that “wrongful” and “morally impermissible” can come apart: discrimination can wrong someone even in cases where (all things considered) an act might still be permissible, and vice versa.

The core of the book is a critical survey of three leading families of explanations for wrongfulness: harm-based views, disrespect-based views, and views that tie wrongfulness to sustaining or expressing relations of social inequality (a “social equality”/relational-egalitarian approach). Lippert-Rasmussen argues that each can explain many paradigm cases of wrongful direct discrimination, but each runs into serious trouble once you press on hard cases—e.g., cases that look wrongful without straightforward harm, or cases where harms are present but don’t seem to generate a complaint in the right way.

He then uses three especially important “non-paradigmatic” domains—indirect discrimination, implicit-bias discrimination, and algorithmic discrimination—to test these theories. The upshot is pessimistic about any single master explanation: these phenomena often don’t fit neatly under standard categories (prompting proposals like a third category beyond direct/indirect discrimination), and they expose systematic gaps in harm-, disrespect-, and social-equality accounts as usually formulated. Overall, he concludes that the prospects for a monistic theory of what makes discrimination wrongful are dim, and that we may need a more pluralistic (or significantly revised) framework.

3. Interview Chapters
00:00 - Introduction
00:43 - What is “discrimination”?
07:17 - Irrelevant features
10:48 - Framing the project
18:43 - Socially salient groups
23:44 - Connection with the law
26:49 - Empirical research
28:04 - Vagueness
33:12 - Political beliefs
35:07 - Direct and indirect discrimination
38:14 - Worry about indirect discrimination
43:35 - Statistical discrimination
46:24 - Different category?
48:41 - Structural discrimination
52:40 - Wrongful discrimination
55:09 - Rejoinder
1:03:02 - Harm-based accounts
1:06:53 - Respect-based accounts
1:11:11 - Intent
1:13:19 - Equality-based accounts
1:19:16 - Monistic accounts
1:23:05 - Value of philosophy
1:27:10 - Conclusion
What is the mind, and how do we address the hard problem?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Joseph Mendola is Professor of Philosophy at the University of Nebraska, Lincoln. His work covers a range of topics, including ethics, metaphysics, and mind.
Check out his book, "The Neural Structure of Consciousness"!
https://www.cambridge.org/core/books/neural-structure-of-consciousness/C7CDE1BEC7582CBE10F6875F56D5EBE0
https://a.co/d/3xmkBMz

2. Book Summary
Joseph Mendola’s The Neural Structure of Consciousness tackles the “hard problem” by asking how phenomenal features of experience (especially sensory qualia) relate to the physical features of the nervous system, aiming for a physicalist, internalist account that uses color experience as the central test case. The guiding idea is that the rich apparent structure of what we experience—e.g., the way colors stand in relations of similarity, opposition, and inclusion—can be explained by the real modal structure of the neurophysiology that makes those experiences possible: which neural states are available as alternatives, how they exclude or entail others, and how that “space of possibilities” is built into our visual system. Mendola frames this as a “MOUDD” approach: explaining sensory qualia by matching the modal structure of experience to the modal structure of the underlying neurophysiology, while treating many of the “properties” experience seems to present (like phenomenal colors “out there” on objects) as in significant respects illusory.

A core commitment of the book is a version of the “whole nervous system” model: rather than locating consciousness in some sharply bounded neural correlate, Mendola argues (with qualifications) that the relevant nervous-system-wide organization bridging sensory receptors and action is what constitutes sensory phenomenality. In detail, he proposes that each particular quale (e.g., a specific red-at-a-location) is constituted by a distinct “modal filament” that links stimulation to action within a fixed background, where the filament is individuated modally (by how it can vary and what alternatives it rules in/out), not necessarily by a single spatial pathway or by representational “information content.” This framework is then used to make sense of introspection and the feel of experience without leaning on standard representationalist machinery, by stressing how actual neural states and their “real possibilities” can be dynamically relevant to what we do and say.

The later chapters broaden the application: from color to other senses, then to the layered structure of visual space (including the way experience can attribute properties both to a “visual field” and to robust external objects), and finally to temporal experience, causal experience, and the sense of robust particularity. In discussing time, Mendola engages Husserl-style retentional structure (retention/primal impression/protention) and argues that any adequate view must respect the phenomenology of motion and temporal content in experience. The concluding material confronts familiar anti-physicalist challenges (the “explanatory gap,” bats, zombies, inverted spectra, and Mary) and responds in part by emphasizing differences in concepts and cognitive access: e.g., Mary’s “new knowledge” is cast as acquiring an experience-based concept and learning a coreference claim rather than learning an extra nonphysical fact.

3. Interview Chapters
00:00 - Introduction
00:54 - The hard problem
06:51 - Dualism
10:06 - Panpsychism
12:44 - Panpsychist rejoinders
15:28 - Modal structure
24:13 - Modal structure of neurophysiology
27:22 - Description-sensitivity
32:00 - Identity
34:52 - Type identity theory
36:27 - Boltzmann brains
39:17 - Correlations vs. identity
43:54 - Phenomenal concepts
45:56 - Zombies and inverts
50:07 - A priori reasoning
51:47 - Color experience
57:38 - Are colors real?
1:02:39 - Other senses
1:04:41 - Unity of consciousness
1:09:41 - Unconscious mental states
1:12:29 - Animal consciousness
1:15:48 - Vagueness
1:16:55 - Functionalism
1:20:48 - Artificial intelligence
1:21:28 - Paul Thagard's approach
1:25:51 - Progress
1:27:11 - Value of philosophy
1:28:32 - Conclusion
What is deception, and can it occur without an intention to mislead, especially when the person being deceived is oneself?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Vladimir Krstić is Assistant Professor at the United Arab Emirates University, and his work focuses on the philosophy of mind, language, and the philosophy of deception.
Check out his book with Cambridge Elements, "Deception and Self-Deception"!
https://www.cambridge.org/core/books/deception-and-selfdeception/F245F27D1A823DB21CC24B9C2D161C7A

2. Book Summary
Vladimir Krstić argues that the main puzzles about self-deception come from starting with the wrong theory of interpersonal deception. Traditional “intentionalist” accounts say deception requires an intention to mislead; when that model is applied to self-deception, it generates classic paradoxes (roughly: you’d have to knowingly trick yourself).

His alternative is a functional account: something counts as deceptive when its function is to mislead—so deception (including self-deception) may be intentional, but it needn’t be, and crucially it’s never merely accidental or a simple mistake. This functional framework is meant to unify human deception, self-deception, and biological deception under one analysis.

On the self-deception side, he applies the same functional idea to explain familiar “motivated” cases (e.g., rationalizing away distressing evidence) without requiring an intention to self-deceive, and he suggests a practical marker: self-deception often shows up as a motivated departure from one’s normal standards—being “not oneself.” He also argues against the idea that self-deception must be beneficial or adaptive; some forms can be neutral or even harmful, so it calls for case-by-case treatment.

3. Interview Chapters
00:00 - Introduction
00:50 - Overview of the book
11:09 - Intention
17:58 - Is deception always wrong?
29:25 - Functional account
36:29 - Function
43:08 - Sci-fi case
48:13 - Vagueness
53:45 - Objections
57:51 - Self-deception
1:02:15 - Function and self-deception
1:09:12 - Semantics
1:17:27 - Value of philosophy
1:24:33 - Conclusion
If quantum mechanics forces us to rethink what a “measurement outcome” even is, can experiments still count as genuine evidence for any scientific theory?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Emily Adlam is Assistant Professor of Philosophy at Chapman University, and her work focuses on physics, especially quantum physics, and the philosophy of physics.
Check out her book, "Saving Science from Quantum Mechanics: The Epistemology of the Measurement Problem"!
https://global.oup.com/academic/product/saving-science-from-quantum-mechanics-9780197808856
https://www.amazon.com/dp/0197808859/

2. Book Summary
Emily Adlam’s Saving Science from Quantum Mechanics argues that the quantum “measurement problem” isn’t just a puzzle about what exists (wavefunctions, worlds, collapses, etc.), but a threat to the epistemology of science—our right to treat experimental outcomes as evidence. She frames the central demand as a kind of “closing the circle”: a viable physical story of measurement should be coherent with the idea that measurement outcomes genuinely provide information about what’s measured. Against the background of ordinary assumptions about measurement (value-definiteness, veracity, unique outcomes, shareable records, reliable memory), quantum mechanics and results like contextuality make it hard to keep the whole intuitive package, which means some “solutions” risk making scientific knowledge fragile or even impossible.

The book then evaluates leading families of responses to the measurement problem by asking whether they preserve empirical confirmation. For Everettian (many-worlds) approaches, Adlam emphasizes the “probability problem” as an epistemic problem: if we can’t explain why observed relative frequencies should confirm the theory, Everettian QM risks empirical incoherence—undermining the very evidence that would support it. She also examines “observer-relative” approaches (including perspectival/neo-Copenhagen, relational QM, and possibly QBism), characterized by universal unitary dynamics plus unique outcomes that are nevertheless relativized to observers; a key worry is that this picture strains the expectation that different observers can straightforwardly share and align records of outcomes.

Stepping back, Adlam’s through-line is that you don’t get to quarantine these issues inside “interpretation”: changing our conception of measurement reshapes what counts as evidence for any scientific theory, since no theory is empirically confirmed without observation and measurement. She uses this lens to assess Bayesian/decision-theoretic moves and their limits for “sceptical” hypotheses like multiverses, where even the relevant priors may be ill-defined without a broader belief-revision story. And she presses that some stances—e.g. “intersubjective QBism” that severs the link between quantum states/probabilities and observed frequencies—would drain quantum mechanics of empirical content and thus of confirmation.

3. Interview Chapters
00:00 - Introduction
00:54 - The measurement problem
05:14 - Shut up and calculate
07:00 - Different senses of "measurement"
09:11 - Bootstrapping
10:18 - Relevance to scientific practice
13:18 - Quantum bayesianism
17:46 - Many worlds
20:05 - Recovering the Born rule
32:21 - Bohmian mechanics
36:09 - Probability
37:58 - All-at-once laws
42:54 - Anti-Humeanism
45:12 - Superdeterminism
48:56 - Naturalness
50:15 - Retrocausality
54:33 - Primitive ontology
57:51 - Fundamentality
1:01:41 - Consistent histories
1:04:38 - Saving quantum mechanics
1:07:25 - Making progress
1:08:38 - Value of philosophy
1:10:20 - Conclusion
What if the deepest question about “you” isn’t whether you’re the same person over time, but which future life it’s actually rational for you to anticipate and care about as your survival?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Trenton Merricks is Commonwealth Professor of Philosophy at the University of Virginia, and his work focuses primarily on metaphysics, but also religion, epistemology, language and mind. In this interview, we discuss his book, "Self and Identity".

2. Book Summary
In Self and Identity, Trenton Merricks argues that a lot of debate about “personal identity” mixes together two different questions. The first is his What Question: what it is for a future person to have, at that future time, what matters in survival for you. His answer is that survival-relevance is constituted by what it’s appropriate for you to first-personally anticipate and to have future-directed self-interested concern about—where “appropriate” is a distinctive, non-evidential and non-moral norm. He also insists we shouldn’t conflate what matters in survival with what matters to you about the future in general (friends, projects, agency, etc.), since that conflation can distort arguments about survival.

The second is his Why Question: what relation to a future conscious person explains why that future person will have what matters in survival for you. Merricks’s headline view is: identity is not what matters in survival, but identity delivers what matters in survival—i.e., numerical identity is (on his favored endurance picture) the right kind of explanation for why survival obtains. He then defends both the sufficiency and the necessity of personal identity for survival, targeting Parfit-style fission reasoning in particular and arguing that (depending on one’s metaphysics of persistence) Parfit’s argument can be blocked; he also rejects the idea that unbranching psychological connectedness/continuity is sufficient for personal identity (and so for what matters in survival).

Chapters 4–6 then stress-test rival “psychological” answers to the Why Question—views that tie survival to having the same self (values/desires/projects), the same self-narrative, or forms of agential / narrative continuity—and Merricks argues these proposals mishandle cases of deep transformation (including being “turned” into someone evil in a way that seems bad for you without being merely like ceasing to exist). Finally, Chapter 7 applies the framework to personal immortality (“the hope of glory”): immortality is framed as there always being someone who will have what matters in survival for you, and Merricks uses his earlier claims to respond to familiar worries—e.g., that survival comes in degrees, or that immortality would inevitably be tedious.

3. Interview Chapters
00:00 - Introduction
00:44 - Self and Identity
04:25 - What and why questions
07:25 - Semantics
12:29 - Normative issues
13:29 - What matters in survival
18:36 - Numerical identity
21:04 - More conditions?
22:42 - The past
24:35 - Permanent comatose
30:49 - Memory wipe
36:05 - Psychological continuity
37:25 - Puzzles of identity
40:47 - Persistence and eternalism
46:43 - Relative identity
53:42 - Sci-fi cases
58:17 - Other views
1:00:24 - Non-reductionism
1:05:51 - Examples
1:10:55 - Vagueness
1:14:37 - Narrative accounts
1:18:32 - Christian theology
1:25:03 - A puzzle
1:27:32 - Value of philosophy
1:29:25 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
Can brains build consciousness? In this interview, Paul Thagard argues that they can, and explains his approach.

My links: https://linktr.ee/frictionphilosophy

1. Guest
Paul Thagard is Distinguished Professor Emeritus of Philosophy at the University of Waterloo and a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. His work focuses on cognitive science, philosophy of mind, and the philosophy of science and medicine.
Check out his book, "Dreams, Jokes, and Songs: How Brains Build Consciousness"!
https://academic.oup.com/book/60618
https://www.amazon.com/dp/B0FHQJ3KCS/

2. Book Summary
Paul Thagard’s Dreams, Jokes, and Songs: How Brains Build Consciousness develops a neuroscientifically grounded, mechanism-based theory meant to explain not just “ordinary” perception, sensation, emotion, and thought, but also the especially puzzling, highly structured forms of experience that show up in dreaming, humour, and music. The core proposal is the “NBC” theory: conscious experience arises from interactions among Neural representation, Binding, Coherence, and Competition—where coherence is understood as constraint satisfaction and competition governs which representations win out for attention and interpretation.

After laying out NBC and illustrating it with simpler cases (e.g., how brains build perceptual and bodily experiences and integrate them into unified “compound” consciousness), Thagard uses it to explain three marquee domains. Dreaming is treated as a product of the same mechanisms, aiming to explain why dreams are common, emotionally charged, continuous with daily life yet sometimes bizarre, and still feel intensely “what-it’s-like” (his term “zing”) even when they don’t make ordinary sense. Humour is explained via a characteristic dual shift: incoming words/images trigger an initial interpretation and emotional response, then a change prompts a second interpretation and response, and recognizing that shift yields surprise and laughter. Musical experience is explained as the brain binding basic note-representations into higher-order structures like melody, rhythm, and harmony, then binding these with other modalities (movement, words, visuals, emotion), with competition helping music “break through” into conscious attention.

The later chapters broaden the same framework to other conscious domains (e.g., religion, morality, sports performance, romance, and the effects of drugs), and argue that any full theory must handle time consciousness: the brain represents time using “time cells,” binds these into larger “memory units,” and uses coherence and competition to produce an experienced sense of duration and temporal flow. Thagard also evaluates animal consciousness and asks about machine consciousness, arguing that current large language models (including ChatGPT) can be impressive without having felt perceptions, sensations, or emotions, partly because they lack the kind of world- and body-involving understanding central to his story. Finally, he connects the theory to a broader mind–body view he calls “coherent materialism” (or “cohmaterialism”), on which genuinely minded systems are rare because they require tightly coupled hardware/software that coherently satisfies constraints of time, space, energy, and history.

3. Interview Chapters
00:00 - Introduction
00:51 - Overview of book
04:57 - Qualia
08:12 - Illusionism
11:53 - Neural representation
14:58 - Representation
18:14 - Binding
22:40 - Coherence
26:58 - Emotions
28:49 - Competition
31:18 - Getting consciousness
38:13 - Emergence
40:27 - Additional mechanisms
42:50 - Correlates vs. identity
48:00 - Explanatory breadth
50:53 - Dreams
55:59 - Global workspace theory
58:27 - Other approaches
1:01:46 - Animal consciousness
1:05:41 - Vagueness
1:08:37 - Functionalism and AI
1:16:14 - Coherent materialism
1:18:37 - Thought experiments
1:22:30 - Mary's room
1:25:22 - Future research
1:27:57 - Value of philosophy
1:30:01 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
If we ever make first contact, the hard part might not be sending a message across space, but working out whether aliens do science in anything like our sense, share concepts like number and explanation, and could actually understand what we mean by “physics.”

My links: https://linktr.ee/frictionphilosophy

1. Guest
Daniel Whiteson is an experimental particle physicist and professor of Physics and Astronomy at the University of California, Irvine. His work focuses on the analysis of high-energy particle collisions. He co-hosts a podcast about the Universe (Daniel and Kelly's Extraordinary Universe).
Check out his new book with Andy Warner, "Do Aliens Speak Physics?: And Other Questions about Science and the Nature of Reality"!
https://www.amazon.com/dp/1324064641/

2. Book Summary
In Do Aliens Speak Physics? Daniel Whiteson (with Andy Warner) asks what it would take—not just to find intelligent aliens, but to have a meaningful scientific exchange with them. The organizing idea is an “extended Drake equation”: beyond the usual probabilities of life and intelligence, we have to ask what fraction of alien civilizations do something like experiment-driven science (f_science), what fraction of those we could communicate with at all (f_communication), and then whether we’d even share enough conceptual overlap to ask and answer the “same” scientific questions.

The middle of the book is a tour of the ways those terms might collapse. Even if aliens are curious, their “science” might not look like ours; even if we can exchange signals, translating meanings could be brutally hard; and even math—often treated as the obvious shared language—might not function as a universal bridge if aliens don’t carve the world into countable objects the way we do.
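As a rough illustration of the “extended Drake equation” idea above: the classic Drake equation multiplies probabilities together, and the book’s further questions can be read as additional factors. The notation below (f_science, f_communication, f_overlap) is a sketch based on this summary, not the authors’ exact formalism:

```latex
% Classic Drake equation (expected number of detectable civilizations):
%   N = R_* \, f_p \, n_e \, f_l \, f_i \, f_c \, L
% Extended with the book's further questions (illustrative notation only):
N_{\mathrm{shared\ science}}
  = N \times f_{\mathrm{science}}
      \times f_{\mathrm{communication}}
      \times f_{\mathrm{overlap}}
```

Since each extra factor is a fraction between 0 and 1, each can only shrink the total—which is the point: even a universe full of intelligence might contain very few civilizations we could actually “talk physics” with.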
The authors use vivid hypotheticals to press the point that what feels “obvious” to us can hide deep assumptions (about counting, representation, and what matters), and those assumptions can reshape what we notice and what questions we even think to ask.

In the later chapters, they argue that—even granting shared questions—there’s no guarantee of the kind of grand, final alien “answer” we fantasize about. Human physics already looks like a patchwork of domain-specific approximations that don’t neatly sew into one overarching quilt, and there can be multiple incompatible “stories” that fit the same observed data, suggesting a Rashomon-style underdetermination that aliens might resolve differently (or not at all). The upbeat conclusion is that this isn’t just a downer about SETI: thinking through alien science is a way of spotting our own hidden commitments and keeping alternative conceptual paths alive—so the exercise teaches us about our science and our minds, even if no perfectly compatible alien colleagues ever show up.

3. Interview Chapters
00:00 - Introduction
01:55 - Overview of book
04:29 - Illustrations
05:31 - Extended Drake equation
08:31 - Navigators
11:44 - Different physics
14:28 - Communication
21:25 - First contact
24:50 - Mathematics
29:33 - Vagueness
33:12 - Indispensability
35:57 - Ontology plus dynamics
39:21 - Arbitrary conventions
41:20 - Varieties of life
48:06 - Friendly?
49:16 - Common concepts
52:51 - Learning about ourselves
54:11 - Progress
1:00:03 - Value of philosophy
1:02:39 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
Can a Bayesian look at fine-tuning make “design” less compelling, and do Grim Reaper-style infinity puzzles really show that an infinite past is impossible?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Graham Oppy is Professor of Philosophy at Monash University, and specializes in Philosophy of Religion.

2. Interview Summary
In this interview, Friction speaks with Graham Oppy about two big clusters of issues: a Bayesian way of framing fine-tuning arguments, and how (if at all) Benardete/“Grim Reaper” style paradoxes support causal or temporal finitism. On fine-tuning, Friction sketches a strategy that starts from probabilistic constraints—roughly, that “design” shouldn’t get a higher prior than non-design, and that life-permittingness/fine-tuning isn’t (or needn’t be) more expected on design than on non-design—so that updating on a life-permitting universe won’t, by itself, drive you toward design. Oppy presses on how the hypothesis space is being carved up and what background assumptions are doing the work, noting that fine-tuning defenders often treat “design” as a family of more specific hypotheses—some of which might assign high likelihood to fine-tuning (the “more batter on the design side” idea). A related thread Oppy raises is an “inscrutability” worry: given a designer’s vast option space, it may be hard to say what fine-tuning should even be likely on design, which complicates the likelihood comparisons that fine-tuning arguments rely on.
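The probabilistic strategy sketched above can be made explicit with Bayes’ theorem in odds form; this is a reconstruction of the constraint argument as summarized here, not Oppy’s or the host’s own notation. Write D for the design hypothesis and F for the observation that the universe is life-permitting (fine-tuned):

```latex
\underbrace{\frac{P(D \mid F)}{P(\neg D \mid F)}}_{\text{posterior odds}}
  = \underbrace{\frac{P(F \mid D)}{P(F \mid \neg D)}}_{\le\, 1 \ \text{(second constraint)}}
  \times
  \underbrace{\frac{P(D)}{P(\neg D)}}_{\le\, 1 \ \text{(first constraint)}}
  \;\le\; \frac{P(D)}{P(\neg D)}
```

If both constraints hold, the posterior odds of design are no higher than the prior odds, so conditioning on F alone cannot favor design—which is exactly the step Oppy’s “carving up the hypothesis space” and “inscrutability” worries put pressure on.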
The conversation also touches on how conditioning on extremely specific facts about “these exact parameters” can generate counterintuitive results about what should have been expected a priori, and Oppy connects this to “many-gods” style worries familiar from Pascal’s Wager debates.

In the second half, Friction and Oppy turn to Benardete-style setups: infinite sequences of would-be interveners arranged at times approaching a limit, which can make it seem like an outcome must occur even though no particular intervener is ever the one who triggers it. Friction outlines a common finitist dialectic: if an infinite past/regress would allow a Grim Reaper scenario (often via a “patchwork” recombination principle), and if Grim Reaper scenarios are impossible, then infinite pasts/regresses are impossible too. Oppy focuses much of his skepticism on the linking step—especially the idea that you can “piece together” regions from different possible worlds to build the paradox—because the relevant dispositions and actions don’t obviously survive that kind of cut-and-paste. He also emphasizes that there are plenty of coherent infinite-sequence stories that don’t generate contradiction (he offers simple toggle-style examples), which undercuts the claim that infinity as such forces paradox. And a recurring diagnosis is that many paradox presentations under-specify what happens at the crucial infinite-limit case—so the sense of impossibility may come from an incomplete story rather than a genuine contradiction.

3. Interview Chapters
00:00 - Introduction
01:18 - Bayesian fine-tuning argument
02:30 - Design vs. non-design hypotheses
03:52 - Two probability constraints
05:17 - Oppy’s first reaction
07:24 - Conditional probabilities questioned
10:11 - Does design predict life?
11:16 - Purely a priori reasoning
15:16 - Causation vs. design
16:36 - Probability
19:54 - Background
22:33 - Simplicity
27:41 - Skeptical theism and fine-tuning
28:22 - Life-permitting vs. fine-tuned
31:39 - Comparing specific hypotheses
37:55 - Simplicity and divine complexity
39:28 - Necessary beings and the universe
43:30 - Intuitions and priors
46:52 - Stalking-horse objection
49:52 - Background knowledge and updating
51:34 - Double-dipping concern
55:44 - Grim reapers
1:01:41 - Patchwork principle
1:10:54 - Thomson’s lamp analogy
1:14:33 - Toe-regrowing variant
1:22:12 - Lewis and patchwork
1:23:41 - Intrinsic powers
1:26:27 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
What if Kant is right that real freedom is not doing whatever you feel like, but choosing principles you can rationally endorse and then living by them?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Karen Stohr is Ryan Family Chair Professor of Metaphysics and Moral Philosophy at Georgetown University, where she is also a Senior Research Scholar in the Kennedy Institute of Ethics. Her work focuses on ethics. In this interview, we focus on her book, "Choosing Freedom: A Kantian Guide to Life".

2. Book Summary
Karen Stohr’s Choosing Freedom is a practical guide to living “freely” in a Kantian sense: not doing whatever you feel like, but governing yourself by principles you can rationally stand behind. She emphasizes that the book is not about becoming more like Kant or constantly asking “What would Kant say?”; it’s about using Kant’s insights to illuminate hard-to-notice features of our moral lives and help you live by your own standards. Stohr also frames the book as a short tour of Kant’s systematic ethics followed by lots of attention to the everyday “trees” Kant actually wrote about—things like gossip, friendship, and dinner parties—because Kant meant ethics to guide real life. Kantian freedom, on this telling, often requires self-constraint: exercising autonomy means “getting a grip on ourselves” so we can live according to rationally defensible principles rather than being yanked around by impulse and procrastination.

The early chapters lay out the Kantian basics: morality is grounded in reason rather than shifting feelings, and the categorical imperative is presented through three connected ideals—equality, dignity, and community. Stohr stresses that Kant isn’t only about isolated individual choice: the “kingdom of ends” picture highlights how our communities shape our moral lives and how morality asks us to build social relations on the equal value of persons. In the “moral assessment” sections, she connects this framework to knowing and judging ourselves (and others), urging forms of charitable interpretation that keep us from using other people’s flaws as a way to feel superior, and redirecting attention back to our own moral work. Along the way, she squarely acknowledges Kant’s moral failures—especially racist and sexist views—while arguing that Kant’s own framework contains powerful resources against dehumanization, beginning with a strict duty to treat every human being with dignity.

Most of the book applies the theory to character, goals, and social life, organized into parts on vices, life goals, socializing, and looking forward. Stohr explains Kantian vices as “monsters” that live inside us and “enslave us from the inside,” warping our reasoning and making it harder to recognize and follow our duties—hence chapters on servility, arrogance, contempt, gossip/defamation, mockery, deceitfulness, and drunkenness. She then turns to constructive practices (self-improvement, resilience, reserve, beneficence, gratitude) and to the moral texture of friendship, love, manners, and even hosting: for Kant, good social rituals can cultivate both understanding and “fellow-feeling,” helping us practice respect in community. The final chapters emphasize hope as a duty-like orientation toward moral progress: we’re to work toward better ethical community (and even peace) by sustained effort, grounding optimism in the idea that people can keep trying to be better than they were yesterday.

3. Interview Chapters
00:00 - Intro
00:43 - Overview of Choosing Freedom
03:03 - Making Kant accessible
06:08 - Everyday Kantian ethics
06:56 - Freedom and rationality
10:16 - Acting irrationally
12:39 - Human nature and evil
16:36 - Can evil be rational?
20:58 - The categorical imperative
21:44 - Universal law formulation
25:55 - Exceptions and universalization
30:48 - Humanity formulation
34:30 - Ends and dignity
37:44 - Kingdom of ends
41:38 - Perfect vs imperfect duties
46:29 - Conscience and moral assessment
51:55 - Reflecting on conscience
52:24 - Vices and virtues
53:06 - Duty not to lie
57:53 - Lies and omissions
1:00:14 - Civility and manners
1:02:59 - Moral improvement
1:06:39 - Teaching ethics
1:09:54 - Philosophy as practice
1:13:09 - Value of philosophy
1:16:34 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
If causation is not fundamental, what keeps reality from turning into chaos with things randomly popping into existence, and does the kalām’s claim that whatever begins to exist has a cause really explain the order we see?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Dan Linford is a lecturer at Old Dominion University, Department of Philosophy & Religious Studies. His work focuses on physics, the philosophy of physics, and the philosophy of religion.

2. Interview Summary
In this interview, Dan Linford discusses his paper “Without microphysical causation, just anything cannot begin to exist just anywhere,” motivated in part by debates around the causal principle often associated with the kalām cosmological argument. He frames the core question as whether the order we observe in the universe really requires causation—specifically, whether “whatever begins to exist must have a cause”—or whether there are non-causal ways to explain why we don’t see arbitrary “raging tigers” popping into existence out of nowhere.

A major focus is a traditional line of support for the causal principle that Linford labels the Hobbes–Hume–Edwards–Pryor principle (HPP): roughly, if the causal principle were false, we’d lack a good explanation for why things don’t begin to exist at arbitrary times, places, in arbitrary numbers, and of arbitrary kinds. Linford and the host also pause on how strong the causal principle is supposed to be (mere accident vs physical/metaphysical necessity), and note that once you add extra metaphysical commitments (the interview uses the A-theory of time as an example), the principle can become either harder to justify or even vacuously true in a way that won’t do the work causal-principle defenders want.

Linford then develops an alternative picture—drawing on “neo-Russellian” themes—on which causation isn’t fundamental to microphysics (for Russell-style reasons like time-symmetry), but causal talk remains useful in the special sciences for identifying “effective strategies” (a Cartwright-inspired point about intervention vs mere correlation). The upshot is that even if microphysical causation fails, it doesn’t follow that “anything goes”: what can begin to exist is still constrained by nomic (law-based), metaphysical, and logical principles, and those constraints can underwrite explanations of why tigers (etc.) don’t pop into existence. He also addresses a familiar objection to Humean-style views—why expect an “ordered continuation” of the mosaic rather than chaos—by appealing to Lewis-style similarity/“closeness” considerations (and related constraints on probability talk), arguing that the standard HPP-based worry doesn’t straightforwardly land.

3. Interview Chapters
00:00 - Intro
00:30 - Overview
04:40 - How strong is the causal principle?
10:15 - The Hobbes-Edwards-Prior (HEP) principle
16:20 - Expecting chaos vs. no explanation
20:35 - What if explanation just runs out?
23:37 - Neo-Russellianism
32:30 - Fundamental physics
36:13 - Time asymmetries in fundamental physics?
40:49 - The main challenge to Neo-Russellianism
44:23 - Do microphysical things "begin to exist"?
51:33 - Law-based explanations without causation
57:22 - Are laws more mysterious than causes?
1:03:41 - The Neo-Humean response
1:14:35 - Where does metaphysical explanation end?
1:17:37 - Theological connections and brute facts
1:21:45 - Final thoughts
1:22:14 - Value of philosophy
1:24:30 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
What, if anything, happened before the Big Bang, which origin story is right, and what future observations could finally decide between them?

My links: https://linktr.ee/frictionphilosophy

1. Guests
Phil is a fellow of the Royal Astronomical Society and a science popularizer, and runs the excellent YouTube channel "Phil Halper", aka Skydivephil. Niayesh Afshordi is professor of physics and astronomy at the University of Waterloo. He is also a founding faculty member at the Waterloo Centre for Astrophysics and an Associate Faculty in the Cosmology and Gravitation group at the Perimeter Institute for Theoretical Physics.

2. Book Summary
Battle of the Big Bang argues that what most people call “the Big Bang” is really two things: a well-tested story about a very hot early universe, and a much less secure story about an initial “bang” or singular beginning. The authors frame the hot early universe as “science’s earliest memory,” while emphasizing that cosmologists are now trying to recover an even earlier “lost memory,” using new physics rather than just extrapolating familiar laws backwards forever. They set the stage with a brisk history of cosmological thinking and with the central puzzle: the standard picture explains a lot about how the universe evolves, but it does not straightforwardly tell us what (if anything) happened before the Big Bang, or what replaces the would-be singularity.

The middle of the book is a guided tour through today’s rival “origin stories,” presented as a genuine competition with strengths, weaknesses, and lots of unfinished business. Using inflation and its offshoots as one major contender, the authors then explore a sequence of alternatives: multiverse ideas, Hawking-style “no boundary” beginnings, string-theoretic scenarios like colliding branes and string-gas phases, loop-quantum-gravity-inspired “big bounce” pictures, cyclic models, “born from a black hole” proposals, varying-speed-of-light approaches, holographic cosmology, and even self-creation/time-loop possibilities. A recurring theme is that the singularity is widely treated as a sign that our two great frameworks, quantum mechanics and general relativity, cannot both be straightforwardly applied at the earliest times, so any serious account has to confront quantum gravity head-on, even though there is no consensus (and sometimes “too many answers”) about what that looks like in detail.

In the final stretch, the book turns from “what might have happened” to “how could we ever know,” stressing the limits of what current headline instruments can actually tell us about the beginning. The authors note that even spectacular observatories like JWST are not designed to see back to the origin itself, and that the cosmic microwave background is the oldest light we can directly observe, so ordinary telescopes hit a hard wall; to probe earlier than that, we likely need new “messengers,” especially primordial gravitational waves, and better ways of squeezing evidence out of subtle imprints on the sky. They also reflect on the sociology of foundational disputes, warning that scientific consensus is not the same thing as popularity, and that the “battle” can sometimes resemble factional conflict more than dispassionate evaluation. The upshot is deliberately modest: nobody yet knows what happened at the Big Bang, but the path forward is clearer than it used to be, because future observations could rule whole classes of models in or out.

3. Interview Chapters
00:00 - Introduction
00:54 - Impetus for the book
07:12 - Historical background
09:01 - The Big Bang
15:22 - The meaning of “nothing”
15:43 - Quantifier vs. noun sense of nothing
18:22 - Almost nothing scenarios
23:37 - How theories bear on cosmic origins
28:47 - Concerns about multiverse theories
29:10 - Testability of multiverse models
34:05 - String theory and brane theory
39:25 - Could there be time before time?
39:59 - Limits of temporal concepts
43:43 - Two-direction time models
50:05 - Other models
54:01 - Are we on the cusp of a new cosmic revolution?
1:01:31 - Favorite cosmological models
1:04:36 - Connections to theology and the Kalam
1:09:30 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
What happens to our laws about pregnancy, parenthood, and abortion when “gestation” can be shared, transferred, or even moved outside the human body by new biotechnology?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Elizabeth Chloe Romanis is Associate Professor in Biolaw in the Durham Law School as well as in the Durham Centre for Ethics and Law in the Life Sciences, and her work focuses on healthcare law and bioethics.
Check out her book!
https://global.oup.com/academic/product/biotechnology-gestation-and-the-law-9780198873785
https://www.amazon.com/dp/0198873786

2. Book Summary
In Biotechnology, Gestation, and the Law, Elizabeth Chloe Romanis argues that debates about reproduction are often built on shaky concepts, and that this matters once new technologies make “ordinary” assumptions about pregnancy and birth start to wobble. A central move is to separate pregnancy (a state of being) from gestation (a generative process), and to show how legal thinking slides between incompatible pictures, sometimes implicitly treating the fetus as part of the pregnant person, but more often treating the pregnant person as a “container.” This conceptual work is not just metaphysical housekeeping: it exposes the background assumptions that structure current legal schemas and shape how people’s lives are regulated.

Building on that foundation, Romanis proposes treating “technologies enabling gestation” as a genus that includes surrogacy, uterus transplantation (UTx), and ectogestation, and she argues that the law’s focus on “assisted conception” is a poor fit for regulating this very different procreative enterprise. She then tracks how existing frameworks can blunt the technology’s transformative potential by trying to force new modalities of gestation to mimic “natural” procreation, a pattern tied to deeper forms of biological essentialism and a tendency to privilege the binary, two-parent nuclear family. On sex and gender, she argues that these technologies can be equality-enhancing for marginalized groups, but that it is a mistake to treat them as a simple “solution” for women’s equality; the more radical potential lies in “unsexing” generative labour and disrupting the assumption that gestation is inherently female.

Later chapters apply this framework to parenthood and abortion. Romanis examines why gestation has been used to anchor legal motherhood, and how that rationale becomes unstable once gestational work can be divided across people and machines (as in partial or complete ectogestation), creating new puzzles about who counts as a legal parent and when parental rights and responsibilities should begin. She emphasizes the importance of keeping clear boundaries that protect pregnant people, including carefully distinguishing entities undergoing extra-uterine gestation from fetuses, precisely to avoid expanding fetal-centred regulation of pregnancy. Finally, she argues that technologies enabling gestation do not change the morality of abortion when the harms of unwanted pregnancy are centred, but they are likely to generate politically motivated pressures on abortion provision because much ectogestation literature frames abortion as “the problem” rather than recognizing it as a response to unwanted pregnancy.

3. Interview Chapters
00:00 - Introduction
01:03 - Overview
02:43 - Pregnancy vs. gestation
05:50 - Conceptual engineering
08:19 - Fetal relationship
13:13 - Legal metaphysics
14:43 - Gradual part-whole views
19:22 - Biotech and gestation
22:39 - Social and legal issues
25:59 - Uterus transplants
30:23 - Social narratives
36:44 - Biological essentialism
42:50 - Legal motherhood
49:03 - Biotech and abortion
58:53 - Abortion and metaphysics
01:01:04 - Reforming abortion law
01:11:36 - Value of philosophy
01:14:16 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
128. Dan Zahavi | Being We

2025-11-18 · 01:35:48

What if the most important thing about acting together is not that our individual intentions line up, but that it can genuinely change how the world shows up to us through a first-person plural perspective?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Dan Zahavi is professor of philosophy and the director of the Center for Subjectivity Research at the University of Copenhagen and is editor-in-chief of the journal Phenomenology and the Cognitive Sciences, with Shaun Gallagher. His work focuses on phenomenology, philosophy of mind and cognitive science.
Check out his book, "Being We: Phenomenological Contributions to Social Ontology"!
https://academic.oup.com/book/59446
https://www.amazon.com/dp/019289448X

2. Book Summary
Zahavi’s Being We argues that debates about ‘collective intentionality’ miss something central if they focus only on how individual intentions line up. The phenomenological tradition, he claims, forces us to take seriously the qualitative character of doing things together: feeling, thinking, and acting “as part of a we” can transform one’s sense of self, one’s relation to others, and one’s experience of the world. On this view, we-perspectives and we-experiences are not optional add-ons to an already complete theory of mind; they are genuine explananda that constrain what we can plausibly say about selfhood and social cognition.

In Part I (“We and I”), Zahavi tackles the “primacy” question: does the first-person plural precede the first-person singular, or vice versa? He argues that talk of a we requires plurality and differentiation, and that we-experiences presuppose (rather than erase) the self–other distinction; attempts to derive phenomenal consciousness or basic subjectivity from communal life don’t succeed. That doesn’t mean sociality is irrelevant to selfhood, but it does mean we need careful distinctions between cultural/conceptual accounts of the self and the minimal first-personal “for-me-ness” of experience—because an irreducible plurality of perspectives is exactly what makes distinctive forms of being-with possible in the first place.

Parts II and III then explain how we-ness is built up through concrete interpersonal relations and can take multiple forms. Zahavi emphasizes empathy and second-person engagement as ways of encountering another that preserve otherness while enabling coordination and mutual “contact,” and he distinguishes this from mere imaginative perspective-taking; this sets the stage for his analysis of shared emotions and why “affective sharing” needs clearer criteria than simple emotional contagion or matching feelings. Finally, he maps “varieties of we,” moving from intimate dyads and triads to thicker communal and national identifications: larger-scale wes are highly mediated, shaped by norms and institutions, and often sustained through “us–them” demarcation—sometimes actively orchestrated by political forces—so understanding we-formation also means understanding the risks of overly exclusive group identification.

3. Interview Chapters
00:00 - Introduction
01:07 - Overview
04:12 - Phenomenology
10:29 - "I" and "we"
13:03 - Worry
17:12 - Individualist bias
25:33 - Semantic variance
27:13 - More empirical research
29:26 - Individual and social aspects
33:49 - Data
38:05 - Husserl
44:27 - Primacy
51:49 - Higher order theories of consciousness
59:40 - Vagueness
1:07:36 - Group membership
1:12:39 - Empathy
1:19:17 - Collective intentionality
1:23:00 - Technology
1:28:55 - Artificial intelligence
1:31:45 - Value of philosophy
1:35:05 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
What are laws of nature, do they govern the universe or merely summarize it, and what do those answers imply about induction, chance, and time’s arrow?

My links: https://linktr.ee/frictionphilosophy

1. Guest
Barry Loewer is Distinguished Professor of Philosophy at Rutgers University and director of the Rutgers Center for Philosophy and the Sciences. In this interview, we explore philosophical issues related to laws of nature and related topics.

2. Interview Summary
Barry Loewer begins by situating the very idea of “laws of nature” historically: people have long noticed regularities, and often tied them to theology, but the modern notion of simple mathematical laws that describe motion and form the aim of physics really crystallizes in the 17th and 18th centuries (especially in Descartes, influenced by Galileo). On that early picture, laws were not just descriptions but part of how God “governed” inert matter, since matter itself was taken to be passive. This historical backdrop sets up the interview’s central contrast between “governing” (non-Humean) and “systematizing” (Humean) conceptions of laws.

Loewer then develops the Humean line through David Lewis’s “best system” idea: take the total distribution of fundamental properties across spacetime, and the laws are whatever axioms best systematize it by balancing simplicity and informativeness. He contrasts this with Maudlin-style governance using a vivid joke: on the governing view, God sets initial conditions plus laws and can “go on vacation,” whereas on the Humean view God would have to create the whole history “all at once,” and we later extract the best system from it. The conversation then turns to why many philosophers resist Humeanism: they want something to “hold the universe together,” and they worry that if laws are mere regularities then induction becomes unjustifiable. Loewer replies that Hume shows there is no guarantee of induction anyway—science is inherently risky—and he brings in Goodman’s “grue” problem to show that even stating the induction problem correctly requires constraints on which predicates and generalizations count as projectible.

In the final stretch, the interview broadens into the metaphysical question behind Loewer’s book-title riff on Hawking: what “breathes fire into the equations,” i.e., why this universe and this lawlike structure exist at all—and what the world (and knowers like us) must be like for physics to succeed. Loewer suggests physics can’t itself answer “why there is a universe” or “why there are laws,” since any such explanation would already presuppose laws (a theological answer might be possible, but it wouldn’t be a scientific one). He then connects laws to chance and time via the Albert–Loewer “Mentaculus” program: add a “Past Hypothesis” that the universe began in a very low-entropy state, combine it with the dynamical laws and a Boltzmann-style probability measure, and you get a package that yields objective chances and explains time’s arrow—what he calls a “probability map of the universe.”

3. Interview Chapters
00:00 - Introduction
00:45 - Development of views about laws
15:15 - Two schools
26:37 - Popularity of non-Humean views
30:15 - Induction
38:15 - Further issues with induction
49:28 - What breathes fire into the equations?
1:01:25 - Background to the Mentaculus project
1:04:15 - Time
1:10:20 - Statistical mechanics
1:14:32 - Putting the Mentaculus package together
1:22:52 - Value of philosophy
1:35:57 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe
If humans are as irrational and “automatic” as some psychologists suggest, why does explaining what people believe and want still feel like the best way to understand what they do?

1. Guest
Emma Borg is Professor at the Institute of Philosophy, School of Advanced Study, University of London, having previously been Professor at the University of Reading for many years. Her work focuses on the philosophy of language, mind, and cognitive science. In this interview, we focus on her recent book, "Acting for Reasons: In Defence of Common-sense Psychology".

Check out her book!
https://academic.oup.com/book/58959
https://www.amazon.com/dp/B0DNCYHXC5/

2. Book Summary
Emma Borg’s Acting for Reasons: In Defence of Common-sense Psychology argues that the familiar ‘common-sense psychology’ (CP) framework—explaining action via contentful mental states like beliefs and desires—remains broadly vindicated despite recent experimental and theoretical backlash. Borg characterizes CP as combining (i) a claim about action generation (typically, behaviour is caused “in the right way” by an agent’s reasons) and (ii) a claim about action understanding (typically, we explain and predict others by attributing mental states and inferring what those states should lead them to do). The book’s central aim is to resist the increasingly popular conclusion that “common-sense psychology is wrong” and to show that CP’s reach extends far beyond the “high days and holidays” cases of explicit deliberation.

The first half of the book takes on the Heuristics-and-Biases-inspired attacks on CP’s picture of decision-making. Borg distinguishes two strands: the No Reasons challenge, where heuristics are treated as automatic, “gut-feel” processes that bypass reasons altogether, and the Insufficient Reasons challenge, where people do consult reasons but in a biased, evidentially thin, or otherwise irrational way. She argues that defining heuristics as reasons-insensitive (or inferring that from their “fast, automatic” feel) is a mistake, and that much of the empirical case for endemic irrationality relies on contentious interpretations and methodological pitfalls (including concerns tied to replication, stability, and ecological validity). Overall, Borg’s conclusion on this side is that widespread heuristic reasoning does not by itself undermine CP’s general assumption of individual rationality and reasons-responsiveness.

The remainder of the book turns to CP’s second component—how we understand other people—and targets “deflationary” alternatives that try to explain social cognition without robust belief–desire attribution (e.g., behaviour-reading, mirror-neuron stories, “submentalizing,” or more “minimal” mentalizing). Borg argues that fully behaviour-reading approaches face serious empirical and theoretical problems, and that mid-ground views still don’t justify demoting CP to a niche role. Her final position is that deflationary resources may at most supplement CP (for certain developmental or special-purpose explanations), but they don’t supplant CP as the central, everyday framework for making sense of intentional action—so, taken together, the book concludes that common-sense psychology is broadly vindicated.

3. Interview Chapters
00:00 - Introduction
00:47 - Overview of common-sense psychology
02:50 - Further consequences of view
06:38 - Intuitive view
10:51 - Kahneman and Tversky
18:35 - "No reasons" challenge
23:36 - "Insufficient reasons" challenge
29:39 - Vagueness
34:25 - Introspectable properties challenge
41:37 - Unconscious action
47:18 - Reasons-sensitivity
52:41 - Semantic issue
57:28 - Response to insufficient reasons
1:06:57 - Useful fictions
1:13:40 - Why read the book?
1:17:32 - Value of philosophy
1:19:54 - Conclusion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fric.substack.com/subscribe