Clearer Thinking with Spencer Greenberg

Author: Spencer Greenberg
Subscribed: 705 · Played: 36,753
Description
Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Each week we invite a brilliant guest to bring four important ideas to discuss for an in-depth conversation. Topics include psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. We focus on ideas that can be applied right now to make your life better or to help you better understand yourself and the world, aiming to teach you the best mental tools to enhance your learning, self-improvement efforts, and decision-making.

We take on important, thorny questions like:
• What's the best way to help a friend or loved one going through a difficult time?
• How can we make our worldviews more accurate? How can we hone the accuracy of our thinking?
• What are the advantages of using our "gut" to make decisions? And when should we expect careful, analytical reflection to be more effective?
• Why do societies sometimes collapse? And what can we do to reduce the chance that ours collapses?
• Why is the world today so much worse than it could be? And what can we do to make it better?
• What are the good and bad parts of tradition? And are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead?
• How can we move beyond zero-sum, adversarial negotiations and create more positive-sum interactions?
284 Episodes
Read the full transcript here.

Are the existential risks posed by superhuman AI fundamentally different from prior technological threats such as nuclear weapons or pandemics? How do the inherent "alien drives" that emerge from AI training processes complicate our ability to control or align these systems? Can we truly predict the behavior of entities that are "grown" rather than "crafted," and what does this mean for accountability? To what extent does the analogy between human evolutionary drives and AI training objectives illuminate potential failure modes? How should we conceptualize the difference between superficial helpfulness and deeply embedded, unintended AI motivations? What lessons can we draw from AI hallucinations and deceptive behaviors about the limits of current alignment techniques? How do we assess the danger that AI systems might actively seek to preserve and propagate themselves against human intervention? Is the "death sentence" scenario a realistic prediction or a worst-case thought experiment? How much uncertainty should we tolerate when the stakes involve potential human extinction?

Nate Soares is the President of the Machine Intelligence Research Institute and the co-author of the book If Anyone Builds It, Everyone Dies. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

Links:
Nate and Eliezer's recent book: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
The Machine Intelligence Research Institute
Read the full transcript here.

What does it mean to treat facts as drafts rather than monuments? If truth is something we approach, how do we act while it's still provisional? When definitions shift, what really changes? How do better instruments quietly rewrite the world we think we know? Are we mostly refining truths or replacing them? When do scientific metaphors clarify and when do they mislead? What public stories make self-correction legible and trusted? What features make science self-correct rather than self-congratulatory? How should we reward replication, repair, and tool-building? Do we need more generalists - or better bridges between tribes? How does measurement expand the very questions we can ask? Is progress a goal-seeking march or a search for interesting stepping stones? Should we teach computing as a liberal art to widen its aims? Will AI turn software into a home-cooked meal for everyone? How do we design tools that increase wonder, not just efficiency?

Samuel Arbesman is Scientist in Residence at Lux Capital. He is also an xLab senior fellow at Case Western Reserve University's Weatherhead School of Management and a research fellow at the Long Now Foundation. His writing has appeared in the New York Times, the Wall Street Journal, and The Atlantic, and he was previously a contributing writer for Wired. He is the author of the new book The Magic of Code, and his previous books are Overcomplicated: Technology at the Limits of Comprehension and The Half-Life of Facts: Why Everything We Know Has an Expiration Date. He holds a PhD in computational biology from Cornell University and lives in Cleveland with his family.

Links:
Sam's Recent Titles: The Half-Life of Facts and The Magic of Code
Read the full transcript here.

What changes when we treat violence as a human problem rather than a demographic story? Are fear, anger, and shame the real levers behind sudden harm? How much agency can we ask of people shaped by chaos without ignoring that chaos? Where is the line between explanation and excuse? What would an honest narrative about community safety sound like? Do neighborhoods want fewer police, or different policing grounded in respect? How do we build cultures where accountability and care reinforce each other? If separation is required for rehabilitation, how do we keep it from becoming psychological punishment? How do we welcome people back into society without chaining them to their worst moment?

Shaka Senghor is a resilience expert and author whose journey from incarceration to inspiration has empowered executives, entrepreneurs, and audiences around the world. Born in Detroit amid economic hardship, Shaka overcame immense adversity - including 19 years in prison - to become a leading authority on resilience, grit, and personal transformation. Since his release in 2010, Shaka has guided individuals and organizations to break free from their hidden emotional and psychological prisons, turning resilience from theory into actionable practice.

Links:
Shaka's Book: How to Be Free: A Proven Guide to Escaping Life's Hidden Prisons
Shaka's TED Talk
Read the full transcript here.

What changes when psychology stops naming traits and starts naming parts - can "entities and rules" turn fuzzy labels into testable mechanisms? If the mind is a web of governors with set points, what exactly is being controlled - and how do error signals become feelings? Are hunger, fear, and status-seeking all negative-feedback problems, and where do outliers like anger or awe fit? What would count as disconfirming evidence for a cybernetic view - useful constraint or unfalsifiable epicycle? Could a "parliament of drives" explain why identical situations yield different choices? And how would we measure the votes? Do abstractions like the Big Five help, or do they hide the machine under the hood? How many rules do we need before prediction beats metaphor? And could a new paradigm help make psychology a more mature and cumulative science?

SLIME MOLD TIME MOLD is a mad science hive mind with a blog. If you believe the rumors, it's run by 20 rats in a trenchcoat. You can reach them at slimemoldtimemold@gmail.com, follow them on Twitter at @mold_time, and read their blog at slimemoldtimemold.com.

Links:
The Mind in the Wheel
Obesity and Lithium
Read the full transcript here.

Are we trying to maximize moment-to-moment happiness or life satisfaction? Can self-reports really guide policy and giving? What happens to quality of life metrics when we judge impact by wellbeing instead of health or income? How should we compare treating depression to providing clean water when their benefits feel incomparable? Do cultural norms and scale-use quirks affect the accuracy of global happiness scores? How much do biases warp both our forecasts and our data? Is it ethical to chase the biggest happiness returns at the expense of other meaningful interventions? Where do autonomy, agency, and justice fit if philanthropy aims to reduce suffering or maximize aggregate happiness? Can we balance scientific rigor with the irreducibly subjective nature of joy, misery, and meaning? What should donors actually do with wellbeing-based cost-effectiveness numbers in the face of uncertainty and long-run effects? And could a wellbeing lens realistically reshape which charities, and which policies, the world funds next?

Dr. Michael Plant is the Founder and Director of the Happier Lives Institute, a non-profit that researches the most cost-effective ways to increase global well-being and provides charity recommendations. Michael is a Post-Doctoral Research Fellow at the Wellbeing Research Centre, Oxford; his PhD in Philosophy from Oxford was supervised by Peter Singer. He is a co-author of the 2025 World Happiness Report. He lives in Bristol, England, with his wife.

Links:
The Happier Lives Institute
Wellbeing Research Centre at Oxford
PersonalityMap (correlation between life satisfaction and moment-to-moment happiness)
The Elephant in the Bed Net
World Happiness Report 2025
Read the full transcript here.

Are existential risks from AI fundamentally different from those posed by previous technologies such as nuclear weapons? How can global cooperation overcome the challenges posed by national interests? What mechanisms might enable effective governance of technologies that transcend borders? How do competitive pressures drive harmful behaviors even when they threaten long-term stability? How might we balance innovation with precaution in rapidly advancing fields? Is slow progress the key to dodging hidden catastrophes in technological advancement? Is it possible to design systems that reward cooperation over defection on a global scale? How do we ensure emerging technologies uplift humanity rather than undermine it? What are the ethics of delegating decision-making to non-human intelligences? Can future generations be safeguarded by the choices we make today?

Kristian is an entrepreneur and the author of The Darwinian Trap, and has contributed to policy and standards work in AI and climate change. In the climate sector, he contributed to global carbon accounting standards, represented Sweden at the UN Climate Conference, and founded the carbon accounting software Normative.io. His work in AI governance includes contributions to policies in the EU and UN and authoring an influential report on AI Assurance Tech. Currently, as the co-founder and CEO of Lucid Computing, he develops technology to monitor the location of export-controlled AI chips. He can be reached via email at kristian@lucidcomputing.ai.

Thanks to a listener who pointed us to this 2017 report, which may be responsible for some of the confusion around the idea that only 100 companies are responsible for the majority of emissions.

Links:
Kristian's book: The Darwinian Trap
Kristian's company: Lucid Computing
Read the full transcript here.

How do we distinguish correlation from causation in organizational success? How common is it to mistake luck or data mining for genuine effects in research findings? What are the challenges in interpreting ESG (Environmental, Social, Governance) criteria? Why is governance considered distinct from environmental and social impact? How should uncertainty in climate science affect our policy choices? Are regulation and free markets really at odds, or can they be mutually reinforcing? How does economic growth generated by markets fund social programs and environmental protection? How does "publish or perish" culture shape scientific research and incentives? What psychological and neuroscientific evidence explains our tendency toward confirmation bias? Will LLMs exacerbate or mitigate cognitive traps? How do biases shape popular narratives about diversity and corporate purpose? How can we balance vivid stories with rigorous data to better understand the world?

Alex Edmans FBA FAcSS is Professor of Finance at London Business School. Alex has a PhD from MIT as a Fulbright Scholar and was previously a tenured professor at Wharton and an investment banker at Morgan Stanley. He serves as non-executive director of the Investor Forum and on Morgan Stanley's Institute for Sustainable Investing Advisory Board, Novo Nordisk's Sustainability Advisory Council, and Royal London Asset Management's Responsible Investment Advisory Committee. He is a Fellow of the British Academy and a Fellow of the Academy of Social Sciences.

Links:
Alex's TEDx Talk
Alex's books: May Contain Lies and Grow The Pie
Alex's Blog
A double bind in collective learning (article)
Read the full transcript here.

Has society reached 'peak progress'? Can we sustain the level of economic growth that technology has enabled over the last century? Have researchers plucked the last of science's "low-hanging fruit"? Why did early science innovators have outsized impact per capita? As fields mature, why does per-researcher output fall? Can a swarm of AI systems materially accelerate research? What does exponential growth hide about the risk of collapse? Will specialized AI outcompete human polymaths? Is quality of life still improving - and how confident are we in those measures? Is it too late to steer away from the attention economy? Can our control over intelligent systems scale as we develop their power? Will AI ever be capable of truly understanding human values? And if we reach that point, will it choose to align itself?

Holden Karnofsky is a Member of Technical Staff at Anthropic, where he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future. Prior to his work with Anthropic, Holden led several high-impact organizations as the co-founder and co-executive director of the charity evaluator GiveWell, and as one of three Managing Directors of the grantmaking organization Open Philanthropy. You can read more about ideas that matter to Holden at his blog Cold Takes.

Further reading:
Holden's "most important century" series
Responsible scaling policies
Holden's thoughts on sustained growth
Read the full transcript here.

Why do humans live as long as they do? Since whales have literally tons more cells than humans, why don't they develop cancers at much higher rates than humans? What can the genetic trade-offs we observe in other organisms teach us about increasing human longevity? Will we eventually be able to put people into some kind of stasis? What is the state of such technology? What counts as being dead? How much brain damage can a person sustain before they're no longer the same person? Is lowering temperature the same thing as slowing time? What does it mean to turn organic tissue into "glass"? Would clones of me be the same person as me? How should we feel about death? What is "palliative" philosophy? Why are people generally supportive of curing diseases but less supportive of increasing human lifespan? Will humans as a species reach 2100 A.D.?

Dr. Ariel Zeleznikow-Johnston is a neuroscientist at Monash University, Australia, where he investigates methods for characterising the nature of conscious experiences. In 2019, he obtained his PhD from The University of Melbourne, where he researched how genetic and environmental factors affect cognition. His research interests range from the decline, preservation, and rescue of cognitive function at different stages of the lifespan, through to comparing different people's conscious experience of colour. By contributing to research that clarifies the neurobiological, cognitive, and philosophical basis of what it is to be a person, he hopes to accelerate the development of medical infrastructure that will help prevent him and everyone else from dying. Read his writings on Substack, follow him on Bluesky or X / Twitter, email him at arielzj.phd@gmail.com, or learn more about him on his website.

Further reading:
The Future Loves You: How and Why We Should Abolish Death, by Ariel Zeleznikow-Johnston
Read the full transcript here.

How has utilitarianism evolved from early Chinese Mohism to the formulations of Jeremy Bentham and John Stuart Mill? On what points did Bentham and Mill agree and disagree? How has utilitarianism shaped Effective Altruism? Does utilitarianism only ever evaluate actions, or does it also evaluate people? Does the "veil of ignorance" actually help to build the case for utilitarianism? What's wrong with just trying to maximize expected value? Does acceptance of utilitarianism require acceptance of moral realism? Can introspection change a person's intrinsic values? How does utilitarianism intersect with artificial intelligence?

Tyler John is a Visiting Scholar at the Leverhulme Centre for the Future of Intelligence and an advisor to several philanthropists. His research interests are in leveraging philanthropy for the common good, ethics for advanced AI, and international AI security. Tyler was previously the Head of Research and Programme Officer in Emerging Technology Governance at Longview Philanthropy, where he advised philanthropists on over $60m in grants related to AI safety, biosecurity, and long-term economic growth trajectories. Tyler earned his PhD in philosophy from Rutgers University — New Brunswick, where he researched mechanism design to promote the interests of future generations, political legitimacy, rights and consequentialism, animal ethics, and the foundations of cost-effectiveness analysis. Follow him on X / Twitter at @tyler_m_john.

Further reading:
An Introduction to Utilitarianism
Intrinsic Values Test by Clearer Thinking
Blue Dot Impact
80,000 Hours
Read the full transcript here.

What does autism feel like from the inside? Do autistic people lack empathy? What is context insensitivity? What are some ways special interests can manifest in autistic people? What are some less common ways stimming can manifest? What are the main components of autism? Can you be diagnosed with autism if you meet all the diagnostic criteria but didn't have any symptoms in childhood? Is autism only a problem in relation to neurotypical people? Is there a link between IQ and autism? What does the DSM fail to capture about autism? Is there some underlying commonality among all the seemingly disparate symptoms of autism? How have the label and diagnosis changed as the field of psychology has grown and improved? Thinking about autism as a spectrum is better than thinking about it as a binary, but is there an even better way to think about it? How does gender intersect with autism? How does ADHD intersect with autism? How valid is self-diagnosis? How can you better interact with autistic people in your life? What should you do if you think you might have autism?

Dr. Megan Anna Neff is a clinical psychologist, author, and founder of Neurodivergent Insights. She is the author of Self-Care for Autistic People and The Autistic Burnout Workbook. Dr. Neff contributes regularly to Psychology Today and has been featured in outlets like CNN, PBS, ABC, and The Los Angeles Times. After discovering her own neurodivergence at age 37, she became passionate about raising awareness of non-stereotypical presentations of autism and ADHD. Through Neurodivergent Insights, she creates educational and wellness resources for the neurodivergent community, while also co-hosting the Divergent Conversations podcast. Learn more about her at her website, neurodivergentinsights.org, or email her at meganannaneff@neurodivergentinsights.org.

Further reading:
"How Do I Know if I'm Autistic in Adulthood?" by Megan Neff
Divergent Conversations (podcast)
Episode 48: "What is Autism?" (Part 1): Understanding Autistic Communication
Embrace Autism
Is This Autism?
Read the full transcript here.

Is AI that's both superintelligent and aligned even possible? Does increased intelligence necessarily entail decreased controllability? What's the difference between "safe" and "under control"? There seems to be a fundamental tension between autonomy and control, so is it conceivable that we could create superintelligent AIs that are both autonomous enough to do things that matter and also controllable enough for us to manage them? Is general intelligence needed for anything that matters? What kinds of regulations on AI might help to ensure a safe future? Should we stop working towards superintelligent AI completely? How hard would it be to implement a global ban on superintelligent AI development? What might good epistemic infrastructure look like? What's the right way to think about entropy? What kinds of questions are prediction markets best suited to answer? How can we move from having good predictions to making good decisions? Are we living in a simulation? Is it a good idea to make AI models open-source?

Anthony Aguirre is the Executive Director of the Future of Life Institute, an NGO examining the implications of transformative technologies, particularly AI. He is also the Faggin Professor of the Physics of Information at UC Santa Cruz, where his research spans foundational physics to AI policy. Aguirre co-founded Metaculus, a platform leveraging collective intelligence to forecast science and technology developments, and the Foundational Questions Institute, supporting fundamental physics research. Aguirre did his PhD at Harvard University and postdoctoral work as a member of the Institute for Advanced Study in Princeton. Learn more about him at his website, anthony-aguirre.com; follow him on X / Twitter at @anthonynaguirre; or email him at contact@futureoflife.org.

Further reading:
Keep The Future Human
The Future of Life Institute
"Unification of observational entropy with maximum entropy principles" by Joseph Schindler, Philipp Strasberg, Niklas Galke, Andreas Winter, and Michael G. Jabbour
Read the full transcript here.

What is the current global birth rate? What factors have contributed, or are currently contributing, to this rate? What outcomes will we experience as a result, and when? How accurate are demographers' projections on this topic? How much of a problem is local over-population? Could a low global birth rate eventually be overcome by high birth rates within a few specific groups? Why does any of this matter? How is average age in the US changing? What should the American government do to address this change, if anything? Is there a correlation between religiosity and birth rates? How are birth rates connected to the culture wars in the US? Will artificial wombs someday help to stabilize the global population? What's the "right" or "best" size of the global population? Could global depopulation solve climate change?

Dean Spears is an economic demographer at the University of Texas at Austin and a founding executive director of r.i.c.e., a nonprofit working for children's health in rural north India. He is the author of After the Spike: Population, Progress, and the Case for People. See more of Dean's research at deanspears.net.
Read the full transcript here.

Is AI going to ruin everything? What kind of AI-related dangers should we be most worried about? What do good institutions look like? Should designing better institutions be a major priority for modern civilizations? What are the various ways institutions decay? How much should we blame social media for the current state of our institutions? Under what conditions, if any, should the flow of information be regulated? What are some of the lesser-known kinds of AI misalignment? What actions should we take in light of the lack of consensus about AI?

Gabe Alfour has a background in theoretical computer science and has long been interested in understanding and tackling fundamental challenges of advancing and shaping technological progress. Fresh out of university, he developed a new programming language and founded a successful French crypto consultancy. Gabe has long had an interest in artificial intelligence, which he expected to be a major accelerator of technological progress. But after interacting with GPT-3, he became increasingly concerned with the catastrophic risks frontier AI systems pose, and decided to work on mitigating them. He studied up on AI and joined the online open-source AI community EleutherAI, where he met Connor Leahy. In 2022, they co-founded Conjecture, an AI safety start-up. Gabe is also an advisor with ControlAI, an AI policy nonprofit. Email Gabe at ga@conjecture.dev, follow him on Twitter / X at @Gabe_cc, or read his writings on his blog at site.cognition.cafe.
Read the full transcript here.

What defines a cult? Is there such a thing as a good cult? Do Clearer Thinking's tools actually help people? Why does Clearer Thinking share its tools for free for everyone to use? How legitimate is the research Clearer Thinking does to create its tools? Is that research too reliant on self-report? Do Clearer Thinking's tools focus too much on the average person and fail to account for significant variance among people? Should AI companies be required to create and release text watermarking tools? Should smart, knowledgeable people speak out more? Would the average person think (without priming or knowledge of the discourse around it) that Elon Musk's gesture at the inauguration was a Nazi salute? Does Spencer sometimes coin new terms where useful terms already exist? Does Spencer think that everyone should adopt valuism, his life philosophy? Is magic real? What critiques have stuck with Spencer over the years and shaped his work?
Read the full transcript here.

NOTE: The video version of this conversation is available on YouTube: https://youtu.be/hWNknrc23Fo

In light of the replication crisis, should social scientists try to replicate every major finding in the field's history? Why is human memory so faulty? And since human memory is so faulty, why do we take eyewitness testimony in legal contexts so seriously? How different are people's experiences of the world? What are the various failure modes in social science research? How much progress have the social sciences made implementing reforms and applying more rigorous standards? Why does peer review seem so susceptible to importance hacking? When is observation more important than interpretation, and vice versa? Do the top journals contain the least replicable papers? What value do Freud's ideas still provide today? How useful are neo-Freudian therapeutic methods? Should social scientists run studies on LLMs? Which of Paul's books does ChatGPT like the least?

Paul Bloom is Professor of Psychology at the University of Toronto, and Brooks and Suzanne Ragen Professor Emeritus of Psychology at Yale University. Paul Bloom studies how children and adults make sense of the world, with special focus on pleasure, morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences. He has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic Monthly. He is the author of seven books, including his most recent, Psych: The Story of the Human Mind. Find more about him at paulbloom.net, or follow his Substack.
Read the full transcript here.

What does it mean to bring your "A-game" to relationships? What is emotional "fitness"? What forces do relationships need to balance to remain stable and healthy? Are we attracted to a particular brand of heartbreak? What are "original attachment wounds"? Can Dark Triad traits be tamed? What emergent properties do relationships exhibit? What nourishes a relationship? When should relationships end? Why might people choose to maintain abusive relationships? How can trauma victims regain their sense of agency? What is self-care really about?

As a cutting-edge Relationship Coach, Annie Lalla maps the emotional complexities of long-term romance. She helps clients & students build romantic esteem by cultivating collaboration skills that go beyond power struggles, shame, or blame. Annie stands for True Love and is world-class at supporting relational growth across a lifetime. Follow her on Instagram at @lallabird, email her at annie@annielalla.com, or learn more about her on her website, annielalla.com.
Read the full transcript here.

Do we still have a lot to learn from ancient Greco-Roman philosophies? What is telos? What is ataraxia? What is "dark" Stoicism? What is the "resilient asshole" problem? What is (or what has) value according to Stoicism? What are the similarities and differences between Stoicism and Buddhism? Why might someone prefer a life "philosophy" over a set of life "hacks"? What is good? And how do you know? How could you know if you potentially adopted the wrong life philosophy? What value can modern humans find in Stoicism, Epicureanism, Pyrrhonism, and Cyrenaicism?

Gregory Lopez has been practicing Stoicism for over a decade and Buddhism a bit longer. He is co-author of A Handbook for New Stoics and Beyond Stoicism. He is also the founder of the New York City Stoics, co-founder of The Stoic Fellowship, a member of the Modern Stoicism team, and a faculty member of Stoa Nova. Additionally, he co-facilitates Stoic Camp New York annually with Massimo Pigliucci. You can find out more and contact him at his website, greglopez.me.
Read the full transcript here.

In times of such extreme political polarization, where can we find common ground? Should we require disclosure of AI authorship? Should AI companies be required to provide fingerprinting tools that can identify when something has been generated by one of their models? Should movie theaters be required to report when movies actually start? Should members of Congress be prohibited from insider trading? Should gerrymandering be outlawed? Should there be age limits on political office? Should we provide free school meals nationwide? What roadblocks stand in the way of people being able to vote on their phones? What's Spencer's formula for productivity? Which of the productivity factors do most people fail to take into account? What are some "doubly-rewarding" activities? Is altruism a harmful idea? What are people worst at predicting?

Bradley Tusk is a venture capitalist, political strategist, philanthropist, and writer. He is the CEO and co-founder of Tusk Ventures, the world's first venture capital fund that invests solely in early-stage startups in highly regulated industries, and the founder of political consulting firm Tusk Strategies. Bradley's family foundation is funding and leading the national campaign to bring mobile voting to U.S. elections and also has run anti-hunger campaigns in 24 different states, helping to feed over 13 million people. He is also an adjunct professor at Columbia Business School. Before Vote With Your Phone, Bradley authored The Fixer: My Adventures Saving Startups From Death by Politics and Obvious in Hindsight. He hosts a podcast called Firewall about the intersection of tech and politics, and recently opened an independent bookstore, P&T Knitwear, on Manhattan's Lower East Side. In his earlier career, Bradley served as campaign manager for Mike Bloomberg's 2009 mayoral race, as Deputy Governor of Illinois (overseeing the state's budget, operations, legislation, policy, and communications), as communications director for US Senator Chuck Schumer, and as Uber's first political advisor. Connect with Bradley on Substack and LinkedIn.

Further reading:
Episode 230: Who really controls US elections? (with Bradley Tusk)
Read the full transcript here.

What do westerners misunderstand about "tribal" cultures? How does justice in very small communities differ from justice in large nation-states? Why do some cultures have bride prices (i.e., groom's family pays bride's family) and others have dowries (i.e., bride's family pays groom's family)? How do cultures differ with respect to the body parts they sexualize? How many cultures across time have used psychedelics? Do all religions make moral demands? How do religions change as the people who practice them grow in number? How strong is the link between religious belief and individual behavior? To what extent are anthropologists conscious of their own behaviors and biases? Why do certain types of false beliefs persist for so long? How do shamanism and witchcraft differ? Aside from their official roles, what de facto roles do shamans play in their communities? What personality traits and/or mental health conditions are linked to wanting to become a shaman? Are any taboos universal across all human cultures? Why are taboos against incest and cannibalism so common? What is the value of anthropology?

Manvir Singh is an anthropologist at the University of California, Davis and a regular contributor to The New Yorker, where he writes about cognitive science, evolution, and cultural diversity. He studies complex cultural traditions that reliably emerge across societies, including dance songs, lullabies, hero stories, shamanism, and institutions of justice. He graduated with a PhD from Harvard University in 2020 and, since 2014, has conducted ethnographic fieldwork with Mentawai communities on Siberut Island, Indonesia. He is the author of Shamanism: The Timeless Religion (2025). Follow him on Twitter / X at @mnvrsngh or @manvir on Bluesky, or learn more about him on his website, manvir.org.