The Intersect with Cory Corrine


Author: Cory Corrine and Dear Media


Description


The Intersect is a new technology and science podcast from Pulitzer Prize–winning journalist and media executive Cory Corrine (née Haik), exploring what it means to be human and find meaning in our automated world.


25 Episodes
What happens when the world’s largest AI talent agency creates thousands of podcasts per week? And when these shows aren’t hosted by humans, but rather by AI characters? In this episode, I’m joined by Jeanine Wright, CEO of Inception Point AI, a media company that uses 125 AI agents to produce a staggering 3,000 podcast episodes weekly across more than 5,000 shows. Jeanine and I explore what it means to engineer humans out of the production process, and we unpack what’s in store for the future of entertainment. Hint: it involves humans designing AI characters so deeply that we’ll ultimately have to negotiate with AI talent for how they show up and where they appear. Is this a bad thing? Or an exciting new frontier?

About Jeanine Wright:
Jeanine Wright is the Co-Founder and CEO of Inception Point AI, where she’s building the first AI-native media company and exploring what it means to create and connect in an automated world. Her path has taken her from trial lawyer to podcast startup founder to COO of Wondery, Amazon’s podcast division. She’s led companies through rapid growth, acquisitions, and global expansion, always centered on the themes of identity and storytelling. Jeanine also serves on several boards, guiding companies at the crossroads of media, technology, and human connection.

Follow Jeanine Wright on LinkedIn at @jeaninepercivalwright

Follow The Intersect: Theintersectshow.com | Instagram | TikTok | YouTube | Newsletter
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This week, I sit down with author and journalist Mallary Tenore Tarpley for a candid conversation about eating disorders in the age of AI. With the continued rise of “SkinnyTok” on Instagram and TikTok, the internet is becoming a main character in shaping our relationship with our bodies. Even more concerning is how AI is now being woven into the conversation, with AI diet chatbots in some cases acting like anorexia coaches.

Mallary’s new book, “Slip: Life in the Middle of Eating Disorder Recovery,” details her journey through recovery from disordered eating. She shares the power of restorative narratives in shaping one’s story, and how technology can be a fantastic service but also a hindrance to the complex and non-linear recovery process.

If you or someone you know is struggling with an eating disorder, you can connect with the National Eating Disorders Association at nationaleatingdisorders.org.

About Mallary Tenore Tarpley:
Mallary Tenore Tarpley is a writer, author, and professor of journalism and writing at the University of Texas at Austin’s Moody College of Communication and McCombs School of Business. Her writing has appeared in The New York Times, The Washington Post, the Los Angeles Times, Time, and Teen Vogue, among other publications.

Follow Mallary Tenore Tarpley on Substack @mallarytenore, Instagram at @mallarytenoretarpley, and LinkedIn. Mallary’s book, “Slip: Life in the Middle of Eating Disorder Recovery,” is available wherever you get your books!

Check out Mallary’s piece in Teen Vogue: “AI Therapy? How Teens Are Using Chatbots for Mental Health and Eating Disorder Recovery”
This week, I sit down with Alvaro Bedoya, a former FTC commissioner and fierce critic of AI chatbots, especially their use by children. As a longtime advocate for stronger tech regulation, Alvaro shares the actions he believes parents, policymakers, and users need to take right now to keep kids safe in the age of AI.

While OpenAI recently announced the rollout of new parental controls allowing parents to link their kids’ accounts to their own, the new protections are vague. Alvaro argues that bright-line rules are the only real way to protect the public. We explore the limits of relying on tech companies to self-regulate, and we discuss what real accountability looks like when it comes to AI.

About Alvaro Bedoya:
Alvaro Bedoya was a commissioner at the Federal Trade Commission from 2022 until June of this year. There, he led the Commission’s creation of its first interdisciplinary behavioral team, including a psychologist, a pediatrician, and a specialist in human-computer interaction. Previously, he served as the first chief counsel for the Senate Subcommittee on Privacy upon its creation in 2010, and then created the Center on Privacy & Technology at Georgetown Law.

Follow Alvaro Bedoya on X at @BedoyaUSA and LinkedIn.
This week, I sit down with Josh Rothman, a staff writer at The New Yorker, to unpack his provocative essay, "AI Is Coming for Culture." Josh argues that AI isn’t just reshaping our jobs, politics, and wellbeing; it’s reshaping culture itself. But this isn’t only about AI-generated songs or stories. It’s about how we experience art, film, music, and books together. Think of Taylor Swift’s Swifties or Lady Gaga’s Little Monsters. Could fandoms like this form around AI-made music? And if they could, is that necessarily a bad thing? And if the stories we consume and the memes we laugh at are produced by computers rather than people, how does that change the meaning of culture? Together, we explore how culture, creativity, and originality are being redefined in the age of AI.

About Joshua Rothman:
Joshua Rothman is a staff writer at The New Yorker, where he covers ideas, tech, science, and culture and contributes the weekly column Open Questions, which explores, from various angles, what it means to be human. Previously, he was the magazine’s ideas editor.

Check out Joshua’s recent piece in The New Yorker: “What Is Culture in the Age of A.I.?”
This episode touches on a topic that affects tens of millions of us: digital addiction. Whether it’s constant doom-scrolling, binge-watching, or indulging in habits like pornography, digital addiction can take many forms. And while we may rely on our phones for a quick hit of connection or relief, we’re often left feeling worse.

My guest Chandler Rogers, the co-founder and CEO of the app Relay, is re-imagining addiction recovery through digital peer-to-peer support. Chandler’s story begins with his own struggle with pornography, but it widens to something much bigger: a digital epidemic of isolation and compulsive habits. We dive into how overcoming addictions begins with confronting the deeper emotional pain at their root, and why human connection and accountability to other people may be the path to successful recovery. And tech that enables this? That’s a powerful use case.

Topics Covered:
What is digital addiction, and how does the Relay app address the shame and pain underneath?
How has pornography shaped expectations in relationships?
Is AI in pornography impacting intimacy for addicts?
How can tech enable healing by building human connection?

About Chandler Rogers:
Chandler Rogers is the CEO and co-founder of Relay, a startup tackling one of the most common yet hidden addictions in modern society: pornography. After years of feeling stuck in the cycle, Chandler built Relay to bring connection, structure, and hope to thousands looking for healthier ways to navigate stress, shame, and emotional pain. His work explores the broader patterns of behavioral escapism that impact intimacy, trust, and connection in relationships. He lives in Salt Lake City, Utah with his wife and 18-month-old son.

Follow Chandler Rogers on LinkedIn and on Instagram. You can learn more about his app, Relay, here.

Got an idea for the show? Send it to ideas@theintersectshow.com
Twenty years ago, Hurricane Katrina forever changed the way we think about natural disasters, emergency response, and community resilience. For me, it became a defining moment as a journalist, shaping how I understand technology, content, and community. This year marks the 20th anniversary of the tragic event, and in this episode, I sit down with Michelle Payne, chief strategy and resiliency officer at the United Way of Southeast Louisiana, and Kirby Jane Nagle, their public information officer. Together, we explore how technology and community networks have evolved since 2005. From real-time flood sensors to social media alerts, the tools for connecting and informing people during emergencies are more advanced than ever. This episode is a timely reminder that disaster preparedness is a challenge facing all of us, especially in a world reshaped by climate change.

Chapters:
00:00 Cory's Personal Katrina Story
03:30 Meet Kirby Jane Nagle and Michelle Payne of United Way of Southeast Louisiana
09:22 New Tech Flood Sensors
13:10 Camp Mystic
14:57 FEMA's Future Role in Disaster Relief
15:50 Create a Climate Disaster Savings Plan
17:15 Future Tech and Disaster Equity
19:00 Future Proofing for Climate Disasters

About Kirby Jane Nagle:
Kirby Jane Nagle is the public information officer for United Way of Southeast Louisiana, where she brings strategic thinking, creative storytelling, and a relentless work ethic to every project she leads. A communications expert specializing in media relations, fundraising, and crisis messaging, Kirby has helped raise more than $25 million in disaster response funds, supporting the region through hurricanes, tornadoes, floods, and the unprecedented challenges of the COVID-19 pandemic. In times of disaster, Kirby transitions to help lead United Way’s disaster response team, directing critical communications while also stepping into hands-on recovery efforts, including managing boots-on-the-ground operations and even driving a forklift when needed. Her work and that of the disaster response team were instrumental in securing a transformational MacKenzie Scott gift in recognition of United Way’s COVID-19 relief efforts and in positioning the organization as a national leader in disaster response and community impact.

About Michelle Payne:
Michelle Clarke Payne is the Chief Strategy & Resiliency Officer at United Way of Southeast Louisiana, where she leads strategy, storytelling, and resource development to mobilize people into action. From securing emergency aid after hurricanes to launching rapid financial assistance programs during the pandemic, she has helped raise nearly $10 million for relief and recovery while spearheading United Way's efforts in the launch of the United Way Resiliency Center founded by Rebuilding Together. Beyond disaster response, she mentors the next generation of marketers as an adjunct professor at Tulane University, earning recognition as the American Advertising Federation’s 2023 Educator of the Year and a Loyola University “40 Under 40” honoree. Her leadership extends through service as President of the Junior League of New Orleans and membership on the Women United Global Leadership Council, and she has been recognized as a 2025 Top Female Achiever by New Orleans Magazine and a 2025 CityBusiness Icon Award honoree.

Learn More About United Way: https://www.unitedwaysela.org/
This week on The Intersect, Cory begins by asking what may seem like a wild question: Can AI really sit with you through a psychedelic trip? What could go right (and wrong) when we invite technology into some of our most vulnerable emotional experiences?

Even in 2025, women’s biology, including the daily fluctuations of hormone cycles, has largely been left out of psychedelic research (and most medical research and clinical trials, for that matter). This has large implications not just for women’s health but for science in general. Dr. Grace Blest-Hopley, a neuroscientist, is working to change that through her company Hystelica, which is dedicated to understanding how women’s biology intersects with psychedelic medicine. Cory and Grace discuss the wide-ranging impacts of women’s hormones on the brain, the possibilities and limitations of AI as a psychedelic “trip sitter,” and how medical research can be more inclusive, and therefore more accurate.

Topics Covered:
Why women’s biology can’t be treated as “niche” in psychedelic research
How tech and AI tools might empower women to track, understand, and advocate for their health
The possibilities and pitfalls of using AI in psychedelic preparation, integration, and beyond
Why trust, safety, and human presence still matter most in the middle of the journey

About Grace Blest-Hopley:
Dr. Grace Blest-Hopley is a neuroscientist with 12 years’ experience researching cannabis, cannabinoids, and psychedelics. Grace completed her PhD in Neuroscience at King's College London and currently serves as the Chief Scientific Officer at NWPharma Tech. She is the Research Director at Heroic Hearts Project, a charity that supports combat veterans with mental health challenges resulting from trauma, and is also the founder of Hystelica, a community focused on understanding women's biology for safe and effective psychedelic use. In addition to her research and professional roles, she serves as an officer in the British Army Reserve. Dr. Blest-Hopley advocates for the therapeutic potential of these substances and strives to advance the field of psychedelic research. Her work contributes to improving the well-being of individuals in need, particularly combat veterans, while also promoting a better understanding of women's biology in relation to psychedelics.

Follow Grace on Instagram @hystelica and @drblesthopley
It continues to be a wild world on the internet. Social platforms have become the backdrop of our real lives. We live in hybrid spaces between our phones and the “real” world. And while we know social media has the power to connect us and broaden our perspectives, it’s also the stage from which harmful trends, predatory behavior, and mental health challenges can emerge. This affects all of us, but teenagers are especially susceptible.

In this episode, I’m joined by Antigone Davis, Vice President and Global Head of Safety at Meta, to discuss Meta’s recent launch of Teen Accounts. We discuss how the company is prioritizing teen safety on Instagram, Facebook, and WhatsApp, and how parents are part of the solution for creating a safer online experience for their children. We cover how to reset our algorithms, how to filter out problematic messages, and the coordination it will take between tech companies and governments to ensure continued safety for teenagers and children online.

Topics Covered:
How teen social media use has evolved over the past decade
The built-in protections of Meta’s Teen Accounts and how teenagers are reacting
What parents can do to protect their children on Facebook, Instagram, and WhatsApp
What Meta is doing to address harmful content, privacy risks, and real-time broadcasting

About Antigone Davis:
Antigone Davis is Meta’s VP and Global Head of Safety, overseeing safety efforts across platforms including Facebook, Instagram, WhatsApp, and Messenger. Antigone has been at Meta for a decade and has a background in law and a deep understanding of the challenges surrounding online safety, digital rights, and content moderation. Her work at Meta focuses on improving the safety policies and tools that help protect users from harmful content and interactions on social media.
Is work just something we endure for a paycheck? Or could it be something more? As AI takes on a bigger presence in all aspects of life, how can it elevate our professional lives to, dare we say it, make work fun? In this episode, I’m joined by Ben Perreau, founder and CEO of a new AI-powered leadership product called Parafoil, and Bree Groff, author of the newly released Today Was Fun: A Book About Work (Seriously). We explore how to make the future of work bright.

Topics Covered:
How AI can help us be more effective leaders
How to build a personal AI coach ethically
The power of “cozy teams” at work and how to nurture one
How to find joy in work even if we dislike parts of our jobs

About Ben Perreau:
Ben Perreau is the founder and CEO of Parafoil, where he’s pioneering “leadership bionics,” combining cognitive science and AI to help managers rapidly evolve into exceptional leaders. Before Parafoil, Ben was a journalist and a product and strategy leader, guiding teams through transformations at startups and global companies. He’s dedicated his career to building products and cultures that blend vision, humanity, and measurable impact. Outside of work, Ben’s a lifelong music and food lover who believes leadership should feel as human as it is ambitious.

Follow Ben Perreau on Instagram, LinkedIn, and X. Join the waitlist at Parafoil.co

About Bree Groff:
Bree Groff is the author of the new release Today Was Fun: A Book About Work (Seriously). She is a speaker and consultant who has guided executives at companies including Google, Pfizer, Calvin Klein, Atlassian, Hilton, and many others. She was formerly a Partner at SYPartners and the CEO of NOBL Collective, and holds a master’s in Learning and Organizational Change from Northwestern University.

Follow Bree Groff on Instagram, Substack, and LinkedIn. Check out her new release Today Was Fun: A Book About Work (Seriously)!
How do we know if our group chats are private? Does using a platform like iMessage, WhatsApp, or Google Messages protect what we say? Or can tech companies or governments access our messages and even monetize this data? The answers are complicated.

In today’s episode with Udbhav Tiwari, Signal's VP of Strategy and Global Affairs, we explore the intersection of privacy, strategy, and the role of AI in reshaping communication. Signal is a private messaging app that ensures users can text without being tracked, monitored, or shown ads. Used by hundreds of millions of people, including journalists, whistleblowers, governments, and activists, it may be the gold standard of private digital communication. So should we all migrate our chats to Signal? Let’s explore that, especially because, as we share more of ourselves with AI, truly private spaces online are becoming increasingly rare.

Topics Covered:
How can we best protect our privacy when it comes to messages and group chats?
What do safety and privacy look like within surveillance capitalism?
Is it possible to use third-party AI agents and still have privacy?
A look at Signalgate and the role of Signal in upholding user privacy

About Udbhav Tiwari:
Udbhav Tiwari is Signal's VP for Strategy and Global Affairs, driving the project's public affairs and other external engagements. Prior to this, he was the Director of Global Product Policy at Mozilla and worked on the public policy team at Google.

Follow Udbhav Tiwari on LinkedIn @udbhav-tiwari.
The culture of work is in transformation as AI reshapes the job search for both applicants and hiring managers. Chatbots are writing resumes, robots are conducting first-round interviews, and hiring managers are navigating the complexities of AI automation bias in application screening. So as humans, what can we do to stand out amid a sea of candidates, and how can we continue to define our company cultures?

In this episode, I’m joined by Daisy Auger-Domínguez, a global C-suite executive, strategist, author, and keynote speaker who has held leadership roles at major companies including Google and Disney. With decades of experience guiding workplace culture through change, Daisy offers insights into the fast-shifting landscape of hiring and recruitment in the age of AI. We discuss how job seekers can use AI to enhance their applications without losing what makes them unique, and how recruiters can stay attuned to the nuances that machines often miss.

Topics Covered:
AI has made the hiring process faster, but at what cost?
Are resumes still relevant in an AI-saturated job market?
The cover letter may be more important than ever before
How to future-proof your career
How we can use AI as an assistant, but not as an author

About Daisy Auger-Domínguez:
Daisy Auger-Domínguez is a global C-suite executive, strategist, author, and keynote speaker helping organizations lead with purpose, people, and performance at the center. As CEO of Auger-Domínguez Ventures, she partners with Fortune 500s, startups, and mission-driven organizations, in-house as a Chief People Officer or as an advisor, to build high-trust, inclusive, and resilient teams, shape vibrant cultures and operations, and craft systems that thrive in times of change.

Follow Daisy Auger-Domínguez on Instagram @daisyaugerdominguez, LinkedIn @daisyaugerdominguez, TikTok @daisyaugerdominguez, and YouTube.
In this episode, I’m joined by Jacqueline Raich, CEO and founder of Primer, a beauty-tech startup designing a smart mirror powered by AI. Unlike the aesthetic filters in your phone’s photo apps, this physical mirror coaches you to achieve the look you want through customized makeup tutorials. As Jacqueline describes it, it’s akin to applying makeup paint-by-numbers style, where your face becomes the canvas. In our conversation, we discuss the origin story of Primer, the impact it will have on the beauty industry, and its potential to help all people be more confident and comfortable applying makeup. Beyond the story of Primer, we discuss the challenges of being a first-time founder, and we share some generative AI tips, like how to tweak prompts for ChatGPT and strategically leverage it as a thought partner.

Topics Covered:
How Primer is fixing the “personalization problem” in the beauty industry
What responsive AI is and how Primer is integrating it into its hardware
How Primer will enable beauty influencers and online makeup tutorials to be even more helpful
How business builders and founders can use ChatGPT as a constructive thought partner

About Jacqueline Raich:
After nearly 15 years in merchandising and strategy roles at top luxury retailers, Primer founder Jacqueline Raich pivoted toward a new passion. A Wharton and Parsons graduate, she tapped into her deep industry expertise and instinct for elevating the customer experience to pioneer the first analog mirror equipped with advanced AR capabilities. With Primer, she’s creating a space where beauty enthusiasts can discover, share, and grow their skills. To develop the model, Jacqueline worked closely with advisors from Lululemon Studio Mirror, Estée Lauder, Meta, and other leaders in beauty and consumer tech.

Follow Jacqueline Raich on LinkedIn at @jacquelineraich, and watch a sneak preview of Primer at https://www.primer.beauty/video.
We’re living through a crisis of connection, and both men and women are hurting. We’re all craving intimacy but struggling to meaningfully connect. Dating apps, AI chatbots, and evolving gender roles are contributing to our challenge of connection, but as I explore in this episode, hope is not lost. It’s up to us to make space for intimacy in our lives. And that process starts with knowing ourselves, something AI can help us do.

In this episode, I’m joined by Kaamna Bhojwani, a certified sexologist and researcher and one of the leading voices at the intersection of technology and human intimacy. Kaamna has spent years exploring how our digital world is reshaping the most intimate parts of our lives: how we relate, how we connect, and what it means to feel close to someone. She unpacks this age of relational tech based on her research in psychology and spirituality, and on her experience working with individuals and couples all over the world navigating sex and intimacy in their lives and relationships.

Topics Covered:
The state of sex today, and how technology is impacting our interpersonal relationships
What relational tech is, and how it can help us become more self-aware and develop intimacy in our relationships
Ways AI can help us confront and work through feelings of sexual shame
How the language we use to discuss sex can reinforce gender roles
How AI can help people seeking true connection

About Kaamna Bhojwani:
Kaamna Bhojwani is a certified sexologist, speaker, and media personality, and one of the leading voices at the intersection of technology and human intimacy, including AI companions, teledildonics, and humanoid robots. As the host of the Sex, Tech and Spirituality podcast, Kaamna creates space for unpacking our deepest emotions with a view toward collective expansion. Kaamna writes a column for Psychology Today called Becoming Technosexual and is a regular expert guest on NBC, Reuters, Al Jazeera, and more.

Follow Kaamna Bhojwani on Instagram @kaamnalive and LinkedIn at @kaamnabhojwani. Listen to Cory Corrine as a guest in Episode 10 of Kaamna’s Sex, Tech and Spirituality podcast: “AI = Artificial Intimacy? Rethinking Desire, Authenticity and Personal Responsibility.”
Being a woman online is increasingly dangerous. It means living with the constant possibility that a simple AI prompt can turn your personal image into something disturbing, offensive, and humiliating.

In this episode, I’m joined by Kat Tenbarge, an award-winning journalist who has been covering online harassment of women since the early days of deepfakes. In the last several years, thanks to AI, Kat has witnessed a disturbing trend: deepfakes are becoming more pervasive, they are impacting a wide range of women and girls (not just celebrities), and platforms and police are ill-equipped to fight them. But as much as AI is changing the scale and speed of sexual harassment online, this isn’t a story about being powerless. It’s a story about possibility. As Kat shares, when women organize and demand accountability, we can change the culture, shape policies, and build a safer and more tolerant internet.

Topics Covered:
What does sexual harassment look like in the age of artificial intelligence?
How can we regulate the rapid creation of non-consensual, synthetic sexual content online?
Will President Trump’s ‘Take It Down Act’ actually protect women online?
Should tech companies be held responsible for regulating the spread of deepfakes on their platforms?

About Kat Tenbarge:
Kat Tenbarge is an award-winning feminist journalist who writes the newsletter Spitfire News. Her work has been published in WIRED, NBC News, Business Insider, and more. She has reported on high-profile cases of gender-based violence against influencers and celebrities.

Follow Kat Tenbarge on Bluesky @kattenbarge.bsky.social and on Instagram @kattenbarge.
We often think of love and addiction as opposite forces. Love is life-giving; addiction is life-limiting. Love expands your world; addiction shrinks it. But what if I told you that, biologically speaking, love and addiction are more similar than you may think? And that chatting with AI bots can activate the part of our brain that triggers a “love” response, mimicking our brain activity when we’re experiencing addiction?

We unpack all of this in this episode of The Intersect, where I am joined by Maia Szalavitz, one of the leading voices on addiction in America. Together we dive into what’s going on in our brains when we experience love, and how, like drugs, shopping, and other vices, we can actually become addicted to it. Maia has written extensively on addiction. She has survived a heroin addiction herself, and she unpacks how AI chatbots are designed to pull us in and keep us hooked. She reveals that addiction isn’t about a specific substance; rather, it’s defined by continued behavior despite negative consequences. That’s why obsessively relying on chatbots may be more dangerous than we think.

Topics Covered:
What happens to your brain when you’re in love, and does it mimic your brain during addiction?
How can connecting with AI chatbots mimic the feeling of falling in love?
How is dependency different from addiction?
How can AI companies become aware of the addictive qualities of their products?
How can chatbots help people navigate social or emotional challenges?
In what circumstances should chatbot use be regulated?

About Maia Szalavitz:
Maia Szalavitz is a contributing opinion writer for The New York Times and the author, most recently, of Undoing Drugs: How Harm Reduction Is Changing the Future of Drugs and Addiction. An author and journalist working at the intersection of brain, culture, and behavior, Szalavitz has written for Time, The Washington Post, Elle, New Scientist, Scientific American Mind, and many others. She is the author or co-author of five books on subjects as wide-ranging as empathy, polygamy, trauma, and addiction.

Follow Maia Szalavitz on X and LinkedIn. Check out Maia’s recent piece in The New York Times: “Love Is a Drug. A.I. Chatbots Are Exploiting That.”
Many of us agree on the conveniences of ChatGPT. It offers answers to tough questions, it can analyze tons of data instantaneously, and, dare I say it, it can even provide us with some form of companionship. But what we might not realize is that when we share private information with ChatGPT, whether it’s about our health, relationships, feelings, or anything else, it’s not actually private. It’s not protected. Our queries are owned by OpenAI, and if circumstances required it, our inputs could be used against us in a court of law. We’re divided on whether we should care about this, and what, if anything, we should do about it.

That’s why I’m exploring online privacy in the age of AI in this episode of The Intersect, where I am joined by tech and culture reporter Taylor Lorenz and Stanford Privacy and Data Policy Fellow Dr. Jennifer King. Together we decode this pivotal moment and offer ways to navigate it mindfully. As Jen reminds us, there are plenty of ways we can still protect ourselves, and all hope is not lost even as technology becomes increasingly embedded in every aspect of our lives. Listen to uncover how.

Topics Covered:
What are we giving away by being fully honest with AI?
What ‘data nihilism’ is and why Gen Z feels powerless to protect their privacy online
Why our “moral panic” over smartphones is obsolete
Why you might want to rethink what you share with pregnancy tracking apps
How you can protect your data in the age of ChatGPT

About Taylor Lorenz:
Taylor Lorenz is a tech and online culture reporter and founder of User Magazine, a tech and online culture newsletter on Substack. She hosts a weekly tech and online culture podcast, Power User. Taylor is a former technology columnist at The Washington Post and a former technology reporter for The New York Times, The Atlantic, and Business Insider. Her work has appeared in New York Magazine, The Hollywood Reporter, Rolling Stone, and other major outlets. She regularly appears on CNN, NBC, the BBC, and other TV news channels to discuss online culture.

Follow Taylor Lorenz on TikTok, Instagram, YouTube, and LinkedIn at @taylorlorenz

About Jennifer King:
Jennifer King is the Privacy and Data Policy Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence. An information scientist by training, Dr. King is a recognized expert and scholar in information privacy. Sitting at the intersection of human-computer interaction, law, and the social sciences, her research examines the public’s understanding and expectations of online privacy as well as the policy implications of emerging technologies.

Follow Dr. Jennifer King on LinkedIn.
In this episode of The Intersect, I’m joined by Dr. Hamsa Bastani -- an Associate Professor at the Wharton School of the University of Pennsylvania -- and Kristina Peterson -- a longtime high school English teacher and author of AI in the Writing Workshop -- to discuss the ways that AI is transforming our education system.Over the last couple of semesters, AI has become an overwhelming presence in school classrooms and on college campuses. While many remain concerned about students cheating and misusing AI, there’s a deeper question at play: How is this new technology reshaping the way students learn? In this episode, we explore how education is transforming as students and educators integrate AI into their work and lives. But a big challenge remains: How can we ensure the education system maintains its relevance, meaning and humanity?Topics Covered:What does it mean to cheat in the age of AI?How can educators integrate AI into their classrooms to facilitate teaching and learning?How can AI be a thought partner for students rather than a crutch? Has AI allowed us to forget what it’s like to struggle in school? Does this matter?What can parents do to best navigate the rise of AI in the classroom? About Hamsa Bastani:Hamsa Bastani is an Associate Professor at the Wharton School, University of Pennsylvania, where she co-directs the Healthcare Analytics Lab. Her research develops innovative machine learning methods to address societal challenges, particularly in healthcare and education. 
She has partnered with national governments, including Greece and Sierra Leone, to deploy algorithms at country scale to improve public health outcomes, and her research has been published in leading outlets including Nature, Management Science, and Operations Research.Follow Hamsa Bastani on LinkedIn: https://www.linkedin.com/in/hamsa-bastani-4a346955/ About Kristina Peterson:Kristina Peterson is a veteran high school English teacher, researcher, and co-founder of EmpowerEd Consulting, specializing in the ethical integration of generative AI in education. She is the co-author of AI in the Writing Workshop: Finding the Write Balance (April 2025), which explores how AI can serve as a writing partner while still preserving student voice and creativity. Kristina’s work bridges classroom practice with national conversations on innovation, equity, and digital literacy. She also consults with educators, universities, and law enforcement to help them adapt responsibly to emerging AI tools.Follow Kristina Peterson on LinkedIn: https://www.linkedin.com/in/kristina-peterson-617525262/ Follow The Intersect: Theintersectshow.com InstagramTikTokYouTubeNewsletterSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of The Intersect, I sit down with Lila Shroff, an assistant editor at The Atlantic who covers technology, science, health and culture.Together we explore how AI is rapidly shaping the habits, experiences and worldviews of Gen Z, and how AI companies are diligently working to attract 18-to-24-year-olds and convert them into loyal customers.A member of Gen Z herself, Lila introduces the idea of the “Gen Z lifestyle subsidy,” the trend of AI companies subsidizing the cost of subscriptions for their premium offerings for college-aged students. Beyond this, we cover how education, data privacy, and intimate relationships are being reimagined and influenced by AI, for better and for worse. We also discuss how this era is not just about humans talking to chatbots: now chatbots are talking to chatbots to help humans navigate their lives, from making restaurant reservations to writing job applications. The ultimate question: Is this the path to a more efficient life, or are we losing our agency? Topics Covered:What the Gen Z “lifestyle subsidy” is and how it’s impacting an entire generationHow 18-to-24-year-olds have become AI “power users” and what AI companies are doing to drive adoption and earn their loyaltyWhat growing up with chatbots means for young children born into an AI worldThe generational shift in attitude toward sharing private information with chatbotsAbout Lila Shroff: Lila Shroff is an assistant editor on The Atlantic’s Science, Technology, & Health team. Before The Atlantic, Lila served on the editorial board at Reboot and co-led a working group at the Stanford Human-Centered AI Institute researching AI and the arts. She is particularly interested in the social and cultural impacts of AI. She graduated from Stanford, where she studied AI and literature. 
Follow Lila Shroff on X and LinkedIn at @lilashroff.Follow The Intersect: Theintersectshow.com InstagramTikTokYouTubeNewsletterTranscript available at https://www.theintersectshow.com/what-is-the-real-cost-of-free-chatgpt/See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of The Intersect, I sit down with Joanna Peña-Bickley, co-founder and CEO of Vibes AI. We explore where technology meets our most human needs and concerns, and discuss how AI can support brain health to help us age with dignity.As wearable technology continues to advance, how can we better understand and care for our brains? Joanna and her team want to empower us with the tools to do exactly that. In our conversation, we delve into how Vibes AI is using voice analysis to detect early signs of cognitive decline, the connection between our hearing and our cognitive functioning, and an innovative vision for how AI and wearable tech might help us extend not just our lifespan, but our “joy span.”Topics Covered:Why brain health is a critical and often overlooked pillar of wellnessHow Vibes AI is working to identify early signs of cognitive decline by analyzing biomarkers in our voiceHow hearing loss is connected to cognitive health, and how improving our hearing can lead to positive outcomes for our brainsWays that having a healthy brain can increase not just our “joy span,” but also our healthspanHow our voices have a unique frequency, and how being “on the same wavelength” as others can lead to deeper connectionsAbout Joanna Peña-Bickley: Joanna Peña-Bickley is a design engineer known as the mother of Cognitive Experience Design and a leader in Generative AI. Her multidisciplinary work across tech, media, and design for the AI era has led to over 150 products for companies like IBM, Amazon, and NASA.An advocate for inclusive design and AI ethics, she co-founded Vibes AI, a neurotechnology company that aims to make brain health and wellness accessible to all. 
She also launched the AI Design Corps to drive workforce upskilling.Named one of Fortune's Most Powerful Women, Joanna is working to shape the future of human-centered AI.Follow Joanna Peña-Bickley on Instagram and YouTube @joannapenabickley and TikTok @joannapenabickley0.Follow The Intersect: Theintersectshow.com InstagramTikTokYouTubeNewsletterSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of The Intersect, I am joined by Aidan Walker -- a writer, internet culture researcher and creator of the Substack newsletter How To Do Things With Memes -- to discuss how culture isn’t created for us but by us.Aidan studies where viral content comes from, how it spreads and what it reveals about our world — a world in which we’re all producers. It’s not just creators, influencers, or journalists who determine what content is important and shapes our culture, but also the commenters, the reposters, and the larger online community.But what happens when our feed is filled with AI-generated content? Listen to my conversation with Aidan to learn more.Topics Covered:The origins and implications of "slop capitalism" in the digital content economyThe algorithmic shift from meaningful engagement to content saturationThe role of Substack, TikTok, and digital community in reclaiming thoughtful contentWhy platforms prefer “slop” over quality: an incentive structure driven by control, not just profitHow smartphones became the default middleman for all modern experiences: dating, jobs, entertainmentHow cultural expression is increasingly limited to what algorithms can track, monetize, and approveThe power of reframing internet users from passive consumers to active producersWhy honoring internet culture as serious, collaborative creative work is vital for our futureAbout Aidan Walker:Aidan Walker is a writer and meme researcher who posts on TikTok and YouTube under the handle @aidanetcetera. 
He also writes the Substack newsletter How To Do Things With Memes.Some of Aidan Walker’s recent work on slop, plus other reference material: The unstoppable rise of Chubby: Why TikTok's AI-generated cat could be the future of the internetHow Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience GrowthFollow Aidan Walker on Instagram, TikTok and YouTube at @aidan.etceteraFollow The Intersect: Theintersectshow.com InstagramTikTokYouTubeNewsletterSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.