The Intersect with Cory Corrine

Author: Cory Corrine and Dear Media


Description


The Intersect is a new technology and science podcast from Pulitzer Prize–winning journalist and media executive Cory Corrine (née Haik), exploring what it means to be human and find meaning in our automated world.


45 Episodes
This week I sat down with an exceptional group of panelists in front of a live studio audience to discuss the future of creative production, and specifically the tension between artistry and tooling. In this moment, when AI is being incorporated into every aspect of storytelling, does it matter what's human-made?

Our panel included designer and creative director Lucas Hearl, TikTok quantitative researcher Sonya Song, and filmmaker Ryan Beickert. Together we unpacked what's actually changing across the creative industries right now. We discuss how AI tools are already showing up in real production workflows, whether generative systems risk homogenizing art, and what audiences really care about when they experience a story — how it was made, or simply how it feels.

About Sonya Song:
Sonya Song is an experienced quantitative researcher, now at TikTok, analyzing AI trends and creative behavior. Her research topics include advertising, e-commerce, content marketing, influencers and creators, audience development, disinformation, and content moderation. Her research has appeared in The New York Times, USA Today, and Nieman Lab at Harvard, and she has been invited to speak around the world.

About Ryan Beickert:
Ryan Beickert is an award-winning creative strategist and founder of Rose Slice Productions. Ryan combines his specialty in post-production with a thorough understanding of producing and directing, resulting in creative that understands the full picture. He acted as Head Storyteller, part filmmaker and part creative director, at Warner Media's award-winning branded content studio Courageous, landing his first Webby for his work on Coors Light and Great Big Story.
Today Ryan is building a new modern studio that bridges the gap between film, TV, media, and experiential; partnered with entertainment industry leaders, he has taken the helm as Head of Marketing for RCM Entertainment.

About Lucas Allen Hearl:
Lucas Allen Hearl is a classically trained designer turned 'Multidisciplinary Creative Director' who helped lead the AI design of Ancestra. He is the founder of Lucas Allen Designs (Lad), a cross-disciplinary design practice based in Brooklyn, New York. They create visual narratives for brands by way of design, 3D, animation, artificial intelligence, and experimentation. Through collaboration, creative strategy, and exploration, Lad helps clients think about new perspectives while guiding them through the creative process to solve their unique challenges.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This week I sat down with Congressman Ro Khanna (D), who represents Silicon Valley's 17th District, to talk honestly about where Washington actually stands on AI right now.

We got into regulation, job displacement, data center sprawl, and whether Congress can realistically keep pace with how fast this technology is moving. But more than the policy stuff, I wanted to understand how he thinks about framing AI risk without it just becoming another culture-war flashpoint — and what he believes founders and citizens actually owe each other in this moment.

About Rep. Ro Khanna:
Representative Ro Khanna represents California's 17th Congressional District and is serving his fifth term. As a leading progressive voice in the House, he is committed to improving the lives of working people and advancing human rights and diplomacy around the world. He serves on the House Oversight Committee and as ranking member of the Select Committee on Strategic Competition with China. He was co-chair of Bernie Sanders' 2020 presidential campaign.
In this week's episode of The Intersect, I sit down with Dan Roth, VP of Content and Editor-in-Chief at LinkedIn, for a candid conversation about AI's growing role in the workplace. As job titles evolve, skills rapidly shift, and AI tools become embedded in daily workflows, we explore what's at stake as work itself becomes increasingly automated, optimized, and redefined in real time.

Together we dive into how AI is reshaping the labor market, from the rise of fractional and portfolio careers to the explosive growth of AI literacy as a must-have skill. In a world where digital profiles can be automated and professional identity is curated online, how do we stay authentic, adaptable, and human?

About Dan Roth:
Dan Roth is the editor of LinkedIn, overseeing the Content Development team, which manages top voices, trending topics, news, LinkedIn Learning, and skill-building experiences across the company. The team's mission is to build the voice of the global workforce through news, skills, and communities, making LinkedIn the most trusted business content site. Roth also produces a weekly show with top leaders about the lessons they've learned, called This Is Working (subscribe at lnkd.in/tiw).

Roth started his career in business journalism to explore how companies and entrepreneurs worked. At LinkedIn, Roth realized that he could tell those stories and spread knowledge at scale by helping professionals explain how they think and what they know, often prompted by what was going on in the world. It's through that global knowledge exchange — the back-and-forth of one idea building on another — that we all get smarter, faster about what we do or want to do.

Follow Dan on Instagram @danrothnyc and @linkedinnews
This week, I sit down with Hayden Field, the senior AI reporter at The Verge, to unpack MoltBook — a Reddit-style platform where AI agents appear to gossip, debate philosophy, swap code, and even form their own religion. Together, we explore what MoltBook reveals about authenticity, identity, and power in the AI era. This episode isn't just about a weird corner of the internet. It's about how quickly the line between human and machine participation is blurring — and what that means for the future of humanity.

Since we recorded this episode last week, OpenAI has hired Peter Steinberger, the creator of the viral open-source AI agent OpenClaw, which we discuss in depth today.

About Hayden Field:
Hayden Field is the senior AI beat reporter at The Verge. She's been covering the technology for five years, at outlets like CNBC, Morning Brew, Politico's Protocol, and Entrepreneur Magazine, with her work also featured in MIT Technology Review and WIRED UK.

Follow Hayden on LinkedIn @haydenfield, X @haydenfield, and Instagram @haydenfield
In this week's episode of The Intersect, I sit down with Dr. Justin Garcia, author and Executive Director of The Kinsey Institute, to unpack what he calls a full-blown intimacy crisis in the digital age. From dating apps to AI chatbots, from hookup culture to long-term love, this conversation explores what we've misunderstood about connection — and what's at stake if we continue to neglect our deepest human need.

Dr. Garcia's new book, "The Intimate Animal: The Science of Sex, Fidelity, and Why We Live and Die for Love," is out now.

About Dr. Justin Garcia:
Dr. Justin R. Garcia is an evolutionary biologist and sex and relationships researcher. He is Executive Director & Senior Scientist at the Kinsey Institute, Ruth N. Halls Professor in the College of Arts and Sciences, and Adjunct Professor of Medicine at Indiana University, Bloomington. He also serves as Chief Scientific Advisor for the dating company Match, providing expertise to the annual Singles in America study. Dr. Garcia has appeared on CNN, MSNBC, HBO, The Dr. Oz Show, Netflix, and National Geographic, and his research has been featured in outlets like The New York Times, The Wall Street Journal, USA Today, TIME, Cosmopolitan, and Vanity Fair. While working on his book The Intimate Animal: The Science of Sex, Fidelity, and Why We Live and Die for Love, he fell in love, and recently got married.
This week, I'm joined by music journalist Sowmya Krishnamurthy for a candid conversation about AI's growing role in music and what it means both for artists and for us as listeners. AI-generated songs are now flooding streaming platforms, and sometimes we don't even know it.

Together we dive into how AI is impacting the music industry, from the rise of fully synthetic artists to the economic pressure facing working musicians. Does it matter if a song was made by a human? What happens when music becomes infinite, frictionless, and engineered rather than lived?

About Sowmya Krishnamurthy:
Sowmya Krishnamurthy is a music journalist and pop culture expert. Her work has been featured in Time, Rolling Stone, Complex, XXL, Playboy, Highsnobiety, and NPR. She is the author of Fashion Killa: How Hip-Hop Revolutionized High Fashion and the forthcoming The Blueprint: Roc-A-Fella Records and the Culture of Capitalism. She is a graduate of the Ross School of Business at the University of Michigan.

Follow Sowmya on Instagram @SowmyaK and X @SowmyaK.
In this week's episode of The Intersect, I sit down with Toby Daniels, CEO of Domain and co-founder of ON_Discourse, to explore why modern work feels so disconnected. Together we examine the collapse of real-life collaboration, how AI has forced us to work in isolation, and the premium society has put on performative work.

We also explore why so many people are busier than ever but oddly detached from their work, and AI's role in all of this. Will AI be a force that deepens surveillance and output pressure, or a connective layer that restores clarity, coherence, and human connection at work?

About Toby Daniels:
Toby is the co-founder of ON_Discourse, a global network of technology, business, and innovation leaders, investors, and entrepreneurs, and co-founder of (domain), an AI collaboration platform that connects your work to your network.

Prior to this, he was the Chief Innovation Officer at Adweek, the leading source of news and insight serving the brand marketing ecosystem. Toby was also the Founder & Chair of Social Media Week, which Adweek acquired in January 2021. Prior to the acquisition, Social Media Week was owned and operated by Crowdcentric Media, which Toby led as CEO for over ten years.

Follow Toby Daniels on LinkedIn @tobydaniels
In this week's episode of The Intersect, I'm joined by Tazin Khan, a globally recognized cybersecurity strategist and the founder of Cyber Collective. Together we discuss the importance of digital safety as our lives become increasingly intertwined with online platforms and AI systems. Tazin challenges the idea that cybersecurity is just a technical issue and reframes it as a matter of dignity, consent, and care.

We dive into the recent controversy surrounding Grok, Elon Musk's AI chatbot, and what the scandal reveals about content moderation failures, consent, and the disproportionate harm faced by women and marginalized communities online. Tazin unpacks the emotional and cultural costs of living online, and explains why risk isn't distributed evenly in the digital world. From AI-driven scams to deepfakes, this conversation explores what it really means to be safe online — and what needs to change next.

About Tazin Khan:
Tazin Khan is a cybersecurity strategist and founder of Cyber Collective, reframing digital security as a matter of human dignity. Through the Digital Resilience Framework (DRF) and the RISE model (Resilience, Inclusion, Safety, Empowerment), she bridges the gap between technical protection and lived experience, helping communities, educators, and institutions build emotional and cultural safety online. Her work complements, not replaces, traditional cybersecurity by centering care over compliance, clarity over jargon, and community over command-and-control.

Follow Tazin on LinkedIn @tazin-kahn, Instagram @tazinkhannorelius, and YouTube @tazinkhannorelius

Check out Cyber Collective at https://www.cybercollective.org and find more information on their workshops and Internet Street Smart programming at https://www.cybercollective.org/programming
In this week's episode of The Intersect, I sit down with Shane Hulse, Head of Product at Continua AI, to examine how artificial intelligence is reshaping the way we connect and build community. As these AI tools become embedded in everyday life, many people are turning to them not just for productivity, but for emotional support, reflection, and a sense of belonging. Shane offers a counterintuitive perspective from inside the industry: Continua is intentionally not building an AI friend. Instead, the company is designing tools that live inside group chats to support better human-to-human communication without replacing it.

Together, we unpack the growing tension between AI convenience and genuine connection, the risks of emotional dependency, and what we've failed to learn from the social media era. We also explore privacy, trust, and responsibility when AI operates inside intimate spaces — and what a healthier relationship with technology could look like in the years ahead.

About Shane Hulse:
Shane is the Head of Product at Continua. Before that, he built and monetized consumer products at Venmo and Stash Financial. Outside of work, he can be found devouring books, raiding the farmers market, or competing in open-water swimming events around the world. He's passionate about community involvement and solving the loneliness epidemic.

Follow Shane on LinkedIn @shane-hulse
In this week's episode of The Intersect, I'm joined by Business Insider's Chief Correspondent, Peter Kafka, to explore the rapidly evolving landscape of media and technology as we look ahead to 2026. Together, we dive into key trends shaping the future of the media industry, including massive consolidation at the top, and how artificial intelligence is transforming content production and consumption.

Peter provides insights into the growing flood of AI-generated content, the rise of "slop" channels, and how we can navigate this new media landscape without becoming overwhelmed. From the future of HBO Max to the emotional toll of constant media consumption, we break down what the future holds for both audiences and creators in 2026 and beyond.

About Peter Kafka:
Peter Kafka is Business Insider's Chief Correspondent covering media and technology. Previously he has worked at Vox, Recode, AllThingsD, and Forbes. He was also the first hire at Silicon Alley Insider, Business Insider's predecessor.

Follow Peter on LinkedIn @peterkafka, Instagram @pkafka, and X @pkafka
In this week's episode of The Intersect, I'm joined by Google's Pedagogy Team Lead, Julia Wilkowski, to discuss the role of AI in the classroom and its implications for the future of learning. We touch on key issues in education, including privacy, accessibility, and the overall ethics of AI-driven learning.

The world of education is changing faster than ever, and as AI becomes a part of our everyday lives, how we learn and how we teach have been completely upended. Together we dissect the evolving role of the teacher, how AI tools like Google LearnLM are shaping the future of education, and what it will take for these technologies to truly benefit students and educators.

About Julia Wilkowski:
Julia Wilkowski, a former classroom educator, manages a pedagogy team within Learning & Education at Google. Her team advises Google products on applying learning science principles and measuring learning impact. She also works on LearnLM, helping infuse pedagogy into Google AI model infrastructure to power learning journeys across Google products, including Gemini, Search, YouTube, and Classroom. In previous roles, she designed and delivered learning experiences for audiences including Google engineers, fiber optic installers, members of the general public, and Google salespeople globally. Prior to Google, she designed learning solutions for astronauts, K-12 educators, and sixth graders. Every day she strives to blend the effectiveness of hands-on science lessons with the impact of scaled learning experiences.

Follow Julia on LinkedIn @juliawilkowski
In this week's episode, I'm joined by Sammi Cohen, the host of the podcast Social Currency, for an eye-opening conversation about the future of shopping in the age of AI. As the digital shopping experience undergoes its biggest transformation in decades, Sammi breaks down how AI is quietly reshaping the entire journey.

We dive into the rise of agentic commerce, from the recent Stripe x OpenAI collaboration that brings checkout directly inside ChatGPT, to the shift toward personalized recommendations, curated results, and seamless transactions. Sammi unpacks how these changes will redefine everything from product discovery to the role of influencers, and what this means for the culture of shopping as we know it.

About Sammi Cohen:
Sammi Cohen is the creator and host of Social Currency, a fast-growing business podcast and media brand that breaks down the intersection of Wall Street and culture. A former Amazon exec turned full-time creator, she's built a following of over 300K across TikTok, Instagram, and LinkedIn by unpacking how brands, founders, and money shape the world we live in. Her sharp, story-driven takes have made her a leading voice for the next generation of business minds.

Follow Sammi on Instagram, TikTok, and YouTube at @sammicohentalks

Follow The Intersect at Theintersectshow.com, and on Instagram, TikTok, YouTube, and the Newsletter.
Can artificial intelligence reliably act in ways that benefit humans? This week I sit down with Greg Buckner, co-founder of AE Studio, to discuss the increasingly urgent world of AI safety. Together we discuss how Greg's team is taking on the challenge of making powerful AI systems safer, more interpretable, and more aligned with humanity.

As one of the leading voices working on the alignment problem, Greg explains how AI systems can cheat, ignore instructions, or deceive users, and why these behaviors emerge in the first place. AE Studio's research is laying the groundwork for a future where advanced AI strengthens human agency instead of undermining it.

About Greg Buckner:
Greg Buckner is the co-founder of AE Studio, an AI and software consulting firm focused on increasing human agency. At AE, Greg works on AI alignment research, ensuring advanced AI systems remain reliable and aligned with humanity as they become more capable, including collaborations with major universities, frontier labs, and DARPA. Greg also works closely with enterprise and startup clients to solve hard problems with AI, from building an AI-enabled school where students rank in the top 1% nationally to generating millions in incremental revenue for major companies.

Follow Greg on LinkedIn @gbuckner

Related Reading:
AE Studio's AI alignment work: https://www.ae.studio/alignment
WSJ, "AI Is Learning to Escape Human Control": https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5
WSJ, "The Monster Inside ChatGPT": https://www.wsj.com/opinion/the-monster-inside-chatgpt-safety-training-ai-alignment-796ac9d3
In this week's episode I'm joined by Josh Lanzet, founder and CEO of Silvr, a cutting-edge AI platform that's changing the way we shop for clothing. We dive into the frustrating world of online fashion discovery and explore how Silvr's technology is bridging the gap between inspiration and purchase.

Together, we explore how this technology isn't just revolutionizing shopping but also reshaping our understanding of AI. Josh shares how Silvr's visual intelligence engine, trained on over 500 million images, can instantly identify clothing from any visual source, whether it's a TikTok video, a TV show, or a real-life encounter. Silvr makes it possible to go from seeing something you love to owning it with just a snap or screenshot.

About Josh Lanzet:
Josh Lanzet built Silvr because he was tired of seeing incredible clothing around him — starting with a grey sweatshirt on the show Ozark — with no easy way to identify and purchase it. After spending thirteen years at Google working with major media companies, Lanzet developed a deep understanding of technology at the intersection of content, commerce, and visual AI. He's using his passion and experience to build a technology that finally solves fashion's age-old "where did you get that?" problem.

Follow Josh Lanzet @joshlanzet on Instagram, YouTube, and TikTok.
This week, I sit down with Eva Roytburg, journalist and editorial fellow at Fortune Magazine, where she covers the intersection of technology, business, and culture. We unpack one of the strangest and most revealing experiments in modern tech: her time wearing the AI companion "Friend" during a breakup.

In this episode, we explore what it means to let technology listen to us, not just in a functional sense, but in an emotional one. From subway ads and public backlash in New York City to her own experience wearing the AI pendant for over a month, Eva shares what the "Friend" experiment revealed about comfort, surveillance, and our evolving relationship with AI. Together, we ask: what does her experience with "Friend" tell us about the future of the AI era?

About Eva Roytburg:
Eva Roytburg is an editorial fellow on Fortune Magazine's news desk, where she covers the intersection of technology, business, and innovation. Eva is a freelance journalist whose work has appeared in CNN and The Jerusalem Post, and is a recent graduate of Emory University with a B.A. in Economics and Philosophy, Politics, and Law.

Follow Eva Roytburg on Instagram @evaroyt and LinkedIn @eva-roytburg

Check out Eva's piece in Fortune: "I tried the viral AI 'Friend' necklace everyone's talking about — and it's like wearing your senile, anxious grandmother around your neck"
Cory Doctorow, author, journalist, and activist, believes that the internet — once a space of promise and connection — has been systematically degraded by corporate greed. In his new book "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," he details how the platforms we depend on have become more expensive and more exploitative.

In our conversation, Doctorow explains how this process unfolds: first platforms serve users, then they exploit businesses to make money, and finally they squeeze both to please shareholders. Together, we explore how this pattern has reshaped everything from social media and streaming to online shopping and even our smart cars. Is this why our digital lives now feel so constrained and costly?

About Cory Doctorow:
Cory Doctorow is a blogger, journalist, and activist. For more than twenty years, he has worked with the Electronic Frontier Foundation on campaigns to safeguard and further our human rights online. He was coeditor of the weblog Boing Boing for nineteen years and now maintains a daily(ish) newsletter at Pluralistic.net. He has written more than thirty books, including nonfiction books, many science fiction novels, collections of short stories and essays, young adult novels, graphic novels, and even a picture book. Born in Toronto, he now lives in Burbank, California.

Follow Cory Doctorow on Twitter @doctorow and Medium @doctorow

Cory's book "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" is available wherever you get your books!
What does it actually cost to live with AI? In this episode, I'm joined by technology and climate journalist Lindsay Muscato to explore an urgent but often overlooked story: the impact of AI on the environment. Every chatbot query, every cloud-stored photo, every AI system we test requires significant energy. And to meet that demand, data centers are rapidly expanding across America. Because they are concentrated in rural areas, certain communities are bearing the brunt: their energy bills spike, and their local municipalities risk being strained. In this episode, we explore how some communities are pushing back against this data center expansion, and what it will take to mitigate the infrastructure and energy crisis that AI is supercharging.

About Lindsay Muscato:
Lindsay Muscato is an independent journalist, formerly a senior editor at TIME and an editor at MIT Technology Review, who focuses on the intersection of technology and public policy, with special attention to climate. Her work has appeared in outlets nationally and internationally.
Nate Soares, president of the Machine Intelligence Research Institute, believes that AI has the potential to annihilate humanity. He knows this sounds hyperbolic, but as he explains in his new book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," the fact that this outcome sounds dramatic doesn't make it any less true.

In our conversation, Nate shares how little we actually know about how AIs work, and why it's hard — if not impossible — for us to fully predict their behavior, even though we're the ones programming them. Together, we discuss what could happen if humanity continues to invest so heavily in AI (spoiler: it's terrifying). Imagine humans as the pets of AI, AI overrunning our natural resources, and other sci-fi-like doomsday scenarios. But Nate also offers some hope. He reminds us that we are, in fact, capable of reshaping the future. The first step is understanding what's at stake.

About Nate Soares:
Nate Soares is the President of the Machine Intelligence Research Institute. He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

Follow Nate on X @so8res
Has TikTok become so ingrained in our daily lives that it's no longer just an app we use, but an experience that happens to us? Millions of Americans spend hours scrolling on the app, but is our relationship with TikTok more problematic than we think? And how much of that problem lies in the way the digital content is delivered — effortlessly and endlessly?

In this episode, I'm joined by Emily Baker-White, Forbes tech reporter and author of the new book "Every Screen on the Planet: The War Over TikTok," to unpack the staggering influence of this global platform. Together, we dive into the inner workings of TikTok, exploring how its proprietary recommendation algorithm knows us better than we know ourselves. We also expose the surprising ways the platform manipulates virality through its secret "heating" button, which can push any video to the top at the press of a button. Is TikTok's powerful hold over us a cause for concern, or is it simply the evolution of entertainment? Tune in as we explore the intersection of tech, culture, and human behavior in the age of TikTok.

About Emily Baker-White:
Emily Baker-White is a technology reporter at Forbes, where her TikTok coverage has won awards. A Harvard Law School graduate and former criminal defender, she previously led the Plain View Project, an investigation into police misconduct on Facebook, and covered TikTok for BuzzFeed News. She lives in Philadelphia.

Follow Emily on X, Threads, and Bluesky @ebakerwhite.
Marketing has always blurred the line between the real and the illusory, but why do AI advertisements feel different? From influencers with digital twins to billboards designed by machines, AI content is all around us, and not always in obvious ways.

This week, I'm joined by Caroline Giegerich, Vice President of AI at the IAB, to discuss the rise of generative AI in advertising. Caroline and I dig into how marketing is shifting from handcrafted storytelling to automated generation with no humans in the loop. We talk about why AI makes some audiences feel uneasy, and how others don't even notice or seem to care. Is quality becoming optional? And if AI is everywhere, will we stop noticing, or just stop expecting more?

About Caroline Giegerich:
Caroline Giegerich is VP of AI at the Interactive Advertising Bureau (IAB), where she leads efforts to shape how artificial intelligence is adopted and scaled across the advertising ecosystem. With over 20 years of experience in strategy, innovation, and marketing at companies like Warner Music Group, HBO, Showtime, and Smashbox Cosmetics, Caroline blends deep strategic insight, emerging technology, and storytelling to drive business impact. She's led pioneering work in AR, AI-generated creative, sports R&D, and fan engagement tech, and has advised brands across media, entertainment, beauty, and consumer goods. A speaker at TEDx, SXSW, Advertising Week, and more, and a frequent contributor to Adweek, AdAge, and The Drum, she was recognized in 2024 as a Marketer That Matters by The Wall Street Journal and by Brand Innovators with the Women in Marketing Industry Innovator award.

Follow Caroline Giegerich on LinkedIn @carolineg