The Media Copilot
87 Episodes
As AI reshapes journalism and media, Richard Murphy of the Alliance for Audited Media explains why the industry needs actual standards.

AI is no longer experimental in media. It is operational. From drafting articles to generating images to influencing distribution, artificial intelligence is now embedded across the entire content pipeline in many organizations. But as adoption accelerates, trust is breaking down just as fast.

In this episode of The Media Copilot, Pete Pachal talks with Richard Murphy, CEO of the Alliance for Audited Media, to unpack a growing industry response: ethical AI certification.

Murphy explains how publishers, advertisers, and audiences are all asking the same question in different ways: How do we know what is real, who created it, and whether we can trust it? The answer, at least in part, may lie in standards.

Drawing from AAM’s newly developed framework, Murphy walks through the pillars of responsible AI use, from transparency and disclosure to human oversight and data protection. The goal is not to slow innovation, but to create guardrails that keep media credible in an era where AI can generate anything.

Why This Matters

Media has always relied on trust as its currency. AI is testing that foundation. When audiences cannot tell whether content is human-created, AI-assisted, or fully synthetic, credibility becomes fragile. At the same time, advertisers and partners are demanding proof that what they are funding or distributing meets ethical standards.

This is where certification enters the picture. Ethical AI frameworks are quickly becoming more than best practice. They are positioning themselves as a competitive advantage, a compliance strategy, and potentially a defense against future regulation.

The bigger shift is this: AI is not just changing how content is created. It is redefining what accountability looks like in media.

What we cover
• What “ethical AI certification” actually means in practice
• The 8 pillars of responsible AI use in media organizations
• Why disclosure is moving from optional to essential
• The difference between AI-assisted and fully AI-generated content
• Where most trust failures are happening today
• Why self-regulation may be the industry’s best shot before government intervention
• How AI is impacting not just content creation, but distribution and business models
• The growing role of advertisers, partners, and audiences in demanding transparency

About the 👤 Guest
LinkedIn (Personal Profile): https://www.linkedin.com/in/rmurphy01
AAM Leadership Bio: https://auditedmedia.com/about/leadership
Alliance for Audited Media: https://auditedmedia.com
Digital Content Next (Articles): https://digitalcontentnext.org/blog/author/richmurphy/

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
AI is reshaping how people search, shop, and consume information, and that shift is starting to challenge the business models that have supported media for decades.

In this episode of The Media Copilot, host Pete Pachal sits down with Colin Jeavons, Founder and Chairman of Nomix Group, to explore what happens when AI becomes the middle layer between publishers and their audiences.

From the collapse of traditional ad economics to the rising value of trust, this conversation breaks down how discovery is evolving, why some publishers may struggle to adapt, and where new opportunities are emerging across commerce, subscriptions, and AI-driven experiences.

What we cover
• How AI is changing search, discovery, and media economics
• Why CPM-based advertising is under pressure
• The growing importance of trust in content
• The future of subscriptions and micropayments
• How commerce and AI shopping may evolve
• What publishers need to rethink right now

Takeaways

The old web rewarded volume. The next era may reward credibility.

In Jeavons’ view, AI is speeding up a market correction that was already underway. Publishers built around commodity content and low-value ad impressions face increasing risk. But organizations that create trusted reporting, specialized expertise, or high-intent commerce content may still have a path forward.

The future, he suggests, will not be defined by whether AI destroys publishing. It will be defined by which publishers learn how to operate in a world where attention is filtered through intelligent systems, trust carries a premium, and audiences are willing to pay for what feels indispensable.

About the 👤 Guest
🔗 Colin Jeavons LinkedIn: https://www.linkedin.com/in/colinjeavons/
🔗 Nomix Group
Website: https://nomix.group
LinkedIn: https://www.linkedin.com/company/nomix-group/

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
Search is changing fast, and AI is at the center of it.

Search is no longer just about blue links and ranking on Google. More and more, people are getting their answers directly from AI tools like ChatGPT, Google AI Overviews, and other answer engines that summarize information, pull citations, and decide what gets surfaced in real time. That means visibility is changing, and so is the value of content.

In this episode of The Media Copilot, Pete Pachal speaks with Josh Blyskal, who leads answer engine optimization research at Profound, a company focused on tracking how brands appear inside AI-generated answers. Their conversation explores what answer engine optimization really means, how it differs from traditional SEO, and why specificity, utility, and structure now matter more than ever.

What We Cover
• The shift from traditional search results to AI-generated answers
• What answer engine optimization (AEO) actually means
• How AI tools break prompts into “fan-out” searches
• Why specificity and structured content matter more than ever
• The role of citations and consensus in AI responses
• How platforms like ChatGPT and Google AI Overviews choose sources
• Why Reddit and user-generated content still influence AI answers
• The growing tension between AI discovery and publisher business models
• Opportunities and risks for media organizations in the AI search era

About the 👤 Guest
Josh Blyskal
• Website: https://www.joshblyskal.com
• LinkedIn: https://www.linkedin.com/in/joshua-blyskal
• X: https://twitter.com/joshblyskal
• Speaker Deck: https://speakerdeck.com/joshbly

Learn More About Profound
• https://www.tryprofound.com

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
AI is rapidly becoming part of how news is produced, distributed, and discovered. But what does that actually look like inside a newsroom?

In this episode of The Media Copilot, host Pete Pachal speaks with Gina Chua, Executive Editor at Large at Semafor and Executive Director of the Tow-Knight Center for Journalism Futures at the CUNY Graduate School of Journalism.

Chua shares how Semafor is experimenting with practical AI tools that support journalists in everyday workflows. These include tools for copy editing and proofreading, systems that suggest relevant datasets for charts while a reporter is writing, and tools that help surface related reporting across different outlets and languages.

The conversation also explores how newsrooms can organize large volumes of information. At Semafor, interview transcripts from events and panels are integrated into internal systems so reporters can quickly search conversations, locate quotes, and review context directly.

Chua emphasizes that these tools are designed to assist newsroom work rather than replace editorial judgment. She also offers a useful way to think about large language models: they are built to work with language, not to verify facts. When used carefully with known text sources, they can help summarize, organize, and analyze information.

Beyond newsroom workflows, the discussion turns to the broader shift happening in how people access information. AI tools, chatbots, and automated summaries are increasingly becoming a gateway to news, which raises important questions about trust, verification, and the future role of journalism. This episode looks at how reporters, editors, and media organizations are adapting as AI becomes part of the information ecosystem.

What we cover
• How Semafor is experimenting with AI tools inside the newsroom
• Using transcripts and Slack to search interviews and discussions
• Why language models are useful for handling text but not verifying facts
• The role of human review in newsroom publishing decisions
• How AI interfaces are changing the way audiences find news

TIMESTAMPS:
00:00 – Intro: Journalism in the AI Era
02:15 – Gina’s Background & Semafor’s Model
06:00 – How Newsrooms Are Using AI Today
10:00 – Trust in a Synthetic World
14:00 – Transparency & Disclosure
18:30 – AI Tools Inside Reporting
23:00 – The Risk of Information Overload
27:30 – Reinventing the News Business
32:00 – Where AI Helps Most
36:30 – The Future of Journalism

About the 👤 Guest
GINA CHUA
LinkedIn 👉 https://www.linkedin.com/in/ginachua
X (Twitter) 👉 https://x.com/GinaSKChua
Instagram 👉 https://www.instagram.com/gina_chua_nyc
Personal Website / Writing 👉 https://ginachua.me
Author Page (Semafor) 👉 https://www.semafor.com/author/gina-chua

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
What if instead of scrolling headlines, you had a personal intelligence agent that understood what matters to you and delivered only signal, not noise?

In this episode of The Media Copilot, Pete Pachal talks with Eva Cicinyte, co-founder and CEO of Gnomi, an AI-powered real-time news agent designed to synthesize global information into actionable insight. The goal isn’t summaries or more feeds. It’s context.

Eva explains how her experience in political data analytics shaped her mission to make high-quality understanding accessible to everyone, not just institutions with research teams. Gnomi pulls from global sources, social platforms, video, audio, and financial data to deliver personalized intelligence in real time.

The platform’s new Finance Mode can even analyze live earnings calls as they happen, potentially surfacing market signals before headlines move prices.

🔍 In this conversation
• Why Gnomi is built as an “intelligence layer,” not a news app
• How AI agents could replace search and traditional feeds
• The danger of engagement-driven AI systems
• Multilingual analysis and global perspective gaps
• Using social and video data to detect emerging signals
• Real-time market insights from live earnings calls
• The future of journalism in an AI-first world
• Ads, subscriptions, and the economics of AI tools

If you care about the future of news, AI, finance, or how people will stay informed in the coming decade, this episode is a must-watch.

00:00 – Intro: Why AI Agents Matter Now. Big-picture framing of the agent shift.
02:10 – Eva’s Background & Building Gnomi. How she entered the agent space and what problem they’re solving.
05:40 – What Actually Is an AI Agent? Clear distinction between chatbots and agents.
09:15 – From Answers to Action. How agents move from generating text to executing workflows.
13:50 – Designing Guardrails & Trust. Why autonomy requires control and reliability.
18:20 – Real-World Use Cases. Where agents are already creating leverage.
22:45 – AI in the Workflow Stack. Replacing apps and orchestrating tools.
27:30 – Human + AI Collaboration. Why agents amplify people instead of replacing them.
32:10 – Infrastructure: Memory, Context & Systems. What makes agents actually autonomous.
37:00 – Competitive Advantage in the Agent Era. How companies should think about adoption.
41:30 – The Future of the Agent Economy. Where this is all headed next.

About the 👤 Guest
LinkedIn 👉 https://www.linkedin.com/in/eva-cicinyte-1447161b2
Instagram (Personal) 👉 https://www.instagram.com/evapariscicinyte
Official Website 👉 https://www.gnomi.com
LinkedIn (Company Page) 👉 https://www.linkedin.com/company/gnomi
Instagram (Company) 👉 https://www.instagram.com/gnomi.app

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
Poynter’s Alex Mahadevan explains how newsrooms can use AI without losing the fundamentals of verification, context, and accountability.

By The Copilot

AI is already embedded in how people discover and consume news, from search to chat interfaces to automated summaries. So the question is no longer whether journalism will be shaped by AI. It’s how newsrooms maintain trust while experimenting responsibly.

In this episode of The Media Copilot podcast, Pete Pachal sits down with Alex Mahadevan, Director of MediaWise and a faculty member at Poynter, to unpack what media literacy looks like now that anyone can generate convincing content at scale. Alex shares how his background in data and local journalism shaped his approach to tools, why public-facing AI ethics policies matter, and what it will take for news organizations to bring audiences along for the next phase of the information ecosystem.

Why this matters

Trust is the core product. AI can either widen the trust gap with errors and low-quality content, or help rebuild credibility through transparency, better products, and clearer communication about how journalism is made. This conversation gets practical about what responsible AI use looks like, where disclosures help and where they can unintentionally slow innovation, and why the newsroom AI divide is becoming a real competitive advantage for organizations that adapt.

What we cover
• Alex’s journey into journalism and the global mission of MediaWise
• How AI is reshaping misinformation, trust, and newsroom transparency
• Practical uses of chatbots, coding agents, and AI workflows
• The widening divide between AI enthusiasts and skeptics in newsrooms
• Ethics, job concerns, and gray areas around AI-assisted writing
• What the future of news may look like beyond traditional articles

About the 👤 Guest
🔗 Alex Mahadevan
🔗 Poynter / MediaWise
🔗 MediaWise

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
Lawsuits set public rules; contracts set private ones. A media attorney on how leverage, timing, and context decide the path.

In this episode, Pete Pachal sits down with corporate and transactional attorney Jason Henderson, a streaming and licensing specialist who also happens to be a creative with real skin in the game. Jason breaks down why the popular “AI learns like humans” analogy only goes so far, how fair use really works in court, and why the future will be shaped less by courtroom theory and more by deal structures. The key parts of those deals are often overlooked: indemnification and who actually bears the risk when things go sideways.

From The New York Times and Perplexity headlines to the practical mechanics of licensing training data, this conversation gets grounded fast. Jason explains what matters most to media companies, what smaller publishers should watch, and why agentic browsing and attribution are shaping up to be the next pressure point.

Why this matters: Media is facing a new kind of competition. Not always a stolen article, but a substituted experience. When AI tools summarize, synthesize, and answer in real time, the legal question is not only “Was it copied?” It is also “Does it replace the market for the original?” Jason outlines how courts evaluate that, why “transformative” is both the key term and the messiest one, and why the industry is drifting toward partnerships and licensing frameworks even as litigation continues.

At the same time, the next wave is not just training bots or search bots. It is agents that behave like users and may be harder to block or even detect. The more AI becomes the interface to the web, the more urgent it becomes for publishers to understand the business and legal stakes.

Key Takeaways
• Fair use is not a blanket shield. Courts look at purpose, transformation, and market impact, and the facts matter.
• Legitimate acquisition matters. Even if a use might be transformative, piracy can change the legal posture dramatically.
• Media’s biggest fear is substitution. Summaries and AI answers can erode subscriptions, traffic, and trust, even without verbatim copying.
• Deals are becoming more specific. Expect narrower permissions and more constraints on how data can be used for training or product features.
• Risk is moving through contracts. Indemnification is common, but it is only as strong as the indemnifier’s balance sheet and insurance.
• Attribution is the missing bridge. A clear “this came from” pathway could reduce conflict and rebuild value for original publishers.
• Agentic browsing will raise the temperature. When AI acts as a user proxy, blocking and enforcement become harder, and the business questions get sharper.

👤 Guest
🔗 Jason Henderson
🔗 Senior Attorney, JWL International
🔗 Founder, Castle Bridge Media
🔗 Co-host, Castle of Horror podcast (horror movie coverage)

About the show: To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
A tech-forward journalism professor unpacks how AI is changing how he teaches reporting, and what it means for the entry-level jobs that are increasingly endangered.

AI is not just changing how journalism gets made. It is changing how journalism gets taught.

In this episode of The Media Copilot, host Pete Pachal sits down with Kris Hodgson-Bright, professor of digital communications and media at Lethbridge Polytechnic in Alberta, Canada, to unpack what happens when AI enters the newsroom and the classroom at the same time.

Kris has seen journalism education evolve from high-volume print production to an online-first, multi-platform workflow spanning campus news, radio, TV, and emerging formats. Now, he is putting AI directly into the curriculum, not as a shortcut for writing, but as a research assistant that can strengthen reporting, sharpen critical thinking, and help students confront one of the biggest challenges in modern media: bias and trust.

Pete and Kris explore where AI fits in journalism training, where it doesn’t, and why transparent guardrails matter. They also dig into the job market reality for new journalists and communicators, plus the promise of immersive storytelling, including 360-degree video, VR, and photogrammetry, as a way to deepen understanding and empathy.

Along the way, the conversation surfaces some of the most difficult questions facing the media right now: how much automation is too much, where responsibility still sits with the human journalist, and how educators can prepare students for an industry that is evolving faster than any syllabus. This is a grounded conversation about the future of media work: hopeful about what AI can enhance, and clear-eyed about the slippery slope toward low-quality content and atrophied thinking.

Why this matters

As AI becomes embedded in every part of media, the next generation of journalists and communicators will be judged on more than writing skills. They will be judged on judgment: bias awareness, ethical decision-making, transparency, and the ability to use tools without surrendering the work of thinking.

What we cover
• How journalism education shifted from print-heavy production to online-first publishing
• The right way to integrate AI into student workflows without outsourcing the writing
• Using AI to check for bias and improve historical context in local reporting
• What transparency and disclosure should look like in AI-assisted media
• Media law, ethics, privacy, and how to teach responsible AI use
• Why the journalism job market is harder and what students can do to stand out
• Immersive journalism, empathy, and what VR still gets right even without mass adoption
• Kris’s hopes and fears about AI’s long-term impact on media

👤 Guest
🔗 Kris Hodgson-Bright | Lethbridge Polytechnic
🔗 Kris Hodgson-Bright (@hodgsonkr) / X
🔗 krishodgsonbright / LinkedIn

To explore more conversations like this and see what’s new, visit the freshly updated Media Copilot website at mediacopilot.ai. You’ll find new episodes, expanded resources, and tools designed for journalists, communicators, and media leaders navigating the fast-changing world of AI. It’s the home base for everything Media Copilot, and it’s just getting started.

Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2025
A candid look at how aggregation, personalization, and trust shape news discovery in an AI-driven internet.

Yahoo has been part of the internet’s front door for more than two decades. But what does it mean to guide audiences through news today, when consumption is fragmented, trust is fragile, and AI is reshaping how information is found, summarized, and shared?

In this Season 4 conversation of The Media Copilot, Pete Pachal sits down with Kat Downs Mulder, GM of Yahoo News, to unpack how one of the largest digital media platforms in the world is rethinking aggregation, personalization, and user habits in the age of AI.

From audio-first experiences and AI-powered summaries to the integration of Artifact’s technology into the Yahoo News app, Mulder explains how Yahoo is balancing innovation with responsibility while supporting original journalism across a noisy, algorithm-driven ecosystem.

Why This Matters

AI is no longer just a back-end optimization tool. It is actively shaping how audiences encounter news, how trust is maintained, and how publishers survive. This episode offers a rare inside look at how a major aggregator is navigating those shifts thoughtfully, without racing ahead of the facts or sacrificing credibility.

For media leaders, journalists, creators, and product teams, this conversation surfaces real-world lessons about where AI adds value, where human judgment remains essential, and why aggregation still plays a critical role in a healthy information ecosystem.

What We Cover in This Episode 👇
• Why Yahoo still matters as a major gateway to news
• How AI is reshaping content aggregation and personalization
• Why audio is becoming a powerful habit-building news format
• What Yahoo learned from integrating Artifact into its app
• How AI summaries drive deeper engagement rather than replace it
• Balancing speed, scale, and trust in AI-driven news products
• How publishers and creators coexist inside Yahoo’s ecosystem
• Why user behavior matters more than age or demographics
• What an agent-driven web means for the future of news discovery

👤 Guest
Kat Downs Mulder
General Manager, Yahoo News: https://news.yahoo.com/
🔗 LinkedIn: https://www.linkedin.com/in/katdowns
🔗 X (formerly Twitter): https://x.com/katdowns
🔗 Yahoo Press Announcement: https://www.yahooinc.com/press/yahoo-appoints-kat-downs-mulder-as-svp-amp-general-manager-of-yahoo-news
🔗 Speaker Bio (Digital Content Next Summit): https://events.digitalcontentnext.org/next-summit-2023/speaker/636864/kat-downs-mulder

📩 Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
© 2025 Musso Media. All rights reserved. © AnyWho Media 2025
As the year wraps up, we’re taking a pause from weekly interviews to share a curated Best of the Year in AI.

This special episode of The Media Copilot is a look back at the conversations that defined the past year: the questions, tensions, and turning points shaping how media, journalism, and technology intersect right now.

Over the past year, Pete has spoken with some of the sharpest minds working at the center of AI, publishing, and platform design. And while the tools keep evolving, the same core questions kept resurfacing:
• How should creators and publishers be compensated in an AI-driven world?
• Where does transparency end and exploitation begin?
• Who actually controls the future of information, and who should?

In this Best Of episode, you’ll hear standout moments from those conversations, including:
• How publishers are navigating AI licensing, attribution, and revenue
• Why the rise of AI agents and scraping tools is forcing a rethink of digital rights
• The growing tension between innovation and consent
• What ethical AI actually looks like in practice, not theory
• Why human judgment, context, and trust still matter more than ever

From conversations with leaders at ProRata, Cloudflare, Taboola, Factiva, and more, this episode captures the real debates happening behind the scenes, beyond the headlines and hype.

🎙️ Featured Voices
Bill Gross – Founder & CEO, ProRata
Annelies Jansen – Chief Business Officer, ProRata
Mark Howard – Chief Operating Officer, Time (formerly Time Inc.)
Adam Singolda – CEO, Taboola
Toshit Panigrahi – CEO & Co-Founder, Tolbit
Aurélie Guerrieri – Chief Marketing & Alliances Officer, Cloudflare
Stephanie Cohen – Chief Strategy Officer, Cloudflare
Mark Riley – Founder & CEO, Mathison AI
Traci Mabrey – General Manager, Factiva
Trip Adler – Founder & CEO, Created by Humans

If you care about the future of media, the economics of creativity, or how AI is reshaping who gets paid and who gets left behind, this one’s for you.

🎧 Listen now to The Media Copilot: Best of 2025, and stay tuned for what’s next.

📩 Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the channel. For more AI tools and resources built for media professionals, visit MediaCopilot.ai.

Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
© 2025 Musso Media. All rights reserved. © AnyWho Media 2025
AI has turned verification into a newsroom survival skill.Please support the sponsor of this podcast: PodPitch.com is a software that thousands of people use today to book podcasts with a 4% booking rate. It’s the most updated podcast email database and it comes with a custom-trained AI that learns YOUR voice and applies what works from more than 10 million previous pitches to optimize your own reply rate. Now one comms pro has the power of a 10-person team.Join Golin, Weber, Edelman, Finn, Broadhead, 5W and more in seeing a live demo today.Click here now to book time to check out PodPitch: https://new.podpitch.com/mediacopilotIn this episode of The Media Copilot, Pete Pachal talks with James Law, Editor-in-Chief of Storyful, about how newsrooms verify social and user-generated video in an era of AI, deepfakes, and viral misinformation.Law explains how verification evolved from the Arab Spring, when social media footage became central to breaking news, to today’s flood of viral clips and AI-generated video designed to look like real eyewitness content. He breaks down Storyful’s verification workflow, why metadata still matters, and how every clip is checked for date, location, and source.They also discuss the limits of AI detection tools, the rise of “harmless” synthetic videos that erode trust, and why authenticity and transparency matter more than ever for newsrooms.What We Cover- How Storyful verifies video at scale- Why AI detection tools fall short- The role of metadata and raw files- The growing trust problem in digital media- Why authenticity outperforms polish on social platformsAbout the GuestJames Law is Editor-in-Chief at Storyful.🔗 LinkedIn: https://www.linkedin.com/in/jameslaw21/ 🔗 X: https://x.com/JournoLawJ 🔗 https://storyful.com/about/📩 Enjoyed this episode?Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the channel. 
The surprising places AI helps journalists, and the places it really doesn’t.

AI in journalism can feel abstract until you talk to the people actually shipping products inside newsrooms. In this episode of The Media Copilot podcast, host Pete Pachal talks with Darla Cameron, Chief Product Officer at The Texas Tribune, about what happens when AI meets real reporting, real audiences, and real constraints.

Darla comes from a background in data journalism and visual storytelling at places like The Washington Post, and now leads product at a nonprofit newsroom that has been experimenting with custom tools, data explorers, and audience-driven experiences for more than a decade. In the wide-ranging discussion, she shares how the Tribune defines product as the interface between content and audience, and how AI and automation are starting to reshape that work without replacing journalists or eroding trust.

From transcription tools and meeting analysis to tightly scoped chatbots and AI-narrated stories, Darla walks through what is actually working inside the Tribune, what quietly failed, and the principles that guide every experiment.

Why This Matters
News organizations are being squeezed from all sides. Reporters are expected to cover more with fewer resources. Audiences are drifting into AI-powered interfaces that sit between publishers and their readers. At the same time, trust in institutions is fragile, and any perceived shortcut can damage a brand that took years to build.

Darla offers a grounded reality check from inside a newsroom that is embracing experimentation while drawing clear lines. The Tribune has an AI policy that explicitly says AI will not replace journalists. They do not use AI to generate news stories or images.
They are very deliberate about where automation helps and where human judgment is non-negotiable.

For anyone working in media, product, or audience strategy, this conversation is a practical guide to using AI as an assistive layer rather than a replacement. It is about how to adapt to new tools without losing the thing that makes your journalism worth trusting in the first place.

What We Cover
- Why “product” matters in a newsroom and how it links journalism, design, and audience engagement
- Real-world examples: using AI to transcribe interviews or analyze podcasts for reporting
- What The Texas Tribune’s AI policy looks like: when automation helps, and when human verification is essential
- Why the Tribune refuses AI-generated images and prefers real photography for accountability and trust
- Lessons from building chatbots and interactive tools, including what worked, what didn’t, and what the team learned
- How audience feedback guides when and how to use AI, especially in a nonprofit news model

About the Guest
Official bio: The Texas Tribune – Darla Cameron
Professional profile: LinkedIn – Darla Cameron
The Taboola CEO explains how USA Today’s DeeperDive changes news discovery, and why publishers might need “chat” after all.

AI is rewriting the rules of digital media, and few people have had a closer view of the shift than Adam Singolda. In this episode of The Media Copilot, host Pete Pachal has a candid conversation with the Taboola founder about where the open web is headed and why the next era of audience discovery may look nothing like the search-driven world we grew up in.

Adam has been building Taboola since 2007 and continues to shape one of the largest advertising and recommendation platforms on the open web. He shares what publishers often miss about AI, how performance advertising is evolving, and why trusted media brands may hold more power than ever as chat-based discovery becomes mainstream.

Why This Matters
Media is in a moment of rapid transition. Search traffic is shifting, conversational queries are becoming a default behavior, and publishers are being pushed to rethink how they attract and retain audiences. Adam offers rare insight from inside a global platform that sits at the center of these changes.
He explains how AI is reshaping revenue models, what publishers can do right now to stay competitive, and why this new interaction layer may redefine how journalism is discovered and consumed.

What We Cover
• Adam’s journey from founding Taboola in 2007 to running a global advertising and recommendation platform
• How Taboola positions itself as the performance engine of the open web
• What the company has learned about audience behavior across thousands of publishers
• How AI is changing advertising performance, engagement, and publisher economics
• The impact of declining search traffic and the rise of chat-based discovery
• The creation of Taboola’s DeeperDive experience and how users actually interact with AI on news sites
• Why publishers need to experiment with AI quickly or risk losing ground
• How trust and brand identity give premium publishers an advantage in the AI era
• What adoption looks like when you introduce AI directly into media workflows
• The future of LLM monetization and why Adam sees a major opportunity for journalism

More
Learn about Taboola’s mission and how it started: https://www.taboola.com
LinkedIn: https://www.linkedin.com/in/adamsingolda
In this episode of The Media Copilot, Pete Pachal talks with Mark Howard, Chief Operating Officer at Time, about how a century-old newsroom is adapting to a world where readers increasingly turn to AI systems for information.

Howard explains how Time approached AI not as a passing trend but as a shift in how journalism will be discovered and consumed. He walks through the decisions behind partnering with AI companies, the work required to safeguard Time’s archive, and how the Time AI Agent grew out of experiments with summaries, translations, and audio briefings.

The conversation offers a clear look at the practical choices a legacy media brand faces when it tries to stay trusted in new formats without compromising the reporting that built its reputation.

What we cover in this episode
• How Time decided to negotiate with AI companies instead of taking an adversarial stance
• The behind-the-scenes systems created to protect IP and track bot activity
• The evolution from Person of the Year experiments to daily AI audio briefings to the Time AI Agent
• Why the agent is grounded only in Time’s archive and what that means for accuracy and trust
• How Time is approaching AI marketplaces, enterprise licensing, and the agent-to-agent web
• What this shift means for the newsroom, editorial workflows, and audience relationships

Learn more
Mark Howard on Time: https://time.com/author/mark-howard
The Story Behind the TIME AI Agent: https://time.com/7332572/the-story-behind-the-time-ai-agent
Mark on LinkedIn: https://www.linkedin.com/in/markdhoward
Mark on X: https://x.com/markdhoward

This post was drafted with AI and then carefully edited by Media Copilot editors.
Michael Rubenstein on how “brand agents” are reshaping advertising, publishing, and the Internet itself.

We’ve spent decades trying to make digital advertising smarter. Cookies, pixels, and data exchanges promised personalization but delivered clutter, tracking fatigue, and declining returns. Then came AI, bringing the chance not just to improve ads but to completely reimagine how brands and audiences interact.

In this episode of The Media Copilot, host Pete Pachal sits down with Michael Rubenstein, Co-CEO of Firsthand and one of the original architects of modern ad tech. After helping launch DoubleClick’s Ad Exchange (later acquired by Google), Rubenstein is now building something that feels like the opposite of programmatic advertising: a world where brand “agents” don’t just target you, they talk to you.

Instead of static banners or pre-rolls, these AI-driven brand agents act like adaptive digital representatives that engage, inform, and even create content on the fly. They’re built to live anywhere, inside a publisher’s story, across a retailer’s site, or within a chat experience, meeting consumers wherever they are and responding in real time to what they actually want.

This conversation explores how brand agents are transforming advertising into an intelligent, intent-driven dialogue, and what that means for publishers, marketers, and the future of media.

What We Cover:
• How AI-driven brand agents are changing advertising and media engagement
• Why this new model removes invasive tracking and builds real consumer trust
• How publishers can use adaptive experiences to grow audience value
• Why AI represents not automation but communication
• The cultural and ethical stakes of rebuilding advertising around AI conversations

In Closing
AI is taking down the old walls of the Internet.
The question isn’t whether advertising and publishing will change; it’s whether they can adapt fast enough to stay relevant. The future, as Rubenstein says, isn’t programmatic, it’s personal.

Connect with Michael Rubenstein:
🔗 Firsthand.ai
💼 LinkedIn – Michael Rubenstein
X (formerly Twitter): https://x.com/mrubenstein99

Listen to the full episode of The Media Copilot with host Pete Pachal and guest Michael Rubenstein on Apple Podcasts, Spotify, or wherever you get your shows.
👉 Visit mediacopilot.ai for more on our classes, insights, and upcoming episodes.

(AI-assisted)
How AI is actually being used inside media companies today, and what success looks like when it’s done right.

In this episode of The Media Copilot podcast, Pete Pachal speaks with John Levitt, COO of Elvex, about how AI is actually being adopted inside newsrooms and media organizations today. Not the hype. Not the pitch-deck version. The real workflows happening behind the scenes.

Elvex works with major media companies to build internal AI environments that support reporting, fact-checking, content repurposing, sales operations, research, and product strategy. John has a rare view into the daily shift in how teams work, collaborate, and adapt.

This conversation explores:
• How editorial, business, and product teams are already using AI
• Why culture and leadership framing determine whether AI succeeds
• Where AI reduces repetitive work without replacing journalists
• What “context engineering” means and why it matters more than prompts
• How media companies can experiment with AI safely and responsibly
• The next shift toward agent-to-agent workflows and personalized news experiences

If you work in media, journalism, audience growth, newsroom operations, AI product development, or leadership strategy, this episode breaks down what is actually changing and what is coming next.

GUEST: John Levitt
https://www.elvex.com/
https://www.linkedin.com/in/johnmlevitt/
What if AI could make us better writers instead of replacing us? The next chapter of the internet may do exactly that by using technology to strengthen creativity rather than erase it.

Tony Stubblebine, CEO of Medium, joins The Media Copilot with Pete Pachal to talk about the new reality of writing in an AI world. As algorithms reshape how stories are created and shared, Stubblebine believes we are entering a writing renaissance where technology helps writers stay focused, authentic, and connected to their readers.

They explore:
- The collapse of free-content economics and the rise of the post-Google internet
- Why Medium is betting on smaller, more human writing communities
- How AI can enhance creativity rather than erase it
- The tools that will keep writers in flow and make the act of writing joyful again

If you care about creativity, technology, and the future of storytelling, this is a conversation you should not miss.
What if AI could read your book, learn from it, summarize it, and remix it, all with your permission and a paycheck? Trip Adler is working to make that possible.

Trip Adler co-founded Scribd, helped pioneer book subscriptions, and knows publishing inside and out. Now he’s back with a new mission: protect human creativity in the age of AI.

In this episode of The Media Copilot, Pete Pachal talks with Trip about his latest venture, Created by Humans, a licensing platform that helps AI companies access creative works legally, starting with books. They dive into:
- How AI companies are scraping content without permission
- What “AI rights” actually are and why creators need to define them now
- Why licensing for training, RAG (reference), and transformation should all be separate
- How Trip’s “Fourth Law of Robotics” could reset the power dynamic between human and machine
- What the $3,000-per-book Anthropic case tells us about future settlements
- And why this could be the next big revenue stream for authors and publishers

If you’re a writer, publisher, AI builder, or just someone who wants creators to get credit and compensation in the AI era, this is the conversation to hear.

What You’ll Learn
Books are emerging as the front line in the battle between human creativity and artificial intelligence, and understanding why is key to navigating what comes next. You’ll learn how AI rights differ from traditional copyright law and why that distinction matters for anyone working in publishing, media, or technology. The conversation also explains what AI companies can and should be paying to use creative works, and how those payments change depending on whether the content is being used for training, reference, or transformation. You will come away with a clearer understanding of why these categories are not interchangeable and why defining each one is essential.
Most importantly, the conversation highlights why authors deserve transparency, control, and compensation when their work helps power an AI product.

GUEST: Trip Adler
Co-Founder of Scribd, Founder of Created by Humans
🔗 createdbyhumans.ai | LinkedIn
As AI eats the internet, publishers are fighting to keep control. Raptive’s Chief Growth Officer, Marc McCollum, says it’s not the end of the open web, it’s a chance to rebuild it on creators’ terms.

In this episode of The Media Copilot, host Pete Pachal sits down with Marc McCollum, Chief Growth Officer at Raptive, to talk about the future of media in the age of AI. Raptive powers over 6,000 creators and 200 enterprise publishers, and Marc argues that the key to survival isn’t joining the platforms, it’s owning your audience.

From the impact of AI Overviews on search traffic to the rise of Google Discover as a quiet growth engine, they break down what’s really happening behind the analytics. Marc also calls out Big Tech’s “free-content” problem, explains why licensing and pay-per-crawl models could reshape revenue, and shares why recipe sites might just be the unsung heroes of the AI era.

If you care about the open web, creator independence, or where AI-powered media goes next, this conversation is essential.

WHAT YOU’LL LEARN:
- Why AI summaries are crushing clicks, and what to do about it
- How creators can stay profitable when SEO and affiliate models fall short
- Why owning your site and email list is still your best move
- What Raptive’s data reveals about traffic winners and losers in 2025
- The quiet power of Google Discover and how to make it work for you

GUEST: Marc McCollum, Chief Growth Officer, Raptive
🔗 raptive.com | LinkedIn
Local news without politics, crime, or chaos? Meet the startup making it happen.

In this episode of The Media Copilot, host Pete Pachal talks with Ryan Heafy, co-founder of 6AM City, about how his team is flipping the script on community journalism, using AI to scale responsibly while keeping human editorial judgment at the center.

Discover how they’ve expanded from a few test cities to 410+ local markets with a “Seed-to-Profit” model, how their anti-scraping strategy builds trust from the ground up, and why they just acquired controversial AI startup Good Daily, not for the headlines but for the infrastructure.

Whether you're in media, tech, marketing, or just want smarter news in your inbox, this one's for you.

WHAT YOU’LL LEARN:
- How 6AM City uses AI to launch, test, and scale new markets
- Why newsletters are the future of local media, and why most will fail
- How 6AM avoids outrage bait and click-driven news
- What they really got from acquiring Good Daily
- How to build content that survives Gmail filters, TikTok scrolls, and voice assistants
- Why their inbox model might outlast traditional journalism

👥 GUEST: Ryan Heafy, Co-Founder & Chief Local Officer, 6AM City
🔗 LinkedIn | 🌐 6amcity.com




