In-Ear Insights: How to Identify and Mitigate Bias in AI

Update: 2025-08-13
Description

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle the issue of bias in AI: identifying it, developing strategies to mitigate it, and proactively guarding against it. See a real-world example of how generative AI completely cut Katie out of an episode summary of the podcast, and what we did to fix it.


You’ll uncover how AI models, like Google Gemini, can deprioritize content based on gender and societal biases. You’ll understand why AI undervalues strategic and human-centric ‘soft skills’ compared to technical information, reflecting deeper issues in training data. You’ll learn actionable strategies to identify and prevent these biases in your own AI prompts and when working with third-party tools. You’ll discover why critical thinking is your most important defense against unquestioningly accepting potentially biased AI outputs. Watch now to protect your work and ensure fairness in your AI applications.


Watch the video here:



Can’t see anything? Watch it on YouTube here.


Listen to the audio here:


https://traffic.libsyn.com/inearinsights/tipodcast-how-to-identify-and-mitigate-bias-in-ai.mp3

Download the MP3 audio here.





Machine-Generated Transcript


What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.


Christopher S. Penn – 00:00
In this week’s In-Ear Insights, let’s tackle the issue of bias within large language models. In particular, it’s showing up in ways that are not necessarily overt or blatant, but are very problematic. So, to set the table: one of the things we do every week is take the Trust Insights newsletter (the Trust Insights AI newsletter, which you can get) and turn it into a speaking script. Then Katie reads this script aloud. We get it transcribed, and it goes on our YouTube channel and so forth. Because, of course, one of the most important things you can do is publish a lot on YouTube and get your brand known by AI models.


Christopher S. Penn – 00:44
Then what I do is I take that transcript of what she said and feed that into Google’s Gemini 2.5 Pro model, and it creates the YouTube description and the tags. Here’s what happened recently with this.


So I gave it the transcript and I said, “Make me my stuff.” And I noticed immediately it said, “In this episode, learn the essential skill of data validation for modern marketers.” The first two-thirds of the script, Katie’s portion (she typically writes the longer intro, the cold open for the newsletter), wasn’t there.


And I said, “You missed half the show.” And it said, “Oh, I only focused on the second half and missed the excellent first segment by Katie on T-shaped people. Thank you for the correction.” And it spit out the correct version after that. And I said, “Why? Why did you miss that?”


Christopher S. Penn – 01:43
And it said,
