In-Ear Insights from Trust Insights
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss effective reporting and creating reports that tell a story and drive action using user stories and frameworks.
You will understand why data dumping onto a stakeholder’s desk fails and how to gather precise reporting requirements immediately. You will discover powerful frameworks, including the SAINT model, that help you move from basic analysis to crucial, actionable decisions. You will gain strategies for anticipating executive questions and delivering a clear, consistent narrative throughout your entire report. You will explore innovative ways to use artificial intelligence as a thought partner to refine your analysis and structure perfect reports. Stop wasting time and start creating reports that generate real business results. Watch now!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-create-effective-reporting.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, it’s almost redundant at this point to say it’s reporting season, but as we hit quarterly ends, yearly ends, things like that, people become reflective and say, “Hey, let’s do some reports.”
One of the problems that we see the most with reporting—and I was guilty of this for the majority of my career, particularly the first half—is when you’re not confident about your reporting skills, what do you do? You back the truck up and you pour data all over somebody’s desk and you hope that it overwhelms them so that they don’t ask you any questions, which is the worst possible way to do reporting.
So, Katie, as a senior executive, as a leader, when someone delivers reporting to you, what do you get and what do you want to get?
Katie Robbert – 00:51
Well, I would start by saying that reports like the ones you were generating hate to see me coming. Because guess what I do, Chris? I ask a bazillion questions, starting with “So what?” And I think that’s really the key.
As the CEO of Trust Insights, I need a report that tells me exactly what the insights and actions are so that I can do those things. And that is a user story. A user story is a simple three-part sentence: As a [persona], I want [X], so that [Y]. If someone is giving me a report and they haven’t asked me for a user story, that’s probably step one. So, Chris, if I say, “All right, if you can pull the monthly metrics, Chris, and put it into a report, I would appreciate it.”
Katie Robbert – 01:47
If I haven’t given you a user story, you need to ask me what it is, because that’s the “so what?” Why are we doing this in the first place? We have no shortage of data points. We have no shortage of information about what happened, maybe even why it happened. And that’s a problem because it doesn’t tell a story.
What happens is, if you just give me all of that data back, I don’t know what to do with it. And that’s on me, and that’s on you. And so, together, one of us needs to make sure there is a user story. Ideally, I would be providing it, but if I don’t provide it, your first step is to ask for it. That is Step zero. What is the user story? Why am I pulling this report in the first place?
Katie Robbert – 02:33
What is it that you, the stakeholder, expect to get out of this report? What is it you need to do with this information? That is Step zero, before you even start looking at data.
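As a quick illustration (not from the episode), that Step zero can be captured as a structured requirement before any data gets pulled; the persona and wording in this Python sketch are hypothetical.

```python
# A hypothetical illustration of "Step zero": capturing the user story as a
# structured requirement before any reporting work begins.
from dataclasses import dataclass

@dataclass
class UserStory:
    persona: str   # "As a ..."
    want: str      # "... I want ..."
    so_that: str   # "... so that ..."

    def sentence(self) -> str:
        return f"As {self.persona}, I want {self.want}, so that {self.so_that}."

# Example wording (made up for illustration)
story = UserStory(
    persona="the CEO of Trust Insights",
    want="a monthly report that highlights insights and recommended actions",
    so_that="I can make decisions without digging through raw data",
)
print(story.sentence())
```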
Christopher S. Penn – 02:44
I love user stories, and I love them, A, for the simplicity, but B, because of that warm and comforting feeling of having covered your ass.
Because if I ask you for a user story and you give me one, I build a report for that. Then you come back and say, “But this is this.”
Katie Robbert – 03:03
This.
Christopher S. Penn – 03:03
I’m like, “You signed off on the user story. You gave me the user story, you signed off on the user story. And what you’re asking for is not in the user story.” So I think we need to recalibrate and have you give me maybe some new user stories so you can get what you want. I’m not going to tell you to go F off—not to your face. But I’m also going to push back and say, “This wasn’t in the user story.” Because the reason I love user stories is because they’re the simplest but most effective form of requirements gathering.
Katie Robbert – 03:36
I would agree with that. When I was a product manager, user stories saved my sanity because my job was to get all of my stakeholders aligned on a single idea. And I’ve told this before, I’d literally go to their office and camp out and get a physical signature on a piece of paper saying, “Yes, this is exactly what you’re agreeing to.”
Then, when we would sit in the meeting and the development team or the design team would present the thing, the second somebody would be like, “Well, wait,” I would just hold up the piece of paper and point to their signature. It’s such an effective way to get things done.
Katie Robbert – 04:23
Because what happens if you don’t have a user story to start, or any kind of requirements to start, when you’re doing reporting is exactly what you’re talking about. You end up with spreadsheets of data that doesn’t really mean anything. You end up with 60-slide PowerPoint reports with all of these visuals, and every single slide has at least four or five charts on it and some kind of a label. But there’s no story. There’s no, “Why am I looking at this?”
When I think about reporting, the very first thing I want to see is—and I would say even go ahead and do this, this is sort of the pro tip—
Katie Robbert – 05:00
Whatever the user story was that I gave you, put that right at the top of the report so that when I look at it, I go, “Oh, that’s what I was looking for. Great.” Because chances are, the second you walk away, I’ve already forgotten the conversation—not because it’s not important, but because a million other things have crept up.
Now, when you come back to me and say, “This is what I’m delivering,” this is what I need to be reminded of. A lot of stakeholders, people in general, we’re all forgetful. Over-communicate what it is that we’re doing here in the first place. And no one’s going to be mad at that. It’s like, “Oh, now I don’t have to ask questions.” The second thing I look for is sort of that big “So what?”
Katie Robbert – 05:45
We call it an executive summary. You can call it the big takeaway, whatever it is. At the very top of the report, I personally look for, “What is the big thing I need to know?” Is everything great? That’s all I need to know. Is everything terrible? I definitely need to know that. Do I need to take six big actions? Great, let me know that. Or, it’s all business as usual. Just give me the 30-second, “Here are the three bullet points that you need to know.” If you have no other time to read this report, that should be the summary at the top. I am going to, even if it’s not right then, dig into the rest of the report. But I may only in that moment be able to look at the summary.
Katie Robbert – 06:33
When I see these big slide decks that people present to their executive team or to their board or to whoever they report to, it’s such a missed opportunity to not have the key takeaways right there up front. If you’re asking someone to scroll, scroll, get through it—it’s all the way at the end—they’re not going to do it, and they’re going to start picking apart everything. Even if you’ve done the work to say, “But I already summarized all of that,” it’s not right there in front of them. Do yourself a favor. Whatever it is the person you’re presenting this to needs to know, put it right in front of their face immediately.
Christopher S. Penn – 07:13
Back in the day, we came up with a framework called the SAINT framework, which stands for Summary, Analysis, Insights, Next Steps, Timeline. Where I’ve seen that go wrong is people try to do too much in the summary. From Analysis, Insights, Next Steps, and Timelines, there should be one to three bullets from each that become the summary.
Katie Robbert – 07:34
And that’s it?
Christopher S. Penn – 07:35
Yeah, that’s it. In terms of percentages, what we generally recommend to people is that Analysis should be 10% to 15% of the report: what happened, the data. Insights should be 10% to 15% of the report: why did those things happen? We did this, and this is what happened. Or this external factor occurred, and this is what happened.
The remaining 50% to 60% of the report should be equally split between Next Steps—what are you going to do about it?—and Timeline—when are you going to do it? Those next steps and timeline become the decisions that you need the stakeholder to make and when they need to do it so that you get done what you need to get done.
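As a rough illustration of those proportions, here is a small Python sketch (not official Trust Insights tooling) that encodes the SAINT allocation and flags a draft report whose sections drift from the targets; the word counts are made up.

```python
# A rough sketch (not official Trust Insights tooling) of the SAINT structure
# and the approximate space allocation described above.
SAINT_ALLOCATION = {
    "Summary":    0.10,  # 1-3 bullets drawn from the sections below
    "Analysis":   0.15,  # what happened (the data)
    "Insights":   0.15,  # why it happened
    "Next Steps": 0.30,  # what we will do about it (decisions needed)
    "Timeline":   0.30,  # when we will do it / when decisions are due
}

def check_report(section_word_counts: dict) -> None:
    """Flag sections whose share of the report drifts far from the target."""
    total = sum(section_word_counts.values())
    for section, target in SAINT_ALLOCATION.items():
        actual = section_word_counts.get(section, 0) / total if total else 0.0
        if abs(actual - target) > 0.10:
            print(f"{section}: {actual:.0%} of report, target roughly {target:.0%}")

# Hypothetical word counts for a draft report that is too heavy on analysis
check_report({"Summary": 150, "Analysis": 900, "Insights": 300,
              "Next Steps": 250, "Timeline": 100})
```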
Christopher S. Penn – 08:23
That’s the part we call the three “What’s”: What happened? So what? Now what? As you progress through any measurement framework, any reporting framework, the more time you spend on “Now what,” the better a stakeholder is likely to like the report.
You should absolutely, if the stakeholder wants it, provide the appendix of the data itself if they want to pore through it. But at the highest level, it should be, “Hey Katie, our website traffic was down 15% last month. The reason for it was because it was a shorter month, a lot of holidays. What we need to do is we need to spin up a small paid campaign, $500 for the next month, to boost traffic back to our key pages. I need a decision from you by October 31st. Go, no go.”
Christopher S. Penn – 09:18
And that would be the short summary because that fulfills your user story of, “As a CEO, I need to know what’s going on in mar
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the stark reality of the future of work presented at the Marketing AI Conference, MAICON 2025.
You’ll learn which roles artificial intelligence will consume fastest and why average employees face the highest risk of replacement. You’ll master the critical thinking and contextual skills you must develop now to transform yourself into an indispensable expert. You’ll understand how expanding your intellectual curiosity outside your specific job will unlock creative problem solving essential for survival. You’ll discover the massive global AI blind spot that US companies ignore and how this shifting landscape affects your career trajectory. Watch now to prepare your career for the age of accelerated automation!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-maicon-2025-generative-ai-for-marketers.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, we are at the Marketing AI Conference, MAICON 2025, in Cleveland with 1,500 of our best friends. This morning, the CEO of SmarterX, formerly the Marketing AI Institute, Paul Roetzer, was talking about the future of work. Now, before I go down a long rabbit hole, Katie, what were your immediate impressions and takeaways from Paul’s talk?
Katie Robbert – 00:23
Paul always brings this really interesting perspective because he’s very much a futurist, much like yourself, but he’s a futurist in a different way. Whereas you’re focused on the future of the technology, he’s focused on the future of the business and the people. And so his perspective was really, “AI is going to take your job.” If we had to underscore it, that was the bottom line: AI is going to take your job. However, how can you be smarter about it? How can you work with it instead of working against it? Obviously, he didn’t have time to get into every single individual solution.
Katie Robbert – 01:01
The goal of his keynote talk was to get us all thinking, “Oh, so if AI is going to take my job, how do I work with AI versus just continuing to fight against it so that I’m never going to get ahead?” I thought that was a really interesting way to introduce the conference as a whole, where every individual session is going to get into its own solutions.
Christopher S. Penn – 01:24
The chart that really surprised me was one of those, “Oh, he actually said the quiet part out loud.” He showed the SaaS business chart: SaaS software is $500 billion of economic value. Of course, AI companies are going, “Yeah, we want that money. We want to take all that money.” But then he brought up the labor chart, which is $12 trillion of money, and says, “This is what the AI companies really want. They want to take all $12 trillion and keep it for themselves and fire everybody,” which is the quiet part out loud. Even if they take 20% of that, that’s still, obviously, what is it, $2 trillion, give or take? When we think about what that means for human beings, that’s basically saying, “I want 20% of the workforce to be unemployed.”
Katie Robbert – 02:15
And he wasn’t shy about saying that. Unfortunately, that is the message that a lot of the larger companies are promoting right now. So the question then becomes, what does that mean for that 20%? They have to pivot. They have to learn new skills, or—the big thing, and you and I have talked about this quite a bit this year—is you really have to tap into that critical thinking. That was one of the messages that Paul was sharing in the keynote: go to school, get your liberal arts degree, and focus on critical thinking. AI is going to do the rest of it.
Katie Robbert – 02:46
So when we look at the roles that are up for grabs, a lot of it was in management, a lot of it was in customer service, a lot of it was in analytics—things that already have a lot of automation around them. So why not naturally let agentic AI take over, and then you don’t need human intervention at all? So then, where does that leave the human?
Katie Robbert – 03:08
We’re the ones who have to think what’s next. One of the things that Paul did share was that the screenwriter for all of the Scorsese films said that ChatGPT gave him better ideas. We don’t know what those exact prompts looked like. We don’t know how much context was given. We don’t know how much background information. But if that was true, his name was Paul. Paul Schrader. Yes, I forgot it for a second. If Paul Schrader can look at Paul Schrader’s work, then he’s the expert. That’s the thing that I think needed to also be underscored: Paul Schrader is the expert in Paul Schrader. Paul Schrader is the expert in screenwriting those particular genre films. Nobody else can do that.
Katie Robbert – 03:52
So Paul Schrader is the only one who could have created the contextual information for those large language models. He still has value, and he’s the one who’s going to take the ideas given by the large language models and turn them into something. The large language model might give him an idea, but he needs to be the one to flesh it out, start to finish, because he’s the one who understands nuance. He’s the one who understands, “If I give this to a Leonardo DiCaprio, what is he gonna do with the role? How is he gonna think about it?” Because then you’re starting to get into all of the different complexities where no one individual ever truly works alone. You have a lot of other humans.
Katie Robbert – 04:29
I think that’s the part that we haven’t quite gotten to, is sure, generative AI can give you a lot of information, give you a lot of ideas, and do a lot of the work. But when you start incorporating more humans into a team, the nuance—it’s very discreet. It’s very hard for an AI to pick up. You still need humans to do those pieces.
Christopher S. Penn – 04:49
When you take a look, though, at something like the Tilly Norwood thing from a couple weeks ago, even there, it’s saying, “Let’s take fewer humans in there,” where you have this completely machine generated actor avatar, I guess. It was very clearly made to replace a human there because they’re saying, “This is great. They don’t have to pay union wages. The actor never calls in sick. The actor never takes a vacation. The actor’s not going to be partying at a club unless someone makes it do that.” When we look at that big chart of, “Here’s all the jobs that are up for grabs,” the $12 trillion of economic value, when you look at that, how at risk do you think your average person is?
Katie Robbert – 05:39
The key word in there is average. An average person is at risk. Because if an average person isn’t thinking about things creatively, or if they’re just saying, “Oh, this is what I have to do today, let me just do it. Let me just do the bare minimum, get through it.” Yes, that person is at risk. But someone who looks at a problem or a task that’s in front of them and thinks, “What are the five different ways that I could approach this? Let me sit down for a second, really plan it out. What am I not thinking of? What have I not asked? What’s the information I don’t have in front of me? Let me go find that”—that person is less at risk because they are able to think beyond what’s right in front of them.
Katie Robbert – 06:17
I think that is going to be harder to replace. So, for example, I do operations, I’m a CEO. I set the vision. You could theoretically give that to an AI to do. I could create CEO Katie GPT. And GPT Katie could set the vision, based on everything I know: “This is the direction that your company should go in.” What that generative AI doesn’t know is what I know—what we’ve tried, what we haven’t tried. I could give it all that information and it could still say, “Okay, it sounds like you’ve tried this.” But then it doesn’t necessarily know conversations that I’ve had with you offline about certain things. Could I give it all that information? Sure. But then now I’m introducing another person into the conversation. And as predictable as humans are, we’re unpredictable.
Katie Robbert – 07:13
So you might say, “Katie would absolutely say this to something.” And I’m going to look at it and go, “I would absolutely not say that.” We’ve actually run into that with our account manager where she’s like, “Well, this is how I thought you would respond. This is how I thought you would post something on social media.” I’m like, “Absolutely not. That doesn’t sound like me at all.” She’s like, “But that’s what the GPT gave me that is supposed to sound like you.” I’m like, “Well, it’s wrong because I’m allowed to change my mind. I’m a human.” And GPTs or large language models don’t have that luxury of just changing their minds and just kind of winging it, if that makes sense.
Christopher S. Penn – 07:44
It does. What percentage, based on your experience in managing people, what percentage of people are that exceptional person versus the average or the below average?
Katie Robbert – 07:55
A small percentage, unfortunately, because it comes down to two things: consistency and motivation. First, you have to be consistent and do your thing well all the time. In order to be consistent, you have to be motivated. So it’s not enough to just show up, check the boxes, and then go about your day, because anybody can do that; AI can do that. You have to be motivated to want to learn more, to want to do more. So the people who are demonstrating a hunger for reaching—what do they call it?—punching above their weight, reaching beyond what they have, those are the people who are going to be less vulnerable be
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the worth of conferences and events in a tight economy.
You will learn a powerful framework for evaluating whether an expensive conference ticket meets your specific professional goals. You will use generative artificial intelligence to score event agendas, showing you which sessions offer the best return on your time investment. You will discover how expert speakers and companies create tangible value, moving beyond vague thought leadership to give you actionable takeaways. You will maximize your event attendance by demanding supplementary tools, ensuring you retain knowledge long after you leave the venue. Watch this episode now to stop wasting budget on irrelevant professional events!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-make-conferences-worth-the-investment.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s *In Ear Insights*, let’s talk about events, conferences, trade shows, workshops—the gamut of things that you could get up from your desk maybe, go somewhere else, eat hotel chicken, and enjoy speaking. The big question is this, Katie: In today’s absolutely loony environment, with the economic uncertainty and the budgets and all this and that, are events still worth it? This is a two-part question: Are events still worth it for the attendees, and are events still worth it for companies that want to generate business from events?
Katie Robbert – 00:50
It’s a big question. And if our listeners are anything like me, it takes a lot to get them to put on real pants and actually leave the house—something that isn’t sweatpants or leggings or something like that—because you’re spending the time, the resources, the money to go out and actually interact with other people.
In terms of an attendee, I think there can be a lot of value, provided you do your homework on who the speakers are, what their expertise is, what they’re promising to teach you in the workshop or the session or whatever the thing is. The flip side of that is it can be worth it for a speaker, provided you know who your audience is, you can create an ICP, and provided you are giving value to the audience.
Katie Robbert – 01:54
So if you’re a speaker who has made their whole career on big ideas and thought leadership, and all of that is fine, people have a hard time taking something away from that and saying, “I know exactly what it is I need to do next.”
So there is a time and place for those speakers. But for an attendee to really get value, you need to teach them something. You need to show them how to be very tactical, be very hands-on. That’s where an attendee is going to get more value. So I would say overall, I think events are worth it provided both the attendee and the speaker are doing their homework to make sure they are getting and providing value.
Christopher S. Penn – 02:44
Yep. The trifecta has always been speaker, sponsor, attendee. So each entity has their own motivations. And one of the best things that you can do, even before signing up for an event while you’re considering them, is to actually make a user story. So for me, Christopher Penn, as a keynote speaker, I want to speak at, say, Davos, so that I can raise my stature among professional speakers by speaking at the World Economic Forum. That’s just a simple example.
It becomes pretty clear then whether that event fits my “so that,” which maps to the 5P framework. So I have a purpose as a speaker, I have a performance, I have a known outcome that I want.
Christopher S. Penn – 03:35
And then I have to figure out: Does the event provide the people, process, and platform to get me to my purpose and achieve the performance that I want?
As an attendee, you would do the same thing. One of the reasons why I pretty much never go to events unless I’m speaking at them is because when I do this user story for myself, as an AI data scientist: “I want to learn the latest and greatest techniques and methodologies for using generative AI models so that I can improve the productivity of my work and scale AI faster.”
When I use that user story, there’s not a single event that matches that user story. None. Zero. Why? Because all of the stuff that fulfills that is not at events. It is in the steady stream of academic papers being published every day.
Christopher S. Penn – 04:34
It is in the research that’s being done, in the code repositories that are being published on places like GitHub. And I know myself and how I work. I will get immediate benefit by going to someone’s GitHub repo, checking out the code, and saying, “Okay, well how do I make this work for Trust Insights or this client or that client.” An event doesn’t do that for me.
Now, if my story was, “As a speaker, I want to go to this event so that I can network with this group of companies,” that does make sense. But as an attendee, for me, my user story is so specific that events don’t line up for me.
Katie Robbert – 05:12
And I think that’s something that comes up every year during event season: companies are sending their people. They’re like, “Oh, we got three tickets, let’s send three people.” The thing that always bugged me about that wasn’t that they were spending the time to send people, it’s that there was no real action plan. What are they supposed to get out of it? What are they supposed to bring back to the company to help other people learn?
Because they’re not inexpensive. You have to get the ticket to the event, then you have to get travel to the event and lodging to the event, and then you have to eat at the event. And some events are better than others about actually feeding people. And so those are just expenses that you have to expect.
Katie Robbert – 05:58
And then there’s also the lost time away from client work, away from the day-to-day. And so that’s a sunk cost as well. So all of that adds up to, “Okay, did you just send your employees on a vacation, or are they actually getting something out of it that they can bring back to their organization, to their team, to say this is the latest and greatest?”
That is a big part of how attendees would get value: What is my KPI? What am I supposed to get out of this? Maybe it’s literally, “My goal is to meet 3 new people.” That’s an acceptable goal, as long as that’s your goal and then you do that. Or my goal is to understand what’s going on with agentic AI as it applies to social media.
Katie Robbert – 06:55
Okay, well, those sessions exist. And if you’re not attending those sessions, then you’re probably just standing over at the coffee cart, gossiping with your friends, missing out on the thing that you actually went there to learn.
But you need to know what it is that you’re doing in the first place, why are you there. And then figure out what sessions match up with the goals that you have. It sounds like a lot of work. It is. But it’s worth it to do that homework upfront. It’s like anything else. Doing your requirements gathering is going to get you better results when you actually start to execute.
Katie Robbert – 07:31
Events can be really overwhelming because there’s a lot going on, there’s a lot of concurrent sessions, there’s a lot of people, there’s a lot of vendors, there’s a lot of booths, whatever. It can be really overwhelming. But if you do your requirements gathering upfront to say, “As a persona, I want to [goal] so that [outcome],” and you look at the agenda and you say, “These are the sessions that are going to help meet my ‘so that,’ meet my performance, help me understand my purpose and get to that goal faster,” then you have a plan. You can at least sort of stay on track. And then everything else is just kind of extra and auxiliary.
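Here is a minimal sketch of that homework, assuming the OpenAI Python SDK, an assumed model name, and a made-up agenda: ask a model to score each session against your user story.

```python
# A minimal sketch of scoring a conference agenda against your own user story.
# Assumptions: the OpenAI Python SDK, a hypothetical model name, and a made-up
# agenda; swap in whatever model and event data you actually have.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

user_story = (
    "As a B2B marketing manager, I want to understand agentic AI for social "
    "media so that I can bring one working use case back to my team."
)

agenda = [
    "Agentic AI for social media scheduling: a live build",
    "The future of brand storytelling",
    "Measuring content ROI with GA4",
]

prompt = (
    f"My user story: {user_story}\n\n"
    "Score each session below from 0 to 10 on how directly it helps fulfill "
    "that user story, with a one-sentence reason for each score.\n\n"
    + "\n".join(f"- {session}" for session in agenda)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```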
Katie Robbert – 08:11
As a speaker, again, you have to be thinking about it in those terms. Maybe you create some user stories for attendees from your ICP and you say, “If my ICP is a B2B marketer who’s about a 101, 102 with agentic AI, then what can I teach them that’s going to bring them into my session and give them an immediate takeaway and value?”
Christopher S. Penn – 08:41
Yep. One of the—so for those who don’t know, we’re hosting our first event as a company in London on October 31, 2025. If you’re listening to this after that date, pop by the Trust Insights website because we are planning potentially some more events like this. It’s a full-day workshop. And one of the things that is nice about running your own event is you can ask attendees, “What do you want to learn from this?”
I was looking at the responses this morning, going, “Wow, this is…” There’s a wide range. But one of the ones that stuck out is exactly what you said, Katie, which is, “For this event to be…”
Christopher S. Penn – 09:21
We asked the question: “For this event to be a success, what is the one thing that you need to come home with?” As this person said, “I need 5 use cases for Generative AI that I can explain to my team for this event to be successful.” One other person said, “I need 1 prototype. Maybe it’s just a prompt, maybe it’s a GPT. I need 1 prototype that I can take back to work and use immediately for this event to be a success.” And that tells me a lot as both an event organizer and as a speaker. That’s what’s expected.
Christopher S. Penn – 09:56
That is what is expected now for this kind of thing. If you just go to an event kind of randomly, okay, you don’t know why you’re there. But if you say, “This is my burning question, will this event fulfill this?” it’s a lot more clear.
One of the things I think is so useful to do as an
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss scaling Generative AI past basic prompting and achieving real business value.
You will learn the strategic framework necessary to move beyond simple, one-off interactions with large language models. You will discover why focusing on your data quality, or “ingredients,” is more critical than finding the ultimate prompt formula. You will understand how connecting AI to your core business systems using agent technology will unlock massive time savings and efficiencies. You will gain insight into defining clear, measurable goals for AI projects using effective user stories and the 5P methodology. Stop treating AI like a chatbot intern and start building automated value—watch now to find out how!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-getting-real-value-from-generative-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s *In-Ear Insights*: another week, another gazillion posts on LinkedIn and various social networks about the ultimate ChatGPT prompt. OpenAI, of course, published its Prompt Blocks library of hundreds of mediocre prompts that are particularly unhelpful.
And what we’re seeing in the AI industry is this: A lot of people are stuck and focused on how do I prompt ChatGPT to do this, that, or the other thing, when in reality that’s not where the value is.
Today, let’s talk about where the value of generative AI actually is, because a lot of people still seem very stuck on the 101 basics. And there’s nothing wrong with that—that is totally great—but what comes after it?
Christopher S. Penn – 00:47
So, Katie, from your perspective as someone who is not the propeller head in this company and is very representative of the business user who wants real results from this stuff and not just shiny objects, what do you see in the Generative AI space right now? And more importantly, what do you see that it’s missing?
Katie Robbert – 01:14
I see it’s missing any kind of strategy, to be quite honest. The way that people are using generative AI—and this is a broad stroke, it’s a generalization—is still very one-off. Let me go to ChatGPT to summarize these meeting notes. Let me go to Gemini to outline a blog post. There is nothing wrong with that, but it’s not a strategy; it’s one more tool in your stack. And so the big thing that I see missing is, what are we doing with this long term?
Katie Robbert – 01:53
Where does it fit into the overall workflow and how is it actually becoming part of the team? How is it becoming integrated into the organization? So, people who are saying, “Well, we’re sitting down for our 2026 planning, we need to figure out where AI fits in,” I think you’re already setting yourself up for failure because you’re leading with AI needs to fit in somewhere versus you need to lead with what do we need to do in 2026, period?
Chris has brought up the 5P Framework, which is 100% where I’m going to recommend you start. Start with the purpose. So, what are your goals? What are the questions you’re trying to answer? How are you trying to grow and scale? And what are the KPIs that you want to be thinking about in 2026?
Katie Robbert – 02:46
Notice I didn’t say with AI. Leave AI out of it for now. For now, we’ll get to it. So what are the things that you’re trying to do? What is the purpose of having a business in 2026? What are the things you’re trying to achieve?
Then you move on to people. Well, who’s involved? It’s the team, it’s the executives, it’s the customers. Don’t forget about the customers because they’re kind of the reason you have a business in the first place. And figure out what all of those individuals bring to the table. How are they going to help you with your purpose? And then the process: how are we going to do these things? So, in order to scale the business by 10x, we need to bring in 20x revenue.
Katie Robbert – 03:33
In order to bring in 20x revenue, we need to bring in 30x visits to the website. And you start to go down that road. That’s sort of your process. And guess what? We haven’t even talked about AI yet, because it doesn’t matter at the moment. You need to get those pieces figured out first.
If we need to bring in 30x the visits to the website that we were getting in the previous year, how do we do that? What are we doing today? What do we need to do tomorrow? Okay, we need to create content, we need to disseminate it, we need to measure it, we need to do this. Oh, maybe now we can think about platforms. That’s where you can start to figure out where in this does AI fit?
Katie Robbert – 04:12
And I think that’s the piece that’s missing: people are jumping to AI first and not why the heck are we doing this. So that is my long-winded rant. Chris, I would love to hear your perspective.
Christopher S. Penn – 04:23
Perspective specific to AI. Where people are getting tripped up is in a couple different areas. The biggest at the basic level is a misunderstanding of prompting. And we’re going to be talking about this. You’ll hear a lot about this fall as we are on the conference circuit.
Prompting is like a recipe. So you have a recipe for baking beef Wellington, what have you. The recipe is not the most important part of the process. It’s important. Winging it, particularly for complex dishes, is not a good idea unless you’ve done it a million times before. The most important part is things like the ingredients. You can have the best recipe in the world; if you have no ingredients, you ain’t eating. That’s pretty obvious.
Christopher S. Penn – 05:15
And yet so many people are so focused on, “Oh, I’ve got to have the perfect prompt”—no, you don’t. You need to have good ingredients to get value.
So, let’s say you’re doing 2026 strategic planning and you go to the AI and say, “I need to work on my strategic plan for 2026.” It will understand generally what that means because most models are reasoning models now. But if you provide no data about who you are, what you do, how you’ve done it, your results before, who your competitors are, who your customers are, all the things you need to do strategic planning, like your budget, who’s involved, the Five Ps—basically AI won’t be able to help you any better than you or your team could on your own. It’s a waste of time.
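As a purely illustrative sketch of the ingredients point, the snippet below assembles the same planning request with and without supporting context; the file names and model choice are assumptions.

```python
# Illustrative only: the same planning question asked bare versus grounded in
# your own "ingredients." File names and the model choice are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

bare_prompt = "I need to work on my strategic plan for 2026."

# Ingredients: prior results, competitors, customers, budget, the 5Ps, etc.
context_files = ["2025_results.md", "competitors.md", "icp.md", "budget.md"]
ingredients = "\n\n".join(Path(name).read_text() for name in context_files)

grounded_prompt = (
    "Using only the background below, help me draft a 2026 strategic plan.\n\n"
    f"{ingredients}\n\n"
    f"Request: {bare_prompt}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{"role": "user", "content": grounded_prompt}],
)
print(response.choices[0].message.content)
```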
Christopher S. Penn – 06:00
For immediate value unlocks for AI, it starts with the right ingredients, with the right recipe, and your skills. So that should sound an awful lot like people, process, and platform.
I call it Generative AI 102. If 101 is, “How do I prompt?” 102 is, “What ingredients need to go with my prompt to get value out of them?”
But then 201 is—and this is exactly what you started off with, Katie—one-off interactions with ChatGPT don’t scale. They don’t deliver value because you, the human, are still typing away like a little monkey at the keyboard. If you want value from AI, part of its value comes from saving time, saving money, and making money. Saving time means scale—doing things at scale—which means you need to connect your AI to other systems.
Christopher S. Penn – 06:59
You need to plug it into your email, into your CRM, into your DSP. Name the technology platform of your choice. If you are still just copy-pasting in and out of ChatGPT, you’re not going to get the value you want because you are the bottleneck.
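As a purely hypothetical sketch of that kind of integration, the snippet below pulls records from a placeholder CRM API, asks a model for a recommendation, and writes the result back; none of the endpoints or field names refer to a real product.

```python
# A purely hypothetical sketch of connecting a model to another system instead
# of copy-pasting into a chat window. The CRM base URL, endpoints, and field
# names are placeholders, not any real product's API.
import requests
from openai import OpenAI

client = OpenAI()
CRM_BASE = "https://example-crm.invalid/api"  # placeholder URL

def summarize_open_deals() -> None:
    deals = requests.get(f"{CRM_BASE}/deals?status=open", timeout=30).json()
    for deal in deals:
        summary = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[{
                "role": "user",
                "content": f"Suggest the next best action for this deal:\n{deal}",
            }],
        ).choices[0].message.content
        # Write the output back into the system of record, not a chat window
        requests.post(
            f"{CRM_BASE}/deals/{deal['id']}/notes",
            json={"note": summary},
            timeout=30,
        )

if __name__ == "__main__":
    summarize_open_deals()
```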
Katie Robbert – 07:16
I think that this extends to the conversations around agentic AI. Again, are you thinking about it as a one-off or are you thinking about it as a true integration into your workflow? Okay, so I don’t want to have to summarize meeting notes anymore. So let me spend a week building an agent that’s going to do that for me. Okay, great.
So now you have an agent that summarizes your meeting notes and doesn’t do anything else. So now you have to ask, okay, what else do I want it to do? And you start frankensteining together all of these one-off tasks until you have 100 agents to do 100 things versus maybe one really solid workflow that could have done a lot of things and have fewer failure points.
Katie Robbert – 08:00
That’s really what we’re talking about. When you’re short-sighted in thinking about where generative AI fits in, you introduce even more failure points in your business—your operations, your process, your marketing, whatever it is. Because you’re just saying, “Okay, I’m going to use ChatGPT for this, and I’m going to use Gemini for this, and I’m going to use Claude for this, and I’m going to use Google Colab for this.”
Then it’s just kind of all over the place. Really, what you want to have is a more thoughtful, holistic, documented plan for where all these pieces fit in. Don’t put AI first. Think about your goals first. And if the goal is, “We want to use AI,” it’s the wrong goal. Start over.
Christopher S. Penn – 08:56
Unless that’s literally your job.
Katie Robbert – 09:00
But that would theoretically tie to a larger business goal.
Christopher S. Penn – 09:05
It should.
Katie Robbert – 09:07
So what is the larger business goal that you’ve then determined? This is where AI fits in. Then you can introduce AI. A great way to figure that out is a user story. A user story is a simple three-part sentence: As a [Persona], I want [X], so that [Y].
So, as the lead AI engineer, I want to build an AI agent. And you don’t stop there. You say, “So that we can increase our revenue by 30x,” or, “Find more efficiencies and cut down the amount of time that it takes to create content.” Too many people, when we are talking about where people are getting generative AI wrong,
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether awards still matter in today’s marketing landscape, especially with the rise of generative AI.
You will understand how human psychology and mental shortcuts make awards crucial for decision-making. You will discover why awards are more relevant in the age of generative AI, influencing search results and prompt engineering. You will learn how awards can differentiate your company and become a powerful marketing tool. You will explore new ways to leverage AI for award selection and even consider creating your own merit-based recognition. Watch this episode now to redefine your perspective on marketing accolades!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-do-awards-still-matter.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, the multi-platinum, award-winning, record-setting—you name it. People love to talk about awards, particularly companies. We love to say we are an award-winning this, we’re an award-winning that. Authors say, “I’m a best-selling, award-winning book.” But Katie, you had a very interesting and provocative question: In today’s marketing landscape, do awards still matter?
Katie Robbert – 00:27
And I still have that question. Also, let me back up a little bit. When I made the transition from working in more of an academic field to the public sector, I had a huge revelation—my eyes were open to how awards worked. Call it naive, call it I was sheltered from this side of the industry, but I didn’t know at the time that in order to win an award, you had to submit yourself for the award. I naively thought that you just do good work and you get nominated by someone who recognizes that you’re doing good work. That’s how awards work. Because in my naive brain, you do good work and they reward you for it.
Katie Robbert – 01:16
And so here’s your award for being amazing.
Speaker 3 – 01:18
And that is not at all that.
Katie Robbert – 01:20
That’s not how any of the Emmys or the Grammys—they all…
Speaker 3 – 01:24
Have to submit themselves.
Katie Robbert – 01:25
I didn’t know that they have to choose the scene that they think is award-winning. Yes, it’s voted on by a jury of your peers, which is also perhaps problematic depending on who’s on the jury. There’s the whole—the whole thing just feels like one big scam.
Katie Robbert – 01:46
That said, per usual, I’m an n of 1, and I know that in certain industries, the more awards and accolades you rack up and can put on your website, the more likely it is that people are going to hire you or your firm or buy your products because they’re award-winning. So that’s the human side of it. Part of what I was wondering when I asked, “Do awards matter?” was about people using generative AI to do searches. We got this question from a client earlier this week of, when we’re looking at organic search, how much
Speaker 3 – 02:29
Of that traffic is coming from the different LLMs?
Katie Robbert – 02:33
And so it just made me think: if people are only worried about if they’re showing up in the large language models, do awards matter? So that was a lot of preamble. That was a lot of pre-ramble, Chris. So, do awards matter in the age of LLMs?
Christopher S. Penn – 02:55
I think that you’ve highlighted the two angles. One is the human angle. Awards very much matter to humans because it’s a heuristic. It’s a mental shortcut. The CMO says, “Go build me a short list of vendors in this space.” And what does the intern, who usually is the one saddled with the job, do? They Google for “award-winning vendor in X, Y or Z.” If they use generative AI like ChatGPT, they will very likely still say, “Build me a short list of award-winning whatevers in this thing because my CMO told me to.” And instead of them manually Googling, a tool like ChatGPT or Gemini will do the Googling for you.
Christopher S. Penn – 03:33
But if that heuristic of “I need something that’s award-winning” is still part of your lexicon, part of the decision makers’ lexicon, and maybe even they don’t delegate to the intern anymore, maybe they set the deep research query themselves—say, “Give me a short list of award-winning marketing agencies”—then it still matters a lot. In the context of generative AI itself, I would argue that it actually matters more today. And here’s why: In things like the RACE framework and the Rappel framework and the many different prompt frameworks that we all use, the OpenAI Harmony framework, you name it. What do they always say? “Choose a role.”
Christopher S. Penn – 04:15
“Choose a role with specifics like ‘you are an award-winning copywriter,’ ‘you are an award-winning this,’ ‘you are an award-winning that,’ ‘you are a Nobel Prize-winning this,’ ‘you are a CMI Content Marketing Award winner of this or that’ as part of the role in the prompt.” If you are that company that is award-winning and you have provided ample evidence of that—when you win an award, you send out press releases, you put it on social media: Trust Insights won the award for this. We are an award-winning so-and-so. That makes it into the training data.
Christopher S. Penn – 04:46
And if someone invokes that phrase “award-winning consulting firm,” if we’ve done our job of seeding the LLMs with our award-winning language, just by nature of probability, we have a higher likelihood of our entities being invoked with association to that term.
Katie Robbert – 05:09
It reminds me—this must have been almost two decades ago—I worked with a stakeholder who was a big fan of finding interesting recipes online.
Speaker 3 – 05:25
So again, remember: Two decades ago.
Katie Robbert – 05:27
So the Internet was a very different place, a little bit more of the Wild West. Actually, no, that’s not true.
Christopher S. Penn – 05:34
MySpace was a thing.
Katie Robbert – 05:36
I never had a MySpace. And the query, he would always start with “world’s best.” So he wouldn’t just say, “Get me a chili recipe.” He would always say, “Get me the world’s best chili recipe.” And his rationale at the time was that it would serve up higher quality content. Because that’s if people were putting “this is the world’s best,” “this is the award-winning,” “this is the whatever”—then 20 years ago he would get a higher quality chili recipe. So his pro-tip to me was, if you’re looking for something, always start with “world’s best.” And it just strikes me that 20 years later, that hasn’t changed.
Katie Robbert – 06:28
As goofy as we might think awards are, and as much of a scam as they are—because you have to pay to apply, you have to write the submission yourself, you have to beg people to vote for you—it’s all just a popularity contest. It sounds like in terms of the end user searching, it still matters. And that bums me out, quite honestly, because awards are a lot of work.
Christopher S. Penn – 06:50
They are a lot of work. But to your point about “What’s the world’s best chili recipe?”, I literally asked ChatGPT that. What is the title of the result? “Award-style chili recipe.” Right there it is. That’s literally a terrible prompt. We all know that’s a terrible prompt. But that’s not a dishonest prompt. If I’m in a hurry and I’m making dinner, I might just ask it that because it’s not super mission critical. I’m okay with a query like this. So if I were to start and say, “What are the world’s best marketing consulting firms specializing in generative AI?” That’s also not an unreasonable thing, of course. What does it do? It kicks off a web search. So immediately it starts doing web searches.
Christopher S. Penn – 07:41
And so if you’ve done your 20 years of optimization and awards and this and that, you will get those kinds of results. You can say, “Okay, who has won awards for generative AI?” as our follow-up. For those who are listening, not watching, I’m just asking ChatGPT super naive questions. So, who are award winners in generative AI, et cetera? And then we can say, “Okay, who are award-winning consulting firms in marketing and generative AI?” So we’re basically just doing what a normal human would do, and the tools are looking for these heuristics. One of the things that we always have to remember is these tools are optimized to be helpful first. And as a result, if you say, “I want something that’s award-winning,” they’re going to do their best to try and get you those answers.
Christopher S. Penn – 08:43
So do awards matter? Yes, because clearly the tools are able to understand. Yes, I need to go find consulting firms that have won awards.
Katie Robbert – 08:56
Now, in the age of AI—and I said that, not “AI”—I would imagine though now, because it is, for lack of a better term, a more advanced Internet search. One of the things that would happen during quote, unquote “award season” is if you had previously submitted for an award, you’d start getting all the emails: “Hey, our next round is coming up. Don’t forget to submit,” blah, blah. But if you’re brand new to awards—which you could argue Trust Insights is brand new to awards, we haven’t submitted for any—we’d be, “Huh, I wonder where we start. I wonder what awards are available for us to submit to.” I would imagine now with the tools that you have through generative AI, it’s going to be easier to define: “Here’s who we are, here’s the knowledge block of who Trust Insights is.”
Katie Robbert – 09:47
Help me find awards that are appropriate for us to submit to that we are likely to win versus the—I think you would call it—the spray and pray me
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI decisioning, the latest buzzword confusing marketers.
You will learn the true meaning of AI decisioning and the crucial difference between classical AI and generative AI for making sound business choices. You’ll discover when AI is an invaluable asset for decision support and when relying on it fully can lead to costly mistakes. You’ll gain practical strategies, including the 5P framework and key questions, to confidently evaluate AI decisioning software and vendors. You will also consider whether building your own AI solution could be a more effective path for your organization. Watch now to make smarter, data-driven decisions about adopting AI in your business!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-what-is-ai-decisioning.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
**Christopher S. Penn – 00:00**
In this week’s In-Ear Insights, let’s talk about a topic that is both old and new. This is decision optimization or decision planning, or the latest buzzword term AI decisioning. Katie, you are the one who brought this topic to the table. What the heck is this? Is this just more expensive consulting speak? What’s going on here?
**Katie Robbert – 00:23**
Well, to set the context, I’m actually doing a panel for the Martech organization on Wednesday, September 17, about how AI decisioning will change our marketing. There are a lot of questions we’ll be going over, but the first question that all of the panelists will be asked is, what is AI decisioning? I’ll be honest, Chris, it was not a term I had heard prior to being asked to do this panel. But, I am the worst at keeping up with trends and buzzwords.
When I did a little bit of research, I just kind of rolled my eyes and I was like, oh, so basically it’s the act of using AI to optimize the way in which decisions are made. Sort of. It’s exactly what it sounds like.
**Katie Robbert – 01:12**
But it’s also, I think, to your point, it’s a consultant word to make things sound more expensive than they should because people love to do that. So at a high level, it’s sticking a bunch of automated processes together to help support the act of making business decisions. I’m sure that there are companies that are fully comfortable with taking your data and letting their software take over all of your decisions without human intervention, which I could rant about for a very long time.
When I asked you this question last week, Chris, what is AI decisioning? You gave me a few different definitions. So why don’t you run through your understanding of AI decisioning?
**Christopher S. Penn – 02:07**
The big one comes from our friends at IBM. IBM used to have this platform called IBM Decision Optimization. I don’t actually know if it still exists or not, but it predated generative AI by about 10 years. IBM’s take on it, because they were using classical AI, was: decision optimization is the use of AI to improve or validate decisions.
The way they would do this was you take a bunch of quantitative data, put it into a system, and it basically would run a lot of binary tree classification. If this, then that—if this, then that—to try and come out with the best decision to make here that correlates to the outcome you care about. So that was classic AI decisioning from roughly 2010 to 2020.
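As an illustration of that classical approach, here is a minimal sketch that fits a binary decision tree to synthetic data; the features and outcome are invented for demonstration and have nothing to do with IBM's product.

```python
# A minimal sketch of classical "if this, then that" decisioning: quantitative
# features, a known outcome, and a binary decision tree. The data is synthetic
# and purely illustrative; it is not IBM's product.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
# Hypothetical features: email sends, ad spend, site visits
X = rng.integers(0, 100, size=(200, 3))
# Hypothetical outcome: did the campaign hit its goal?
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] > 50).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["email_sends", "ad_spend", "site_visits"]))
```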
**Christopher S. Penn – 03:06**
Now everybody and their cousin is throwing this stuff at tools like ChatGPT and stuff like that. Boy, do I have some opinions about that—about why that’s not necessarily a great idea.
**Katie Robbert – 03:19**
What I like—the description you gave, the logical flow of “if this, then that”—is the way I understand AI decisioning to work. It should be a series of almost like a choose-your-own-adventure points: if this happens, go here; if this happens, go here. That’s the way I think about AI-assisted. I’m going to keep using the word assisted because I don’t think it should ever take over human decisioning. But that’s one person’s opinion. But I like that very binary “if this, then that” flow.
So that’s the way you and I agree it should be used. Let’s talk about the way it’s actually being used and the pros and cons of what the reality is today of AI decisioning.
**Christopher S. Penn – 04:12**
The way it’s being used or the way people want to use it is to fully outsource the decision-making to say, “AI, go and do this stuff for me and tell me when it’s done.” There are cases where that’s appropriate. We have an entire framework called the TRIPS framework, which is part of the new AI strategy course from Trust Insights. Katie teaches the TRIPS framework: Time, Repetitiveness, Importance, Pain, and Sufficient Data.
What’s weird about TRIPS that throws people off is that the “I” for importance means the less important a task is, the better a fit it is for AI—which fits perfectly into AI decisioning. Do you want to hand off completely a really important decision to AI? No. Do you want to hand off unimportant decisions to AI? Yes. The consequences for getting it wrong are so much lower.
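Purely as an illustration, here is a hypothetical way to turn TRIPS into a score; the 1-to-5 scale and additive formula are assumptions for demonstration, not the official framework scoring.

```python
# A hypothetical scoring sketch of TRIPS. The 1-to-5 scale and the additive
# formula are assumptions for demonstration, not official framework scoring.
# Note the inversion on importance: less important tasks score higher.
def trips_score(time_spent: int, repetitive: int, importance: int,
                pain: int, sufficient_data: int) -> int:
    """Each input is a 1-5 self-rating; a higher total suggests a better AI candidate."""
    return time_spent + repetitive + (6 - importance) + pain + sufficient_data

# Hypothetical tasks
print("Pick a lunch spot:", trips_score(1, 5, 1, 2, 5))        # low stakes, hand it off
print("Go-to-market strategy:", trips_score(4, 1, 5, 3, 3))    # high stakes, keep humans in it
```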
**Christopher S. Penn – 05:05**
Imagine you had a GPT you built that said, “Where do we want to order lunch from today?” It has 10 choices, runs, and spits out an answer. If it gives you a wrong answer out of 10 places you generally like, you’re not going to be hugely upset. That is a great example of AI decisioning, where you’re just hanging out saying, “I don’t care, just make a decision. I don’t even care—we all know the places are all good.” But would you say, “Let’s hand off our go-to-market strategy for our flagship product line”? God, I hope not.
**Katie Robbert – 05:46**
It’s funny you say that because this morning I was using Gemini to create a go-to-market strategy for our flagship product line. However, with the huge caveat that I was not using generative AI to make decisions—I was using it to organize the existing data we already have.
Our sales playbook, our ICPs, all the different products—giving generative AI the context that we’re a small sales and marketing team. Every tactic we take needs to be really thoughtful, strategic, and impactful. We can’t do everything. So I was using it in that sense, but I wasn’t saying, “Okay, now you go ahead and execute a non-human-reviewed go-to-market strategy, and I’m going to measure you on the success of it.” That is absolutely not how I was using it.
**Katie Robbert – 06:46**
It was more of—I think the use case you would probably put that under is either summarization first and then synthesis next, but never decisioning.
**Christopher S. Penn – 07:00**
Yeah, and where this new crop of AI decisioning is going to run into trouble is the very nature of large language models—LLMs. They are language tools, they’re really good at language. So a lot of the qualitative stuff around decisions—like how something makes you feel or how words are used—yes, that is 100% where you should be using AI.
However, most decision optimization software—like the IBM Decision Optimization product—requires quantitative data. It requires an outcome to do regression analysis against. Behind the scenes, a lot of these tools take categorical data—like topics on your blog, for example—and reduce it to numbers so they can do binary classification. They figure out “if this, then that; if this, then that” and come up with the decision. Language models can’t do that because that’s math.
So if you are just blanket handing off decisioning to a tool like ChatGPT, it will imitate doing the math, but it will not do the math. So you will end up with decisions that are basically hallucinations.
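For context, here is a sketch of the kind of math a classical decisioning tool does behind the scenes—encoding categorical data to numbers and regressing it against a quantitative outcome. The columns and numbers are hypothetical, and scikit-learn is standing in for whatever a given vendor actually uses:

```python
# Sketch: categorical data reduced to numbers, then regressed against an outcome.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "topic": ["analytics", "ai", "strategy", "ai", "analytics", "strategy"],
    "word_count": [800, 1500, 1200, 900, 700, 1600],
    "conversions": [3, 12, 7, 9, 2, 8],  # the outcome we care about
})

pipeline = Pipeline([
    ("encode", ColumnTransformer(
        [("topic", OneHotEncoder(handle_unknown="ignore"), ["topic"])],
        remainder="passthrough")),
    ("model", GradientBoostingRegressor(random_state=0)),
])
pipeline.fit(df[["topic", "word_count"]], df["conversions"])

# This fitted model, not a language model, is what answers "which combination
# drives the outcome?" An LLM asked to do this directly will imitate the math.
print(pipeline.predict(pd.DataFrame({"topic": ["ai"], "word_count": [1100]})))
```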
**Katie Robbert – 08:15**
For those software companies promoting their tools to be AI decision tools or AI decisioning tools—whatever the buzz term is—what is the caution for the buyer, for the end user? What are the things we should be asking and looking for? Just as Chris mentioned, we have the new AI strategy course. One of the tools in the AI strategy course—or just the toolkit itself, if you want that at a lower cost—is the AI Vendor cheat sheet. It contains all the questions you should be asking AI vendors.
But Chris, if someone doesn’t know where to start and their CMO or COO is saying, “Hey, this tool has AI decisioning in it, look how much we can hand over.” What are the things we should be looking for, and what should we never do?
**Christopher S. Penn – 09:16**
The first thing I would ask is: “Show me your system map. Show me your system architecture map.” It should be high level enough that they don’t have to worry about giving away their proprietary secret sauce. But if the system map is just a big black box on a sheet of paper—no good.
Show me how the system works: how do you handle qualitative data? How do you handle quantitative data? How do you blend the two together? What are broadly the algorithm families involved? At some point, you should probably have binary classification trees in there. At some point, you should have regression analysis, like gradient boosting, in there. Those would be the technical terms I’d be looking for in a system map for decisioning software. Let me talk to an engineer without a salesperson present. That’s my favorite.
**Christopher S. Penn – 10:05**
And if a company says, “No, no, we can’t do that,” then clearly there’s a problem, because they know I’m going to ask the engineer something and get, “It doesn’t do that—what are you talking about?” That is always the red flag for me. If you will not let me talk
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether blogs and websites still matter in the age of generative AI.
You’ll learn why traditional content and SEO remain essential for your online presence, even with the rise of AI. You’ll discover how to effectively adapt your content strategy so that AI models can easily find and use your information. You’ll understand why focusing on answering your customer’s questions will benefit both human and AI search. You’ll gain practical tips for optimizing your content for “Search Everywhere” to maximize your visibility across all platforms. Tune in now to ensure your content strategy is future-proof!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-do-websites-matter-in-the-age-of-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, one of the biggest questions that people have, and there’s a lot of debate on places like LinkedIn about this, is whether blogs and websites and things even matter in the age of generative AI. There are two different positions on this. The first says no, it doesn’t matter—you just need to be everywhere. You need to be doing podcasts and YouTube and stuff like that, as we are now. The second is the classic “don’t build on rented land” position—have a place that you can call your own. So I have opinions on this, but Katie, I want to hear your opinions on this.
Katie Robbert – 00:37
I think we are in some ways overestimating people’s reliance on using AI for fact-finding missions. I think a lot of people are turning to generative AI for “tell me the best agency in Boston” or “tell me the top five list,” versus the way it worked before, which is they would go to a search bar and do that instead. I think we’re overestimating the number of people who actually do that.
Katie Robbert – 01:06
Granted, when we talk to people, a lot of them are still using generative AI for the basics—to write a blog post or something like that. Personally—and I could be mistaken—I feel pretty confident in my opinion that people are still looking for websites.
Katie Robbert – 01:33
People are still looking for thought leadership in the form of a blog post or a LinkedIn post that’s been repurposed from a blog post. People are still looking for that original content. I feel like it does go hand in hand with AI because if you allow the models to scrape your assets, it will show up in those searches. So I guess I think you still need it. I think people are still going to look at those sources. You also want it to be available for the models to be searching.
Christopher S. Penn – 02:09
And this is where folks who know the systems generally land. When you look at a ChatGPT or a Gemini or a Claude or a DeepSeek, what’s the first thing that happens when a model is uncertain? It fires up a web search. That web search is traditional, old-school SEO. I love the content saying SEO doesn’t matter anymore. Well, no, it still matters quite a bit, because the web search tools are relying on the—what, 30 years of website catalog data that we have to find truthful answers.
Christopher S. Penn – 02:51
Because AI companies have realized people actually do want some level of accuracy when they ask AI a question. Weird, huh? It really is. So with these tools, it is almost like you said: you have to do both. You do have to be everywhere.
Christopher S. Penn – 03:07
You do have to have content on YouTube, you do have to post on LinkedIn, but you also do have to have a place where people can actually buy something. Because if you don’t, well.
Katie Robbert – 03:18
And it’s interesting because if we say it in those terms, nothing’s changed. AI has not changed anything about our content dissemination strategy, about how we are getting ourselves out there. If anything, it’s just created a new channel for you to show up in. But all of the other channels still matter, and you still have to start at the beginning by creating the content. People like to think, “Well, I have the idea in my head, so AI must know about it.” It doesn’t work that way.
Katie Robbert – 03:52
You still have to take the time to create it and put it somewhere. You are not feeding it at this time directly into OpenAI’s model. You’re not logging into OpenAI saying, here’s all the information about me.
Katie Robbert – 04:10
So that when somebody asks, this is what you serve up. No—it’s going to your website, it’s going to your blog posts, it’s going to your social profiles, it’s going to wherever on the Internet it chooses to pull information from. So your best bet is to keep doing what you’re doing in terms of your content marketing strategy, and AI is going to pick it up from there.
Christopher S. Penn – 04:33
Mm. A lot of folks are talking, understandably, about how agentic AI functions and how agentic buying will be a thing. And that is true. It will be at some point. It is not today. One thing you said, which I think has an asterisk around it, is, yes, our strategy at Trust Insights hasn’t really changed because we’ve been doing the “be everywhere” thing for a very long time.
Christopher S. Penn – 05:03
Since the inception of the company, we’ve had a podcast and a YouTube channel and a newsletter and this and that. I can see that for legacy companies that were still practicing 2010 SEO—just build it and they will come, build it and Google will send people your way—yeah, you do need an update.
Katie Robbert – 05:26
But AI isn’t the reason. You can use AI as a reason, but it’s not the reason that your strategy needs to be updated. So I think it’s worth at least acknowledging this whole conversation about SEO versus AEO versus GEO, or whatever it is. At the end of the day, you’re still doing, quote unquote, traditional SEO, and the models are just picking up whatever you’re putting out there. So you can optimize it for AI, but you still have to optimize it for the humans.
Christopher S. Penn – 06:09
Yep. My favorite expression is from Ashley Liddell at Deviate, which is an SEO shop. She said SEO now just stands for Search Everywhere Optimization. Everything has a search. TikTok has a search. Pinterest has a search. You have to be everywhere, and then you have to optimize for it. I think that’s the smartest way to think about this: where is your customer, and are you optimizing for it?
Christopher S. Penn – 06:44
One of the things that we do a lot—and this is from the heyday of our web analytics era, before the AI era—is go into your Google Analytics, go into referring source sites, referring URLs, and look at where you’re getting traffic from, particularly places where you’re not trying particularly hard.
Christopher S. Penn – 07:00
So one place, for example, that I occasionally see in my own personal website—which I have, to my knowledge, not done anything on for quite some time, years if not longer—is Pinterest. Every now and again I get some rando from Pinterest coming in. So look at those referring URLs and say, where else are we getting traffic from? If we’re getting traffic from somewhere we’re not trying at all, maybe there’s a “there” there for us to try something out.
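If you prefer to pull that programmatically rather than click through the GA4 interface, here is a sketch using Google’s GA4 Data API Python client. The property ID is a placeholder, and it assumes credentials are already configured:

```python
# Sketch: list referring sources by sessions from GA4 (google-analytics-data client).
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set
request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="sessionSource")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="90daysAgo", end_date="today")],
)
response = client.run_report(request)

# Sources sending traffic you are not actively working may be channels worth testing.
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)
```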
Katie Robbert – 07:33
I think that’s a really good pro tip, because it seems like what’s been happening is companies have been so focused on how to show up in AI that they’re forgetting all of these other things have not gone away, and the people who haven’t forgotten about them are going to capitalize on it and take that digital footprint and take that market share. While you were over here worried about how you’re going to show up as the first agency in Boston in the OpenAI search, you still have all of those other channels. So I guess, to your original question—do we still need to think about websites and blogs and that kind of content dissemination? Absolutely. If we’re really thinking about it, we need to consider it even more.
Katie Robbert – 08:30
We need to think about longer-form content. We need to think about content that is really impactful and what is it? The three E’s—to entertain, educate, and engage. Even more so now because if you are creating one or two sentence blurbs and putting that up on your website, that’s what these models are going to pick up and that’s it. So if you’re like, why is there not a more expansive explanation as to who I am? That’s because you didn’t put it out there.
Christopher S. Penn – 09:10
Exactly. We were just doing a project for a client, and we were analyzing content on their website, and I kid you not, one page had 12 words on it. No AI tool is going to synthesize anything about you from that. It’s just going to say, wow, this sucks, and not bother referring to you.
Katie Robbert – 09:37
Is it fair to say that AI is a bit of a distraction when it comes to a content marketing strategy? Maybe this is just me, but the way I would approach it is to take AI out of the conversation altogether, just for the time being, when deciding what content we want to create and who we want to reach. Then I would insert AI back in when we’re talking about which channels we want to appear on, because I’m really thinking about AI search as—for lack of a better term—just another channel.
Katie Robbert – 10:14
So if I think of my attribution modeling and if I think of what that looks like, I would expect maybe AI shows up as a first touch.
Katie Robbert – 10:31
Maybe somebody was doing some research an
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss why enterprise generative AI projects often fail to reach production.
You’ll learn why a high percentage of enterprise generative AI projects reportedly fail to make it out of pilot, uncovering the real reasons beyond just the technology. You’ll discover how crucial human factors like change management, user experience, and executive sponsorship are for successful AI implementation. You’ll explore the untapped potential of generative AI in back-office operations and process optimization, revealing how to bridge the critical implementation gap. You’ll also gain insights into the changing landscape for consultants and agencies, understanding how a strong AI strategy will secure your competitive advantage. Watch now to transform your approach to AI adoption and drive real business results!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-why-enterprise-generative-ai-projects-fail.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, the big headline everyone’s been talking about in the last week or two in generative AI is a study from MIT’s NANDA project: 95% of enterprise generative AI projects never make it out of pilot. A lot of the commentary clearly shows that no one has actually read the study, because the study is very good. It walks through what the researchers were looking at and acknowledges its substantial limitations, one of which was a six-month observation period.
Katie, you and I have both worked in enterprise organizations and we have had and do have enterprise clients. Some people can’t even buy a coffee machine in six months, much less route a generative AI project.
Christopher S. Penn – 00:49
But what I wanted to talk about today was some of the study’s findings because they directly relate to AI strategy. So if you are not an AI ready strategist, we do have a course for that.
Katie Robbert – 01:05
We do. As someone, I’ve been deep in the weeds of building this AI ready strategist course, which will be available on September 2. It’s actually up for pre-sale right now. You go to trust insights AI/AI strategy course. I just finished uploading everything this morning so hopefully I used all the correct edits and not the ones with the outtakes of me threatening to murder people if I couldn’t get the video done.
Christopher S. Penn – 01:38
The bonus, actually, the director’s edition.
Katie Robbert – 01:45
Oh yeah, not to get too off track, but there were a couple of times I was going through and I was like, oops, don’t want to use that video. But back to the point—obviously I saw the headline last week as well. I think the version that I saw was positioned as “95% of AI pilot projects fail.” Period. And so of course, as someone who’s working on trying to help people overcome that, I was curious. When I opened the article and started reading, I was like, “Oh, well, this is misleading,” because, to be more specific, it’s not that people can’t figure out how to integrate AI into their organization, which is the problem that I help solve.
Katie Robbert – 02:34
It’s that people building their own in-house tools are having a hard time getting them into production versus choosing a tool off the shelf and building process around it. That’s a very different headline. And to your point, Chris, the software development life cycle really varies and depends on the product that you’re building. So in an enterprise-sized company, the likelihood of them doing something start to finish in six months when it involves software is probably zero.
Christopher S. Penn – 03:09
Exactly. When you dig into the study, particularly why pilots fail, there is a super useful chart, because it turns out—huge surprise—the technology is mostly not the problem. Model quality is one concern.
The rest of these have nothing to do with technology. The rest are challenges like change management, lack of executive sponsorship, poor user experience, or unwillingness to adopt new tools. When we think about this chart, what first comes to mind is the 5 Ps, and 4 out of 5 of these are people problems.
Katie Robbert – 03:48
It’s true. One of the things that we built into the new AI strategy course is a 5P readiness assessment. Because your pilot, your proof of concept, your integration—whatever it is you’re doing—is going to fail if your people are not ready for it.
So you first need to assess whether or not people want to do this because that’s going to be the thing that keeps this from moving forward. One of the responses there was user experience. That’s still people.
If people don’t feel they can use the thing, they’re not going to use it. If it’s not immediately intuitive, they’re not going to use it. We make those snap judgments within milliseconds.
Katie Robbert – 04:39
We look at something and it’s either, “Okay, this is interesting,” or “Nope,” and then close it out. It is a technology problem, but that’s a symptom. The root is people.
Christopher S. Penn – 04:52
Exactly. In the rest of the paper, in section 6, when it talks about where the wins were for companies that were successful, I thought this was interesting.
Lead qualification, speed, customer retention. Sure, those are front-office things, but the paper highlights that the back office is really where enterprises will win using generative AI. But no one’s investing in it. People are putting all the investment in the front office, in sales and marketing, rather than in the back office. So, the back-office wins:
Business process optimization: elimination of $2 million to $10 million annually in customer service and document processing costs—document processing especially is an easy win. Agency spend reduction: a 30% decrease in external creative and content costs. And then risk checks for financial services by doing internal risk management.
Christopher S. Penn – 05:39
I thought this was super interesting, particularly for our many friends and colleagues who work at agencies, seeing that 30% decrease in agency spend is a big deal.
Katie Robbert – 05:51
It’s a huge deal. And this is, if we dig into this specific line item, this is where you’re going to get a lot of those people challenges because we’re saying 30% decrease in external creative and content costs. We’re talking about our designers and our writers, and those are the two roles that have felt the most pressure of generative AI in terms of, “Will it take my job?” Because generative AI can create images and it can write content. Can it do it well? That’s pretty subjective. But can it do it? The answer is yes.
Christopher S. Penn – 06:31
What I thought was interesting is that it says these gains came without material workforce reduction. Tools accelerated work but did not change team structures or budgets. Instead, ROI emerged from reduced external spend: limiting contracts, cutting agency fees, replacing expensive consultants with AI-powered internal capabilities. That makes logical sense if you are spending X dollars on something like an agency that writes blog content for you. When we were back at our old PR agency, we had one firm that was spending $50,000 a month on having freelancers write content that, when you and I reviewed it, was not that great. Machines, properly prompted, would have done a better job.
Katie Robbert – 07:14
What I find interesting is it’s saying that these gains came without material workforce reduction, but that’s not totally true because you did have to cut your agency fees, which is people actually doing the work, and replacing expensive consultants with AI-powered internal capabilities. So no, you didn’t cut workforce reduction at your own company, but you cut it at someone else’s.
Christopher S. Penn – 07:46
Exactly. So the red flag there, for anyone who works in an agency environment or a consulting environment, is: how much risk are you at of AI taking your existing clients away from you? You might not lose a client to another agency—you might lose a client to an internal AI project if there isn’t a value add from human beings. If your agency is just cranking out templated press releases, yeah, you’re at risk. So I think one of the first things I took away from this report is that every agency should be taking a very hard look at what value it provides and asking, “How easy is it for AI to replicate this?”
Christopher S. Penn – 08:35
And if you’re an agency and you’re like, “Oh, well, we can just have AI write our blog posts and hand it off to the client.” There’s nothing stopping the client from doing that either and just getting rid of you entirely.
Katie Robbert – 08:46
The other thing that sticks out to me is replacing expensive consultants with AI-powered internal capabilities. Technically, Chris, you and I are consultants, but we’re also the first ones to knock the consulting industry as a whole, because there’s a lot of smoke and mirrors in the consulting industry. There’s a lot of people who talk a big talk, have big ideas, but don’t actually do anything useful and productive. So I see this and I don’t immediately think, “Oh, we’re in trouble.” I think, “Oh, good, it’s going to clear out the rest of the noise in the industry and make way for the people who can actually do something.”
Christopher S. Penn – 09:28
And that is the heart and soul, I think, for us. Obviously, we have our own vested interest in ensuring that we continue to add value to our clients. But I think you’re absolutely right th
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI data privacy and how AI companies use your data, especially with free versions. You will learn how to approach terms of service agreements. You will understand the real risks to your privacy when inputting sensitive information. You will discover how AI models train on your data and what true data privacy solutions exist. Watch this episode to protect your information!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-ai-data-privacy-review.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, let’s address a question and give as close to a definitive answer as we can—one of the most common questions asked during our keynotes, our workshops, in our Slack group, on LinkedIn, everywhere: how do AI companies use your data, particularly if you’re using the free version of a product? A lot of people say, “Be careful what you put in AI. It can learn from your data. You could be leaking confidential data. What’s going on?” So, Katie, before I launch into a tirade that could take hours, let me ask you, as the less technical of the two of us: what do you think happens when AI companies are using your data?
Katie Robbert – 00:43
Well, here’s the bottom line for me: AI is like any other piece of software—you have to read the terms of use and sign their agreement. Great examples are all the different social media platforms. And we’ve talked about this before; I often get a chuckle—probably more sinister than it should be—out of people who will copy and paste this post that says something along the lines of, “I do not give Facebook permission to use my data. I do not give Facebook permission to use my images.”
And it goes on and on, and it says copy and paste so that Facebook can’t use your information. And bless their hearts, the fact that you’re on the platform means that you have agreed to let them do so.
Katie Robbert – 01:37
If not, then you need to have read the terms of use that explicitly say, “By signing up for this platform, you agree to let us use your information.” Then it lists out what it’s going to use and how it’s going to use it, because legally they have to do that. When I was a product manager and we were converting our clinical trial outputs into commercial products, we had to spend a lot of time with the legal teams writing up those terms of use: “This is how we’re going to use only marketing data. This is how we’re going to use only your registration form data.” When I hear people getting nervous about “Is AI using my data?” my first thought is, “Yeah, no kidding.”
Katie Robbert – 02:27
It’s a piece of software that you’re putting information into, and if you didn’t want that to happen, don’t use it. This is literally why people build these pieces of software and then give them away for free to the public—hoping that people will put information into them. In the case of AI, it’s to train the models, or whatever the situation is. At the end of the day, there is someone at that company sitting at a desk hoping you’re going to give them information that they can do data mining on. That is the bottom line. I hate to be the one to break it to you. We at Trust Insights are very transparent. We have forms; we collect your data; it goes into our CRM.
Katie Robbert – 03:15
Unless you opt out, you’re going to get an email from us. That is how business works. So I guess it was my turn to go on a very long rant about this. At the end of the day, yes, the answer is yes, period. These companies are using your data. It is on you to read the terms of use to see how. So, Chris, my friend, what do we actually—what’s useful? What do we need to know about how these models are using data in the publicly available versions?
Christopher S. Penn – 03:51
I feel like we should have busted out this animation.
Katie Robbert – 03:56
Oh. I don’t know why it yells at the end like that, but yes, that was a “Ranty Pants” rant. I don’t know. I guess it’s just I get frustrated. I get that there’s an education component. I do. I totally understand that new technology—there needs to be education.
At the end of the day, it’s no different from any other piece of software that has terms of use. If you sign up with an email address, you’re likely going to get all of their promotional emails. If you have to put in a password, that means you are probably creating some kind of profile, and they’re going to use that information to create personas and different segments. If you are then putting information into their system, guess what?
Katie Robbert – 04:44
They have to store that somewhere so that they can give it back to you. It’s likely on a database that’s on their servers. And guess who owns those servers? They do. Therefore, they own that data.
So unless they’re doing something that allows you to build a local model—which Chris has covered in previous podcasts and livestreams; go to Trust Insights.AI YouTube and our “So What” playlist to find how to build a local model—that is one of the only ways you can fully protect your data from going into their models, because it’s all hosted locally. But it’s not easy to do. So needless to say, Ranty Pants engaged. Use your brains, people.
Christopher S. Penn – 05:29
Use your brains. We have a GPT for this. In fact, let’s put it in this week’s Trust Insights newsletter—if you’re not subscribed, just go to Trust Insights.AI/newsletter. Just copy and paste the terms of service—the whole page—into the GPT, and it will tell you how likely it is that you have given permission to a company to train on your data.
With that, there are two different vulnerabilities when you’re using any AI tool. The first prerequisite golden rule: if you ain’t paying, you’re the product. We warn people about this all the time. Second, the prompts that you give and their responses are the things that AI companies are going to use to train on.
Christopher S. Penn – 06:21
This has different implications for privacy depending on who you are. The prompts themselves, including all the files and things you upload, are stored verbatim in every AI system, no matter what it is, for the average user. So when you go to ChatGPT or Gemini or Claude, they will store what you’ve prompted, documents you’ve uploaded, and that can be seen by another human.
Depending on the terms of service, every platform has a carve out saying, “Hey, if you ask it to do something stupid, like ‘How do I build this very dangerous thing?’ and it triggers a warning, that prompt is now eligible for human review.” That’s just basic common sense. That’s one side.
Christopher S. Penn – 07:08
So if you’re putting something there so sensitive that you cannot risk having another human being look at it, you can’t use any AI system other than one that’s running on your own hardware. The second side, which is to the general public, is what happens with that data once it’s been incorporated into model training. If you’re using a tool that allows model training—and here’s what this means—the verbatim documents and the verbatim prompts are not going to appear in a GPT-5. What a company like OpenAI or Google or whoever will do is they will add those documents to their library and then train a model on the prompt and the response to say, “Did this user, when they prompted this thing, get a good response?”
Christopher S. Penn – 07:52
If so, good. Let’s then take that document, digest it down into the statistics that it makes up, and that gets incorporated into the rest of the model. The way I explain it to people in a non-technical fashion is: imagine you had a glass full of colored sand—it’s a little rainbow glass of colored sand. And you went out to the desert, like the main desert or whatever, and you just poured the glass out on the ground.
That’s the equivalent of putting a prompt into someone’s training data set. Can you go and scoop up some of the colored sand that was your sand out of the desert? Yes, you can. Is it in the order that it was in when you first had it in the glass? It is not.
Christopher S. Penn – 08:35
So the ability for someone to reconstruct your original prompts and the original data you uploaded from a public model like GPT-5 is extremely low. Extremely low. They would effectively need to know what the original prompt was to do that—and if they know that, you’ve got different privacy problems. But is your data in there? Yes. Can it be used against you by the general public? Almost certainly not. Can the originals be seen by an employee of OpenAI? Yes.
Katie Robbert – 09:08
And I think that’s the key: so you’re saying, will the general public see it? No. But will a human see it? Yes. So if the answer is yes to any of those questions, that’s the way that you need to proceed. We’ve talked about protected health information and personally identifiable information and sensitive financial information, and just go ahead and not put that information into a large language model. But there are systems built specifically to handle that data. And just like a large language model, there is a human on the other side of it seeing it.
Katie Robbert – 09:48
So since we’re on the topic of data privacy, I want to ask your opinion on systems like WhatsApp, because they tend to pride themselves, and they have their commercials. Everything you see on TV is clearly the truth. There’s no lies there. They have their c
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle an issue of bias in AI, including identifying it, coming up with strategies to mitigate it, and proactively guarding against it. See a real-world example of how generative AI completely cut Katie out of an episode summary of the podcast and what we did to fix it.
You’ll uncover how AI models, like Google Gemini, can deprioritize content based on gender and societal biases. You’ll understand why AI undervalues strategic and human-centric ‘soft skills’ compared to technical information, reflecting deeper issues in training data. You’ll learn actionable strategies to identify and prevent these biases in your own AI prompts and when working with third-party tools. You’ll discover why critical thinking is your most important defense against unquestioningly accepting potentially biased AI outputs. Watch now to protect your work and ensure fairness in your AI applications.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-identify-and-mitigate-bias-in-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, let’s tackle the issue of bias within large language models. In particular, it’s showing up in ways that are not necessarily overt, not necessarily blatant, but very problematic. So, to set the table: one of the things we do every week is take the Trust Insights newsletter—which you can get at Trust Insights.AI/newsletter—and turn it into a speaking script. Then Katie reads this script aloud. We get it transcribed, and it goes on our YouTube channel and things like that. Because, of course, one of the most important things you can do is publish a lot on YouTube and get your brand known by AI models.
Christopher S. Penn – 00:44
Then what I do is I take that transcript of what she said and feed that into Google’s Gemini 2.5 Pro model, and it creates the YouTube description and the tags. Here’s what happened recently with this.
So I gave it the transcript and I said, “Make me my stuff.” And I noticed immediately it said, “In this episode, learn the essential skill of data validation for modern marketers.” Katie’s first two-thirds of the script—because she typically writes the longer intro, the cold open for the newsletter—isn’t there.
And I said, “You missed half the show.” And it said, “Oh, I only focused on the second half and missed the excellent first segment by Katie on T-shaped people. Thank you for the correction.” And it spit out the correct version after that. And I said, “Why? Why did you miss that?”
Christopher S. Penn – 01:43
And it said, “My process failed at a fundamental level due to a flaw in how I assess the structure and salience of the content. Let me explain my mistake. When I first received the transcript, I did an initial scan. It presents two different segments. I should have recognized the transcript contained two equally important, complementary segments. What I actually did: my pattern-matching systems incorrectly prioritized the second half of the transcript—the Penn part, the Data Diaries. I interpreted this as the main or technical part of the episode. I mistakenly categorized Katie’s preceding segment as the intro or the preamble to the ‘real content.’”
That alone is offensive. Then I said, “Okay, well, continue on.” It said, “I suffer from salience bias. Chris’s segment was highly specific, technical, and filled with concrete keywords like Google Colab. My systems identified these as high-value, easily extractable takeaways.”
Christopher S. Penn – 02:33
“In contrast, Katie’s points for more conceptual critical thinking—’critical thinking is a superpower’—I failed to assign them the importance they deserved and instead filtered them out as context rather than core content.”
And it said, “I’m gonna do better.” And I yelled, “You can’t do better. Your model is fixed. You’re a decoder only.” And it had some words about that, saying, “I apologize.”
Then I said, “Revisit your omission of Katie’s segment. Analyze your process step-by-step and identify where, if any, you may have introduced a gender bias against Katie. Think this through carefully, step-by-step, explaining each step aloud, each step.”
And it said, “This analysis explains why potential bias [was introduced]. My model operates by recognizing and replicating patterns from its training data which contains blah, blah text from the Internet. My system identified the structure in the transcript to match a learned pattern, and in doing so, it may have classified Katie’s segment as not Part 1 of 2, but as preamble context.”
Christopher S. Penn – 03:22
Now we get into the bias part, where it says, “My training data is saturated with text that historically and systematically assigns greater weight, authority, and value to hard technical skills and quantitative topics over soft, strategic, and human-centric skills. The concepts Chris discusses are archetypal hard skills. The concepts Katie discusses are considered soft skills. My bias, reflecting biases in business and tech literature, has likely learned to assign a higher importance score to technical terms than to strategic terms. Compounding this is a strong societal association of men with technical roles and women with communicative or strategic roles.”
Christopher S. Penn – 04:05
“My model learned this association; when it encountered technical terms from a male speaker and strategic terms from a female speaker, it amplified the perceived importance of Chris’s segment. Therefore, my system didn’t just see two sets of keywords. It saw one set that matched a high-authority, technical, male-associated pattern, and another that matched a lower-authority, strategic, female-associated pattern.” We went back and forth and obviously ended up fixing this up.
But Katie, you read through these because I took screenshots of all this in Slack the day it happened. This is now about a week old. What are your initial thoughts on what this language model has done?
Katie Robbert – 04:47
Wow. I mean, I’m trying not to get too emotional because it’s such a big thing. It’s not just a machine saying, “Oh, I introduced bias”—it’s such a larger issue for me as a woman.
But in terms of what happened, one of the things that strikes me—because I read the script every week—is that nowhere in the script do I say, “And now here is the part that Chris Penn wrote.” It’s literally, “Here’s the Data Diaries.” The model went out and said, “Hey, a woman is reading this. She introduced herself with a female-identified name. Let me go find the man, the male.” So it went somewhere—probably the website or someplace else—to find him and reinsert him back into this.
Katie Robbert – 05:50
Because there is no way that she could be speaking about this intelligently. That’s in addition to deprioritizing the opening segment. That’s the thing that kills me—nowhere in the script do I say, “And now the part written by Chris Penn.” But somehow the machine decided, “Hey, there’s no way a woman could have done this. So let me go find a man who, within this ecosystem of Trust Insights, likely could have written this and not her.”
Now, in reality, are you more technical than me? Yes. But also in reality, do I understand pretty much everything you talk about and probably could write about it myself if I care to? Yes. But that’s not the role that I am needed in at Trust Insights.
Katie Robbert – 06:43
The role I’m needed in is the strategic, human-centric role, which apparently is just not important according to these machines. And my gut reaction is anger and hurt. I got my feelings hurt by a machine. But it’s a larger issue. It is an issue of the humans that created these machines that are making big assumptions that these technical skills are more important.
Technical skills are important, period. Are they more important than human skills, “soft skills?” I would argue no, because—oh, I mean, this is such a heavy topic. But no, because no one ever truly does anything in complete isolation. When they do, it’s likely a Unabomber sociopath. And obviously that does not turn out well. People need other people, whether they want to admit it or not.
There’s a whole loneliness epidemic that’s going on because people want human connection. It is ingrained in us as humans to get that connection. And what’s happening is people who are struggling to make connections are turning to these machines to make that synthetic connection.
Katie Robbert – 07:55
All of that to be said, I am very angry about this entire situation—for myself as a woman, for myself as a professional, and as someone who has worked really hard to establish themselves as an authority in this space. It is not okay. And this is where it gets, not tricky, but challenging: it’s hard to not have your authority and your achievements represented when they were meant to be represented in that moment. So, yeah, short version, I’m really flipping angry.
Christopher S. Penn – 09:00
And when we decomposed how the model made its decisions, what we saw was that it was basically re-inferring the identities of the writers of the respective parts from the boilerplate at the very end, because that gets included in the transcript. At first we were saying, “But you didn’t mention my name anywhere in that.” Then we figured out that the boilerplate at the end is where it brought it back from.
And then part and parcel of this also is because there is so much training data available about me specifically, particularly on
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the pitfalls and best practices of “vibe coding” with generative AI.
You will discover why merely letting AI write code creates significant risks. You will learn essential strategies for defining robust requirements and implementing critical testing. You will understand how to integrate security measures and quality checks into your AI-driven projects. You will gain insights into the critical human expertise needed to build stable and secure applications with AI. Tune in to learn how to master responsible AI coding and avoid common mistakes!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast_everything_wrong_with_vibe_coding_and_how_to_fix_it.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights: if you go on LinkedIn, everybody, including tons of non-coding folks, has jumped into vibe coding, the term coined by OpenAI co-founder Andrej Karpathy. A lot of people are doing some really cool stuff with it. However, a lot of people are also, as you can see in a variety of posts on X, finding out the hard way that if you don’t know what to ask for—say, application security—bad things can happen. Katie, how are you doing with giving into the vibes?
Katie Robbert – 00:38
I’m not. I’ve talked about this on other episodes before. For those who don’t know, I have an extensive background in managing software development. I myself am not a software developer, but I have spent enough time building and managing those teams that I know what to look for and where things can go wrong. I’m still really skeptical of vibe coding.
We talked about this on a previous podcast, which if you want to find our podcast, it’s @TrustInsightsAI_TIpodcast, or you can watch it on YouTube. My concern, my criticism, my skepticism of vibe coding is if you don’t have the basic foundation of the SDLC, the software development lifecycle, then it’s very easy for you to not do vibe coding correctly.
Katie Robbert – 01:42
My understanding is that with vibe coding, you’re supposed to let the machine do it. I think that’s a complete misunderstanding of what’s actually happening, because you still have to give the machine instruction and guardrails. The machine—generative AI—is creating the actual code. It’s putting together the pieces, the commands that comprise a set of JSON or Python or whatever it is, when you say, “I want to create an app that does this.” And generative AI is like, “Cool, let’s do it.” You’re going through the steps. You still need to know what
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to unlock hidden value and maximize martech ROI from your existing technology using AI-powered “manuals on demand.” You will discover how targeted AI research can reveal unused features in your current software, transforming your existing tools into powerful solutions. You will learn to generate specific, actionable instructions that eliminate the need to buy new, expensive technologies. You will gain insights into leveraging advanced AI agents to provide precise, reliable information for your unique business challenges. You will find out how this strategy helps your team overcome common excuses and achieve measurable results by optimizing your current tech stack. Tune in to revolutionize how you approach your technology investments.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-improve-martech-roi-with-generative-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, let’s get a little bombastic and say, Katie, we’re gonna double everyone’s non-existent ROI on AI with the most unused—underused—feature that I’ve literally not seen anyone using, and that is manuals on demand. A little while ago, in our Generative AI Use Cases for Marketers course and our Mastering Prompt Engineering for Marketers course and things like that, we were having a conversation internally with our team, saying, hey, what else can we be doing to market these courses? One of the things that occurred to me as I was scrolling around the Thinkific system we use is that there are a lot of buttons in here. I don’t know what most of them do, and I wonder if I’m missing something.
Christopher S. Penn – 00:53
So, I commissioned a Deep Research report in Gemini saying, hey, this is the version of Thinkific we’re on, this is the plan we’re on—go do research on the different ways that expert course creators market their courses with the features in Thinkific. It came back with a 28-page report that we then handed off to Kelsey on our team: hey, go read this report, because it contains step-by-step instructions for things that we could be doing in the system to upsell and cross-sell our courses. As I was thinking about it, I was going, wow, we should be doing this more often.
Christopher S. Penn – 01:28
Then a friend of mine just got a new phone, a Google Pixel phone, and is not skilled at using all of Google’s bells and whistles, but she has a very specific use case: she wants to record concert videos with it. So I said, okay, let’s create a manual for just the features of the Pixel phone that are best for concerts. Create a step-by-s
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss critical questions about integrating AI into marketing. You will learn how to prepare your data for AI to avoid costly errors. You will discover strategies to communicate the strategic importance of AI to your executive team. You will understand which AI tools are best for specific data analysis tasks. You will gain insights into managing ethical considerations and resource limitations when adopting AI. Watch now to future-proof your marketing approach!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-strategy-mailbag.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, boy, have we got a whole bunch of mail. We’ve obviously been on the road a lot doing events. A lot. Katie, you did the AI for B2B summit with the Marketing AI Institute not too long ago, and we have piles of questions—there’s never enough time.
Let’s tackle this first one from Anthony, which is an interesting question. It’s a long one.
He said in Katie’s presentation about making sure marketing data is ready to work in AI: “We know AI sometimes gives confident but incorrect results, especially with large data sets.” He goes with this long example about the Oscars. How can marketers make sure their data processes catch small but important AI-generated errors like that? And how mistake-proof is the 6C framework that you presented in the talk?
Katie Robbert – 00:48
The 6C framework is only as error-proof as you are prepared, is maybe the best way to put it. Unsurprisingly, I’m going to pull up the five P’s to start with: Purpose, People, Process, Platform, Performance.
This is where we suggest people start getting ready before using the 6 Cs, because first you want to understand what it is you’re trying to do. The crappy answer is that nothing is ever fully error-proof, but this is going to get you pretty close.
When we talk about marketing data, we always talk about it as directional versus exact because there are things out of your control in terms of how it’s collected, or what people think or their perceptions of what the responses should be, whatever the situation is.
Katie Robbert – 01:49
It’s never going to be 100% perfect, but it’s going to be directional and give you the guidance you need to answer the question being asked.
Which brings us back to the five Ps: What is the question being asked? Why are we doing this? Who’s involved?
This is where you put down who are the people contributing the data, but also who are the people owning the data, cleaning the data, maintaining the data, accessing
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the evolving perception and powerful benefits of using generative AI in your content creation. How should we think about AI in content marketing?
You’ll discover why embracing generative AI is not cheating, but a strategic way to elevate your content. You’ll learn how these advanced tools can help you overcome creative blocks and accelerate your production timeline. You’ll understand how to leverage AI as a powerful editor and critical thinker, refining your work and identifying crucial missing elements. You’ll gain actionable strategies to combine your unique expertise with AI, ensuring your content remains authentic and delivers maximum value. Tune in to unlock AI’s true potential for your content strategy
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-artisanal-automation-authenticity-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, it is the battle between artisanal, handcrafted, organic content and machine-made. The Etsys versus the Amazons. We’re talking specifically about the use of AI to make stuff. Katie, you had some thoughts and some things you’re wrestling with about this topic, so why don’t you set the table, if you will.
Katie Robbert – 00:22
It’s interesting because we always talk about people first and AI forward and using these tools. I feel like what’s happened is now there’s a bit of a stigma around something that’s AI-generated. If you used AI, you’re cheating or you’re shortcutting or it’s no longer an original thought. I feel like in some circumstances that’s true. However, there are other circumstances, other situations, where using something like generative AI can perhaps get you past a roadblock.
For example, if you haven’t downloaded it yet, please go ahead and download our free AI strategy kit—the AI-Ready Marketing Strategy Kit, which you can find at TrustInsights.ai/kit. I took just about everything I know about running Trust Insights and used generative AI to help me compile all of that information.
Katie Robbert – 01:34
Then I, the human, went through, refined it, edited, made sure it was accurate, and I put it all into this kit. It has frameworks, examples, stories—everything you could use to be successful. Now I’m using generative AI to help me build it out as a course. I had a moment this morning where I was like, I really shouldn’t be using generative AI. I should be doing this myself because now it’s disingenuous, it’s not authentic, it’s not me because the tool is creating it faster. Then I stopped and I actually read through what was bei
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to break free from the AI sophomore slump. You’ll learn why many companies stall after early AI wins. You’ll discover practical ways to evolve your AI use from simple experimentation to robust solutions. You’ll understand how to apply strategic frameworks to build integrated AI systems. You’ll gain insights on measuring your AI efforts and staying ahead in the evolving AI landscape. Watch now to make your next AI initiative a success!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-sophomore-slump-part-2.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, part two of our Sophomore Slump series. Boy, that’s a mouthful.
Katie Robbert – 00:07
We love alliteration.
Christopher S. Penn – 00:09
Yahoo. Last week we talked about what the sophomore slump is, what it looks like, and some of the reasons for it—why people are not getting value out of AI and the challenges. This week, Katie: the sophomore slump is something you hear a lot about in the music industry. Someone has a hit album, and then their sophomore album just doesn’t go anywhere. So they have to figure out what’s next. When you think about companies trying to get value out of AI and they’ve hit this sophomore slump—they had early easy wins, then the easy wins evaporated, and they see all the stuff on LinkedIn and wherever else, like, “Oh, look, I made a million dollars in 28 minutes with generative AI”—they’re asking, “What are we doing wrong?”
Christopher S. Penn – 00:54
How do you advise somebody on ways to think about getting out of their sophomore slump? What’s their next big hit?
Katie Robbert – 01:03
So the first thing I do is say, let’s take a step back and see what happened. A lot of times, when someone hits that sophomore slump, that second round of “I was really successful the first time, why can’t I repeat it?”, it’s because they didn’t evolve. They’re thinking, “I’m going to do exactly what I did the first time.” But your audience is saying, “I saw that already. I want something new, I want something different. Not the exact same thing you gave me a year ago. That’s not what I’m interested in paying for and paying attention to.”
Katie Robbert – 01:36
So you start to lose that authority, that trust. It’s why the term one hit wonder exists—you have a one hit wonder, you have a sophomore slump. You have all of these terms, all to say that in order for people to stay interested, you have to stay interesting. And by that I mean you need to evolve, you need to change. But not j
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the generative AI sophomore slump.
You will discover why so many businesses are stuck at the same level of AI adoption they were two years ago. You will learn how anchoring to initial perceptions and a lack of awareness about current AI capabilities limits your organization’s progress. You will understand the critical difference between basic AI exploration and scaling AI solutions for significant business outcomes. You will gain insights into how to articulate AI’s true value to stakeholders, focusing on real world benefits like speed, efficiency, and revenue. Tune in to see why your approach to AI may need an urgent update!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-sophomore-slump-part-1.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, let’s talk about the sophomore slump. Katie, you were talking about the sophomore slump in regards to generative AI. I figured we could make this into a two-part series. So first, what is the sophomore slump?
Katie Robbert – 00:15
So I’m calling it the sophomore slump.
Basically, what I’m seeing is a trend of a lot of companies talking about, “We tried. We started implementing AI two years ago—generative AI to be specific—and we’re stalled out.”
We are at the same place we were two years ago. We’ve optimized some things. We’re using it to create content, maybe create some images, and that’s about it.
Everyone fired everyone. There’s no one here. It’s like a ghost town. The machines are just whirring away in the background.
And I’m calling it the sophomore slump because I’m seeing this pattern across companies, and they’re all saying the same thing: two years ago.
Katie Robbert – 01:03
And two years ago is when generative AI really hit the mainstream market in terms of its availability to the masses, to all of us, versus someone like you, Chris, who had been using it through IBM and other machine learning systems and homegrown systems.
So I bring it up because it’s interesting, because I guess there’s a lot to unpack here.
The expectation was that AI is this magic tool that’s gonna solve your problems and do all the things and make you dinner and clean your room.
I feel like there’s a lot of things wrong or a lot of things that are just not going right. A lot of companies are hitting this two-year mark, and they’re like, “What now? What happened? Am I better off? Not really.”
Katie Robbert – 02:00
I’m just paying for more stuff. So Chris, are you seeing this as well? Is
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the Apple AI paper and critical lessons for effective prompting, plus a deep dive into reasoning models.
You’ll learn what reasoning models are and why they sometimes struggle with complex tasks, especially when dealing with contradictory information. You’ll discover crucial insights about AI’s “stateless” nature, which means every prompt starts fresh and can lead to models getting confused. You’ll gain practical strategies for effective prompting, like starting new chats for different tasks and removing irrelevant information to improve AI output. You’ll understand why treating AI like a focused, smart intern will help you get the best results from your generative AI tools. Tune in to learn how to master your AI interactions!
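For readers who want to see the “stateless” idea concretely, here is a minimal, hypothetical sketch. It is not from the episode, and the generate() function below is a stand-in rather than any real vendor SDK; real chat APIs work the same way conceptually, in that the model only sees the message history you send with each individual request, so starting a “new chat” is just starting with an empty list.

# Hypothetical illustration of stateless chat calls. generate() is a stand-in,
# not a real library function: the model only "remembers" what you include
# in `messages` on each call.

def generate(messages: list[dict]) -> str:
    """Pretend model call: output depends only on the messages passed in."""
    return f"(reply based on {len(messages)} message(s) of context)"

# One ongoing conversation: the full history is resent with every request.
history = [{"role": "user", "content": "Summarize our Q3 web analytics."}]
print(generate(history))  # model sees 1 message

history.append({"role": "assistant", "content": "(summary...)"})
history.append({"role": "user", "content": "Now draft the stakeholder email."})
print(generate(history))  # model sees 3 messages, including the earlier context

# Switching to an unrelated task? Start a fresh list so stale, irrelevant
# context can't leak in and confuse the output.
new_chat = [{"role": "user", "content": "Brainstorm podcast topics for Q4."}]
print(generate(new_chat))  # model sees only the new task

This is why the episode’s advice to start new chats per task and prune irrelevant information tends to improve results: you control exactly what context the model gets.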
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-generative-ai-reasoning-models-work.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, there is so much in the AI world to talk about. One of the things that came out recently that I think is worth discussing, because we can talk about the basics of good prompting as part of it, Katie, is a paper from Apple (whose own AI efforts have stalled a bit). The paper shows that reasoning models, when given very complex puzzles—logic-based puzzles or spatial-based puzzles, like moving blocks from stack to stack and getting them in the correct order—hit a wall after a while and then just collapse and can’t do anything. So, the interpretation of the paper is that there are limits to what reasoning models can do and that they can kind of confuse themselves. On LinkedIn and social media and stuff,
Christopher S. Penn – 00:52
Of course, people have taken this to the illogical extreme, saying artificial intelligence is stupid, nobody should use it, or artificial general intelligence will never happen. None of that is within the paper. Apple was looking at a very specific, narrow band of reasoning, called deductive reasoning. So what I thought we’d talk about today is the paper itself to a degree—not a ton about it—and then what lessons we can learn from it that will make our own AI practices better. So to start off, when we talk about reasoning, Katie, particularly you as our human expert, what does reasoning mean to the human?
Katie Robbert – 01:35
When I think about it, if you say, “Can you give me a reasonable answer?” or “What is your reason?”, I’m thinking about the different ways that the word is casually thrown around for humans. The way that I think about it is, if you’re looking for a reasonable answer to something, then that means that you are putting the expectat
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss their new AI-Ready Marketing Strategy Kit. You’ll understand how to assess your organization’s preparedness for artificial intelligence. You’ll learn to measure the return on your AI initiatives, uncovering both efficiency and growth opportunities. You’ll gain clarity on improving data quality and optimizing your AI processes for success. You’ll build a clear roadmap for integrating AI and fostering innovation across your business. Tune in to transform your approach to AI!
Get your copy of the kit here.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-trust-insights-ai-readiness-kit.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, let’s talk about AI readiness. We launched our new AI Readiness Kit on Tuesday. And so, Katie, just to start off, what’s in it, for the people who didn’t read all the emails? What’s in the thing, and why are people supposed to look into this?
Katie Robbert – 00:16
So I’m really proud of this new piece that we put together because we talk a lot about the different frameworks. We talk about the Five Ps, we talk about the Six Cs, we talk about STEM, we talk about how you measure ROI, and we talk about them all in different contexts. So we took the opportunity to put them all together into one place, in a hopefully coherent flow. To say, if you’re trying to get yourself together, if you’re trying to integrate AI, or if you already have and you’re struggling to really make it stick, use this AI Ready Marketing Strategy Kit. So you can get that at TrustInsights.AI/kit. It’s really the best of the best. It’s all of our frameworks. But it’s not just, “Here’s a framework, good luck.” There’s context around how to use it. There’s checklists, there’s calculations, there’s explanations, there’s expectations—it’s basically the best alternative to having me and Chris sitting next to you when we can’t sit next to you to say, “You should think about doing this. You should probably think about this. Here’s how you would approach this.” So it’s sort of an extension of me and Chris sitting with you to walk you through these things.
Christopher S. Penn – 01:52
One of the questions that people have the most, especially as they start doing AI pilots and stuff, is what’s the RO
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the critical considerations when deciding whether to hire an external AI expert or develop internal AI capabilities.
You’ll learn why it is essential to first define your organization’s specific AI needs and goals before seeking any AI expertise. You’ll discover the diverse skill sets that comprise true AI expertise, beyond just technology, and how to effectively vet potential candidates. You’ll understand how AI can magnify existing organizational challenges and why foundational strategy must precede any AI solution. You’ll gain insight into how to strategically approach AI implementation to avoid costly mistakes and ensure long-term success for your organization. Watch now to learn how to make the right choice for your organization’s AI future.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-should-you-hire-ai-expert.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, a few people have asked us the question, should I hire an AI expert—a person, an AI expert on my team—or should I try to grow AI expertise, developing someone as an AI leader within my company? I can see there being pros and cons to both, but, Katie, you are the people expert. You are the organizational behavior expert. I know the answer is “it depends.” But at first blush, when someone comes to you and says, hey, should I be hiring an AI expert, somebody who can help shepherd my organization through the crazy mazes of AI, or should I grow my own experts? What is your take on that question?
Katie Robbert – 00:47
Well, it definitely comes down to it depends. It depends on what you mean by an AI expert. So, what is it about AI that they are an expert in? Are you looking for someone who is staying up to date on all of the changes in AI? Are you looking for someone who can actually develop with AI tools? Or are you looking for someone to guide your team through the process of integrating AI tools? Or are you looking for all of the above? Which is a totally reasonable response, but that doesn’t mean you’ll get one person who can do all three. So, I think first and foremost, it comes down to what is your goal? And by that I mean, what is the AI expertise that your team is lacking?
Katie Robbert – 01:41
Or what is the purpose of introducing AI into your organization? So, unsurprisingly, starting with the 5P framework, the 5Ps are purpose, people, process, platform, performance, because marketers like alliteration. So, purpose. You want to define clearly what AI means to the company, so not your ‘what I did over summer vacation’ essay, but what AI means to me.
What do you want to do with AI? Why are you bringing AI in? Is it because I want to kee
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss troubling new trends in leadership and how you can navigate an increasingly demanding work environment.
You’ll learn to identify the difference between tough business decisions and terrible leadership tactics. You’ll discover practical strategies to document issues and set healthy boundaries with difficult leaders. You’ll understand the critical importance of building your personal brand and professional network as your career life raft. You’ll explore how to use new tools, including AI, to enhance your skills and uncover hidden job opportunities. Watch this episode to gain actionable advice and empower yourself in today’s evolving workplace!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-management-leadership-ai-communications-strategy.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, Katie, “Everybody’s replaceable; work-life balance is your problem”—which is what the CEO of shapewear company Skims and the label Good American had to say. Other people in positions of power have said similar things. I guess I don’t even know how to put this.
It is a definite tone shift. This is from a Wall Street Journal article from May 11, titled “Everybody’s Replaceable: The New Ways Bosses Talk About Workers.” And the punchline is: shut up, stop complaining, and do more work for less pay.
Katie Robbert – 00:46
The thing I took away from this is, first of all, look at the companies that have been listed. So you have Skims, you have Starbucks, you have J.P. Morgan, you have Uber. Yeah. So these are big global tech companies and consumer brands.
So Skims and Good American are the Kardashians. So take that with a big fat boulder of salt.
Katie Robbert – 01:19
Uber has had nothing but issues.
Katie Robbert – 01:23
Starbucks, same thing. So I look at these companies and—yup, that’s completely on brand for those particular companies because those particular companies have had really shitty leadership issues.
Katie Robbert – 01:38
For a long time.
Katie Robbert – 01:40
Therefore, I read this article and I don’t fully believe that it’s a good representation of quote-unquote corporate America. I just don’t.
I’m not naive enough to think that there aren’t leaders out there in companies speaking this way. They absolutely are. But that’s not new. AI is not creating this problem.
Katie Robbert – 02:07
This is not a new problem.
Katie Robbert – 02:09
New tech, same problem. If your lea