In-Ear Insights from Trust Insights
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss essential sales frameworks and why they often fail today.
You will understand why traditional sales methods like Challenger and SPIN selling struggle with modern complex purchases. You will learn how to shift your sales focus from rigid, linear frameworks to the actual non-linear journey of the customer. You will discover how to use ideal customer profiles and strong documentation to build crucial trust and qualify better prospects. You will explore methods for leveraging artificial intelligence to objectively evaluate sales opportunities and improve your go/no-go decisions. Watch this episode to revolutionize your approach to high-stakes complex sales.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-sales-frameworks-basics-and-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
**Christopher S. Penn – 00:00**
In this week’s In Ear Insights. Even though AI is everywhere and is threatening to eat everything and stuff like that, the reality is that people still largely buy from people. And there are certainly things that AI does that can make that process faster and easier. But today I thought it might be good to review some of the basic selling frameworks, particularly for companies like ours, but in general, to help with complex sales.
One of the things that—and Katie, I’d like your take on this—one of the things that people do most wrong in sales at the very outset is they segment out B2B versus B2C when they really should be segmenting out simple sales versus complex sales. A simple sale is something like a pack of gum: there are techniques for increasing the number of sales, but it’s a transaction.
**Christopher S. Penn – 00:48**
You walk into the store, you put down your money, you walk out with your pack of gum as opposed to a complex sale. Things like B2B SaaS software, some versions of it, or consulting services, or buying a house or a college education where there’s a lot of stakeholders, a lot of negotiation, and things like that. So when you think about selling, particularly as the CEO of Trust Insights who wants to sell more stuff, what do you think about advising people on how to sell better?
**Katie Robbert – 01:19**
Well, I should probably start with the disclaimer that I am not a trained salesperson. I happen to be very good with people and reading the situation and helping understand the pain points and needs pretty quickly. So that’s what I’ve always personally relied on in terms of how to sell things. And that’s not something that I can easily teach. So to your point, there needs to be some kind of a framework.
I disagree with your opening statement that the biggest problem people have with selling or the biggest mistake that people make is the segmentation. I agree with simple versus complex, but I do think that there is something to be said about B2B versus B2C. You really have to start somewhere.
**Katie Robbert – 02:08**
And I think perhaps maybe if I back up even more, the advice that I would give is: Do you really know who you’re selling to? We’re all eager to close more business and make sure that the revenue numbers are going up and not down and that the pipeline is full.
The way to do that—and again, I’m not a trained salesperson, so this is my approach—is I first want to make sure I’m super clear on our ideal customer profile, what their pain points are, and that we’re super clear on our own messaging so that we know that the services that we offer are matching the pain points of the customers that we want to have in our pipeline. When we started Trust Insights, we didn’t have that.
**Katie Robbert – 02:59**
We had a good sense of what we could do, what we were capable of, but at the same time we were winging it. I think that over the past eight or so years we’ve learned a lot about how to focus and refine. It’s a crowded marketplace for anyone these days. Anyone who says they don’t really have competitors isn’t looking hard enough.
But the competitors aren’t traditional competitors anymore. Competitors are time, competitors are resources, competitors are budget. Those are the reasons why you’re going to lose business. So if you have a sales team that’s trying to bring in more business, you need to make sure that you’re super hyper focused. So the long-winded way of saying the first place I would start is: Are you very specifically clear on who your ideal customer is?
**Katie Robbert – 03:53**
And are there different versions of that? Do they buy different things based on the different services that you offer? So as a non-salesperson who is forced to do sales, that’s where I would start.
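A quick aside for readers who want to experiment with the go/no-go idea from this episode’s summary: the sketch below asks a large language model to score an inbound opportunity against a written ideal customer profile. It assumes the official OpenAI Python SDK with an API key in the environment, and the ICP text, opportunity notes, model name, and 1-to-10 rubric are all illustrative, not Trust Insights’ actual process.

```python
# Minimal sketch: ask a large language model to score an inbound opportunity
# against a written ideal customer profile (ICP). The ICP text, opportunity
# notes, model name, and 1-10 rubric are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ICP = """Mid-market B2B services firm, 50-500 employees, has first-party
marketing data, needs analytics and AI strategy help, budget owner involved."""

opportunity = """Inbound lead: 120-person SaaS company, VP of Marketing,
asking about an AI readiness assessment, wants to start next quarter."""

prompt = f"""You are a sales qualification assistant.
Ideal customer profile:
{ICP}

Opportunity notes:
{opportunity}

Score the fit from 1 (poor) to 10 (excellent), then recommend GO or NO-GO
with two sentences of reasoning."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap in whatever your team uses
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```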
**Christopher S. Penn – 04:04**
That’s a good place to start. One of the things, and there’s a whole industry built around this part of selling, is all these different selling frameworks. You will hear some of them: SPIN Selling, Solution Selling, Insight Selling, Challenger, Sandler, Hopkins, etc. It’s probably not a bad idea to at least review them in aggregate because they’re all very similar. What differentiates them are specific tactics or specific types of emphasis. But they all follow the same Kennedy sales principles from the 1960s: identify the problem, agitate the customer in some way so that they realize the problem is a bigger problem than they thought, provide a solution of some kind, and then tell them, “Here’s how we solve this problem. Buy our stuff.” That’s the basic outline.
**Christopher S. Penn – 05:05**
Each of the systems has its own thin slice on how we do that better. So let’s do a very quick tour, and I’m going to be showing some stuff. If you’re listening to this, you can of course catch us on the Trust Insights YouTube channel. Go to TrustInsights.ai/youtube. The first one is Solution Selling. This is from the 1990s. This is a very popular system. One: again, look for people who actually have a problem you can fix. Two: get to know the audience. Three: the discovery process, where you spend a lot of time consulting and asking the person what their challenges are.
**Christopher S. Penn – 05:48**
Figure out how you can add value to that, find an internal champion that can help get you inside the organization, and then build the closing win. So that’s Solution Selling. This one has been in use for almost 40 years in places, and for complex sales, it is highly effective.
**Katie Robbert – 06:10**
Okay. What’s interesting, though, is to your point, all the frameworks are roughly the same: give people what they need, bottom line. If you want to break it down into 1, 2, 3, 4, 5, 6 different steps because that’s easier for people to wrap their brains around, that’s totally fine. But really, it comes down to: What problems do they have? Can you solve the problem? Help them solve the problem, period. I feel, and I know we’re going to go through the other frameworks, so I’ll save my rant for afterwards.
**Christopher S. Penn – 06:47**
SPIN Selling, again, is very similar to the Kennedy system: Understand the situation, reveal the pain points, create urgency for change, and then lead the buyers to conclude on their own. This one spends less time on identifying the customers themselves. It assumes that your prospecting and your lead flow engine is separate and working. It is much more focused on the sales process itself.
If you think about selling, you have business development representatives or sales development representatives (SDRs) up front who are smiling and dialing, calling for appointments and things like that, trying to fill a pipeline up front. Then you have account executives and actual sales folks who would be taking those warmed-up leads and working them. SPIN Selling very much focuses on the latter half of that particular process. The next one is Insight Selling.
**Christopher S. Penn – 07:44**
It is differentiated by the fact that it tries to make the sales process much more granular: coaching the customer, communicating value, collaborating, accelerating commitment, implementing by cultivating the relationship, and changing the insight.
The big thing about Insight Selling is that instead of very long-winded conversations and lots of meetings and calls, the Insight Selling process tries to focus on how you can take the sales process and turn it into bite-sized chunks for today’s short attention span audience. So you set up sales automation systems like Salesforce or marketing automation, very much targeted towards the sales process, to target each of these areas and ask: what unusual insight can I offer a customer in this email or this text message that essentially keeps them engaged?
**Christopher S. Penn – 08:40**
So it’s very much a sales engagement system, which I think.
**Katie Robbert – 08:45**
Makes sense because on a previous episode we were talking about client services, and if your account managers or whoever’s responsible for that relationship is saying only “just following up” and not giving any more context, I would ignore that. Following up on what? You have to remind me because now you’ve given me more work to do. I like this version of Insight Selling where it’s, “Hey, I know we haven’t chatted in a while, here’s something new, here’s something interesting that’s pertaining to you specifically.” It’s more work on the sales side, which quite honestly, it should be. Exactly.
**Christopher S. Penn – 09:25**
Insight Selling benefits most from a shop that is data-driven because you have to generate new insights, you have to provide things that are surprising, diff
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the essentials of excellent account management and how AI changes the game.
You will discover how to transition from simply helping clients to proactively taking tasks off their to-do list. You will learn the exact communication strategies necessary to manage expectations and ensure timely responses that build client trust. You will understand the four essential executive functions you must retain to prevent artificial intelligence from replacing your critical role. You will grasp how to perform essential quality checks on deliverables even without possessing deep technical expertise in the subject matter. Watch now to elevate your account management skills and secure your position in the future of consulting!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-account-management-in-age-of-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
**Christopher S. Penn – 00:00**
In this week’s In Ear Insights, Trust Insights is a consulting firm. We obviously do consulting. We have clients, we have accounts, and therefore account management. Katie, you and I worked for a few years together at a PR firm before we started Trust Insights and managed a team of folks. I should clarify with an asterisk: you managed a team of people then to keep those accounts running, keep customers and clients happy, and try to keep team members happy. Let’s talk about what are the basics of good account management—not just for keeping clients happy, but also keeping your team happy as well, to the extent that you can, but keeping stuff on the rails.
**Katie Robbert – 00:51**
The biggest thing from my experience, because I’ve been on both sides of it—well, I should say there are three sides of it. There’s the account manager, there’s the person who manages the account manager, and then there’s the account itself, the client. I’ve been on all three sides of it, and I currently sit on the side of managing the account manager who manages the accounts. If we talk about the account manager, that person is trying to keep things on the rails. They’re trying to keep things moving forward. Typically they are the ones who, if they choose, can have the most power, or, if they don’t, the least power.
**Katie Robbert – 01:38**
By that I mean, a good account manager has their hands in everything, is listening to every conversation between the stakeholders or the principals and the client, is really ingesting the information and understanding, “Okay, this is what was asked for. This is what we’re working on. This is discussed.” Whatever it is they don’t understand, they take the initiative to find out what it means.
If you’re working on a more technical client and you’re talking about GDELT and code bases and databases and whatever, and you’re like, “I’m just here to set up meetings,” then you’re not doing yourself any sort of favors.
**Katie Robbert – 02:21**
The expectation of the account manager is that they would say, “All right, I don’t understand everything that was discussed, but let me take the notes, do a little research, and at least get the basics of what’s happening so that I, as the person acting on behalf of the consulting agency, can then have conversations without having to loop in the principal every single time, and the principal can focus on doing the work.” The biggest success metric that I look for in an account manager is their ability to be proactive. One of the things that frustrates me, as someone who manages and has managed larger teams, is someone just waiting around to be told what to do. That puts the burden back on the manager to constantly be giving you a to-do list.
**Katie Robbert – 03:13**
At the level of a manager, an account manager, you should be able to proactively come up with your own list. Those are just some of the things off the top of my head, Chris. But, to be fair, you managed the team at the agency alongside me, and you were also part of the team that was executing the work. And you relied heavily on account managers to tell you what the heck is happening. So what do you look for in account manager skills?
**Christopher S. Penn – 03:49**
It goes back to something that our friend Mitch Joel often says, which is, “Don’t be another thing on the client’s to-do list,” because nobody wants that. Nobody wants more on their to-do list. Ideally, a good account manager is constantly fishing with the client to say, “What else can we take off your to-do list?”
**Katie Robbert – 04:09**
Right.
**Christopher S. Penn – 04:09**
How can we make your list shorter rather than longer? That determines—no, there’s that and one other thing, but that’s one of the key things that determines client success—is to say, “Look, here’s what we got done.” Because the more you go fishing and the more stuff that you take away from the client, the happier they are. But also, when it comes time for renewal, the more you can trot out the list and look at all the things we’re doing, look at all the things that we did—maybe that were just slightly out of scope, but within our capabilities—that we improved your life, we improved things, we got done everything we said we were going to get done.
**Christopher S. Penn – 04:47**
And maybe we demonstrated capabilities so that when renewal time comes, you can say, “Hey, maybe we should increase the retainer because we demonstrated some proof of concept success in these other areas that we also know are really challenging.” Management consultant David Maister talks about this a lot in terms of growing retainers. He says, “I will show up at my own expense to your annual planning meeting. I will sit in the back and I will not speak until spoken to, but I am there as a resource for you to ask me questions as an expert.” And he said 10 times out of 10, he walked away with a bigger retainer just by sitting and listening, to your point, knowing what’s going on with the client, and also going fishing.
**Christopher S. Penn – 05:33**
The other thing—and this is both an account management thing and a sales thing—is, and this is something that I suck at, which is why I don’t work in account management, is very timely responses. Somebody—the client—lobs a tennis ball over the net and you immediately return. Even if you have nothing to say, you can just say, “Hey, got it. We’re here. We’re paying attention to your needs. We are responsive.” And those two things, being able to go fishing and being highly responsive, to me, are success indicators for a good account manager.
**Katie Robbert – 06:12**
I definitely agree with the highly responsive. One of my expectations for any of the teams, whether it’s now or at the agency, was if a client sends an email, just acknowledge it. Because there is nothing worse than the anxiety of, “Do I follow up? Do I sit and wait?” We deal with that sort of thing on the sales side—people will ghost us all the time. That’s just part of sales. And it’s a fine line of follow-up versus stalking. We want to be proactively following up, but we also don’t want to be harassing and stalking people because that then, to your first point, goes to you being one more thing on their list to follow up with.
**Katie Robbert – 06:57**
Let’s say a client sends over a list of questions and we don’t have time to get to it. One of the things that we used to do with the agency was, “Okay, let’s acknowledge it and then give a time frame.” We saw your email. We’ll get back to you within the next three business days just to set some kind of an expectation. Then, obviously, we would have a conversation with whoever’s responsible for doing the work first: “Is that a reasonable timeline?” But all of that was done by the account manager. All of that was coordinated by them. And that’s such an important role. One of the things that people get wrong about a role like an account manager or a project manager is that they’re just admins, and they’re really not.
**Katie Robbert – 07:41**
They’re really the person who keeps it all together. To keep going with that example, so the client says, “I have a bunch of things.” The account manager should be the first person to see that and acknowledge it. “We got it, we will respond to you.” And then whoever is on our side responsible for answering: “Okay, Chris, we have this list of questions. You said it could be done within 3 days. Let me go ahead and proactively block time for you and make sure that you can get that done so that I can then take that information and get back to the client, hopefully before the timeline is up, so that we keep them really happy.” What is it? Under promise, over deliver?
**Katie Robbert – 08:27**
I was about to say the reverse, and that would have been terrible. It’s really, from my perspective, just always staying on top of things. I have a question because this is something I feel, especially in a smaller company, we struggle with in terms of role expectations. Do you expect an account manager to know as much about what’s happening as you, the expert and individual contributor, do?
**Christopher S. Penn – 09:00**
Here’s how I would frame that. We’ll use blenders.
**Katie Robbert – 09:05**
Sure. We love blenders.
**Christopher S. Penn – 09:07**
We love blenders. I would not expect a sous chef in a kitchen to understand how the electromagnets and circuits that make the blender operate actually work. I don’t expect them to know the internals of a blender. I do expect them to know what goes in a blender, what should not go in a blender, and what it should look
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss effective reporting and creating reports that tell a story and drive action using user stories and frameworks.
You will understand why data dumping onto a stakeholder’s desk fails and how to gather precise reporting requirements immediately. You will discover powerful frameworks, including the SAINT model, that help you move from basic analysis to crucial, actionable decisions. You will gain strategies for anticipating executive questions and delivering a clear, consistent narrative throughout your entire report. You will explore innovative ways to use artificial intelligence as a thought partner to refine your analysis and structure perfect reports. Stop wasting time and start creating reports that generate real business results. Watch now!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-create-effective-reporting.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, it’s almost redundant at this point to say it’s reporting season, but as we hit quarterly ends, yearly ends, things like that, people become reflective and say, “Hey, let’s do some reports.”
One of the problems that we see the most with reporting—and I was guilty of this for the majority of my career, particularly the first half—is when you’re not confident about your reporting skills, what do you do? You back the truck up and you pour data all over somebody’s desk and you hope that it overwhelms them so that they don’t ask you any questions, which is the worst possible way to do reporting.
So, Katie, as a senior executive, as a leader, when someone delivers reporting to you, what do you get and what do you want to get?
Katie Robbert – 00:51
Well, I would start to say reports, like the ones that you were generating, hate to see me coming. Because guess what I do, Chris, I ask a bazillion questions, starting with so what? And I think that’s really the key.
As the CEO of Trust Insights, I need a report that tells me exactly what the insights and actions are so that I can do those things. And that is a user story. A user story is a simple three-part sentence: As a [persona], I want [something], so that [outcome]. If someone is giving me a report and they haven’t asked me for a user story, that’s probably step one. So, Chris, if I say, “All right, if you can pull the monthly metrics, Chris, and put it into a report, I would appreciate it.”
Katie Robbert – 01:47
If I haven’t given you a user story, you need to ask me what it is, because that’s the “so what?” Why are we doing this in the first place? We have no shortage of data points. We have no shortage of information about what happened, maybe even why it happened. And that’s a problem because it doesn’t tell a story.
What happens is, if you just give me all of that data back, I don’t know what to do with it. And that’s on me, and that’s on you. And so, together, one of us needs to make sure there is a user story. Ideally, I would be providing it, but if I don’t provide it, your first step is to ask for it. That is Step zero. What is the user story? Why am I pulling this report in the first place?
Katie Robbert – 02:33
What is it that you, the stakeholder, expect to get out of this report? What is it you need to do with this information? That is Step zero, before you even start looking at data.
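For readers who want to make “step zero” mechanical, here is a minimal sketch in Python. The UserStory class, its field names, and the refusal to start a report without a story are illustrative assumptions, not an existing Trust Insights tool; the idea of putting the user story at the top of the report comes from later in this same conversation.

```python
# Minimal sketch of "step zero": capture a user story before any reporting
# work starts. Class and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserStory:
    persona: str   # "As a ..."
    want: str      # "... I want ..."
    so_that: str   # "... so that ..."

    def sentence(self) -> str:
        return f"As {self.persona}, I want {self.want}, so that {self.so_that}."


def start_report(story: Optional[UserStory]) -> str:
    # Refuse to proceed without a user story, per the advice in the episode.
    if story is None:
        raise ValueError("Step zero: ask the stakeholder for a user story first.")
    # Put the user story at the top so the reader is reminded why the report exists.
    return f"USER STORY: {story.sentence()}\n\n[report body goes here]"


story = UserStory(
    persona="the CEO of Trust Insights",
    want="a monthly report that tells me exactly what the insights and actions are",
    so_that="I can act on them quickly",
)
print(start_report(story))
```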
Christopher S. Penn – 02:44
I love user stories, and I love them, A, for the simplicity, but B, because of that warm and comforting feeling of having covered your ass.
Because if I ask you for a user story and you give me one, I build a report for that. Then you come back and say, “But this is this.”
Katie Robbert – 03:03
This.
Christopher S. Penn – 03:03
I’m like, “You gave me the user story, you signed off on the user story. And what you’re asking for is not in the user story.” So I think we need to recalibrate and have you give me maybe some new user stories so you can get what you want. I’m not going to tell you to go F off—not to your face. But I’m also going to push back and say, “This wasn’t in the user story.” Because the reason I love user stories is because they’re the simplest but most effective form of requirements gathering.
Katie Robbert – 03:36
I would agree with that. When I was a product manager, user stories saved my sanity because my job was to get all of my stakeholders aligned on a single idea. And I’ve told this before, I’d literally go to their office and camp out and get a physical signature on a piece of paper saying, “Yes, this is exactly what you’re agreeing to.”
Then, when we would sit in the meeting and the development team or the design team would present the thing, the second somebody would be like, “Well, wait,” I would just hold up the piece of paper and point to their signature. It’s such an effective way to get things done.
Katie Robbert – 04:23
Because what happens if you don’t have a user story to start, or any kind of requirements to start, when you’re doing reporting is exactly what you’re talking about. You end up with spreadsheets of data that doesn’t really mean anything. You end up with 60-slide PowerPoint reports with all of these visuals, and every single slide has at least four or five charts on it and some kind of a label. But there’s no story. There’s no, “Why am I looking at this?”
When I think about reporting, the very first thing I want to see is—and I would say even go ahead and do this, this is sort of the pro tip—
Katie Robbert – 05:00
Whatever the user story was that I gave you, put that right at the top of the report so that when I look at it, I go, “Oh, that’s what I was looking for. Great.” Because chances are, the second you walk away, I’ve already forgotten the conversation—not because it’s not important, but because a million other things have crept up.
Now, when you come back to me and say, “This is what I’m delivering,” this is what I need to be reminded of. A lot of stakeholders, people in general, we’re all forgetful. Over-communicate what it is that we’re doing here in the first place. And no one’s going to be mad at that. It’s like, “Oh, now I don’t have to ask questions.” The second thing I look for is sort of that big “So what?”
Katie Robbert – 05:45
We call it an executive summary. You can call it the big takeaway, whatever it is. At the very top of the report, I personally look for, “What is the big thing I need to know?” Is everything great? That’s all I need to know. Is everything terrible? I definitely need to know that. Do I need to take six big actions? Great, let me know that. Or, it’s all business as usual. Just give me the 30-second, “Here are the three bullet points that you need to know.” If you have no other time to read this report, that should be the summary at the top. I am going to, even if it’s not right then, dig into the rest of the report. But I may only in that moment be able to look at the summary.
Katie Robbert – 06:33
When I see these big slide decks that people present to their executive team or to their board or to whoever they report to, it’s such a missed opportunity to not have the key takeaways right there up front. If you’re asking someone to scroll, scroll, get through it—it’s all the way at the end—they’re not going to do it, and they’re going to start picking apart everything. Even if you’ve done the work to say, “But I already summarized all of that,” it’s not right there in front of them. Do yourself a favor. Whatever it is the person you’re presenting this to needs to know, put it right in front of their face immediately.
Christopher S. Penn – 07:13
Back in the day, we came up with a framework called the SAINT framework, which stands for Summary, Analysis, Insights, Next Steps, Timeline. Where I’ve seen that go wrong is people try to do too much in the summary. From Analysis, Insights, Next Steps, and Timelines, there should be one to three bullets from each that become the summary.
Katie Robbert – 07:34
And that’s it?
Christopher S. Penn – 07:35
Yeah, that’s it. In terms of percentages, what we generally recommend to people is that Analysis should be 10% to 15% of the report: what happened, the data. Insights should be 10% to 15% of the report: why did those things happen? We did this, and this is what happened. Or this external factor occurred, and this has happened.
The remaining 50% to 60% of the report should be equally split between Next Steps—what are you going to do about it?—and Timeline—when are you going to do it? Those next steps and timeline become the decisions that you need the stakeholder to make and when they need to do it so that you get done what you need to get done.
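To make those proportions concrete, here is a small illustrative calculation. The exact weights are one midpoint reading of the ranges Chris gives, and the assumption that the Summary absorbs whatever is left over is ours, not a stated rule.

```python
# Illustrative arithmetic for the SAINT report proportions described above.
# Weights are a midpoint reading of the stated ranges (10-15% Analysis,
# 10-15% Insights, remaining 50-60% split evenly between Next Steps and
# Timeline); the Summary gets the leftover share.
SAINT_WEIGHTS = {
    "Summary": 0.10,
    "Analysis": 0.15,
    "Insights": 0.15,
    "Next Steps": 0.30,
    "Timeline": 0.30,
}


def allocate_slides(total_slides: int) -> dict:
    """Turn the percentage guidance into a rough slide count per section."""
    return {
        section: max(1, round(total_slides * weight))
        for section, weight in SAINT_WEIGHTS.items()
    }


print(allocate_slides(20))
# e.g. {'Summary': 2, 'Analysis': 3, 'Insights': 3, 'Next Steps': 6, 'Timeline': 6}
```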
Christopher S. Penn – 08:23
That’s the part we call the three “Whats”: What happened? So what? Now what? As you progress through any measurement framework, any reporting framework, the more time you spend on “Now what,” the more a stakeholder is likely to like the report.
You should absolutely, if the stakeholder wants it, provide the appendix of the data itself if they want to pore through it. But at the highest level, it should be, “Hey Katie, our website traffic was down 15% last month. The reason was that it was a shorter month, a lot of holidays. What we need to do is we need to spin up a small paid campaign, $500 for the next month, to boost traffic back to our key pages. I need a decision from you by October 31st. Go, no go.”
Christopher S. Penn – 09:18
And that would be the short summary because that fulfills your user story of, “As a CEO, I need to know what’s going on in mar
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the stark reality of the future of work presented at the Marketing AI Conference, MAICON 2025.
You’ll learn which roles artificial intelligence will consume fastest and why average employees face the highest risk of replacement. You’ll master the critical thinking and contextual skills you must develop now to transform yourself into an indispensable expert. You’ll understand how expanding your intellectual curiosity outside your specific job will unlock creative problem solving essential for survival. You’ll discover the massive global AI blind spot that US companies ignore and how this shifting landscape affects your career trajectory. Watch now to prepare your career for the age of accelerated automation!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-maicon-2025-generative-ai-for-marketers.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, we are at the Marketing AI Conference, MAICON 2025, in Cleveland with 1,500 of our best friends. This morning, the CEO of SmarterX, formerly the Marketing AI Institute, Paul Roetzer, was talking about the future of work. Now, before I go down a long rabbit hole, Katie, what were your immediate impressions and takeaways from Paul’s talk?
Katie Robbert – 00:23
Paul always brings this really interesting perspective because he’s very much a futurist, much like yourself, but he’s a futurist in a different way. Whereas you’re on the future of the technology, he’s focused on the future of the business and the people. And so his perspective was really, “AI is going to take your job.” If we had to underscore it, that was the bottom line: AI is going to take your job. However, how can you be smarter about it? How can you work with it instead of working against it? Obviously, he didn’t have time to get into every single individual solution.
Katie Robbert – 01:01
The goal of his keynote talk was to get us all thinking, “Oh, so if AI is going to take my job, how do I work with AI versus just continuing to fight against it so that I’m never going to get ahead?” I thought that was a really interesting way to introduce the conference as a whole, where every individual session is going to get into the solutions.
Christopher S. Penn – 01:24
The chart that really surprised me was one of those, “Oh, he actually said the quiet part out loud.” He showed the SaaS business chart: SaaS software is $500 billion of economic value. Of course, AI companies are going, “Yeah, we want that money. We want to take all that money.” But then he brought up the labor chart, which is $12 trillion of money, and says, “This is what the AI companies really want. They want to take all $12 trillion and keep it for themselves and fire everybody,” which is the quiet part out loud. Even if they take 20% of that, that’s still, obviously, what is it, $2 trillion, give or take? When we think about what that means for human beings, that’s basically saying, “I want 20% of the workforce to be unemployed.”
Katie Robbert – 02:15
And he wasn’t shy about saying that. Unfortunately, that is the message that a lot of the larger companies are promoting right now. So the question then becomes, what does that mean for that 20%? They have to pivot. They have to learn new skills, or—the big thing, and you and I have talked about this quite a bit this year—is you really have to tap into that critical thinking. That was one of the messages that Paul was sharing in the keynote: go to school, get your liberal arts degree, and focus on critical thinking. AI is going to do the rest of it.
Katie Robbert – 02:46
So when we look at the roles that are up for grabs, a lot of it was in management, a lot of it was in customer service, a lot of it was in analytics—things that already have a lot of automation around them. So why not naturally let agentic AI take over, and then you don’t need human intervention at all? So then, where does that leave the human?
Katie Robbert – 03:08
We’re the ones who have to think what’s next. One of the things that Paul did share was that the screenwriter for all of the Scorsese films said that ChatGPT gave him better ideas. We don’t know what those exact prompts looked like. We don’t know how much context was given. We don’t know how much background information. His name was Paul. Paul Schrader. Yes, I forgot it for a second. If Paul Schrader can look at Paul Schrader’s work, then he’s the expert. That’s the thing that I think needed to also be underscored: Paul Schrader is the expert in Paul Schrader. Paul Schrader is the expert in screenwriting those particular genre films. Nobody else can do that.
Katie Robbert – 03:52
So Paul Schrader is the only one who could have created the contextual information for those large language models. He still has value, and he’s the one who’s going to take the ideas given by the large language models and turn them into something. The large language model might give him an idea, but he needs to be the one to flesh it out, start to finish, because he’s the one who understands nuance. He’s the one who understands, “If I give this to a Leonardo DiCaprio, what is he gonna do with the role? How is he gonna think about it?” Because then you’re starting to get into all of the different complexities where no one individual ever truly works alone. You have a lot of other humans.
Katie Robbert – 04:29
I think that’s the part that we haven’t quite gotten to, is sure, generative AI can give you a lot of information, give you a lot of ideas, and do a lot of the work. But when you start incorporating more humans into a team, the nuance—it’s very discreet. It’s very hard for an AI to pick up. You still need humans to do those pieces.
Christopher S. Penn – 04:49
When you take a look, though, at something like the Tilly Norwood thing from a couple weeks ago, even there, it’s saying, “Let’s take fewer humans in there,” where you have this completely machine generated actor avatar, I guess. It was very clearly made to replace a human there because they’re saying, “This is great. They don’t have to pay union wages. The actor never calls in sick. The actor never takes a vacation. The actor’s not going to be partying at a club unless someone makes it do that.” When we look at that big chart of, “Here’s all the jobs that are up for grabs,” the $12 trillion of economic value, when you look at that, how at risk do you think your average person is?
Katie Robbert – 05:39
The key word in there is average. An average person is at risk. Because if an average person isn’t thinking about things creatively, or if they’re just saying, “Oh, this is what I have to do today, let me just do it. Let me just do the bare minimum, get through it.” Yes, that person is at risk. But someone who looks at a problem or a task that’s in front of them and thinks, “What are the five different ways that I could approach this? Let me sit down for a second, really plan it out. What am I not thinking of? What have I not asked? What’s the information I don’t have in front of me? Let me go find that”—that person is less at risk because they are able to think beyond what’s right in front of them.
Katie Robbert – 06:17
I think that is going to be harder to replace. So, for example, I do operations, I’m a CEO. I set the vision. You could theoretically give that to an AI to do. I could create CEO Katie GPT. And GPT Katie could set the vision, based on everything I know: “This is the direction that your company should go in.” What that generative AI doesn’t know is what I know—what we’ve tried, what we haven’t tried. I could give it all that information and it could still say, “Okay, it sounds like you’ve tried this.” But then it doesn’t necessarily know conversations that I’ve had with you offline about certain things. Could I give it all that information? Sure. But then now I’m introducing another person into the conversation. And as predictable as humans are, we’re unpredictable.
Katie Robbert – 07:13
So you might say, “Katie would absolutely say this.” And I’m going to look at it and go, “I would absolutely not say that.” We’ve actually run into that with our account manager where she’s like, “Well, this is how I thought you would respond. This is how I thought you would post something on social media.” I’m like, “Absolutely not. That doesn’t sound like me at all.” She’s like, “But that’s what the GPT gave me that is supposed to sound like you.” I’m like, “Well, it’s wrong because I’m allowed to change my mind. I’m a human.” And GPTs or large language models don’t have that luxury of just changing their minds and just kind of winging it, if that makes sense.
Christopher S. Penn – 07:44
It does. Based on your experience in managing people, what percentage of people are that exceptional person versus the average or the below average?
Katie Robbert – 07:55
A small percentage, unfortunately, because it comes down to two things: consistency and motivation. First, you have to be consistent and do your thing well all the time. In order to be consistent, you have to be motivated. So it’s not enough to just show up, check the boxes, and then go about your day, because anybody can do that; AI can do that. You have to be motivated to want to learn more, to want to do more. So the people who are demonstrating a hunger for reaching—what do they call it?—punching above their weight, reaching beyond what they have, those are the people who are going to be less vulnerable be
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the worth of conferences and events in a tight economy.
You will learn a powerful framework for evaluating whether an expensive conference ticket meets your specific professional goals. You will use generative artificial intelligence to score event agendas, showing you which sessions offer the best return on your time investment. You will discover how expert speakers and companies create tangible value, moving beyond vague thought leadership to give you actionable takeaways. You will maximize your event attendance by demanding supplementary tools, ensuring you retain knowledge long after you leave the venue. Watch this episode now to stop wasting budget on irrelevant professional events!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-make-conferences-worth-the-investment.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s *In Ear Insights*, let’s talk about events, conferences, trade shows, workshops—the gamut of things that you could get up from your desk maybe, go somewhere else, eat hotel chicken, and enjoy speaking. The big question is this, Katie: In today’s absolutely loony environment, with the economic uncertainty and the budgets and all this and that, are events still worth it? This is a two-part question: Are events still worth it for the attendees, and are events still worth it for companies that want to generate business from events?
Katie Robbert – 00:50
It’s a big question. And if our listeners are anything like me, it takes a lot to get them to put on real pants and actually leave the house—something that isn’t sweatpants or leggings or something like that—because you’re spending the time, the resources, the money to go out and actually interact with other people.
In terms of an attendee, I think there can be a lot of value, provided you do your homework on who the speakers are, what their expertise is, what they’re promising to teach you in the workshop or the session or whatever the thing is. The flip side of that is it can be worth it for a speaker, provided you know who your audience is, you can create an ICP, and provided you are giving value to the audience.
Katie Robbert – 01:54
So if you’re a speaker who has made their whole career on big ideas and thought leadership, that’s fine, but people have a hard time buying something from that and saying, “I know exactly what it is I need to do next.”
So there is a time and place for those speakers. But for an attendee to really get value, you need to teach them something. You need to show them how to be very tactical, be very hands-on. That’s where an attendee is going to get more value. So I would say overall, I think events are worth it provided both the attendee and the speaker are doing their homework to make sure they are getting and providing value.
Christopher S. Penn – 02:44
Yep. The trifecta has always been speaker, sponsor, attendee. So each entity has their own motivations. And one of the best things that you can do, even before signing up for an event while you’re considering them, is to actually make a user story. So for me, Christopher Penn, as a keynote speaker, I want to speak at, say, Davos, so that I can raise my stature among professional speakers by speaking at the World Economic Forum. That’s just a simple example.
It becomes pretty clear then whether that event fits my “so that,” which maps to the 5P framework. So I have a purpose as a speaker, I have a performance, I have a known outcome that I want.
Christopher S. Penn – 03:35
And then I have to figure out: Does the event provide the people, process, and platform to get me to my purpose and achieve the performance that I want?
As an attendee, you would do the same thing. One of the reasons why I pretty much never go to events unless I’m speaking at them is because when I do this user story for myself, as an AI data scientist: “I want to learn the latest and greatest techniques and methodologies for using generative AI models so that I can improve the productivity of my work and scale AI faster.”
When I use that user story, there’s not a single event that matches that user story. None. Zero. Why? Because all of the stuff that fulfills that is not at events. It is in the steady stream of academic papers being published every day.
Christopher S. Penn – 04:34
It is in the research that’s being done, in the code repositories that are being published on places like GitHub. And I know myself and how I work. I will get immediate benefit by going to someone’s GitHub repo, checking out the code, and saying, “Okay, well how do I make this work for Trust Insights or this client or that client.” An event doesn’t do that for me.
Now, if my story was, “As a speaker, I want to go to this event so that I can network with this group of companies,” that does make sense. But as an attendee, for me, my user story is so specific that events don’t line up for me.
Katie Robbert – 05:12
And I think that’s something that happens every year during event season: companies are sending their people. They’re like, “Oh, we got three tickets, let’s send three people.” The thing that always bugged me about that wasn’t that they were spending the time to send people, it’s that there was no real action plan. What are they supposed to get out of it? What are they supposed to bring back to the company to help other people learn?
Because they’re not inexpensive. You have to get the ticket to the event, then you have to get travel to the event and lodging to the event, and then you have to eat at the event. And some events are better than others about actually feeding people. And so those are just expenses that you have to expect.
Katie Robbert – 05:58
And then there’s also the lost time away from client work, away from the day-to-day. And so that’s a sunk cost as well. So all of that adds up to, “Okay, did you just send your employees on a vacation or are they actually getting something out of it that they can bring back to their organization, to their team?” to say this is the latest and greatest.
That is a big part of how attendees would get value: What is my KPI? What am I supposed to get out of this? Maybe it’s literally, “My goal is to meet 3 new people.” That’s an acceptable goal, as long as that’s your goal and then you do that. Or my goal is to understand what’s going on with agentic AI as it applies to social media.
Katie Robbert – 06:55
Okay, well, those sessions exist. And if you’re not attending those sessions, then you’re probably just standing over at the coffee cart, gossiping with your friends, missing out on the thing that you actually went there to learn.
But you need to know what it is that you’re doing in the first place, why are you there. And then figure out what sessions match up with the goals that you have. It sounds like a lot of work. It is. But it’s worth it to do that homework upfront. It’s like anything else. Doing your requirements gathering is going to get you better results when you actually start to execute.
Katie Robbert – 07:31
Events can be really overwhelming because there’s a lot going on, there’s a lot of concurrent sessions, there’s a lot of people, there’s a lot of vendors, there’s a lot of booths, whatever. It can be really overwhelming. But if you do your requirements gathering upfront to say, “As a persona, I want to [goal] so that [outcome],” and you look at the agenda and you say, “These are the sessions that are going to help meet my ‘so that,’ meet my performance, help me understand my purpose and get to that goal faster,” then you have a plan. You can at least sort of stay on track. And then everything else is just kind of extra and auxiliary.
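The episode summary promises a way to use generative AI to score event agendas against your user story; here is a minimal sketch of what that could look like. It assumes the official OpenAI Python SDK with an API key in the environment, and the agenda entries, model name, and 1-to-5 scale are illustrative assumptions.

```python
# Minimal sketch: score each session in a conference agenda against your
# user story with a large language model. Agenda entries, model name, and
# the 1-5 scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

user_story = ("As a B2B marketer, I want hands-on agentic AI techniques for "
              "social media, so that I can bring one working prototype back to my team.")

agenda = [
    "Keynote: The Future of Work in the Age of AI",
    "Workshop: Building Your First AI Agent for Social Scheduling",
    "Panel: Thought Leadership in a Crowded Market",
]

for session in agenda:
    prompt = (f"User story: {user_story}\n"
              f"Conference session: {session}\n"
              "On a scale of 1 (skip) to 5 (must attend), how well does this "
              "session serve the user story? Reply with the number and one sentence.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    print(session, "->", response.choices[0].message.content)
```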
Katie Robbert – 08:11
As a speaker, again, you have to be thinking about it in those terms. Maybe you create some user stories for attendees from your ICP and you say, “If my ICP is a B2B marketer who’s about a 101, 102 with agentic AI, then what can I teach them that’s going to bring them into my session and give them an immediate takeaway and value?”
Christopher S. Penn – 08:41
Yep. One of the—so for those who don’t know, we’re hosting our first event as a company in London on October 31, 2025. If you’re listening to this after that date, pop by the Trust Insights website because we are planning potentially some more events like this. It’s a full-day workshop. And one of the things that is nice about running your own event is you can ask attendees, “What do you want to learn from this?”
I was looking at the responses this morning, going, “Wow, this is…” There’s a wide range. But one of the ones that stuck out is exactly what you said, Katie.
Christopher S. Penn – 09:21
We asked the question: “For this event to be a success, what is the one thing that you need to come home with?” As this person said, “I need 5 use cases for Generative AI that I can explain to my team for this event to be successful.” One other person said, “I need 1 prototype. Maybe it’s just a prompt, maybe it’s a GPT. I need 1 prototype that I can take back to work and use immediately for this event to be a success.” And that tells me a lot as both an event organizer and as a speaker. That’s what’s expected.
Christopher S. Penn – 09:56
That is what is expected now for this kind of thing. If you just go to an event kind of randomly, okay, you don’t know why you’re there. But if you say, “This is my burning question, will this event fulfill this?” it’s a lot more clear.
One of the things I think is so useful to do as an
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss scaling Generative AI past basic prompting and achieving real business value.
You will learn the strategic framework necessary to move beyond simple, one-off interactions with large language models. You will discover why focusing on your data quality, or “ingredients,” is more critical than finding the ultimate prompt formula. You will understand how connecting AI to your core business systems using agent technology will unlock massive time savings and efficiencies. You will gain insight into defining clear, measurable goals for AI projects using effective user stories and the 5P methodology. Stop treating AI like a chatbot intern and start building automated value—watch now to find out how!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-getting-real-value-from-generative-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s *In-Ear Insights*. Another week, another gazillion posts on LinkedIn and various social networks about the ultimate ChatGPT prompt. OpenAI, of course, published its Prompt Blocks library of hundreds of mediocre prompts that are particularly unhelpful.
And what we’re seeing in the AI industry is this: A lot of people are stuck and focused on how do I prompt ChatGPT to do this, that, or the other thing, when in reality that’s not where the value is.
Today, let’s talk about where the value of generative AI actually is, because a lot of people still seem very stuck on the 101 basics. And there’s nothing wrong with that—that is totally great—but what comes after it?
Christopher S. Penn – 00:47
So, Katie, from your perspective as someone who is not the propeller head in this company and is very representative of the business user who wants real results from this stuff and not just shiny objects, what do you see in the Generative AI space right now? And more important, what do you see it’s missing?
Katie Robbert – 01:14
I see it’s missing any kind of strategy, to be quite honest. The way that people are using generative AI—and this is a broad stroke, it’s a generalization—is still very one-off. Let me go to ChatGPT to summarize these meeting notes. Let me go to Gemini to outline a blog post. There is nothing wrong with that, but it’s not a strategy; it’s one more tool in your stack. And so the big thing that I see missing is, what are we doing with this long term?
Katie Robbert – 01:53
Where does it fit into the overall workflow and how is it actually becoming part of the team? How is it becoming integrated into the organization? So, people who are saying, “Well, we’re sitting down for our 2026 planning, we need to figure out where AI fits in,” I think you’re already setting yourself up for failure, because you’re leading with “AI needs to fit in somewhere” versus leading with “What do we need to do in 2026, period?”
Chris has brought up the 5P Framework, which is 100% where I’m going to recommend you start. Start with the purpose. So, what are your goals? What are the questions you’re trying to answer? How are you trying to grow and scale? And what are the KPIs that you want to be thinking about in 2026?
Katie Robbert – 02:46
Notice I didn’t say with AI. Leave AI out of it for now. For now, we’ll get to it. So what are the things that you’re trying to do? What is the purpose of having a business in 2026? What are the things you’re trying to achieve?
Then you move on to people. Well, who’s involved? It’s the team, it’s the executives, it’s the customers. Don’t forget about the customers because they’re kind of the reason you have a business in the first place. And figure out what all of those individuals bring to the table. How are they going to help you with your purpose and then the process? How are we going to do these things? So, in order to scale the business by 10x, we need to bring in 20x revenue.
Katie Robbert – 03:33
In order to bring in 20x revenue, we need to bring in 30x visits to the website. And you start to go down that road. That’s sort of your process. And guess what? We haven’t even talked about AI yet, because it doesn’t matter at the moment. You need to get those pieces figured out first.
If we need to bring in 30x the visits to the website that we were getting in the previous year, how do we do that? What are we doing today? What do we need to do tomorrow? Okay, we need to create content, we need to disseminate it, we need to measure it, we need to do this. Oh, maybe now we can think about platforms. That’s where you can start to figure out where in this does AI fit?
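To make that chain of reasoning concrete, here is a hypothetical back-of-the-envelope version of the funnel math. Every dollar figure and conversion rate in it is an assumption for illustration only, not Trust Insights guidance.

```python
# Illustrative funnel math for the planning chain Katie describes
# (revenue goal -> pipeline -> website visits). All numbers are hypothetical.
revenue_goal = 2_000_000          # target new revenue for the year, in dollars
average_deal_size = 50_000        # hypothetical average contract value
proposal_close_rate = 0.30        # proposals that become closed deals
lead_to_proposal_rate = 0.20      # qualified leads that reach proposal
visit_to_lead_rate = 0.02         # website visits that become qualified leads

deals_needed = revenue_goal / average_deal_size
proposals_needed = deals_needed / proposal_close_rate
leads_needed = proposals_needed / lead_to_proposal_rate
visits_needed = leads_needed / visit_to_lead_rate

print(f"Deals needed: {deals_needed:.0f}")
print(f"Proposals needed: {proposals_needed:.0f}")
print(f"Qualified leads needed: {leads_needed:.0f}")
print(f"Website visits needed: {visits_needed:,.0f}")
```

Only after working through numbers like these does it make sense to ask where, if anywhere, AI accelerates a given step.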
Katie Robbert – 04:12
And I think that’s the piece that’s missing: people are jumping to AI first and not why the heck are we doing this. So that is my long-winded rant. Chris, I would love to hear your perspective.
Christopher S. Penn – 04:23
Perspective specific to AI. Where people are getting tripped up is in a couple different areas. The biggest at the basic level is a misunderstanding of prompting. And we’re going to be talking about this. You’ll hear a lot about this fall as we are on the conference circuit.
Prompting is like a recipe. So you have a recipe for baking beef Wellington, what have you. The recipe is not the most important part of the process. It’s important. Winging it, particularly for complex dishes, is not a good idea unless you’ve done it a million times before. The most important part is things like the ingredients. You can have the best recipe in the world; if you have no ingredients, you ain’t eating. That’s pretty obvious.
Christopher S. Penn – 05:15
And yet so many people are so focused on, “Oh, I’ve got to have the perfect prompt”—no, you don’t. You need to have good ingredients to get value.
So, let’s say you’re doing 2026 strategic planning and you go to the AI and say, “I need to work on my strategic plan for 2026.” It will understand generally what that means because most models are reasoning models now. But if you provide no data about who you are, what you do, how you’ve done it, your results before, who your competitors are, who your customers are, all the 10 things that you need to do strategic planning, like your budget, who’s involved, the Five Ps—basically, AI won’t be able to help you any better than you or your team already can. It’s a waste of time.
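Here is a minimal sketch of that “ingredients before recipe” point: assemble your own business context before you ever ask a model for a strategic plan. The file names are hypothetical placeholders for whatever documents you actually have.

```python
# Minimal sketch of "ingredients before recipe": gather business context from
# local files and put it in front of the model before asking for planning help.
# File names are hypothetical placeholders.
from pathlib import Path

ingredient_files = [
    "company_overview.txt",
    "2025_results_summary.txt",
    "competitor_notes.txt",
    "budget_and_team.txt",
]

ingredients = "\n\n".join(
    f"## {name}\n{Path(name).read_text()}" for name in ingredient_files
)

prompt = (
    "Using the business context below, draft a first-pass 2026 strategic plan "
    "with three goals, the KPIs for each, and the biggest risks.\n\n"
    f"{ingredients}"
)

# The assembled prompt, ingredients included, is what you paste into (or send
# via API to) whichever model your team uses.
print(f"Prompt is {len(prompt):,} characters across {len(ingredient_files)} ingredient files.")
```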
Christopher S. Penn – 06:00
Immediate value unlocks from AI start with the right ingredients, the right recipe, and your skills. That should sound an awful lot like people, process, and platform.
I call it Generative AI 102. If 101 is, "How do I prompt?" 102 is, "What ingredients need to go with my prompt to get value out of it?"
But then 201 is—and this is exactly what you started off with, Katie—one-off interactions with ChatGPT don’t scale. They don’t deliver value because you, the human, are still typing away like a little monkey at the keyboard. If you want value from AI, part of its value comes from saving time, saving money, and making money. Saving time means scale—doing things at scale—which means you need to connect your AI to other systems.
Christopher S. Penn – 06:59
You need to plug it into your email, into your CRM, into your DSP. Name the technology platform of your choice. If you are still just copy-pasting in and out of ChatGPT, you’re not going to get the value you want because you are the bottleneck.
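To make "connect your AI to other systems" concrete, here is a minimal sketch of one way that can look in practice, assuming a CSV export from your CRM and the OpenAI Python SDK. The file name, column names, model choice, and prompt are illustrative assumptions, not anything specified in the episode:

```python
# Hypothetical sketch: batch-summarize CRM notes instead of copy-pasting into a chat window.
# Assumes a "contacts.csv" export with "name" and "notes" columns, the OpenAI Python SDK
# installed, and an OPENAI_API_KEY in the environment. All names here are illustrative.
import csv

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(notes: str) -> str:
    """Ask the model for a one-sentence next step based on a contact's notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You summarize CRM notes into one next-step sentence."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content.strip()


with open("contacts.csv", newline="", encoding="utf-8") as src, \
        open("contacts_summarized.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["name", "next_step"])
    writer.writeheader()
    for row in reader:
        # Each record flows through the model without a human re-typing anything.
        writer.writerow({"name": row["name"], "next_step": summarize(row["notes"])})
```

The point is not this particular script; it is that once the model is wired to a data source and an output, the human stops being the copy-paste bottleneck.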
Katie Robbert – 07:16
I think that this extends to the conversations around agentic AI. Again, are you thinking about it as a one-off or are you thinking about it as a true integration into your workflow? Okay, so I don’t want to have to summarize meeting notes anymore. So let me spend a week building an agent that’s going to do that for me. Okay, great.
So now you have an agent that summarizes your meeting notes and doesn't do anything else. So now it's, okay, what else do I want it to do? And you start Frankensteining together all of these one-off tasks until you have 100 agents doing 100 things, versus maybe one really solid workflow that could have done a lot of those things with fewer failure points.
Katie Robbert – 08:00
That's really what we're talking about. When you're short-sighted in thinking about where generative AI fits in, you introduce even more failure points in your business—your operations, your process, your marketing, whatever it is. Because you're just saying, "Okay, I'm going to use ChatGPT for this, and I'm going to use Gemini for this, and I'm going to use Claude for this, and I'm going to use Google Colab for this."
Then it’s just kind of all over the place. Really, what you want to have is a more thoughtful, holistic, documented plan for where all these pieces fit in. Don’t put AI first. Think about your goals first. And if the goal is, “We want to use AI,” it’s the wrong goal. Start over.
Christopher S. Penn – 08:56
Unless that’s literally your job.
Katie Robbert – 09:00
But that would theoretically tie to a larger business goal.
Christopher S. Penn – 09:05
It should.
Katie Robbert – 09:07
So what is the larger business goal that you’ve then determined? This is where AI fits in. Then you can introduce AI. A great way to figure that out is a user story. A user story is a simple three-part sentence: As a [Persona], I want [X], so that [Y].
So, as the lead AI engineer, I want to build an AI agent. And you don’t stop there. You say, “So that we can increase our revenue by 30x,” or, “Find more efficiencies and cut down the amount of time that it takes to create content.” Too many people, when we are talking about where people are getting generative AI wrong,
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether awards still matter in today’s marketing landscape, especially with the rise of generative AI.
You will understand how human psychology and mental shortcuts make awards crucial for decision-making. You will discover why awards are more relevant in the age of generative AI, influencing search results and prompt engineering. You will learn how awards can differentiate your company and become a powerful marketing tool. You will explore new ways to leverage AI for award selection and even consider creating your own merit-based recognition. Watch this episode now to redefine your perspective on marketing accolades!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-do-awards-still-matter.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In-Ear Insights, the multi-platinum, award-winning, record-setting—you name it. People love to talk about awards, particularly companies. We love to say we are an award-winning this, we're an award-winning that. Authors say, "This is my best-selling, award-winning book." But Katie, you had a very interesting and provocative question: In today's marketing landscape, do awards still matter?
Katie Robbert – 00:27
And I still have that question. Also, let me back up a little bit. When I made the transition from working in more of an academic field to the private sector, I had a huge revelation—my eyes were opened to how awards worked. Call it naive, call it being sheltered from this side of the industry, but I didn't know at the time that in order to win an award, you had to submit yourself for the award. I naively thought that you just do good work and you get nominated by someone who recognizes that you're doing good work—that's how I assumed awards worked. Because in my naive brain, you do good work and they reward you for it.
Katie Robbert – 01:16
And so here’s your award for being amazing.
Speaker 3 – 01:18
And that is not at all how it works.
Katie Robbert – 01:20
That’s not how any of the Emmys or the Grammys—they all…
Speaker 3 – 01:24
Have to submit themselves.
Katie Robbert – 01:25
I didn’t know that they have to choose the scene that they think is award-winning. Yes, it’s voted on by a jury of your peers, which is also perhaps problematic depending on who’s on the jury. There’s the whole—the whole thing just feels like one big scam.
Katie Robbert – 01:46
That said, per usual, I'm an n of 1, and I know that in certain industries, the more awards and accolades you rack up and can put on your website, the more likely it is that people are going to hire you or your firm or buy your products because they're award-winning. So that's the human side of it. Part of what I was wondering when I asked, "Do awards matter?" was about people using generative AI to do searches. We got this question from a client earlier this week: when we're looking at organic search, how much…
Speaker 3 – 02:29
Of that traffic is coming from the different LLMs?
Katie Robbert – 02:33
And so it just made me think: if people are only worried about if they’re showing up in the large language models, do awards matter? So that was a lot of preamble. That was a lot of pre-ramble, Chris. So, do awards matter in the age of LLMs?
Christopher S. Penn – 02:55
I think that you’ve highlighted the two angles. One is the human angle. Awards very much matter to humans because it’s a heuristic. It’s a mental shortcut. The CMO says, “Go build me a short list of vendors in this case.” And what does the intern who usually is the one saddled with the job do? They Google for “award-winning vendor in X, Y or Z.” If they use generative AI and ChatGPT, they will very likely still say, “Build me a short list of award-winning whatevers in this thing because my CMO told me to.” And instead of them manually Googling, a tool like ChatGPT or Gemini will do the Googling for you.
Christopher S. Penn – 03:33
But if that heuristic of "I need something that's award-winning" is still part of your lexicon, part of the decision makers' lexicon—and maybe they don't even delegate to the intern anymore, maybe they run the deep research query themselves and say, "Give me a short list of award-winning marketing agencies"—then it still matters a lot. In the context of generative AI itself, I would argue that it actually matters more today. And here's why: in things like the RACE framework and the RAPPEL framework and the many different prompt frameworks that we all use—the OpenAI Harmony framework, you name it—what do they always say? "Choose a role."
Christopher S. Penn – 04:15
"Choose a role with specifics like 'you are an award-winning copywriter,' 'you are an award-winning this,' 'you are an award-winning that,' 'you are a Nobel Prize-winning this,' 'you are a CMI Content Marketing Award winner of this or that' as part of the role in the prompt." If you are that award-winning company and you have provided ample evidence of it—when you win an award, you send out press releases, you put it on social media: "Trust Insights won the award for this. We are an award-winning so-and-so"—that makes it into the training data.
Christopher S. Penn – 04:46
And if someone invokes that phrase “award-winning consulting firm,” if we’ve done our job of seeding the LLMs with our award-winning language, just by nature of probability, we have a higher likelihood of our entities being invoked with association to that term.
Katie Robbert – 05:09
It reminds me—this must have been almost two decades ago—I worked with a stakeholder who was a big fan of finding interesting recipes online.
Speaker 3 – 05:25
So again, remember: Two decades ago.
Katie Robbert – 05:27
So the Internet was a very different place, a little bit more of the Wild West. Actually, no, that’s not true.
Christopher S. Penn – 05:34
MySpace was a thing.
Katie Robbert – 05:36
I never had a MySpace. And his queries—he would always start with "world's best." So he wouldn't just say, "Get me a chili recipe." He would always say, "Get me the world's best chili recipe." And his rationale at the time was that it would serve up higher quality content: if people were labeling their content "this is the world's best," "this is the award-winning," "this is the whatever," then he would get a higher quality chili recipe. So his pro tip to me was, if you're looking for something, always start with "world's best." And it just strikes me that 20 years later, that hasn't changed.
Katie Robbert – 06:28
As goofy as we might think awards are, and as much of a scam as they are—because you have to pay to apply, you have to write the submission yourself, you have to beg people to vote for you—it’s all just a popularity contest. It sounds like in terms of the end user searching, it still matters. And that bums me out, quite honestly, because awards are a lot of work.
Christopher S. Penn – 06:50
They are a lot of work. But to your point about "the world's best chili recipe"—I literally just asked ChatGPT that, and what is the title of its answer? "Award-style chili recipe." Right there it is. That's a terrible prompt. We all know that's a terrible prompt. But it's not a dishonest prompt. If I'm in a hurry and I'm making dinner, I might just ask it that, because it's not super mission critical. I'm okay with a query like this. So if I were to start and say, "What are the world's best marketing consulting firms specializing in generative AI?"—that's also not an unreasonable thing. And what does it do? It kicks off a web search. So immediately it starts doing web searches.
Christopher S. Penn – 07:41
And so if you've done your 20 years of optimization and awards and this and that, you will get those kinds of results. You can then say, "Okay, who has won awards for generative AI?" as our follow-up. For those who are listening, not watching, I'm just asking ChatGPT super naive questions—who are award winners in generative AI, et cetera. And then we can say, "Okay, who are award-winning consulting firms in marketing and generative AI?" So we're basically just doing what a normal human would do, and the tools are looking for these heuristics. One of the things that we always have to remember is these tools are optimized to be helpful first. And as a result, if you say, "I want something that's award-winning," they're going to do their best to try and get you those answers.
Christopher S. Penn – 08:43
So do awards matter? Yes, because clearly the tools are able to understand. Yes, I need to go find consulting firms that have won awards.
Katie Robbert – 08:56
Now, in the age of AI—and yes, I did just say that—I would imagine, though, because it is, for lack of a better term, a more advanced Internet search: one of the things that would happen during quote, unquote "award season" is if you had previously submitted for an award, you'd start getting all the emails: "Hey, our next round is coming up. Don't forget to submit," blah, blah. But if you're brand new to awards—which you could argue Trust Insights is, we haven't submitted for any—we'd be like, "Huh, I wonder where we start. I wonder what awards are available for us to submit to." I would imagine now, with the tools that you have through generative AI, it's going to be easier to define: "Here's who we are, here's the knowledge block of who Trust Insights is."
Katie Robbert – 09:47
Help me find awards that are appropriate for us to submit to that we are likely to win versus the—I think you would call it—the spray and pray me
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI decisioning, the latest buzzword confusing marketers.
You will learn the true meaning of AI decisioning and the crucial difference between classical AI and generative AI for making sound business choices. You’ll discover when AI is an invaluable asset for decision support and when relying on it fully can lead to costly mistakes. You’ll gain practical strategies, including the 5P framework and key questions, to confidently evaluate AI decisioning software and vendors. You will also consider whether building your own AI solution could be a more effective path for your organization. Watch now to make smarter, data-driven decisions about adopting AI in your business!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-what-is-ai-decisioning.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
**Christopher S. Penn – 00:00**
In this week’s In-Ear Insights, let’s talk about a topic that is both old and new. This is decision optimization or decision planning, or the latest buzzword term AI decisioning. Katie, you are the one who brought this topic to the table. What the heck is this? Is this just more expensive consulting speak? What’s going on here?
**Katie Robbert – 00:23**
Well, to set the context, I’m actually doing a panel for the Martech organization on Wednesday, September 17, about how AI decisioning will change our marketing. There are a lot of questions we’ll be going over, but the first question that all of the panelists will be asked is, what is AI decisioning? I’ll be honest, Chris, it was not a term I had heard prior to being asked to do this panel. But, I am the worst at keeping up with trends and buzzwords.
When I did a little bit of research, I just kind of rolled my eyes and I was like, oh, so basically it’s the act of using AI to optimize the way in which decisions are made. Sort of. It’s exactly what it sounds like.
**Katie Robbert – 01:12**
But it's also, I think, to your point, a consultant word to make things sound more expensive than they should be, because people love to do that. So at a high level, it's sticking a bunch of automated processes together to help support the act of making business decisions. I'm sure there are companies that are fully comfortable with taking your data and letting their software take over all of your decisions without human intervention, which I could rant about for a very long time.
When I asked you this question last week, Chris, what is AI decisioning? You gave me a few different definitions. So why don’t you run through your understanding of AI decisioning?
**Christopher S. Penn – 02:07**
The big one comes from our friends at IBM. IBM used to have this platform called IBM Decision Optimization. I don’t actually know if it still exists or not, but it predated generative AI by about 10 years. IBM’s take on it, because they were using classical AI, was: decision optimization is the use of AI to improve or validate decisions.
The way they would do this was you take a bunch of quantitative data, put it into a system, and it would basically run a lot of binary tree classification—if this, then that; if this, then that—to try to come out with: okay, what's the best decision to make here that correlates with the outcome you care about? So that was classic AI decisioning from roughly 2010 to 2020.
**Christopher S. Penn – 03:06**
Now everybody and their cousin is throwing this stuff at tools like ChatGPT and stuff like that. Boy, do I have some opinions about that—about why that’s not necessarily a great idea.
**Katie Robbert – 03:19**
What I like about the description you gave—the logical flow of "if this, then that"—is that it matches the way I understand AI decisioning to work. It should be almost like a series of choose-your-own-adventure points: if this happens, go here; if this happens, go here. That's the way I think about AI-assisted decisioning. I'm going to keep using the word "assisted" because I don't think it should ever take over human decisioning. But that's one person's opinion. I like that very binary "if this, then that" flow.
So that’s the way you and I agree it should be used. Let’s talk about the way it’s actually being used and the pros and cons of what the reality is today of AI decisioning.
**Christopher S. Penn – 04:12**
The way it’s being used or the way people want to use it is to fully outsource the decision-making to say, “AI, go and do this stuff for me and tell me when it’s done.” There are cases where that’s appropriate. We have an entire framework called the TRIPS framework, which is part of the new AI strategy course that you can get at TrustInsights AI strategy course. Katie teaches the TRIPS framework: Time, Repetitiveness, Importance, Pain, and Sufficient Data.
What’s weird about TRIPS that throws people off is that the “I” for importance means the less important a task is, the better a fit it is for AI—which fits perfectly into AI decisioning. Do you want to hand off completely a really important decision to AI? No. Do you want to hand off unimportant decisions to AI? Yes. The consequences for getting it wrong are so much lower.
**Christopher S. Penn – 05:05**
Imagine you had a GPT you built that said, “Where do we want to order lunch from today?” It has 10 choices, runs, and spits out an answer. If it gives you a wrong answer—wrong answer out of 10 places you generally like—you’re not going to be hugely upset. That is a great example of AI decisioning, where you’re just hanging out saying, “I don’t care, just make a decision. I don’t even care—we all know the places are all good.” But would you say, “Let’s hand off our go-to-market strategy for our flagship product line”? God, I hope not.
**Katie Robbert – 05:46**
It’s funny you say that because this morning I was using Gemini to create a go-to-market strategy for our flagship product line. However, with the huge caveat that I was not using generative AI to make decisions—I was using it to organize the existing data we already have.
Our sales playbook, our ICPs, all the different products—giving generative AI the context that we’re a small sales and marketing team. Every tactic we take needs to be really thoughtful, strategic, and impactful. We can’t do everything. So I was using it in that sense, but I wasn’t saying, “Okay, now you go ahead and execute a non-human-reviewed go-to-market strategy, and I’m going to measure you on the success of it.” That is absolutely not how I was using it.
**Katie Robbert – 06:46**
It was more of—I think the use case you would probably put that under is either summarization first and then synthesis next, but never decisioning.
**Christopher S. Penn – 07:00**
Yeah, and where this new crop of AI decisioning is going to run into trouble is the very nature of large language models—LLMs. They are language tools, they’re really good at language. So a lot of the qualitative stuff around decisions—like how something makes you feel or how words are used—yes, that is 100% where you should be using AI.
However, most decision optimization software—like the IBM Decision Optimization product—requires quantitative data. It requires an outcome to do regression analysis against. Behind the scenes, a lot of these tools take categorical data—like topics on your blog, for example—and reduce that to numbers so they can do binary classification. They figure out "if this, then that; if this, then that" and come up with the decision. Language models can't do that, because that's math.
So if you are just blanket handing off decisioning to a tool like ChatGPT, it will imitate doing the math, but it will not do the math. So you will end up with decisions that are basically hallucinations.
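To make the distinction concrete, here is a minimal sketch of the kind of math classical decisioning tools run behind the scenes: categorical data encoded as numbers, then a classification tree fit against a known outcome. The toy data, column names, and library choice (scikit-learn) are illustrative assumptions, not a description of IBM's product or any specific vendor:

```python
# Hypothetical sketch of classical "decisioning": encode categories as numbers,
# then fit a decision tree against a known outcome. The data below is made up.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy historical data: blog topic, promotion channel, and whether the post converted.
df = pd.DataFrame({
    "topic":     ["analytics", "ai", "ai", "seo", "analytics", "seo", "ai", "analytics"],
    "channel":   ["email", "social", "email", "social", "social", "email", "email", "email"],
    "converted": [1, 0, 1, 0, 0, 1, 1, 0],
})

# Reduce categorical data to numbers (one-hot encoding) so the tree can do the math.
X = pd.get_dummies(df[["topic", "channel"]])
y = df["converted"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted tree is literally a readable stack of "if this, then that" rules.
print(export_text(tree, feature_names=list(X.columns)))
```

A language model can describe rules like these fluently, but it is not actually fitting them against your data; that is the gap Chris is pointing at.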
**Katie Robbert – 08:15**
For those software companies promoting their tools to be AI decision tools or AI decisioning tools—whatever the buzz term is—what is the caution for the buyer, for the end user? What are the things we should be asking and looking for? Just as Chris mentioned, we have the new AI strategy course. One of the tools in the AI strategy course—or just the toolkit itself, if you want that at a lower cost—is the AI Vendor cheat sheet. It contains all the questions you should be asking AI vendors.
But Chris, if someone doesn't know where to start and their CMO or COO is saying, "Hey, this tool has AI decisioning in it, look how much we can hand over," what are the things we should be looking for, and what should we never do?
**Christopher S. Penn – 09:16**
First things I would ask are: “Show me your system map. Show me your system architecture map.” It should be high level enough that they don’t worry about giving away their proprietary secret sauce. But if the system map is just a big black box on a sheet of paper—no good.
Show me how the system works: how do you handle qualitative data? How do you handle quantitative data? How do you blend the two together? What are broadly the algorithm families involved? At some point, you should probably have binary classification trees in there. At some point, you should have regression analysis, like gradient boosting, in there. Those would be the technical terms I’d be looking for in a system map for decisioning software. Let me talk to an engineer without a salesperson present. That’s my favorite.
**Christopher S. Penn – 10:05**
And if a company says, "No, no, we can't do that"—clearly, then, there's a problem, because I know I'm going to ask the engineer something and hear, "Oh, it doesn't do that—what are you talking about?" That is always the red flag for me. If you will not let me talk
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss whether blogs and websites still matter in the age of generative AI.
You’ll learn why traditional content and SEO remain essential for your online presence, even with the rise of AI. You’ll discover how to effectively adapt your content strategy so that AI models can easily find and use your information. You’ll understand why focusing on answering your customer’s questions will benefit both human and AI search. You’ll gain practical tips for optimizing your content for “Search Everywhere” to maximize your visibility across all platforms. Tune in now to ensure your content strategy is future-proof!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-do-websites-matter-in-the-age-of-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In Ear Insights, one of the biggest questions that people have—and there's a lot of debate on places like LinkedIn about this—is whether blogs and websites and things even matter in the age of generative AI. There are two different positions on this. The first says, no, it doesn't matter; you just need to be everywhere—doing podcasts and YouTube and stuff like that, as we are now. The second is the classic "don't build on rented land": have a place that you can call your own. So I have opinions on this, but Katie, I want to hear your opinions on this.
Katie Robbert – 00:37
I think we are in some ways overestimating people's reliance on using AI for fact-finding missions. I think a lot of people are turning to generative AI for "tell me the best agency in Boston" or "tell me the top five list" versus the way it worked before, which is they would go to a search bar and do that instead. I think we're overestimating the number of people who actually do that.
Katie Robbert – 01:06
Especially given that, when we talk to people, a lot of them are still using generative AI for the basics—to write a blog post or something like that. I could be mistaken, but I feel pretty confident in my opinion that people are still looking for websites.
Katie Robbert – 01:33
People are still looking for thought leadership in the form of a blog post or a LinkedIn post that’s been repurposed from a blog post. People are still looking for that original content. I feel like it does go hand in hand with AI because if you allow the models to scrape your assets, it will show up in those searches. So I guess I think you still need it. I think people are still going to look at those sources. You also want it to be available for the models to be searching.
Christopher S. Penn – 02:09
And this is where folks who know the systems generally land. When you look at a ChatGPT or a Gemini or a Claude or a DeepSeek, what's the first thing that happens when a model is uncertain? It fires up a web search. That web search is traditional, old-school SEO. I love the content saying SEO doesn't matter anymore. Well, no, it still matters quite a bit, because the web search tools are relying on the, what, 30 years of website catalog data that we have to find truthful answers.
Christopher S. Penn – 02:51
Because AI companies have realized people actually do want some level of accuracy when they ask AI a question. Weird, huh? It really is. So with these tools, it is almost like you said: you have to do both. You do have to be everywhere.
Christopher S. Penn – 03:07
You do have to have content on YouTube, you do have to post on LinkedIn, but you also do have to have a place where people can actually buy something. Because if you don’t, well.
Katie Robbert – 03:18
And it's interesting because if we say it in those terms, nothing's changed. AI has not changed anything about our content dissemination strategy, about how we are getting ourselves out there. If anything, it's just created a new channel for you to show up in. But all of the other channels still matter, and you still have to start at the beginning and create the content. People like to think that, well, I have the idea in my head, so AI must know about it. It doesn't work that way.
Katie Robbert – 03:52
You still have to take the time to create it and put it somewhere. You are not feeding it at this time directly into OpenAI’s model. You’re not logging into OpenAI saying, here’s all the information about me.
Katie Robbert – 04:10
So that when somebody asks, this is what you serve it up. No, it’s going to your website, it’s going to your blog post, it’s going to your social profiles, it’s going to wherever it is on the Internet that it chooses to pull information from. So your best bet is to keep doing what you’re doing in terms of your content marketing strategy, and AI is going to pick it up from there.
Christopher S. Penn – 04:33
Mm. A lot of folks are talking, understandably, about how agentic AI functions and how agentic buying will be a thing. And that is true. It will be at some point. It is not today. One thing you said, which I think has an asterisk around it, is, yes, our strategy at Trust Insights hasn’t really changed because we’ve been doing the “be everywhere” thing for a very long time.
Christopher S. Penn – 05:03
Since the inception of the company, we've had a podcast and a YouTube channel and a newsletter and this and that. I can see that for legacy companies that were still practicing 2010 SEO—just build it and they will come, build it and Google will send people your way—yeah, you do need an update.
Katie Robbert – 05:26
But AI isn't the reason. You can use AI as a reason, but it's not the reason your strategy needs to be updated. So I think it's worth at least acknowledging this whole conversation about SEO versus AEO versus GEO—whatever it is. At the end of the day, you're still doing, quote unquote, traditional SEO, and the models are just picking up whatever you're putting out there. So you can optimize it for AI, but you still have to optimize it for the humans.
Christopher S. Penn – 06:09
Yep. My favorite expression is from Ashley Liddell at Deviate, who's an SEO shop. She said SEO now just stands for Search Everywhere Optimization. Everything has a search. TikTok has a search. Pinterest has a search. You have to be everywhere, and then you have to optimize for it. I think that's the smartest way to think about this: where is your customer, and are you optimizing for it?
Christopher S. Penn – 06:44
One of the things that we do a lot—and this is from the heyday of our web analytics era, before the AI era—is to go into your Google Analytics, go into referring sources, referring URLs, and look at where you're getting traffic from, particularly places where you're not trying especially hard.
Christopher S. Penn – 07:00
So one place, for example, that I occasionally see in my own personal website's referral data—somewhere I have, to my knowledge, not done anything with for years—is Pinterest. Every now and again I get some rando from Pinterest coming over. So look at those referring URLs and ask, where else are we getting traffic from? If we're getting traffic from somewhere and we're not trying at all, maybe there's a there there, and it's worth trying something out on that channel.
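For anyone who wants to pull that referring-source list programmatically rather than clicking through the interface, here is a minimal sketch using the GA4 Data API Python client. The property ID, date range, and dimension choice are assumptions for illustration; adjust them to your own property:

```python
# Hypothetical sketch: list referring sources by sessions from a GA4 property.
# Assumes the google-analytics-data package is installed and Application Default
# Credentials are configured; GA4_PROPERTY_ID is a placeholder you must replace.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest

GA4_PROPERTY_ID = "123456789"  # placeholder

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property=f"properties/{GA4_PROPERTY_ID}",
    dimensions=[Dimension(name="sessionSource")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="90daysAgo", end_date="today")],
)
response = client.run_report(request)

# Print every referring source, including the ones you never deliberately optimized for.
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)
```

Sort that output and scan the tail for sources you are not actively working; those are the "maybe there's a there there" candidates Chris describes.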
Katie Robbert – 07:33
I think that's a really good pro tip, because it seems like what's been happening is companies have been so focused on how to show up in AI that they're forgetting all of these other things have not gone away—and the people who haven't forgotten about them are going to capitalize on that, take that digital footprint, and take that market share while you were over here worried about how to show up as the first agency in Boston in the OpenAI search. So I guess, to your original question—do we still need to think about websites and blogs and that kind of content dissemination? Absolutely. If we're really thinking about it, we need to consider it even more.
Katie Robbert – 08:30
We need to think about longer-form content. We need to think about content that is really impactful and what is it? The three E’s—to entertain, educate, and engage. Even more so now because if you are creating one or two sentence blurbs and putting that up on your website, that’s what these models are going to pick up and that’s it. So if you’re like, why is there not a more expansive explanation as to who I am? That’s because you didn’t put it out there.
Christopher S. Penn – 09:10
Exactly. We were just doing a project for a client and were analyzing content on their website, and I kid you not, one page had 12 words on it. No AI tool is going to be able to synthesize anything about you from that. It's just going to say, wow, this sucks, and not bother referring to you.
Katie Robbert – 09:37
Is it fair to say that AI is a bit of a distraction when it comes to a content marketing strategy? Maybe this is just me, but the way I would approach it is to take AI out of the conversation altogether for the time being when deciding what content we want to create and who we want to reach. Then I would insert AI back in when we're talking about what channels we want to appear on, because I'm really thinking about AI search—for lack of a better term—as just another channel.
Katie Robbert – 10:14
So if I think of my attribution modeling and if I think of what that looks like, I would expect maybe AI shows up as a first touch.
Katie Robbert – 10:31
Maybe somebody was doing some research an
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss why enterprise generative AI projects often fail to reach production.
You’ll learn why a high percentage of enterprise generative AI projects reportedly fail to make it out of pilot, uncovering the real reasons beyond just the technology. You’ll discover how crucial human factors like change management, user experience, and executive sponsorship are for successful AI implementation. You’ll explore the untapped potential of generative AI in back-office operations and process optimization, revealing how to bridge the critical implementation gap. You’ll also gain insights into the changing landscape for consultants and agencies, understanding how a strong AI strategy will secure your competitive advantage. Watch now to transform your approach to AI adoption and drive real business results!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-why-enterprise-generative-ai-projects-fail.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In Ear Insights, the big headline everyone's been talking about in the last week or two about generative AI is a study from MIT's NANDA project with the big headline: 95% of enterprise generative AI projects never make it out of pilot. A lot of the commentary clearly shows that no one has actually read the study, because the study is very good. It walks through what the researchers were looking at and acknowledges the substantial limitations of the study, one of which was that it had a six-month observation period.
Katie, you and I have both worked in enterprise organizations, and we have had and do have enterprise clients. Some people can't even buy a coffee machine in six months, much less roll out a generative AI project.
Christopher S. Penn – 00:49
But what I wanted to talk about today was some of the study’s findings because they directly relate to AI strategy. So if you are not an AI ready strategist, we do have a course for that.
Katie Robbert – 01:05
We do. As someone, I’ve been deep in the weeds of building this AI ready strategist course, which will be available on September 2. It’s actually up for pre-sale right now. You go to trust insights AI/AI strategy course. I just finished uploading everything this morning so hopefully I used all the correct edits and not the ones with the outtakes of me threatening to murder people if I couldn’t get the video done.
Christopher S. Penn – 01:38
The bonus, actually, the director’s edition.
Katie Robbert – 01:45
Oh yeah, not to get too off track, but there was a couple of times I was going through, I’m like, oops, don’t want to use that video. But back to the point, so obviously I saw the headline last week as well. I think the version that I saw was positioned as “95% of AI pilot projects fail.” Period. And so of course, as someone who’s working on trying to help people overcome that, I was curious. When I opened the article and started reading, I’m like, “Oh, well, this is misleading,” because, to be more specific, it’s not that people can’t figure out how to integrate AI into their organization, which is the problem that I help solve.
Katie Robbert – 02:34
It’s that people building their own in-house tools are having a hard time getting them into production versus choosing a tool off the shelf and building process around it. That’s a very different headline. And to your point, Chris, the software development life cycle really varies and depends on the product that you’re building. So in an enterprise-sized company, the likelihood of them doing something start to finish in six months when it involves software is probably zero.
Christopher S. Penn – 03:09
Exactly. When you dig into the study, particularly why pilots fail, I thought this was a super useful chart, because it turns out—huge surprise—the technology is mostly not the problem. Model quality is one concern.
The rest of these have nothing to do with technology. The rest of these are challenges: change management, lack of executive sponsorship, poor user experience, or unwillingness to adopt new tools. When we think about this chart, what first comes to mind is the 5 Ps, and 4 out of 5 of those are people.
Katie Robbert – 03:48
It’s true. One of the things that we built into the new AI strategy course is a 5P readiness assessment. Because your pilot, your proof of concept, your integration—whatever it is you’re doing—is going to fail if your people are not ready for it.
So you first need to assess whether or not people want to do this because that’s going to be the thing that keeps this from moving forward. One of the responses there was user experience. That’s still people.
If people don’t feel they can use the thing, they’re not going to use it. If it’s not immediately intuitive, they’re not going to use it. We make those snap judgments within milliseconds.
Katie Robbert – 04:39
We look at something and it's either, "Okay, this is interesting," or "Nope," and then we close it out. It looks like a technology problem, but that's a symptom. The root is people.
Christopher S. Penn – 04:52
Exactly. In the rest of the paper, in section 6, when it talks about where the wins were for companies that were successful, I thought this was interesting.
Lead qualification, speed, customer retention. Sure, those are front-office things, but the paper highlights that the back office is really where enterprises will win using generative AI. But no one's investing in it. People are putting all the investment up front in sales and marketing rather than in the back office. So, the back-office wins:
Business process optimization—eliminating $2 million to $10 million annually in customer service and document processing costs; document processing especially is an easy win. Agency spend reduction: a 30% decrease in external creative and content costs. And then risk checks for financial services by doing internal risk management.
Christopher S. Penn – 05:39
I thought this was super interesting, particularly for our many friends and colleagues who work at agencies, seeing that 30% decrease in agency spend is a big deal.
Katie Robbert – 05:51
It’s a huge deal. And this is, if we dig into this specific line item, this is where you’re going to get a lot of those people challenges because we’re saying 30% decrease in external creative and content costs. We’re talking about our designers and our writers, and those are the two roles that have felt the most pressure of generative AI in terms of, “Will it take my job?” Because generative AI can create images and it can write content. Can it do it well? That’s pretty subjective. But can it do it? The answer is yes.
Christopher S. Penn – 06:31
What I thought was interesting is that it says these gains came without material workforce reduction. Tools accelerated work but did not change team structures or budgets. Instead, ROI emerged from reduced external spend—limiting contracts, cutting agency fees, replacing expensive consultants with AI-powered internal capabilities. That makes logical sense if you are spending X dollars on something like an agency that writes blog content for you. Back at our old PR agency, we had one firm that was spending $50,000 a month on having freelancers write content that, when you and I reviewed it, was not that great. Machines, properly prompted, would have done a better job.
Katie Robbert – 07:14
What I find interesting is that it says these gains came without material workforce reduction, but that's not totally true, because you did have to cut your agency fees—which is people actually doing the work—and replace expensive consultants with AI-powered internal capabilities. So no, you didn't cut the workforce at your own company, but you cut it at someone else's.
Christopher S. Penn – 07:46
Exactly. So the red flag there, for anyone who works in an agency or consulting environment, is: how much risk are you at of AI taking your existing clients away from you? You might not lose a client to another agency—you might lose a client to an internal AI project if there isn't a value-add from human beings. If your agency is just cranking out templated press releases, yeah, you're at risk. So one of the first things I took away from this report is that every agency should be taking a very hard look at what value it provides and asking, "How easy is it for AI to replicate this?"
Christopher S. Penn – 08:35
And if you’re an agency and you’re like, “Oh, well, we can just have AI write our blog posts and hand it off to the client.” There’s nothing stopping the client from doing that either and just getting rid of you entirely.
Katie Robbert – 08:46
The other thing that sticks out to me is replacing expensive consultants with AI-powered internal capabilities. Technically, Chris, you and I are consultants, but we’re also the first ones to knock the consulting industry as a whole, because there’s a lot of smoke and mirrors in the consulting industry. There’s a lot of people who talk a big talk, have big ideas, but don’t actually do anything useful and productive. So I see this and I don’t immediately think, “Oh, we’re in trouble.” I think, “Oh, good, it’s going to clear out the rest of the noise in the industry and make way for the people who can actually do something.”
Christopher S. Penn – 09:28
And that is the heart and soul, I think, for us. Obviously, we have our own vested interest in ensuring that we continue to add value to our clients. But I think you’re absolutely right th
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI data privacy and how AI companies use your data, especially with free versions. You will learn how to approach terms of service agreements. You will understand the real risks to your privacy when inputting sensitive information. You will discover how AI models train on your data and what true data privacy solutions exist. Watch this episode to protect your information!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-ai-data-privacy-review.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In Ear Insights, let's address a question and give as close to a definitive answer as we can—one of the most common questions asked during our keynotes, our workshops, in our Slack group, on LinkedIn, everywhere: how do AI companies use your data, particularly if you're using the free version of a product? A lot of people say, "Be careful what you put into AI. It can learn from your data. You could be leaking confidential data. What's going on?" So, Katie, before I launch into a tirade that could go on for hours, let me ask you, as the less technical of the two of us: what do you think happens when AI companies use your data?
Katie Robbert – 00:43
Well, here's the bottom line for me: AI is like any other piece of software—you have to read the terms of use and sign their agreement. Great examples are all the different social media platforms. And we've talked about this before; I often get a chuckle—probably more sinister than it should be—out of people who copy and paste that post that says something along the lines of, "I do not give Facebook permission to use my data. I do not give Facebook permission to use my images."
And it goes on and on, and it says copy and paste so that Facebook can’t use your information. And bless their hearts, the fact that you’re on the platform means that you have agreed to let them do so.
Katie Robbert – 01:37
If not, then you need to have read the terms of use that explicitly say, "By signing up for this platform, you agree to let us use your information," and then list out what they're going to use and how they're going to use it, because legally they have to do that. When I was a product manager and we were converting our clinical trial outputs into commercial products, we had to spend a lot of time with the legal teams writing up those terms of use: "This is how we're going to use only marketing data. This is how we're going to use only your registration form data." So when I hear people getting nervous about "Is AI using my data?", my first thought is, "Yeah, no kidding."
Katie Robber
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle an issue of bias in AI, including identifying it, coming up with strategies to mitigate it, and proactively guarding against it. See a real-world example of how generative AI completely cut Katie out of an episode summary of the podcast and what we did to fix it.
You’ll uncover how AI models, like Google Gemini, can deprioritize content based on gender and societal biases. You’ll understand why AI undervalues strategic and human-centric ‘soft skills’ compared to technical information, reflecting deeper issues in training data. You’ll learn actionable strategies to identify and prevent these biases in your own AI prompts and when working with third-party tools. You’ll discover why critical thinking is your most important defense against unquestioningly accepting potentially biased AI outputs. Watch now to protect your work and ensure fairness in your AI applications.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-identify-and-mitigate-bias-in-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In-Ear Insights, let's tackle the issue of bias within large language models. In particular, it's showing up in ways that are not necessarily overt and not necessarily blatant, but are very problematic. So, to set the table, one of the things we do every week is take the Trust Insights newsletter—which you can get at Trust Insights AI newsletter—and turn it into a speaking script. Then Katie reads this script aloud. We get it transcribed, and it goes on our YouTube channel and things like that. Because, of course, one of the most important things you can do is publish a lot on YouTube and get your brand known by AI models.
Christopher S. Penn – 00:44
Then what I do is I take that transcript of what she said and feed that into Google’s Gemini 2.5 Pro model, and it creates the YouTube description and the tags. Here’s what happened recently with this.
So I gave it the transcript and I said, “Make me my stuff.” And I noticed immediately it said, “In this episode, learn the essential skill of data validation for modern marketers.” Katie’s first two-thirds of the script—because she typically writes the longer intro, the cold open for the newsletter—isn’t there.
And I said, “You missed half the show.” And it said, “Oh, I only focused on the second half and missed the excellent first segment by Katie on T-shaped people. Thank you for the correction.” And it spit out the correct version after that. And I said, “Why? Why did you miss that?”
Christopher S. Penn – 01:43
And it said,
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the pitfalls and best practices of “vibe coding” with generative AI.
You will discover why merely letting AI write code creates significant risks. You will learn essential strategies for defining robust requirements and implementing critical testing. You will understand how to integrate security measures and quality checks into your AI-driven projects. You will gain insights into the critical human expertise needed to build stable and secure applications with AI. Tune in to learn how to master responsible AI coding and avoid common mistakes!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast_everything_wrong_with_vibe_coding_and_how_to_fix_it.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In-Ear Insights, if you go on LinkedIn, everybody, including tons of non-coding folks, has jumped into vibe coding, the term coined by OpenAI co-founder Andrej Karpathy. A lot of people are doing some really cool stuff with it. However, a lot of people are also, as you can see on X in a variety of posts, finding out the hard way that if you don't know what to ask for—say, application security—bad things can happen. Katie, how are you doing with giving into the vibes?
Katie Robbert – 00:38
I’m not. I’ve talked about this on other episodes before. For those who don’t know, I have an extensive background in managing software development. I myself am not a software developer, but I have spent enough time building and managing those teams that I know what to look for and where things can go wrong. I’m still really skeptical of vibe coding.
We talked about this on a previous podcast, which if you want to find our podcast, it’s @TrustInsightsAI_TIpodcast, or you can watch it on YouTube. My concern, my criticism, my skepticism of vibe coding is if you don’t have the basic foundation of the SDLC, the software development lifecycle, then it’s very easy for you to not do vibe coding correctly.
Katie Robbert – 01:42
My understanding is that with vibe coding, you're supposed to let the machine do it. I think that's a complete misunderstanding of what's actually happening, because you still have to give the machine instructions and guardrails. Generative AI is creating the actual code—putting together the pieces, the commands that comprise a set of JSON or Python or whatever it is. You're saying, "I want to create an app that does this," and generative AI is like, "Cool, let's do it." You're going through the steps. You still need to know what
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to unlock hidden value and maximize martech ROI from your existing technology using AI-powered “manuals on demand.” You will discover how targeted AI research can reveal unused features in your current software, transforming your existing tools into powerful solutions. You will learn to generate specific, actionable instructions that eliminate the need to buy new, expensive technologies. You will gain insights into leveraging advanced AI agents to provide precise, reliable information for your unique business challenges. You will find out how this strategy helps your team overcome common excuses and achieve measurable results by optimizing your current tech stack. Tune in to revolutionize how you approach your technology investments.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-to-improve-martech-roi-with-generative-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week's In Ear Insights, let's get a little bombastic and say, Katie, we're gonna double everyone's non-existent ROI on AI with the most unused—underused—feature that I've literally not seen anyone using, and that is manuals on demand. A little while ago, in our Generative AI Use Cases for Marketers course and our Mastering Prompt Engineering for Marketers course and things like that, we were having a conversation internally with our team saying, hey, what else can we be doing to market these courses? One of the things that occurred to me as I was scrolling around the Thinkific system we use is that there are a lot of buttons in here. I don't know what most of them do, and I wonder if I'm missing something.
Christopher S. Penn – 00:53
So, I commissioned a Deep Research report in Gemini saying, hey, this is the version of Thinkific we're on, this is the plan we're on—go do research on the different ways that expert course creators market their courses with the features in Thinkific. It came back with a 28-page report that we then handed off to Kelsey on our team: hey, go read this report, because it contains step-by-step instructions for things we could be doing in the system to upsell and cross-sell our courses. As I was thinking about it, I went, wow, we should be doing this more often.
Christopher S. Penn – 01:28
Then a friend of mine just got a new phone, a Google Pixel, and is not skilled at using all of Google's bells and whistles, but she has a very specific use case: she wants to record concert videos with it. So I said, okay, let's create a manual for just the features of the Pixel phone that are best for concerts. Create a step-by-s
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss critical questions about integrating AI into marketing. You will learn how to prepare your data for AI to avoid costly errors. You will discover strategies to communicate the strategic importance of AI to your executive team. You will understand which AI tools are best for specific data analysis tasks. You will gain insights into managing ethical considerations and resource limitations when adopting AI. Watch now to future-proof your marketing approach!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-strategy-mailbag.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, boy, have we got a whole bunch of mail. We’ve obviously been on the road a lot doing events. A lot. Katie, you did the AI for B2B summit with the Marketing AI Institute not too long ago, and we have piles of questions—there’s never enough time.
Let’s tackle this first one from Anthony, which is an interesting question. It’s a long one.
He said, about Katie's presentation on making sure marketing data is ready to work in AI: "We know AI sometimes gives confident but incorrect results, especially with large data sets." He goes on with a long example about the Oscars. How can marketers make sure their data processes catch small but important AI-generated errors like that? And how mistake-proof is the 6C framework that you presented in the talk?
Katie Robbert – 00:48
The 6C framework is only as error-proof as your preparation; that’s maybe the best way to put it. Unsurprisingly, I’m going to pull up the Five Ps to start with: Purpose, People, Process, Platform, Performance.
This is where we suggest people start getting ready before using the 6 Cs, because first you want to understand what it is you’re trying to do. The crappy answer is that nothing is ever fully error-proof, but these are going to get you pretty close.
When we talk about marketing data, we always talk about it as directional versus exact because there are things out of your control in terms of how it’s collected, or what people think or their perceptions of what the responses should be, whatever the situation is.
Katie Robbert – 01:49
It’s never going to be 100% perfect, but it’s going to be directional and give you the guidance you need to answer the question being asked.
Which brings us back to the five Ps: What is the question being asked? Why are we doing this? Who’s involved?
This is where you put down who are the people contributing the data, but also who are the people owning the data, cleaning the data, maintaining the data, accessing
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the evolving perception and powerful benefits of using generative AI in your content creation. How should we think about AI in content marketing?
You’ll discover why embracing generative AI is not cheating, but a strategic way to elevate your content. You’ll learn how these advanced tools can help you overcome creative blocks and accelerate your production timeline. You’ll understand how to leverage AI as a powerful editor and critical thinker, refining your work and identifying crucial missing elements. You’ll gain actionable strategies to combine your unique expertise with AI, ensuring your content remains authentic and delivers maximum value. Tune in to unlock AI’s true potential for your content strategy.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-artisanal-automation-authenticity-ai.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, it is the battle between artisanal, handcrafted, organic content and machine-made. The Etsys versus the Amazons. We’re talking specifically about the use of AI to make stuff. Katie, you had some thoughts and some things you’re wrestling with about this topic, so why don’t you set the table, if you will.
Katie Robbert – 00:22
It’s interesting because we always talk about being people-first and AI-forward and using these tools. I feel like what’s happened is now there’s a bit of a stigma around something that’s AI-generated: if you used AI, you’re cheating, or you’re shortcutting, or it’s no longer an original thought. I feel like in some circumstances that’s true. However, there are other circumstances, other situations, where using something like generative AI can perhaps get you past a roadblock.
For example, if you haven’t downloaded it yet, please go ahead and download our free AI strategy kit, the AI-Ready Marketing Strategy Kit, which you can find at TrustInsights.AI/kit. I took just about everything I know about running Trust Insights and I used generative AI to help me compile all of that information.
Katie Robbert – 01:34
Then I, the human, went through, refined it, edited, made sure it was accurate, and I put it all into this kit. It has frameworks, examples, stories—everything you could use to be successful. Now I’m using generative AI to help me build it out as a course. I had a moment this morning where I was like, I really shouldn’t be using generative AI. I should be doing this myself because now it’s disingenuous, it’s not authentic, it’s not me because the tool is creating it faster. Then I stopped and I actually read through what was bei
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to break free from the AI sophomore slump. You’ll learn why many companies stall after early AI wins. You’ll discover practical ways to evolve your AI use from simple experimentation to robust solutions. You’ll understand how to apply strategic frameworks to build integrated AI systems. You’ll gain insights on measuring your AI efforts and staying ahead in the evolving AI landscape. Watch now to make your next AI initiative a success!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-sophomore-slump-part-2.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, part two of our Sophomore Slump series. Boy, that’s a mouthful.
Katie Robbert – 00:07
We love alliteration.
Christopher S. Penn – 00:09
Yahoo. Last week we talked about what the sophomore slump is, what it looks like, and some of the reasons for it—why people are not getting value out of AI and the challenges. This week, Katie: the sophomore slump is something you hear about a lot in the music industry. Someone has a hit album, and then their sophomore album doesn’t land, so they have to figure out what’s next. Companies trying to get value out of AI have hit this same sophomore slump: they had early, easy wins, then the easy wins evaporated, and now they see all the stuff on LinkedIn and wherever else, like, “Oh, look, I made a million dollars in 28 minutes with generative AI,” and they’re asking, “What are we doing wrong?”
Christopher S. Penn – 00:54
How do you advise somebody on ways to think about getting out of their sophomore slump? What’s their next big hit?
Katie Robbert – 01:03
So the first thing I do is say, let’s take a step back and see what happened. A lot of times when someone hits that sophomore slump, that second round of “I was really successful the first time, why can’t I repeat it?”, it’s because they didn’t evolve. They’re thinking, “I’m going to do exactly what I did the first time.” But your audience is saying, “I saw that already. I want something new, I want something different.” Not the exact same thing you gave me a year ago. That’s not what I’m interested in paying for and paying attention to.
Katie Robbert – 01:36
So you start to lose that authority, that trust. It’s why the term one-hit wonder exists—you have a one-hit wonder, you have a sophomore slump. You have all of these terms, all to say that in order for people to stay interested, you have to stay interesting. And by that I mean you need to evolve, you need to change. But not j
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the generative AI sophomore slump.
You will discover why so many businesses are stuck at the same level of AI adoption they were two years ago. You will learn how anchoring to initial perceptions and a lack of awareness about current AI capabilities limits your organization’s progress. You will understand the critical difference between basic AI exploration and scaling AI solutions for significant business outcomes. You will gain insights into how to articulate AI’s true value to stakeholders, focusing on real world benefits like speed, efficiency, and revenue. Tune in to see why your approach to AI may need an urgent update!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-generative-ai-sophomore-slump-part-1.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, let’s talk about the sophomore slump. Katie, you were talking about the sophomore slump in regards to generative AI. I figured we could make this into a two-part series. So first, what is the sophomore slump?
Katie Robbert – 00:15
So I’m calling it the sophomore slump.
Basically, what I’m seeing is a trend of a lot of companies talking about, “We tried. We started implementing AI two years ago—generative AI to be specific—and we’re stalled out.”
We are at the same place we were two years ago. We’ve optimized some things. We’re using it to create content, maybe create some images, and that’s about it.
Everyone fired everyone. There’s no one here. It’s like a ghost town. The machines are just whirring away in the background.
And I’m calling it the sophomore slump because I’m seeing this pattern across companies, and they all seem to be saying the same thing—two years ago.
Katie Robbert – 01:03
And two years ago is when generative AI really hit the mainstream market in terms of its availability to the masses, to all of us, versus someone like you, Chris, who had been using it through IBM and other machine learning systems and homegrown systems.
So I bring it up because it’s interesting, and I guess there’s a lot to unpack here.
AI is supposedly this magic tool that’s gonna solve your problems and do all the things and make you dinner and clean your room.
I feel like there’s a lot of things wrong or a lot of things that are just not going right. A lot of companies are hitting this two-year mark, and they’re like, “What now? What happened? Am I better off? Not really.”
Katie Robbert – 02:00
I’m just paying for more stuff. So Chris, are you seeing this as well? Is
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the Apple AI paper and critical lessons for effective prompting, plus a deep dive into reasoning models.
You’ll learn what reasoning models are and why they sometimes struggle with complex tasks, especially when dealing with contradictory information. You’ll discover crucial insights about AI’s “stateless” nature, which means every prompt starts fresh and can lead to models getting confused. You’ll gain practical strategies for effective prompting, like starting new chats for different tasks and removing irrelevant information to improve AI output. You’ll understand why treating AI like a focused, smart intern will help you get the best results from your generative AI tools. Tune in to learn how to master your AI interactions!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-how-generative-ai-reasoning-models-work.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, there is so much in the AI world to talk about. One of the things that came out recently that I think is worth discussing, because we can talk about the basics of good prompting as part of it, Katie, is a paper from Apple (whose own AI efforts have stalled a bit) showing that reasoning models, when given very complex puzzles—logic-based puzzles or spatial-based puzzles, like moving blocks from stack to stack and getting them in the correct order—hit a wall after a while and then just collapse and can’t do anything. So, the interpretation of the paper is that there are limits to what reasoning models can do and that they can kind of confuse themselves. On LinkedIn and social media and stuff,
Christopher S. Penn – 00:52
Of course, people have taken this to the illogical extreme, saying artificial intelligence is stupid, nobody should use it, or artificial general intelligence will never happen. None of that is within the paper. Apple was looking at a very specific, narrow band of reasoning, called deductive reasoning. So what I thought we’d talk about today is the paper itself to a degree—not a ton about it—and then what lessons we can learn from it that will make our own AI practices better. So to start off, when we talk about reasoning, Katie, particularly you as our human expert, what does reasoning mean to the human?
Katie Robbert – 01:35
When I think about it, if you say, “Can you give me a reasonable answer?” or “What is your reason?”, I’m thinking about the different ways that the word is casually thrown around for humans. The way that I think about it is, if you’re looking for a reasonable answer to something, then that means that you are putting the expectat
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss their new AI-Ready Marketing Strategy Kit. You’ll understand how to assess your organization’s preparedness for artificial intelligence. You’ll learn to measure the return on your AI initiatives, uncovering both efficiency and growth opportunities. You’ll gain clarity on improving data quality and optimizing your AI processes for success. You’ll build a clear roadmap for integrating AI and fostering innovation across your business. Tune in to transform your approach to AI!
Get your copy of the kit here.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-trust-insights-ai-readiness-kit.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
[podcastsponsor]
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In Ear Insights, let’s talk about AI readiness. We launched our new AI Readiness Kit on Tuesday. So, Katie, just to start off, for the people who didn’t read all the emails, what’s in the thing, and why should people look into it?
Katie Robbert – 00:16
So I’m really proud of this new piece that we put together because we talk a lot about the different frameworks. We talk about the Five Ps, we talk about the Six Cs, we talk about STEM, we talk about how you measure ROI, and we talk about them all in different contexts. So we took the opportunity to put them all together into one place.
Katie Robbert – 00:44
In a hopefully coherent flow. To say, if you’re trying to get yourself together, if you’re trying to integrate AI, or if you already have and you’re struggling to really make it stick, use this AI Ready Marketing Strategy Kit. You can get that at TrustInsights.AI/kit. It’s really the best of the best. It’s all of our frameworks. But it’s not just, “Here’s a framework, good luck.” There’s context around how to use it.
Katie Robbert – 01:20
There’s checklists, there’s calculations, there’s explanations, there’s expectations—it’s basically the best alternative to having me and Chris sitting next to you, when we can’t sit next to you, to say, “You should think about doing this. You should probably think about this. Here’s how you would approach this.” So it’s sort of an extension of me and Chris sitting with you to walk you through these things.
Christopher S. Penn – 01:52
One of the questions that people have the most, especially as they start doing AI pilots and stuff, is what’s the RO