In-Ear Insights: What is AI Decisioning?
Description
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI decisioning, the latest buzzword confusing marketers.
You will learn the true meaning of AI decisioning and the crucial difference between classical AI and generative AI for making sound business choices. You’ll discover when AI is an invaluable asset for decision support and when relying on it fully can lead to costly mistakes. You’ll gain practical strategies, including the 5P framework and key questions, to confidently evaluate AI decisioning software and vendors. You will also consider whether building your own AI solution could be a more effective path for your organization. Watch now to make smarter, data-driven decisions about adopting AI in your business!
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-what-is-ai-decisioning.mp3
- Need help with your company’s data and analytics? Let us know!
- Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
**Christopher S. Penn – 00:00**
In this week’s In-Ear Insights, let’s talk about a topic that is both old and new. This is decision optimization or decision planning, or the latest buzzword term AI decisioning. Katie, you are the one who brought this topic to the table. What the heck is this? Is this just more expensive consulting speak? What’s going on here?
**Katie Robbert – 00:23**
Well, to set the context, I’m actually doing a panel for the MarTech organization on Wednesday, September 17, about how AI decisioning will change our marketing. There are a lot of questions we’ll be going over, but the first question that all of the panelists will be asked is, what is AI decisioning? I’ll be honest, Chris, it was not a term I had heard prior to being asked to do this panel. But I am the worst at keeping up with trends and buzzwords.
When I did a little bit of research, I just kind of rolled my eyes and I was like, oh, so basically it’s the act of using AI to optimize the way in which decisions are made. Sort of. It’s exactly what it sounds like.
**Katie Robbert – 01:12**
But it’s also, I think, to your point, a consultant word to make things sound more expensive than they should be, because people love to do that. So at a high level, it’s sticking a bunch of automated processes together to help support the act of making business decisions. I’m sure there are companies that are fully comfortable taking your data and letting their software take over all of your decisions without human intervention, which I could rant about for a very long time.
When I asked you this question last week, Chris, what is AI decisioning? You gave me a few different definitions. So why don’t you run through your understanding of AI decisioning?
**Christopher S. Penn – 02:07**
The big one comes from our friends at IBM. IBM used to have this platform called IBM Decision Optimization. I don’t actually know if it still exists or not, but it predated generative AI by about 10 years. IBM’s take on it, because they were using classical AI, was: decision optimization is the use of AI to improve or validate decisions.
The way they would do this was you take a bunch of quantitative data, put it into a system, and it would run a lot of binary tree classification: if this, then that; if this, then that; to try and come out with the best decision to make, the one that correlates to the outcome you care about. So that was classic AI decisioning, roughly 2010 to 2020.
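That "if this, then that" binary-tree flow can be sketched in a few lines of code. This is a minimal, illustrative example only; the metric names and thresholds are made up, and real decision optimization software learns these splits from historical outcome data rather than hard-coding them.

```python
# A hand-built binary tree of if/then splits over quantitative inputs,
# mimicking the classical decision-optimization flow described above.
def recommend_budget_shift(email_ctr: float, ad_roas: float) -> str:
    """Walk a tiny binary tree to pick a (hypothetical) budget action."""
    if ad_roas >= 3.0:                  # split 1: are paid ads paying off?
        if email_ctr < 0.02:            # split 2: is email underperforming?
            return "shift budget from email to paid ads"
        return "keep current budget split"
    if email_ctr >= 0.02:               # ads weak, email strong
        return "shift budget from paid ads to email"
    return "hold budget and gather more data"

print(recommend_budget_shift(email_ctr=0.01, ad_roas=4.2))
```

Each branch is deterministic and auditable, which is exactly what distinguishes this classical approach from asking a language model to "decide."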
**Christopher S. Penn – 03:06**
Now everybody and their cousin is throwing this stuff at tools like ChatGPT and stuff like that. Boy, do I have some opinions about that—about why that’s not necessarily a great idea.
**Katie Robbert – 03:19**
What I like about the description you gave, the logical flow of “if this, then that,” is that it matches the way I understand AI decisioning to work. It should be almost like a series of choose-your-own-adventure points: if this happens, go here; if that happens, go here. That’s the way I think about AI-assisted. I’m going to keep using the word assisted because I don’t think it should ever take over human decisioning. But that’s one person’s opinion. I like that very binary “if this, then that” flow.
So that’s the way you and I agree it should be used. Let’s talk about the way it’s actually being used and the pros and cons of what the reality is today of AI decisioning.
**Christopher S. Penn – 04:12**
The way it’s being used, or the way people want to use it, is to fully outsource the decision-making: say, “AI, go and do this stuff for me and tell me when it’s done.” There are cases where that’s appropriate. We have an entire framework called the TRIPS framework, which is part of the new Trust Insights AI strategy course. Katie teaches the TRIPS framework: Time, Repetitiveness, Importance, Pain, and Sufficient Data.
What’s weird about TRIPS that throws people off is that the “I” for importance means the less important a task is, the better a fit it is for AI—which fits perfectly into AI decisioning. Do you want to hand off completely a really important decision to AI? No. Do you want to hand off unimportant decisions to AI? Yes. The consequences for getting it wrong are so much lower.
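The inverted-importance idea can be sketched as a simple scoring function. The 1–10 scale and the plain sum below are illustrative assumptions, not the official Trust Insights scoring method; the point is only that importance is flipped before it contributes to the total.

```python
# Hedged sketch of scoring a task against the TRIPS criteria:
# Time, Repetitiveness, Importance, Pain, Sufficient data.
def trips_score(time: int, repetitiveness: int, importance: int,
                pain: int, sufficient_data: int) -> int:
    """Each input is 1-10. Higher total = better fit for handing to AI."""
    for v in (time, repetitiveness, importance, pain, sufficient_data):
        if not 1 <= v <= 10:
            raise ValueError("scores must be 1-10")
    inverted_importance = 11 - importance  # unimportant tasks score HIGH
    return time + repetitiveness + inverted_importance + pain + sufficient_data

# Picking where to order lunch: not time-consuming, very repetitive,
# unimportant, mildly painful, plenty of data on past orders.
print(trips_score(time=3, repetitiveness=9, importance=1,
                  pain=5, sufficient_data=8))
```

Under this toy scoring, an unimportant decision like lunch scores high as an AI candidate, while a flagship go-to-market decision (importance near 10) would be dragged down by the inversion.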
**Christopher S. Penn – 05:05**
Imagine you had a GPT you built that said, “Where do we want to order lunch from today?” It has 10 choices, runs, and spits out an answer. If it gives you a wrong answer out of 10 places you generally like, you’re not going to be hugely upset. That is a great example of AI decisioning, where you’re just saying, “I don’t care, just make a decision; we all know the places are all good.” But would you say, “Let’s hand off our go-to-market strategy for our flagship product line”? God, I hope not.
**Katie Robbert – 05:46**
It’s funny you say that because this morning I was using Gemini to create a go-to-market strategy for our flagship product line. However, with the huge caveat that I was not using generative AI to make decisions—I was using it to organize the existing data we already have.
Our sales playbook, our ICPs, all the different products—giving generative AI the context that we’re a small sales and marketing team. Every tactic we take needs to be really thoughtful, strategic, and impactful. We can’t do everything. So I was using it in that sense, but I wasn’t saying, “Okay, now you go ahead and execute a non-human-reviewed go-to-market strategy, and I’m going to measure you on the success of it.” That is absolutely not how I was using it.
**Katie Robbert – 06:46**
It was more of, I think, a use case you would put under summarization first and then synthesis, but never decisioning.
**Christopher S. Penn – 07:00**
Yeah, and where this new crop of AI decisioning is going to run into trouble is the very nature of large language models—LLMs. They are language tools, they’re really good at language. So a lot of the qualitative stuff around decisions—like how something makes you feel or how words are used—yes, that is 100% where you should be using AI.
However, most decision optimization software, like the IBM Decision Optimization product, requires quantitative data. It requires an outcome to do regression analysis against. Behind the scenes, a lot of these tools take categorical data, like topics on your blog, and reduce it to numbers so they can do binary classification. They figure out “if this, then that; if this, then that” and come up with the decision. Language models can’t do that, because that’s math.
So if you are just blanket handing off decisioning to a tool like ChatGPT, it will imitate doing the math, but it will not do the math. So you will end up with decisions that are basically hallucinations.
**Katie Robbert – 08:15**
For those software companies promoting their tools as AI decision tools or AI decisioning tools, whatever the buzz term is, what is the caution for the buyer, for the end user? What are the things we should be asking and looking for? As Chris mentioned, we have the new AI strategy course. One of the tools in the AI strategy course, or just the toolkit itself, if you want that at a lower cost, is the AI Vendor cheat sheet. It contains all the questions you should be asking AI vendors.
But Chris, if someone doesn’t know where to start and their CMO or COO is saying, “Hey, this tool has AI decisioning in it, look how much we can hand over,” what are the things we should be looking for, and what should we never do?
**Christopher S. Penn – 09:16**
First things I would ask are: “Show me your system map. Show me your system architecture map.” It should be high level enough that they don’t worry about giving away their proprietary secret sauce. But if the system map is just a big black box on