Smart Products
15 Episodes
I'm excited to share this conversation with Ankit Raheja. Ankit is a lead product manager focused on AI, data, and APIs at CDK Global. During this conversation, Ankit discussed the AI product development lifecycle, metrics for AI products, and how product managers could start their AI journey with small steps. Links: Ankit on LinkedIn; CDK AI Survey: What Automotive Leaders Think About Artificial Intelligence; DeepLearning.AI: Start or Advance Your Career in AI. Transcript: [00:00:00] Himakara Pieris: Welcome to the Smart Products show. My guest today is Ankit Raheja. To start things off, could you tell us a bit about your current role and how you're using AI as a product manager? [00:00:11] Ankit Raheja: Absolutely. Currently I am a lead product manager at CDK Global. [00:00:20] Ankit Raheja: CDK Global is the largest car dealership software company in the United States; we power more than 15,000 dealership locations. So that's why it is one of the biggest forces you probably haven't heard about, because you do not interact with it directly, but I'll tell you, 15,000-plus dealerships are using it. [00:00:53] Ankit Raheja: And we are embedded across the whole user journey, starting from [00:01:00] the front office. Front office is when you go to a dealership for purchasing a car and getting all the different warranties and insurance options. Second is fixed operations. Fixed operations is the car service work that you get done when you go to a dealership. [00:01:21] Ankit Raheja: Then there is the back office. You can imagine dealerships need to take care of inventory of the parts and the vehicles, and there are many other things. And last but not least, these dealerships need massive infrastructure to run. So we are embedded across all these four parts of the user journey. On the next question that you mentioned, about where exactly we have used AI: I have been in the AI space since 2013. [00:01:55] Ankit Raheja: It was a combination of data and AI. In the past, we [00:02:00] have used AI across companies such as Cisco, Visa, and State Compensation Insurance Fund. We have worked, number one, on customer support use cases; then on market segmentation use cases at Visa; and finally on healthcare fraud detection use cases at State Compensation Insurance Fund. Currently, where I'm using AI at CDK, we are leveraging it across multiple ecosystems. [00:02:33] Ankit Raheja: Number one is we are trying to match potential customers with potential cars, so it's like a propensity-to-buy model. Second is predictive service. Basically, when you go to a car dealership, sometimes you do not know what additional services you need. [00:02:56] Ankit Raheja: And, you know, you are a busy professional, [00:03:00] you have so many other things to worry about. So we want these car dealership employees to be able to recommend additional services that you may not have even thought about. So that's the second use case. Last but not least, we are also exploring benchmarking use cases for dealers. For example, you have one dealership group and you don't know how you are doing. [00:03:24] Ankit Raheja: Are you doing well? Do you need to improve on a few of the things? So that's where the benchmarking comes in. So these are the current use cases. And as you know, chatbots are becoming more and more prevalent now.
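To make the propensity-to-buy idea concrete, here is a minimal sketch of the shape such a model often takes. The feature names and figures are hypothetical placeholders, not CDK's actual schema or implementation.

```python
# Illustrative propensity-to-buy sketch: score customers by how likely they are
# to purchase soon, so dealership staff know who to reach out to.
# All features and data below are made up for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "days_since_last_service": [30, 400, 12, 220, 90, 365, 45, 180],
    "website_visits_30d":      [0, 5, 1, 8, 2, 6, 0, 3],
    "current_vehicle_age_yrs": [1, 7, 2, 6, 3, 8, 1, 5],
    "bought_within_90d":       [0, 1, 0, 1, 0, 1, 0, 0],   # historical label
})

X = data.drop(columns="bought_within_90d")
y = data["bought_within_90d"]

# In practice you would hold out data and validate; this only shows the shape.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

print(data.assign(propensity=scores).sort_values("propensity", ascending=False))
```

The same pattern, a labeled outcome plus behavioral features, underlies the predictive-service and benchmarking use cases as well.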
So, yeah, but right now just want to focus on the current use cases and the use cases that I've worked on previously.[00:03:47] Himakara Pieris: Great. And before this you had an interesting use case with chatbots at Cisco as [00:03:54] Ankit Raheja: well. Absolutely. Yeah. I can definitely talk to you a little bit about the [00:04:00] chatbot at Cisco. The, let me tell you some... context around the issue. Basically Cisco has lot of switching products, router products, basically all B2B products.[00:04:17] Ankit Raheja: And some of them as you can imagine will become defective and, and you want to return those products. However, Cisco identified that a lot of these products do not need to be returned. Some of them are avoidable returns. So technically we were trying to solve an avoidable returns problems. This existing way to solve that was that these customers would reach out to the technical assistance center engineers.[00:04:55] Ankit Raheja: who are technical customer service engineers, if [00:05:00] in, in more layman terms, and they troubleshoot these problems from them and then decide whether this product should be returned or not. We realize. AI could be a really big help to these technical assistant center engineers because you can basically have a lot of skill.[00:05:25] Ankit Raheja: Number two it's like an intern. AI is like an intern, which is trying to learn new, new things. So as it learns more and more, it will get, become better and it will become a lot more helpful for them. And sometimes these technical assistance engineers are not available, that's where this chatbot can come in.[00:05:43] Ankit Raheja: So, multiple use cases, why we thought AI made sense, and, and we really had great impact by leveraging AI for this use cases. [00:05:56] Himakara Pieris: So Cisco and CDK, these are very large companies [00:06:00] with a ton of use cases. [00:06:02] Ankit Raheja: How did you decide [00:06:04] Himakara Pieris: the use cases and when to use AI, when to not use AI and what kind of framework do you use for that?[00:06:12] Ankit Raheja: Absolutely. I'll have a spicy take on this. The first rule of AI is not to use AI in the first place when you're in the discovery stage. You should be able to understand how. A human can do this work better for example, I'll give you two examples, autonomous driving car, what could happen right now, instead of autonomous driving car, what's happening, you're the one who are driving, so you're the one looking around, hey, here's the signal, here's this pedestrian, here's this road, so you should be able to do that first.[00:06:51] Ankit Raheja: Another thing for chatbot, right? So we had this technical assistance engineers who were doing it. So, so this is a very, [00:07:00] the framework is pretty simple and universal. AI is only one of the ways that may solve this customer's problem while ensuring its need to drive business value. We have seen so many times right now, as you've seen with the chat GPT hype, more and more products are coming out, but the time will tell how many of them will really be retained.[00:07:25] Ankit Raheja: Right now there's big hype, but eventually retention is the key. So to think about this, I have a very simple framework and this is overused a lot, but there's a bit nuance to it. The number one is user value. Are you providing real value to customers? Why should these customers hire your solution? Are you helping them with their jobs to be done?[00:07:52] Ankit Raheja: So that's the first thing. 
That's the first constraint that you'll look at. Number two, which is very important. You may not even get [00:08:00] funding if you don't have a good answer for it. That's your business goals. Just because your c e O said, Hey, I see the chatbot chat gpt is doing really well. You need to really start from the vision.[00:08:12] Ankit Raheja: Go to the strategy, goes to the goals and come with your KPIs. And what are your KPIs? Do you want to acquire more users? Number two, you want to retain more users. Number three, you need to monetize these user more by upscale or cross sell. Or last but not the least you need to drive more word of mouth, net promoter score.[00:08:33] Ankit Raheja: So that's the second thing, the business goals. The last constraint that we need to think about is the The technical component of it, like how comfortable are you? Okay. Using a predictive solution versus a deterministic solution. Sometimes, if you can imagine [00:09:00] there like you can make a machine go through and read one medical chart for cancer.[00:09:10] Ankit Raheja: Would you give all the... Onus on the machine to make a call. I would not say that. So you still need to have a human in loop. However, in some cases like recommendation engine for Amazon, there are so many different permutation combination that can, can, can come with the long tail option. So that's where the the AI makes sense.[00:09:33] Ankit Raheja: So it all depends from case to case basis. If you want me to go more into detail, I can definitely go more into detail about the AI use cases.[00:09:41] Himakara Pieris: generally speaking, start with with a focus on customer value and then map it to your business goal and strategy and have clear KPIs. And make sure that your proposed solution could deliver on those KPIs. Absolutely. [00:09:59] Ankit Raheja: So, [00:09:59] Himakara Pieris: how [00:10:00] would you compare, let's say, more of a deterministic solution? So, if you have a, I'm sure at all these companies, you have a very large and long backlog of things that you could do.[00:10:10] Himakara Pieris: Does this mean that AI solutions are possibly going to sink to the bottom of the backlog? because they are relatively more difficult to quantify or the, you know, the, the time to value might be not as quick as more of a deterministic solution. [00:10:28] Ankit Raheja: Sure. So it all depends on the use cases as. We have made this possible in this world of building and launching something fast and getting feedback.[00:10:42] Ankit Raheja: You can always build a minimum viable product. What I call it is minimum viable algorithm. You can always build a simple model. For example, if you think about LLM use cases. [00:11:00] You can always, there are still, there are so many other machine learning libraries which are already available that you can use to prove out the value quickly.[00:11:10] Ankit Raheja: And then you can get a buy in from your leadership.
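Ankit's "minimum viable algorithm" point, proving value quickly with a simple model and existing libraries before asking for a bigger investment, might look like the sketch below. The data is synthetic; the pattern is what matters: if a basic model clearly beats a naive baseline, that is often enough to earn leadership buy-in.

```python
# Minimum viable algorithm: compare a simple model against a trivial baseline
# on (synthetic) data before proposing anything heavier.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 4))                                   # four made-up signals
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

baseline = DummyClassifier(strategy="most_frequent")          # "always predict the majority"
simple_model = LogisticRegression()

print("baseline accuracy:    ", cross_val_score(baseline, X, y, cv=5).mean())
print("simple model accuracy:", cross_val_score(simple_model, X, y, cv=5).mean())
```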
I’m excited to share this conversation with Faizaan Charania. Faizaan is an AI product lead at LinkedIn. During this conversation, Faizzan discussed the potential of Generative AI and its applications, the importance of keeping GenAI solutions simple, and how to think about trust, transparency, and managing costs as a product manager working in Gen AI.LinksFaizaan On LinkedIn Transcript[00:00:00] Faizaan Charania: I love the analogy with cloud because cloud can make experimentation so easy. And you're just like trying to set up something new. Test it out. See, see if it works. Will I get product market fit? What are my users thinking about this feature? All of these things are also possible with GenAI. So for any PM who's thinking about GenAI, my recommendation would be test it out. [00:00:24] Hima: I'm, Himakara Pieris. You're listening to smart products. A show where we, recognize, celebrate and learn from industry leaders who are solving real world problems. Using AI.[00:00:34] [00:00:35] Himakara Pieris: My guest today is Faizan Charania. Faizan, welcome to the show. [00:00:40] Faizaan Charania: Thank you so much for inviting me, Hima. [00:00:43] Himakara Pieris: To start things off, could you tell us a bit about your background [00:00:46] Faizaan Charania: Yes. I am a product manager at LinkedIn. My main focus is around machine learning and artificial intelligence.[00:00:53] Faizaan Charania: And obviously these days I've been looking into gen AI as well. I've been in the machine [00:01:00] learning field for around Eight, over eight years now started on the research side, worked with startups, uh, was a machine learning engineer for a bit. And then I switched to product management. [00:01:12] Himakara Pieris: There is a lot of attention on generative AI at the moment. Could you tell me a bit about the way you see it? What is generative AI and how it's different from all the various other types of AI that we have seen so far?[00:01:24] Faizaan Charania: Yeah, definitely. There is so much hype around gen AI. Uh, one thing, uh, one code that I've heard multiple times is. Uh, this is like the iPhone moment or this is the desktop to mobile moment of technology again. To answer your second question around, uh, how is it different from all other kinds of AI?[00:01:46] Faizaan Charania: Because it's like so many things that we can qualify as AI, right? So a simple explanation that I try to go with is. Differentiate these two types of AIs, analytical AI and generative AI. [00:02:00] So analytical AI is where you, where you have some specific features or data points or like historical input, and you're trying to make one single decision based on that.[00:02:12] Faizaan Charania: So the decision can be, Hey, is this email spam or not spam, spam classifiers? It can be a ranking decision. So say you log into Facebook or Instagram or like any of these applications and what post should appear first? What should be first? What should be second? What should be third? And this is based on the text in the post, the images.[00:02:35] Faizaan Charania: It's based on what you like, what you don't like. So this is like a ranking problem. So ranking, decision making, all of these are a part of analytical AI and generative AI. As the name says, it's about taking, uh, generating new content. 
So if it's about post completion, and everyone has heard about ChatGPT, I'll just [00:03:00] use that as one of the examples. [00:03:02] Faizaan Charania: Like, hey, I ask you a question and you give me a response in natural language format. So natural language generation is generative AI. Generating new images, images that did not exist before, is generative AI. So even for images, if you were to classify an image, hey, is this safe for children or not safe for children, that's analytical; but if you want to generate a cartoon image, that's generative. [00:03:30] Himakara Pieris: From an overall landscape standpoint, we have a ton of startups that are out there, and then there are a couple of, in a way, key gatekeepers, Microsoft slash OpenAI, I would say one of them, and then there is an emerging rivalry, or refreshed rivalry, with Google on this front. [00:03:53] Himakara Pieris: And then there are also chip makers. How do you sort of map out this landscape? [00:04:00] Faizaan Charania: Yeah, so when you're thinking about the landscape, yes, Google and Microsoft are big players, but then there are so many more important players over there. So if you're just thinking about the flow of generative AI, at the base layer you will have the infrastructure companies, these chip companies, and they are the ones who actually make gen AI possible. [00:04:25] Faizaan Charania: So that's one thing. Then at the top level, you will have applications that are using generative AI. And in the middle, you would find all of these other players who are building new features and new utilities to even make gen AI efficient. So to give you one example, for prompt engineering, there are new companies that are just focused on prompt engineering, making prompt engineering easy. [00:04:52] Faizaan Charania: Versioning of it, iteration, structures of it. There's a prompt engineering [00:05:00] marketplace now, so people can sell prompts and people can buy prompts. So yes, Microsoft and Google are the popular ones because they're big players, so there's more media limelight around them, but I think they're just some of the initial pioneers, and there are so many players and so much scope for everyone to be a part of this. [00:05:24] Himakara Pieris: So I think what we're talking about is there is the foundational layer, right? Which the Microsofts and Googles of the world are going to provide, similarly to how they provide cloud computing today. And there's going to be a huge ecosystem that is getting built on top of it. And prompt engineering sounds like one big part of it, prompt engineering and everything that goes around prompt engineering. Are there any other ecosystem participants at that layer, [00:05:54] Himakara Pieris: in your view? [00:05:56] Faizaan Charania: In the initial days, the market is going to evolve a lot. [00:06:00] So when these new models were launched, and again, I'm talking about November and December, you might have seen a large list of startups that just came about. [00:06:13] Faizaan Charania: So those are the ones who are early adopters and who are just making these things, making new applications, possible. I think that's just the spur, and that's the wide net that we are casting. But as time progresses, this is going to become business as usual. Gen AI won't be exciting anymore.
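To make Faizaan's analytical-versus-generative split above concrete in code: an analytical model maps an input to a single decision, while a generative model produces new content. The sketch below uses Hugging Face pipelines; the model choices are illustrative defaults, not anything LinkedIn uses.

```python
# Analytical AI: one input -> one decision (a label and a score).
# Generative AI: one input -> newly generated content.
# Model choices here are placeholders for illustration only.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")                 # analytical: classify
generator = pipeline("text-generation", model="gpt2")       # generative: create

print(classifier("This post is spammy and misleading."))
print(generator("The key to a good product demo is",
                max_new_tokens=30)[0]["generated_text"])
```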
Then the problems to solve are, hey, [00:06:34] Faizaan Charania: how do I scale this? How is it going to be efficient? How do I do it for cheaper? And there are many different players who are playing on the infrastructure side of this. There are many new startups. There's this one startup, I can't remember their name, but they've been working on making Gen AI more efficient for like three years now. [00:06:59] Faizaan Charania: So Gen AI, for the [00:07:00] public it seems like a new word and all of us are talking about it right now, but the early seeds were sown in 2017. And actually even before that, everyone has been building on top of the giants that came before them. But yeah, the concept has been around for a while, and there are new marketplaces. There are new ecosystem players that are just going to solidify even more as time passes. [00:07:25] Himakara Pieris: Let's say you are a product manager for a product that exists in the market today. Where do you see opportunities and threats and challenges someone should look out for as a PM? [00:07:39] Faizaan Charania: My approach to Gen AI is to just think of it as a tool, as it is. I've been doing AI for a while, and now Gen AI is just a flavor of it, right? So think of it as a tool and see how this tool can help me or my customers [00:07:54] Faizaan Charania: solve for their opportunities or solve the challenges that they're facing more easily. [00:08:00] And that is the core of how we should approach all kinds of product solutions. And then see where can Gen AI come in? How can we solve problems using Gen AI? Is there some flow or some funnel that my user is going through right now? [00:08:15] Faizaan Charania: Where's the friction? Can Gen AI solve that? Can Gen AI make something possible which would make my users happy but was too difficult to do in the past? So there are many ways to think about this. The core of all of this should be the jobs to be done, the user needs, and then see where the unique capabilities of Gen AI are going to be useful for them. [00:08:41] Himakara Pieris: What I'm seeing is that you can use generative AI for summarization, expansion, style translation. I think I can put graphic stuff, or diffusion, into one of those three buckets as well. [00:08:56] Himakara Pieris: Am I missing something here? [00:08:58] Faizaan Charania: Summarization, [00:09:00] expansion, style translation. There's obviously all kinds of generation. When you say style transformation, this could be just text style transformation. [00:09:09] Himakara Pieris: It could be anything from turning Drake's voice into Jay-Z's voice. I think I see all those as some kind of a transfer operation, right? It's essentially any kind of transductive problem you can approach with generative AI, in some ways. [00:09:25] Faizaan Charania: Yes. Yes. And to go one step deeper into summarization, even summarization can be done in a very basic manner where you're just summarizing one document, or it could just 10x the
I'm excited to share this episode with Don Rosenthal. Don is a seasoned product leader with extensive experience in AI and large language models. He has led product teams at Google AI research, Facebook Applied AI, and Uber's Self-Driving Technology division. During this conversation, Don shared his insights on the anatomy of an LLM, ways to incorporate LLMs into products, risk mitigation strategies, and taking on LLM-powered projects. Links: Don on LinkedIn; Attention Is All You Need; The Illustrated Transformer. Transcript: [00:00:00] Don Rosenthal: Please, please, please do go out and do come up with unique and exciting, important new applications; build stuff that solves important problems we couldn't even try to address previously. I just want you to be sure that you're going into this with your eyes open and that you've prepared your stakeholders properly. [00:00:21] Don Rosenthal: There are a lot of successful applications that have been built with these LLMs, and a lot of the pioneers have discovered all the pitfalls and where all the dragons hide, so that we can avoid them. [00:00:35] Himakara Pieris: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI. [00:00:46] [00:00:47] Himakara Pieris: Today we are going to talk about large language models, and I can't think of a better person to have this conversation with than Don Rosenthal. Don has [00:01:00] spent most of his career in AI. [00:01:02] Himakara Pieris: He started out as a developer building ground support systems for the Hubble telescope, including being part of the team that built the first AI ground system ever deployed for a NASA mission. He then went on to build and manage NASA's first AI applications group, where his team flew the first two AI systems in space. [00:01:22] Himakara Pieris: And he worked on prototype architectures for autonomous Mars rovers. Don then commercialized the AI technology from the Hubble telescope in two of the AI companies that he founded. He was the group product manager for autonomy at Uber ATG, Uber's autonomous vehicle spin-off in Pittsburgh. He was the PM for face recognition at Facebook. [00:01:43] Himakara Pieris: And most recently, Don was the group product manager for conversational AI at Google AI research. [00:01:50] Himakara Pieris: Don, welcome to the Smart Products show. [00:01:53] Don Rosenthal: Thank you very much. I'm really, really excited to be here. Thank you for inviting me. [00:02:00] [00:02:01] Himakara Pieris: So let's start with the basics. What is an LLM? [00:02:05] Don Rosenthal: Good place to start. Let me start out by saying that LLMs have finally solved, and I don't think that's really an exaggeration, [00:02:14] Don Rosenthal: they have finally solved one of the longstanding foundational problems of natural language understanding: understanding the user's intent. What do I mean by that? Any one of us who's used a recommender system for movies, TV, or music, which is pretty much all of us, knows how frustrating it can be to try to get the system to understand what we're looking for. [00:02:40] Don Rosenthal: These systems have all trained us to dumb down our queries in order to have any chance of a successful retrieval. You can't talk to them the way you would to a friend or to any other person. You can't, for example, say, Hey, I like all kinds of music.
The genre is not [00:03:00] important: jazz, pop, classical, rock, even opera, as long as it's got a strong goosebump factor. Put together a playlist for me with that kind of vibe for the next 30 minutes while I do chores. [00:03:13] Don Rosenthal: But you can, in fact, say that to something that's got a large language model in it, like ChatGPT. And go ahead and try it. When I did, I even asked it if it understood what I meant by goosebump factor, assuming I'd have to explain it, but it said, sure, I know what it is, and it gave me a perfectly reasonable explanation and definition of it. [00:03:36] Don Rosenthal: So why and how is it able to do that? We can get into the technology a little bit later, but at the 3,000-foot level to start with, the point is that through an absolutely enormous amount of training, these systems have internally created a highly nuanced model of language, which they can [00:04:00] then use for the semantic understanding of language that is input to them, as well as to craft highly nuanced and natural-sounding language responses. [00:04:09] Don Rosenthal: And it's important to underscore that these are the two things that large language models do really well: semantic understanding of the language that is input to them, and highly nuanced, natural-sounding language responses. And yes, they hallucinate and they make up stuff out of thin air. [00:04:30] Don Rosenthal: But the interesting thing is that they always seem to hallucinate within the correct context of your query. So, you know, if you ask them about strawberries, they might make stuff up about strawberries, but they're not going to make stuff up about fire engines. And as for the highly nuanced, natural-sounding responses, [00:04:53] Don Rosenthal: just remember, for example, the response to the query of generating instructions for [00:05:00] removing a peanut butter sandwich from a VCR written in the style of the King James Bible, which kind of broke the internet last November. [00:05:10] Himakara Pieris: Take us inside an LLM. What makes this technology so transformative, if you will? [00:05:17] Don Rosenthal: I'm not going to go into the technical details of how they work, but it'd be great to be able to cover why they're so important and what has enabled them to become the agent of change in NLP, to become so transformative. And if you are interested in more details, the original paper from 2017 is "Attention Is All You Need." It's all over the internet; you can find it easily. [00:05:42] Don Rosenthal: I'd also recommend The Illustrated Transformer by Jay Alammar, A-L-A-M-M-A-R, who is well known for his incredible capability of helping you to easily understand complicated [00:06:00] concepts. And if you'd rather watch a video than read an explanation, check out his video, The Narrated Transformer. Anyway, to explain how transformers were able to help us leapfrog into the current generation of NLP tools, it's kind of important to first explain the state of the art just prior to their introduction, if that's okay.
So, at that time, the NLP world was using a set of technologies which were grouped together under the subfield of recurrent neural networks. [00:06:34] Don Rosenthal: Not a very descriptive name, but the TLDR is that these technologies took the input sequence, any type of sequence, but let's say with language, a sequence of words in a sentence, and the RNN took the sequence of words and fed them in, in order, one at a time: the, quick, brown, fox, etc. [00:07:00] But they included a really novel component, which enabled feedback connections that allowed them to inject information from previous time steps. [00:07:09] Don Rosenthal: And this is what enabled them to capture contextual dependencies between words in a sentence instead of just looking at one particular word. So when "quick" was input, you get some feedback from "the"; when "brown" was input, some feedback from "the quick." The problem with this, I mean, it worked well for the time, but the problem was that the farther along in the sentence you got, the weaker the feedback was from the earlier steps. [00:07:39] Don Rosenthal: So by the time you got to the end of the input sequence, the system may have been left with so little signal from the initial inputs that they had very little effect on the evaluation of the sequence. So, put that all together: words that were closer to each other affected each other more than words that were farther apart in [00:08:00] trying to understand what the sentence meant. [00:08:02] Don Rosenthal: And obviously that's a problem, because language isn't constructed that way. It also meant that sequences could only be evaluated sequentially, one at a time, and that made RNN processing really slow. So the two strikes against RNNs, although they were really valuable for the time, were that they focused more on words that happened to be closer together in a sentence, and that they only processed sequentially, one word at a time. [00:08:31] Don Rosenthal: So then along came transformers with a new idea, which was: let's present all of the words in the sequence to the transformer at once, all at the same time. And this lets the system evaluate the connections between each word and every other word, regardless of where they show up in the sentence, and it can do this to figure out which words should pay particular attention to which other words. [00:08:58] Don Rosenthal: And that's the attention part [00:09:00] of "Attention Is All You Need." So the words no longer have to be close to each other to capture the contextual relevance between them. But it also meant, and this was the other key improvement, that you could now evaluate all of the words in the input in parallel instead of analyzing one word at a time in order. [00:09:18] Don Rosenthal: And I'm being a little bit hand-wavy and imprecise, but I'm trying to give you the intuition about how these work rather than teach you how to build one. But at this point now, we could analyze semantic information equally between all combinations of words, no matter where they appeared in the sequence, and we could do this in parallel. So, NLP is solved, right? Unfortunately, not so fast...
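To make the "every word attends to every other word, in parallel" idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation in "Attention Is All You Need." It is an intuition aid with made-up embeddings and random projections, not production transformer code.

```python
# Scaled dot-product attention on a toy 4-token sequence.
# A single matrix product scores every token against every other token, which
# is what lets transformers look at the whole sequence in parallel instead of
# stepping through it one word at a time like an RNN.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                          # e.g. "the quick brown fox", 8-dim embeddings
X = rng.normal(size=(seq_len, d))          # made-up token embeddings

# In a real transformer these projections are learned; random here.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d)              # how much each token should attend to every other
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
context = weights @ V                      # context-aware representation of each token

print(np.round(weights, 2))
```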
I’m excited to share this conversation with Khrystyna Sosiak. Khrystyna is a product manager at TomTom. Before that, she was a lead AI coach at Intel and a senior data scientist at mBank. During this conversation, Khrystyna shared her approach to navigating the complex landscape of AI projects, which includes investing in research, strategically placing bets, fostering stakeholder support, and embracing transparency. LinksKhrystyna On LinkedInTranscript[00:00:13] Himakara Pieris: I'm, Himakara Pieris. You're listening to smart products. A show where we recognize, celebrate and learn from industry leaders who are solving real-world problems. Using AI.[00:00:25] Himakara Pieris: Khrystyna welcome to smart products.[00:00:27] Khrystyna Sosiak: Thank you. I'm super excited to be here. Thank you for having me. [00:00:30] Himakara Pieris: To start things off, could you tell us a bit about your background, um, what kind of environments that you've worked in, and also what kind of AI projects that, that you've been part of?[00:00:39] Khrystyna Sosiak: Yes. So, uh, currently I'm a product manager at TomTom. I'm working on the external developer experience and uh, and analytics and billing topics. And in past I was working on the machine learning operations platforms and, uh, in my previous experience was a data scientist. So I was actually working with, [00:01:00] uh, with machine learning and with artificial intelligence before I moved into product.[00:01:05] Himakara Pieris: What would be a good example of an AI project that you worked on? [00:01:10] Khrystyna Sosiak: Probably one of the most, Exciting and interesting, , products that we've been working on that was very powerful is, , understanding the customer's behavior and, and the patterns.[00:01:22] Khrystyna Sosiak: And then based on that ing uh, the right products. So I was working in banks, so we would analyze. All the data that we can find about our customers, right, of course, with two G D P R and making sure that we only use the right data, but, and then making sure that all the communication that goes to the customers is the right communication about the right products and in the right way.[00:01:46] Khrystyna Sosiak: So really understanding the customer needs and, uh, the stage of the customer life and saying that's, that's what the customer need at this point, and that's how we. Understand that and how we can communicate and [00:02:00] sell it to the customers. So it's not about only making money, but it's understanding how we can actually.[00:02:06] Khrystyna Sosiak: Go through this journey of life with the customer and supporting them. So, and understanding that by the data that they're generating and by the insights that we can find in this data. And sometimes, you know, and like data that you have like that generated by your transactions and by your history, like, It's a really specific data that show a lot about the person that probably some people even don't know about themselves.[00:02:33] Khrystyna Sosiak: And the real goal is how we can use it for the benefit of the customer and not to harm the customer, right? And, um, we really change the way that we approach them. 
Uh, we approached the, the marketing communication with the customers, what was very interesting and transform transformational to see how very old fashioned organization would really move in direction into the [00:03:00] AI and making sure that all the decisions and the marketing strategies are powered by ai.[00:03:06] Khrystyna Sosiak: So yeah, that was very interesting. It took us a long time. We made a lot of mistakes on the way, but it was a super interesting learning experience. [00:03:17] Himakara Pieris: If I take a step back, so we're talking about mbank a consumer banking operation and reaching out the customers at the right time is something very important to, to become that part of the customer's daily life or, or their journey.[00:03:32] Himakara Pieris: How was that done before and what point. Did the bank decide to explore AI as a possible, , solution to, possible tool to improve the, communications with the customers? [00:03:46] Khrystyna Sosiak: I think the turning point was understanding that where the, you know, not only trends, but like the industry goals, right? And really AI powers the financial industry and the financial industry thing [00:04:00] in general.[00:04:00] Khrystyna Sosiak: It's been very innovative in, uh, Trying to adopt the new technology and trying to make sure that the customers get the best experience before it was all triggered by the events. So you can imagine, I mean, it's still used widely, right? And when we talk about recommendation systems and like how the communication is done, right?[00:04:20] Khrystyna Sosiak: You open the webpage, you open the app, and you, you scroll through some pages, you know about the credit card, for example, and then, Next day you would receive the email saying, Hey, here's the discount. Or in today, someone would call and say, Hey, we saw that you are interested in a credit card. Do you want to order the credit card?[00:04:41] Khrystyna Sosiak: We have this discount for you. And usually it was triggered by one event, right? Or the, the sequence of events. But it's also very event triggering, right? So you only can. You only can base your recommendations on what customer actually does on the webpage. You don't really go into details [00:05:00] of like, okay, what are the factors about the customers that can affect that and what is actually the things that they need?[00:05:07] Khrystyna Sosiak: It's, um, so yeah, it was something that was used. For years and, uh, it worked. You know, there was some success rates there, so I cannot say it didn't work, but we know that moving forward expectations of the customers are higher because when we live in the era of ai, when you have, you know, Netflix and Facebook with the recommendation title, your.[00:05:30] Khrystyna Sosiak: You know, reactions and like what you see, what you like, what you don't like. Really we need to be there as well. And just saying you clicked on something and that's why we think it's could be interesting for you. It's not good enough anymore. [00:05:45] Himakara Pieris: Sounds, like the previous, , approach for doing this is purely driven by specific events.[00:05:51] Himakara Pieris: You have a rule-based system. If you click on this page, then you must be interested in this product. Let's unleash all the marketing communication , to sell that product [00:06:00] towards you. Whereas now, , the idea is we can possibly make this better by using ai. 
, To make sure that we are making more personalized and more relevant recommendations to the customer.[00:06:10] Himakara Pieris: And by doing that, you improve the customer's experience and you would also improve the sort of the clickthroughs or, or, or signups for that product that you're, that you're positioning for the customer. , so when you start there, so it sounds like it started more with a. With an experimental approach.[00:06:26] Himakara Pieris: Is that right where you're saying, okay, we have this way, we are doing things now we have all these new tools that are coming to the market, coming to the world. Let's pick them up and see whether we can move the needle, , with these tools rather than the, the method that we are doing now, which is our baseline.[00:06:42] Himakara Pieris: Is that a fair assessment? [00:06:44] Khrystyna Sosiak: It's for assessment and to be honest, it's for assessment not only about this project and not only about this experience, about almost all of the experiences that I had with the big companies or even small companies trying to get into the AI and trying, [00:07:00] you know, if it's. Not like the, the companies that actually build it, right?[00:07:03] Khrystyna Sosiak: That they're trying to adopt it. It's really about, we have some data, we see the trends, we see that our competitors are using it, so how can we benefit from it? And I can see very often, like also talking to my colleagues and to my friends that there's very. There's a lot of companies that would hire like, uh, machine learning or, uh, engineer or data scientists say, that's the data we have.[00:07:26] Khrystyna Sosiak: We have no idea what we can do with it. You know, try to figure something out. And I think sometimes there is some wrong expectations about Right. What we can do and what we cannot do. So yeah, it's all started like that, right? We have the data. Here's the set of the business. Problems that we have, and then let's iterate.[00:07:46] Khrystyna Sosiak: Let's see what gonna work, what not gonna work. And a lot of things fails before something starts working. Right. And I think that's a learning experience that once you, you cannot, like, you cannot get there. [00:08:00] If you, they make mistakes and learn on a way, because then your experience and your success is much more meaningful because you actually understand what you've done and how you've done it and why you made those informed decisions about some steps of the machine learning process that we have.[00:08:18] Khrystyna Sosiak: And that was very important also for the data scientist and for the product manager to understand better how this industry works. And how building these products are different and why they're failing. [00:08:33] Himakara Pieris: So I imagine you're in a conference room on, there are two whiteboards on either side. On one whiteboard you have a whole set of business priorities and all the other side you have a catalog of all the data services that's available to you.[00:08:45] Himakara Pieris: And then in the middle you have a data scientist and a machine learning engineer with a, , with a, with a toolkit, right? So, so you're running through a bunch of experiments using the toolkit you have and the data you have to see where you can impact, , the business priorities that you've identified.[00:08
I'm excited to bring you this conversation with Ali Nahvi. Ali is a Sr. Technical Product Manager for AI and Analytics at Salesforce. During this conversation, he shared his thoughts on championing AI initiatives as a product manager, translating business needs into AI problem statements, and how to position yourself for success. Links: Ali on LinkedIn. Transcript: [00:00:00] Ali Nahvi: To get there, to build that success story, you need to fail. And failure is part of the process, and sometimes it's not easy for people to see that. [00:00:09] Himakara Pieris: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI. [00:00:19] Himakara Pieris: I'm excited to bring you this conversation with Ali Nahvi. Ali is a senior technical product manager for AI and analytics at Salesforce. During this conversation, he shared his thoughts on championing AI initiatives as a product manager, translating business needs into AI problem statements, and how to position yourself for success. [00:00:37] Himakara Pieris: Check the show notes for links. Enjoy the show. [00:00:42] [00:00:43] Himakara Pieris: Ali, welcome to Smart Products. [00:00:47] Ali Nahvi: Thank you so much, Hima, for having me. [00:00:49] Himakara Pieris: To start things off, could you share a bit about your background and how you got into AI product management? [00:00:58] Ali Nahvi: I'm an accidental [00:01:00] product manager. I started my career journey with business intelligence, and I guess it was around 2012 or '13 that I heard the word "data science" for the first time. Before that we simply called it math. And I loved the idea. I decided to move from BI to AI, and that was the major trigger for me to come to the US and do a PhD. [00:01:27] Ali Nahvi: And I did my PhD on the application of AI/ML in the context of project management. After that I started as a data science consultant in a consulting company. And one day, out of the blue, my roommate from grad school, who was working at Amazon at the time, called me and told me, Hey, we have this thing called product manager, and I think you should become one of them. [00:01:56] Ali Nahvi: I did some research, and very quickly I [00:02:00] got the same impression that, well, this could be an ideal job for me. I love helping people. I love solving business problems. I love AI. And I also love business development and communication and being around people. [00:02:16] Ali Nahvi: So I thought, well, that might not be a bad idea. So I joined Iron Mountain in my first product manager role. And then after a while I joined another company, Cengage, which was mainly focused on online education. And recently I've joined Salesforce as a senior technical product manager for AI and analytics. [00:02:45] Himakara Pieris: What is the primary difference you see going from BI to data science to AI as a product manager? Do you need a different skill set? Are those BI skills transferable across all these verticals? [00:03:00] [00:03:00] Ali Nahvi: Yeah, business intelligence is definitely still helping me a lot. [00:03:04] Ali Nahvi: And from a data science perspective, I'm one of those PMs who thinks that PMs should be technical and have the ability to have super technical discussions with the teams, especially in the data science space.
In data science, in AI ward, understanding the problem, understanding business requirements is, in my opinion, is solving half of the problem.[00:03:31] Ali Nahvi: If you get there, if you can really digest the problem statement and have the ability to transfer that into a data science language then you are a really good PM and, and to do that for me, Having that technical background around data science have been extremely helpful. [00:03:51] Himakara Pieris: What would be a hypothetical example for translating a business requirement into data science or machine learning language?[00:04:00][00:04:00] Ali Nahvi: Let's say I'm assigned to work with a stakeholder in sales or marketing. And I sit with them, set up a call and say, Hey, what's your pain point?[00:04:12] Ali Nahvi: And they say, okay, I wanna increase sales and productivity. And so I would say, okay so can you explain what you're doing on a day-to-day basis? And they, they explain, this whole sales process that they go through from lead generation to sales calls to closing deals, and I might be able to find some opportunities there.[00:04:36] Ali Nahvi: To use AI to help them to do a better job. For example, the lead generation piece. Maybe you don't need to call all the customers, all, all the leads coming to your way. Maybe you can optimize that. Okay? But then you need to build a bridge. Between that really weight business problem into a very solid, robust data science problem.[00:05:00][00:05:00] Ali Nahvi: The business requirement doesn't give you anything like dependent variable, independent variable, the data structure, anything like that. So as a product manager, it's my job to help the team to kind of define that problem. And another thing that I believe that, that, that's why I think data, data science, product managers should be technical, the feature engineering.[00:05:22] Ali Nahvi: That's extremely delicate thing to do in my opinion. It's, it's something that where you tie business with science and you really need to have good understanding about how data scientists would do feature engineering. And at the same time, you really need to have a robust understanding of how business operates to, in incorporate all the important features in your feature engineering and make sure you capture all the important elements. [00:05:51] Himakara Pieris: You talked about, doing these customer interviews or user interviews looking for opportunities, these might be data, sort of [00:06:00] curation opportunities or recommendation opportunities or clustering opportunities or, or what have you, that sort of.[00:06:09] Himakara Pieris: Buried in, in the story that they're saying. [00:06:11] Himakara Pieris: You identify that and then you transform it from there to a, a problem statement that machine learning and DataScience folks can understand. Right. Could you talk me through the full workflow that you're using? So what are the key steps? So sounds like you're always starting with a use interview.[00:06:28] Himakara Pieris: How does the rest of the process look like? [00:06:31] Ali Nahvi: Let's go back to that sales problem again. For example, on the late generation, they say that, okay, we generate 2000 leads per day, but we can only call. 500 of them. 
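Ali's example of turning "we generate 2,000 leads per day but can only call 500" into a data science problem comes down to naming a dependent variable and engineering independent variables. Below is a hedged sketch of that translation step; the handful of example leads and column names are invented placeholders, not any real Salesforce data.

```python
# Translating "increase sales productivity" into a supervised learning problem.
# The example leads below are invented -- the point is making the dependent
# variable and the engineered features explicit.
import pandas as pd

leads = pd.DataFrame({
    "source":            ["webinar", "referral", "cold_list", "webinar", "referral"],
    "employee_count":    [40, 1200, 15, 300, 800],
    "emails_opened_7d":  [3, 6, 0, 2, 5],
    "past_order_count":  [0, 2, 0, 1, 3],
    "days_to_close":     [25, 10, 999, 45, 14],   # 999 = never closed
})

# Dependent variable: did the lead convert within 30 days?
leads["converted_30d"] = (leads["days_to_close"] <= 30).astype(int)

# Independent variables: feature engineering ties business knowledge to the data.
X = pd.get_dummies(
    leads[["source", "employee_count", "emails_opened_7d", "past_order_count"]],
    columns=["source"],
)
y = leads["converted_30d"]

# X and y are now a well-posed problem statement a data science team can model;
# scoring each day's 2,000 leads with such a model picks the 500 worth calling.
print(X.head(), y.tolist(), sep="\n")
```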
So the lead optimization problem that I mentioned before would pop up. Or on the sales calls, they say that we have a limited number of sales mentors who can help salespeople. [00:06:54] Ali Nahvi: So maybe we can leverage AI to listen to some of the recorded calls and provide some [00:07:00] insights. So these are all hypotheses that could come up, and I will write them all down as potential initiatives. And then I would ask these questions of my stakeholders all the time: let's say we are six months from now, a year from now, let's say we are done with this and we have built this model that is a crystal [00:07:20] Ali Nahvi: ball that can tell you this lead's gonna make it, this lead's not gonna make it. How would you use it in your day-to-day? How is it going to change your workflow? And based on that, I try to come up with an estimate, ideally a dollar value, for the potential added value that initiative can have. [00:07:44] Ali Nahvi: And then I would work with my team, engineering managers, data science managers, to try to understand feasibility, data accessibility, data availability, and level of effort. And based on that, I create a diagram [00:08:00] where on one axis we have value and on the other we have level of effort. And when you build something like that, the highest-priority initiatives would immediately pop up and show themselves to you. [00:08:19] Himakara Pieris: Sounds like you're identifying opportunities and then solutions, and then you are going through an exercise of validating these solutions. Right? And then it moves to the implementation part. I want to go through and discuss how it is different from a traditional software development process. [00:08:41] Ali Nahvi: Absolutely. There are major differences between data science and software engineering, and lots of intersections. The intersections are obvious: they both need coding, they both need infrastructure, [00:08:54] Ali Nahvi: they both need data. But there is a delicate [00:09:00] difference between them that is kind of hidden in the name of data science as well. It's science, it's not engineering. So the element of uncertainty is there. All of these initiatives that we came up with, they are just hypotheses. We have a hypothesis that based on the current data, based on the current evidence, we might be able to build a prediction model to meet whatever requirement we have in mind. [00:09:26] Ali Nahvi: But there might be a chance that that hypothesis wouldn't be right, or even a chance that we build a model but it's not really usable or explainable for the user. So these types of uncertainties, I think, significantly differentiate data science from software engineering work. [00:09:55] Himakara Pieris: How do you account for and plan for the probability of [00:10:00] failure? There is a probability that your model can't make predictions with a high enough level of accuracy. How do you put in guardrails to make sure that this probability of failure is accounted for and planned for in that process? [00:10:15] Ali Nahvi: That's a fantastic question. And I have two mitigation plans for that, one on the soft side of the business an
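The value-versus-effort diagram Ali describes can be roughed out with a simple score before drawing anything. The initiatives, dollar values, and effort estimates below are invented, purely to show the ranking mechanics.

```python
# Rough value-vs-effort prioritization of candidate AI initiatives.
# Dollar values and effort (person-months) are made-up examples.
initiatives = [
    {"name": "Lead scoring (call the best 500 of 2,000)", "value_usd": 400_000, "effort_pm": 4},
    {"name": "Sales-call insight mining",                 "value_usd": 250_000, "effort_pm": 9},
    {"name": "Churn-risk alerts",                         "value_usd": 150_000, "effort_pm": 2},
]

for item in initiatives:
    item["score"] = item["value_usd"] / item["effort_pm"]      # value per unit of effort

for item in sorted(initiatives, key=lambda i: i["score"], reverse=True):
    print(f"{item['name']:<45} value/effort = {item['score']:,.0f}")
```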
I'm excited to bring you this conversation with Sayanti Ghosh. Sayanti is a Sr. AI/ML product manager at Teck Resources — one of Canada's leading mining companies. Sayanti manages a recommender systems product at Teck to support clean coal processing. During this conversation, she shared her thoughts on assembling an AI/ML team, build vs. buy decisions, and the types of risks/KPIs she monitors. Links: Sayanti on LinkedIn. Transcript: [00:00:00] Sayanti Ghosh: If you wanna go for build, it's very important to see where the company stands in AI product maturity level. Is it just starting? Is it just in an experimentation phase? Is it at the level of using AI in a few of the products? Or is it in a phase where it is in the DNA of the organization? [00:00:21] Himakara Pieris: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI. [00:00:33] Himakara Pieris: I'm excited to bring you this conversation with Sayanti Ghosh. Sayanti is a senior AI/ML product manager at Teck Resources, one of Canada's leading mining companies. Sayanti manages a recommender systems product at Teck to support clean coal processing. During this conversation, she shared her thoughts on putting together an AI/ML team, build versus buy decisions, and the [00:00:53] types of risks and KPIs she monitors. Check the show notes for links. Enjoy the show. [00:01:00] [00:01:00] [00:01:01] Himakara Pieris: Sayanti, welcome to the Smart Products show. [00:01:03] Sayanti Ghosh: Thanks, Hima. Thanks for giving me this opportunity. [00:01:07] Himakara Pieris: Tell us a bit about what Teck Resources does and also how you use AI and ML. [00:01:14] Sayanti Ghosh: Teck Resources is a mining company. Teck is a hundred-plus-year-old company that works in copper, zinc, and steelmaking coal. [00:01:25] Sayanti Ghosh: Teck Resources also has another wing, which is RACE and Teck Digital. So that's the part where they work with all the AI and ML products. The whole idea is to increase production, and there are various other problems in supply chain, in mining, in blending. So there are various aspects and opportunities at Teck where AI and ML and other software engineering products help them solve these critical problems to grow their mining, to grow their production, and make a much more sellable product for their customers. [00:01:59] Himakara Pieris: [00:02:00] What kind of AI and non-AI products are you involved in, and how do you draw that line? [00:02:08] Sayanti Ghosh: It's an interesting question. So before we jump into the kinds of AI and non-AI products, let's just, in one line, give an idea of what we mean by AI and what we mean by non-AI. Anything where you would have a machine train and learn, we broadly put into AI, and anything that is rule-based, which doesn't have any learning capacity, those types of things we broadly put into non-AI. [00:02:38] Sayanti Ghosh: So at Teck, what I do specifically with AI, we run recommendation systems. So think about it as a factory. I am in coal processing, so my work is in the domain of coal, cleaning the coal. So think about it: there is a factory, you mine some coal, and you need to clean that coal before you sell it to [00:03:00] your customers. [00:03:01] Sayanti Ghosh: So when you are mining that coal, and when you are cleaning that coal, your goal is to maximize the production of the coal.
So you do not wanna lose clean material while you are cleaning it. So, If there is a factory to do that, there are several machines, ? And you want those operators to run the machine in a, in its most efficient way, so that you clean and get the maximum amount of coal.[00:03:28] Sayanthi Ghosh: Here you have a digital product which recommends these operators. What should be specific? Set points or parameters that you would put in each of these machines so that your machines are optimized. There are trade offs. I'm not going into too much technical detail, but there are trade offs, and then at the ultimate goal is to get maximum amount of clean coal.[00:03:55] Sayanthi Ghosh: There are a few parameters. Also, we have to meet few [00:04:00] specifications, so the idea is to meet those specification and also maximize the coal amount. So that's where my AI product comes in. So it's a recommendation system. So it has got a bunch of machine learning programs underneath and an optimizer on top, and then it sends out recommendations.[00:04:19] Sayanthi Ghosh: So this is one of the AI product, and to your question, the non-AI product. Now think about you clean the coal. So your machine learning recommendation system did great job. You cleaned the coal, you have lots of clean coal, now you need to send it to your customer. So there is a whole supply chain method running so you put it in a train, you first load it into the train. Train goes into the port. From the port, it goes to your customer. So there is a chain of events going on, and there is. Non-AI software engineering based product, which helps us optimize the amount of coal that we put into our trains.[00:04:58] Sayanthi Ghosh: So this is a very [00:05:00] high level though, but this is an example of my AI and non-AI product that I work with. [00:05:06] Himakara Pieris: How do you decide when to use AI and when to use a traditional or rule-based system? [00:05:12] Sayanthi Ghosh: The first thing I would always say, if you see that it can be solved without ai, don't overkill with ai. If it can be rule-based, go for rule-based solution. [00:05:24] Sayanthi Ghosh: Then the second thing you need to look into is data. It's very, very critical. You need a lot of amount of good training data because ai, without good data, it's like garbage in, garbage out. So you need to make sure you have relevant data, good amount of data, and the third important pillar is, Is your organization and your user ready for it, the cultural readiness to have an AI solution.[00:05:51] Himakara Pieris: I also wanna start at the. Point of recognizing whether you need AI for something, is that based [00:06:00] on inability to describe , the outcome effectively using a set of rules, what kind of criteria goes into making that determination?[00:06:11] Sayanthi Ghosh: It depends. So what is the problem that you are solving and what is the goal that you wanna achieve? Now, it could be that the goal that you wanna achieve is not at all possible by a rule-based system. Why it is not possible. If you would have a lot, if you have a data and you want your system to learn.[00:06:33] Sayanthi Ghosh: Get trained and then behave in a certain way and provide an outcome. In that case, I don't think you can end up writing so many rules, but you can also think of like there were chat bots in past, or even now they have rule set up and the chat bot is working fine, but then you need. Much more advanced. 
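Sayanti's recommender, a set of machine learning models underneath and an optimizer on top that suggests machine set points, can be sketched roughly as below. The yield and quality functions are stand-ins for trained models, and every number is invented; this is not Teck's actual plant model.

```python
# Rough shape of a set-point recommender: learned models underneath,
# a simple optimizer (here, a grid search) on top. Both functions below
# are made-up stand-ins for what the real trained models would predict.
import itertools

def predicted_clean_coal_yield(density_sp, flow_sp):
    # Stand-in for an ML model predicting clean-coal yield (tonnes/hour).
    return 100 - 400 * (density_sp - 1.45) ** 2 - 0.05 * (flow_sp - 30) ** 2

def predicted_ash_content(density_sp, flow_sp):
    # Stand-in for a second model predicting a quality spec that must be met.
    return 8 + 10 * (1.6 - density_sp) + 0.02 * flow_sp

density_options = [1.40 + 0.01 * i for i in range(21)]   # candidate set points
flow_options = [20 + 2 * i for i in range(11)]

feasible = (
    (d, f)
    for d, f in itertools.product(density_options, flow_options)
    if predicted_ash_content(d, f) <= 9.5                # meet the spec constraint
)
best = max(feasible, key=lambda sp: predicted_clean_coal_yield(*sp))
print("Recommended set points (density, flow):", best)
```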
So now with modernization, with time, AI is a lot more [00:07:00] understanding and adapting as well, [00:07:02] Sayanthi Ghosh: so if you need that system to learn, Then probably a rule-based solution is not an ideal way. So it depends upon what is the problem, and what do you have? What kind of data do you have? It could happen that you know that you need ai, you know that you need a system which should learn, but then you don't have the data, or it is extremely expensive to get to that data, and you need a lot more time to even acquire the data. In that case, even if you want an ai, probably you have to think it in a different way. That you probably need more time to find the AI solution till you reach that solution.[00:07:44] Sayanthi Ghosh: Until you gather that data, you need a non-AI solution to sustain. [00:07:49] Himakara Pieris: (lets discuss your framework ) Let's go into the framework , , love to learn the process that you, use and follow.[00:07:55] Sayanthi Ghosh: As I mentioned, starting with the problem, so always. You understand who [00:08:00] are your user, customer segment, and then you go deep dive into the problem,[00:08:04] Sayanthi Ghosh: you need to check if you have enough information or enough data available in case your team has suggested that AI is the only solution or the best solution.[00:08:15] Sayanthi Ghosh: If you see there is an option or a solution that can go without AI fulfills the business needs. Fulfills the value or solve the customer pain point. Go for non-ai [00:08:28] Sayanthi Ghosh: once you do that, now you are in a space where you know about the problem. You have your vision ready. Try to figure out if, how easy or difficult it is to access the data. And how expensive it is. [00:08:44] Sayanthi Ghosh: Understand how can you access the data? How can you integrate with your current system? That's the second checkpoint. [00:08:51] Sayanthi Ghosh: Third is checking the current state of the data. So what do you have right now, what amount of data that you have, and if you need [00:09:00] more information, is it an open source information that you can find?[00:09:04] Sayanthi Ghosh: Do you have to buy? Do you have to spend money on that? Do you need to invest? For me, I had to in like, my company has to invest on sensors. They had to put sensors in place so that we get the data. So that's an investment, you need to check what is the current state of data and what you need. [00:09:23] Sayanthi Ghosh: Finally, , this I have seen happen many times. People think we have to create and innovate in our company, but sometimes it's a question of builder purchase, if there is off the shelf products that would work for you, if that fits with your investment and if you see and check your roi, if that works.[00:09:45] Sayanthi Ghosh: I would say rather than inventing the wheel, Let's purchase that product. If you think that no, that won't work, then make a decision based on the company the problem and your investments and every and your [00:10
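To make the recommender architecture Sayanti describes a bit more concrete, here is a minimal Python sketch of the "machine learning models underneath, optimizer on top" pattern for set-point recommendations. The model formulas, variable names, spec limit, and the simple grid search are illustrative assumptions, not Teck's actual system.

# Minimal sketch: ML models underneath, optimizer on top.
from itertools import product

def predicted_yield(density_setpoint: float, flow_setpoint: float) -> float:
    """Stand-in for an ML model trained on historical plant data.
    Returns predicted clean-coal yield (tonnes/hour) for the given set points."""
    return 100 - (density_setpoint - 1.45) ** 2 * 400 - (flow_setpoint - 250) ** 2 * 0.001

def predicted_ash(density_setpoint: float, flow_setpoint: float) -> float:
    """Stand-in for a second model predicting product ash content (%)."""
    return 7.0 + (density_setpoint - 1.40) * 10 + (flow_setpoint - 250) * 0.002

def recommend_setpoints(max_ash: float = 9.5):
    """Optimizer layer: search candidate set points and pick the combination
    that maximizes predicted yield while staying within the quality spec."""
    best = None
    for density, flow in product(
        [1.35 + 0.01 * i for i in range(21)],   # candidate cut-point densities
        [200 + 5 * j for j in range(21)],       # candidate feed flow rates
    ):
        if predicted_ash(density, flow) > max_ash:
            continue                            # violates the quality spec
        y = predicted_yield(density, flow)
        if best is None or y > best[0]:
            best = (y, density, flow)
    return best

if __name__ == "__main__":
    yield_est, density, flow = recommend_setpoints()
    print(f"Recommend density={density:.2f}, flow={flow:.0f} "
          f"(predicted yield {yield_est:.1f} t/h within spec)")

In a real plant the stand-in functions would be trained models and the search would respect many more constraints and trade-offs, but the shape of the loop is the same: predict, filter by spec, and recommend the best feasible set points to the operator.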
I'm excited to bring you this conversation with Spurthi Kommajosula. Spurthi is a portfolio product manager at IBM. During this conversation, she shared her thoughts on the types of exploratory questions an AI PM could ask to discover optimal use cases, how to communicate effectively with other stakeholders, and AI governance.LinksSpurthi On LinkedInTranscript[00:00:00] Spurthi: The first question we as product managers should ask ourselves, and really anybody who you're involving within your AI conversation, is what data is available right now? [00:00:10] Himakara Pieris: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI.[00:00:20] Himakara Pieris: I'm excited to bring you this conversation with Spurthi Kommajosula. [00:00:24] Himakara Pieris: Spurthi is a portfolio product manager at IBM. During this conversation, she shared her thoughts on the types of exploratory questions an AI PM could ask to discover optimal use cases, how to effectively communicate with all the stakeholders, and AI governance. Check the show notes for links. Enjoy the show. [00:00:42] Himakara Pieris: Welcome to the show. [00:00:44] Spurthi: Thank you so much for having me. It's very exciting. [00:00:47] Himakara Pieris: Tell us a little bit about your background and what you're working on.[00:00:51] Spurthi: I currently am a portfolio product manager for business analytics at IBM. So essentially what that means is I manage a portfolio [00:01:00] consisting of Cognos Analytics, Planning Analytics, Analytics Content Hub, Business Analytics Enterprise, and Cognos Controller. If you've been in the analytics space, you probably have heard of at least one of these products given their longevity and how involved they are in the analytics space.[00:01:16] Spurthi: It's been an exciting role. I get to touch upon a lot of these products. They fall under the scope of data and AI at IBM, which is a very exciting place to be in. Apart from that, I'm also an adjunct professor at Conestoga College, where I focus on analytics and data and AI based courses, so that's been really exciting to convey to people the importance of AI in this world. [00:01:40] Himakara Pieris: Sounds like the product suite is focused on making the power of AI available to the masses, to the SMEs, et cetera. Is that right? [00:01:48] Spurthi: Yes, that's exactly what it is. These products are more so B2B. So they target medium and large organizations or enterprises that really want to improve their, [00:02:00] I would say, BI ecosystems, their planning ecosystems. So that way not only are you just planning and forecasting and improving how you handle your data within your organization, but also you're incorporating AI when you do that.[00:02:13] Spurthi: So what that essentially means is when you do your forecasting or when you do your budgeting, you're involving AI to ensure and help you predict and get more information from the data and insights that you have, and use that to really drive the decision making within your organization. So the best case example that I can give you is, say for instance, you are a small-time retailer that essentially retails in, say, coffee.[00:02:37] Spurthi: So if you're in the coffee space, you really wanna know how much coffee product you need to buy, what's the cost of goods sold within this month, et cetera.
But at the same time, you need to plan how much to buy for the remaining year. What are my main months of sale? What are my sales looking like?[00:02:53] Spurthi: How should I be targeting to meet my goals by the end of the year, et cetera? So when you [00:03:00] do that kind of level of planning, you really want to be able to use reliable insights to get that. Level of confidence to make your decisions in a more effective way. So that's essentially where my type of products come in.[00:03:12] Spurthi: We really ensure that your entire business intelligence suite and your data and AI suite is set up to give you as reliable insights as possible. Insights that can really drive to make your decision making easier. [00:03:27] Himakara Pieris: What would be a good example use case that we can get into?[00:03:30] Spurthi: Let's take the example of L'Oreal Paris a big manufacturer and retailer of cosmetic products within the world. So typically during red carpet events, you would see a lot of different looks, a lot of different trends that come up in makeup and fashion, et cetera. So what L'Oreal saw this as a business opportunity, but essentially in order for them to know what was trending, what was Going to be sold later on.[00:03:54] Spurthi: What would essentially be the product of the year that people would go after? They would need to analyze thousands, [00:04:00] if not millions, of data trends of trends within the red carpet at a very real time basis. So that's where business analytics comes in, helps them essentially target these data sources, these trends, and give them the insights that they need.[00:04:16] Spurthi: To do their demand and supply planning so that way they're set up for the rest of the year following the trends that have been seen in red carpet events, et cetera. So that's essentially given them that opportunity of pursuing that business that business opportunity in a very, very reliable and fast paced manner with data backing up their decisions.[00:04:35] Spurthi: That way they know what product to make more of, what product to essentially stock up on. So that way when they do go to market, they have enough sources to sell enough products to sell without any issues. And their planning and budgeting for all of this is done in a more seamless manner. So I think that's one of the best case examples that I can give you.[00:04:54] Spurthi: Another really key one that's used AI in the last couple of years to make their [00:05:00] processes better has been this financial institution that's incorporated data discovery. Within their audit modules, so that way when auditors come in and take and do their audit processes, et cetera, they're able to automate and rely on AI to drive certain in insights and automate some of their redundant tasks and administrative tasks, which is essentially save them thousands of hours.[00:05:23] Spurthi: And thousands of auditors have been able to essentially incorporate that and improve their audit processes. So not only are they able to audit better, but they're also able to audit faster and essentially save billable hours for a lot of the organizations. So if you look at, you know, AI right now in any of the spaces, you know, you pick supply chain or if you pick retail consumer goods, really just any aspect outside or even inside your organization, you can see that there is a space for ai.[00:05:52] Spurthi: It's just how you kind of. 
Pave the path to get that space set up and utilized for you to ensure that [00:06:00] AI really just adds a value to your organization. [00:06:02] Himakara Pieris: What are some of the challenges that you experience as a PM in the, AI space? [00:06:10] Spurthi: Everybody wants to be involved with AI because it's become such a big buzzword. And what that essentially means is as product managers, you wanna cater to this demand, but you also want your clients to incorporate their data and AI-based solutions in the best way possible.[00:06:26] Spurthi: Taking the example of say the audit example that I'd mentioned with the financial institution, they knew that they had to incorporate AI in some form or the other, but showing that they're such a niche place and incorporating it with audit. What's the best way for them to go forward? And the best way you can do that and have your clients understand where they can incorporate AI is through.[00:06:49] Spurthi: Constantly having a conversation with them, understanding where the pain points are, and then trying to see where they, where your products can add value to it. If, you know, you go [00:07:00] into incorporating AI with the mindset of this is going to solve all of my organization's problems, then you're not really using AI to its capacity or its full potential.[00:07:09] Spurthi: Really understanding the vision of what you wanna do with AI is, The best way to go forward, you know, and that may mean you have to incorporate it at some form or the other, or that may mean you may not exactly need ai, but you might need like an analytics based solution or you might need maybe a data management solution.[00:07:27] Spurthi: So incorporating it really just depends on. Where your pain point is as an organization and how effectively can data and AI solve that? And the second aspect, I think the second challenge of it is, while people wanna get into the space of ai, they're also very scared of what that means. And they're very, very of of what the AI piece could mean, especially with the conversations of AI being unpredictable or not trustworthy or not explainable being so relevant in this day and age.[00:08:00][00:08:00] Spurthi: Again, all of these just come back to the conversation of having an understanding of where AI can sit within your organization and having a strong understanding of what goes into your ai. What type of insights are you getting out and how rele relevant and reliable are those insights. The best case example that I can give you is when you use Watson Studio, which is a product offered under the data fabric solution space under the data and AI banner.[00:08:29] Spurthi: At I B M, as soon as you come up with your ML model or machine learning model, you get to see exactly the accuracy scores of the model. So that essentially tells you where the, the model stands in terms of its reliability, where the model is faulty, where the model is not accurate. You get a real understanding of what's going right and what's going wrong with your model.[00:08:51] Spu
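Spurthi's point about knowing a model's accuracy and reliability before letting it back business decisions can be illustrated with a small, generic evaluation report. The snippet below is a toy scikit-learn example written to show the idea; it is not the Watson Studio workflow itself, and the dataset, model, and metric choices are stand-ins.

# Toy sketch of a pre-ship model report: train, predict, inspect quality metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
preds = model.predict(X_test)

# The kind of report a PM would review before trusting the model in a decision flow.
print(f"accuracy : {accuracy_score(y_test, preds):.3f}")
print(f"precision: {precision_score(y_test, preds):.3f}")
print(f"recall   : {recall_score(y_test, preds):.3f}")

The point is less the specific metrics than the habit: every model that feeds planning, forecasting, or audit decisions should ship with numbers like these attached, so stakeholders can see where it is reliable and where it is not.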
I'm excited to bring you this conversation with Anand Natu from Meta, where he is a product manager working on responsible AI. Before Meta, Anand was at Amazon, where he worked on brand protection using AI. During this conversation, Anand shared his thoughts on building large-scale AI systems, how to think about risk, and common mistakes AI product managers make.LinksAnand On LinkedInTranscript[00:00:00] Anand Natu: Big data walked so that AI could run, in the sense that it was the democratized language of using data to drive decision-making that eventually became more sophisticated forms of artificial intelligence.[00:00:12] Hima: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI.[00:00:22] Himakara Pieris: I'm excited to bring you this conversation with Anand Natu. Anand is a product manager working on responsible AI at Meta. Prior to Meta, Anand was at Amazon, where he worked on brand protection using AI. During this conversation, Anand shared his thoughts on building large-scale AI systems, how to think about risk, and common mistakes AI product managers make. Check out the show notes for links. [00:00:45] Himakara Pieris: Enjoy the show. [00:00:46][00:00:47] Himakara Pieris: Anand, welcome to the Smart Product Show. [00:00:50] Anand Natu: Thank you for having me. [00:00:51] Himakara Pieris: To start things off, could you tell us a bit about your background, what you're doing now, and your journey in product management in AI? [00:00:58] Anand Natu: Academically, my background's in engineering, traditional engineering. So my undergrad was in chemical engineering, and then I spent the first few years of my career as a management consultant working across a variety of industries, but with some focus on the technology space.[00:01:14] Anand Natu: I first got into product management and got interested in AI kind of around the same time because I, number one, kind of wanted to switch into product management to be closer to building things, which is one of the things that I really missed after a few years working in consulting, where I was a little bit further from that.[00:01:32] Anand Natu: And then, you know, secondarily, I was, at the time at least, interested in robotics and autonomous vehicles, and obviously AI is a big driver of innovation in that space. That was kind of the motivating factor for me to do my master's in robotics at Stanford, which is what I did to kind of transition from consulting into product.[00:01:52] Anand Natu: And as it turned out, I never really ended up doing that much professional work in that space, as my first role aside from an internship in [00:02:00] the mobile game space was actually at Amazon. After finishing my Masters, at Amazon I spent several years working in the consumer business, specifically on a team called Brand Protection. Among other things, the overarching mission of the team is to develop security and commerce features that power the experience for brands on Amazon,[00:02:24] Anand Natu: mostly for the third party selling business.
And so, you know, Basically the, the purpose of what our team's work was, was to create a better environment on Amazon for brands to organically grow and develop their presence and connect with the type of buyers and the type of audience that they're interested in marketing to.[00:02:39] Anand Natu: And, and basically compete with direct to consumer. Options and channels in the process. During that time, I worked that, that was kind of my first experience working with AI in a product management capacity. We worked on a number of different AI driven initiatives, including, you know, a big one that we'll get more into detail [00:03:00] on later involving figuring out how to use AI to basically identify the branding of products at massive scale massive scale, in this case being the entire Amazon catalog.[00:03:11] Anand Natu: And then after Amazon, I transitioned into my current role at Meta, which I've been at for just over a year now on the responsible AI team here, which is used to be just part of the central AI org and is now part of the central social impact organization in Meta. And I specifically work on fairness within responsible ai.[00:03:30] Anand Natu: So, We sort of research, develop, and ship software that's designed to ensure that all of the production AI systems used at meta work fairly and equally for different user groups on our services. And the, the scope of our team is fully horizontal, but in the last few years we've worked on just some of the more high priority business products that meta operates, like our ads, ads, products for, you know, ads, personalization, targeting some social.[00:03:57] Anand Natu: Some models that power social [00:04:00] features on Instagram, so on and so forth. And, and we also do some stuff in more sensitive areas like content integrity and moderation and, and things like that. [00:04:09] Himakara Pieris: Let's talk about the brand protection use case. Can you tell me about what was the driver, what did the original business case look like ?[00:04:17] Anand Natu: I'll start with a bit of context on kind of what I was working on and why it's important to to my org to Amazon in general. So at the time I was owning a program within my team called ASIN Stamping. So ASIN stands for Amazon Serial Identification Number. It's basically just the unique ID for a given product within the catalog.[00:04:37] Anand Natu: The point of the ASIN stamping program is to basically identify. For every single product in the catalog, the brand that that product belongs to, and stamp it or just basically populate a field in the catalog with a unique identifier called the brand id. And there's a separate data store called Brand Registry that is the sort of authoritative source of all brand IDs that exist within [00:05:00] the Amazon store worldwide.[00:05:02] Anand Natu: The reason why that's important is there are many reasons why that's important, but they fall into two big buckets. The first is catalog protections. If we know if we have an authoritative signal for what brand each product belongs to, we have a much more robust mechanism through which to vet and validate authentic branded selection on our platform.[00:05:23] Anand Natu: Prevent counterfeits, prevent malicious sellers from representing themselves as brand agents, so on and so forth. This is, this becomes really important for major brands who. Frequently have their stuff counterfeited, like luxury brands the N F L, things like that. 
There are several high profile examples of counterfeits becoming prevalent on Amazon in the past, which is part of the reason my team was created in the first place.[00:05:48] Anand Natu: So the security side is one big incentive for the business case of this program. The other is commercial benefits, which basically are related to. You know, features [00:06:00] ranging from ad targeting to advanced sort of content creation tooling that allow brands on Amazon to better represent and sort of.[00:06:10] Anand Natu: Show off their brand within the context of the Amazon e-commerce experience. And the purpose of those features is to basically better, like I said at the beginning, better allow brands to develop their brand equity and brand image within the Amazon ecosystem and connect to the kind of audience that they want to connect to.[00:06:29] Anand Natu: So that they start seeing Amazon as an actual home for brand development as opposed to what sellers historically have viewed Amazon as, which is, as you know, this, this e-commerce channel that's helpful for pushing volume and getting sales, but not necessarily a great place to like develop your brand image or find your really loyal, kind of like high retention customers.[00:06:50] Himakara Pieris: The two scenarios that you talked about there, one sounds like you're addressing a pain point, counterfeiting of products and which has a lot of implications. And then the second one sounds like you're [00:07:00] trying to create new value by providing a platform where the vendors could create brand equity.[00:07:06] Himakara Pieris: What was your process to assess the impact of these two sort of drivers at the start? [00:07:15] Anand Natu: Each of those two cases are directly sort of enabled or provided by features that basically leverage the data that the ASIN stamping program creates in the catalog. So the point of the stamping program is basically just to look at the entire catalog, which at the time was about three and a half billion products in total, give or take, and make sure that as many of them as possible have a brand id.[00:07:40] Anand Natu: Stamped on them. And that globally, the, the data that lives like the brand ID data across the entire catalog maintains an accuracy bar of 99% or higher. So that was the internal quality bar reset. And the mandate of the team was to basically push towards full coverage of the catalog with this high fidelity brand identity information [00:08:00] on all products listed on Amazon.[00:08:04] Anand Natu: The value of, so, In terms of the business impact of the downstream features, the power we typically represent that in in different ways depending on what we're looking at. If we're looking at commerce features, like things that help brands connect to customers, things like that, we would typically use things like downstream impact studies that demonstrate incremental lift in G M s or gross marketplace sales that result from brands identifying themselves as brand representatives.[00:08:30] Anand Natu: Basically going through all the relevant steps to. Onboard their brand into Amazon and leverage the benefits that we make available to vetted branded merchants on Amazon. And I, I forget the exact numbers, but we basically had some DSI studie
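One way to read the ASIN stamping mandate Anand describes, near-full catalog coverage while holding a 99%+ accuracy bar, is as a confidence-thresholding problem: auto-stamp a brand ID only when the classifier is sure enough, and route everything else to another resolution path such as human review. The sketch below is an illustration under those assumptions, not Amazon's implementation; the threshold value, field names, and routing are made up for the example.

# Sketch of the coverage-vs-accuracy trade-off behind brand-ID stamping.
from dataclasses import dataclass

@dataclass
class BrandPrediction:
    asin: str
    brand_id: str
    confidence: float   # model-estimated probability the brand_id is correct

def route_predictions(preds, auto_stamp_threshold: float = 0.99):
    """Split predictions into auto-stamped items and items needing review."""
    stamped = [p for p in preds if p.confidence >= auto_stamp_threshold]
    review = [p for p in preds if p.confidence < auto_stamp_threshold]
    return stamped, review

if __name__ == "__main__":
    batch = [
        BrandPrediction("B00EXAMPLE1", "BRAND_123", 0.997),
        BrandPrediction("B00EXAMPLE2", "BRAND_456", 0.91),
        BrandPrediction("B00EXAMPLE3", "BRAND_123", 0.999),
    ]
    stamped, review = route_predictions(batch)
    coverage = len(stamped) / len(batch)
    print(f"auto-stamped {len(stamped)}/{len(batch)} items "
          f"(coverage {coverage:.0%}); {len(review)} sent to review")

Raising the threshold protects the quality bar at the cost of coverage, which is exactly the tension a program like this manages as the models improve.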
I’m excited to bring you this conversation with Tarig Khairalla. Tarig is an AI product manager at Kensho; Kensho is the AI and innovation hub for S&P Global. They are focused on helping customers unlock insights from messy data. In this episode, Tarig talked about the day-in-the-life of an AI product manager, how to overcome adoption challenges, and his approach to collaborating with both engineering partners and customers.LinksTarig on LinkedInAbout Kensho Transcript[00:00:00] Tarig Khairalla: Being a product manager in this space right now is very exciting. Things are moving fast. There are opportunities everywhere to leapfrog competitors and other companies out there. And I think it's an opportunity for a lot of product managers now to get into this space and really make a difference for a lot of customers.[00:00:20] Hima: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI.[00:00:30] Himakara Pieris: I'm excited to bring you this conversation with Tarig Khairalla. Tarig is an[00:00:35] Himakara Pieris: AI product manager at Kensho. Kensho is the AI and innovation hub for S&P Global. They're focused on helping customers unlock insights from messy data. In this episode, Tarig talked about the day-in-the-life of an AI product manager, how to overcome adoption challenges, and his approach to collaborating with both engineering partners and customers. Check the show notes for links.[00:00:54][00:00:56] Himakara Pieris: Welcome to the Smart Product Show, Tarig. [00:00:58] Tarig Khairalla: Thank you for having me.[00:01:00][00:01:00] Himakara Pieris: Could you tell us a bit about your background, how you started, and how you got into managing AI products?[00:01:06] Tarig Khairalla: Interestingly, coming out of school, I did not immediately become a product manager. I graduated with an accounting, finance, and economics degree. And I worked in the accounting space actually for the first few years at Ernst & Young. And sometime during those first few years of working is when I got involved in the AI and machine learning space. And you know, from there, stuck with it. I worked at Verizon afterwards for a little bit, and then here I am now with Kensho Technologies as a PM in the AI and machine learning space. [00:01:38] Himakara Pieris: Tell us a bit about what Kensho does and specifically what Kensho does with AI and machine learning to help companies work with messy data.[00:01:47] Tarig Khairalla: As you mentioned, at Kensho we develop AI solutions that unlock insights that are hidden in messy data. If you think about the business and finance space, the majority of data created has no standard format, [00:02:00] right? You know, your typical professional gathers information from images,[00:02:05] Tarig Khairalla: videos, text, audio, and so much more. And unfortunately, as a result of that, critical insights are generally trapped and hard to uncover in that data. And the reality is that, you know, data today is being created at a rate much faster than before. And so the solutions we build are tackling problems that many organizations today are facing.[00:02:30] Tarig Khairalla: You know, we build a variety of machine learning products that serve to structure, unify, and contextualize data.
We have products that are used on a daily basis for things like transcribing voice to text, extracting financial information from PDF documents, identifying named entities like people and places within text, and understanding sentences and paragraphs to tag them with a topic or concept that is being discussed.[00:02:59] Tarig Khairalla: Right? So, [00:03:00] you know, at the end of the day, what we're looking to do is make information more accessible, easier to use, and allow our customers to discover hidden insights much faster than they could before and ultimately, you know, enable them to make decisions with conviction. [00:03:15] Himakara Pieris: Do you target specific industry verticals or specific business processes? [00:03:21] Tarig Khairalla: Yeah, our products are mainly geared towards, you know, your finance workflows. A lot of the models and products that we built were trained on financial data; the extraction capabilities that we have are trained on financial documents, and the transcription products that we provide are trained on earnings calls, for example, and many other types of financial related data.[00:03:44][00:03:44] Himakara Pieris: I presume you expect a higher level of accuracy because your training data set is focused on domain specific data. [00:03:51] Tarig Khairalla: That's the angle, yes. That we are focused on business and finance to make sure that we're the best at developing machine [00:04:00] learning solutions for the finance professional.[00:04:02] Himakara Pieris: As a PM, how do you think about ways to improve the product or add extra value? Is it primarily around increasing accuracy or pushing into adjacent use cases? How do you build on top of what you already have as a PM? [00:04:19] Tarig Khairalla: It's a little bit of both, definitely.[00:04:21] Tarig Khairalla: Making sure that we are continuing to push the boundaries in terms of what's possible from an accuracy perspective across all our products. But you know, the other thing we do too is make sure that we can provide value beyond just what we offer with one product. For example, you know, a lot of our[00:04:36] Tarig Khairalla: capabilities are synergistic in nature. So you can add something like the product called Kensho Extract to another Kensho product called Kensho Classify, to be able to now provide something that is beyond just one product or one solution that users can drive value from.[00:04:56] Himakara Pieris: How is it different being a product manager working [00:05:00] on an AI product versus being a product manager who is working on a traditional software product? [00:05:06] Tarig Khairalla: I think that there's a lot of parallels and similarities, but being in the AI space, you add a layer where now you have to work really closely with machine learning and data scientists, and you also add an element of uncertainty, [00:05:21] Tarig Khairalla: because as we're building AI and machine learning products, a lot of times we're uncertain whether a given experiment is gonna succeed. And so there's a little bit more uncertainty around it. There's a little bit more in terms of the discipline, right? You have to learn a little bit about how to build machine learning models, [00:05:37] Tarig Khairalla: the life cycle of a machine learning model.
You have to learn that and really kind of be able to implement it in your day-to-day, and build processes around that to make sure that you're still delivering what you need to deliver for your customers and clients. [00:05:51] Himakara Pieris: How do you plan for the uncertainty that is common in AI and machine learning product life cycles?[00:05:57] Tarig Khairalla: Certainly what's important to do [00:06:00] is have specific measures of success and targets in mind before starting, let's say, for example, a specific experiment. But I'll also say that it's important to timebox your activities. So you know, when you're scoping out a specific experiment or a specific project, understanding what your north star is going to be.[00:06:20] Tarig Khairalla: Understanding some of the measures of success that you're looking to go after and making sure that you're timeboxing what you're looking to achieve in a reasonable amount of time. Right? Typically what happens with machine learning experimentation is that, you know, you can experiment for a very long time, but is that going to be valuable?[00:06:38] Tarig Khairalla: Is there a return on investment in, you know, spending two to three to four months of time experimenting on something? Or are you better off pivoting to something else that can potentially drive value elsewhere? [00:06:50] Himakara Pieris: Do you have a data science and machine learning team that's embedded inside a product team, or is there a separation between the data science and machine learning team and the product team?[00:07:00][00:07:00] Tarig Khairalla: The way we think about things is, there are three main key players in developing a product.[00:07:05] Tarig Khairalla: We've got the product manager, we've got the tech lead on the engineering side, the application side, and then there's the tech lead on the machine learning side. So the three of them combined with a designer is usually how we approach building products at Kensho. [00:07:20] Himakara Pieris: What does that interface look like between, let's say, the machine learning lead and the engineering lead, and also between machine learning and product, so those different interfaces? [00:07:31] Tarig Khairalla: We work really closely together. I'd say that we touch base regularly on a weekly basis to kind of talk about what we're looking to do. Right. Whether it's a new product that we're building out, if the machine learning team is in kind of research mode, we make sure that we're continuing to talk to our tech leads to make sure that we build expectations ahead of time.[00:07:56] Tarig Khairalla: So if a model is being built out and it's in research [00:08:00] mode, maybe there's not a lot of effort that needs to be done on the kind of backend side of things, on the application side. But once that model graduates to being more of a potential one that's gonna be productionized, we're already kind of established.[00:08:14] Tarig Khairalla: We're already on the same page as far as, well, there's a model coming your way; we have to think about setting up that service up front. So I would say it's very collaborative in nature as we're scoping out.
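Tarig's advice to define the north star and the timebox before an experiment starts can be captured in something as small as an experiment spec with a go/no-go rule. The fields, target value, and decision logic below are illustrative assumptions rather than Kensho's actual process.

# Lightweight sketch of a timeboxed ML experiment spec with a go/no-go check.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ExperimentSpec:
    name: str
    north_star_metric: str      # e.g. extraction F1 on a held-out finance set
    success_target: float       # minimum value that justifies productionizing
    start: date
    timebox_weeks: int

    def deadline(self) -> date:
        return self.start + timedelta(weeks=self.timebox_weeks)

    def decision(self, observed: float, today: date) -> str:
        if observed >= self.success_target:
            return "graduate to productionization"
        if today >= self.deadline():
            return "stop or pivot: timebox exhausted without hitting target"
        return "keep experimenting"

if __name__ == "__main__":
    spec = ExperimentSpec(
        name="table-extraction-v2",
        north_star_metric="F1 on held-out filings",
        success_target=0.90,
        start=date(2023, 5, 1),
        timebox_weeks=6,
    )
    print(spec.decision(observed=0.87, today=date(2023, 6, 20)))

Writing the target and the deadline down before the first run is what makes the later pivot-or-persist conversation a decision rather than a negotiation.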
My guest today is Vibhu Arora. Vibhu’s product career spans both startups and top-tier tech companies. Vibhu is a group product manager focused on AI and ML at Walmart. Prior to that, he was a product manager at Facebook focused on the AR/VR e-commerce experience. In this conversation, Vibhu shared with us some of the ways Walmart is using AI and machine learning to improve the customer experience, his advice to product managers who want to get into managing AI and ML products, and his thoughts on how to build a business case. LinksVibhu on LinkedInTranscript[00:00:00] Vibhu Arora: We created a system where Walmart would actually predict your most likely to use payment vehicle for the given transaction. We make a directed bet that the customer is actually going to use a specific payment method and auto select that, which basically removes one friction point from the journey. So one less friction point. So better conversion, better for customers, better for the business. [00:00:28] Himakara Pieris: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI.[00:00:39] Himakara Pieris: My guest today is Vibhu Arora. Vibhu’s product career spans across both startups and top-tier tech companies. Vibhu is a group product manager focused on AI and ML at Walmart. Prior to that he was a product manager at Facebook focused on the AR/VR e-commerce experience. [00:00:52] Himakara Pieris: In this conversation, Vibhu shared with us some of the ways Walmart is using AI and machine learning to improve the customer experience, his advice to product managers who want to get into managing AI and ML products, and his thoughts on how to build a business case. [00:01:07][00:01:08] Himakara Pieris: Vibhu, welcome to the Smart Product Show. [00:01:10] Vibhu Arora: Thank you, Hima, for having me here. I'm excited to be part of the podcast today. [00:01:14] Himakara Pieris: We've seen Walmart investing heavily in AI and ML to transform the customer experience. What are some of the interesting AI use cases you've had the chance to work on at Walmart? [00:01:24] Vibhu Arora: I started out with a product called Extended Warranties. It's a very neat concept, especially in electronics and home business verticals where you're buying, like, a TV.[00:01:38] Vibhu Arora: The TV comes with a one-year warranty, but at the time of purchase, you get the option to buy an extended warranty. And this option exists only at the time of purchase. You cannot buy it later. So it has kind of like this timing component. And it happens to be a winner for both the customer and the business.[00:02:01] Vibhu Arora: The customer wins because it's peace of mind. You have like four kids in your home, you're buying a thousand-dollar TV. You want peace of mind, put in 50 bucks more. And even if the TV is broken into pieces, you know, it'll be replaced. The business is working off of statistics: how many TVs get broken, and how the pricing works accordingly.[00:02:22] Vibhu Arora: But ultimately, 90 to 95% of warranties never get claimed. Means like things don't break. So it's a highly profitable business. [00:02:34] Himakara Pieris: Sounds like it's a good example of how you can essentially use AI-enabled service products to increase the cart value and the bottom line. Are there any other common types of finance or service use case examples where you're using machine learning?
[00:02:50] Vibhu Arora: One of the most common use cases for machine learning is fraud detection. If you generalize it one step further, it could even be called anomaly detection.[00:03:03] Vibhu Arora: One of the use cases, which we used for the credit card application process, was that of anomaly detection, or more specifically fraud detection. When you actually apply for the credit card, a bunch of inputs are taken in: your name and, you know, sort of like financial information.[00:03:25] Vibhu Arora: And based on that, the machine learning model actually performs a lot of checks and assesses the risk, and then spits out kind of a binary answer. Yes or no, you know? Yes, Walmart and its partner, Capital One in this case, are willing to take the risk to give credit to you, versus no, this profile is too risky to give credit to.[00:03:54] Vibhu Arora: The other interesting use case we unlocked was in payments, in checkout. We built a model where, if you actually have a lot of payment vehicles in your account, which means, you know, you have your debit card, you have your credit card, you have your PayPal, you have, you know, whatnot, you have like a lot of payment vehicles in your account. And guess what? Some of these, you know, are running low on balance, et cetera.[00:04:24] Vibhu Arora: So there's like different states of each of these payment vehicles. So we created a system where Walmart would actually predict your most likely-to-use payment vehicle for the given transaction. So instead of being super neutral about the transaction, that, okay, you're going through this,[00:04:50] Vibhu Arora: and now you can select from any of these options, we used to make like a directed bet that the customer is actually going to use a specific payment method and auto-select that, which basically removes one friction point from the journey. Like the most likely-to-use payment method is automatically selected.[00:05:12] Vibhu Arora: So one less friction point. So better conversion, better for customers, better for the business. [00:05:17] Himakara Pieris: Walmart, I think, pioneered trending queries in e-commerce as well. Can you share a bit about that? [00:05:24] Vibhu Arora: If you start your search journey on the app, you start with the autocomplete screen. And on the autocomplete screen, there is actually a stage when you have not even entered a single letter, which we call the starting screen or the landing screen,[00:05:43] Vibhu Arora: which is the blank slate screen. So on this screen, we envisioned, the strategy here was, we want people to learn and know more about trending products, [00:06:00] which, you know, other people are using. And again, like most good products, it's a win-win for both customers and Walmart.[00:06:07] Vibhu Arora: So the customer gets to learn about the trends, and they can get to come out of their FOMO because they caught on to the trend. And Walmart actually wins as well, 'cause it means like more product sales and a happy customer. And the feature is, on this autocomplete landing page, below your previous suggestions,[00:06:32] Vibhu Arora: we started showing trending queries and, we weren't actually expecting it to be as big of a win, but it actually did take off pretty well.
and we, we, we did experiments with it and we actually got like, you know, pretty solid feedback , from the experiments and ended up launching it.[00:06:57] Vibhu Arora: So it's now launched fully, so you can actually like [00:07:00] see it on the app as well. and again, this feature also, is built on, on machine learning models. It takes in a bunch of signals from the crowd, and has like a, puts them into, through a definition of trending versus non trending. And depending on whether yes or no, a particular query gets.[00:07:26] Vibhu Arora: Stamped as a trending query or not. So, again, you know, kind of like a powerful, cool, fun, implementation and use of ai, in, in Walmart search ecosystem. [00:07:40] Himakara Pieris: In couple of those use cases, they seem to be directly tied with improving UX and by doing that, removing friction and by removing friction increasing revenue generation, .[00:07:52] Himakara Pieris: One of the challenges that we hear all the time is difficulty that PM's experiencing in making a business case [00:08:00] because they're having trouble quantifying the impact at the start. Especially if you are at an organization that doesn't have lot of machine learning and data science resources, you're trying to venture into this area, [00:08:12] Himakara Pieris: Given this type of context, what's your advice to a product manager? On how to build a business case. [00:08:19] Vibhu Arora: What your questions sort of highlights is if you're, if you're not like a mature ai culture or a mature AI organization it can be very challenging to, to sort of like build this or cultivate this sort of mindset. So Absolutely. I think, you know, it all, it all begins with like empathy and it begins with the awareness.[00:08:48] Vibhu Arora: If I'm a senior product manager in a, in a situation where we want to try our first AI product and, and AI sort of understand, never [00:09:00] existed before, so I would, I would want to have like an awareness and empathy of my. Organization , because obviously, you know, I would need out of many things I.[00:09:11] Vibhu Arora: Lead alignment and funding from stakeholders peers and senior leadership and you know, how these people are thinking and where their minds is at. Is, is fundamentally going to be material in, in unlocking you know, this chart or not. And by the way, this, this also like, is one of the biggest learnings I had working in startup versus working in large companies is like in startup, like generally the strength of the idea wins and the the time to it takes to test something and pivot is so [00:10:00] rapid.[00:10:00] Vibhu Arora: It's, it's, it's unbelievable. You have a good idea. You, you test it and it doesn't work and you pivot. So it's like a small boat. You can make turns easily, but in large companies, like the, the boat is bigger. So pivoting is extremely costly. Like changing direction is one of the hardest things to do for big ships.[00:10:22] Vibhu Arora: So in bigger companies, Aligning with, with multiple organizations, multiple team members, stakeholders, peers, leaders on what d
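The payment pre-selection Vibhu describes boils down to scoring every payment vehicle on file for the current transaction and auto-selecting the one with the highest predicted propensity. Here is a minimal sketch of that decision step; the scoring heuristic, feature names, and data layout are stand-ins for a trained propensity model, not Walmart's implementation.

# Sketch: pick the payment method a customer is most likely to use at checkout.
def propensity(method: dict, cart_total: float) -> float:
    """Stand-in for an ML propensity model. Combines two simple signals:
    how often the method was used recently and whether it can cover the cart."""
    score = 0.7 * method["recent_use_rate"]
    if method["available_balance"] is not None and method["available_balance"] < cart_total:
        score -= 0.5     # penalize methods likely to be declined
    return score

def preselect_payment(methods: list[dict], cart_total: float) -> dict:
    """Return the payment method to auto-select at checkout."""
    return max(methods, key=lambda m: propensity(m, cart_total))

if __name__ == "__main__":
    on_file = [
        {"name": "debit_card", "recent_use_rate": 0.6, "available_balance": 40.0},
        {"name": "credit_card", "recent_use_rate": 0.3, "available_balance": None},
        {"name": "paypal", "recent_use_rate": 0.1, "available_balance": None},
    ]
    chosen = preselect_payment(on_file, cart_total=120.0)
    print(f"auto-select: {chosen['name']}")   # debit is penalized (low balance), so credit_card wins

The customer can still override the choice; the model only removes the default-selection friction for the common case, which is where the conversion lift comes from.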
Waleed is a product manager at Untether AI. Untether AI is building a processor that is designed specifically for AI workloads. During this conversation, Waleed shared his thoughts on why we need a new processor architecture for AI, the cost implications of running AI workloads, and what it would mean to reduce the cost of operating AI. LinksWaleed on LinkedInUntether AI
Sid is an AI product manager at GE HealthCare. In this episode, we talk about Sid’s approach to identifying AI use cases, his workflow from idea to production as an AI product manager, and his advice for product managers interested in leading AI products. Please check the notes section for links.LinksSid on LinkedInCRISP-DM: Towards a Standard Process Model for Data MiningAIR Recon DL paper on arXiv
Wissem is the VP of innovation at BusPas — a company that is creating a building block for smart mobility. BusPas uses their SCiNe AIOT device to collect data and make predictions on the edge.LinksWissem Maazoun on LinkedInLearn more about BusPasWissem showing us BusPas' SciNe device
Vinod Subramanian is the Chief Data and Product Development Officer at Syapse. Syapse collaborates with health systems, life sciences companies, and the Food and Drug Administration to advance care for patients with serious diseases through precision medicine powered by AI. During this conversation, he shared his thoughts on how to identify the right use cases for AI, how to approach explainability, and how to think about talent when you are starting off with an AI project.LinksVinod Subramanian (@VinDNA) / TwitterSyapse on the web
In the fall of 2022, we set out to interview a hundred product leaders to learn what they perceive to be the challenges and opportunities associated with embedding AI into their own products. When we compiled all their responses, it turned out there are ten types of challenges. The goal of this podcast is to interview people who have already gone through these challenges and have come up with ways to work through them. We look forward to identifying, celebrating, and learning from industry leaders who are using AI to deliver real-world value and sharing some actionable advice and frameworks with all of you.LinksGet the research report => https://www.hydra.ai/reportLearn more about the podcast and submit guest suggestions => https://www.smartproducts.showGet in touch: hima@hydra.ai



