The Analytics Power Hour

Author: Michael Helbling, Moe Kiss, Tim Wilson, Val Kroll, and Julie Hoyer

Subscribed: 958 · Played: 22,850

Description

Attend any conference on any topic and you will hear people saying afterward that the best and most informative discussions happened in the bar after the show. Read any business magazine and you will find an article saying something along the lines of "Business Analytics is the hottest job category out there, and there is a significant lack of people, process and best practice." In this case the conference was eMetrics, the bar was… multiple, and the attendees were Michael Helbling, Tim Wilson and Jim Cain (Co-Host Emeritus). After a few pints and a few hours of discussion about the cutting edge of digital analytics, they realized they might have something to contribute back to the community. This podcast is one of those contributions. Each episode is a closed topic and an open forum - the goal is for listeners to enjoy listening to Michael, Tim, and Moe share their thoughts and experiences and hopefully take away something to try at work the next day. We hope you enjoy listening to the Digital Analytics Power Hour.
289 Episodes
Imagine a world where business users simply fire up their analytics AI tool, ask for some insights, and get a clear and accurate response in return. That’s the dream, isn’t it? Is it just around the corner, or is it years away? Or is that vision embarrassingly misguided at its core? The very real humans who responded to our listener survey wanted to know where and how AI would be fitting into the analyst’s toolkit, and, frankly, so do we! Maybe they (and you!) can fire up ol’ Claude and ask it to analyze this episode with Juliana Jackson from the Standard Deviation podcast and Beyond the Mean Substack to find out! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Did you know that, upon closer inspection, many a statistical test will reveal that "it's just a linear model" (#IJALM)? That wound up being a key point that our go-to statistician, Chelsea Parlett-Pelleriti, made early and often on this episode, which is the next installment in our informally recurring series of shows digging into specific statistical methods. The method for this episode? ANOVA! As a jumping-off point to think about how data works—developing intuition about mean and variance (and covariates) while dipping our toes into F-statistics, family-wise error rates (FWER), and even a little Tukey HSD—ANOVA's not too shabby! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
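For the curious, the F-statistic at the heart of one-way ANOVA is short enough to compute by hand. This is a minimal sketch, not material from the episode, and the group data is invented for illustration:

```python
# One-way ANOVA F-statistic from scratch: the ratio of between-group
# variance to within-group variance. Sample data is made up.

def one_way_anova(groups):
    """Return the F-statistic for a list of samples (one list per group)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: how far each group mean sits
    # from the grand mean, weighted by group size
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation around each group's own mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    return (ssb / (k - 1)) / (ssw / (n - k))

f = one_way_anova([[4.1, 4.3, 3.9], [5.0, 5.2, 4.8], [6.1, 5.9, 6.0]])
# Well-separated group means with tight within-group spread -> large F
```

A large F relative to the appropriate F-distribution is what lets you reject the hypothesis that all group means are equal — and, per #IJALM, the same result falls out of fitting a linear model with group-membership dummy variables.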
Product managers for BI platforms have it easy. They "just" need to have the dev team build a tool that gives all types of users access to all of the data they should be allowed to see in a way that is quick, simple, and clear while preventing them from pulling data that can be misinterpreted. Of course, there are a lot of different types of users—from the C-level executive who wants ready access to high-level metrics all the way to the analyst or data scientist who wants to drop into a SQL flow state to everyone in between. And sometimes the tool needs to provide structured dashboards, while at other times it needs to be a mechanism for ad hoc analysis. Maybe the product manager’s job is actually…impossible? Past Looker CAO and current Omni CEO Colin Zima joined this episode for a lively discussion on the subject! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It’s a process few people genuinely enjoy, but it’s one which we all find ourselves going through periodically in our careers: landing a new job. We grabbed MajorData himself, Albert Bellamy, for a wide-ranging discussion about the ins and outs of that process: LinkedIn invitation etiquette (and, more importantly, effectiveness), how networking is like spousal communication (!), the usefulness of reducing the mental load required of recruiters and hiring managers, and much, much more! You might just want to drop and do twenty push-ups by the end of the episode! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Synthetic data: it's a fascinating topic that sounds like science fiction but is rapidly becoming a practical tool in the data landscape. From machine learning applications to safeguarding privacy, synthetic data offers a compelling alternative to real-world datasets that might be incomplete or unwieldy. With the help of Winston Li, founder of Arima, a startup specializing in synthetic data and marketing mix modelling, we explore how this artificial data is generated, where its strengths truly lie, and the potential pitfalls to watch out for! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Is it just us, or are data products becoming all the rage? Is Google Trends a data product that could help us answer that question? What actually IS a data product? And does it even matter that we have a good definition? If any of these questions seem like they have cut and dried answers, then this episode may just convince you that you haven't thought about them hard enough! After all, what is more on-brand for a group of analysts than being thrown a question that seems simple only to dig in to realize that it is more complicated than it appears at first blush? On this episode, Eric Sandosham returned as a guest inspired by a Medium post he wrote a while back so we could all dive into the topic and see what we could figure out! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
No matter how simple a metric's name makes it sound, the details are often downright devilish. What is a website visit? What is revenue? What is a customer? Go one level deeper with a metric like customer acquisition cost (CAC) or customer lifetime value (CLV or LTV, depending on how you acronym), and things can get messy in a hurry. In some cases, there are multiple "right" definitions, depending on how the metric is being used. In some cases, there are incentive structures to thumb the definitional scale one way or another. In some cases, a hastily made choice becomes a well-established, yet misguided, norm. In some cases, public companies simply throw their hands up and stop reporting a key metric! Dan McCarthy, Associate Professor of Marketing at the Robert H. Smith School of Business at the University of Maryland, spends a lot of time and thought culling through public filings and disclosures therein trying to make sense of metric definitions, so he was a great guest to have to dig into the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Data that tracks what users and customers do is behavioral data. But behavioral science is much more about why humans do things and what sorts of techniques can be employed to nudge them to do something specific. On this episode, behavioral scientist Dr. Lindsay Juarez from Irrational Labs joined us for a conversation on the topic. Nudge vs. sludge, getting uncomfortably specific about the behavior of interest, and even a prompting of our guest to recreate and explain a classic Seinfeld bit! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We finally did it: devoted an entire episode to AI. And, of course, by devoting an episode entirely to AI, we mean we just had GPT-4o generate a script for the entire show, and we just each read our parts. It's pretty impressive how the result still sounds so natural and human and spontaneous. It picked up on Tim's tendency to get hot and bothered, on Moe's proclivity for dancing right up to the edge of oversharing specific work scenarios, on Michael's knack for bringing in personality tests, on Val's patience in getting the whole discussion to get back on track, and on Julie being a real (or artificial, as the case may be?) Gem. Even though it includes the word "proclivity," this show overview was entirely generated without the assistance of AI. And yet, it’s got a whopper of a hallucination: the episode wasn’t scripted at all! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
How is an outlier in the data like obscenity? A case could be made that they're both the sort of thing where we know it when we see it, but that can be awfully tricky to perfectly define and detect. Visualize many data sets, and some of the data points are obvious outliers, but just as many (or more) fall in a gray area—especially if they're sneaky inliers. z-score, MAD, modified z-score, interquartile range (IQR), time-series decomposition, smoothing, forecasting, and many other techniques are available to the analyst for detecting outliers. Depending on the data, though, the most appropriate method (or combination of methods) for identifying outliers can change! We sat down with Brett Kennedy, author of Outlier Detection in Python, to dig into the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page. 
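Two of the techniques the episode names — the z-score and the interquartile range (IQR) — fit in a few lines each, and comparing them on the same invented data shows why the choice of method matters (this sketch is ours, not the guest's):

```python
# Two simple outlier-detection methods on made-up data. Note how the
# extreme point inflates the standard deviation enough to "mask" itself
# from the z-score test, while the IQR fences still catch it.
import statistics

def zscore_outliers(xs, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) / sd > threshold]

def iqr_outliers(xs, k=1.5):
    """Flag points outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(xs, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [x for x in xs if x < lo or x > hi]

data = [10, 11, 12, 10, 11, 12, 10, 11, 95]
zscore_outliers(data)  # misses 95: the outlier drags the stdev up
iqr_outliers(data)     # flags 95: quartiles are robust to the extreme value
```

The masking effect shown here is one reason robust methods like MAD and IQR (or combinations of methods) are often preferred over the plain z-score.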
Do you cringe at the mere mention of the word, "insights"? What about its fancier cousin, "actionable insights"? We do, too. As a matter of fact, on this episode, we discovered that Moe has developed an uncontrollable reflex: any time she utters the word, her hands shoot up uncontrolled to form air quotes. Alas! Our podcast is an audio medium! What about those poor souls who got hired into an "Insights & Analytics" team within their company? Egad! Nonetheless, inspired by an email exchange with a listener, we took a run at the subject with Chris Kocek, CEO of Gallant Branding, who both wrote a book and hosts a podcast on the topic of insights! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Why? Or… y? What is y? Why, it's mx + b! It's the formula for a line, which is just a hop, a skip, and an error term away from the formula for a linear regression! On the one hand, it couldn't be simpler. On the other hand, it's a broad and deep topic. You've got your parameters, your feature engineering, your regularization, the risks of flawed assumptions and multicollinearity and overfitting, the distinction between inference and prediction... and that's just a warm-up! What variables would you expect to be significant in a model aimed at predicting how engaging an episode will be? Presumably, guest quality would top your list! It topped ours, which is why we asked past guest Chelsea Parlett-Pelleriti from Recast to return for an exploration of the topic! Our model crushed it. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
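The episode's starting point — that linear regression is just y = mx + b plus an error term — can be made concrete with an ordinary-least-squares fit in a few lines. This is a sketch of the standard closed-form solution, with synthetic points that happen to lie near y = 2x + 1:

```python
# Simple linear regression (one feature) via the closed-form
# least-squares estimates. Data is synthetic, chosen near y = 2x + 1.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: sample covariance of (x, y) divided by the variance of x
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    # Intercept: the fitted line passes through the point of means
    b = mean_y - m * mean_x
    return m, b

m, b = fit_line([0, 1, 2, 3, 4], [1.1, 2.9, 5.2, 7.0, 9.1])
# m and b land close to the generating values of 2 and 1
```

Everything the episode layers on top — regularization, multicollinearity checks, the inference-versus-prediction distinction — starts from exactly this estimator.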
In celebration of International Women’s Day, this episode of Analytics Power Hour features an all-female crew discussing the challenges and opportunities in AI projects. Moe Kiss, Julie Hoyer, and Val Kroll dive into this AI topic with guest expert Kathleen Walch, who co-developed the CPMAI methodology and the seven patterns of AI (super helpful for your AI use cases!). Kathleen has helpful frameworks and colorful examples to illustrate the importance of setting expectations upfront with all stakeholders and clearly defining what problem you are trying to solve. Her stories are born from the painful experiences of AI projects being run like application development projects instead of the data projects that they are! Tune in to hear her advice for getting your organization to adopt a data-centric methodology for running your AI projects—you’ll be happier than a camera spotting wolves in the snow! 🐺❄️🎥 For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Every listener of this show is keenly aware that they are enabling the collection of various forms of hyper-specific data. Smartphones are movement and light biometric data collection machines. Many of us augment this data with a smartwatch, a smart ring, or both. A connected scale? Sure! Maybe even a continuous glucose monitor (CGM)! But… why? And what are the ramifications both for changing the ways we move through life for the better (Live healthier! Proactive wellness!) and for the worse (privacy risks and bad actors)? We had a wide-ranging discussion with Michael Tiffany, co-founder and CEO of Fulcra Dynamics, that took a run at these topics and more. Why, it's possible you'll get so excited by the content that one of your devices will record a temporary spike in your heart rate! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We all know that data doesn't speak for itself, but what happens when multiple instruments of measurement contain flaws or gaps that, on their own, impede our ability to measure what matters? Turning to our intuition and triangulating what's happening in the broader macro sense can often deepen our understanding of our customers' ever-changing choices, opinions, and actions. Thankfully, we had Erika Olson, co-founder of fwd. — which in our opinion is essentially the Freakonomics of marketing consultancies — join Tim, Moe, and Val for this discussion to dive into some real-world examples of things that are inherently hard to measure and ways to overcome those challenges. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Every so often, one of the co-hosts of this podcast co-authors a book. And by “every so often” we mean “it’s happened once so far.” Tim, along with (multi-)past guest Dr. Joe Sutherland, just published Analytics the Right Way: A Business Leader's Guide to Putting Data to Productive Use, and we got to sit them down for a chat about it! From misconceptions about data to the potential outcomes framework to economists as the butt of a joke about the absolute objectivity of data (spoiler: data is not objective), we covered a lot of ground. Even accounting for our (understandable) bias on the matter, we thought the book was a great read, and we think this discussion about some of the highlights will have you agreeing! Order now before it sells out! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
The start of a new year is a great time for reflection as well as planning for the year ahead. Join us for this special bonus episode where we talk through some of our favorite learnings and takeaways from our 2024 listener survey and some of the ways we’ve already been able to put that feedback into practice! We also have some freebies and helpful nuggets to share with our listeners, so be sure to tune in to learn more. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Every year kicks off with an air of expectation. How much of our professional life in 2025 is going to look a lot like 2024? How much will look different, but in ways we can already anticipate? What will surprise us entirely—the unknown unknowns? By definition, that last one is unknowable. But we thought it would be fun to sit down with returning guest Barr Moses from Monte Carlo to see what we could nail down anyway. The result? A pretty wide-ranging discussion about data observability, data completeness vs. data connectedness, structured data vs. unstructured data, and where AI sits as an input, an output, and a processing engine. And more. Moe and Tim even briefly saw eye to eye on a thing or two (although maybe that was just a hallucination). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
#261: 2024 Year in Review

2024-12-24 · 01:03:33

Ten years ago, on a cold dark night, a podcast was started, 'neath the pale moonlight. There were few there to see (or listen), but they all agreed that the show that was started looked a lot like we. And here we are a decade later with a diverse group of backgrounds, perspectives, and musical tastes (see the lyrics for "Long Black Veil" if you missed the reference in the opening of this episode description) still nattering on about analytics topics of the day. It's our annual tradition of looking back on the year, albeit with a bit of a twist in the format for 2024: we took a few swings at identifying some of the best ideas, work, and content that we'd come across over the course of the year. Heated exchanges ensued, but so did some laughs! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Data storytelling is a perpetually hot topic in analytics and data science. It's easy to say, and it feels pretty easy to understand, but it's quite difficult to consistently do well. As our guest, Duncan Clark, co-founder and CEO of Flourish and Head of Europe for Canva, described it, there's a difference between "communicating" and "understanding" (or, as Moe put it, there's a difference between "explaining" and "exploring"). Data storytelling is all about the former, and it requires hard work and practice: being crystal clear as to why your audience should care about the information, being able to boil the story down to a single sentence (and then expand from there), and crafting a narrative that is much, much more than an accelerated journey through the path the analyst took with the data. Give it a listen and then live happily ever after! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Comments (3)

Vassili Savinov

Great episode. Fast simulations, precise enough to drive decisions, repeat to get uncertainty. Love it

May 12th
Reply

Leonardo Dantas

what is the book she said at 13:40? i really would like to know

Jun 4th
Reply (1)