Predictive analytics to forecast the Oscars and your business results

Update: 2020-02-06

Description

Callahan’s senior business analyst, James Meyerhoffer-Kubalik, and longtime film critic and Oscar enthusiast Eric Melin built an algorithm that uses predictive analytics to forecast which movie has the best chance of winning Best Picture at the 2020 Oscars. Their predictive model is capable of picking the winner with 86% accuracy.


Callahan uses the same predictive models to forecast business results. In this podcast, James and Eric discuss the model used to predict the Oscars, and how the same approach and strategic analysis can predict business outcomes and drive the marketing strategy for your brand.


Listen here:




(Subscribe on iTunes, Stitcher, Google Play, Google Podcasts, Pocket Casts, or your favorite podcast service. You can also ask Alexa or Siri to “play the Uncovering Aha! podcast.”)


Welcome to Callahan’s Uncovering Aha! podcast. We talk about a range of topics for marketing decision-makers, with a special focus on how to uncover insights in data to drive brand strategy and inspire creativity. Featuring James Meyerhoffer-Kubalik and Eric Melin.


James:

I’m James Meyerhoffer-Kubalik, Senior Business Analyst at Callahan.


Eric:

And I’m Eric Melin, Creative Director of First Person. And we’re here to talk about the Oscars for the second year in a row. We’re going to do some amazing prediction and analysis courtesy of Mr. James right here, who created an algorithm that kicked my ass last year, and it looks like it’s going to do the exact same thing this year.


James:

Oh breaking hearts again, Eric.


Eric:

What’s that? You’re going to break my heart?


James:

I’m going to break your heart.


Eric:

Yeah, yeah.


James:

I do that.


Eric:

So essentially, James, last year you worked on improving an older algorithm that you’d run for several years, and me being a film critic and a longtime Oscar enthusiast, I came to you with some ideas for some tweaks. You applied those tweaks, and we ended up with an algorithm that predicted, with, I believe, what, 0.5% or something-


James:

Yeah it was-


Eric:

that Green Book was going to win-


James:

Yeah-


Eric:

the Oscar for Best Picture.


James:

Yeah, we said last year too, they should have held hands and skipped through the finish line, those two movies. Definitely. So yeah, like you said, it originally started at Wichita State as a class project, and we just used IMDb ratings, IMDb categories, Golden Globe nominations, Golden Globe wins. And the model only had an adjusted R-squared of about 0.36. So that’s when, about a year and a half ago when I started, I met with you, we got together and talked about how we could improve this model. You gave me a lot of good ideas, and I think we’re better than Nate Silver, which, I looked, and I don’t even know if he’s still putting out his model online or not.
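The score James cites, adjusted R-squared, can be sketched in a few lines. This is a minimal illustration with invented numbers, not the actual Callahan model: it just shows how the metric penalizes plain R-squared for the number of predictors, so adding weak variables doesn’t look like free accuracy.

```python
# Adjusted R-squared: R^2 corrected for the number of predictors.
# All data below is hypothetical, for illustration only.

def adjusted_r_squared(y, y_hat, n_predictors):
    n = len(y)
    mean_y = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # residual sum of squares
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total sum of squares
    r2 = 1 - ss_res / ss_tot
    # Penalize for model size: more predictors shrink the denominator.
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

# Invented outcomes (1 = won Best Picture) and model predictions for 10
# past nominees, from a hypothetical 4-predictor model (IMDb rating,
# IMDb category count, Globe nominations, Globe wins).
y     = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
y_hat = [0.1, 0.7, 0.2, 0.1, 0.6, 0.3, 0.8, 0.2, 0.1, 0.5]
print(round(adjusted_r_squared(y, y_hat, 4), 2))
```

A value like 0.36, as James mentions for the class-project version, means roughly a third of the variation in outcomes is explained after accounting for model size, which is why they kept hunting for stronger variables.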


James:

But yeah, what we did, based on our conversations, was add more variables that we found were statistically significant. We also broke up the years. I started with data from 1985 and worked my way forward. And then, based on having that conversation with you, we cut those data sets appropriately based on changes to the voting body, so that we were able to get the most accurate model that is representative of today. Yep.


Eric:

Yeah. And it wasn’t just changes to the voting body, it was a very significant change in the way that the Academy had people picking Best Picture, which I’d like to explain a little bit now if I could.


James:

Absolutely.


Eric:

All right, so in 2009, the Best Picture field expanded from five to 10 nominees, and this was something they had in place in the years 1936 to 1943, believe it or not. And this was a reaction to the fact that ratings were declining on TV and The Dark Knight was not nominated for Best Picture in 2008. So people thought, hey, if we expand that to 10, we will likely get more big Hollywood movies in there instead of just these small indie films that usually get in there, more people will tune in, and the Oscars will be a bigger deal. They did that for two years, and then in 2011 they made a really big change that’s been in place since then. They started requiring a minimum of 5% of first-place votes in order to receive a Best Picture nod, so now they’re doing what’s called a weighted, or preferential, ballot.


Eric:

And with this ballot, the people who are voting, the closed body of the Academy of Motion Picture Arts and Sciences, rank the films. Which is really important, we always have to stress this: this is not a publicly voted-on award. This is a very closed body whose membership has changed in age and demographics over the years. They started ranking them from first to tenth, and only since 2011 has the number been not exactly five or 10. So every year the number of nominees can be different. I believe one year it was eight, but every other year it’s been nine. And this is because each movie has to get that minimum threshold of 5% of first-place votes in order to get a nomination. So this year, again, we have nine nominees.
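The nomination rule as Eric describes it, a 5% first-place-vote floor that makes the nominee count vary year to year, can be sketched directly. The vote counts below are invented; the real Academy tally also involves preferential redistribution, so this is only the threshold step he names.

```python
# Hypothetical first-place vote counts for nine contenders.
first_place_votes = {
    "Film A": 900, "Film B": 700, "Film C": 450, "Film D": 300,
    "Film E": 250, "Film F": 200, "Film G": 120, "Film H": 60, "Film I": 20,
}

total = sum(first_place_votes.values())  # 3000 ballots in this sketch

# A film is nominated only if it holds at least 5% of first-place votes,
# so anywhere from five to ten films can clear the bar in a given year.
nominees = [film for film, votes in first_place_votes.items()
            if votes / total >= 0.05]
print(len(nominees))  # 6 films clear the 150-vote threshold here
```

With these invented numbers, Films G, H, and I fall under the 150-vote (5%) line, which is exactly how the real field ends up at eight or nine nominees instead of a fixed ten.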


James:

Mm-hmm (affirmative).


Eric:

So do you want to introduce the nominees for Best Picture this year in 2020?


James:

Sure. So the nominees that we worked the model off of: we had 1917, Once Upon a Time in Hollywood, The Irishman, which I believe was more of a Netflix production, Marriage Story, Little Women, Jojo Rabbit, Ford v Ferrari, Parasite, and Joker.


Eric:

Now the Netflix thing is really interesting because last year we had Roma, which was my favorite film of the year and also was an early favorite to win because of all the critics awards that it had won. And this was the reason that you threw out the box office things because Netflix doesn’t actually report box office. And this year we had The Irishman and Marriage Story, which are both Netflix films.


James:

Correct, correct. So we did find that it was statistically significant last year, and what we found out, it was kind of an obscurity thing. So the better a film did in terms of box office, on average, up to the Oscars, the worse it did according to the model. But like you said, once we had last year with Roma, and this year with The Irishman and Marriage Story, you almost have to get rid of that variable altogether.
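The problem James describes, a once-significant predictor becoming unusable because Netflix releases report no box office, is a common modeling chore. A minimal sketch, with invented figures (None marks a missing value):

```python
# Hypothetical nominee data; box-office figures are invented, and
# Netflix titles have no theatrical number to report at all.
nominees = {
    "1917":           {"box_office": 150.0, "globe_wins": 2},
    "The Irishman":   {"box_office": None,  "globe_wins": 0},
    "Marriage Story": {"box_office": None,  "globe_wins": 1},
    "Parasite":       {"box_office": 50.0,  "globe_wins": 1},
}

# Keep only predictors that are observed for every nominee; a variable
# missing for any contender can't score the full field and gets dropped.
all_vars = {"box_office", "globe_wins"}
usable = [v for v in sorted(all_vars)
          if all(film[v] is not None for film in nominees.values())]
print(usable)  # ['globe_wins']
```

Dropping the column entirely, rather than imputing a fake box-office number for Netflix films, matches the conservative choice James and Eric land on.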


Eric:

Yeah, I think a lot of algorithms are going to have to keep recalibrating themselves each year because of this. Netflix is increasing, you know, the amount of promotion that they’re putting into these movies. And honestly in the case of The Irishman, they were the only ones in Hollywood who would give Martin Scorsese the $200 million that he needed to de-age three main characters throughout like 40 or 50 years of their lives. And so Netflix is really, really dedicated to putting in the money that they need to win a freaking Oscar. And so this year is no exception.


James:

Yeah, you made a really good point there. So in terms of confidence in the model, as Netflix becomes more of a player and Amazon becomes more of a player, moving forward we’ll just start to work off those years in which it was applicable. So the model we have from 2011 to 2019, or current, really only has one year like that, based on Roma, but we were still able to predict accurately. So going forward it should gain in strength as these things become more of a commonality in the future.


Eric:

Right. And we should also mention that since #OscarsSoWhite three years ago, there’s been a big push in the Academy to get different people, and younger people from all over the world, not just old white men in Hollywood, to be part of the Academy. And so I think that changing membership is going to affect things as well. And it would be interesting, in the future, if you started weighting the more recent nominees a little bit more. Or have you done that already?


James:

I haven’t.


Eric:

More recent winners.


James:

I haven’t, just because I would need more years of data. It gets complicated. We kind of did that in a sense when we cut the data, so that we don’t have that tail end from 1985 to 1995 weighing in. By cutting those years, we do have more instances from this timeframe, which would be more representative of that. Definitely. So-


Eric:

Yeah.


James:

So in a way we have, but I see what you’re saying. Yeah.


Eric:

Well, it’s interesting. I just don’t think a lot of people have thought about how ma-
