Big Data Versus Little Data: Predicting the Academy Awards

A Microsoft Research economist’s focus shifts from presidential prediction to Hollywood handicapping


Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”

Last year, in the context of the U.S. presidential election, we had two shows on the topic of prediction and predictive modeling with one of the leading researchers in the field, David Rothschild. He did pretty darn well last year, going 50 for 51, using a model that made extensive use of polling data, historical data, economic data, and prediction markets.

This year, he’s at it again, trying to handicap Hollywood’s 2013 Academy Awards, which take place this weekend, February 24. I thought we’d have him back on the show to talk about some unique problems that make Oscar prediction different from Obama prediction, and, by the way, I’d like to thank PR pro Tom Parnell for that alluring alliteration.

David Rothschild works for Microsoft Research in its New York City offices, where, among his other duties, he runs a site called PredictWise.com. He has a Ph.D. in applied economics from the Wharton School of Business at the University of Pennsylvania, and before Microsoft, he worked for Yahoo Labs, also in New York City.

He joins us by phone. David, welcome to the podcast.

David Rothschild: Thank you for having me back. I really look forward to this conversation.

Steven Cherry: David, I think it’s fair to say that the most important data that went into your forecast last year was all the state-by-state polling data available to you and other modelers. There’s no polling data for the Oscars. It’s a little like cutting off your right arm and trying to bench press the same weight, isn’t it?

David Rothschild: Well, you know, there’s a bunch of different data sources that I always look at when I’m approaching a new question. One of them, generally, is polling data. Another is prediction market data. Another is fundamental data. And let me parse those for you for a second. So, you know, early in a political cycle, I’m going to look at a lot of fundamental points. This is prior to polling and prediction markets coming into play. You’re going to be looking at things like past election results and economic indicators and incumbency. And all of these things that kind of happen on a regular basis are always available and are there to form baseline predictions very early, before there’s a lot of actively collected data for an upcoming event. You hit the spring and summer, you get this polling data, it sops up a lot of this information, and you can make very, very accurate forecasts using this polling data and prediction market data, and it carries you through to an election.

And now when I look at something like the Oscars, you have a lot less. We don’t have polling data. There are little to no active attempts to gather data from the voters of the Academy. And when it comes to fundamental data, you don’t have as much identification as you do in the political context, in the sense that you have things like how the movie did at the box office, you have the ratings, but a lot of these points are very movie specific, and it makes it very hard to make category-by-category predictions with that data. And so I’m really focusing a lot more on prediction market data when it comes to the Academy Awards.

Steven Cherry: Yeah, let’s just remind our listeners what prediction markets are.

David Rothschild: That is correct. So to take a step back, prediction markets are markets in which people buy and sell contracts that will be worth one dollar if an outcome occurs and zero dollars if it does not. So right now there’s a contract on whether or not Argo will win best picture at the Oscars. It will be worth a dollar if Argo does win and zero dollars if Argo does not. And the price of that contract, real money traded in Europe, is a very strong indicator of the collective wisdom about how likely that outcome is to actually occur.
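
(To make that pricing concrete, here is a minimal sketch, not part of the interview, of how contract prices map to implied probabilities. The Argo figure echoes the 93 percent cited later in the conversation; the other prices are invented purely for illustration.)

```python
# Minimal sketch (illustrative only): hypothetical contract prices for a
# best-picture market, quoted in dollars per $1 payout.
prices = {
    "Argo": 0.93,                     # echoes the 93 percent cited later
    "Lincoln": 0.04,                  # remaining prices are invented
    "Silver Linings Playbook": 0.02,
    "Life of Pi": 0.01,
}

# A contract pays $1 if its nominee wins and $0 otherwise, so its price is
# roughly the market's probability. Quoted prices rarely sum to exactly 1,
# so normalize across the mutually exclusive nominees.
total = sum(prices.values())
implied = {movie: price / total for movie, price in prices.items()}

for movie, prob in sorted(implied.items(), key=lambda kv: -kv[1]):
    print(f"{movie}: {prob:.1%}")
```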

Steven Cherry: And I guess we should also point out that the prediction markets are pretty thin when it comes to some of these categories anyway, right?

David Rothschild: Oh, there’s no question about it. So, you know, I’m looking at all 24 categories right now, and some of them are a lot more liquid than others. And some of them, you know, you don’t expect to have as much information from. And that’s part of the game of kind of determining what’s useful and calibrating your forecast.

Steven Cherry: Now, you started to use something called “user-generated data.” What is it?

David Rothschild: Definitely. And so, you know, we saw an opening here in which to try to think, well, how do we add some more data to a question which we know people are interested in. And so, you know, part of what I do is trying to think of what’s the most efficient forecast you can make from the data that’s available—whether or not it’s polling data or prediction market data or whatnot—but then also thinking where there’s holes in that data and what can we think about, kind of experimental ways in which to gather more data.

And so one of the things that I’ve been working on is experimental markets. We have something which we call WiseQ, the Oscar game. It’s an experimental prediction market, and we’re asking people to come on and make their predictions over not just who’s going to win the categories but the correlations between categories, which is a question I find very interesting. And this question is very similar for the U.S. electoral college and for the Oscars: how well does one brand or one entity do across different categories?

So, for instance, you know, we were very concerned in the U.S. election with how movement in Virginia correlates with movement in Ohio for the president. And similarly here, how would, say, a best director victory change how you’re looking at best picture? If Lincoln wins some earlier categories, does that make you more or less confident that it’s going to win in later categories? We’re exploring those correlations with these games and seeing if we can learn anything, seeing if the wisdom of our crowds can add to what we know about these forecasts.
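
(A toy illustration, not PredictWise’s actual model, of why those correlations matter for a forecast: one simple way to link two categories is a mixture in which, some fraction of the time, they share the same underlying draw. The probabilities and the correlation parameter below are invented.)

```python
import random

def correlated_wins(p_a, p_b, rho, n=200_000):
    """Toy mixture model: with probability rho the two categories share the
    same uniform draw (perfectly linked); otherwise they are independent.
    Marginal win probabilities stay at p_a and p_b either way."""
    both = a_only = 0
    for _ in range(n):
        if random.random() < rho:
            u = random.random()
            a, b = u < p_a, u < p_b
        else:
            a, b = random.random() < p_a, random.random() < p_b
        both += a and b
        a_only += a and not b
    return both / (both + a_only)      # estimate of P(b wins | a wins)

# Invented numbers: a film with a 30% chance at best director and a 55%
# chance at best picture, moderately correlated across the two categories.
p_pic_given_dir = correlated_wins(p_a=0.30, p_b=0.55, rho=0.6)
print(f"P(picture | director win) ~ {p_pic_given_dir:.2f} vs. 0.55 unconditional")
```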

Steven Cherry: All right. So is it fair to say that the very sparseness of this data is what makes it a fruitful area of research and not just a way to chat about Hollywood movies on the company dime?

David Rothschild: You know, so everything that I do when it comes to forecasting is to think about how can we make more scalable forecasting, more scalable indices about upcoming events or current events, and try to make them as domain independent as possible. And so kind of the key thing, you know, why I’m back here talking about Oscars when I was talking about politics before, is it’s really the science of applying the same type of effort, the same type of skills, and the same technology across different domains.

And it’s the challenge of whether, if we keep on doing different things like this, entertainment, sports, politics, and so on, very public things that we have a lot of data for, that will help us create the technology to then effortlessly add new things, whether it’s business questions or economic questions, things which probably carry a little more weight in many ways than entertainment, more of the actual decisions that people are making. And the idea behind everything, though, is the scalability and this domain independence.

Steven Cherry: Okay. It’s time to chat about Hollywood movies on the company dime. So let’s get to some predictions. I’ll say by way of disclosure that, as we record this, I’ve seen eight of the nine films nominated for best picture, and in the 11 categories that I personally care about, I’ve seen 48 of the 59 nominees. And I say that not to brag but to give some context when I say I have no idea who’s going to win anything this year. You take on the even more monumental task of trying to handicap 24 categories. I know it’s tough because we’re recording this show almost a week before it airs, and then it will be another few days until the awards themselves, but what are some highlights right now?

David Rothschild: Well, you know, let me just say that I come at this from the opposite perspective. I’ve actually seen very few of the movies on this list. As we were saying, just to emphasize again, everything I do is very data heavy. And I actually find it a fun and difficult challenge, and a different kind of challenge, to look at this very differently from politics, because where I do have a lot of domain information in politics, I have a lot less domain information when it comes to the Oscars.

And so it’s really just seeing what the data tells me, without having any horse in the race. As the awards season has moved forward and Argo has dominated the awards, we’ve actually seen a shift in this main category of best picture. One fundamental data point that’s always been very clear is that the Oscar voters listen to themselves. The number of nominations a movie gets is pretty heavily correlated with actually winning. This is not surprising: they see these movies, they like these movies, they’ve nominated them for things. And so the fact that Lincoln was nominated for best director where Argo was not, Spielberg versus Ben Affleck, was a great indication in the beginning that Lincoln was going to be the front-runner for best picture. But as awards show after awards show has gone toward Argo, we now have Argo at 93 percent, a very, very strong likelihood of winning best picture.

Steven Cherry: Nate Silver, who does election prediction at the New York Times, has had limited success at predicting the Oscars, but he keeps trying. And in 2011, he came up with what he calls “four rules to win your Oscar pool.” His first we’ve already mentioned, which is winning other awards, especially the Golden Globes, which are from the Hollywood Foreign Press Association. His second rule is being in a picture that gets a best picture nomination, and I guess you sort of alluded to this too. So Denzel Washington, nominated for best actor for Flight, which wasn’t nominated for best picture, has, all other things being equal, a lesser chance than, let’s say, Bradley Cooper from Silver Linings Playbook, which was. Do you think that’s true?

David Rothschild: Well, let me take a step back from what you’re saying, which I think is very interesting. Mr. Silver comes up with fundamental models, in a sense. So let me try to break down the data for you and think about it in terms of the fundamental information that’s out there and how well that does in forecasting, compared with prediction markets. The bottom line is that, you know, I did a lot of looking into this and a lot of research into how effective these types of rules could be: looking at the other award shows, looking at personal traits of the people nominated, for those categories in which there’s data available. And as I said, it’s tough.

So there was some correlation, and there is some ability to predict things, but category by category, most of the data is kind of movie related. So it’s how well the movie did; it’s the ratings of the movie; it’s all these kind of things that are very movie specific, which makes it very hard to use that sort of data in order to predict who’s going to win in specific categories. Prediction markets, on the other hand, can kind of sop up a lot of that information, can in theory sop up all that information, plus put in a lot of idiosyncratic information that’s very hard to create a model around.

And that’s why, you know, I feel very confident in the much more heavily category-by-category, prediction-market-built models that I utilize, versus the types of models that Mr. Silver uses. And we’ve seen that in 2011 and 2012, where the prediction markets did extremely well in predicting not just the main categories but some further down the line.

Steven Cherry: While we’re talking about the fundamental data, I would imagine box-office data is an example. Does it play a role in your model?

David Rothschild: Definitely. And the thing that I found really interesting looking at box-office data is, you probably have more domain information in some ways because, as I said, I’m not a huge movie buff. And when I was learning a lot about it, there’s just a lot of nuances with the data and a lot of different ways that you can look at it. One thing that popped out at me is that, you know, movies go wide in very different ways in different kind of time frames. So some movies start off smaller and get bigger, and some movies kind of start out with a full, wide appeal. And especially when it comes to Oscar-nominated movies, there’s a surprising mix.

And one of the things that I saw in my data was that it wasn’t the first weekend, and it wasn’t even how well the movie did over the first few weeks. It was much more about the trends. It was much more about what direction the movie was going in between the fourth and fifth week, or even further down the line. And actually, I popped out one rubric, which was, if I were to look at one thing when it comes to gross, it was going to be the change between the fourth and fifth week, which seems to be some sort of inflection point that carried a lot of power as far as predicting how well a movie would do in certain categories when it came to the Oscars.
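
(A minimal sketch of that week-four-to-week-five trend as a feature. The weekly grosses below are invented purely for illustration; Rothschild’s actual data and model are not shown here.)

```python
# Hypothetical weekly domestic grosses, in millions of dollars, for two
# invented films: one with a shallow decline (strong word of mouth) and one
# that was front-loaded in its opening weeks.
weekly_gross = {
    "Slow burner": [20.0, 14.0, 11.0, 9.5, 9.0],
    "Front-loaded opener": [45.0, 22.0, 12.0, 6.0, 3.0],
}

for title, weeks in weekly_gross.items():
    # Relative change between the fourth and fifth week of release
    # (index 3 -> index 4), the "inflection point" feature described above.
    change = (weeks[4] - weeks[3]) / weeks[3]
    print(f"{title}: week 4 -> 5 change = {change:+.1%}")
```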

Steven Cherry: William Goldman, the screenwriter and novelist who’s written some pretty great movies in the past, including Butch Cassidy and the Sundance Kid, he has a rule of thumb about, like, why stars get so much money. And his rule is, it’s the stars that bring in the audience the first week, and after that the movie has to kind of prove itself. So maybe there’s a clue in there as well.

David Rothschild: Possibly. And, you know, I would say one thing, a subtlety I was playing with back and forth, which I’ll give to your audience: there are two very different types of forecast to think about. One is what type of movie is going to get nominated, and the other is, conditional on being nominated, how likely it is to win. And so there’s a lot to think about in that way.

So think about, you know, when a movie’s released. A movie that’s released in January or February is very unlikely, all else being equal, to get a nomination compared to a movie that’s released in the late fall. But conditional—if you’re already nominated, conditional on being nominated—these kinds of things drop out, right? Because if you are an odd movie that didn’t do that well at the box office, or an odd movie that was released on a weird date, or had a bunch of other things you would think would be meaningful, but you actually end up getting nominated, then you’ve already crossed that hurdle of being nominated, and you’re probably a pretty special movie to get nominated despite not being widely released or coming out on a weird date. And so it’s definitely a subtlety to think about those two things.
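
(A toy worked example of that subtlety, with counts invented for illustration: a factor like an early-year release can strongly predict whether a film gets nominated while telling you almost nothing once you condition on the nomination.)

```python
# Invented counts of films by release window, nomination, and win.
# Key: (release_window, nominated, won) -> number of films.
films = {
    ("early", True,  True):    2,
    ("early", True,  False):  10,
    ("early", False, False): 488,
    ("late",  True,  True):    8,
    ("late",  True,  False):  42,
    ("late",  False, False): 450,
}

def p_nominated(window):
    nom = sum(c for (w, n, _), c in films.items() if w == window and n)
    total = sum(c for (w, _, _), c in films.items() if w == window)
    return nom / total

def p_win_given_nominated(window):
    nom = sum(c for (w, n, _), c in films.items() if w == window and n)
    won = sum(c for (w, _, v), c in films.items() if w == window and v)
    return won / nom

for window in ("early", "late"):
    print(f"{window}-year release: P(nominated) = {p_nominated(window):.1%}, "
          f"P(win | nominated) = {p_win_given_nominated(window):.1%}")
# Nomination rates differ a lot (2.4% vs. 10%), but once a film is nominated
# the win rates are nearly identical (~17% vs. 16%): the release-date effect
# has largely dropped out.
```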

Steven Cherry: It’s a little bit like the difference between your life expectancy at birth and the life expectancy of people who make it to 50, say.

David Rothschild: That’s exactly right. It’s a very subtle and difficult thing to sometimes grasp, but that’s exactly right. And these guys have already made it to that cut, and then so it’s a very different thing you’re looking at at that point.

Steven Cherry: Let’s just close the loop on Nate Silver. His third rule is “dramas do better than comedies,” and there were essentially no comedies nominated for much of anything this year. I guess Moonrise Kingdom might count for one, and it got an original screenplay nomination, but that’s about it. So let’s skip that rule. His last rule is “spread the wealth,” and in this way, it’s a minus, it turns out, if you’ve won before. So take Daniel Day-Lewis, who won best actor twice, for My Left Foot and There Will Be Blood—a very overrated movie, by the way, though he was great—and who is now nominated for Lincoln. According to this rule, he would be at a disadvantage vis-à-vis, say, Bradley Cooper, who hasn’t won anything before.

David Rothschild: You know, I think that it’s a great point because it brings up two really interesting things. The first, directly on what you’re saying: it’s true. If you run regressions, putting into the model how many times someone has been nominated and how many times someone has won, what you get is that being nominated and not winning is highly predictive of eventually winning, but winning too much definitely ends up being negatively correlated with the outcome. There’s no question about that. But the problem is that when it comes to questions like that, there isn’t that much identification, and they’re very unique. There are a couple of actors who’ve been nominated many, many times and have won several, but it is a very, very small group, small to the point where idiosyncratic knowledge of this year and knowing what’s going on, I think, overrides the fundamentals.

So I’ll go back again to thinking about the prediction markets versus the fundamentals on this. Daniel Day-Lewis has won every other award, so that can go into it, but the overriding collective wisdom is that, despite that rule of thumb, he will win this year. And that’s actually the most confident prediction that I have: Daniel Day-Lewis’s victory in this category is at essentially as high a probability as I can give a prediction and still have some uncertainty. And I think it’s just more about, again, the collective wisdom of the prediction markets, which seems extremely confident of that, and that’s going to override what I think is a small but significant fundamental data point. But again, you just want to look at this year and see that Daniel Day-Lewis is a massive favorite to win that category.
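
(A sketch of the kind of regression Rothschild describes above, on invented data; it is not his model, and it assumes NumPy and scikit-learn are available. The toy labels are constructed so that prior nominations without a win help and prior wins hurt, mirroring the pattern he reports.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a nominee-year: [prior nominations without a win, prior wins].
# All figures are invented for illustration.
X = np.array([
    [0, 0], [1, 0], [2, 0], [3, 0], [4, 0],
    [1, 1], [2, 1], [2, 2], [3, 2], [4, 3],
    [0, 0], [1, 0], [3, 0], [1, 1], [2, 2],
])
# 1 = won that year, 0 = lost (made up to mirror the pattern described above:
# long-suffering nominees tend to win, repeat past winners tend not to).
y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0])

model = LogisticRegression().fit(X, y)
print("coefficients [prior nominations, prior wins]:", model.coef_[0])
# For this toy data, expect a positive coefficient on prior nominations and a
# negative one on prior wins -- the "fundamental" signal that, per the
# interview, year-specific prediction-market information can still override.
```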

Steven Cherry: So are there any other massive favorites?

David Rothschild: Any other massive favorites? Yes. Anne Hathaway as best supporting actress. And again, that’s a case where she has dominated all the awards, and there just isn’t a clear other person that people could rally around.

Steven Cherry: Well, David, I’ll post my 11 entirely unanalytical picks on the podcast show page, and we’ll see whether “Techwise” can do as well as “PredictWise.” Hopefully we’ll at least both do better than Nate Silver. Good luck on Sunday, and thanks for joining us today.

David Rothschild: Thank you very much for having me. It was a real pleasure, as always.

Steven Cherry: We’ve been speaking with Microsoft Research economist and movie blogger David Rothschild about the unique difficulties of doing big-data predictive modeling on a small-data event like the upcoming Academy Awards.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

See Steven Cherry's Oscar picks here

This interview was recorded 14 February 2013.
Audio engineer: Francesco Ferorelli
Read more “Techwise Conversations,” find us in iTunes, or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
