Numlock Awards: Predicting the Oscars
The Numlock Awards Supplement is your one-stop awards season update. You’ll get two editions per week, one from Not Her Again’s Michael Domanico breaking down an individual Oscar contender or campaigner and taking you behind the storylines, and the other from Walt Hickey looking at the numerical analysis of the Oscars and the quest to predict them. Look for it in your inbox on Saturday and Sunday. Today’s edition comes from Walter.
Confused by all the awards season drama that seemed to come out of nowhere? Be sure to check out Michael’s email all about smear campaigns!
The best way to predict the Oscars is to look at the awards that are handed out before them. It’s not perfect, but it’s doable. The real trick of forecasting them consistently basically comes down to two things:
Picking which awards to factor in
Identifying how much emphasis to put on them
Generally, my goal is to maintain the ideological thrust of the traditional FiveThirtyEight model (precursors inform and define the Oscar race) while updating several assumptions that I feel have gone stale.
That model handles those two decisions like so:
A smattering of big-city critic awards, several national prizes, and several large guilds.
CRITICS: New York, Chicago, and Los Angeles area film critics’ group awards.
NATIONAL GROUPS: The Golden Globes, the Critics’ Choice Movie Awards, the Satellite Awards, and the National Board of Review Awards.
GUILDS: Awards given by the Screen Actors Guild, Directors Guild, Producers Guild, American Cinema Editors, Writers Guild of America, and the British Academy of Film and Television Arts (BAFTAs).
First, determine how many times over the preceding 25 years the precursor award correctly matched the eventual Oscar outcome. Next, turn that into a rate and square it. (The effect of this is to make the better predictors stand out: a precursor that gets it right 100% of the time has a score of 1, one that gets it right 90% of the time has a score of 0.81, while a 50-50 precursor has a score of 0.25.) Finally, if it’s one of the guild awards in that last group, double the score to account for membership overlap between the guilds and the Academy.
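For the code-inclined, here’s a rough sketch of that scoring step in Python. The function name and the example numbers are mine for illustration, not anything pulled from the actual FiveThirtyEight code:

```python
# A minimal sketch of the traditional scoring, assuming we just have a count
# of how many of the last 25 years a precursor matched the Oscar winner.

def precursor_score(hits: int, years: int = 25, is_guild: bool = False) -> float:
    """Score a precursor: squared hit rate, doubled if it's a guild award."""
    rate = hits / years       # e.g. 20 correct calls out of 25 years -> 0.8
    score = rate ** 2         # squaring makes the most reliable precursors stand out
    if is_guild:
        score *= 2            # guilds share members with the Academy
    return score

# Example: a guild that matched the Oscar winner in 20 of the last 25 years
print(round(precursor_score(20, is_guild=True), 2))   # (20/25)**2 * 2 = 1.28
```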
For Best Animated Feature, I added in the Annie Awards from their guild. Effectively, most acting prizes are informed almost entirely by the Screen Actors Guild, BAFTAs, Golden Globes and Critics’ Choice awards, and most director prizes are DGA coronations. I don’t see much need to tweak which awards factor into those categories, besides a few major weighting overhauls I’ll describe shortly.
This year is shaping up to be a sloppy Best Picture race, so fine-tuning the process ahead of time is really important.
Here are the tweaks I’m making, and why.
I’m changing the inputs, slightly.
CRITICS: Rather than simply looking at the biggest cities and automatically considering their critics important, I’m factoring in only the critics groups that tend to pick the Oscar winners. We’re handling this relegation style: in any given year, only the regional critics groups that performed best in a given category over the past 10 years will be in the model (see the sketch after this list). In aggregate, they’ll be weighted as much as the average of the Critics’ Choice and the Golden Globes.
NATIONAL GROUPS: The Satellite Awards and the National Board of Review will have to duke it out with the other critics groups in the category above. Absent a compelling award season television spot or serious ink spilled, they’re not really steering the season and, absent any actual Academy voters in their ranks, don’t actively reflect it. The Golden Globes and Critics’ Choice — which do to some extent steer the season, particularly in the acting prizes — remain essentially unchanged in how we handle them.
GUILDS: SAG, DGA, PGA and BAFTA remain paramount. The editors and writers will remain in, but their awful track records and split categories (Comedy and Drama for the editors, Original and Adapted for the writers) will effectively doom them to obscurity indefinitely. For the Beta version of the model, I’ll also be factoring in the American Society of Cinematographers along with the costumers’ and designers’ guilds.
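To make the relegation idea concrete, here’s a toy version of how that selection and weighting could work. The hit rates, the cutoff and the national-group weights are all made up for illustration:

```python
# Hypothetical 10-year accuracy of a few critics groups in one category.
critics_hit_rates = {
    "New York": 0.7,
    "Los Angeles": 0.6,
    "Chicago": 0.5,
    "National Board of Review": 0.3,
    "Satellite Awards": 0.2,
}

TOP_N = 3  # assumed cutoff: only the best recent performers stay in the league
promoted = sorted(critics_hit_rates, key=critics_hit_rates.get, reverse=True)[:TOP_N]

# In aggregate, the surviving groups share a weight equal to the average of the
# Critics' Choice and Golden Globes weights (the numbers here are invented).
critics_choice_weight, golden_globes_weight = 1.1, 0.9
aggregate_weight = (critics_choice_weight + golden_globes_weight) / 2
per_group_weight = aggregate_weight / len(promoted)

print(promoted, round(per_group_weight, 3))
```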
I’m changing the weighting in two major ways.
Too much of the Academy has changed too recently to rely on 25-year success rates. We’re a what-have-you-done-for-me-lately model now. Instead of looking at the past 25 years with every year weighted equally, we now look at the most recent 20 years, with the 10 most recent years worth twice as much as a typical year and the 5 most recent years worth three times as much. This means that about 43 percent of the score will come from the past 5 years, when (coincidentally) I estimate 39 percent of the voting base has joined.
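If you want to check that arithmetic, here’s the recency weighting as I’ve described it, written out in a few lines:

```python
# Year 1 is the most recent year; the 5 newest years get triple weight,
# years 6-10 get double weight, and years 11-20 get normal weight.
weights = [3 if year <= 5 else 2 if year <= 10 else 1 for year in range(1, 21)]

total = sum(weights)                      # 5*3 + 5*2 + 10*1 = 35
share_last_5 = sum(weights[:5]) / total   # 15 / 35
print(round(share_last_5, 2))             # -> 0.43, i.e. about 43 percent
```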
Rather than simply doubling the score when an award is handed out by a guild, each guild’s score will instead be increased by a factor tied to how much of the Academy that guild comprises. This will be a relatively small change in the general model and won’t swing things too significantly compared to previous years. In the Beta version of the model, there’s even more to this.
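The exact scaling is still being tuned, so treat this as nothing more than an illustration of the idea; the boost parameter and the membership share below are both made up:

```python
def guild_adjusted_score(base_score: float, academy_share: float,
                         boost: float = 2.0) -> float:
    """Scale a guild's squared hit rate by its (hypothetical) share of the Academy.

    academy_share: fraction of Academy voters who belong to that guild's branch
    boost: how strongly membership overlap is allowed to matter (assumed value)
    """
    return base_score * (1 + boost * academy_share)

# e.g. a guild whose members make up roughly 15 percent of the Academy
print(round(guild_adjusted_score(0.64, academy_share=0.15), 3))   # 0.64 * 1.3 = 0.832
```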
So that’s the gist of it. The model’s critical inputs have been tweaked to err on the side of factors that are actually predictive or informative rather than geographic. A few more guilds will be added, but those won’t make much of a difference at all given their weak link to actual outcomes. The weighting is the largest overhaul, and will reward events that have actually kept up with the times and the Academy rather than ones stuck in 2011. And the composition of the Academy is, now more than ever, a factor in the weighting.
Well that’s all until next week when we have th-
But hey, Walter, what was that thing you said about the Beta model?
Oh, right, that thing.
So here’s the deal with Best Picture. While all the other categories are a straight up-and-down vote (you don’t need a majority to win Best Director, just the most votes of any contender), Best Picture works on a different level: each voter ranks their top five choices and then an instant runoff occurs. The goal is that the film that wins will be the one with the broadest level of support throughout the Academy. So if my top 5 are:
Gotti
Aquaman
The Happytime Murders
Fifty Shades Freed
Venom
Then the Academy counts up every vote and finds out if they have a winner. If nobody breaks 50 percent, they do a runoff: they find the last-place movie, eliminate it, and reallocate its ballots to the next-ranked film on each of them. So if Gotti had the fewest votes, it’d be eliminated, and my vote would then go on to the Aquaman pile, and so on, until some movie breaks 50 percent.
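Here’s a bare-bones sketch of that instant runoff in Python, run on a handful of made-up ballots built from my (excellent) top five. It’s an illustration of the mechanic, not the Academy’s exact procedure:

```python
from collections import Counter

def instant_runoff(ballots):
    """Eliminate last-place films until one holds more than 50% of the ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count each ballot toward its highest-ranked surviving film.
        tally = Counter(ballot[0] for ballot in ballots if ballot)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes > total / 2:
            return leader
        # Eliminate the film with the fewest first-place votes and let its
        # ballots flow to their next-ranked surviving choice.
        loser = min(tally, key=tally.get)
        ballots = [[film for film in ballot if film != loser] for ballot in ballots]

ballots = [
    ["Gotti", "Aquaman", "The Happytime Murders", "Fifty Shades Freed", "Venom"],
    ["Aquaman", "Venom", "Gotti", "Fifty Shades Freed", "The Happytime Murders"],
    ["Venom", "Aquaman", "Gotti", "The Happytime Murders", "Fifty Shades Freed"],
    ["Aquaman", "Gotti", "Venom", "Fifty Shades Freed", "The Happytime Murders"],
    ["Venom", "Gotti", "Aquaman", "The Happytime Murders", "Fifty Shades Freed"],
]
print(instant_runoff(ballots))   # Gotti is eliminated first; Aquaman wins on transfers
```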
Now this is all rather difficult to poll, clearly. The Beta model does something a little different. Rather than looking at all the weights and seeing which one is largest, it uses those weights to simulate the Academy vote: it’ll randomly generate 7,902 ballots, run an instant runoff to determine a victor, and repeat that process 10,000 times so we have a probability breakdown.
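And here’s roughly what that simulation loop looks like, reusing the instant_runoff() sketch from above. The preference weights are invented, and I’ve scaled the ballot and simulation counts way down so the toy version runs quickly; the real figures are 7,902 ballots and 10,000 runs:

```python
import random
from collections import Counter

# Assumes instant_runoff() from the earlier sketch is already defined.
N_BALLOTS = 500        # stand-in for the 7,902 Academy voters
N_SIMULATIONS = 200    # stand-in for the 10,000 simulated elections

films = ["Gotti", "Aquaman", "The Happytime Murders", "Fifty Shades Freed", "Venom"]
preference_weights = [0.10, 0.30, 0.15, 0.15, 0.30]   # hypothetical levels of support

def random_ballot():
    """Draw one ranked ballot; better-liked films are likelier to rank high."""
    remaining, weights = films[:], list(preference_weights)
    ballot = []
    while remaining:
        pick = random.choices(remaining, weights=weights)[0]
        i = remaining.index(pick)
        remaining.pop(i)
        weights.pop(i)
        ballot.append(pick)
    return ballot

wins = Counter()
for _ in range(N_SIMULATIONS):
    wins[instant_runoff([random_ballot() for _ in range(N_BALLOTS)])] += 1

for film, count in wins.most_common():
    print(f"{film}: {count / N_SIMULATIONS:.0%}")
```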
Since this is all rather experimental, I’ll have a few twists on it. Specifically, since we have an idea of each Academy branch’s preferences through their guild’s awards, we can also do a run of that where, say, the members of the Production Designers branch prefer their guild’s nominees (like Bohemian Rhapsody, Roma, The Favourite, Black Panther and Mary Poppins Returns) a little more than the overall Academy does. We can do a run of it where the “new Academy” has one set of preferences and the “old Academy” has a different set of preferences.
But that’s a few weeks away. We’re a little more than a week away from the Oscar nominations, and there’s still so much more to do. The Critics’ Choice Awards are tonight, and it’s by far the most important night of the season so far for Best Picture aspirants.