I Tried to Predict the 2024 Election—Here’s What I Got Right (and Totally Wrong)
Like many others, it took me a while to digest the results of the 2024 U.S. presidential election. But there’s one more important thing left to do: analyze how the predictions I made actually fared. I finally got around to doing that.
Over the course of roughly 1.5 months, I made three different predictions: ① based on weighted high-quality polls only (I downloaded those from 538); ② adjusted by the full average polling bias from the 2016 and 2020 elections; and ③ adjusted by just half the 2016/20 polling bias.
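To make the difference between the three variants concrete, here is a minimal sketch of how such a bias adjustment can be implemented. Every number in it is a placeholder rather than a value from my actual model, and the sign convention (positive = Democratic lead, bias = how much polls overstated that lead in 2016/2020) is an assumption for illustration.

```python
# Minimal sketch of the three prediction variants (all numbers are placeholders).
# Margins in percentage points, positive = Democratic lead.

# ① weighted average of high-quality polls per swing state
poll_margin = {"PA": 0.4, "MI": 1.0, "WI": 0.8, "GA": -1.2, "NC": -1.0, "AZ": -1.5, "NV": 0.3}

# Average amount by which polls overstated the Democratic margin in 2016/2020
# (placeholder values, assumed sign convention)
avg_bias = {"PA": 3.5, "MI": 4.0, "WI": 5.0, "GA": 1.0, "NC": 2.5, "AZ": 3.0, "NV": 2.5}

pred_1 = dict(poll_margin)                                              # ① polls only
pred_2 = {s: poll_margin[s] - avg_bias[s] for s in poll_margin}         # ② full bias correction
pred_3 = {s: poll_margin[s] - 0.5 * avg_bias[s] for s in poll_margin}   # ③ half bias correction
```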
What was my reasoning for integrating the polling bias from 2016 and 2020? “I’m not considering elections before that because ever since Trump entered the stage, election dynamics have significantly changed. Old rules don’t apply anymore. 2024 will be much more similar to 2020 & 2016 than to any election before that.” (See my very first prediction post.)
And why did I include the one with just half the polling bias? My thinking was that pollsters had learned something from 2016 and 2020 and corrected part of their error, but probably not all of it.
At the beginning of November, based on some very reasonable-sounding arguments, I also started looking into whether the polling errors from 2012 and 2022 might be relevant to consider. Unfortunately, those arguments turned out to be completely and utterly wrong, so I’ll exclude predictions ④ and ⑤ from this post-mortem analysis.
Predictions ① through ③ each forecast the winning margin in the seven swing states. From those margins, I derived each candidate’s probability of winning each state, their expected electoral votes (EVs), and, combining everything in a Monte Carlo simulation, an overall chance of winning the whole thing. Here I will focus on how far my margin predictions were off from the actual margins in the swing states, because all other components of the forecast were built on them.
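Since that pipeline is the heart of the forecast, here is a minimal sketch of it. It assumes normally distributed, correlated errors around the predicted margins; the margins, sigma, rho, and the 226 “safe” electoral votes are illustrative assumptions, not the values from my actual simulation.

```python
import numpy as np

# Minimal Monte Carlo sketch: turn predicted swing-state margins into per-state
# win probabilities, expected EVs, and an overall win chance.
# Margins in percentage points, positive = Democratic candidate wins the state.
swing_evs = {"PA": 19, "MI": 15, "WI": 10, "GA": 16, "NC": 16, "AZ": 11, "NV": 6}
pred_margin = np.array([0.2, 0.8, 0.5, -1.0, -0.9, -1.3, 0.1])   # placeholder values
base_evs = 226                # roughly-safe EVs outside the swing states (illustrative)

sigma, rho, n_sims = 3.0, 0.6, 100_000          # assumed error size and correlation
cov = sigma**2 * (rho * np.ones((7, 7)) + (1 - rho) * np.eye(7))
rng = np.random.default_rng(0)
sim_margins = rng.multivariate_normal(pred_margin, cov, size=n_sims)

state_win_prob = (sim_margins > 0).mean(axis=0)                   # P(win) per state
ev_array = np.array(list(swing_evs.values()))
sim_total_evs = base_evs + (sim_margins > 0) @ ev_array           # EVs per simulation

print(dict(zip(swing_evs, state_win_prob.round(2))))
print("Expected EVs:", sim_total_evs.mean().round(1),
      "| P(>= 270):", (sim_total_evs >= 270).mean().round(3))
```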
I have chosen to evaluate this with three common approaches: directional accuracy, mean absolute error (MAE), and root mean square error (RMSE). MAE simply averages the absolute differences between predicted and actual margins (in percentage points), while RMSE penalizes large errors more heavily. I compute MAE and RMSE both with an equal weighting of the seven swing states and with an EV-based weighting. Equal weighting treats each state prediction as equally important, focusing purely on predictive accuracy regardless of strategic importance; EV-based weighting better reflects the electoral significance of each prediction. For both MAE and RMSE, lower is better.
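For reference, here is a small sketch of how these metrics are computed. The EV weights are the swing states’ actual electoral votes; the predicted and actual margins are placeholders, not my real inputs.

```python
import numpy as np

# Sketch of the scoring; margins and errors in percentage points.
predicted  = np.array([-1.0, -0.5, -0.2, -1.8, -2.0, -2.5, -1.2])   # predicted margins (placeholders)
actual     = np.array([-1.7, -1.4, -0.9, -2.2, -3.2, -5.5, -3.1])   # actual margins (placeholders)
ev_weights = np.array([19, 15, 10, 16, 16, 11, 6])                  # EVs: PA, MI, WI, GA, NC, AZ, NV

errors = predicted - actual
directional_accuracy = (np.sign(predicted) == np.sign(actual)).mean()

def mae(e, weights=None):
    return np.average(np.abs(e), weights=weights)

def rmse(e, weights=None):
    return np.sqrt(np.average(e**2, weights=weights))

print(f"Directional accuracy: {directional_accuracy:.1%}")
print(f"MAE   equal: {mae(errors):.2f}   EV-weighted: {mae(errors, ev_weights):.2f}")
print(f"RMSE  equal: {rmse(errors):.2f}   EV-weighted: {rmse(errors, ev_weights):.2f}")
```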
All of that being said, let’s get to it. Let’s see how my predictions fared.
| Metric | Prediction ① | Prediction ② | Prediction ③ |
|---|---|---|---|
| Directional Accuracy | 42.9% | 100% ✓ | 100% ✓ |
| MAE (equal weights, pp) | 2.37 | 2.11 | 1.01 ✓ |
| RMSE (equal weights, pp) | 2.46 | 2.32 | 1.32 ✓ |
| MAE (EV-weighted, pp) | 2.26 | 2.11 | 0.81 ✓ |
| RMSE (EV-weighted, pp) | 2.35 | 2.32 | 1.06 ✓ |
As we can see, no matter which metric we use and how we weigh the individual swing states, prediction ③ easily outperforms the other two. The only exception is directional accuracy, the most basic measure, which simply counts how many states were called correctly regardless of margin; there, predictions ② and ③ both score 100%.
Since all three predictions are based on the same polls with the same weightings and differ only in how much they were adjusted for which polling biases, this shows that the assumptions I originally made were reasonable: elections with Trump as a candidate are unlike other elections; pollsters are still bad at capturing support for Trump (the “shy Trump voter” etc.); pollsters have nevertheless improved a bit; and simple models (take the average polling error from 2016/20 and just cut it in half) mostly do the trick.
To conclude this entire topic (until we start anew less than four years from now), one more piece of further reading: there are reports that a French bettor on Polymarket relied on so-called neighbor polling (very roughly speaking, asking “Who do you think your neighbor will vote for?” instead of “Who will you vote for?”) and thereby predicted everything correctly. Prof. Andrew Gelman has written a piece about it.
Alright then, see you in 3.5 years. 👋🏻