You are on page 1of 3

..............................................................................................................................................................................................................................................

POLITICS SYMPOSIUM
..............................................................................................................................................................................................................................................

Why Forecast? The Value of


Forecasting to Political Science
Keith Dowding, Australian National University, Canberra
..............................................................................................................................................................................................................................................

S
erious forecasting of presidential (and other) the certainty with which forecasts are made (Westwood et al.
elections has been going on in political science 2020). Careful estimates derived from poll data can be wrong,
for 40 years (Lewis-Beck and Stegmaier 2014) and since reported margins of error in polls only capture sampling
now extends beyond academia, with many rival and not total survey error (Shirani-Mehr et al. 2018). In other
online and media forecasts. They are certainly words, forecasts are not as certain as they are often purported
interesting for the public and the candidates, but what is their to be, and they are often misinterpreted. We cannot expect
value to political science? On the one hand, there ought to be every forecast to be correct, especially in elections as unusual
epistemic gains to the profession in understanding of elections as 2016. Many “wrong” forecasts in fact lie well within the
if we accurately forecast results (King et al. 1994; Schrodt bounds of forecasting uncertainty. But trying to explain that to
2014). On the other, if commentators and the public misun- the public might well trigger even greater public skepticism
derstand forecasts, or different methods provide rival fore- about “so-called experts.”
casts, the forecasting business might throw our profession into So the problem for political science is that the public might
disrepute (Blanchflower 2016). judge its scientific merits by the accuracy of forecasts of a
In this comment I will, first, elucidate, in a fairly simple single token event while misunderstanding what those fore-
manner, the dangers of forecasts being misunderstood. Sec- casts actually claim. A better judgment of the merits of any set
ond, I will distinguish forecasting from scientific prediction. of forecasting models is how well they perform over a series of
The forecasting models in this symposium predict the such events. Judged thus, presidential forecasts appear in
results of the next presidential election. At least, that is one better light (Cuzan 2020). Even so, we should not confuse
description. Some predict the popular vote of the candidates, forecasting with scientific prediction (Dowding and Miller
some predict the electoral college make-up through predicting 2019). The merits of a good forecast are not quite the same
state results, some turn popular vote estimates into electoral as the merits of a good explanatory model.
college estimates, some predict the Congressional results. We judge forecasting models primarily in terms of (1) pre-
These specific predictions will be judged by the actual result dictive success, (2) narrow probability bounds, and (3) length
in November. of lead times.1 Scientific models are judged in terms of their
One way of judging the 2016 symposium is to say only two explanatory value. To be sure, predictive fit with the world is a
of the nine forecasts predicted Trump’s victory, one with the necessary feature of such explanation, but scientific prediction
unconfident caveat that “Hillary Clinton should probably be is not future forecasting. A scientific prediction is a logical
considered a strong candidate to win” (Abramowitz 2016, implication drawn from some theory. It need have no future
p. 660). However, six Clinton victory forecast models were reference. Scientific predictions are existential and a necessary
on the basis of a popular vote whose range of 50.4–52.5% part of explanation. They are existential since they are condi-
almost perfectly circle her actual vote of 51.12%. These six tional. They say “if condition C holds, then we expect Y”; or “if
models might be judged as correct in terms of what they condition C holds, then we expect with some probability p that
actually modeled. Of course, presidents are not elected by Y holds.” They are explanatory, since if C is supposed to
popular vote, but by electoral college. For that reason, eight explain Y, then it should do so in all analogous circumstances
of the current symposium articles also model, or at least take (Dowding 2016, ch. 3).
into consideration, the electoral college. The point is obvious: With enough data, of course, scientific models can forecast
what we take to be the forecast affects our judgment of a token events. Astronomers have sufficient data to forecast
model’s success. But the very fact that we can have different eclipses of the sun or moon, but not when a meteorite will
judgments based on different criteria may induce skepticism land on the White House lawn. Their inability to predict the
about the whole enterprise in the public and the media, latter does not besmirch astronomy. Political science should
bringing the discipline into disrepute. only be able to reliably forecast events with relatively stable
Forecasters can be explicit about what they are actually patterns, and only then with good data. Our ability to forecast
forecasting even if commentators are not. However, there are election results depends on good data, and it depends on
also other, deeper problems of interpretation. The public do elections having relatively stable patterns. In 2016, forecasters
not understand probability estimates, nor appreciate that knew that the characteristics of the two candidates and the
these come with a margin of error; hence people overestimate abnormal features of the Trump campaign would make the

© The Author(s), 2020. Published by Cambridge University Press on behalf of the American
doi:10.1017/S104909652000133X
Political Science Association PS • 2020 1
...............................................................................................................................................................................................................................................
Politics Symposium: Forecasting the 2020 US Elections
..............................................................................................................................................................................................................................................
election unusual—they had, as Erikson and Wlezien (2016, against overfitting even when data is known. The epistemic
p. 671), nicely put it, a “premonition of greater uncertainty.” advantage only accrues when new data tracks other differences
Scientific predictions concern the structural features of between models that are evidentially significant (Hitchcock
types for which specific data on contingencies is required and Sober 2004). Furthermore, models can track evidential
when forecasting the detail of token examples. Furthermore, significance at the type level, yet fail to track evidential
in scientific terms correctly predicting the winner each time is significance in a specific token example.

So the problem for political science is that the public might judge its scientific merits
by the accuracy of forecasts of a single token event while misunderstanding what
those forecasts actually claim.
less important than evidential aspects of models. After all, the Scientific prediction and explanation is directed at types.
proximate cause of an election result in all its detail is how To prove accurate of a given token event, scientific prediction
each voter casts their ballot, together with the counting rules. needs to pick out what is evidentially important in that event.
In science we want the ultimate causes, structural features that Where an election is close, where forecasts are made with low
explain the reasons why voters cast their ballots as they confidence, then some specific set of circumstances might be
do. Not for each voter, of course, since any given event contains causally important. In presidential elections these could be
different levels of detail—what is sometimes called “granular- something peculiar to a specific swing state, or something
ity” of description. Biden winning the presidency can be unusual—such as a pandemic! Accommodating after the event
caused by many different events at lower levels. Forecasts can lead to the charge of overfitting and not improve long-
can be incorrect, yet accurately track the evidence they utilize, term model accuracy. Of course, the probabilities assigned to
or be correct without tracking explanatory factors at all. forecasts are designed to take contingent factors into account.
Some forecasting models are simply aggregative—they Forecasting models that do best overall are those which model
extrapolate from polls. How people say they intend to vote is the most evidentially important factors for the type as a whole.
an indicator of how they will vote, but polls do not provide There are many stable patterns in social life, which tend to
much by way of explanation. Intention to vote may track hold over long periods of time. Forecasts are made for specific
contingencies in the world, but theory and other evidence is events; at times, these will not conform to the overall pattern.
required to connect the two. Contingent events are best Forecasting models are obviously important to political
tracked retrospectively. For example, Lewis-Beck and Quinlan science for they give an epistemic check upon our accommo-
(2019) nicely test Hillary Clinton’s views on the structural and dationist modelling. Yet there are dangers. Critics may deride
contingent factors that led to her loss, while Sances (2019) the accuracy of the headline predictions and then use this to
models the purportedly unusual flipping of counties to see if it disparage the science in political studies. This is to misunder-
is really unusual when viewed over time. stand the nature of scientific prediction, and is liable to instill
Structural models use objective economic and political expectations of our ability to foretell future token events with
indicators. They provide more by way of explanation and over complex causal determinants at a level of granularity which is
greater lead times can be more accurate than aggregative never expected of natural scientists. ▪
forecasts (Lewis-Beck and Dassonneville 2015). Combining
the two can provide more accurate forecasts, though with less
lead time, but do not in themselves provide further explan- NOTE
ation because they still only map contingency. Thus improv- 1. One might also add the granularity or detail of the prediction, but I will
assume the first three are tied to a given granularity, say the popular vote.
ing forecasting does not necessarily improve scientific
prediction. Poll changes a week prior to the 2016 election
may tell us nothing about similar poll changes at the 2020
REFERENCES
election, unless the same conditions C hold for both. Scientif-
Abramowitz, Alan I. 2016. “Will Time for Change Mean Time for Trump?” PS:
ically all we learn is that polls can switch in the last week. Political Science & Politics 49 (4): 659–60.
Nevertheless, we might think the more reliable the fore- Blanchflower, David. 2016. “Experts Get It Wrong Again by Failing to Predict Trump
casting model, the stronger its scientific basis. A good reason Victory.” Guardian, November 14. https://www.theguardian.com/business/2016/
nov/09experts-trump-victory-economic-political-forecasters-recession.
for believing that forecasting is the right route for political
science is that models predicting unknown data are epistem- Cuzan, Alfred G. 2020. “The Campbell Collection of Presidential Election
Forecasts: 1984-2016: A Review.” PS: Political Science & Politics
ically superior to ones only accommodating known data. doi: 10.1017/S1049096520001341.
Known data restricts potential explanatory theories and can- Dowding, Keith. 2016. The Philosophy and Methods of Political Science. London:
not increase our belief that any theory is true. Forecasting Palgrave.
unknown data adds something (White 2003). Highly predict- Dowding, Keith, and Charles Miller. 2019. “On Prediction in Political Science.”
European Journal of Political Research 58 (3): 1003–21.
ive models accommodating known data can be achieved
Erikson, Robert S., and Christopher Wlezien. 2016. “Forecasting the Presidential
through overfitting. Forecasting unknown data suggests our Vote with Leading Economic Indicators and the Polls.” PS: Political Science &
models are not overfitted. However, there are ways of guarding Politics 49 (4): 669–72.

2 PS • 2020
...............................................................................................................................................................................................................................................
..............................................................................................................................................................................................................................................
Hitchcock, Christopher, and Elliott Sober. 2004. “Prediction versus Sances, Michael W. 2019. “How Unusual Was 2016? Flipping Counties, Flipping
Accommodation and the Risk of Overfitting. 55(1): 1–34.” British Journal for Voters, and the Education–Party Correlation since 1952.” Perspectives on
the Philosophy of Science 55 (1): 1–34. Politics 17 (3): 666–78.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Schrodt, Philip. 2014. “Seven Deadly Sins of Contemporary Quantitative
Scientific Inference in Qualitative Research. Princeton: Princeton University Political Science.” Journal of Peace Research 5 (12): 287–300.
Press.
Shirani-Mehr, Houshmand, David Rothschild, Sharad Goel, and Andrew
Lewis-Beck, Michael S., and Ruth Dassonneville. 2015. “Forecasting Elections in Gelman. 2018. “Disentangling Bias and Variance in Election Polls.” Journal of
Europe: Synthetic Models.” Research and Politics 1 (1): 1–11. the American Statistical Association 113 (522): 607–14.
Lewis-Beck, Michael S., and Stephen Quinlan. 2019. “The Hillary Westwood, Sean Jeremy, Soloman Messing, and Ypthtach Leikes. 2020.
Hypotheses: Testing Candidate Views of Loss.” Perspectives on Politics 17 “Projecting Confidence: How the Probabilistic Horse Race Confuses and
(3): 646–65. Demobilizes the Public.” Journal of Politics 82 (4): online first.
Lewis-Beck, Michael S., and Mary Stegmaier. 2014. “US Presidential Election White, R. 2003. “The Epistemic Advantage of Prediction over Accommodation.”
Forecasting.” PS: Political Science & Politics 47 (2): 284–88. Mind 112 (448): 653–683.

PS • 2020 3

You might also like