
Making Political Predictions

PSCI 101

Learning Objectives

- Evaluate ways of thinking about political predictions
- Identify steps to take to make more accurate predictions
- Make your own political predictions
- Evaluate the predictions of others

How did we know who would win the 2020 Presidential Election? …Did we?

At what point could we accurately predict the 2020 election? If you went to
https://www.270towin.com/ last summer when I was preparing to teach this class, the
above screenshot is the prediction you would have seen. So far out from the election, how
did they know which states would go “blue” and which would go “red?” Why didn’t they
know what would happen in North Carolina? Who created this “consensus” anyway?

This “consensus” map is based on the election forecasts of several experts in the field.
These are people who think carefully about questions of causation, and then apply those
ideas to the future to make predictions. For example, they can reason: “Party ID is a very
important factor in who people vote for. I predict that states with mostly Democrats will
vote for the Democratic candidate, and states with mostly Republicans will vote for the
Republican candidate.” This is why they can guess CA but not NC – NC has a lot more
swing voters (side note: states don’t just have one party – even mostly Democratic states
like CA still have a significant minority of Republicans – in fact, the state with the largest
number of Trump voters was CA). There are some other factors that matter, though – why
are TX and GA pink instead of red? Because their demographics (who lives there) are
changing, and they are closer to being swing states than the solid red states.

This is how experts make political predictions: based on evidence. You will see that neither
candidate is guaranteed to win by this map. That is probably very disappointing to you!

Why can’t science just have an answer?! Well, sometimes science is complicated. This is one
reason that these types of evidence-based forecasts are less popular than ones that try to
make a confident assertion, right or wrong. “Trump will win re-election” or “Trump will have
a historic loss to Biden” make for better statements to share on social media than “No one is
sure who will win the election that is six months away!” Especially with all of the political
issues that were going on at the time, it would have been irresponsible to make a solid
prediction. If you are thinking “well, 6 months isn’t that long” consider that when I took the
above screenshot, George Floyd had not yet been killed. Would you think it responsible to
ignore months of widespread protest as a potential effect on a national election outcome?
Probably not! We can make some predictions this far out, but with a lot of uncertainty,
as many of the key political issues that will be relevant to the election have not even
happened yet. Sometimes, crucial events happen right before the election (such as
Trump being hospitalized with covid). We try to make predictions, but this is why the polls
can be “wrong.” There are simply too many factors in play!

Another problem with making predictions is that often, we expect “our” candidate to win –
after all, we support them, and our family and friends support them, so it’s obvious they will
be victorious! However, our own viewpoints and those of our close friends and family may
not represent everyone. It is normal for us to want to make predictions based on what we
WANT to happen. Usually, in a face-to-face class, I’ll poll students to ask who they think will
win an election. I then usually remind them to pick who they think will realistically win, not
who they want to win. The whole class will let out a groan, and then start changing their
answers on the poll. They all picked their candidate, not the person they thought would win
based on the evidence.

In your assignment, I will be asking you to guess who would have won based on the evidence
available at the time. But, if I just said that is complicated, how can you do that? We can
make some guesses about what will happen in the “swing states” based on the available
evidence – they are just less certain than the solid red or blue states. This module will teach
you about making the most accurate political predictions possible.

Side note: the numbers “270” and “538” referenced in these websites refer to the number of
votes in the Electoral College. There are 538 electors, and the winner must gain at least 270
votes from them to win. If there is a tie, then the House of Representatives decides.

Expectations vs. Reality

Here are two examples of evidence-based predictions made immediately before the
election compared to the actual election results. These have the least room for uncertainty
as they were released the day before or the morning of election day. Due to early voting,
millions had already voted, so there should have been minimal error (while those votes
had not yet been counted, people would theoretically report their votes accurately
in polls).

The one on the left is from the website 538. They aggregate various polls and make a
prediction based on that. You can see they correctly predicted that Biden would win but
overestimated his chances in some states. Their Florida and North Carolina calls were
wrong, and the margin of victory was closer than they predicted in states like Wisconsin
(they predicted Biden would win Wisconsin by nearly 8 points, when he actually won by
less than 1%).

The one on the right is from the Center for Politics at the University of Virginia. They used a
combination of polls and expert analysis to make their predictions. They were very
accurate, missing only NC in their predictions (though they don’t predict the exact margin
of victory like 538 does).

These are not the only two forecasts. These are just two well-known and usually highly
accurate examples. Still, you can see that they had some error to them.
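To make the aggregation idea concrete, here is a minimal sketch of averaging polls. This is not 538's actual model – real aggregators also adjust for pollster quality, house effects, and recency – and the poll numbers and weights below are invented for illustration:

```python
# A minimal sketch of poll aggregation: a weighted average of poll margins.
# The polls and weights here are hypothetical, purely for illustration.

def aggregate(polls):
    """Weighted average of poll margins (candidate A minus candidate B, in points)."""
    total_weight = sum(weight for _, weight in polls)
    return sum(margin * weight for margin, weight in polls) / total_weight

# Hypothetical polls: (margin in points, weight based on sample size/recency)
polls = [(+8.0, 1.0), (+5.0, 2.0), (+6.5, 1.5)]
print(round(aggregate(polls), 2))  # a single "consensus" margin
```

Weighting recent, larger, higher-quality polls more heavily is what separates an aggregate from a naive average.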

Where did the errors in these forecasts come from, then, if there should have been minimal error? There is
always room for uncertainty with polling. If your sample is not representative, your poll will
be biased one way or the other. You also have to predict exactly who will turn out to vote on
election day. With a very high turnout in 2020, this was difficult to predict. Typically we
assume that people who didn’t vote before will continue to not vote – but obviously in 2020
that was a bad assumption. So, this made forecasts more inaccurate. There are also various
accounts from pollsters explaining where they think things went right or wrong in the
assigned reading.
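Even a perfectly representative poll carries some built-in sampling error. As a rough sketch (the actual error in real polls is larger, because of weighting and non-response), the textbook 95% margin of error for a simple random sample can be computed like this:

```python
# Sketch: the textbook 95% margin of error for a simple random sample.
# Real polls have additional error from weighting and non-response, so this
# understates the true uncertainty.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 1,000-person poll showing a candidate at 52%:
moe = margin_of_error(0.52, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3 points
```

This is why a 52%–48% result in a 1,000-person poll is effectively a toss-up: the margin of error is about as large as the lead.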

What about 2024?

Obviously, it is WAY too early to predict 2024. However, that will not stop political pundits,
so be sure to read any “2024 predictions” you see with a very critical eye. Above, I
have the current baseline prediction for 2024 from 270towin. The “swing states” are the
states that were relatively competitive in 2020, and the safe states (in blue or red) are those
won by a large margin by either candidate in 2020. You can see from this map that based
on nothing but the 2020 results, it is not at all clear who will win. We will need more polls
and information to predict 2024. Who knows how the important political issues of the day
will change over the next 3 years to affect the predictions they will make! This very
uncertain map is about as certain as you can get this far out.

How to Think

The question you should be asking yourself now is how should you think if you want to
make good political predictions and forecasts? Is it even possible if you hold strong political
views? Yes, as long as you think about the issues in a certain way. The book chapter
assigned talks about how both TV pundits and academics made poor predictions if they
thought like “hedgehogs,” but not if they thought like “foxes” (side note: pundits don’t
usually get paid to think like foxes, as that would make for less interesting TV).

The short version of this table is that you should think about things in a complex manner,
take all evidence into consideration, and base your predictions on all evidence and
information, not what you assumed would happen or what you want to happen. Don’t get
caught up in your own biases or double-down on your initial ideas if they turn out to be
wrong, and you will be a better forecaster! This can be difficult, but you can train yourself
to do it.

Principles of Forecasting

1. Think probabilistically
2. Update your forecast
3. Look for consensus

These principles were used by Nate Silver to design his website,


https://fivethirtyeight.com/

You will be using this website in your assignment to get the evidence you need to make
your prediction. He builds an advanced statistical forecasting model based on these
principles to make predictions about the election and other political (and sports)
outcomes. However, you don’t necessarily need advanced statistics to make predictions. As
discussed toward the end of the chapter, information that gives us deeper content and
detail might be helpful (called qualitative information). Think about what you know about
elections along with the trends seen in the polls to make your prediction.

What does it mean to think probabilistically? When you see a forecast such as “77% chance
of winning,” that means that about 8 out of 10 times, that is the outcome that we would
expect – not that the person will win 77% of the vote. Don’t get caught up in very high
probabilities. I’m sure you all know someone, for example, who has had their birth control
fail – even though most methods are 90-99% effective. Low probabilities mean that things are unlikely,
not impossible.
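A quick simulation makes this concrete. If a candidate genuinely has a 77% chance of winning, then across many hypothetical elections the underdog still wins almost a quarter of the time – the 77% figure describes frequency across possible outcomes, not vote share:

```python
# Sketch: what a "77% chance of winning" means. We simulate many elections
# where the favorite wins each one with probability 0.77; the underdog
# still wins roughly 23% of the time.
import random

random.seed(0)  # fixed seed so the simulation is reproducible
trials = 100_000
favorite_wins = sum(random.random() < 0.77 for _ in range(trials))
print(f"Favorite won {favorite_wins / trials:.0%} of simulated elections")
```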

Updating your forecast is what is meant by the principle “today’s forecast is the first
forecast of the rest of your life.” You need to take into account new information as it
becomes available, or your model is bad. Would you stick with a guess about the election
from before the pandemic and its associated economic consequences? When we had 20+
Democratic candidates rather than narrowing down to Joe Biden? Of course not! Forecasts
like 538 take new information and update as it comes in.
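One simple way to formalize "updating" is Bayes' rule: start from your current win probability and revise it when new evidence (say, a strong poll) arrives. The numbers below are invented for illustration, not drawn from any real forecast:

```python
# Sketch of "updating your forecast": a Bayesian update of the probability
# that candidate A wins, after a new poll comes in. All numbers are invented.

def update_probability(prior, likelihood_if_win, likelihood_if_lose):
    """Bayes' rule: P(win | poll) from the prior P(win) and how likely the
    observed poll result is under each eventual outcome."""
    numerator = prior * likelihood_if_win
    denominator = numerator + (1 - prior) * likelihood_if_lose
    return numerator / denominator

prior = 0.60  # forecast before the new poll
# Suppose a strong poll for A is twice as likely if A is headed for a win:
posterior = update_probability(prior, likelihood_if_win=0.50,
                               likelihood_if_lose=0.25)
print(round(posterior, 2))  # the forecast moves up from 0.60
```

The key point is that the old forecast is the starting point, not the answer: each new piece of evidence shifts it.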

You should proceed with caution if you come to an unexpected conclusion that is very
different from the consensus. Sometimes, the expert consensus will be wrong, but this is
unlikely. This does not mean just to accept the consensus viewpoint without any skepticism,
but instead to look at the evidence you are using and the evidence everyone else is using and
make sure no one is making any errors. If no one is, the conclusions should be similar –
after all, they are supposed to be based on the evidence. You might be correct if you come to
a different conclusion because you used better/more up to date data or took a factor into
account that others were ignoring.

A major criticism of Silver’s work is that he is “never wrong” even when the less likely
outcome occurs because he always has uncertainty in his predictions. This was particularly
the case after the 2016 election, where he gave Clinton a higher chance of winning than
Trump (although many other experts were far more optimistic about Clinton’s chances). This
is just the nature of making complicated statistical predictions, however. If a website gave
90% probabilities to outcomes and then was wrong 90% of the time, we would rightly say
they were bad forecasters. However, if they said there were 90% probabilities but were
wrong only once in a while, would that be bad? Or just the nature of trying to predict the
future when you can’t be sure what will happen? If they say 90%, then they SHOULD be
wrong 10% of the time.
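This idea – that stated probabilities should match how often events actually happen – is called calibration, and it can be checked directly. A sketch, with invented data:

```python
# Sketch: checking whether a forecaster is "calibrated". Group predictions
# by their stated probability; within each group, the predicted event should
# happen about that often. The prediction data here is invented.

def calibration(predictions):
    """predictions: list of (stated_probability, event_happened) pairs.
    Returns, for each stated probability, the observed frequency."""
    by_prob = {}
    for prob, happened in predictions:
        by_prob.setdefault(prob, []).append(happened)
    return {prob: sum(hits) / len(hits) for prob, hits in by_prob.items()}

# Ten predictions made at "90% likely": a well-calibrated forecaster
# should miss about one of them.
preds = [(0.9, True)] * 9 + [(0.9, False)]
print(calibration(preds))  # being wrong once out of ten is expected
```

By this standard, a forecaster who says "90%" and is never wrong is actually miscalibrated: they are underselling their own confidence.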

Side note: you are only reading one chapter of this book, but he writes about many failures
in predictions and the reasons for them (including bad predictions about past pandemics). It
is in the library if you want to read the whole thing.
