class: center, middle, inverse, title-slide

# Why forecast elections?
## And how (not) to do so

### G. Elliott Morris
Data journalist
The Economist
### August 30, 2019

---
class: center, inverse

## It's too early to forecast the 2020 election. So let me talk instead about why I think forecasting is worthwhile, but only when done correctly.

<img src="figures/dart_monkey.jpg" width="80%" />

---

# Why forecast?

A few guiding thoughts:

1. Academic reasons:
  - Forecasts are a (the most?) popular public-facing product of political science
  - Forecasting allows us to **explain** things beforehand (if models are robust enough)
2. Journalistic reasons:
  - Forecasts are better than punditry
  - Forecasts are better than polls alone
  - Demand: if mainstream outlets don't (a) make their own forecasts or (b) cover good ones, bad forecasts will dominate
3. They're fun!
  - They're fun to code and get working
  - Readers like being presented with interactive content they can return to day after day, week after week

---
class: center, inverse

#

## Forecasting is important

## But it can be dangerous when done poorly

## Pay attention only to forecasters _who take uncertainty seriously_

## (The other ones are just lying to you)

---

# How (not) to forecast

--

### 1. Don't use economic indicators alone

- They aren't predictive of election outcomes
- This is becoming more true over time

--

### 2. Don't act certain (unless you really are)

- False certainty can bias media narratives (especially when combined with reporters' political biases)
- False certainty can lead to severe consequences
- False certainty betrays our real understanding of how often "unlikely" election outcomes can happen (see: Trump 2016)

--

### 3. Be probabilistic

- Point projections don't matter; distributions do
- And so do Electoral College votes (i.e. don't just predict the popular vote)

---

# 1. Don't use economic indicators alone

#### Why?
They are not predictive of election outcomes

<div class="figure">
<img src="figures/jackson_2015.jpg" alt="Source: Natalie Jackson; Huffington Post (2015)" width="60%" />
<p class="caption">Source: Natalie Jackson; Huffington Post (2015)</p>
</div>

---

# 1. Don't use economic indicators alone

#### Why?

- They are not predictive of election outcomes
- This is becoming more true over time

<div class="figure">
<img src="figures/morris_2019.png" alt="Source: G. Elliott Morris; The Economist (2019)" width="70%" />
<p class="caption">Source: G. Elliott Morris; The Economist (2019)</p>
</div>

---

# 2. Don't act certain (unless you really are)

- False certainty can bias media narratives (especially when combined with reporters' political biases)

<img src="figures/grim_2016.png" width="80%" />

When presented with competing forecasts, people grab onto the ones that comport with their worldview:

> "For the polls to be wrong, there wouldn’t need to be one single 3-point error. All of the polls ― all of them, as Brianna Keilar would put it ― would have to be off by 3 points in the same direction."

> "If you want to put your faith in the numbers, **you can relax. She’s got this.**"

---

# 2. Don't act certain (unless you really are)

- False certainty can bias media narratives
- False certainty can lead to severe consequences

<img src="figures/yglesias_2018.png" width="80%" />

James Comey, 2018, "A Higher Loyalty":

> "It is entirely possible that because I was making decisions in an environment **where Hillary Clinton was sure to be the next president,** my concern about making her an illegitimate president by concealing the restarted investigation bore greater weight than it would have if the election appeared closer or if Donald Trump were ahead in all polls."

---

# 2. Don't act certain (unless you really are)

- False certainty can bias media narratives
- False certainty can lead to severe consequences
- False certainty betrays our real understanding of how often "unlikely" election outcomes can happen (see: Trump 2016)

<div class="figure">
<img src="figures/silver_2016.png" alt="Source: Nate Silver; FiveThirtyEight (2016)" width="85%" />
<p class="caption">Source: Nate Silver; FiveThirtyEight (2016)</p>
</div>

---
class: center, inverse

#

#

## How can we convey our certainty?

## With probability!

---

# 3. Be probabilistic

#### Why?

Readers have the best understanding of the horse race when presented with probabilities

<div class="figure">
<img src="figures/westwood, messing and lelkes_2019.png" alt="Source: Westwood, Messing and Lelkes (2019)" width="70%" />
<p class="caption">Source: Westwood, Messing and Lelkes (2019)</p>
</div>

---

# 3. Be probabilistic

### Point projections don't matter, distributions do...

- If we are not giving readers a sense of our certainty, we are lying to them.
- The best way to convey our certainty is to produce a distribution of possible outcomes for the election, combining confidence intervals with our point projections to transform them into probabilities

### ...and so do Electoral College votes (i.e. don't just predict the popular vote)

<div class="figure">
<img src="figures/silver_2016b.png" alt="Source: Nate Silver; FiveThirtyEight (2016)" width="85%" />
<p class="caption">Source: Nate Silver; FiveThirtyEight (2016)</p>
</div>

---
class: center

# Thank you!

## G. Elliott Morris

### Data journalist, _The Economist_

**Email: [elliott@thecrosstab.com](mailto:elliott@thecrosstab.com)**

**Twitter: [@gelliottmorris](http://www.twitter.com/gelliottmorris)**

<br>

---

_These slides were made with the `xaringan` package for R by Yihui Xie. They are available online at https://www.thecrosstab.com/slides/2019-08-30-apsa/_
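---

# Appendix: from a point projection to a probability

The "combine confidence intervals with point projections" step above can be sketched numerically. This is a minimal illustration with made-up numbers, written in Python for brevity (the slides themselves were built in R), and it is not the model behind any published forecast: if we treat a point projection and its standard error as a normal distribution, the point estimate becomes a win probability.

```python
from statistics import NormalDist

# Hypothetical numbers: a projected 52% of the two-party vote,
# with a standard error of 3 percentage points.
point_projection = 52.0
standard_error = 3.0

# Treat the forecast as a normal distribution around the projection.
forecast = NormalDist(mu=point_projection, sigma=standard_error)

# The win probability is the mass above 50% of the two-party vote.
win_probability = 1 - forecast.cdf(50.0)
print(win_probability)  # roughly a 75% chance, not a sure thing
```

The same idea extends to the Electoral College: simulate many correlated state-level outcomes from such distributions and report the share of simulations in which a candidate reaches 270 electoral votes.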