A Preliminary Ranking of the Democratic Primary Contenders

Joe Biden leads Bernie Sanders by twelve points, and Sanders leads third-place Kamala Harris by ten points in this week’s #10at10 preliminary rankings. Launched on social media on January 14, the model emphasizes two factors: how a candidate is polling on average against other announced or potential Democratic candidates, and how he or she is polling on average against Donald Trump. The model also allows some wiggle room through a bonus points section. The #10at10 preliminary rankings will be updated on Twitter each Monday in this thread and usually also in a column here on Wednesdays.

In future weekly columns, we will look at a number of factors relevant to forecasting which candidates are more or less likely to be competitive for the Democratic nomination in 2020. Several of those factors figure into the bonus points, including net favorability ratings (averaged where possible); breadth, depth, and durability of measured or perceived appeal to various demographic groups; fundraising; endorsements; polling in early or delegate-rich states; second-choice polling; and how candidates have fared in major media over the course of the week. Deductions may also be given here for perceived liabilities that have not yet received wide attention.
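To make the structure of the rankings concrete, here is a minimal sketch, in Python, of a score built from the two polling averages plus a bonus/deduction term. The column does not specify how the factors are weighted, so the equal weighting and the names below (Candidate, score, rank_candidates) are illustrative assumptions, not the actual #10at10 formula.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    avg_vs_democrats: float     # average support vs. other announced or potential Democrats
    avg_margin_vs_trump: float  # average head-to-head margin vs. Trump
    bonus_points: float = 0.0   # favorability, fundraising, endorsements, etc.; may be negative

def score(c: Candidate) -> float:
    # Assumption: the two polling averages are weighted equally, with bonus
    # points (or deductions) layered on top; the column does not give weights.
    return c.avg_vs_democrats + c.avg_margin_vs_trump + c.bonus_points

def rank_candidates(field: list[Candidate]) -> list[Candidate]:
    """Return the field sorted into a preliminary ranking, highest score first."""
    return sorted(field, key=score, reverse=True)
```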

Additional topics for consideration may include changes to the Democratic primary calendar or other relevant rules, general election swing states, name recognition, coalitions, and lessons and problems from the 2016 Democratic primary. Weekly columns will also regularly select one or more candidates for an in-depth look.

Since the model leans heavily on polling, both against other Democratic candidates and against Trump for November 2020, a few words on why this might be more accurate than CNN’s narrative-based rating system from Harry Enten and Chris Cillizza or FiveThirtyEight’s ratings based on appeal to potential coalitions in five “corners” (Party Loyalists, The Left, Millennials and Friends, Black voters, and Hispanic voters):

As Nate Silver recognized in a 2011 article, polling in the first half of the year before a primary is a reasonably accurate, though not fail-safe, predictor of which candidates are likely to become the nominee. Since 1976, eight of nine GOP primary contests saw the eventual winner in first or second place (McCain in 2008) in the average of polling during the first half of the year before the primary. Trump did not begin regularly polling first or second until July 2015. For Democrats the number is a little lower, with six of nine eventual nominees regularly polling first or second in this period. In a seventh cycle, the third-place average poller (Dukakis in 1988) became the nominee. Jimmy Carter and Bill Clinton polled at less than 2% in the first half of 1975 and 1991, respectively.

While there are reasons for comparing this Democratic primary cycle to 1976, 1992, or 2016, there are even better reasons to think that if Joe Biden and Bernie Sanders remain first and second in polling through June, one of them will likely be the nominee come July 2020. Beto O’Rourke began January 2019 in third place and was displaced for a time by Elizabeth Warren; both, however, have since been definitively overtaken by Harris. There is perhaps as great as a one-in-three chance that Harris, O’Rourke, Warren, or someone further down the list could upset Biden or Sanders. The eventual nominee could also emerge from further back over the next few months as a definitive first- or second-place poller, particularly if Biden decides not to run.

One of the key reasons for thinking Biden and Sanders truly are the frontrunners is that several polls, including CNN and Monmouth [pdf] last week, strongly indicate that Democrats most want a candidate who can beat Trump. Along with Michelle Obama, who almost certainly will not run, Sanders and Biden have persistently outpolled Trump by much larger margins than other announced or potential candidates.

Finally, a brief note on the word “preliminary” in the model’s name. Once the debates begin in June 2019, the primary and caucus rules and dates are set, and more abundant state polling is available, I will update the model to a more sophisticated version using the same principles but adding projected delegate count ranges over the course of the primary calendar.

For questions, comments, or to inquire about syndicating the weekly column in your outlet, I can be contacted on Twitter @djjohnso (DMs open) or at djjohnso@yahoo.com (subject line #10at10 Election Data).

More on the #10at10 Name, General Method, and Success Record:

The #10at10 model takes its name from its basic principle of poll averaging: as an election draws near, the latest scientific poll from each polling firm within the previous ten days, averaged strictly without weighting or adjustments. When I began using this model ahead of the 2016 U.S. presidential election, I aimed to update the average at ten a.m. and/or ten p.m. each day, and the updates still usually appear around one of those times on Twitter. Averaging all scientific polls, even those from partisan firms, without weighting or adjustments learns from the best of, but sets the model apart from, the two most recognizable polling aggregators and analyzers.
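For readers who want the averaging rule spelled out, here is a minimal sketch in Python of that principle as I read it: keep only the most recent poll from each firm inside the window, then take a plain unweighted mean. The Poll record and the ten_at_ten_average name are hypothetical illustrations, not code from the actual model.

```python
from datetime import date, timedelta

# Each poll is recorded as (polling firm, end date of fieldwork, candidate's share in percent).
Poll = tuple[str, date, float]

def ten_at_ten_average(polls: list[Poll], today: date, window_days: int = 10) -> float:
    """Strict, unweighted average of the most recent poll from each firm
    within the window; no adjustments for sample size or pollster rating."""
    cutoff = today - timedelta(days=window_days)
    latest_by_firm: dict[str, Poll] = {}
    for firm, when, share in polls:
        if when < cutoff:
            continue  # outside the ten-day window
        if firm not in latest_by_firm or when > latest_by_firm[firm][1]:
            latest_by_firm[firm] = (firm, when, share)  # keep only each firm's latest poll
    shares = [share for _, _, share in latest_by_firm.values()]
    return sum(shares) / len(shares) if shares else float("nan")
```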

Nate Silver’s FiveThirtyEight also uses all, or almost all, polls but has long weighted them by date (using a window of up to twenty-one days, and sometimes more), sample size, and pollster quality. In 2014 FiveThirtyEight also began manually adjusting poll margins up or down according to non-transparent “house effects and other factors.” RealClearPolitics, by contrast, has always insisted on strictly averaging polls but, other than generally excluding partisan polling, is not transparent about how it selects which polls to include or exclude and which time frame its averages cover.

The ten-day window in the #10at10 model is adjustable for earlier in an election cycle, when polling is scarcer. Generally, we try to keep the time frame from becoming too elastic while still including four or more polls from different polling outfits. While the polling average is critical for the model, on some occasions my final projection departs from the strict model output based on specifically outlined factors.
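As a rough sketch of that adjustment, one way to pick the window early in a cycle is to widen it only until enough distinct firms are represented. The 30-day cap and the choose_window name here are assumptions for illustration; the column says only that the window should not become too elastic and should usually capture four or more polls from different outfits.

```python
from datetime import date, timedelta

Poll = tuple[str, date, float]  # (polling firm, end date, candidate's share), as in the sketch above

def choose_window(polls: list[Poll], today: date,
                  base_days: int = 10, max_days: int = 30, min_firms: int = 4) -> int:
    """Start from the ten-day window and widen it one day at a time until it
    covers polls from at least four different firms, up to an assumed cap."""
    for days in range(base_days, max_days + 1):
        cutoff = today - timedelta(days=days)
        firms = {firm for firm, when, _ in polls if when >= cutoff}
        if len(firms) >= min_firms:
            return days
    return max_days  # polling is very sparse; fall back to the cap
```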

Using this model, we had a fair bit of success in projecting a close 2016 race between Trump and Clinton, including a Trump win in Michigan. #10at10 modeling also accurately projected a close race or even a minority government for Theresa May in the 2017 UK General Election, both a month in advance and the day prior; a narrow Doug Jones win over Roy Moore in the Alabama special Senate election; and a majority government for Doug Ford in Ontario, Canada in June 2018. In each case except Ontario, the projection was within one percentage point of the final overall margin. The final #10at10 projection was also within 1% of the popular vote gap in last November’s midterm elections for the House of Representatives, but my seat count wrongly projected a race too close to call, with a slightly better than even chance of Republicans keeping the House.