The Early 2020 Democratic Primary: Comparing Voting Methods

Aaron Hamlin

We shared a poll we conducted with Change Research in November as the Democratic primaries were heating up. In that poll, you got to see approval voting and plurality voting going head to head.

For those new to the space, plurality voting is the choose-one voting method we use now. Approval voting lets you pick as many candidates as you want, and the candidate with the most votes still wins.
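
To make the tallies concrete, here's a minimal sketch in Python with made-up ballots (the candidate names and counts are purely illustrative). Both methods count names and elect the candidate with the most votes; the only difference is how many names a ballot may carry.

```python
from collections import Counter

# Hypothetical ballots, purely for illustration.
plurality_ballots = ["Ann", "Ann", "Bob", "Cy"]                      # one name per ballot
approval_ballots = [{"Ann"}, {"Ann", "Cy"}, {"Bob", "Cy"}, {"Cy"}]   # any number of names

plurality_tally = Counter(plurality_ballots)
approval_tally = Counter(name for ballot in approval_ballots for name in ballot)

# In both methods, the candidate with the most votes wins.
print(plurality_tally.most_common(1))  # [('Ann', 2)]
print(approval_tally.most_common(1))   # [('Cy', 3)]
```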

We collected more data, but we had the kind of delays you’d expect from not having funding for a designated research staffer. (You’ll see a quicker turnaround in our upcoming Super Tuesday poll as we designate more staff and contractor time.)

So now we get to share that data. In addition to approval and plurality voting, we also collected data on ranked-choice voting (RCV). RCV has voters rank the candidates and then simulates a sequential runoff across multiple rounds (in this case, 18 rounds). We also took a control measure, a zero-to-five-point honest assessment scale, to see how the different voting methods stacked up against how voters actually felt.
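
For the mechanics, here is a minimal sketch of the sequential runoff that RCV simulates (Python, with hypothetical ranked ballots; tie-breaking and exhausted-ballot rules vary by jurisdiction, so treat this as an illustration rather than a canonical implementation). Each round counts only the top remaining choice on each ballot, eliminates the last-place candidate, and repeats until someone holds a majority of the remaining ballots.

```python
from collections import Counter

def rcv_winner(ballots):
    """Simulate a sequential runoff: count each ballot's top remaining
    choice, eliminate the last-place candidate, repeat until a majority."""
    eliminated = set()
    while True:
        # Each ballot counts only for its highest-ranked remaining candidate;
        # ballots with no remaining candidates are exhausted and drop out.
        tally = Counter(
            next(c for c in ballot if c not in eliminated)
            for ballot in ballots
            if any(c not in eliminated for c in ballot)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        eliminated.add(min(tally, key=tally.get))  # drop last place

# Hypothetical ranked ballots, for illustration only.
print(rcv_winner([["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "A"]]))  # "C"
```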

Under plurality voting, we saw a virtual tie between Warren, Sanders, and Biden. Buttigieg followed close behind in fourth. In the rest of the field, only two candidates, Harris and Yang, got near five percent, with the rest struggling in the low single digits.

RCV produced a clearer winner, choosing Warren in its 18th and final runoff round. The other three frontrunners trailed her in the final rounds. Like our choose-one method, however, RCV failed to reflect support for the rest of the field. Harris and Yang still got low support. The remainder fared only slightly better than under the choose-one method, still gaining only trivial support.

The figure above shows candidates' RCV support in its best light: it reports each candidate's highest support in any round before elimination.

Approval voting, like RCV, chose Warren as the winner with a commanding 74% approval. It also had the same four frontrunners. But that’s where the similarities ended.

Under approval voting, unlike the choose-one method and RCV, the rest of the candidates got a truer reflection of their support. Castro, Klobuchar, Yang, Booker, and Harris all got between about 25% and 40% support. Another eight candidates each failed to capture even 10% approval.

Looking Closer at RCV

Here, you can see how the RCV election broke down round by round. What's interesting about RCV is that it looks at only a fraction of the ballot data at any one point. Many second, third, and later rankings can get ignored entirely, depending on the sequence in which candidates are eliminated.

Take a look at the sequence of elimination. (Note that this figure uses remaining ballots rather than the total original ballots as the denominator, which explains the slight percentage increases compared to the earlier figure. As a hypothetical example, 70 votes out of 1,000 original ballots is 7%, but if only 800 ballots remain, those same 70 votes become 8.75%.)

Now look at Yang, for instance, who barely got more support under RCV than he did under the choose-one method. He starts at around 4% and tops out at just over 7% of the remaining ballots before getting eliminated.

Why so low? One reason is that the cross-support for Yang from supporters of the top four candidates never gets acknowledged in the RCV tally. RCV's tallying simply ignores that data. It's also hard to know whether later rankings of Yang actually “support” him overall. When voters rank a candidate 2nd or 7th, we can't really tell at what point they stop actually liking that candidate.
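
The rcv_winner sketch from earlier makes this concrete. In the hypothetical profile below, candidate Y is the second choice on every other ballot, yet Y is eliminated first and none of those second rankings is ever counted:

```python
# Hypothetical ballots (reusing rcv_winner from the sketch above):
# "Y" is the second choice on seven of eight ballots, but with only
# one first-place vote, Y is eliminated immediately, so those seven
# second rankings are never counted.
ballots = [["A", "Y", "B"]] * 4 + [["B", "Y", "A"]] * 3 + [["Y", "A", "B"]]
print(rcv_winner(ballots))  # "A" wins; Y's broad second-place support is invisible
```

In this invented example, had the same voters cast approval ballots for their top two choices, Y would lead with 8 approvals to A's 5.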

A positive note for RCV here is that respondents were eager to rank many of the candidates. That's good because it increased RCV's ability to elect a high-utility winner (discussed later), which it did here. Ranking more candidates also increases RCV's likelihood of electing a winner who can beat everyone else head-to-head.

Honest Assessment

There are lots of voting methods out there. Who's to say one or another is actually picking the right winner? That's why we included a control measure. We asked respondents how much they wanted each candidate elected on a scale of 0 to 5, inclusive, and we asked them not to consider viability.

This gives us an idea of what a “high-utility” winner would be. That is, it tells us which candidate provides the greatest total happiness among the voters.
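
In code terms, the high-utility winner is simply the candidate with the highest mean score on the honest assessment scale. A minimal Python sketch, using invented scores:

```python
from statistics import mean

# Hypothetical 0-5 honest-assessment scores, one list per candidate.
scores = {
    "Warren": [5, 4, 4, 5, 3],
    "Biden":  [3, 2, 5, 4, 1],
    "Yang":   [2, 3, 3, 2, 4],
}

# The high-utility winner maximizes average voter happiness.
averages = {name: mean(s) for name, s in scores.items()}
winner = max(averages, key=averages.get)
print(winner, averages[winner])  # Warren 4.2
```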

When we looked at the average scores, we saw Warren taking the lead, topping out at an average of over four points. We see the same four candidates in front, but we also see more granular differences throughout the rest of the pack. Five additional candidates, from Harris to Klobuchar, got scores between two and three. The remaining candidates, from Steyer to Messam, trailed off with scores between one and two.

Despite nearly a full point's difference between Biden and Warren in the honest assessment, our choose-one method couldn't distinguish between them. Our choose-one method fell apart from there, unable to pick up any useful information on the rest of the candidates. It lumped them all together, giving them practically no support. Our current voting method basically failed in every way a voting method could.

RCV got the high-utility winner right, choosing Warren. It had trouble matching support after that, however. It mixed up the order of support starting with the third-place finisher, selecting Biden despite his falling moderately behind Buttigieg in the honest assessment.

Beyond the leading candidates, it failed to capture the relative support between candidates, meaning anyone not in the top four took a nosedive in support.

Approval voting, like RCV, got the high-utility winner right and chose Warren. But approval voting did more than that. It also got the candidates' order of support right all the way through the 7th-place candidate, Yang.

Further, those candidates didn't see enormous drops in support, either. Approval voting captured many of the granular differences found in the utility distributions between candidates. It didn't just lump them all together as hopeless, like RCV and plurality did.

You can see the different types of “utility profiles” each candidate shows in the interactive figure below. These profiles show how often candidates receive certain scores under the honest assessment scale. Notice how Warren picks up more fours and fives relative to other candidates. How did your favored candidates do?

Approval Voting Breakdown

One of the bigger shots taken at approval voting is that it encourages voters to pick only one candidate. While it is sometimes advantageous for voters to pick only one candidate, it’s imperative that voters have the option to choose more than one candidate when they need to.

Even when most voters choose only one candidate, the remainder who do select more than one can often (1) make a material difference in determining the winner and (2) play a role in giving other candidates a more accurate reflection of their support.

We also see a pattern in the literature (laid out well in our last study) showing that there tend to be more approvals per ballot when there are more candidates. Here, with 19 candidates, we saw that pattern: respondents voted for more candidates.

The average respondent approved of 4.9 candidates on their ballot. Fewer than 9% of respondents chose only one candidate. We expect that as the field narrows, we'll naturally see fewer votes per ballot and more people choosing only one candidate. Yet it will remain important that voters have the option to choose multiple candidates if they wish. (Unfortunately, actual voters in the primary will not get this option.)
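
Both statistics are easy to compute from raw approval ballots. A short sketch, assuming each ballot is the set of candidates a voter approved (the ballots here are invented):

```python
# Hypothetical approval ballots: each is the set of candidates a voter approved.
ballots = [{"A", "B", "C"}, {"A"}, {"B", "C", "D", "E"}, {"A", "C"}]

avg_approvals = sum(len(b) for b in ballots) / len(ballots)
bullet_share = sum(len(b) == 1 for b in ballots) / len(ballots)

print(f"average approvals per ballot: {avg_approvals:.1f}")      # 2.5
print(f"share approving only one candidate: {bullet_share:.0%}")  # 25%
```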

The Takeaways

The biggest takeaway here is that we continue to use the worst voting method to measure candidate support. Our choose-one voting method crumbles under vote splitting among similar candidates. It becomes anyone's best guess who actually has the most support.

Both approval voting and RCV got the winner right. But we saw the difference between the two methods in the way they captured support. Approval voting picked up on much of the nuance that was lost in the RCV tally.

More about the data: This poll was conducted by Change Research and commissioned by The Center for Election Science. The sample comprised 1,142 likely Democratic primary voters, surveyed online November 16-20, 2019. Data were reweighted to match the sample's demographics. Change Research's Bias Correct Engine establishes and continuously rebalances advertising targets across region, age, gender, race, and partisanship to dynamically deliver large samples that accurately reflect the demographics of a population. Post-stratification was performed on age, gender, ethnicity, region, 2016 primary vote, 2016 presidential vote, and self-reported social media use. Thanks to Chris Raleigh for compiling the figures and summarizing the data.

Aaron Hamlin is a co-founder of CES and served as its Executive Director until 2023.