Research Hub

Super Tuesday: Deep Voting Methods Dive

By Aaron Hamlin

There was a different, more interesting race leading up to Super Tuesday that you didn’t see. We compared three different voting methods in a nationwide poll. One was our current choose-one voting method, plurality voting. Another was ranked-choice voting (RCV), which has voters rank their preferences and then simulates sequential runoffs. The third method was approval voting, which lets voters pick as many candidates as they want, and the candidate with the most votes wins.
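The difference between the choose-one and approval tallies can be sketched in a few lines. The ballots below are hypothetical, purely to illustrate the mechanics: each approval ballot lists every candidate a voter approves, with the first-listed candidate standing in for that voter's choose-one pick.

```python
# Illustrative sketch with hypothetical ballots (not the poll's data):
# a choose-one (plurality) tally versus an approval tally.
from collections import Counter

ballots = [
    ["Sanders", "Warren"],
    ["Sanders"],
    ["Warren", "Sanders"],
    ["Biden", "Buttigieg"],
    ["Buttigieg", "Warren"],
]

plurality = Counter(b[0] for b in ballots)           # one vote per voter
approval = Counter(c for b in ballots for c in b)    # one vote per approval

print(plurality)  # Sanders leads on first choices alone
print(approval)   # Warren catches up once shared support counts
```

The plurality tally throws away everything after a voter's first pick; the approval tally is the same one-pass count, just over every approval.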

We got a lot of data from that poll. Here’s a refresher from the top-line results before we dig deeper.

The winner here, Sanders, stayed the same under all three voting methods. Our choose-one method put Sanders as a runaway winner, yet both RCV and approval voting had the race closer between Sanders and Warren (again, both RCV and approval voting still chose Sanders). That’s not surprising given how sensitive our choose-one voting method is to vote splitting, which cost Warren a lot of her support.

But a voting method has more jobs than just determining the winner. It should also gauge the support of all the candidates—including ones who lose. Both the choose-one method and RCV put Biden solidly ahead of Buttigieg, yet approval voting placed them closer together. Approval voting also captured the support of the remaining candidates while both our choose-one method and RCV struggled here.

Our choose-one voting method is so information-light that it didn’t let voters say how they felt about the candidates. RCV, on the other hand, ignored many of those next-choice preferences, so that support never showed up. Approval voting showed a crisper picture because (1) voters were able to provide their opinions about all the candidates, and (2) approval voting actually used all that information.

Looking Closer at RCV

(This figure looks at remaining votes rather than total original votes as the denominator, which explains the slight increases in percentages compared to the earlier figure.)

Here, we’re looking at the RCV election round by round. RCV took until the last possible round—round seven—to name Sanders the winner.

Like we saw in our earlier poll from November, many candidates got an artificially low amount of support with RCV. We saw this with Buttigieg on down through Gabbard. This is because RCV only looks at part of the rankings at any one point in time, and it can ignore certain information throughout the entire election.

Take Klobuchar, for instance. There weren’t many votes for her to pick up from Gabbard’s and Steyer’s eliminations, so Klobuchar didn’t improve much under RCV. And with Klobuchar next on the chopping block with the fewest first-choice votes, she had no further opportunity to gain support. Even if every other ballot had put Klobuchar as its second-favorite choice, Klobuchar would still have been eliminated.
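The round-by-round mechanic that squeezed out Klobuchar can be sketched briefly. The ballots below are hypothetical and deliberately constructed so that candidate "K" is every voter's second choice yet goes out first, assuming the standard rule of eliminating whoever holds the fewest top-choice votes among remaining candidates:

```python
# Sketch of RCV's sequential elimination on hypothetical ballots.
# Rankings are ordered favorite-first.
from collections import Counter

def rcv_winner(rankings):
    remaining = {c for r in rankings for c in r}
    while True:
        # Each ballot counts for its highest-ranked remaining candidate.
        top = (next((c for c in r if c in remaining), None) for r in rankings)
        tallies = Counter(c for c in top if c is not None)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()) or len(remaining) == 1:
            return leader
        # Eliminate the remaining candidate with the fewest top-choice votes.
        remaining.discard(min(sorted(remaining), key=lambda c: tallies[c]))

# "K" is second on every ballot but first on none:
rankings = [["A", "K", "B"], ["A", "K", "B"], ["B", "K", "A"], ["C", "K", "A"]]
print(rcv_winner(rankings))  # "A" wins; "K" is eliminated in round one
```

Because RCV only ever looks at each ballot's top remaining choice, "K"'s universal second-place support is never counted, which is exactly the blind spot described above.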

Now, Klobuchar didn’t have the second-choice vote on every other ballot, but she did get more support—which RCV ignored. Buttigieg also fell victim to this. RCV failed to acknowledge the support that he earned. Contrast that with the support those candidates got under approval voting.

Analyzing Approval Voting

Approval voting chose the same winner as the other two methods. But there were some clear differences. First, approval voting recognized the closeness between Sanders and Warren (RCV also got this right).

Approval voting also recognized the closeness between Biden and Buttigieg, which is one even RCV missed. Then approval voting showed support for the remaining candidates who went overlooked by both RCV and our choose-one method. This oversight by our choose-one method and RCV particularly hurt Klobuchar.

Approval voting measured candidates’ support properly by letting poll respondents support multiple candidates while also not having any unusual ballot transfer schemes during the calculation. That latter part is where approval voting seriously differs from RCV—approval voting is simpler and doesn’t ignore or drop data that voters provide.

We can see from the frequency distribution that over 70% of respondents chose more than one candidate (fewer than in the November poll, as we predicted). The average number of approvals per ballot was over 2.5. Even with nine candidates, that is a lot of people choosing multiple candidates. It shows how needed approval voting was for this particular election—which is unfortunate for actual voters who were stuck with our choose-one method.
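Both ballot statistics mentioned above are simple to compute. A minimal sketch on hypothetical approval ballots (the real figures come from the poll data, which isn't reproduced here):

```python
# Hypothetical approval ballots: each is the set of candidates approved.
ballots = [
    {"Sanders", "Warren"},
    {"Sanders"},
    {"Biden", "Buttigieg", "Klobuchar"},
    {"Warren", "Sanders"},
]

# Share of voters approving more than one candidate,
# and the mean number of approvals per ballot.
multi_share = sum(len(b) > 1 for b in ballots) / len(ballots)
mean_approvals = sum(len(b) for b in ballots) / len(ballots)

print(f"{multi_share:.0%} approved more than one candidate")
print(f"{mean_approvals:.2f} approvals per ballot")
```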

Tell Me How You Really Feel

It’s easy for someone to look at these voting method results and then ask, “So how much support did everyone really deserve?” And that’s a fair question.

The way we addressed this was by asking each respondent to indicate how much they wanted a particular candidate elected on a scale from zero to five, inclusive. We asked them not to consider viability. This was our honest assessment scale—a control measure, if you will. The figure above is the result of that question.

Indeed—and perhaps unsurprisingly given the agreement between methods—we see Sanders doing best under that honest assessment. That’s good here. It means all the voting methods were right to choose him as the winner.

For fun, say we took the honest assessment bar chart and superimposed it over the results for the different voting methods. If we then asked ourselves which voting method matched best against the honest assessment, we’d have a clear answer. That would be approval voting. 

The only mismatch here would be that approval voting missed some of the support for Steyer, particularly relative to Bloomberg. Still, it picked up on that support far better than RCV and our choose-one method did. By contrast, the other voting methods disagreed repeatedly with this honest assessment in both the amount and order of candidates’ support.

Candidate profiles using this honest assessment measure can also be interesting. For instance, you can see how divisive a particular candidate is based on their distribution of ratings. See how your candidates did below.

There’s another kind of control measure that we can take. In voting theory, there’s this concept of a Condorcet winner. That’s a candidate who can beat every other candidate head-to-head. If they did a round-robin tournament with every other candidate, that candidate would go undefeated. This beats-all winner doesn’t always exist, but in many cases they do. And that candidate existed here—Bernie Sanders.

To do this, we asked respondents to rank all the candidates from favorite to least favorite, again asking them to be honest and not consider viability. This let us see how each candidate would fare in head-to-head matchups and gave each candidate a pairwise win/loss tally. Ultimately, we’re provided with a kind of honest Condorcet winner, independent of voting method tactics.
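The pairwise tally works like a round-robin tournament. Here is a sketch on hypothetical honest rankings (favorite first), assuming every ballot ranks all candidates:

```python
# Pairwise (Condorcet-style) win tally from hypothetical honest rankings.
from itertools import combinations

rankings = [
    ["Sanders", "Warren", "Biden"],
    ["Sanders", "Biden", "Warren"],
    ["Warren", "Sanders", "Biden"],
]

candidates = rankings[0]
wins = {c: 0 for c in candidates}
for a, b in combinations(candidates, 2):
    # a beats b head-to-head if a majority rank a above b.
    a_over_b = sum(r.index(a) < r.index(b) for r in rankings)
    winner = a if a_over_b * 2 > len(rankings) else b
    wins[winner] += 1

# A Condorcet winner beats every other candidate pairwise.
condorcet = [c for c in candidates if wins[c] == len(candidates) - 1]
print(wins, condorcet)  # Sanders goes undefeated in this toy example
```

Sorting candidates by their pairwise win count gives the same kind of ordering the article compares against the honest assessment scores.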

This head-to-head win tally agreed well with the ordering of candidates’ average scores on the honest assessment scale. We see the candidates’ ascending win order matched the candidates’ honest assessment score order.

Comparing this ordering to the voting methods’ ordering, both RCV and our choose-one method erred in multiple places. They both placed Biden ahead of Buttigieg whereas the honest assessment and the head-to-head comparisons placed Buttigieg ahead of Biden. Both RCV and the choose-one method also downplayed Klobuchar by two spots and misplaced Steyer’s position with Bloomberg’s.

Approval voting didn’t get away totally error-free either, but it made only one mistake and wasn’t nearly as far off as the others. Like the choose-one method and RCV, it missed the ordering for Steyer by one spot: according to the honest assessments, Steyer should have been ahead of Bloomberg.

The Takeaway

The takeaway from this poll is similar to the takeaway we had from the November primary poll. Voters want to support more candidates, and when you let them do that, you get more accurate results.

Instead, voters were forced to use the worst voting method possible for the primaries. This choose-one method not only failed to capture the support of candidates beyond the top two, but it also missed how close the support was between Sanders and Warren.

The other takeaway is that while RCV didn’t do quite as badly as our choose-one method, it still fell short of approval voting. Approval voting measured candidates’ support the best of all the methods. Compared to the honest assessment measures, it missed only some of the support of one candidate, Steyer—a spot where the other voting methods did even worse.

Unfortunately, our current primaries will continue to not only use the worst voting method but also add further distortion with the use of delegates. These delegates are assigned in a kind-of-but-not-really-proportional way that can further vary based on the whims of individual states.

In some ways, the addition of delegates feels like we’ve not only agreed to use the worst voting method there is, but we’ve also agreed to make it just a bit worse by adding another distortion.

Add on top of that the staggered way we carry out primaries, which means votes are wasted on candidates who drop out before later states vote. Approval voting, and even RCV, would address that by letting voters have backups. All of this is a direct consequence of using a voting method that doesn’t let us provide information beyond a single candidate.

The question now becomes whether parties will move towards a fairer, more representative, and much simpler answer given that one clearly exists. That answer is approval voting.

More about the data: Polling was conducted online from February 25–27, 2020. Using its Dynamic Online Sampling technology to attain a sample reflective of likely Democratic primary voters, Change Research polled 821 respondents nationally. All voters say they identify as a Democrat or Independent and have a 50-50 likelihood or greater of participating in their state’s primary or caucus. Post-stratification weights were made on age, gender, region, race, county density, ideology, and 2016 primary vote.

Aaron Hamlin is a co-founder of CES, and was its Executive Director from 2011 to 2023.