Electronic voting systems, introduced en masse after high-profile problems with traditional voting systems in Florida during the 2000 presidential election, were designed to quell fears about accuracy. Unfortunately, those concerns continue to permeate political conversation. The Emergency Assistance for Secure Elections Act of 2008, recently introduced by Rep. Rush Holt (D-NJ), proposes government funding to help jurisdictions that use electronic voting switch to systems that produce a paper trail. But many experts say that a paper trail alone can’t solve the problem.
Ben Bederson, an associate professor at the Human-Computer Interaction Lab at the University of Maryland, was part of a team that conducted a five-year study of voting-machine technology. Bederson says that machines should be evaluated on qualities beyond security, including usability, reliability, accessibility, and ease of maintenance. For example, in a 2006 Florida congressional election, some voters were uncertain whether touch-screen machines had properly recorded their votes, especially after 18,000 ballots in Sarasota County were marked “No vote” by the machines. “Security, while important, happens to be one of those places where voting machines actually have not proven to fail,” Bederson says. “However, in many other ways, they have failed dramatically, especially [regarding] usability. The original Florida problem was primarily a usability issue.” (Among the problems in Florida in 2000 was the case of Palm Beach County, where a ballot design that listed candidates in two columns confused some voters; the layout led some people to mistakenly vote for Patrick Buchanan when they intended to vote for Al Gore.)

Bederson’s team, which included researchers from the University of Maryland, the University of Rochester, and the University of Michigan, focused particularly on usability. The team evaluated electronic voting systems built by Diebold, Election Systems and Software, Avante Voting Systems, Hart InterCivic, and Nedap Election Systems, as well as a prototype built by Bederson himself.
In the study, participants were told to vote for particular candidates in mock elections. The researchers then compared the results recorded on the machines with the voters’ intentions. Bederson says that even for the simplest task, voting in one presidential race on a single screen, participants had an error rate of around 3 percent. When the task became more complicated, such as when voters were asked to change their selection from one candidate to another, the error rate increased to between 7 and 15 percent, depending on the system. Bederson cautions that the error rates observed in the study may not translate directly into error rates for actual votes on actual machines, but he says the findings are still cause for concern, considering how close some recent elections have been. In one of the group’s mock elections, recording errors caused different candidates to win the race depending on which machine was used. “As to whether errors are biased, the answer in general is that it depends on the specific usability problem,” Bederson says.
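To make the comparison concrete, the kind of measurement the researchers describe can be sketched as counting the ballots on which a machine’s recorded choice differs from the voter’s stated intent. The snippet below is a minimal illustration with invented data and function names; it is not the study’s actual methodology, dataset, or code.

```python
def error_rate(intended, recorded):
    """Fraction of ballots where the recorded vote differs from the voter's intent.

    Hypothetical helper for illustration only; both arguments are parallel
    lists with one entry per mock-election participant.
    """
    assert len(intended) == len(recorded), "one recorded vote per ballot"
    mismatches = sum(1 for i, r in zip(intended, recorded) if i != r)
    return mismatches / len(intended)

# Invented example data: five participants, one mis-recorded ballot.
intended = ["Gore", "Gore", "Bush", "Gore", "Bush"]
recorded = ["Gore", "Bush", "Bush", "Gore", "Bush"]

print(f"Error rate: {error_rate(intended, recorded):.0%}")  # → Error rate: 20%
```

With per-system lists like these, the study’s observation that error rates ranged from roughly 3 percent to 15 percent depending on the task and the machine amounts to computing this fraction separately for each system and task.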