Voting with (Little) Confidence
Experts say that when it comes to voting machines, usability issues should be as much of a concern as security.
Electronic voting systems, introduced en masse following high-profile problems with traditional voting systems in the state of Florida during the 2000 presidential election, were designed to quell fears about accuracy. Unfortunately, those concerns continue to permeate political conversation. The Emergency Assistance for Secure Elections Act of 2008, introduced recently by Rep. Rush Holt (D-NJ), proposes government funding for jurisdictions that use electronic voting to switch to systems that produce a paper trail. But many experts say that a paper trail alone can't solve the problem.
Ben Bederson, an associate professor at the Human-Computer Interaction Lab at the University of Maryland, was part of a team that conducted a five-year study on voting-machine technology. Bederson says that machines should be evaluated for qualities beyond security, including usability, reliability, accessibility, and ease of maintenance.

For example, in a 2006 Florida congressional election, some voters were uncertain whether touch-screen machines had properly recorded their votes, especially after 18,000 ballots in Sarasota County were marked "No vote" by the machines. "Security, while important, happens to be one of those places where voting machines actually have not proven to fail," Bederson says. "However, in many other ways, they have failed dramatically, especially [regarding] usability. The original Florida problem was primarily a usability issue." (Among the problems in Florida in 2000 was the case of Palm Beach County, where some voters were confused by a ballot design that listed candidates in two columns. The confounding layout led some people to mistakenly vote for Patrick Buchanan when they intended to vote for Al Gore.)

Bederson's team, which included researchers from the University of Maryland, the University of Rochester, and the University of Michigan, focused particularly on usability. The researchers evaluated electronic voting systems built by Diebold, Election Systems and Software, Avante Voting Systems, Hart InterCivic, and Nedap Election Systems, as well as one prototype built by Bederson himself.
In the study, participants were told to vote for particular candidates in mock elections. The researchers then compared the results recorded on the machines with the voters' intentions. Bederson says that even for the simplest task, voting in one presidential race on a single screen, participants had an error rate of around 3 percent. When the task became more complicated, such as when voters were asked to change their selection from one candidate to another, the error rate increased to between 7 and 15 percent, depending on the system. Bederson cautions that the error rates observed in the study may not translate directly into error rates for actual votes on actual machines, but he says the results are still cause for concern, considering how close some recent elections have been. Bederson's group recorded one test vote in which the errors caused different candidates to win a race depending on which machine was used. "As to whether errors are biased, the answer in general is that it depends on the specific usability problem," Bederson says.
The prototype that Bederson designed, which has a touch-screen interface that makes it easy for users to zoom in and out to view the full ballot or a specific race, held its own in the comparison, a fact that Bederson takes as a sign that commercial machines need to be better designed. "This is a strong indication that the other systems need improvement," he says, noting that the prototype, which he had not studied very extensively before testing it with the other machines, had the lowest error rate for the simple task. Bederson says that the purpose of the prototype was to give the researchers two ways of studying the usability problem: by building a system from scratch, and by testing existing systems.
In spite of the usability problems, Bederson says, voters often seemed to like the touch-screen systems. He believes that the problems can and should be fixed, and that, if systems are tested and evaluated more thoroughly before being deployed, it should be possible to get the benefits of touch screens without the security and usability pitfalls.
Ted Selker, an associate professor at the MIT Media Lab, who is currently working on a voting project conducted at MIT and the California Institute of Technology, says that electronic machines can be useful because they provide voters with better feedback during the process. For example, completion meters can help voters see when they have missed a section of the ballot. However, Selker also notes many problems with electronic machines' security and usability, including ballot design. Anything that makes the process more complicated can cause votes to be gained or lost accidentally, he says. Also, adding a paper trail helps only if poll workers know how to handle the paper properly, he says, pointing out that the paper produced must itself be protected from tampering. Selker says that he thinks the voting process could be improved, no matter what system is used, if administrators focus on training poll workers both to help people use the machines and to correctly handle sensitive materials, such as a paper trail.
Whether electronic voting machines are under scrutiny for usability or security, many experts say that their design flaws call for a reevaluation of the devices. Tadayoshi Kohno, an assistant professor of computer science at the University of Washington, who has studied the security of several electronic systems, says, "My feeling of the electronic-voting community is that we started walking down a dark alley, and we know that it's very dangerous. We know that at the end of the alley is a safe place. As a philosophical question, I have to ask, should we continue going down this dark alley, or should we step back and figure out some other way we want to go to safety?"