Opinion

Software that monitors students during tests perpetuates inequality and violates their privacy

The coronavirus pandemic created a surge in demand for exam proctoring tools. Here’s why universities should stop using them.
August 7, 2020
Photo by Iris Wang on Unsplash

The coronavirus pandemic has been a boon for the test proctoring industry. About half a dozen companies in the US claim their software can accurately detect and prevent cheating in online tests. Examity, HonorLock, Proctorio, ProctorU, Respondus and others have rapidly grown since colleges and universities switched to remote classes.

While there’s no official tally, it’s reasonable to say that millions of algorithmically proctored tests are happening every month around the world. Proctorio told the New York Times in May that business had increased by 900% during the first few months of the pandemic, to the point where the company proctored 2.5 million tests worldwide in April alone.

I'm a university librarian and I've seen the impacts of these systems up close. My own employer, the University of Colorado Denver, has a contract with Proctorio.

It’s become clear to me that algorithmic proctoring is a modern surveillance technology that reinforces white supremacy, sexism, ableism, and transphobia. The use of these tools is an invasion of students’ privacy and, often, a civil rights violation.

If you’re a student taking an algorithmically proctored test, here’s how it works: When you begin, the software starts recording your computer’s camera, audio, and the websites you visit. It measures your body and watches you for the duration of the exam, tracking your movements to identify what it considers cheating behaviors. If you do anything the software deems suspicious, it will alert your professor to view the recording and provide them with a color-coded probability of your academic misconduct.

Depending on which company made the software, it will use some combination of machine learning, AI, and biometrics (including facial recognition, facial detection, or eye tracking) to do all of this. The problem is that facial recognition and detection have proven to be racist, sexist, and transphobic over, and over, and over again.
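To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of suspicion-scoring heuristic such software might use. Every signal name, weight, and threshold is invented for illustration; this is not Proctorio’s or any other vendor’s actual algorithm.

```python
# Hypothetical sketch of an automated proctoring "suspicion score."
# The signals, weights, and thresholds below are invented for illustration;
# no vendor's real scoring code is represented here.

from dataclasses import dataclass

@dataclass
class FrameSignals:
    face_detected: bool      # did the face detector find a face in this frame?
    gaze_on_screen: bool     # did the eye tracker judge the gaze to be on screen?
    audio_level: float       # ambient noise level, 0.0 (silent) to 1.0 (loud)
    person_in_frame: bool    # is anyone visible to the webcam at all?

def suspicion_score(frames: list[FrameSignals]) -> float:
    """Return a 0-1 score from crude per-frame heuristics (illustrative weights)."""
    if not frames:
        return 0.0
    flags = 0.0
    for f in frames:
        if not f.face_detected:
            flags += 1.0     # face detection fails more often on darker skin tones
        if not f.gaze_on_screen:
            flags += 0.5     # looking away is treated as a "cheating behavior"
        if f.audio_level > 0.6:
            flags += 0.5     # a child or roommate talking raises the score
        if not f.person_in_frame:
            flags += 1.0     # a bathroom or medication break raises the score
    return min(1.0, flags / (3.0 * len(frames)))

def color_code(score: float) -> str:
    """Map the score to the kind of color-coded label shown to an instructor."""
    if score < 0.3:
        return "green"
    if score < 0.6:
        return "yellow"
    return "red"
```

Even a toy heuristic like this shows how ordinary circumstances (a face detector failing in poor lighting, a child making noise, a bathroom break) feed directly into a higher “misconduct” score.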

In general, technology has a pattern of reinforcing structural oppression like racism and sexism. Now these same biases are showing up in test proctoring software that disproportionately hurts marginalized students.

A Black woman at my university once told me that whenever she used Proctorio's test proctoring software, it always prompted her to shine more light on her face. The software couldn’t validate her identity and she was denied access to tests so often that she had to go to her professor to make other arrangements. Her white peers never had this problem.

Similar kinds of discrimination can happen if a student is trans or non-binary. But if you’re a white cis man (like most of the developers who make facial recognition software), you’ll probably be fine.

Students with children are also penalized by these systems. If you’ve ever tried to answer emails while caring for kids, you know how impossible it can be to get even a few uninterrupted minutes in front of the computer. But several proctoring programs will flag noises in the room or anyone who leaves the camera’s view as nefarious. That means students with medical conditions who must use the bathroom or administer medication frequently would be considered similarly suspect.

Beyond all the ways that proctoring software can discriminate against students, algorithmic proctoring is also a significant invasion of privacy. These products film students in their homes and often require them to complete “room scans,” which involve using their camera to show their surroundings. In many cases, professors can access the recordings of their students at any time, and even download these recordings to their personal machines. They can also see each student’s location based on their IP address.

Privacy is paramount to librarians like me because patrons trust us with their data. After 9/11, when the Patriot Act authorized federal law enforcement to access library patron records in the search for terrorists, many librarians started using software that deleted a patron’s record once a book was returned. Products that violate people’s privacy and discriminate against them go against my professional ethos, and it’s deeply concerning to see such products eagerly adopted by institutions of higher education.

This zealousness would be slightly more understandable if there were any evidence that these programs actually did what they claim. To my knowledge, there isn’t a single peer-reviewed or controlled study showing that proctoring software effectively detects or prevents cheating. Given that universities pride themselves on making evidence-based decisions, this is a glaring oversight.

Fortunately, there are movements underway to ban proctoring software and ban face recognition technologies on campuses, as well as congressional bills to ban the US federal government from using face recognition. But even if face recognition technology were banned, proctoring software could still exist as a program that tracks the movements of students’ eyes and bodies. While that might be less racist, it would still discriminate against people with disabilities, breastfeeding parents, and people who are neuroatypical. These products can’t be reformed; they should be abandoned.

Cheating is not the threat to society that test proctoring companies would have you believe. It doesn’t dilute the value of degrees or degrade institutional reputations, and students aren’t trying to cheat their way into being your surgeon. Technology didn’t invent the conditions for cheating, and it won’t be what stops it. The best thing we in higher education can do is to start with the radical idea of trusting students. Let’s choose compassion over surveillance.

Shea Swauger is an academic librarian and researcher at the University of Colorado Denver.
