
User Error Compromises Many Encrypted Communication Apps

Apps that aim to let you talk securely may be made less secure by users who screw up the authentication process.
December 14, 2015

Smartphone apps and special phones that aim to ensure secure communication may often find their security compromised by the users themselves, according to recent research.

The apps, which include RedPhone and Signal, may ask people calling or texting each other to verbally compare a short string of words they see on their screens (often referred to as a checksum or short authentication string) to make sure a new communication session hasn’t been breached by an intruder. The idea is that if a call’s security is compromised, these words won’t match up.
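The idea behind such a checksum is that both ends of the call derive it from the cryptographic key material they negotiated; an eavesdropper who intercepts the session leaves each side with different keys, and therefore different words. As a rough illustration (not Signal's or RedPhone's actual implementation, and with a tiny made-up word list standing in for the large standardized lists real apps use), the derivation might look like this:

```python
import hashlib

# Illustrative word list -- real apps draw from much larger,
# standardized lists; these eight words are invented for the example.
WORDS = ["acid", "bravo", "cedar", "delta", "ember", "frost", "gale", "harbor"]

def short_auth_string(session_key: bytes, num_words: int = 2) -> str:
    """Derive a short authentication string from shared session material.

    Both endpoints run this on the keys they negotiated. If an intruder
    sits in the middle, each side ends up with different key material,
    so the words the two callers read aloud will not match.
    """
    digest = hashlib.sha256(session_key).digest()
    # Map one byte of the digest to one word per position.
    chosen = [WORDS[digest[i] % len(WORDS)] for i in range(num_words)]
    return " ".join(chosen)
```

The security of the spoken comparison then rests entirely on the humans actually checking the words, which is exactly the step the study probed.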

To test how well this works, researchers at the University of Alabama at Birmingham set up a study that mimicked a cryptophone app. They had participants use a Web browser to make a call to an online server, then listen to a random two- or four-word sequence and determine whether it matched the words they saw on the computer screen in front of them. The participants were also asked to verify whether the voice they heard was the same as one they’d heard previously reading a short story.

The researchers found that study participants frequently accepted calls even when they heard the wrong sequence of words, and often rejected calls when the sequence was spoken correctly. Beyond that, the researchers say that using a four-word checksum instead of a two-word one seemed to decrease security in practice, even though a longer checksum should, in principle, make a successful attack exponentially less likely.

The researchers presented their work in a paper this month at a computer security conference in Los Angeles.  

The study included 128 people, and Maliheh Shirvanian, the paper’s lead author and a graduate student at the University of Alabama at Birmingham, says that participants accepted an incorrect two-word string 30 percent of the time if it came from a voice properly verified as being one they’d heard previously. They also rejected two-word strings that were spoken correctly about 22 percent of the time.

In addition, the researchers noticed that participants accepted four-word strings that were incorrect about 40 percent of the time, and rejected ones that were correct 25 percent of the time.

Justin Troutman, a cryptographer who works at the encrypted-search startup Kryptnostic and has focused his work on the intersection of cryptography and user experience, says one reason people might accept incorrect checksums is that they consist of random words, rather than a sequence you’d see in a sentence. Users might tune out a bit when hearing them, especially if they recognize the speaker’s voice on the other end. With a higher number of words, they might tune out those in the middle, he adds.

In hopes of improving security, the researchers say they’re now working on a new study that considers how to use software to compare checksums, particularly longer ones, at the start of a secure call. As the researchers envision it, the participants in a call would speak their words aloud. Then software would transcribe the words and compare the two transcriptions. This way, the users would simply be validating that the voice on the other end sounds familiar, assuming they already know what the person they are talking to sounds like (which is, of course, not always going to be the case).
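A minimal sketch of that automated comparison, under the assumption that each endpoint runs speech-to-text on the words spoken during the call (the transcription step itself and the 0.9 similarity threshold are placeholders, not details from the paper):

```python
import difflib

def transcriptions_match(a: str, b: str, threshold: float = 0.9) -> bool:
    """Compare two speech-to-text transcriptions of the same spoken checksum.

    Speech recognition is imperfect, so rather than requiring exact
    equality, accept the pair if the normalized texts are similar
    enough. The threshold is an arbitrary illustrative value.
    """
    norm_a = " ".join(a.lower().split())
    norm_b = " ".join(b.lower().split())
    return difflib.SequenceMatcher(None, norm_a, norm_b).ratio() >= threshold
```

Offloading the word comparison to software would remove the step users performed worst in the study, leaving them only the more natural task of recognizing a familiar voice.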
