Protecting information has become one of the most important tasks in modern society. Many people have become used to memorizing passwords and PINs, sometimes of bewildering complexity. Others use biometric indicators for protection: fingerprints, irises, and the like can all help identify individuals.
But these systems are not perfect. One significant problem is the threat of coercion—being forced to reveal a password or place a finger on a fingerprint scanner.
Today, Max Wolotsky at Cal Poly Pomona and a couple of pals have come up with a solution that can determine whether an individual is being coerced and deny authentication as a result.
The system is simple in concept. Wolotsky and co’s idea is to use the body’s stress levels to determine whether an individual is being coerced in any way. And they do this by measuring the individual’s response to “chill” music they previously identified as relaxing.
Chill music is so-called because it provokes a shiver down the spine, a response that is similar to being cold. It is the physiological effects of this shiver that Wolotsky and co set out to measure by monitoring heartbeat and brain-wave patterns.
Their hypothesis is that these signals are impossible to fake and only possible to measure when the subject is relaxed. Any duress would result in a different signal.
To find out whether this is the case, the team asked five test subjects to choose their favorite piece of chill music and then monitored their heartbeat and brain waves while they listened.
In particular, the team focused on the moments within the music that trigger the “chill” response, on the assumption that this always occurs at the same point in the score. This section of the music—less than a minute long—then becomes the key to the authentication process.
The idea is that if the subject is relaxed, he or she can experience the “chill” in the future and reproduce the physiological signals associated with this.
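The matching step can be sketched in miniature. The article does not describe the paper’s actual features or classifier, so the template-correlation test and threshold below are illustrative assumptions only, not the Chill-Pass method itself:

```python
import math


def correlation(a, b):
    """Pearson correlation between two equal-length signal vectors
    (e.g., heart-rate or brain-wave features sampled during the
    chill segment of the music)."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)


def authenticate(template, live, threshold=0.9):
    """Accept only if the live physiological signal closely matches
    the template enrolled while the user was relaxed. The premise is
    that a stressed (coerced) user cannot reproduce the relaxed
    response, so the correlation falls below the threshold and
    authentication is denied. The threshold value is a placeholder."""
    return correlation(template, live) >= threshold
```

The design point is that failure is the safe outcome: under duress the signal simply does not match, so the system denies access rather than revealing anything.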
Indeed, the team carried out a number of tests and found that their subjects were able to pass the test with a 90 percent success rate.
There are some caveats, of course. The team was unable to test its subjects’ response under any kind of stress to simulate the kind of coercion that this test is designed to foil. “One reason we did not do this is because it is unethical to threaten test subjects in order to verify that our system is fully coercion-resistant, as it could leave subjects with permanent physical or psychological damage,” they say.
That’s a significant limitation. If the team hasn’t checked that it works in the conditions it is designed to operate under, how can it be sure it is secure? There are other potential problems, too. The information that might benefit from this kind of increased protection is likely to be hugely valuable, things like the launch codes for nuclear weapons, perhaps. (One of the authors works at Sandia National Laboratories, which is responsible for nuclear stockpile management.)
But urgent access to this kind of information might only be necessary in times of high stress, and this could invalidate the test. The thought of somebody trying to access the launch codes as World War III unfolds, but having to chill out beforehand, has something of a black comedy about it.
Nevertheless, developing coercion-resistant passwords is an important goal. Wolotsky and co have taken some tentative steps that others can build on.
Ref: http://arxiv.org/abs/1605.01072: Chill-Pass: Using Neuro-Physiological Responses to Chill Music to Defeat Coercion Attacks