Voice-control features designed to make PCs and smartphones easier to use, especially for people with disabilities, may also provide ways for hackers to bypass security protections and access the data stored on those devices.
Accessibility features are there for a good reason—they make it possible to control what’s happening on the graphical user interface without typing. But if they aren’t designed carefully, these features can be abused.
Researchers at Georgia Tech found that they could sidestep security protocols by using voice controls to enter text or click buttons. In a paper on the work, the researchers describe 12 ways to attack phones with Android, iOS, Windows, or Ubuntu Linux operating systems, including some that would not require physical access to the device. The paper will be presented next week at the CCS’14 conference in Scottsdale, Arizona.
One attack, which the researchers demonstrated on a laptop, showed how a piece of malware could use Windows Speech Recognition to talk its way into running commands that normally require a higher level of privilege.
Another demonstration showed how malware could attack a smartphone. It exploits the fact that Google Now, a voice-controlled assistant that comes with the Android operating system, can use a voiceprint in lieu of a typed passcode. The researchers show how an attacker might record the authentication phrase on a Moto X phone and then use a generic text-to-speech program to issue other commands as if it were the user.
“This is an important wake-up call for major OS vendors: Microsoft, Apple, and the Linux community,” says Radu Sion, director of the National Security Institute at Stony Brook University.
Wenke Lee, the Georgia Tech computer scientist who led the work, says the problems appear to be the result of incorporating speech recognition and other features into phones late in the development cycle.
“I think there are fundamental issues here that are hard to fix,” says Lee. “These features were added after the OS had been implemented, so these features don’t have the same kinds of security checks.”
Hackers could exploit the vulnerabilities remotely to initiate or escalate an attack on a device, Lee says. Although a phone that starts speaking to itself would be fairly obvious to anyone nearby, a malicious app could monitor the phone's motion data and wait until the device had been still for a long period, a sign that the user was probably not around.
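The stillness check Lee describes could be as simple as watching the variance of accelerometer readings. The sketch below is a hypothetical illustration of that idea, not code from the paper; the threshold, window size, and function names are all assumptions made for clarity.

```python
import statistics

# Illustrative values only: how flat the accelerometer signal must be,
# and how many consecutive samples must be that flat, before the device
# is treated as unattended.
STILL_THRESHOLD = 0.05   # maximum variance, in (m/s^2)^2
WINDOW = 60              # number of recent samples that must be quiet

def is_unattended(magnitudes, threshold=STILL_THRESHOLD, window=WINDOW):
    """Return True if the last `window` accelerometer magnitudes are nearly constant,
    suggesting the phone has been sitting motionless (e.g., on a desk)."""
    if len(magnitudes) < window:
        return False
    recent = magnitudes[-window:]
    return statistics.pvariance(recent) < threshold

# A phone resting on a desk reads close to gravity (~9.81 m/s^2) with tiny noise.
still_readings = [9.81 + 0.001 * (i % 3) for i in range(60)]

# A phone carried while walking shows large swings in magnitude.
moving_readings = [9.81 + (2.0 if i % 2 else -2.0) for i in range(60)]
```

In this toy model, `is_unattended(still_readings)` returns True while `is_unattended(moving_readings)` returns False; a real attack would read the platform's sensor APIs rather than a precomputed list.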