Computers Could Talk Themselves into Giving Up Secrets
Malware might use a voice synthesizer to bypass some security controls, researchers say.
Voice control and other assistive technologies are on hundreds of millions of PCs and smartphones.
Voice-control features designed to make PCs and smartphones easier to use, especially for people with disabilities, may also provide ways for hackers to bypass security protections and access the data stored on those devices.
Accessibility features are there for a good reason—they make it possible to control what’s happening on the graphical user interface without typing. But if they aren’t designed carefully, these features can be abused.
Researchers at Georgia Tech found that they could sidestep security protocols by using voice controls to enter text or click buttons. In a paper on the work, the researchers describe 12 ways to attack devices running Android, iOS, Windows, or Ubuntu Linux, including some that would not require physical access to the device. The paper will be presented next week at the CCS’14 conference in Scottsdale, Arizona.
In one attack, demonstrated on a laptop, a piece of malware used Windows Speech Recognition to talk its way into running commands that normally require a higher level of privilege.
Another demonstration showed how malware could attack a smartphone. It exploits the fact that Google Now, the voice-controlled assistant that comes with the Android operating system, can accept a voiceprint in lieu of a typed passcode. The researchers showed how an attacker might record the authentication phrase on a Moto X phone and then use a generic text-to-speech program to issue other commands as if it were the user.
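The flaw the replay attack exploits is that a voiceprint check cannot distinguish a recording from the live speaker. The toy simulation below illustrates that logic; all class and phrase names are hypothetical, and a real attack would play audio through the device speaker rather than call functions.

```python
# Illustrative simulation of the voiceprint replay attack (hypothetical
# names; this models the trust flaw, not a real assistant's internals).

class VoiceAssistant:
    """Toy assistant that unlocks when it hears the enrolled phrase."""

    def __init__(self, enrolled_phrase):
        self.enrolled_phrase = enrolled_phrase
        self.unlocked = False
        self.executed = []

    def hear(self, audio):
        # The voiceprint check is modeled as simple phrase matching:
        # the assistant cannot tell a recording from the real user.
        if audio == self.enrolled_phrase:
            self.unlocked = True
        elif self.unlocked:
            self.executed.append(audio)


assistant = VoiceAssistant("OK Google")
recorded_phrase = "OK Google"        # captured earlier from the victim
assistant.hear(recorded_phrase)      # replayed recording unlocks the device
assistant.hear("read my messages")   # command issued via text-to-speech
print(assistant.executed)            # → ['read my messages']
```

Because authentication and command execution share the same audio channel, anything that can reproduce the unlock phrase inherits the user's authority.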
“This is an important wake-up call for major OS vendors: Microsoft, Apple, and the Linux community,” says Radu Sion, director of the National Security Institute at Stony Brook University.
Wenke Lee, the Georgia Tech computer scientist who led the work, says the problems appear to be the result of incorporating speech recognition and other features into phones late in the development cycle.
“I think there are fundamental issues here that are hard to fix,” says Lee. “These features were added after the OS had been implemented, so these features don’t have the same kinds of security checks.”
Hackers could exploit the vulnerabilities remotely to initiate or escalate an attack on a device, Lee says. Although a phone that starts speaking to itself could be fairly obvious to the user, a malicious app could keep track of motion data and wait until the phone was not moving for a long period, indicating that the user was probably not nearby, Lee says.
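The wait-until-idle strategy Lee describes amounts to watching accelerometer readings for a stretch of low variance. A minimal sketch of that idea, with invented parameter values and simulated sensor data standing in for real accelerometer access:

```python
# Hypothetical sketch of the "wait until the phone is still" logic.
# Threshold and window size are invented; real malware would read the
# device's accelerometer rather than a list of numbers.
import statistics

def phone_is_idle(samples, window=10, threshold=0.01):
    """Return True when the last `window` accelerometer magnitudes
    vary less than `threshold`, suggesting the phone is stationary
    and the user is probably not nearby."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    return statistics.pvariance(recent) < threshold

# Simulated readings (m/s^2): handling, then the phone lying flat.
readings = [9.7, 10.2, 9.4, 10.8, 9.9] + [9.81] * 10
print(phone_is_idle(readings))  # → True
```

Only once this check passes would the malware start speaking commands aloud, minimizing the chance the user hears the phone talking to itself.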