Security vulnerabilities and spyware are now a serious problem for smart-phone users.
In July, for example, Citigroup announced that its mobile banking app for the iPhone inappropriately saved confidential information, including the user's account numbers and PIN, unencrypted in a hidden file that other programs running on the user's iPhone could read. The account numbers and PIN were also copied to the user's desktop computer when the iPhone was synced.
Things are somewhat better on Google's Android platform, thanks to the operating system's underlying security model. On the iPhone, an application can access data stored by other apps. Android applications, by contrast, must specify which "permissions" they need. Permissions can include the ability to read the entries in the user's address book, to determine the location of the device using GPS, and to make phone calls. Permissions are declared in a special file inside each application and shown to the user when the application is installed; they are also visible in the Android application control panel. Android currently supports 114 different permissions, which you can see on the developer website.
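Concretely, a developer lists each permission in the application's manifest file, and Android refuses the corresponding capability to any app that has not declared it. A hypothetical manifest fragment for a wallpaper app that wants contacts, location, and network access might look like this (the permission names are real Android constants; the package name is invented):

```xml
<!-- Illustrative AndroidManifest.xml fragment; package name is hypothetical. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.wallpaper">
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```

At install time, the user is shown this list and can decide whether a wallpaper app really needs, say, fine-grained location.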
Android’s permissions system doesn’t prevent apps from stealing your data, or performing other malicious actions–it simply makes it easier to find the apps that are engaged in this practice. But it’s becoming increasingly clear that this isn’t always enough.
This past July, for example, researchers discovered that a free "wallpaper" program for Android called Jackeey collected personal information, including the user's phone number, voice-mail information, and carrier data, and sent it to a website in China. Then in August, a free game called Tap Snake was found to be a tool for covertly monitoring a person's location. Tap Snake runs as a background service and sends the location of a phone to a website; the person who installed the game on that phone could then monitor the phone's location with another program called GPS Spy.
Tap Snake doesn’t violate the Android security model: the program requires the ability to run as a service, monitor GPS position, and communicate over the Internet. But there are two problems with the Android security model. The first is granularity: although Android programs are required to tell the user which permissions they use, that doesn’t explain what the apps actually do with these permissions. The second problem is engagement: the model requires that somebody use this information and take responsibility for the user’s security.
A review of a few Android apps highlights this issue. A few weeks ago, an application called Rare Black Wallpapers was recommended to me as a way of saving battery power. I noticed that the app required the ability to modify or delete SD card contents, full Internet access, and the ability to read my phone's state and identity. Surprised, I e-mailed Hero Planet, the company that publishes the application, and asked why these permissions were needed. Hero Planet never answered, so I uninstalled the program.
Likewise, the program Salamander eBook Reader for Android requires permissions to determine your physical position, get full Internet access, and read the phone's state and identity. I e-mailed Feel Social, the publisher, but got no response. Feel Social's website looks like it has been abandoned, and nobody answered the company's phone when I called. But the app is still in the Google Marketplace; what is it doing with my GPS information and full network access? I uninstalled it as well.
Another program that requires more permissions than I thought appropriate is Documents to Go, a program that lets me read Microsoft Office files with my Android phone. This program requires not just the ability to read and write to the phone’s SD card, but also full Internet access, and the ability to read the phone state and identity. It also starts automatically when the phone boots. I e-mailed DataViz, the program’s creator, and this time I got a response.
Laura Caiafa, technical support manager at DataViz, wrote back telling me that Documents to Go on Android requires the ability to read the phone’s identity because it links product registrations to the phone’s IMEI/MEID number (a kind of a serial number). “Additionally, we check the phone’s network state for roaming prior to allowing the user to register, which also requires this permission.”
Full Internet access is required to register the application (which Documents to Go does directly, rather than through the Web browser). Finally, the program starts automatically when the phone is switched on.
One problem with this manual survey approach is that it’s incredibly time-consuming. A second problem is that you can’t tell what the phone is doing with these permissions–is it sending my confidential data to crooks, is it using my position and Internet access to show me location-based advertisements, or is it just registering my application so I can get free updates?
A working solution to this problem will be presented on Wednesday at the Usenix Symposium on Operating Systems Design and Implementation (OSDI) in Vancouver, Canada. Called TaintDroid, the program uses an approach called data labeling or tainting to monitor the flow of personal information through a running Android phone.
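TaintDroid itself instruments Android's Dalvik virtual machine, which is well beyond the scope of a short example. But the underlying idea of taint tracking can be illustrated with a toy sketch: values read from a sensitive source (GPS, the phone's IMEI) carry "taint" labels, operations on those values propagate the labels, and a sink such as a network send checks for taint before data leaves the device. The class and function names below are invented for illustration; this is not TaintDroid's design.

```python
# Toy illustration of dynamic taint tracking, loosely in the spirit of
# TaintDroid. This is NOT TaintDroid's implementation; all names here
# are hypothetical.

class Tainted:
    """A value paired with a set of taint labels (e.g. 'GPS', 'IMEI')."""
    def __init__(self, value, labels=()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # Taint propagates: combining two values yields a result that
        # carries the union of both operands' labels.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value,
                           self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

def network_send(data):
    """A 'sink': flag tainted data before it leaves the device."""
    if isinstance(data, Tainted) and data.labels:
        return f"ALERT: sending data tainted with {sorted(data.labels)}"
    return "ok: sent untainted data"

# A taint 'source': reading the GPS marks the value with a label.
location = Tainted("lat=40.0,lon=-75.0", labels={"GPS"})
message = Tainted("pos=") + location   # the label survives concatenation

print(network_send(message))   # tainted: triggers an alert
print(network_send("hello"))   # plain string: passes through
```

The real system works at a much finer grain, labeling data as it moves through variables, method calls, files, and interprocess messages, but the propagate-then-check-at-the-sink pattern is the same.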
Developed by researchers at Pennsylvania State University, Duke University, and Intel Labs, TaintDroid was used to analyze 30 different Android applications. The researchers made some very interesting discoveries. For example, although 21 of the 30 applications required permission to read the phone state and permission to communicate over the Internet, only two of the applications actually transmitted the device's phone number or unique code to a remote server. One of these transmits the information every time the phone boots, allowing the developer to know how many phones currently have the app installed, but also violating each user's privacy in a pretty significant way.
The researchers write in their paper that half of the applications they monitored transmitted the user’s “location data to third-party advertisement servers without requiring implicit or explicit user consent”–that is, without divulging this fact in the application’s End User License Agreement. “In some cases, location data was transmitted to advertisement servers even when no advertisement was displayed in the application.” You can download the paper from the researchers’ website.
Some commentators may argue that the OSDI paper is further proof that Apple’s policy of manually vetting each application in its app store is superior to the Android policy. The truth is, we just don’t know how many apps hidden in Apple’s App Store steal personal information because there is no easy way to audit the capabilities of programs that are available for download. Apple’s review process doesn’t evaluate the application’s source code, so it would be possible for a malicious developer (or even a single programmer inside an app development company) to sneak something through. At least with Android, the operating system tries to prevent apps from accessing each other’s data, and from using the Internet unless they request that permission.
Another factor in Android’s favor is its use of the Linux operating system, for which all the source code is available. The Penn State, Duke, and Intel researchers could develop TaintDroid because it is relatively easy to get under the hood of Android and install the necessary functionality. Although it might be possible to accomplish the same results on the iPhone, the reverse-engineering process would be dramatically more difficult.
But information about poor app behavior will not be enough to provide security to Android users. Users simply lack the vigilance and the knowledge to make use of the information that Android provides. Unless Google establishes policies for protecting the privacy of its mobile users, Android will increasingly be seen as a system that is plagued by security problems, even if the real reason for this is that we have a better idea of what’s happening on Android than on the other smart-phone platforms.