How Android Security Stacks Up

An Android phone’s approach to security is radically different from an iPhone’s–but is it better?

Today’s smart phones have all the speed, storage, and network connectivity of desktop computers from a few years ago. Because of this, they’re a treasure trove of personal information–and likely the next battleground for computer security.

Pattern recognition: The Android-powered Nexus One uses an “unlock pattern” that must be entered every time the phone’s screen is activated.

What makes smart phones attractive–the ability to customize them by downloading applications–is what makes them dangerous. Apps make the mobile phone a real computer, and Apple’s App Store has been a key factor in the phone’s success. But apps also make smart phones a target for cyber criminals.

Apple knows that it wouldn’t take more than a few malicious apps to tarnish the iPhone’s reputation. That’s why the App Store is a walled community. The only apps that get listed are those that have been approved by Apple. To get approved, developers must create a developer account and pay an annual fee. A team at Apple evaluates and approves each version of each application that is made available. Apple reportedly turns down roughly 10 percent of the applications submitted to the App Store because they steal personal data, contain “inappropriate content,” or are designed to help a user break the law.

Google has taken a fundamentally different approach to ensuring the security of smart phones running Android. Like Apple, Google runs a store, called the Android Marketplace, from which users can download applications. But unlike Apple, Google doesn’t evaluate applications before they are listed–anyone can upload one. What protects Android users from malicious applications is a security model based on “capabilities.”

Each Android app must tell the phone’s operating system what capabilities it requires. When you install the application, the operating system lists the capabilities that the application needs to run. You can then decide whether those capabilities are consistent with what the application claims it will do. For example, the TaxCaster Mobile application from Intuit requires only “full Internet access,” because it needs to take your input, send it to Intuit’s servers, and show you the results. The Slacker Radio application from Slacker, on the other hand, requires Bluetooth, full Internet access, modify/delete access to your SD card, the ability to change audio settings, the ability to read the identity of incoming phone calls, the ability to change Wi-Fi state, and the ability to prevent your phone from sleeping.
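Concretely, a developer declares each of these capabilities as a permission entry in the app’s AndroidManifest.xml file, and the installer turns that list into the prompt the user sees. Here is a minimal sketch of such a manifest; the package name and label are invented for illustration:

```xml
<!-- Hypothetical example: an app that declares only
     "full Internet access," as TaxCaster Mobile does. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.taxestimator">
    <!-- This line is what the installer shows the user as
         "full Internet access." -->
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="Example Tax Estimator" />
</manifest>
```

An app like Slacker Radio would simply carry a longer list of uses-permission lines, one per capability.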

The capabilities-based system has the advantage of being enforced by the operating system. There is simply no way for an application to do more than it says. It also doesn’t depend upon the vigilance of human screeners.
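The enforcement idea can be sketched in ordinary code: the operating system remembers the set of capabilities the user approved at install time and refuses any request outside that set. The class below is a simplified, hypothetical model, not real Android framework code (the real framework performs a roughly analogous check via methods such as Context.checkCallingOrSelfPermission):

```java
import java.util.Set;

// Simplified, hypothetical model of Android's capability enforcement.
// It illustrates one idea only: an application can never exercise a
// capability it did not declare when it was installed.
class CapabilityModel {
    private final Set<String> granted;

    CapabilityModel(Set<String> declaredAtInstall) {
        // The user approved exactly this list during installation.
        this.granted = declaredAtInstall;
    }

    // Returns true only if the capability was declared up front;
    // anything else is denied, no matter what the app attempts.
    boolean check(String permission) {
        return granted.contains(permission);
    }
}
```

Because the check happens in the operating system rather than in the app, there is nothing the app can do to widen its own privileges after installation.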

The problem with capabilities is that there is no way to be sure that an application will act appropriately with the trust that it’s given. For example, back in December a Web banking application was posted in the Android Marketplace that appeared to be for the First Tech Credit Union. It turned out that the application was fraudulent–just another phishing scam. Google removed the rogue app shortly after it was discovered, but it’s unclear how many people fell for the scam.

Capabilities can’t protect users from this kind of attack because the rogue application asked for the same privileges that a legitimate application would–that is, the ability to accept a person’s username and password and to communicate that information over the Internet with a remote server.

Another problem with the capability-based system is that it requires users to think carefully about security. Many users are unable to properly evaluate the risks of the software that they want to download and run–even when they suspect that the software might be malicious.

There are other important security differences between the iPhone and Android-based phones. Both can be set to automatically lock after a period of inactivity and require a passcode before they can be used again. But the iPhone can be set up to erase all of the data that it contains after 10 failed passcode attempts. The iPhone also supports remote wipe. Google’s Android has neither of these features, making the system fundamentally less secure. (A third-party application called Wave Secure offers some of these features, but I’ve found them to be poorly integrated with the Android system.)

Another important iPhone security advantage is a user-settable delay for the lock code. If you set an “unlock pattern” on an Android phone, you must provide that pattern every time you turn on the phone’s screen. With the iPhone, you can set a delay so that the unlock code does not need to be entered if the phone has been asleep for less than a chosen interval: one minute, five minutes, 15 minutes, one hour, or four hours. The shorter the interval, the more secure your data, of course. But being able to set the delay to five or even 15 minutes makes the feature far less onerous to actually use. With my Android phone, I am constantly entering the unlock pattern, even at the end of a one-minute phone call. It’s so annoying that I am seriously considering turning it off.

I wish that the iPhone had Android’s capabilities-based security architecture, because that extra layer of protection provides important security guarantees. But even without it, the iPhone’s range of security features makes it a better choice for people who need to keep sensitive information on their phone. That said, I’m hopeful that Google will make big improvements with the next release of the Android operating system.
