A New Way to Spot Malicious Apps

By tracing fraudulent reviews to identify malware in the Google Play store, researchers uncovered an insidious technique: some of these apps harass innocent users until they leave positive ratings of their own.

Malware is a constant threat for Android users downloading apps from the Google Play store. There are 2.7 million apps for people to choose from, and to its credit, Google has a system called Bouncer that looks for and removes malicious apps. But numerous malicious apps have slipped through this safety net.

Which is why Mahmudur Rahman and pals at Florida International University in Miami have developed a system called FairPlay, which searches for malicious behavior in the Google Play store in an entirely different way.

Instead of scanning the code for malicious software, FairPlay follows the trails that malicious users leave behind when fraudulently boosting their ratings. By following these trails, FairPlay can spot malicious activity that otherwise slips through Google’s security system.

Rahman and co base their new approach on a curious observation: users who post fraudulent reviews to boost the rankings of malicious apps tend to use the same account for lots of different apps. So once they are identified, they are easy to follow.

It’s easy to see why malicious users behave this way. To leave a review or rating on Google Play, users must have a Google account, register a mobile device to that account, and then install the app on that device.

That makes it hard to create lots of different accounts, so to keep their lives easy, malicious users tend to use just one. Rahman and co’s approach is to first identify malicious accounts and then map their activity.

They began by downloading the reviews and ratings associated with all apps newly uploaded to Google Play between October 2014 and May 2015. That’s nearly 90,000 apps and three million reviews.

They then used traditional antivirus tools, backed up by human experts in app fraud, to identify over 200 apps containing malware. This forms their “gold standard” data set of malicious apps. They also asked the experts to identify Google accounts responsible for generating fraudulent reviews, finding 15 accounts that had between them written reviews for over 200 fraudulent apps.

These 200 apps received a further 53,000 reviews between them. The team data-mined these reviews to find another 188 accounts that had each reviewed at least 10 of the fraudulent apps. “We call these guilt by association accounts,” say Rahman and co.
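To make the guilt-by-association idea concrete, here is a minimal sketch of that filtering step, assuming the reviews are available as simple (account, app) pairs; the function and variable names are illustrative, not taken from the paper.

```python
# A minimal sketch of the guilt-by-association step, assuming reviews are
# available as (account_id, app_id) pairs. The threshold of 10 apps follows
# the figure quoted above; everything else here is illustrative.
from collections import defaultdict

def guilt_by_association(reviews, seed_fraud_apps, threshold=10):
    """Return accounts that reviewed at least `threshold` known-fraudulent apps."""
    hits = defaultdict(set)
    for account_id, app_id in reviews:
        if app_id in seed_fraud_apps:
            hits[account_id].add(app_id)
    return {acct for acct, apps in hits.items() if len(apps) >= threshold}
```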

From all this fraudulent activity, they selected a set of 400 fraudulent reviews to train a machine-learning algorithm to spot others like them.
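The article does not spell out which classifier or features FairPlay uses, so the following is only a hedged sketch of how one might train a generic text classifier on a few hundred labeled reviews, assuming scikit-learn is available.

```python
# Illustrative only: a generic text classifier trained on reviews labeled
# fraudulent (1) or legitimate (0). FairPlay's actual features and model
# are not described in this article; scikit-learn is assumed here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_review_classifier(texts, labels):
    """texts: list of review strings; labels: 1 = fraudulent, 0 = genuine."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

# Example usage (hypothetical data):
# clf = train_review_classifier(training_texts, training_labels)
# suspicious = [r for r in new_reviews if clf.predict([r])[0] == 1]
```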

They also designed FairPlay to look at other potential indicators of malicious behavior, such as the number of permissions an app asks for and the way in which ratings appear over time, looking in particular for suspicious spikes in rating activity.
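As an illustration of that kind of temporal check, here is one simple, assumed way to flag suspicious spikes in daily rating counts; FairPlay’s actual temporal features are not detailed in the article.

```python
# A hedged sketch of spike detection: compare each day's rating count
# against the mean and standard deviation of the preceding window.
# Purely illustrative; not the paper's feature definition.
from statistics import mean, pstdev

def rating_spike_days(daily_counts, window=14, k=3.0):
    """daily_counts: ratings received per day, in date order.
    Returns indices of days whose count exceeds mean + k * std
    of the previous `window` days."""
    spikes = []
    for i in range(window, len(daily_counts)):
        past = daily_counts[i - window:i]
        mu, sigma = mean(past), pstdev(past)
        if daily_counts[i] > mu + k * sigma:
            spikes.append(i)
    return spikes
```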

Finally, they let the algorithm loose on the entire set of 90,000 newly released apps on Google Play.

The results make for interesting reading. “FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology,” say Rahman and co.

More significant, the algorithm uncovered an entirely new form of coercive attack that forces ordinary users to write positive reviews for malicious apps. “FairPlay enabled us to discover a novel, coercive campaign attack type, where app users are harassed into writing a positive review for the app, and install and review other apps,” say the team.

The campaign works by bombarding users with ads or otherwise making games difficult to play, then offering to remove the ads, unlock another level in a game, or add extra features once the user writes a positive review.

Rahman and co uncovered this behavior by data-mining the reviews. In a subset of 3,000 reviews, they found 118 that reported some level of coercion. For example, users wrote “I only rated it because i didn’t want it to pop up while i am playing,” or “Could not even play one level before i had to rate it [...] they actually are telling me to rate the app 5 stars.”
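A crude way to surface complaints like these is to scan review text for phrases that suggest coercion; the phrase list below is an assumption made for illustration, not the lexicon the authors used.

```python
# Illustrative only: a keyword scan for reviews that hint at being nagged
# or forced into rating an app. The 118 coercion reports quoted above came
# from the authors' own analysis; this phrase list is an assumption.
COERCION_PHRASES = (
    "had to rate", "forced me to rate", "only rated it because",
    "rate the app 5 stars", "rate it 5 stars", "to remove the ads",
)

def flag_coerced_reviews(reviews):
    """reviews: iterable of review strings. Returns those matching a phrase."""
    return [r for r in reviews
            if any(p in r.lower() for p in COERCION_PHRASES)]
```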

That reveals an entirely new kind of coercive fraud attack that Google’s Bouncer does not spot.

The question now is: what next? Identifying this kind of behavior makes it easier to crack down on. But in this cat-and-mouse game, it’s surely only a matter of time before malicious users dream up some other ingenious way to cheat.

Ref: arxiv.org/abs/1703.02002 : FairPlay: Fraud and Malware Detection in Google Play

 
