A New Way to Spot Malicious Apps
Malware is a constant threat for Android users downloading apps from the Google Play store. There are 2.7 million apps for people to choose from, and to its credit, Google has a system called Bouncer that looks for and removes malicious apps. But numerous malicious apps have slipped through this safety net.
Which is why Mahmudur Rahman and pals at Florida International University in Miami have developed a system called FairPlay, which searches for malicious behavior in the Google Play store in an entirely different way.
Instead of scanning the code for malicious software, FairPlay follows the trails that malicious users leave behind when fraudulently boosting their ratings. By following these trails, FairPlay can spot malicious activity that otherwise slips through Google’s security system.
Rahman and co base their new approach on a curious observation: users who post fraudulent reviews to boost the rankings of malicious apps tend to use the same account for lots of different apps. So once they are identified, they are easy to follow.
It’s easy to see why malicious users behave this way. To leave a review or rating on Google Play, users must have a Google account, register a mobile device to that account, and then install the app on that device.
That makes it hard to create lots of different accounts, so to keep their lives easy, malicious users tend to use just one. Rahman and co’s approach is to first identify malicious accounts and then map their activity.
They began by downloading the reviews and ratings associated with all the newly uploaded apps to Google Play between October 2014 and May 2015. That’s nearly 90,000 apps and three million reviews.
They then used traditional antivirus tools, along with human experts in app fraud, to manually identify over 200 apps containing malware. This forms their “gold standard” data set of malicious apps. They also asked the experts to identify Google accounts responsible for generating fraudulent reviews, finding 15 accounts that had written reviews for over 200 fraudulent apps.
These 200 apps received a further 53,000 reviews. They data-mined these reviews to find a further 188 accounts that had each reviewed at least 10 of the fraudulent apps. “We call these guilt by association accounts,” say Rahman and co.
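This guilt-by-association step amounts to a simple threshold count: flag any account that has reviewed at least 10 of the known-fraudulent apps. A minimal sketch in Python (the data layout and function name here are our own illustration, not the authors’ code):

```python
from collections import Counter

def guilt_by_association(reviews, seed_fraud_apps, min_apps=10):
    """Flag accounts that reviewed at least `min_apps` distinct
    known-fraudulent apps (the threshold reported in the article)."""
    # De-duplicate so repeat reviews of the same app count once.
    pairs = {(account, app) for account, app in reviews
             if app in seed_fraud_apps}
    counts = Counter(account for account, _ in pairs)
    return {account for account, n in counts.items() if n >= min_apps}

# Toy data: one account reviews 11 of 12 seeded fraud apps.
fraud_apps = {f"f{i}" for i in range(12)}
reviews = [("bot", f"f{i}") for i in range(11)] + [("normal", "f0")]
print(guilt_by_association(reviews, fraud_apps))  # {'bot'}
```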
From all this fraudulent activity, they selected a set of 400 fraudulent reviews to train a machine-learning algorithm to spot others like them.
They also designed Fairplay to look at other potential indicators of malicious behavior, such as the number of permissions an app asks for and the way in which ratings appear over time, looking in particular for suspicious spikes in rating activity.
Finally, they let the algorithm loose on the entire set of 90,000 newly released apps on Google Play.
The results make for interesting reading. “FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology,” say Rahman and co.
More significant, the algorithm uncovered an entirely new form of coercive attack that forces ordinary users to write positive reviews for malicious apps. “FairPlay enabled us to discover a novel, coercive campaign attack type, where app users are harassed into writing a positive review for the app, and install and review other apps,” say the team.
The campaign works by bombarding users with ads or otherwise making games difficult to play, then offering to remove the ads, unlock another level, or add extra features in exchange for a positive review.
Rahman and co uncovered this behavior by data-mining the reviews. In a subset of 3,000 reviews, they found 118 that reported some level of coercion. For example, users wrote “I only rated it because i didn’t want it to pop up while i am playing,” or “Could not even play one level before i had to rate it [...] they actually are telling me to rate the app 5 stars.”
That reveals an entirely new kind of coercive fraud attack that Google’s Bouncer does not spot.
The question now is: what next? Identifying this kind of behavior makes it easier to crack down on. But in this cat-and-mouse game, it’s surely only a matter of time before malicious users dream up some other ingenious way to cheat.
Ref: arxiv.org/abs/1703.02002 : FairPlay: Fraud and Malware Detection in Google Play