The apps, which could be used to spy on other people’s devices, had been downloaded a total of 130,000 times…
The news: Researchers at cybersecurity firm Avast discovered seven stalkerware apps on the Play Store this week. The most-installed apps were Spy Tracker and SMS Tracker, with more than 50,000 downloads each. The researchers reported them to Google, and they have all been removed.
How they worked: All seven apps required physical access to the smartphone the perpetrator wanted to spy on. The snoop could download an app from the Play Store, install it on the target device, and then enter their own email address and a password to monitor the phone remotely from their own device.
The apps could all track the surveilled person’s location, contacts, SMS messages, and call history. Once installed, they displayed no app icon, so the targeted person would have no idea that the stalkerware was on the phone. Some stalkerware apps don’t even require access to the device and can be sent disguised as a picture message.
Who are the targets? Although these apps are often used by abusive partners to spy on their victims, they are also sometimes used to track children or even employees.
The scale of the problem: There have been very few studies on stalkerware, so it’s hard to know how big this problem truly is. However, technology is playing a growing role in abusive relationships. Domestic-violence charity Refuge estimates that around 95% of its cases involve some form of technology-based abuse.
Big tech firms have yet to fully face up to, and act on, the use of their products for this purpose. Google’s action this week is positive, but it would be better to see companies proactively rooting out this problem rather than relying on prods from outside experts.
Read next: Head over to this story if you want to read about one woman’s experience of stalkerware, or want advice if you fear you might be a victim of it too.
Sign up here for our daily newsletter The Download to get your dose of the latest must-read news from the world of emerging tech.
The spacecraft will burn up over the south Pacific Ocean at some point today…
About Tiangong-2: It was launched back in September 2016 to conduct a series of scientific and technological space experiments, including in-orbit propellant refueling technology, according to China’s National Space Administration. Two astronauts (called taikonauts in China) traveled to Tiangong-2 in October 2016. They spent about a month on the spacecraft, performing experiments on human physiology in space plus other tests. Tiangong-2 has far exceeded its expected two-year life span (it’s still fully functional), but it will now be deorbited.
The return: It’s scheduled for today, July 19, Beijing time. Tiangong-2 will fire its thrusters to direct its reentry toward the South Pacific Ocean, between New Zealand and Chile. China will confirm once it has taken place.
No need to worry: The deorbiting process has been carefully planned. Most of the craft is likely to burn up in the atmosphere, and any debris that survives should land in the ocean.
Although ... Hopefully, Tiangong-2 will fare better than its predecessor, Tiangong-1. Operators lost contact with Tiangong-1 long before it reentered the atmosphere unguided on April 2, 2018; its surviving debris fell into the Pacific Ocean. China plans to launch Tiangong-3 in 2020.
Want to stay up to date with space tech news? Sign up for our newsletter, The Airlock.
There’s a lot that the viral photo-editing app could do with a giant database of faces…
The context: FaceApp, the photo-editing app that uses AI to touch up your face, has come under scrutiny since going viral. It’s been around since 2017, but a newly added feature that allows users to see what they might look like when they age has catapulted it back into popularity. Now the fact that it’s owned by the Russia-based company Wireless Lab has people spooked.
The concern: According to some reports, the app has amassed more than 150 million photos of people’s faces since launch, and its terms of service stipulate that the company can use the photos however it wants, in perpetuity. The company has said in a statement that it deletes most images from its servers within 48 hours of upload and doesn’t share data with third parties. Despite this, some Democratic members of the US Congress are now calling for an FBI investigation into the company. Users are also worried that their faces could be used to track them in the future through face recognition.
The reality: Okay, so let’s imagine FaceApp did decide to use the photos it has gathered beyond the users’ reason for uploading them. What could it actually do? It’s highly unlikely the company would use them to train algorithms to identify your face. First, most users don’t give FaceApp their name or other identifying information, which would be required for recognition. Second, although it is technically possible for a system to learn to recognize someone from a single photo, the accuracy would be poor. There are also far easier ways to obtain photos of a specific target, such as through their social-media profiles and Flickr uploads.
Down the rabbit hole: There are other ways to use a giant database of faces, however. Here are just a few:
- Face modification: Perhaps the most obvious use would be for FaceApp to improve its own algorithms. The app’s ability to tweak and alter an image of a face is based on a neural network already trained on tons of face photos. It would make sense for the company to continue feeding it more images to fine-tune its capabilities. Such a database could also be used to build more face modification features that the app doesn’t already have.
- Face analysis: While face recognition identifies specific individuals, face analysis simply involves predicting features about them, such as their gender or age. Many commercial face analysis systems are trained on open-source databases that look a lot like the one FaceApp could have retained.
- Face detection: Similarly, face detection is about identifying whether there’s a face in a photo, and where it is. Again, these systems could be built or improved with more face photos.
- Deepfake generation: And finally, such a database could be used to create faces of people who don’t exist, which would come with a whole host of issues. Fake face generation has allegedly already been used by spies to spoof identities, for example.
Does it matter? While these use cases raise major privacy concerns, it’s worth noting that there are many other open-source databases of face photos and videos that may already include your likeness.
Such databases, made of public media scraped from the internet, have long been a basis of AI research. Even if FaceApp didn’t have its own stockpile of images, it would be easy to find others among the plethora of options out there. Perhaps that’s the greater point of the story: FaceApp merely highlights how much control we’ve already lost over our digital data.