Google engineers have created a lean image-recognition system that could help guard your display when an unfamiliar face looks at it.
Facial recognition and gaze detection are nothing new for machine learning. But in a paper to be presented at the Neural Information Processing Systems conference next week, Google engineers say that they’ve been able to slim down the software required to perform those tasks so much that they can run reliably in almost real time on a smartphone. It takes the software just two milliseconds to detect a gaze and 47 milliseconds to identify a face.
To demonstrate why that might be useful, they’ve created a simple tool, first reported by The Register and shown in the video above, that applies the software to a smartphone’s front-facing camera. Information gleaned from the detection algorithms is used to hide private content when a stranger looks at the screen. The software keeps a list of registered users, and if a detected face is both looking at the phone and not on that list, a warning pops up and, in this case, a messaging app is hidden.
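The decision logic described above is straightforward to sketch. The following is a minimal, hypothetical illustration of that check, not Google's actual implementation; all names (`Face`, `should_hide_content`, and so on) are invented for this example, and the real system would feed it live face-recognition and gaze-detection results rather than hand-built objects:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Face:
    """A single detected face (illustrative stand-in for a real detector's output)."""
    identity: Optional[str]     # matched registered user, or None if unrecognized
    looking_at_screen: bool     # result of the gaze-detection step

def should_hide_content(faces: list[Face], registered_users: set[str]) -> bool:
    """Return True if any detected face is gazing at the screen
    but does not belong to a registered user."""
    return any(
        face.looking_at_screen and face.identity not in registered_users
        for face in faces
    )

# The owner looking at their own phone triggers nothing:
owner_only = [Face(identity="owner", looking_at_screen=True)]
print(should_hide_content(owner_only, {"owner"}))   # False

# An unrecognized onlooker gazing at the screen triggers the shield:
with_stranger = owner_only + [Face(identity=None, looking_at_screen=True)]
print(should_hide_content(with_stranger, {"owner"}))  # True
```

Note that a stranger merely in frame but not looking at the screen would not trigger the check, which matches the article's description: it is the combination of an unregistered face and a detected gaze that hides the content.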
That’s neat. But the precise application is less interesting than the fact that it’s possible at all. This example is indicative of a larger trend toward AI that can run efficiently on less powerful mobile devices. Most smartphones, or devices like smart speakers, currently have to farm AI processing out to big servers via the cloud. But a desire for less lag and increased data privacy is driving many firms to shrink machine-learning software so that it runs on simple chips.
In fact, Google recently announced a new open-source machine-learning software library that’s dedicated to helping non-experts develop lightweight AI for mobile devices. So expect more and more examples of this kind of lean software in the future.