Software that roughly mimics the way the brain works could give smartphones new smarts—leading to more accurate and sophisticated apps for tracking everything from workouts to emotions.
The software exploits an artificial-intelligence technique known as deep learning, which uses simulated neurons and synapses to process data. Feeding the program visual data strengthens the connections between certain virtual neurons, enabling it to recognize faces or other features in images it has never seen before.
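The idea of connections strengthening with exposure to data can be shown at toy scale. The sketch below is not the software described in this article; it is a single artificial neuron trained with the classic perceptron rule, where each labeled example nudges the connection weights until the neuron responds correctly.

```python
# Illustrative sketch: one artificial neuron learning logical AND.
# Deep networks apply this same weight-strengthening principle across
# millions of neurons arranged in layers.

def step(x):
    """Fire (1) only when the weighted input is positive."""
    return 1 if x > 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge each weight toward inputs that predict the label."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            out = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            err = label - out
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias

# The neuron learns to fire only when both inputs are active.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_neuron(data)
predictions = [step(sum(w * x for w, x in zip(weights, inputs)) + bias)
               for inputs, _ in data]
print(predictions)  # -> [0, 0, 0, 1]
```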
Deep learning has produced dramatic advances in processing images and audio (see “10 Breakthrough Technologies 2013: Deep Learning”). Last year, for instance, Facebook researchers used it to build a system that can determine nearly as well as a human whether two different photos show the same person, and Google used the method to create software that describes complicated images in short sentences (see “Google’s Brain-Inspired Software Describes What It Sees in Complex Images”). Thus far, however, most such efforts have involved groups of extremely powerful computers.
Smartphones can already make use of deep learning by tapping into remote servers running the software. But this can be slow, and it works only if a device has a good Internet connection. Now Nic Lane, a principal scientist at Bell Labs, says some smartphones are powerful enough to run certain deep-learning methods themselves. And Lane believes deep learning can improve the performance of mobile sensing apps. For example, it could filter out unwanted sounds from a microphone or remove unwanted signals in the data gathered by an accelerometer.
While Lane was a lead researcher at Microsoft Research Asia last year, he and Petko Georgiev, a graduate student at the University of Cambridge in the U.K., built a prototype of a relatively simple deep-learning program that runs on a modified Android smartphone.
The researchers were trying to see whether their prototype could improve a smartphone’s ability to detect, from data collected by an accelerometer on the wrist, whether someone was performing certain activities, such as eating soup or brushing teeth. They also tested whether they could get the phone to determine people’s emotions or identities from recordings of their speech.
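As a rough illustration of how such on-device recognition can work (this is an assumed pipeline, not Lane and Georgiev's actual model), an app might slice the accelerometer stream into fixed-size windows and run each window through a small feedforward network. The weights below are hand-picked placeholders; a real app would load weights learned offline.

```python
# Hedged sketch of on-device activity recognition: window the sensor
# stream, then classify each window with one forward pass through a
# tiny neural network. All weights here are illustrative placeholders.
import math

def windows(samples, size):
    """Split a 1-D sample stream into non-overlapping windows."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, size)]

def forward(window, w_hidden, b_hidden, w_out):
    """One forward pass: tanh hidden layer, then a linear score per class."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, window)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return scores.index(max(scores))  # index of the predicted activity

# Toy stream of 8 accelerometer magnitudes, window size 4.
stream = [0.1, 0.2, 0.1, 0.3, 0.9, 1.1, 1.0, 0.8]
w_hidden = [[0.5, 0.5, 0.5, 0.5], [-0.5, -0.5, -0.5, -0.5]]  # placeholder weights
b_hidden = [-1.0, 1.0]
w_out = [[-1.0, 1.0], [1.0, -1.0]]
labels = ["still", "moving"]
result = [labels[forward(w, w_hidden, b_hidden, w_out)] for w in windows(stream, 4)]
print(result)  # -> ['still', 'moving']
```

Running the forward pass locally, rather than streaming raw sensor data to a server, is what lets such an app work without a network connection.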
Lane and Georgiev detail their findings in a paper being presented this month at the HotMobile conference in Santa Fe, New Mexico. They report that the software they created was 10 percent more accurate than other methods at recognizing activities. The researchers also say their neural network was able to identify speakers and emotions about as accurately as other methods.
The prototype network Lane and Georgiev built had only a fraction as many connections between its artificial neurons as Facebook's, but it could be faster and more reliable for some tasks.
“It’s all about, I think, instilling intelligence into devices so that they are able to understand and react to the world—by themselves,” Lane says.