A View from Tom Simonite
Google App Puts Neural Networks on Your Phone to Translate Signs Offline
Google’s new translation app puts simulated neurons on your phone—a technique that could make future gadgets much smarter.
In recent years Google has used networks of crudely simulated neurons running in its data centers to improve its speech recognition, build software that learned to spot cats in YouTube videos, and power a photo storage service that knows what’s in your snaps. Now the company wants you to install artificial neural networks on your phone.
Built into an updated version of Google’s translation app released today, the technology expands the app’s ability to translate printed text, such as menus, in a live view through your phone’s camera. The app could previously translate between seven languages; now it can handle 27 and translate between them without an Internet connection.
That’s possible because Google’s engineers created slimmed-down versions of the artificial neural networks Google uses in a technique called deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). The networks live inside the translation app and recognize the characters used by the different languages, even when those characters aren’t crisp and appear against the clutter of everyday life. Google’s engineers first trained much larger and more powerful neural networks to find and recognize different letters. Then they carefully shrank them down without compromising their accuracy too much. (This blog post has more details on how.)
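The post doesn’t spell out exactly how Google shrank its networks, but one standard compression technique gives a feel for the trade-off: quantizing a trained network’s 32-bit floating-point weights down to 8-bit integers, which cuts memory use roughly fourfold at a small cost in precision. The sketch below is purely illustrative, not Google’s actual method:

```python
# Illustrative sketch (not Google's actual method): shrink a list of
# float weights by mapping them onto 256 evenly spaced integer levels.

def quantize(weights, bits=8):
    """Encode float weights as integers in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**bits - 1)
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [code * scale + lo for code in codes]

weights = [0.12, -0.53, 0.88, 0.04, -0.91]   # toy stand-in for real weights
codes, scale, lo = quantize(weights)
approx = dequantize(codes, scale, lo)

# Each reconstructed weight is within half a quantization step of the
# original -- the "not compromising accuracy too much" part of the story.
max_err = max(abs(w - a) for w, a in zip(weights, approx))
assert max_err <= scale / 2 + 1e-9
```

Storing the 8-bit codes instead of 32-bit floats is what makes a network small enough to ship inside a phone app rather than run in a data center.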
It’s the first time Google has used that trick, but it likely won’t be the last. Embedding the intelligence of artificial neural networks directly into gadgets, so they don’t have to connect to the Internet for such tasks, has clear benefits, and Google is not the only company exploring the idea. Coming changes to the design of mobile chips and software will make the approach easier and more powerful.
Mobile-chip maker Qualcomm has shown off a camera app with artificial neural networks inside that can recognize some objects or identify the type of scene you’re shooting. The company’s future chip designs are being tweaked to make building apps like that easier (see “Smartphones Will Soon Learn to Recognize Faces and More”). Other companies are also working on hardware that could run neural nets inside gadgets, robots, and cars (see “Silicon Chips That See Are Going to Make Your Smartphone Brilliant”).