
Google App Puts Neural Networks on Your Phone to Translate Signs Offline

Google’s new translation app puts simulated neurons on your phone—a technique that could make future gadgets much smarter.
July 29, 2015

In recent years Google has used networks of crudely simulated neurons running in its data centers to improve its speech recognition, build software that learned to spot cats in YouTube videos, and power a photo storage service that knows what’s in your snaps. Now the company wants you to install artificial neural networks on your phone.

Google’s translation app can visually convert between 27 different languages.

Built into an updated version of Google’s translation app released today, the technology expands the app’s ability to translate printed text, such as menus, in a live view through your phone’s camera. The app could previously translate between seven languages this way; now it can handle 27, and it can translate between them without an Internet connection.

That’s possible because Google’s engineers created slimmed-down versions of the artificial neural networks the company uses in a technique called deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). These networks live inside the translation app and recognize the characters of the different languages, even when the letters aren’t crisp or appear against the visual clutter of everyday life. Google’s engineers first trained much larger, more powerful neural networks to find and recognize different letters, then carefully shrank them down without compromising their accuracy too much. (This blog post has more details on how.)
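
The blog post linked above has the specifics, but one standard way to shrink a trained network is to quantize its weights: store each one as an 8-bit integer plus a shared scale factor rather than a 32-bit float, cutting the stored size to roughly a quarter. The Python sketch below is a hypothetical illustration of that general idea, not Google’s actual method; all names in it are made up.

```python
import numpy as np

# Hypothetical sketch: compress one trained layer's float32 weights to int8.
# The largest weight maps to +/-127, and a single float "scale" lets us
# recover approximate weights at inference time on the phone.

def quantize_weights(w):
    scale = np.abs(w).max() / 127.0          # shared scale factor
    q = np.round(w / scale).astype(np.int8)  # 1 byte per weight instead of 4
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale      # approximate original weights

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(128, 64)).astype(np.float32)  # stand-in layer
q, scale = quantize_weights(w)

print("bytes before:", w.nbytes)   # 32768
print("bytes after: ", q.nbytes)   # 8192
print("max weight error:", np.abs(w - dequantize(q, scale)).max())
```

The accuracy cost comes from the rounding error measured in the last line, which is why the shrinking has to be done carefully: a network trained or retuned to tolerate that error loses far less accuracy than one compressed blindly.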

It’s the first time Google has used that trick, but it likely won’t be the last. Embedding the intelligence of artificial neural networks directly into gadgets, so they don’t have to connect to the Internet for such tasks, has clear benefits. Google is not the only company exploring the idea, and coming changes to the design of mobile chips and software will make the approach easier to implement and more powerful.

Mobile-chip maker Qualcomm has shown off a camera app with artificial neural networks inside that can recognize some objects or identify the type of scene you’re shooting. The company’s future chip designs are being tweaked to make it easier to build apps like that (see “Smartphones Will Soon Learn to Recognize Faces and More”). Other companies are also working on hardware that could run neural networks inside gadgets, robots, and cars (see “Silicon Chips That See Are Going to Make Your Smartphone Brilliant”).
