Google App Puts Neural Networks on Your Phone to Translate Signs Offline

Google’s new translation app puts simulated neurons on your phone—a technique that could make future gadgets much smarter.
July 29, 2015

In recent years Google has used networks of crudely simulated neurons running in its data centers to improve its speech recognition, build software that learned to spot cats in YouTube videos, and power a photo storage service that knows what’s in your snaps. Now the company wants you to install artificial neural networks on your phone.

Google’s translation app can visually convert between 27 different languages.

Built into an updated version of Google’s translation app released today, the technology expands its ability to translate printed text such as menus in a live view through your phone’s camera. The app could previously translate between seven different languages. Now it can handle 27 and translate between them without an Internet connection.

That’s possible because Google’s engineers created slimmed-down versions of the artificial neural networks the company uses in a technique called deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). Those networks live inside the translation app and recognize the characters of the different languages, even when the text is blurry or set against the visual clutter of everyday life. Google’s engineers first trained much larger, more powerful neural networks to find and recognize letters, then carefully shrank them down without compromising their accuracy too much. (This blog post has more details on how.)
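Google hasn’t published the exact recipe here, but one widely used way to shrink a trained network’s footprint is to quantize its weights, storing each 32-bit floating-point value as an 8-bit integer on a grid spanning the layer’s range. The sketch below is a generic illustration of that idea, not Google’s method, and the function names are hypothetical:

```python
import numpy as np

def quantize(weights):
    """Map float32 weights onto a 256-level uint8 grid,
    returning the scale and offset needed to undo it."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float32 weights from the uint8 grid."""
    return q.astype(np.float32) * scale + lo

# A fake "layer" of weights standing in for a trained network's parameters.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale, lo = quantize(w)
w_approx = dequantize(q, scale, lo)

# uint8 takes a quarter of the space of float32, and the
# round-trip error is bounded by half a quantization step.
print("storage ratio:", w.nbytes / q.nbytes)  # 4.0
print("max abs error:", float(np.abs(w - w_approx).max()))
```

Each weight lands within half a grid step of its original value, which is why accuracy degrades only slightly; in practice the compressed model is usually re-evaluated (and sometimes fine-tuned) to confirm the loss is acceptable.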

It’s the first time Google has used that trick, but it likely won’t be the last. Embedding the intelligence of artificial neural networks directly into gadgets, so they don’t need to connect to the Internet for such tasks, has clear benefits, and Google is not the only company exploring the idea. Coming changes to the chips and software in mobile devices will make the approach easier to implement and more powerful.

Mobile-chip maker Qualcomm has shown off a camera app with artificial neural networks inside that can recognize some objects or identify the type of scene you’re shooting. The company’s future chip designs are being tweaked to make it easier to build apps like that (see “Smartphones Will Soon Learn to Recognize Faces and More”). Other companies are also working on hardware that could run neural nets inside gadgets, robots, and cars (see “Silicon Chips That See Are Going to Make Your Smartphone Brilliant”).

Illustration by Rose Wong