MIT Technology Review

How Google’s Ear Hears

The new voice-search application for the iPhone marks a milestone for spoken interfaces.

If you own an iPhone, you can now be part of one of the most ambitious speech-recognition experiments ever launched. On Monday, Google announced that it had added voice search to its iPhone mobile application, allowing people to speak search terms into their phones and view the results on the screen.

In designing the system, Google took on an enormous challenge. Where an automated airline-reservation system, say, has to handle a relatively limited vocabulary, a Web search engine must contend with literally any topic that anyone might ever want to research.

Fortunately, Google also has a huge amount of data on how people use search, and it was able to use that to train its algorithms. If the system has trouble interpreting one word in a query, for instance, it can fall back on data about which terms are frequently grouped together.
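To make that fallback concrete, here is a minimal sketch of the idea: given two candidate transcriptions, prefer the one whose terms co-occur more often in past queries. The term pairs, counts, and scoring function below are invented for illustration; Google has not published how its system actually weighs this data.

```python
# Hypothetical sketch: re-ranking ambiguous transcriptions using
# query co-occurrence counts (illustrative data, not Google's).

# How often term pairs have appeared together in past queries.
# These numbers are fabricated for the example.
CO_OCCURRENCE = {
    ("new", "york"): 9800,
    ("new", "fork"): 3,
    ("times", "york"): 7200,
    ("times", "fork"): 1,
}

def score_candidate(terms):
    """Sum pairwise co-occurrence counts for a candidate transcription."""
    total = 0
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            total += CO_OCCURRENCE.get((a, b), 0) + CO_OCCURRENCE.get((b, a), 0)
    return total

# The recognizer is unsure whether the middle word was "york" or "fork".
candidates = [["new", "york", "times"], ["new", "fork", "times"]]
print(max(candidates, key=score_candidate))
# ['new', 'york', 'times'] -- the query data resolves the ambiguity
```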

Google also had a useful set of data correlating speech samples with written words, culled from its free directory service, GOOG-411. People call the service and say the name of a city and state, and then say the name of a business or category. According to Mike Cohen, a Google research scientist, voice samples from this service were the main source of acoustic data for training the system.

But the data that Google used to build the system pales in comparison to the data that it now has the chance to collect. “The nice thing about this application is that Google will collect all this speech data,” says Jim Glass, a principal research scientist at MIT. “And by getting all this data, they will improve their recognizer even more.”

Mobile phones are assuming more and more computational duties; in much of the world, they’re people’s only computers. But their small screens and awkward keyboards can make text-intensive actions, like Web search, frustrating. While mobile browsers are getting better at predicting search terms, thereby reducing the amount of typing, nothing is quite as easy as speaking directly into the phone.

Speech-recognition systems, however, remain far from perfect. And people’s frustration skyrockets when they can’t find their way out of a voice-menu maze. But Google’s implementation of speech recognition deftly sidesteps some of the technology’s shortcomings, says Glass.

“The beauty of search engines is that they don’t have to be exactly right,” he says. When a user submits a spoken query, he says, Google’s algorithms “just take it and stick it in a search engine, which puts the onus on the user to select the right result or try again.” Because people are already used to refining their queries as they conduct Web searches, Glass says, they’re more tolerant of imperfect results.

Even after the search application loads, the voice-recognition system kicks in only when the user puts the phone to her ear, as determined by its built-in motion sensors. “If you’re listening all the time, then you trigger false positives,” Glass says. “The typical solution is to make you push a button,” but the motion-activated system is easier and more intuitive, he says.
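As an illustration of the trigger logic Glass describes, the sketch below gates the microphone on a tilt reading. The threshold values, the Recognizer stand-in, and the simulated sensor samples are all hypothetical; the article says only that the app relies on the phone's built-in motion sensors.

```python
# Illustrative sketch of a raise-to-ear trigger: open the microphone
# only when the tilt suggests the phone is held against the ear.

class Recognizer:
    """Stand-in for a speech recognizer with explicit start/stop control."""
    def __init__(self):
        self.listening = False

    def set_listening(self, on: bool):
        if on != self.listening:
            self.listening = on
            print("microphone", "on" if on else "off")

def at_ear(pitch_degrees: float) -> bool:
    # Treat a roughly vertical orientation as "phone held to the ear".
    # The 45-135 degree window is an invented threshold.
    return 45.0 <= pitch_degrees <= 135.0

recognizer = Recognizer()
for pitch in [5.0, 20.0, 80.0, 90.0, 30.0]:  # simulated accelerometer samples
    recognizer.set_listening(at_ear(pitch))
# Listening starts only while the phone appears to be at the ear,
# instead of keeping the microphone open all the time.
```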

The search application also uses the iPhone’s built-in location-awareness system to prioritize results. For instance, if you search for Bank of America, one of the results will be a map of local branches. This saves users from having to include location terms, which can be open to misinterpretation, in their queries.
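A rough sketch of how location might feed into ranking, assuming the phone supplies a latitude and longitude: nearby results have a distance penalty subtracted from their relevance score. The scoring weights and sample data are invented for illustration.

```python
# Hypothetical location-aware ranking: boost results near the phone's
# reported position so the user never has to type a city name.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Equirectangular approximation -- accurate enough at city scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def rank(results, user_lat, user_lon):
    # Sort by relevance, with an invented 0.1-per-km distance penalty.
    return sorted(
        results,
        key=lambda r: r["relevance"]
        - 0.1 * distance_km(user_lat, user_lon, r["lat"], r["lon"]),
        reverse=True,
    )

branches = [
    {"name": "Bank of America - downtown", "relevance": 1.0, "lat": 37.78, "lon": -122.41},
    {"name": "Bank of America - suburb", "relevance": 1.0, "lat": 37.55, "lon": -122.30},
]
for r in rank(branches, 37.79, -122.40):
    print(r["name"])  # the nearby downtown branch ranks first
```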

While Google won’t disclose details about how its voice-recognition system works, it probably hasn’t done anything too radical, says Nelson Morgan, director of the International Computer Science Institute, in Berkeley, CA. “Nearly everybody who does speech recognition has a system that looks about the same,” he says. First, the system analyzes frequency characteristics of the voice input. Then, based on probabilities drawn from a huge number of real-world examples, it correlates them with words. Finally, those words are fed into a language model that uses common combinations or sequences of words to resolve ambiguities. For instance, if you say, “president of the United,” it’s likely that the next word is going to be “States.”
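Morgan’s last step, the language model, can be illustrated with a toy bigram table: among acoustically plausible next words, pick the one most often seen after the previous word. The counts here are fabricated for the example.

```python
# Toy bigram language model in the spirit Morgan describes: choose the
# word whose continuation is likeliest given what came before.
# Counts are invented for illustration.
BIGRAM_COUNTS = {
    ("united", "states"): 120000,
    ("united", "skates"): 4,
    ("united", "stakes"): 9,
}

def best_next_word(previous_word, acoustic_candidates):
    """Among acoustically plausible words, pick the likeliest continuation."""
    return max(
        acoustic_candidates,
        key=lambda w: BIGRAM_COUNTS.get((previous_word, w), 0),
    )

# The acoustic model can't distinguish "states" from near-homophones,
# but the language model can.
print(best_next_word("united", ["states", "skates", "stakes"]))  # states
```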

While Google isn’t announcing plans to use its voice-recognition technology for other services, the potential is easy to see. “Now we have tech to take spoken words and convert it to text,” says Gummi Hafsteinsson, a senior product manager at Google. “There are a lot of options.” Currently, there’s no way to use your voice to access Google’s calendar or e-mail applications or to write an e-mail or a text message. But that could change in the future. “I think this opens up a whole new dimension,” Hafsteinsson says.
