
MIT Technology Review

 


If you own an iPhone, you can now be part of one of the most ambitious speech-recognition experiments ever launched. On Monday, Google announced that it had added voice search to its iPhone mobile application, allowing people to speak search terms into their phones and view the results on the screen.

In designing the system, Google took on an enormous challenge. Where an automated airline reservation system, say, has to handle a relatively limited vocabulary, a Web search engine must contend with literally any topic that anyone might ever want to research.

Fortunately, Google also has a huge amount of data on how people use search, and it was able to use that to train its algorithms. If the system has trouble interpreting one word in a query, for instance, it can fall back on data about which terms are frequently grouped together.
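The article doesn't describe Google's actual algorithm, but the fallback idea can be sketched in a few lines: when two candidate words sound alike, pick the one that co-occurs more often with the rest of the query in past search logs. The counts and function names below are illustrative, not Google's.

```python
# Hypothetical co-occurrence counts mined from a query log:
# how often each word pair has appeared together in past searches.
cooccurrence = {
    ("boston", "weather"): 9500,
    ("boston", "whether"): 12,
    ("red", "sox"): 20000,
    ("red", "socks"): 800,
}

def pick_candidate(context_word, candidates):
    """Choose the candidate that co-occurs most often with a context word."""
    def score(candidate):
        return cooccurrence.get((context_word, candidate), 0)
    return max(candidates, key=score)

# The recognizer can't tell "weather" from "whether" acoustically,
# but the neighboring word "boston" settles it.
print(pick_candidate("boston", ["weather", "whether"]))  # -> weather
```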

Google also had a useful set of data correlating speech samples with written words, culled from GOOG-411, its free telephone directory service. Callers say the name of a city and state, and then the name of a business or category. According to Mike Cohen, a Google research scientist, voice samples from this service were the main source of acoustic data for training the system.

But the data that Google used to build the system pales in comparison to the data that it now has the chance to collect. “The nice thing about this application is that Google will collect all this speech data,” says Jim Glass, a principal research scientist at MIT. “And by getting all this data, they will improve their recognizer even more.”

Mobile phones are assuming more and more computational duties; in much of the world, they’re people’s only computers. But their small screens and awkward keyboards can make text-intensive actions, like Web search, frustrating. While mobile browsers are getting better at predicting your search terms, and thereby reducing the amount of typing, nothing is quite as easy as speaking directly into the phone.
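The type-ahead prediction mentioned above is, at its simplest, prefix matching against a log of popular queries. A minimal sketch, with a toy query log standing in for real data:

```python
from collections import Counter

# Toy query log: past queries and how often each was issued.
query_log = Counter({
    "google voice search": 120,
    "google maps": 300,
    "goo gone": 15,
})

def suggest(prefix, k=2):
    """Return the k most frequent past queries that start with the prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda item: -item[1])
    return [q for q, _ in matches[:k]]

print(suggest("goog"))  # -> ['google maps', 'google voice search']
```

Even this crude version shows why prediction helps on a phone: four keystrokes narrow hundreds of queries down to a couple of likely completions.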

Speech-recognition systems, however, remain far from perfect. And people’s frustration skyrockets when they can’t find their way out of a voice-menu maze. But Google’s implementation of speech recognition deftly sidesteps some of the technology’s shortcomings, says Glass.

“The beauty of search engines is that they don’t have to be exactly right,” he says. When a user submits a spoken query, he says, Google’s algorithms “just take it and stick it in a search engine, which puts the onus on the user to select the right result or try again.” Because people are already used to refining their queries as they conduct Web searches, Glass says, they’re more tolerant of imperfect results.
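Glass's point can be made concrete with a small sketch: instead of demanding a confident transcription, take the recognizer's best hypothesis, submit it as an ordinary text query, and let the user judge the results. The recognizer and search backend here are stand-ins, not Google's APIs.

```python
def recognize(audio):
    """Stand-in recognizer: returns hypotheses ranked by confidence."""
    return [("pictures of everest", 0.62), ("pictures of harvest", 0.31)]

def web_search(query):
    """Stand-in search backend."""
    return [f"result for: {query}"]

def voice_search(audio):
    hypotheses = recognize(audio)
    best, _confidence = hypotheses[0]  # no threshold, no dialog maze:
    return best, web_search(best)      # the user refines if the guess is off

query, results = voice_search(b"...")
print(query)  # -> pictures of everest
```

The design choice is the absence of a confirmation loop: a wrong guess costs the user one glance and a retry, which is exactly the refinement behavior Web searchers already exhibit.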


Credit: Technology Review
Video by Brittany Sauser

Tagged: Computing, Communications, Google, iPhone, search, speech recognition

