If anyone can preview the future of computing, it should be Alfred Spector, Google’s director of research. Spector’s team focuses on the most challenging areas of computer science research with the intention of shaping Google’s future technology. During a break from a National Academy of Engineering meeting on emerging technologies hosted by his company, Spector told Technology Review’s computing editor Tom Simonite about these efforts, and explained how Google funnels its users’ knowledge into artificial intelligence.
TR: Google often releases products based on novel ideas and technologies. How is the research conducted by your team different from the work carried out by other groups?
Spector: We also work on things that benefit Google and its users, but we have a longer time horizon and we try to advance the state of the art. That means areas like natural language processing [understanding human language], machine learning, speech recognition, translation, and image recognition. These are mostly problems that have traditionally been called artificial intelligence.
We have the significant advantage of being able to work in vivo, on the large systems that Google operates, so we have large amounts of data and large numbers of users.
Can you give an example of some AI that has come out of this research effort?
Our translation tools can now use parsing—understanding the grammatical parts of a sentence. We used to train our translation purely statistically, by comparing texts in different languages. Parsing now goes along with that, so we can assign parts of speech to the words in a sentence. Take the sentence “The dog crossed the road”: “the dog” is the subject, “crossed” is the verb, “the road” is the object. This makes our translations better, and it’s particularly useful in Japanese.
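The part-of-speech step Spector describes can be sketched in miniature. This is not Google's parser—real systems use statistical parsers trained on large corpora—just a toy dictionary-based tagger, with an invented lexicon, applied to the example sentence from the interview:

```python
# Toy part-of-speech tagger: look each word up in a hand-built lexicon.
# The lexicon and tag names here are invented for illustration only.
LEXICON = {
    "the": "DET",
    "dog": "NOUN",
    "crossed": "VERB",
    "road": "NOUN",
}

def tag(sentence):
    """Assign a part-of-speech tag to each word (UNK if unknown)."""
    return [(w, LEXICON.get(w.lower(), "UNK")) for w in sentence.split()]

print(tag("The dog crossed the road"))
# [('The', 'DET'), ('dog', 'NOUN'), ('crossed', 'VERB'), ('the', 'DET'), ('road', 'NOUN')]
```

A translation system can then use these tags to identify subject, verb, and object—which matters for languages like Japanese, where word order differs sharply from English.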
Another example is Fusion Tables, which is now part of Google Docs [the company’s online office suite]. You can create a database that is shared with others and visualize and publish that data. A lot of media organizations are using it to display information on Google Maps or Google Earth to explain situations to the public. [During the recent hurricane Irene, New York public radio station WNYC used Fusion Tables to create an interactive guide to evacuation zones in the city.]
Does Google have a particular approach to AI?
In general, we have been using hybrid artificial intelligence, which means that we learn from our user community. When they label something as having a certain meaning or implication, we learn from that. With voice search, for example, if we correctly recognize an utterance, we will see that it led to something that someone clicked on. The system self-trains based on that, so the more it’s used, the better it gets.
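The click-feedback loop Spector describes can be sketched as follows. The class, names, and data are invented for illustration—Google's actual pipeline is far richer—but the core idea is the same: a click on a result acts as an implicit label confirming a recognition hypothesis.

```python
from collections import defaultdict

class ClickTrainedRecognizer:
    """Toy model of click-feedback self-training for voice search."""

    def __init__(self):
        # clicks[audio_key][hypothesis] = number of click confirmations
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, audio_key, hypothesis):
        """A user clicked a result for this recognition: treat it as a label."""
        self.clicks[audio_key][hypothesis] += 1

    def best_hypothesis(self, audio_key, candidates):
        """Prefer the candidate with the most click confirmations."""
        counts = self.clicks[audio_key]
        return max(candidates, key=lambda h: counts[h])

r = ClickTrainedRecognizer()
r.record_click("utt42", "weather boston")
r.record_click("utt42", "weather boston")
r.record_click("utt42", "whether boston")
print(r.best_hypothesis("utt42", ["weather boston", "whether boston"]))
# weather boston
```

The more users interact with the system, the more confirmations accumulate—which is why, as Spector says, the system gets better the more it's used.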
Spelling correction for Web search uses the same approach. When Barack Obama ran for president, people might not have been sure how to spell his name and tried different ways. Eventually they hit on a spelling that worked and clicked on the result. We then learned which spelling got results, and that allowed us to correct the others automatically.
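The same idea in code: among a group of query variants for one intent, the variant most often followed by a click is treated as the canonical spelling. The log data and grouping here are invented; real systems must also cluster variants and handle far noisier signals.

```python
from collections import Counter

def learn_correction(click_log):
    """click_log: list of (query_variant, clicked) pairs for one search intent.
    Returns the variant that most often led to a click."""
    clicks = Counter(q for q, clicked in click_log if clicked)
    return clicks.most_common(1)[0][0]

log = [
    ("barak obama", False),
    ("barrack obama", False),
    ("barack obama", True),
    ("barack obama", True),
]
print(learn_correction(log))  # barack obama
```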
We think Fusion Tables will also help our systems learn. If there are thousands of tables that say there are 50 states in the Union, there are probably 50 states in the Union. And the Union probably has states. Don’t underestimate that. It sounds trivial, but computers can induce lots of information from many examples.
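The induction Spector sketches—many independent tables agreeing on a fact—amounts to a majority vote with a confidence threshold. This sketch, with invented data and an invented agreement threshold, shows the principle:

```python
from collections import Counter

def induce_fact(observations, min_agreement=0.8):
    """Return the majority value if enough sources agree, else None.
    min_agreement is an illustrative threshold, not a real system parameter."""
    counts = Counter(observations)
    value, n = counts.most_common(1)[0]
    return value if n / len(observations) >= min_agreement else None

# Simulated: a thousand tables reporting how many states are in the Union.
tables = [50] * 980 + [48] * 15 + [13] * 5
print(induce_fact(tables))  # 50
```

It sounds trivial for one fact, but applied across millions of shared tables, this kind of aggregation lets a system induce a large body of mostly reliable knowledge.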