Searching for New Ideas
If anyone can preview the future of computing, it should be Alfred Spector, Google’s director of research. Spector’s team focuses on the most challenging areas of computer science research with the intention of shaping Google’s future technology. During a break from a National Academy of Engineering meeting on emerging technologies hosted by his company, Spector told Technology Review’s computing editor Tom Simonite about these efforts, and explained how Google funnels its users’ knowledge into artificial intelligence.
TR: Google often releases products based on novel ideas and technologies. How is the research conducted by your team different from the work carried out by other groups?
Spector: We also work on things that benefit Google and its users, but we have a longer time horizon and we try to advance the state of the art. That means areas like natural language processing [understanding human language], machine learning, speech recognition, translation, and image recognition. These are mostly problems that have traditionally been called artificial intelligence.
We have the significant advantage of being able to work in vivo on the large systems that Google operates, so we have large amounts of data and large numbers of users.
Can you give an example of some AI that has come out of this research effort?
Our translation tools can now use parsing—understanding the grammatical parts of a sentence. We used to train our translation just statistically, by comparing texts in different languages. Parsing now goes along with that, so we can assign parts of speech to sentences. Take the sentence “The dog crossed the road”: “the dog” is the subject, “crossed” is a verb, “the road” is the object. This makes our translations better, and it’s particularly useful in Japanese.
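To make the idea concrete, here is a minimal sketch of the kind of analysis Spector describes: tagging the words of "The dog crossed the road" and reading off subject, verb, and object. The lexicon and rules below are invented for illustration; Google's real parser is statistical and vastly more sophisticated.

```python
# A toy part-of-speech lexicon, invented for this example only.
POS_LEXICON = {
    "the": "DET",
    "dog": "NOUN",
    "crossed": "VERB",
    "road": "NOUN",
}

def parse_svo(sentence):
    """Tag each word, then read off subject, verb, and object,
    assuming simple English subject-verb-object word order."""
    words = sentence.lower().rstrip(".").split()
    tagged = [(w, POS_LEXICON.get(w, "UNK")) for w in words]

    nouns = [w for w, tag in tagged if tag == "NOUN"]
    verbs = [w for w, tag in tagged if tag == "VERB"]
    return {
        "subject": nouns[0] if nouns else None,
        "verb": verbs[0] if verbs else None,
        "object": nouns[1] if len(nouns) > 1 else None,
    }

print(parse_svo("The dog crossed the road."))
# {'subject': 'dog', 'verb': 'crossed', 'object': 'road'}
```

Knowing these roles is what helps with a language like Japanese, where the verb comes last: once the system knows which word is the verb, it can reorder the translation correctly rather than translating word by word.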
Another example is Fusion Tables, which is now part of Google Docs [the company’s online office suite]. You can create a database that is shared with others and visualize and publish that data. A lot of media organizations are using it to display information on Google Maps or Google Earth to explain situations to the public. [During the recent hurricane Irene, New York public radio station WNYC used Fusion Tables to create an interactive guide to evacuation zones in the city.]
Does Google have a particular approach to AI?
In general, we have been using hybrid artificial intelligence, which means that we learn from our user community. When they label something as having a certain meaning or implication, we learn from that. With voice search, for example, if we correctly recognize an utterance, we will see that it led to something that someone clicked on. The system self-trains based on that, so the more it’s used, the better it gets.
Spelling correction for Web search uses the same approach. When Barack Obama ran for president, people might not have been sure how to spell his name and tried different ways. Eventually they came across a spelling that worked and clicked on the result. We then learned which spelling got the results, which allowed us to correct the others automatically.
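The feedback loop Spector describes can be sketched very simply: among the variants users typed, learn which one actually led to clicked results. The query log below is fabricated for illustration; the real signal pipeline is of course far larger.

```python
from collections import Counter

# Fabricated query log: (query typed, whether a result was clicked).
click_log = [
    ("barak obama", 0),
    ("barrack obama", 0),
    ("barack obama", 1),
    ("barack obama", 1),
    ("barak obama", 1),
]

def best_spelling(log):
    """Return the variant that most often led to a clicked result."""
    clicks = Counter(query for query, clicked in log if clicked)
    return clicks.most_common(1)[0][0]

print(best_spelling(click_log))
# barack obama
```

Once the winning variant is known, queries using the losing variants can be rewritten to it, which is exactly the "self-training" loop: the more people search, the stronger the signal becomes.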
We think Fusion Tables will also help our systems learn. If there are thousands of tables that say there are 50 states in the Union, there are probably 50 states in the Union. And the Union probably has states. Don’t underestimate that. It sounds trivial, but computers can induce lots of information from many examples.
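The induction Spector mentions amounts to majority voting across independently published tables: if most tables agree on a value, treat that value as probably true. The tables below are fabricated stand-ins for Fusion Tables data.

```python
from collections import Counter

# Fabricated tables; one is outdated on purpose.
tables = [
    {"country": "United States", "states": 50},
    {"country": "United States", "states": 50},
    {"country": "United States", "states": 48},
    {"country": "United States", "states": 50},
]

def induce_fact(tables, key):
    """Majority vote across tables for one attribute, returning the
    winning value and the fraction of tables that agree with it."""
    votes = Counter(t[key] for t in tables)
    value, count = votes.most_common(1)[0]
    return value, count / len(tables)

value, agreement = induce_fact(tables, "states")
print(value, agreement)
# 50 0.75
```

The agreement ratio is the interesting part: a fact asserted by thousands of independent tables carries far more weight than one asserted by a single source, which is what lets "trivial" facts accumulate into usable knowledge.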
What new directions is the research group exploring at the moment?
We’re looking at projects in security, because it’s an increasingly important topic across computing. One area we’re looking at is whether you can constrain the programs that you use to work on the most minimal amount of information possible. If they went wrong, they would be limited in what harm they could do.
Imagine you’re using a word processor. In principle, it could delete all of your files; it’s acting as you. But what if when you started your word processor, you gave it only a single file to edit? The worst it could do would be to corrupt that file; the damage it could do would be very limited. We’re looking if we could tightly constrain the damage that could be done by faulty programs. That’s an old line of thought. People have thought of this for years. We think it might be practical now.
Google is working hard on its social networking project, Google+. Do you expect your research to contribute to that effort?
Many of the things we do apply strongly in the social realm. Google+ is a communication mechanism, and we do research on AI problems that could aid communication—for example, how to recommend content, or how to communicate across languages. Ideas like those could help people communicate across their social network.
Google+ also gives us many more opportunities to learn from our users. Take the “+1” button, for example. That’s a very important signal that could be quite relevant to improving how we understand what matters to you. If your 10 friends think something is great, it’s very likely you would like to see it.
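A hypothetical sketch of that signal: score an item for a user by the fraction of their friends who +1'd it. The names and endorsement data below are invented for illustration.

```python
# Fabricated +1 data: item -> set of users who endorsed it.
plus_ones = {
    "article-about-ai": {"alice", "bob", "carol"},
    "cat-video": {"dave"},
}

def friend_score(item, friends):
    """Fraction of the user's friends who +1'd the item."""
    endorsers = plus_ones.get(item, set())
    return len(endorsers & friends) / len(friends)

my_friends = {"alice", "bob", "carol", "dave", "erin"}
print(friend_score("article-about-ai", my_friends))
# 0.6
```

In a real ranking system this would be one signal among many, but it captures Spector's point: endorsements from your own social circle are a strong predictor of what you will want to see.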