What new directions is the research group exploring at the moment?
We’re looking at projects in security, because it’s an increasingly important topic across computing. One question we’re exploring is whether you can constrain a program to work on the minimum amount of information it needs. Then, if it went wrong, it would be limited in how much harm it could do.
Imagine you’re using a word processor. In principle, it could delete all of your files, because it’s acting as you. But what if, when you started your word processor, you gave it only the single file you wanted to edit? The worst it could do would be to corrupt that file; the damage would be very limited. We’re looking at whether we can tightly constrain the damage that a faulty program can do. That’s an old line of thought—people have been thinking about it for years—but we believe it might be practical now.
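The word-processor idea can be sketched in a capability-passing style: the caller hands the program one open file handle instead of general filesystem access, so a bug can touch nothing else. This is a hypothetical toy illustration of the principle, not any system the group described; the function name `word_processor` and the uppercase “edit” are made up for the example.

```python
import tempfile

def word_processor(doc):
    """A toy 'word processor' that receives an already-open file object.

    Because it holds only this one handle (a capability) rather than the
    authority to open arbitrary paths, the worst a bug in here can do is
    corrupt this single file.
    """
    doc.seek(0)
    text = doc.read()
    doc.seek(0)
    doc.truncate()
    doc.write(text.upper())  # the "edit": uppercase the document
    doc.flush()

# Usage: the caller, not the program, decides which file is exposed.
with tempfile.NamedTemporaryFile("w+") as f:
    f.write("hello world")
    f.flush()
    word_processor(f)
    f.seek(0)
    print(f.read())  # -> HELLO WORLD
```

The key design point is that authority flows only through the argument: nothing inside `word_processor` names a path, so confining it is a matter of what you pass in, not of trusting its code.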
Google is working hard on its social networking project, Google+. Do you expect your research to contribute to that effort?
Many of the things we do could be very useful in the social realm. Google+ is a communication mechanism, and we do research on AI problems that could aid communication—for example, how to recommend content, or how to communicate across languages. Ideas like those could help people communicate across their social network.
Google+ also gives us many more opportunities to learn from our users. Take the “+1” button, for example. That’s a very important signal, one that could be quite relevant to understanding what matters to you. If 10 of your friends think something is great, it’s very likely you would want to see it.
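The friends’-endorsement signal described above can be sketched as a simple vote count: rank items by how many of a user’s friends have +1’d them. This is a minimal illustrative sketch; the data structures and the `recommend` function are assumptions for the example, not any real Google+ API.

```python
from collections import Counter

def recommend(user, plus_ones, friends):
    """Rank items by how many of the user's friends endorsed them.

    `plus_ones` maps each person to the set of items they +1'd;
    `friends` maps each user to a list of friends. Items the user
    has already endorsed are filtered out.
    """
    votes = Counter()
    for friend in friends.get(user, []):
        for item in plus_ones.get(friend, set()):
            votes[item] += 1
    seen = plus_ones.get(user, set())
    return [item for item, _ in votes.most_common() if item not in seen]

friends = {"you": ["ann", "bob", "cem"]}
plus_ones = {
    "ann": {"article-a", "video-b"},
    "bob": {"article-a"},
    "cem": {"article-a", "photo-c"},
    "you": {"photo-c"},
}
print(recommend("you", plus_ones, friends))  # "article-a" ranks first: 3 friends +1'd it
```

Real systems would weight friends differently and combine this with many other signals, but the core intuition is exactly the one in the interview: endorsements from your social circle are strong evidence of relevance.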