
Q&A: Rick Rashid

The director of Microsoft Research defends the company’s past, present, and future.
December 13, 2006

Rick Rashid, who has directed Microsoft Research since the early 1990s, recently visited MIT and talked to Technology Review’s editor in chief about the future of computing. Before joining Microsoft, Rashid was a professor of computer science at Carnegie Mellon University. He is best known for his work on the Mach operating-system kernel, which shaped the NeXTSTEP operating system that powered NeXT’s black computers and, in turn, today’s Mac OS X.

Technology Review: Why can’t Microsoft Research, with its large budget, devise new and compelling interfaces and experiences for personal computers?

Rick Rashid: I reject the premise of the question. Anyone using a PC today has a very different experience than they did ten years ago, or even five years ago. My wife, for example, interacts with her extended family almost exclusively through MSN voice and video, so she’s video-calling her sister nearly every day. That’s new and compelling. Or think about how people use music. That’s all enabled by new PC technologies, like peer-to-peer networking. We’ve introduced handwriting and ink to the Tablet PC, which has changed the way a lot of people, particularly in the academic community, use their computers. All the Web 2.0 stuff, all the dynamic HTML, came out of Microsoft in the ’90s. Consider the way we’ve extended the use of 3-D capabilities in Vista [the newest version of Windows]. That’s new, too. It’s a very common trap for people of a certain age to say, “There’s nothing new in the world, and the golden age was in the past.” And it’s not true.

TR: Let me try another tack. Why isn’t PC software better than it is? Why does so little PC software possess the simple, elegant properties of products like the Apple iPod?

RR: I would pose the question a little differently. I would ask, Why do most consumer interfaces work so badly? I would challenge most people to program the time on their DVD player. That’s a pretty hidden piece of functionality. But to directly answer your question, I would say you have upwards of 800 million people using PCs today, so the software can’t be that bad.

You have people using PCs for many different purposes with many different levels of education. I think one of the challenges that a company like Microsoft has is that our technology is used so very broadly. We have to be concerned about how our interfaces are used around the world and by people of very different capabilities. When you have to build an operating system that must work with people with a broad range of disabilities and from a broad range of cultures, that really changes how you design in a profound way.

To address your crack about the iPod … You can look at something like an iPod and, sure, it’s great, but what does an iPod actually do? Not very much. In short, when you design interfaces for a broad range of people for a machine that does a lot, you either have to overload the interface with features or underload it. Neither is very satisfactory.

TR: Wouldn’t the best interface, then, adapt itself to its user’s capabilities and tasks?

RR: Oh, absolutely! You can see a little of it in our interfaces today. In various Microsoft applications we have systems that dynamically adapt their menus based on usage patterns: for instance, they depopulate the menus they know are not being used. We’re taking baby steps toward personalization. On the Web, the minute you log on to Amazon, they know something about you. When people ask me, “What are you going to do with the new processing power and memory over the next 10 years?” I think: dynamic personalization.
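(For a concrete sense of what usage-based menu adaptation can look like, here is a minimal sketch in Python. The class, its parameters, and the five-item cutoff are hypothetical illustrations, not a description of how any Microsoft application actually implements its menus.)

```python
from collections import Counter

class AdaptiveMenu:
    """Toy sketch of a usage-adapted menu: frequently chosen items float up,
    items the user never picks are collapsed behind an overflow entry."""

    def __init__(self, items, visible=5):
        self.items = list(items)
        self.visible = visible
        self.usage = Counter()

    def select(self, item):
        # Record each selection so future renderings reflect real usage.
        self.usage[item] += 1

    def render(self):
        # Rank by usage (most used first); keep original order as a tiebreak.
        ranked = sorted(self.items,
                        key=lambda i: (-self.usage[i], self.items.index(i)))
        shown, hidden = ranked[:self.visible], ranked[self.visible:]
        return shown + (["More..."] if hidden else [])

menu = AdaptiveMenu(["Open", "Save", "Print", "Export", "Macros", "Options", "Help"])
for _ in range(3):
    menu.select("Export")
print(menu.render())  # "Export" rises; rarely used items collapse under "More..."
```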

TR: What else could you do with that prospective memory and processing power?

RR: One of our research projects a few years ago asked, If you started to harvest all the information on usage, what could you do? Logically, your computer knows where every piece of text in a document comes from. Did you type it? Did you cut and paste it? Where from? Did it come from an e-mail? And so on. Extrapolate that idea: computers could use the knowledge of where information comes from to very powerful effect.
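(To make the provenance idea concrete, here is a hypothetical sketch of the bookkeeping a word processor could keep: each span of text carries a record of how it arrived. The class names and fields are invented for illustration and are not drawn from any Microsoft project.)

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TextSpan:
    text: str
    origin: str                   # "typed", "pasted", "email", ...
    source: Optional[str] = None  # where pasted content came from, if known
    when: datetime = field(default_factory=datetime.now)

@dataclass
class Document:
    spans: list = field(default_factory=list)

    def type_text(self, text):
        self.spans.append(TextSpan(text, origin="typed"))

    def paste(self, text, source):
        self.spans.append(TextSpan(text, origin="pasted", source=source))

    def provenance_report(self):
        # Summarize where each piece of the document's content came from.
        return [(s.origin, s.source, s.text[:30]) for s in self.spans]

doc = Document()
doc.type_text("Quarterly results were strong. ")
doc.paste("Revenue grew 12% year over year.", source="email from the CFO")
print(doc.provenance_report())
```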

TR: Like what?

RR: For businesses, it could be a source of business intelligence. For individuals, it could be your entire life’s history. We’ve got a project at our Cambridge, England, facility called SenseCam. The researchers developed a device that you could wear around your neck, a kind of black box for a human being. It had a 180-degree camera, sound sensors, heat sensors, acceleration sensors, whatever. And the idea behind it was: we’re getting to a stage where we have human-scale storage, where it’s possible to record every conversation you will have until you die. All of it would take about a terabyte of storage and cost you $500. Or you could keep an entire year of video, everything you saw, and it would also fit in a terabyte. You could keep all those things. You could begin to augment human memory in a way that science fiction talks about but wasn’t really possible before. The interesting thing is that people wouldn’t have to lose any of their life. My dad passed away a few years ago. How valuable would it be if I could recapture a conversation I had with him?
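(The storage figures Rashid cites are back-of-envelope, and the arithmetic is easy to reproduce. The bit rates and hours below are assumptions chosen to show how the numbers land near a terabyte, not parameters from the SenseCam project.)

```python
# Rough storage math for lifelong audio capture (assumed parameters).
kbps_speech = 8       # heavily compressed speech codec, kilobits/sec (assumption)
hours_per_day = 16    # waking hours recorded per day (assumption)
years = 60            # remaining lifespan (assumption)

seconds = years * 365 * hours_per_day * 3600
audio_bytes = seconds * kbps_speech * 1000 / 8
print(f"A lifetime of speech: ~{audio_bytes / 1e12:.1f} TB")   # ≈ 1.3 TB

# A single year of low-bitrate video (~250 kbit/s, assumption) lands in the same range.
video_bytes = 365 * hours_per_day * 3600 * 250 * 1000 / 8
print(f"A year of video: ~{video_bytes / 1e12:.1f} TB")        # ≈ 0.7 TB
```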

TR: Life has been a process of forgetting.

RR: But it doesn’t have to be anymore. I don’t even have a record of my father’s voice! One of the things that came out of this research at Cambridge is very poignant. The researchers gave the SenseCam to a woman with a form of encephalitis that removed her ability to remember anything for more than a day or so. In the past, the way you worked with a patient like that was to have a relative write a diary for her. But we discovered that when this particular woman reviewed the video of her day, she not only remembered more events, she remembered them a month later. It stimulated her mind. When I think of the future of computing, these are the kinds of developments I am excited about.
