
Protecting Security and Privacy

There are risks to today’s ubiquitous computational devices.
August 15, 2007

We are at the cusp of a technological revolution that will make computational devices ubiquitous in our environment–from digital sensors for home-based assisted living to next-generation wireless implantable medical devices for heart pacing and defibrillation. But the wonderful new opportunities these devices present come with potentially serious threats to our data, privacy, property, and even personal safety. For example, while the MySpace generation might flock to future phone-based social-networking systems–systems that could instantly reveal whether the person next to you at the bar is a “friend of a friend” who shares your passion for classic movies and country line dancing–those same systems might be exploitable by sexual predators and other miscreants.

Helping society realize the benefits of these new technologies without simultaneously exposing users to serious risks is the charter of the computer security research community.

Computer security researchers study existing and proposed electronic systems to determine and learn from their weaknesses. In my own work with colleagues at Johns Hopkins and Rice University, we discovered that it’s possible to compromise the security of electronic voting machines and change election results. In another example, scientists at Microsoft Research have evaluated the extent to which malicious software on cell phones could disrupt regional cellular communications.

Once we’ve identified significant security deficiencies, we develop improved security mechanisms. Classically, such research has centered on systems that can be used securely. But there is a wide gap between systems that can be used securely and systems that will be used securely. For example, recent results from Harvard University and the University of California, Berkeley, suggest that many users ignore anti-phishing defenses in Web browsers. To fully understand and improve the usability of security mechanisms, we must study users in realistic settings. At the University of Washington, we developed a building-wide network of sensors–the RFID Ecosystem–that we are using to explore more intuitive and natural methods for controlling digital privacy in future computing environments.

Another emerging theme in security research is the attempt to hold computer users accountable–to find digital analogues for surveillance cameras and forensic identifiers like fingerprints and DNA. Together with researchers at the University of California, San Diego, my colleagues at the University of Washington and I are developing one such accountability mechanism. Our design preserves a user’s privacy in the common case: while they’re always present, our forensic trails can be “opened” only under very special circumstances–for example, when a court order has been issued.

The next time you’re enjoying the benefits of your latest digital gadget, whether it’s a wireless gaming helmet with built-in brain-activity sensors or a new RFID credit card, you might think about the mischief that could be accomplished by someone who circumvents the device’s security. The helmet could let you directly control your computer game with your mind, but could it also reveal your private thoughts to malicious software on the gaming system, or to anyone within wireless range? These are the kinds of issues that drive the security research community toward creating a more secure and private digital world.

Tadayoshi Kohno, an assistant professor in the Department of Computer Science and Engineering at the University of Washington, is a member of the 2007 TR35.
