
Is Your Car Safe From Hackers?

Interconnected computer systems provide openings for attackers.

This week researchers will present a study showing what could happen if a determined hacker went after the computer systems embedded in cars. The researchers found that, among other things, an attacker could disable the vehicle’s brakes, stop its engine, or take control of its door locks. All the attacker needs is access to the federally mandated onboard diagnostics port, located under the dashboard in almost all cars today.

Top gear: Researchers used software called “Carshark” to access a car’s computer systems. This allowed them to control the dashboard display and the car’s engine—and they fear hackers could do the same in the future.

The researchers point to a recent report showing that a typical luxury sedan now contains about 100 megabytes of code that controls 50 to 70 computers inside the car, most of which communicate over a shared internal network.

“In a lot of car architectures, all the computers are interconnected, so that having taken over one component, there’s a substantive risk that you could take over all the rest of them. Once you’re in, you’re in,” says Stefan Savage, an associate professor in the department of computer science and engineering at the University of California, San Diego, who is one of the lead investigators on the project.

The researchers say that their work shouldn’t yet be a cause for alarm, mainly because the exploits require access to the inside of a vehicle. But some of these systems can be accessed remotely, and the trend is to add even more wireless connectivity, for example, wireless automatic crash-response systems. The researchers say that other systems, such as satellite radios and remote-controlled door openers, could also become entry points.

Car systems have surprising interconnections, Savage says. For safety reasons, cars are programmed to unlock the doors after the airbags deploy to help potentially injured passengers exit the vehicle. This turns out to create a connection between the door-locking system and the crash-detection system that an attacker could theoretically exploit.
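The coupling described above can be sketched in miniature: on a shared broadcast bus, the door-lock module cannot tell whether an “airbags deployed” message came from the real crash sensor or from an attacker. This is an illustrative toy model, not a real CAN stack, and the message ID and payload are invented for the example; real vehicles use manufacturer-specific ones.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    arb_id: int    # message identifier; frames carry no sender identity
    data: bytes

class SharedBus:
    """Broadcast bus: every attached module sees every frame."""
    def __init__(self):
        self.listeners = []

    def attach(self, ecu):
        self.listeners.append(ecu)

    def broadcast(self, frame):
        for ecu in self.listeners:
            ecu.on_frame(frame)

CRASH_ID = 0x050   # hypothetical "airbags deployed" message ID

class DoorLocks:
    def __init__(self):
        self.locked = True

    def on_frame(self, frame):
        # Safety feature: unlock the doors after a crash is reported.
        if frame.arb_id == CRASH_ID:
            self.locked = False

bus = SharedBus()
doors = DoorLocks()
bus.attach(doors)

# An attacker with access to any node on the bus can forge the crash
# message -- the receiving module has no way to verify its origin.
bus.broadcast(Frame(CRASH_ID, b"\x01"))
print(doors.locked)  # False: no crash occurred, but the doors unlocked
```

The point of the sketch is that the vulnerability is architectural: the safety feature is working exactly as designed, and it is the shared, unauthenticated bus that turns it into an attack path.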

The researchers investigated the computing systems inside a car without any special knowledge from the manufacturer. They began by removing the hardware and running standard security tests such as fuzzing, which feeds software random input to see whether it can be induced into glitches or strange behavior. They used the information they gained to craft attacks that could take over and control systems on the car’s internal network. They tested their attacks on an immobile car before performing road tests to ensure that the attacks were practical in the real world.
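A fuzzing harness in this spirit can be very small: generate random message payloads, feed them to the component under test, and record anything other than a clean rejection. The “ECU” here is a deliberately buggy stand-in function, not real automotive firmware, and all names are hypothetical.

```python
import random

def fragile_ecu_parser(payload: bytes):
    """Hypothetical message handler with a latent divide-by-zero bug."""
    if len(payload) < 2:
        raise ValueError("short frame")   # clean, expected rejection
    command, arg = payload[0], payload[1]
    if command >= 0x80:
        return 100 // (arg % 16)          # bug: crashes when arg % 16 == 0
    return None

def fuzz(target, trials=10_000, seed=1):
    """Throw random payloads at `target`; collect unexpected exceptions."""
    rng = random.Random(seed)             # fixed seed for a reproducible run
    failures = []
    for _ in range(trials):
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(9)))
        try:
            target(payload)
        except ValueError:
            pass                          # rejection is correct behavior
        except Exception as exc:          # anything else is a glitch worth noting
            failures.append((payload, repr(exc)))
    return failures

crashes = fuzz(fragile_ecu_parser)
print(f"{len(crashes)} crashing inputs found")
```

Each recorded failure is a concrete input an attacker could study further, which is how random glitch-hunting turns into the crafted attacks the researchers describe.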

“Until we actually did the live road tests, I don’t think we were really able to say that someone could do this to a car on the road,” says Tadayoshi Kohno, an assistant professor of computer science and engineering at the University of Washington, and also a lead investigator on the project. Though some of the attacks had to be tweaked at that point, they still functioned.

It’s going to be challenging to design more secure systems for cars, Savage says, because many of the techniques commonly used to protect devices won’t transfer well. For example, it’s common for security systems to shut down computing processes when they detect abnormal behavior. In the case of an electronic braking system, however, shutting it down could be just as dangerous as allowing a corrupted program to keep running.
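The contrast can be made concrete: instead of killing a brake controller that receives implausible input, a safety-critical design might drop it into a degraded but still functional mode. This is a minimal sketch of that idea; the class, the plausibility check, and the fixed 50 percent limit are all invented for illustration, not drawn from any real braking system.

```python
class BrakeController:
    """Degrade on anomaly rather than shut down (illustrative only)."""
    NORMAL, DEGRADED = "normal", "degraded"

    def __init__(self):
        self.mode = self.NORMAL

    def plausible(self, command: float) -> bool:
        # Crude plausibility check: brake demand must be a fraction in [0, 1].
        return 0.0 <= command <= 1.0

    def apply(self, command: float) -> float:
        if not self.plausible(command):
            # Shutting down would leave the car with no brakes at all,
            # so fall back to a conservative clamped mode instead.
            self.mode = self.DEGRADED
        if self.mode == self.DEGRADED:
            return min(max(command, 0.0), 1.0) * 0.5  # limited but functional
        return command

ctrl = BrakeController()
print(ctrl.apply(0.3))   # 0.3: normal operation passes through
print(ctrl.apply(7.5))   # 0.5: implausible input is clamped, mode degrades
print(ctrl.mode)         # degraded
```

The design choice being illustrated is that for safety-critical components, the anomaly response must itself be safe: losing braking entirely is a worse outcome than braking at reduced authority.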

Savage and Kohno say they plan to work on designing new techniques for securing automotive computer systems through a newly formed Center for Automotive Embedded Systems Security. They hope to work with manufacturers and others with a stake in designing computer systems for cars to make sure their solutions are practical and easy to implement.

One striking thing about the researchers’ work is that they found many security systems that were not fully implemented, such as authentication controls that were present but not in use, says HD Moore, chief security officer at the Boston-based security company Rapid7 and chief architect of Metasploit, an open-source framework for testing systems for security holes. Moore has also tested some automotive software and found similar problems. “This gives an idea of how immature the industry is,” he says, noting that problems will likely worsen as more software extends the reach of the car’s internal network.

Kevin Fu, an assistant professor of computer science at the University of Massachusetts Amherst, agrees. “It’s probably time for a comprehensive checkup by both industry and regulators on how to provide security assurance for automotive systems with increasingly complex software controls and communication paths,” he says.
