Your Future Self-Driving Car Will Be Way More Hackable
In recent years researchers have demonstrated hair-raising hacks that make it possible to take over the brakes, engine, or other components of a person’s car remotely—forcing the auto industry to take security more seriously.
But one researcher who has pioneered the effort to prod car companies into addressing their security flaws says that the industry’s rush to develop driverless car technology will open up new security problems.
“We are a long way from securing the non-autonomous vehicles, let alone the autonomous ones,” said Stefan Savage, a computer science professor at the University of California, San Diego, at the Enigma security conference in San Francisco on Tuesday. The extra computers, sensors, and improved Internet connectivity required to make a car drive itself increase the possible weak points, he said. “The attack surface for these things is even worse,” said Savage.
Major auto companies are racing with newer upstarts such as Google and Tesla to roll out autonomous driving features and develop fully self-driving vehicles. Toyota has estimated that fully autonomous cars will be available within five years, and this month the U.S. government announced it wanted to smooth the way for testing and use of such vehicles on the nation’s roads.
That concerns Savage because of what he and his colleagues learned while demonstrating that conventional vehicles can be taken over in various ways, for example by dialing into a car’s built-in cellular connection, or by giving a driver a music CD programmed with a “song of death” that makes the car connect to an attacker’s computer.
The way modern cars are designed, once an attacker can get inside the internal network linking the roughly 30 different computers inside, he or she can take over just about any component, from the brakes to the radio, said Savage.
It’s not possible to isolate the “important” parts such as the brakes because everything must be connected to enable many functions people expect of cars, as well as to allow repairs and software upgrades, he said. “The notion that you can separate the mission-critical from the non-mission-critical turns out to be wrong,” he said.
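Savage’s point becomes concrete when you look at the bus protocol most cars use internally, CAN: a frame carries an arbitration ID and a short payload, but no field saying who sent it and no authentication tag, so any node on the bus can address any other. The sketch below packs such a frame in Python; the byte layout follows the SocketCAN convention, and the ID 0x220 is a made-up example, not a real brake-controller ID from the article.

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a simplified classic CAN data frame (SocketCAN-style layout):
    4-byte arbitration ID, 1-byte data length, 3 bytes of padding, then an
    8-byte payload. Note what is absent: nothing identifies the sender and
    nothing authenticates the frame, so a compromised radio can emit frames
    using the same IDs the brake controller listens for."""
    if not 0 <= arbitration_id <= 0x7FF:
        raise ValueError("standard CAN IDs are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(data), data)

# Any node on the bus could send this; receivers filter only on the ID.
frame = build_can_frame(0x220, b"\x01\x00")
print(frame.hex())
```

This is why “isolating the important parts” is so hard in practice: isolation would mean adding sender identity and message authentication to a bus that was designed without either.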
For a vehicle to be able to understand its environment and drive itself even part of the time, more computers, sensors, and other components must be added to the tangle already inside our cars. That will expand the possible entry points for attackers and the things they can do—for example, self-driving cars rely on laser scanners and other sensors, which could be made to send false data. It will also magnify a problem that already exists—carmakers don’t know exactly what software is inside the vehicles they sell, said Savage.
That’s because cars are built using components sourced at the lowest possible cost from third-party suppliers. Those suppliers carefully guard the details of the software inside things like the brake-control system or central locking components.
“If you walk into a car company and say, ‘Have you looked at the source code for your vehicle?’ They will say no, because they do not own it. There is nobody in the world that owns all the code in a vehicle,” said Savage. “That’s a big problem.”
Carmakers’ attitudes toward security have improved significantly in recent years. For example, after Savage’s team showed how to remotely take over a Chevy Impala in 2010, GM fixed the flaws, built a new security team, and hired a new executive dedicated to car security. Auto companies are also working to make cars that can receive software updates remotely.
Some of the younger companies now working on autonomous cars, such as Tesla and Google, may also be able to sidestep some security problems baked into the way more established companies make cars. “Tesla has an advantage in that they started from scratch,” said Savage. “They got to design their architecture in the 21st century as opposed to the 20th century.”
However, even Google and Tesla will have to rely heavily on third-party suppliers to assemble their vehicles, meaning they may not be able to see all the code in their self-driving cars.
And Tadayoshi Kohno, a professor at the University of Washington, said that even when companies take security very seriously, the longevity of products such as cars leads to what he calls the “zombie problem.” A car can be on the roads for decades, but the company that made it and the suppliers of its components aren’t likely to keep providing software updates for its full lifetime. The same problem affects home appliances now being connected to the Internet. “What are we going to do with these zombies for the remaining 20 years that they are in service?” said Kohno. “I think it will be a very big problem.”