MIT Technology Review

Hackers trick a Tesla into veering into the wrong lane

Hackers have demonstrated some worrisome ways to manipulate and confuse the various systems on a Tesla Model S. Their most dramatic feat: sending the car careening into the oncoming traffic lane by placing a series of small stickers on the road.

Attack vector: This is an example of an “adversarial attack,” a way of manipulating a machine-learning model by feeding it a specially crafted input. Adversarial attacks could become more common as machine learning is used more widely, especially in areas like network security.
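
The exact perturbations used against Autopilot aren’t reproduced here, but the general recipe behind such attacks is well documented. The sketch below shows the fast gradient sign method (FGSM), one of the simplest adversarial techniques: compute the gradient of the model’s loss with respect to the input image, then shift every pixel slightly in the direction that increases the loss. The model, image, and label are placeholders, not anything from the Tesla attack.

```python
# A minimal sketch of the fast gradient sign method (FGSM), a classic
# adversarial attack; it is not the specific technique Keen Lab used.
# Assumes PyTorch and torchvision are installed. The network is untrained
# and the image is random noise, purely as placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder network
model.eval()

image = torch.rand(1, 3, 224, 224)     # placeholder camera frame
label = torch.tensor([0])              # placeholder "true" class

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a tiny step in whichever direction raises the loss.
# The change is nearly invisible to a person, but it can flip the
# model's prediction.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```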

Blurred lines: Tesla’s Autopilot is vulnerable because it recognizes lanes using computer vision. In other words, the system relies on camera data, analyzed by a neural network, to tell the vehicle how to stay centered in its lane.
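
Tesla hasn’t published how Autopilot’s lane keeping works internally, but the general shape of such a loop is easy to sketch: a perception model estimates the car’s offset from the lane center, and a controller steers against that offset. In the toy sketch below, lane_net and the proportional gain are hypothetical stand-ins, not Tesla’s code. The point is that the controller acts on whatever the perception model reports, so fooling the model with stickers translates directly into a steering error.

```python
# Illustrative only: a toy lane-keeping loop, not Tesla's proprietary
# pipeline. lane_net is a hypothetical stand-in for a neural network
# that turns a camera frame into a lateral offset from the lane center
# (in meters; positive means the car has drifted right).

def lane_net(frame):
    # Placeholder for a real vision model. Pretend the car has
    # drifted 0.3 m to the right of the lane center.
    return 0.3

def steering_command(frame, gain=0.5):
    offset = lane_net(frame)  # what the perception model reports
    return -gain * offset     # steer proportionally back toward center

# The controller trusts whatever the model perceives. If stickers on the
# road shift the perceived lane center, the "correction" steers the car
# out of its real lane.
print(steering_command(frame=None))  # -0.15, i.e., steer slightly left
```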

Traffic jamming: This isn’t the first adversarial attack on an autonomous driving system. Dawn Song, a professor at UC Berkeley, used innocuous-looking stickers to trick a self-driving car into mistaking a stop sign for a 45-mile-per-hour speed-limit sign. Another study, published in March, demonstrated how medical machine-learning systems can be tricked in a similar way into giving the wrong diagnoses.

Bug fixes: The researchers behind the lane-recognition hack, from the Keen Security Lab of Chinese tech giant Tencent, used a similar attack to disrupt the vehicle’s automatic windshield wipers, and they hijacked the car’s steering via a separate method. A Tesla spokesperson told Forbes that the steering vulnerability has been fixed in the most recent software update, and said the adversarial lane attack was unrealistic “given that a driver can easily override Autopilot at any time.”
