U.S. Wants Makers of Driverless Cars to Prove They Are Safe

The auto industry is beginning to get some clarity on the rules of the road for autonomous cars.
September 20, 2016

The U.S. government has issued its first rules for automated vehicles. They include a 15-point set of “safety assessment” guidelines for self-driving systems. These cover issues such as cybersecurity, black-box recordings to aid crash investigations, and potential ethical conundrums on the road.

The new policy will play a central role in shaping how autonomous vehicles proceed toward commercial use. Many automotive and technology companies are testing self-driving vehicles, and ride-hailing company Uber even lets customers in Pittsburgh order rides in prototypes (see “My Self-Driving Uber Needed Human Help”). But the cars in testing still need close human supervision.

Until now, the rules regarding self-driving vehicles have been hazy. Some states have created exemptions to encourage testing, while others have signaled that they might restrict autonomous vehicles. The new federal policy says that states will remain responsible for enforcing laws and regulations regarding human control of a vehicle.

The safety assessment announced by the U.S. Department of Transportation today asks makers of self-driving vehicles to explain how their vehicles are designed and how they operate. They will need to say how a system has been tested, what fail-safes are in place, and how customers’ privacy is protected.

The Department of Transportation is trying to balance safety concerns about automated vehicles with a desire to encourage what promises to be a massive global industry.

Automated cars still struggle to cope with certain everyday situations, including poor weather conditions and complex traffic scenarios. In the case of partially automated systems, where a human and car share driving duties, it remains unclear how to ensure that drivers remain sufficiently engaged to safely take over when needed.

A driver died in a Tesla vehicle earlier this year when its automated highway driving system Autopilot did not detect a semi-trailer across the road ahead. The National Highway Traffic Safety Administration and National Transportation Safety Board are both investigating the crash. Tesla recently announced plans to improve the reliability of the long-range radar system on its cars.

U.S. Secretary of Transportation Anthony Foxx said today that the new policies would also apply to cars already on the road. On a conference call announcing the rules, he said that NHTSA would act to bring manufacturers in line if necessary. “We will not hesitate to use our recall authority when we have identified a defect that presents an unreasonable risk to safety,” Foxx said.

Foxx’s department is also considering creating new regulatory mechanisms that might help make self-driving vehicles safer. These could include forcing manufacturers to act to fix safety risks immediately, presumably via some form of software update. The department could also require that manufacturers collect certain kinds of data, or share information with each other to improve performance and safety.

“This is an emerging technology, and we expect that there will be an evolution in how we approach what we do,” Foxx said today on a conference call announcing the policy.

John Maddox, assistant director of the University of Michigan’s Mobility Transformation Center, says issuing guidance rather than specific rules is a good approach, because the design and capabilities of automated vehicles are still changing rapidly. “There’s still a lot of innovation to come, especially in how you validate the technology,” he says.
