
New Driverless Car Guidelines Don’t Provide Much Guidance

September 13, 2017

The government doesn’t want to stand in the way of autonomous vehicles. That’s the biggest message to emerge from the Trump administration's newly updated guidelines for the nascent robo-car industry.

The guidelines—and they are very much guidelines, not rules or regulations, a point frequently reiterated in the document—build on a set of 15 points published this time last year by the Obama administration. But Transportation Secretary Elaine Chao has decided to thin those out to make life easier for tech companies and automakers.

Companies don’t need to wait for federal approval before they start testing autonomous vehicles on the roads. Submitting documentation to the government about the safety standards and precautions in place in the vehicles is purely optional. And the guidelines urge states not to adopt their own rules for driverless cars, as many currently do, over concerns that doing so may slow innovation.

As the Washington Post notes, Chao has removed mention of ethical considerations from the Obama-era guidelines, claiming that the previous guidance was “speculative in nature.” In some contrast, Germany is already drafting rules about how autonomous cars should be programmed to react in life-and-death situations. (Spoiler: they should kill dogs before humans.)

Tech firms and automakers will be pleased about the new guidelines, as they allow plenty of breathing room within which to experiment. But John Simpson from Consumer Watchdog tells the Financial Times (paywall) that the document is “a road map that allows manufacturers to do whatever they want, wherever and whenever they want, turning our roads into private laboratories for robot cars with no regard for our safety.”

So, the government doesn’t want to stand in front of autonomous cars. And for now, those of you who are as nervous as Simpson about driverless vehicles might not want to, either.
