MIT Technology Review

In a fatal crash, Uber’s autonomous car detected a pedestrian, but chose not to stop

The company has found the likely cause of its self-driving-car crash in March that killed someone trying to cross the road.

The news: According to a report by the Information, the vehicle’s software did in fact detect the pedestrian, but it chose not to react immediately. The car was programmed to ignore potential false positives: things in the road, like a plastic bag, that wouldn’t interfere with the vehicle. That tuning was taken too far, and the software dismissed the pedestrian as something it could safely ignore.
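To see how such a filter can misfire, consider a minimal sketch, assuming a typical design in which the perception stack attaches a confidence score to each detected object and the planner reacts only to detections above a tunable cutoff. The names and threshold value here are invented for illustration; this is not Uber’s actual software.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess at what the object is
    confidence: float  # perception stack's certainty, from 0.0 to 1.0

# Hypothetical tunable: detections scoring below this are treated as
# false positives (a plastic bag, a shadow) and ignored.
FALSE_POSITIVE_THRESHOLD = 0.8

def should_brake(d: Detection, threshold: float = FALSE_POSITIVE_THRESHOLD) -> bool:
    """Brake only for detections the planner trusts enough to act on."""
    return d.confidence >= threshold

# Two detections: an obvious non-hazard, and a pedestrian the
# classifier is only moderately sure about.
for d in [Detection("plastic bag", 0.30), Detection("pedestrian", 0.65)]:
    print(f"{d.label}: brake={should_brake(d)}")

With the cutoff set this aggressively, the bag is correctly ignored, but so is the pedestrian: filtering out more noise also filters out uncertain true positives. Raising the threshold buys a smoother ride at exactly that cost.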


Why? The car may have been part of a test aimed at increasing rider comfort. Autonomous cars aren’t known for their smooth rides, and by ignoring things that are probably not a threat, a vehicle can cut down on the jarring stops and starts its riders experience.


What’s next? Uber is conducting a joint investigation with the National Transportation Safety Board, after which more details are expected to be released. In the meantime, the report could inspire other self-driving-vehicle companies to treat potential false positives with more caution.
