In a fatal crash, Uber’s autonomous car detected a pedestrian, but chose not to stop
The company has found the likely cause of its self-driving-car crash in March that killed someone trying to cross the road.
The news: According to a report by The Information, the vehicle’s software did in fact detect the pedestrian, but it chose not to react immediately. The car was programmed to ignore potential false positives: objects in the road that wouldn’t interfere with the vehicle, like a plastic bag. But that tuning was reportedly taken too far, and the system dismissed the pedestrian as something it could safely ignore.
Why? The car may have been part of a test meant to improve rider comfort. Autonomous cars aren’t known for their smooth rides, and by ignoring things that are probably not a threat, a vehicle can cut down on the start-and-stop jerks riders experience.
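To make the trade-off concrete, here is a minimal, hypothetical sketch of the kind of confidence-threshold filtering described above. Every name and number in it (Detection, should_brake, IGNORABLE, the thresholds) is an illustrative assumption, not anything from Uber’s actual software; it only shows how raising the bar for what counts as a real obstacle smooths the ride while raising the risk of dismissing a true hazard.

```python
# Hypothetical sketch of false-positive filtering in a perception
# pipeline. Not Uber's code; it illustrates the general trade-off
# the report describes.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "plastic_bag"
    confidence: float  # classifier confidence in [0, 1]

# Classes treated as harmless debris that should never trigger braking.
IGNORABLE = {"plastic_bag", "leaves", "paper"}

def should_brake(detections: list[Detection], threshold: float) -> bool:
    """Return True if any detection warrants braking.

    A higher threshold means a smoother ride (fewer false-positive
    stops) but a greater chance of ignoring a genuine obstacle.
    """
    return any(
        d.label not in IGNORABLE and d.confidence >= threshold
        for d in detections
    )

# With an aggressive threshold, a low-confidence pedestrian detection
# is filtered out -- the failure mode the report points to.
frame = [Detection("pedestrian", 0.55), Detection("plastic_bag", 0.90)]
print(should_brake(frame, threshold=0.5))  # True: car brakes
print(should_brake(frame, threshold=0.7))  # False: pedestrian ignored
```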
What’s next? Uber is conducting a joint investigation with the National Transportation Safety Board, after which more details are expected to be released. In the meantime, the report could inspire other self-driving-vehicle companies to treat potential false positives with more caution.