Data gathered from Google’s self-driving Prius and Lexus cars shows that they are safer and smoother when steering themselves than when a human takes the wheel, according to the leader of Google’s autonomous-car project.
Chris Urmson made those claims today at a robotics conference in Santa Clara, California. He presented results from two studies of data from the hundreds of thousands of miles Google’s vehicles have logged on public roads in California and Nevada.
One of those analyses showed that when a human was behind the wheel, Google’s cars accelerated and braked significantly more sharply than they did when piloting themselves. Another showed that the cars’ software was much better at maintaining a safe distance from the vehicle ahead than the human drivers were.
“We’re spending less time in near-collision states,” said Urmson. “Our car is driving more smoothly and more safely than our trained professional drivers.”
In addition to painting a rosy picture of his vehicles’ autonomous capabilities, Urmson showed a new dashboard display that his group has developed to help people understand what an autonomous car is doing and when they might want to take over. “Inside the car we’ve gone out of our way to make the human factors work,” he said.
Although that might suggest the company is thinking about how to translate its research project into something used by real motorists, Urmson dodged a question about how that might happen. “We’re thinking about different ways of bringing it to market,” he said. “I can’t tell you any more right now.”
Urmson did say that he is in regular contact with automakers. Many of those companies are working on self-driving cars of their own (see “Driverless Cars Are Further Away Than You Think”).
Google has been testing its cars on public roads since 2010 (see “Look, No Hands”), always with a human in the driver’s seat who can take over if necessary.
Urmson dismissed claims that legal and regulatory problems pose a major barrier to cars that are completely autonomous. He pointed out that California, Nevada, and Florida have already adjusted their laws to allow tests of self-driving cars. And existing product liability laws make it clear that a car’s manufacturer would be at fault if the car caused a crash, he said. He also said that when the inevitable accidents do occur, the data autonomous cars collect in order to navigate will provide a powerful and accurate picture of exactly who was responsible.
Urmson showed data from a Google car that was rear-ended in traffic by another driver. Examining the car’s annotated map of its surroundings clearly showed that the Google vehicle smoothly halted before being struck by the other vehicle.
“We don’t have to rely on eyewitnesses that can’t be trusted as to what happened—we actually have the data,” he said. “The guy around us wasn’t paying enough attention. The data will set you free.”