Tesla Motors’ statement last week disclosing the first fatal crash involving its Autopilot automated driving feature opened not with condolences but with statistics.
Autopilot’s first fatality came after the system had driven people over 130 million miles, the company said, exceeding the U.S. average of one road fatality per 94 million miles driven.
Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.
Tesla and Musk’s message is clear: the data proves Autopilot is much safer than human drivers. But experts say those comparisons are worthless, because the company is comparing apples and oranges.
“It has no meaning,” says Alain Kornhauser, a Princeton professor and director of the university’s transportation program, of Tesla’s comparison of U.S.-wide statistics with data collected from its own cars. Autopilot is designed to be used only for highway driving, and may well make that safer, but standard traffic safety statistics include a much broader range of driving conditions, he says.
Tesla’s comparisons are also undermined by the fact that its expensive, relatively large vehicles are much safer in a crash than most vehicles on the road, says Bryant Walker Smith, an assistant professor at the University of South Carolina. He describes comparisons of the rate of accidents by Autopilot with population-wide statistics as “ludicrous on their face.” Tesla did not respond to a request asking it to explain why Musk and the company compare figures from very different kinds of driving.
Google has in the past drawn similar contrasts between the track record of its self-driving cars and accident statistics for humans, says Smith. He, Kornhauser, and other researchers argue that companies working on autonomous driving technology need to drop such comparisons altogether. In April, a RAND Corporation report concluded that fatalities and injuries are so rare that it would require an automated car to drive as many as hundreds of billions of miles before its performance could be fairly compared with statistics from the much larger population of human drivers.
Instead, researchers say, Tesla and others need to release more data on the limitations and performance of automated driving systems if self-driving cars are to become safe enough, and well enough understood, for mass-market use.
The tragic crash disclosed by Tesla last week, to a chorus of negative headlines, provides a case study, says Smith. The company’s Autopilot branding and dubious comparisons with figures on human safety have encouraged people to think of it as fully competent, he says. If the company had spent the months since the feature’s October release talking about its development process and how it was refining the technology to deal with difficult situations, the reaction to the crash could have been different, he says.
“The crash could have been a continuation of an established narrative about the costs and benefits, not a surprise event,” says Smith. That more cautious approach might have sold fewer cars in the short run, but it would have helped the prospects of Tesla and others banking on self-driving technology in the long term, he says. “Companies need to start saying what safety means, how they define and measure that safety, and how they will monitor it,” he says.
Autopilot and the more sophisticated systems being tested by Google and others constantly collect huge volumes of data on the conditions around them and the actions they take. In California and some other states that permit testing of autonomous cars, companies must report accidents and technology failures that required a human to take over. But that data is generally sparse, and it is submitted in the form of letters, making it hard to scrutinize. Companies like Tesla that have put novel automated driving features on the market aren’t generally obligated to report special data on their performance.
Smith suggests that by releasing more of their data trove, companies could accelerate development of self-driving cars, better prove their worth, and inform efforts to develop ways to hold them to account from a safety perspective.
Elon Musk has said that Tesla would share some of its data on Autopilot with the U.S. Department of Transportation and other manufacturers, although the company has not released details about what will be handed over or when. And with Tesla, Google, established automakers such as GM, and newer startups all working on autonomous driving technologies, competitive pressures could make meaningful coöperation seem unlikely. Kornhauser at Princeton hopes that companies such as Google and Tesla will understand that they and society have more to gain if they do work together.
“They may not want to help the competition, but we’re dealing with other people’s lives,” he says. “We should have more public spiritedness in the effort to do this thing.”