Should We Blur the Line Between Human and Computer Driving?
One software startup is taking a different approach to the automation of driving.
Vehicle automation could improve safety dramatically, but only if drivers want to use it.
Some luxury cars can already drive unassisted in certain situations: neatly pulling into parking spots, for instance, or taking control in slow-moving traffic by following the car in front. But even as greater automation races to market, some people are asking whether handing over control entirely to the machine is really the right approach.
A startup based in Cambridge, Massachusetts, called nuTonomy is developing automation that’s designed to feel more natural to drivers, and which can be combined with human control more effectively. Whereas a conventional self-driving computer system will analyze the road using radar, lidar, GPS, and other sensors before planning an ideal route, algorithms developed by nuTonomy mimic the way a human drives, identifying a safe corridor to travel through. A car controlled by the company’s software will drive the optimal path through this corridor.
This approach also makes it possible to merge human and computer control more fluidly. The system can, for example, monitor a driver’s steering and intervene only if it seems that he or she is about to veer outside the corridor defined by the software.
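The corridor-based shared control described above can be sketched in a few lines of code. This is purely an illustration of the concept, not nuTonomy’s implementation: the function names, the fixed corridor width, and the crude kinematic projection are all assumptions made for the example.

```python
# Illustrative sketch of corridor-based shared control: the automation
# passes the driver's steering command through unless a short look-ahead
# predicts the car would leave the safe corridor. All parameters and the
# simple kinematic model are hypothetical, chosen only for illustration.

def predict_lateral_offset(offset, heading, steer, speed, dt=0.5):
    """Roughly project the car's lateral offset a short time ahead
    using a simple kinematic approximation (small-angle assumption)."""
    new_heading = heading + steer * dt
    return offset + speed * new_heading * dt

def blended_steering(driver_steer, safe_steer, offset, heading, speed,
                     corridor_half_width=1.5):
    """Return the driver's command if it keeps the car inside the
    corridor; otherwise substitute the automation's command."""
    predicted = predict_lateral_offset(offset, heading, driver_steer, speed)
    if abs(predicted) <= corridor_half_width:
        return driver_steer   # driver stays inside the corridor: no intervention
    return safe_steer         # intervene: boundary would be crossed
```

In this toy model, a gentle steering input from a well-centered car is left untouched, while a sharper input that would carry the car past the corridor edge is overridden, matching the behavior the article attributes to the system.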
“The fundamental problem here is that humans and automation approach the driving task differently,” said Karl Iagnemma, a research scientist at MIT and founder of nuTonomy, at a recent conference. Without due care, this could mean that automated driving behaves in unexpected or unnatural-seeming ways that make drivers uncomfortable.
Iagnemma says nuTonomy is also developing new ways to test and verify that an automated driving system will work in all potential scenarios, something that will help to advance more complex automation. He adds that the company is working with one luxury manufacturer “with an eye towards production.”
Automation in cars has been around for decades—cruise control dates back to the mid-20th century, adaptive cruise control arrived in the late 1990s, and automatic parallel parking was introduced in the early 2000s. But recent advancements have been spectacular. The newest BMW 7-Series, for instance, lets a driver step out of the car and have it park itself in a garage at the push of a button. Some Mercedes models can drive automatically in slow-moving traffic, and both Audi and Volvo plan to introduce such technology in the next year.
This week regulators and legislators moved toward mandating greater use of automation in vehicles. The National Transportation Safety Board issued a new report calling for collision avoidance systems to be included in new cars, and legislation proposed in the U.S. House and Senate would require such technology to be factored into car safety ratings.
Progress will reach important milestones this year and next with the introduction of technology that lets cars drive automatically on highways. The electric car maker Tesla has said it will issue an update for its Model S sedans, giving cars with the necessary sensors the ability to take control of speed and steering on highways. Cadillac plans to offer similar technology as an optional extra on some models next year.
The introduction of these systems could raise concerns about driver distraction, especially if drivers suddenly need to retake the wheel (see “Proceed with Caution toward the Self-Driving Car”). Cadillac hasn’t yet said how a driver will hand over or retake control from the system, but Dan Flores, a GM spokesman, says: “Rest assured system failure scenarios are being comprehended in the development.”
Greater automation is being propelled by interest not only from automakers but also from technology companies. Besides Google and its driverless car, Uber has signaled its interest in self-driving technology by poaching academic researchers with expertise in the area from Carnegie Mellon University, and even Apple is now rumored to be developing automated driving technology (see “Apple’s Real Car Play”).
Paul Green, a professor at the University of Michigan who studies driver behavior and vehicle interfaces, says partial automation inevitably raises new issues. “A big problem here is that this is a pretty substantial change in the way people drive,” he says. “Various people are pushing to do this quickly, but it’s the kind of thing you need to do carefully. We really can’t predict how people will behave.”