MIT Technology Review

Driving on Interstate 495 toward Boston in a Ford Fusion one chilly afternoon in March, I did something that would’ve made even my laid-back long-ago driving instructor spit his coffee over the dashboard: I took my hands off the steering wheel, lifted my foot off the gas pedal, and waited to see what would happen. The answer: not much. To a degree, the car was already driving itself. Sensors were busy tracking other vehicles and road markings; computer systems were operating the accelerator, the brake, and even the steering wheel. The car reduced its speed to keep a safe distance from the vehicle ahead, but as that car sped up again, mine did so too. I tried nudging the steering wheel so that we drifted toward the dotted line on my left. As the line approached, the car gently turned the steering wheel in the opposite direction to keep within its lane.

The technology behind this kind of vehicle automation is being developed at a blistering pace, and it should make driving safer, more fuel-efficient, and less tiring. But despite such progress and the attention surrounding Google’s “self-driving” cars, full autonomy remains a distant destination. A truly autonomous car, one capable of dealing with any real-world situation, would require much smarter artificial intelligence than Google or anyone else has developed. The problem is that until the moment our cars can completely take over, we will need automotive technologies to strike a tricky balance: they will have to extend our abilities without doing too much for the driver.

Cars with autonomy still require a human’s attention, but they can also discourage it.

Carmakers have so far introduced autonomous technology carefully, aware that having too little to worry about behind the wheel can be just as dangerous as having too many distractions. I could detect the automakers’ restraint when I drove on I-495 in the Ford Fusion, a $30,000 sedan that has two partly autonomous systems: Adaptive Cruise Control, which uses radar to measure the distance to the car in front and controls the accelerator and brake to maintain a safe distance; and the Lane-Keeping System, which uses a camera in the rearview mirror to monitor lane markings and vibrates the steering wheel, or gently moves it, if the car drifts too far to the left or right. The capabilities of both are clearly held in check. The cruise control system doesn’t work below 12 miles per hour and shuts off if the car ahead starts going faster than the initial set speed; the lane-tracking feature can easily be overridden by moving the steering wheel forcefully. It also switched off a couple of times when the stripes on the road were too worn to be seen clearly. But even with such limitations, these two systems are remarkably clever and reassuring to use. Driving home in another car later, I felt hamstrung without a dashboard display clearly showing my position within the lane.
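The constraints described above suggest the shape of the underlying control logic. The following is a minimal, hypothetical sketch — not Ford's implementation — of how an adaptive cruise control loop might pick a target speed: it disengages below an engagement threshold, closes the gap to the lead car proportionally, and never exceeds the driver's set speed. All names, gains, and thresholds here are invented for illustration.

```python
MIN_ENGAGE_MPH = 12.0  # the system described disengages below roughly 12 mph

def acc_target_speed(own_speed, set_speed, gap_m, desired_gap_m, gain=0.5):
    """Return a target speed in mph, or None if the system disengages.

    own_speed:     current vehicle speed (mph)
    set_speed:     driver-selected cruise speed (mph)
    gap_m:         radar-measured distance to the car ahead (meters)
    desired_gap_m: safe following distance to maintain (meters)
    """
    if own_speed < MIN_ENGAGE_MPH:
        return None  # below the engagement threshold: hand control back

    # Close the gap error proportionally: a lead car that is too close
    # pulls the target speed down; a widening gap lets the car return
    # toward the driver's chosen speed, but never above it.
    error = gap_m - desired_gap_m
    target = own_speed + gain * error
    return min(target, set_speed)
```

A real controller would smooth these commands and blend braking with throttle, but the sketch captures the restraint the article describes: the automation bounds itself by the driver's set speed and simply steps aside at low speeds.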

When implemented correctly, automation quickly feels like a natural part of driving. In fact, it’s easy to forget that it has been creeping into cars ever since the hand crank was replaced by an automatic starter in 1911. But this progression is accelerating with systems that perform much higher-level driving tasks. Numerous carmakers sell models that apply the brakes at superhuman speed if they detect an impending collision; some can read road signs as they whiz past and then remind the driver of the current speed limit.

Many cars can also perform one of the most troublesome driving tasks, parallel parking. I tried this feature, called Active Park Assist, in a Lincoln MKS. The system identifies a suitable spot and then executes a near-perfect reversing maneuver while the driver operates the brake. It was unnerving, at first, to see the steering wheel spin violently as the car backed into an empty spot, but I also marveled at how flawlessly it worked.

This experience also hinted at the biggest challenge for increased vehicle automation: how to merge human and machine abilities effectively. Bryan Reimer, a research scientist at MIT’s Age Lab, who uses the Lincoln to study driver behavior, was sitting in the passenger seat during my test drive as I searched for a parking spot. He warned me not to accept the first few that the car offered to squeeze into, not because he doubted the technology but because he doubted my ability to undo what it did. “You’ll just never get out of there,” he said, pointing out that the Lincoln can park itself with just a few inches to spare on either end.

How to make sure autonomy meshes with human behavior is a topic that Don Norman, a cognitive scientist and product design consultant, explores in depth in his 2007 book The Design of Future Things. Norman foresees many potential problems with more autonomous cars; in fact, he points out, some have already cropped up. He describes how he worked with automakers whose adaptive cruise control systems would automatically speed a car up as a driver entered an off-ramp, because the ramp was free of traffic; or they would suddenly slow a car down if the driver pulled in close behind another car while changing lanes, thereby forcing the car behind to brake suddenly as well. “Fully automatic control will be safer,” he writes. “The difficulty lies in the transition toward full automation, when only some things will be automated.”

It’s tempting to think the problems Norman identifies will be short-lived. After all, Google has been testing a fleet of almost completely autonomous, or “self-driving,” hybrid cars for some time. These vehicles use an expensive laser mounted on the roof to map the car’s surroundings in 3-D and rapidly process this picture, reacting deftly to other cars and pedestrians. The company says its cars have traveled more than 300,000 miles without a single accident while under computer control. Last year it produced a video in which a blind man takes a trip behind the wheel of one of these cars, stopping at a Taco Bell and a dry cleaner.

Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

But if autonomy requires constant supervision, it can also discourage it. Back in his office, Reimer showed me a chart that illustrates the relationship between a driver’s performance and the number of things he or she is doing. Unsurprisingly, at one end of the chart, performance drops dramatically as distraction increases. At the other end, however, where there is too little to keep the driver engaged, performance drops as well. Someone who is daydreaming while the car drives itself will be unprepared to take control when necessary.

Google’s demonstration is deceptive. Nothing can yet match a human driver at handling the unexpected.

Reimer also worries that relying too much on autonomy could cause drivers’ skills to atrophy. A parallel can be found in airplanes, where increasing reliance on autopilot technology over the past few decades has been blamed for reducing pilots’ manual flying abilities. A 2011 draft report commissioned by the Federal Aviation Administration suggested that overreliance on automation may have contributed to several recent crashes involving pilot error. Reimer thinks the same could happen to drivers. “Highly automated driving will reduce the actual physical miles driven, and a driver who loses half the miles driven is not going to be the same driver afterward,” he says. “By and large we’re forgetting about an important problem: how do you connect the human brain to this technology?”

Norman argues that autonomy also needs to be more attuned to how the driver is feeling. “As machines start to take over more and more, they need to be socialized; they need to improve the way they communicate and interact,” he writes. Reimer and colleagues at MIT have shown how this might be achieved, with a system that estimates a driver’s mental workload and attentiveness by using sensors on the dashboard to measure heart rate, skin conductance, and eye movement. This setup would inform a kind of adaptive automation: the car would make more or less use of its autonomous features depending on the driver’s level of distraction or engagement.
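The adaptive-automation idea above can be sketched in a few lines. This is a hypothetical illustration, not the MIT system's actual model: it blends normalized physiological readings into a single workload score, then maps that score to a coarse automation setting, reflecting the inverted-U relationship Reimer describes — an underloaded driver should be re-engaged, an overloaded one assisted more. The weights and thresholds are invented.

```python
def workload_score(heart_rate, skin_conductance, gaze_dispersion,
                   weights=(0.4, 0.3, 0.3)):
    """Blend sensor readings (each normalized to 0..1) into one 0..1 score."""
    w_hr, w_sc, w_gz = weights
    return w_hr * heart_rate + w_sc * skin_conductance + w_gz * gaze_dispersion

def automation_mode(score):
    """Map estimated workload to a coarse automation setting.

    Low workload  -> driver may be disengaged: nudge them back into the loop.
    High workload -> driver is overloaded: let the car take on more tasks.
    """
    if score < 0.3:
        return "prompt driver"
    if score > 0.7:
        return "assist heavily"
    return "normal assist"
```

In practice the mapping would have to change smoothly and avoid oscillating between modes, but the sketch shows the core loop: sense the driver, estimate engagement, and scale the automation accordingly.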

Already, some systems watch for behavioral cues that the driver’s focus is wandering. Indeed, after I had been cruising along I-495 for a few moments under the car’s control, this message flashed on the dashboard: “Driver Alert Warning: Put Hands Back on Steering Wheel.” For the rest of my drive, I made sure I did.


Credit: Bob Staake

Tagged: Computing, Business, Communications, Energy, Mobile, Ford, autonomous vehicle, driving technology
