Cars May Soon Understand More of What You Say
It should soon be possible to give your car more complicated and natural verbal commands.
In the United States, nearly one in five road accidents involves some form of driver distraction.
Many cars now come with voice control, but you can’t really talk normally to such systems, and you often have to repeat a phrase to get the job done. That could change, however, with the introduction of voice interfaces that allow for a more natural back-and-forth between driver and dashboard.
“What we’re going to see in the very near future is the ability to have a dialogue,” says Charlie Ortiz, senior principal manager of the artificial intelligence and reasoning group at Nuance, a voice recognition technology company based in Burlington, Massachusetts. “You might say ‘I want to listen to some Latin jazz,’ or suggest a particular musician.”
Ortiz says that such technology is now in the vehicle production pipeline, which means it may appear within a few years. It will primarily allow for more natural control of dashboard features and retrieval of information such as directions. “In the navigation domain, we’re developing methods to describe points of interest more abstractly,” he says. “I don’t always know the exact address of where I want to go. I want to be able to say ‘I want to go to a restaurant in the marina near the ballpark.’”
Nuance came to dominate the market for voice-recognition technology over the past decade after acquiring various other companies in that space (see “Where Speech Recognition Is Going”). Thanks to new techniques and large quantities of training data, speech recognition has improved greatly over that time, and Nuance supplies the technology to companies across numerous industries. It already provides voice control technology to carmakers including Ford, Hyundai, and Chrysler.
And Ortiz believes that more fluent speech technology could be just around the corner, thanks to advances in parsing semantics. “The stars are aligning at just the right time,” he says. “There have been a lot of advances in various components—language understanding and the back-end reasoning components. One big challenge is to put these pieces together.”
Another key challenge, as far as the auto industry is concerned, is ensuring that more sophisticated interfaces aren’t also more distracting. More intuitive speech interfaces might be less taxing, but only if they work well.
“If it works perfectly, great. If it fails, you’re in a worse position,” says Bryan Reimer, a scientist at MIT’s Age Lab, whose research has shown that voice interfaces in cars can be just as distracting as conventional manual ones. “The more complex and vague the commands, the more complex the recognition problem, and the greater the cost of failure.”
Several carmakers contacted by MIT Technology Review declined to discuss how voice technology would likely evolve in their products. However, vehicle interfaces are advancing at an impressive pace, spurred on in part by mobile technology (see “Rebooting the Automobile”).