Google’s cuddly-looking robotic cars have taken a big step on the way to developing a harder edge: they’ve learned to honk.
In the Google Self-Driving Car Project’s latest monthly report (PDF), the company says it has been testing horn algorithms in its prototype cars for some time, playing the horn sound inside the cars as a way to make sure they’re not beeping in a way that would confuse other drivers. As the algorithm has improved, the cars have recently begun “broadcasting our car horn to the world.”
Google says its cars are meant to be “polite, considerate, and only honk when it makes driving safer for everyone.”
That would represent a significant departure from how most humans use their horns. But it is also an important step in developing the capabilities of autonomous cars, and highlights the fact that teaching robots to drive among humans is not about merely learning a set of rules—or even the edge cases when it’s okay to bend or ignore those rules. It is a highly cultural, intuitive process.
The horn is a terrific example of this. Sure, people use it to express many colorful variations of “Hey, watch it, jerk!” But some folks also use it to say hello to neighbors they recognize on the street. In China, there is an intricate etiquette around car horn use, one that to a Western ear can sound like a near-constant wall of noise. Even regional differences within the U.S. can be pronounced (when have you ever been in New York City and not heard a chorus of blaring horns?).
The engineers at Google have learned firsthand how challenging it can be to imbue their cars with driving’s softer skills. One early version of the car was far too timid at stop signs, for example—it would sit, paralyzed, as human drivers who weren’t coming to a complete stop kept passing it by.
Such problems can be solved by dialing up how aggressive the cars are. They are now programmed to inch forward at stop signs and assert themselves. But that introduces the tricky issue of judgment into the cars’ decision-making. How aggressive is too aggressive? Should robotic cars maintain large following distances from other vehicles and risk having other cars jump in between? Or should they follow closely and risk making the driver in front nervous about a tailgater?
In Google’s report, the company says its cars have two kinds of beeps—“two short, quieter pips” for politely grabbing another driver’s attention, and a loud, long honk when the situation “requires more urgency.” That type of nuance shows that Google’s engineers are on the right track to mimicking how human drivers behave, even if they have a long way to go before they can blend in on the roadway.