Outta My Way! How Will We Translate Google’s Self-Driving Honks?
Google’s cuddly-looking robotic cars have taken a big step on the way to developing a harder edge: they’ve learned to honk.
In the Google Self-Driving Car Project’s latest monthly report (PDF), the company says it has been testing horn algorithms in its prototype cars for some time, playing the horn sound inside the cars so engineers could verify the beeps wouldn’t confuse other drivers. As the algorithm has improved, the cars have recently begun “broadcasting our car horn to the world.”
Google says its cars are meant to be “polite, considerate, and only honk when it makes driving safer for everyone.”
That would represent a significant departure from how most humans use their horns. But it is also an important step in developing the capabilities of autonomous cars, and highlights the fact that teaching robots to drive among humans is not about merely learning a set of rules—or even the edge cases when it’s okay to bend or ignore those rules. It is a highly cultural, intuitive process.
The horn is a terrific example of this. Sure, people use it to express many colorful variations of “Hey, watch it, jerk!” But some folks also use it to say hello to neighbors they recognize on the street. In China, there is an intricate etiquette around car horn use, which to a Western ear would sound like a near-constant wall of noise. Even regional differences within the U.S. can be pronounced (when have you ever been in New York City and not heard a chorus of blaring horns?).
The engineers at Google have learned firsthand how challenging it can be to imbue their cars with driving’s softer skills. One early version of the car was far too timid at stop signs, for example—it would sit, paralyzed, as human drivers who weren’t coming to a complete stop kept passing it by.
Such problems can be solved by dialing up how aggressive the cars are. They are now programmed to inch forward at stop signs and assert themselves. But that introduces the tricky issue of judgment into the cars’ decision-making. How aggressive is too aggressive? Should robotic cars maintain large following distances from other vehicles and risk having other cars jump in between? Or should they follow closely and risk making the driver in front nervous about a tailgater?
In Google’s report, the company says its cars have two kinds of beeps—“two short, quieter pips” for politely grabbing another driver’s attention, and a loud, long honk when the situation “requires more urgency.” That type of nuance shows that Google’s engineers are on the right track to mimicking how human drivers behave, even if they have a long way to go before they can blend in on the roadway.
(Read more: PC Magazine, New York Times, “Driverless Cars Are Further Away Than You Think,” “Hidden Obstacles for Google’s Self-Driving Cars”)