This Image Is Why Self-Driving Cars Come Loaded with Many Types of Sensors

When’s a cyclist not a cyclist? When it’s a decal.

Makers of autonomous cars proudly point to the long list of sensors their vehicles carry: cameras, ultrasound, radar, lidar, you name it. But if you’ve ever wondered why so many sensors are required, look no further than this picture.

You’re looking at what’s known in the autonomous-car industry as an “edge case”: a situation where a vehicle might behave unpredictably because its software processes an unusual scenario differently from the way a human would. In this example, image-recognition software running on data from a regular camera has been fooled into treating images of cyclists on the back of a van as genuine human cyclists.

This particular blind spot was identified by researchers at Cognata, a firm that builds software simulators—essentially, highly detailed and programmable computer games—in which automakers can test autonomous-driving algorithms. That allows them to throw these kinds of edge cases at vehicles until they can work out how to deal with them, without risking an accident.
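To give a flavor of how such a simulator is used, here is a minimal sketch of an edge-case test loop in Python. Everything in it is hypothetical: it does not use Cognata’s actual API, and the scenario names, frame labels, and expected actions are illustrative stand-ins for real simulated sensor data.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    """One simulated edge case: a synthetic camera frame plus the
    action a safe driving stack is expected to take."""
    name: str
    camera_frame: str      # stand-in label for the synthetic image
    expected_action: str   # "yield" for real road users, "follow" otherwise

# Edge cases replayed against the stack, including the cyclist-decal
# case from the article. (Hypothetical examples.)
EDGE_CASES = [
    Scenario("cyclist_decal_on_van", "decal:cyclist", "follow"),
    Scenario("real_cyclist_ahead", "real:cyclist", "yield"),
]

def run_suite(stack: Callable[[str], str]) -> None:
    """Feed each scenario to the stack and report mismatches,
    all in software, with no risk of a real accident."""
    for sc in EDGE_CASES:
        action = stack(sc.camera_frame)
        status = "PASS" if action == sc.expected_action else "FAIL"
        print(f"{status} {sc.name}: expected {sc.expected_action!r}, got {action!r}")

# A naive camera-only stack treats any "cyclist" pixels as a real
# cyclist, so the decal fools it: the kind of failure a simulator surfaces.
def camera_only_stack(frame: str) -> str:
    return "yield" if "cyclist" in frame else "follow"

run_suite(camera_only_stack)
# FAIL cyclist_decal_on_van: expected 'follow', got 'yield'
# PASS real_cyclist_ahead: expected 'yield', got 'yield'
```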

Most autonomous cars overcome issues like the baffling image by using different types of sensing. “Lidar cannot sense glass, radar senses mainly metal, and the camera can be fooled by images,” explains Danny Atsmon, the CEO of Cognata. “Each of the sensors used in autonomous driving comes to solve another part of the sensing challenge.” By gradually figuring out which data can be used to resolve particular edge cases, whether in simulation or on the road, a vehicle can learn to handle ever more complex situations.
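As a rough illustration of why a second sensor resolves this particular edge case, the sketch below cross-checks a camera detection against lidar depth. This is a toy model under stated assumptions, not any vendor’s fusion logic: the idea is that a decal on a flat van door returns lidar depths with almost no variation, while a real cyclist shows measurable depth relief. All names and thresholds are invented for clarity.

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g. "cyclist"
    bearing_deg: float  # horizontal angle from the car's heading

@dataclass
class LidarReturn:
    bearing_deg: float
    depth_m: float      # distance to the reflecting surface

def confirm_detection(det: CameraDetection,
                      scan: list[LidarReturn],
                      window_deg: float = 2.0,
                      min_relief_m: float = 0.3) -> bool:
    """Accept a camera detection only if lidar sees real depth variation
    near the same bearing. A flat decal yields near-constant depths and
    is rejected; a 3D cyclist shows relief and is confirmed."""
    depths = [r.depth_m for r in scan
              if abs(r.bearing_deg - det.bearing_deg) <= window_deg]
    if not depths:
        return False  # no lidar corroboration at all
    return (max(depths) - min(depths)) >= min_relief_m

det = CameraDetection("cyclist", bearing_deg=0.0)

# Decal on a van door: every return comes back from the same flat plane.
flat_scan = [LidarReturn(b / 10, 8.0) for b in range(-15, 16)]
print(confirm_detection(det, flat_scan))  # False: likely a flat image

# Real cyclist: wheels, torso, and head sit at different depths.
real_scan = [LidarReturn(b / 10, 8.0 - 0.02 * abs(b)) for b in range(-15, 16)]
print(confirm_detection(det, real_scan))  # True: genuine 3D object
```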

Tesla was criticized for its decision to use only radar, camera, and ultrasound sensors to provide data for its Autopilot system after one of its vehicles failed to distinguish a truck trailer from a bright sky and ran into it, killing the driver. Critics argue that lidar is an essential element in the sensor mix: unlike a camera, it works well in low light and glare, and it provides more detailed data than radar or ultrasound does. But as Atsmon points out, even lidar isn’t flawless: it can’t tell the difference between a red and a green traffic signal, for example.

The safest bet, then, is for automakers to use an array of sensors, in order to build redundancy into their systems. Cyclists, at least, will thank them for it.

(Read more: “Robot Cars Can Learn to Drive without Leaving the Garage,” “Self-Driving Cars’ Spinning-Laser Problem,” “Tesla Crash Will Shape the Future of Automated Cars”)
