
In the 1980s, the Self-Driving Van Was Born

Primitive lidar, green-and-black screens, and literally tons of computing gear powered CMU’s NavLab 1.
November 8, 2016

Self-driving cars and trucks are all the rage—but why have we forgotten about vans?

In the 1980s, some of the first robotic cars were vans, including this beauty:

As we wrote in a feature published last month, Carnegie Mellon University’s NavLab, vintage 1986, was one of the first cars ever that was designed to be controlled by a computer. It featured an early version of lidar functioning as the vehicle’s eyes—the same way most autonomous and semi-autonomous cars see their environment today. Inside, the place looked like an FBI surveillance van, filled with computers doing everything from watching the road to controlling the air conditioning unit.

Google’s autonomous cars, Tesla’s Autopilot-driven luxury vehicles, and Uber’s self-driving taxis show how far the technology has come since the days of NavLab. The slick sensor packages and endless stream of press coverage can make it seem like widespread adoption of self-driving cars is imminent.

But as our own Will Knight found when he spoke to William “Red” Whittaker, one of the creators of NavLab and a legend in the field of autonomous driving, there’s still a long way to go:

Whittaker says Uber’s new service doesn’t mean the technology is perfected. “Of course it isn’t solved,” he says. “The kinds of things that aren’t solved are the edge cases.”

And there are plenty of edge cases to contend with, including sensors being blinded or impaired by bad weather, bright sunlight, or obstructions. Then there are the inevitable software and hardware failures. But more important, the edge cases involve dealing with the unknown. You can’t program a car for every imaginable situation, so at some stage, you have to trust that it will cope with just about anything that’s thrown at it, using whatever intelligence it has. And it’s hard to be confident about that, especially when even the smallest misunderstanding, like mistaking a paper bag for a large rock, could lead a car to do something unnecessarily dangerous.

Here’s hoping that doesn’t mean we have to wait another 30 years for the autonomous car revolution to arrive.

(Read more: Motherboard, “What to Know Before You Get In a Self-Driving Car,” “Otto’s Self-Driving 18-Wheeler Has Made Its First Delivery,” “Tesla Announces New Sensors and Puts the Brakes on Autopilot”)
