This new lidar sensor could equip every autonomous car in the world by the end of 2018

The startup Luminar aims to challenge market leaders by building its hardware at a never-before-seen scale.
[Image: Luminar's new lidar sensors. Credit: Luminar]

A new lidar sensor could equip thousands of driverless cars with the sensing abilities required to drive at high speeds on the open road.

Lidar has become the primary way most driverless cars sense the world around them, bouncing laser light off nearby objects to create 3-D maps of their surroundings.
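The time-of-flight principle behind lidar can be sketched in a few lines: the sensor measures how long a laser pulse takes to bounce back, and distance follows directly from the speed of light. This is an illustrative sketch only, not Luminar's implementation.

```python
# Minimal sketch of lidar's time-of-flight distance measurement
# (illustrative only; not Luminar's actual signal processing).

C = 299_792_458.0  # speed of light in m/s

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target, given a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2  # the pulse travels out and back

# A pulse returning after ~1.67 microseconds hit something ~250 m away.
print(round(distance_from_echo(1.668e-6)))  # → 250
```

A real sensor repeats this measurement millions of times per second across many beam angles, which is how the 3-D point-cloud map is built up.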

For years, the industry leader in lidar has been Velodyne, which builds some of the most expensive ultrahigh-resolution sensors available. But the rapid advance of research on self-driving vehicles prompted other firms to start building them too—among them a startup called Luminar, which was set up by Stanford dropout Austin Russell and came out of stealth last year.

Luminar’s technology differs from that of other lidar systems. It uses a longer wavelength of light, which lets it operate at higher power and therefore see darker objects at longer distances. It can also zoom in on areas of particular interest.

But its sensors, which use a mechanical mirror system and expensive indium gallium arsenide semiconductors, were difficult and pricey to produce. Early units cost at least tens of thousands of dollars, and they required an entire day of human labor to assemble.

Over the last year, says Russell, who was one of MIT Technology Review’s 35 Innovators under 35 in 2017, the firm has taken steps to change that. It acquired a chip design firm called Black Forest Engineering, hired consumer electronics experts, and set up its own manufacturing complex in Orlando, Florida—all with the aim of building its sensor at commercial scale.

As a result, Russell says, the latest version of the sensor is approaching auto grade, meaning it should be ready for extreme temperatures, inclement weather, and other adverse conditions that a production car might be exposed to (though it’s yet to be certified as such). Careful redesign of its laser detector chip, meanwhile, has cut costs from tens of thousands of dollars to just $3, and automation means the sensors will be built in eight minutes by the end of the year.

All of that means Luminar reckons it can offer a set of sensors for “single-digit thousands” of dollars once they're in large-scale production, Russell says. At the same time, it has boosted the specs, so the sensor can detect objects 250 meters away—enough for seven seconds of reaction time at 75 miles per hour.
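The range-to-reaction-time figure quoted above checks out with back-of-envelope arithmetic:

```python
# Sanity check of the article's figure: 250 m of detection range at
# 75 mph leaves roughly seven seconds before reaching the object.

MPH_TO_MS = 0.44704  # metres per second in one mile per hour

range_m = 250.0
speed_ms = 75 * MPH_TO_MS        # ≈ 33.5 m/s
reaction_s = range_m / speed_ms  # ≈ 7.5 s
print(round(reaction_s, 1))      # → 7.5
```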

Ingmar Posner, an associate professor of information engineering at the University of Oxford and founder of the university’s autonomous-driving spinoff Oxbotica, says the specifications and price point of the sensor “sound great.” But he also points out that the price will need to fall further if the sensors are to be used in affordable consumer vehicles.

That could yet happen. Russell says the sensor cost is, of course, tied to the scale of production, and by the end of the year Luminar plans to be building 5,000 of its sensors every quarter. That is a lot: it works out to 20,000 a year, double the 10,000 sensors that competitor Velodyne planned to build last year, and enough, Russell claims, to equip every autonomous test car on the roads by the end of 2018.
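The production comparison implied here, annualized (a trivial check of the article's own figures):

```python
# Annualizing the production rates quoted in the article.
luminar_per_year = 5_000 * 4   # planned quarterly rate, times 4 quarters
velodyne_per_year = 10_000     # Velodyne's reported plan for last year

print(luminar_per_year)                     # → 20000
print(luminar_per_year / velodyne_per_year) # → 2.0
```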

The major hurdle to that kind of market dominance is convincing other research groups and automakers to switch from their existing sensors—something that would require rewriting control software and remapping entire cities so cars can navigate using the new equipment. Russell likens the situation to “ripping off a Band-Aid,” because it will need to happen at some point as car makers switch to using auto-grade rather than experimental sensors.

What remains to be seen for Luminar, though, is just how soon that Band-Aid gets pulled.
