The race to build mass-market autonomous cars is creating big demand for laser sensors that help vehicles map their surroundings. But cheaper versions of the hardware currently used in experimental self-driving vehicles may not deliver the quality of data required for driving at highway speeds.
Most driverless cars make use of lidar sensors, which bounce laser beams off nearby objects to create 3-D maps of their surroundings. Lidar can provide better-quality data than radar and is superior to optical cameras because it is unaffected by variations in ambient light. You’ve probably seen the best-known example of a lidar sensor, produced by market leader Velodyne. It looks like a spinning coffee can perched atop cars developed by the likes of Waymo and Uber.
But not all lidar sensors are created equal. Velodyne, for example, has a range of offerings. Its high-end model is an $80,000 behemoth called HDL-64E—this is the one that looks a lot like a coffee can. It emits 64 laser beams, stacked one atop the other. Each beam is separated by an angle of 0.4° (smaller angles between beams mean higher resolution), with a range of 120 meters. At the other end of its lineup, the firm sells the smaller Puck for $8,000. This sensor uses 16 beams of light, each separated by 2.0°, and has a range of 100 meters.
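Those angular separations translate directly into how sparsely each sensor samples the world at a distance. A rough back-of-envelope sketch (the function name and the 60-meter example distance are illustrative choices, not figures from either datasheet):

```python
import math

def beam_spacing(distance_m: float, separation_deg: float) -> float:
    """Approximate vertical gap between adjacent lidar scan lines
    at a given distance, from simple trigonometry."""
    return distance_m * math.tan(math.radians(separation_deg))

# HDL-64E beams sit 0.4 degrees apart; the Puck's sit 2.0 degrees apart.
for name, sep in [("HDL-64E", 0.4), ("Puck", 2.0)]:
    print(f"{name}: {beam_spacing(60, sep):.2f} m between scan lines at 60 m")
```

At 60 meters the HDL-64E's lines sit roughly 0.4 meters apart, so a pedestrian-sized object catches several of them; the Puck's lines are over 2 meters apart, so the same object may catch only one, or none.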
To see what those numbers mean, look at the videos below, which show raw data from the HDL-64E at the top and the Puck at the bottom. The expensive sensor’s 64 scan lines render the scene in detail, while the cheaper sibling’s coarser image makes objects hard to identify until they are much closer to the car. So although the two sensors have nominally similar ranges, the Puck’s lower resolution makes it far less useful for spotting distant obstacles in practice.
At 70 miles per hour, spotting an object 60 meters out leaves roughly two seconds to react—yet at that speed a car can need 100 meters just to brake to a stop. To make autonomous cars truly safe, a useful range closer to 200 meters is the real target.
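The arithmetic behind those figures is straightforward. A minimal sketch (function names are illustrative; the braking figure assumes roughly half a g of deceleration, a common rule-of-thumb assumption rather than a number from the article):

```python
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def reaction_time_s(range_m: float, speed_mph: float) -> float:
    """Seconds until the car reaches an object first detected at range_m."""
    return range_m / (speed_mph * MPH_TO_MS)

def stopping_distance_m(speed_mph: float, decel_ms2: float = 4.9) -> float:
    """Distance to brake to a stop: v^2 / (2a), assuming ~0.5 g deceleration."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * decel_ms2)

print(f"{reaction_time_s(60, 70):.1f} s to react at 60 m")    # ~1.9 s
print(f"{reaction_time_s(200, 70):.1f} s to react at 200 m")  # ~6.4 s
print(f"{stopping_distance_m(70):.0f} m to stop from 70 mph") # ~100 m
```

Seeing 200 meters ahead roughly triples the reaction budget at highway speed, which is why the article treats range and resolution together as the safety-critical specs.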
That’s where cost comes in. Even an $8,000 sensor would be a huge problem for any automaker looking to build a self-driving car that a normal person could afford. Because of this, many sensor makers are readying new kinds of solid-state lidar devices, which use an array of tiny antennas to steer a laser beam electronically instead of mechanically. These devices promise to be easier to manufacture at scale and cheaper than their mechanical brethren. That would make them a palatable option for car companies, many of which are looking to build autonomous cars for the mass market as soon as 2021.
But some of these new solid-state devices may currently lack the fidelity required for self-driving cars to operate safely and reliably at highway speeds.
The French auto parts maker Valeo, for example, says it has built the world’s first laser scanner for cars that’s ready for high-volume production, the SCALA. It features four lines of data with an angular resolution of 0.8°. Automotive News previously reported that Valeo will provide the lidar sensor used in the new Audi A8, though at the time of writing Audi declined to confirm this and Valeo didn’t respond to a request for details. The new A8 is the first production car to feature lidar and can drive itself—but only in heavy traffic at speeds below 37 miles per hour.
In June, Graeme Smith, chief executive of the Oxford University autonomous driving spinoff Oxbotica, told MIT Technology Review that he thinks a trade-off between data quality and affordability in the lidar sector might affect the rate at which high-speed autonomous vehicles take to the roads. “Low-speed applications may be more affordable more quickly than higher-speed ones,” he explained. “If you want a laser that’s operating over 250 meters, you need a finely calibrated laser. If you’re working in a lower-speed environment and can get by with 15 meters’ range, then you can afford [to use] a much lower-cost sensor.”
Austin Russell, the CEO of lidar startup Luminar, says his company actively chose not to use solid-state hardware in its sensors, because it believes that while mechanically steering a beam is more expensive, it currently provides more finely detailed images that are critical for safe driving. “It doesn't matter how much machine-learning magic you throw at a couple of points [on an object], you can’t know what it is,” he says. “If you only see a target out at 30 meters or so, at freeway speeds that’s a fraction of a second.”
The standard of solid-state devices available for use in vehicles is likely to improve over time, of course. LeddarTech, for instance, a Quebec-based Canadian firm specializing in solid-state devices, is producing reference designs that auto parts makers can use as a model to manufacture hardware at scale. The firm’s Luc Langlois says that one of its designs, estimated to cost a car company around $75 to produce, will feature either eight or 16 lines and be available in December 2018. A higher-resolution version, with 64 lines and estimated to cost around $100, will follow about a year later.
For its part, Velodyne has promised to build a solid-state lidar device, which John Eggert, director of automotive sales and marketing, says will use 32 laser lines and boast a range of 200 meters—though he won’t elaborate on the resolution provided by the hardware. And Israeli startup Innoviz Technologies claims to be making a $100 unit with a range of 200 meters and an angular resolution of 0.1°. Both firms have promised to put those sensors into production sometime in 2018, though the scale of production and availability remain unknown. Quanergy, a Silicon Valley startup, is building its own $250 solid-state device due to go into production later this year, but at the time of this writing did not respond to multiple requests for detailed specifications.
Oxbotica’s Smith thinks that automakers might just have to wait it out for a cheap sensor that offers the resolution required for high-speed driving. “It will be like camera sensors,” he says. “When we first had camera phones, they were kind of basic cameras. And then we got to a certain point where nobody really cared anymore because there was a finite limit to the human eye.” Makers of autonomous cars might find that lidar sensor performance levels out, too—eventually.
(Read more: “Self-Driving Cars’ Spinning-Laser Problem,” “Audi’s New A8 May Drive Itself, But Owners Should Proceed with Caution,” “College Dropout Says He’s Cracked Self-Driving Cars’ Most Crucial Component”)