
CES 2015: Nvidia Demos a Car Computer Trained with “Deep Learning”

A commercial device uses powerful image and information processing to let cars interpret 360° camera views.
January 6, 2015

Many cars now include cameras or other sensors that record the passing world and trigger intelligent behavior, such as automatic braking or steering to avoid an obstacle. Today's systems, though, are usually unable to tell the difference between a trash can and a traffic cop standing next to it.

The Drive CX

This week at the International Consumer Electronics Show in Las Vegas, Nvidia, a leading maker of computer graphics chips, unveiled a vehicle computer called the Drive PX that could help cars interpret and react to the world around them.

Nvidia already supplies chips to many car makers, but engineers at those companies usually have to write software to collect and process data from various sensor systems. Drive PX is more powerful than existing hardware, and it should also make it easier to integrate and process sensor data.

The computer uses Nvidia’s new graphics microprocessor, the Tegra X1. It is capable of processing information from up to 12 cameras simultaneously, and it comes with software designed to assist with safety or autonomous driving systems. Most impressively, it includes a system trained to recognize different objects using a powerful technique known as deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). Another computer from Nvidia, called the Drive CX, is designed to generate realistic 3-D maps and other graphics for dashboard displays.

“It’s pretty cool to bring this level of powerful computation into cars,” said John Leonard, a professor of mechanical engineering at MIT, who works on autonomous-car technology. “It’s the first such computer that seems really designed for a car—an autopilot computer.”

The new Nvidia hardware can also be updated remotely, so that car manufacturers can fix bugs or add new functionality. This is something few car companies, aside from Tesla, do currently.

So far Audi has emerged as an early buyer; at CES, the company showed off a luxury concept car called the Audi Prologue that includes the Drive PX. A year ago, the company announced at CES that it had developed a compact computer for processing sensor information (see “Audi Shows Off a Compact Brain for Self-Driving Cars”). That, too, included Nvidia chips.

The introduction of Nvidia’s product is a landmark moment for deep learning, a technology that processes sensory information efficiently by loosely mimicking the way the brain works. At CES, Nvidia showed that its software can detect objects such as cars, people, bicycles, and signs, even when they are partly hidden.
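To make the "loosely mimicking the brain" idea concrete, here is a minimal, purely illustrative sketch of the basic building block of such systems: an artificial neuron that takes a weighted sum of its inputs and "fires" through an activation function. The weights and inputs below are made up for illustration; real object-recognition networks like the one Nvidia demonstrated stack millions of these units into many layers and learn the weights from labeled images.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a ReLU activation (fire only if the signal is positive)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)

# Toy example: three pixel intensities feeding one hypothetical detector unit.
pixels = [0.2, 0.9, 0.4]
weights = [0.5, 1.0, -0.3]  # hypothetical "learned" weights
bias = -0.5

activation = neuron(pixels, weights, bias)
print(activation)  # roughly 0.38: a weak positive response from this unit
```

Deep learning gets its power from depth: the outputs of one layer of such units become the inputs to the next, so early layers respond to simple patterns like edges while later layers respond to whole objects, which is how a trained network can still recognize a partly hidden car or pedestrian.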

Yoshua Bengio, a deep-learning researcher at the University of Montreal, says the Nvidia chipset is an important commercial milestone. “I would not call it a breakthrough, but more a continuous advance in a direction that has been going for a number of years now,” he said.

Yann LeCun, a data scientist at New York University who leads deep-learning efforts at Facebook (see “Facebook Launches Advanced AI Effort to Find Meaning in Your Posts”), also sees the announcement as an important step: “It is significant because current solutions tend to be closed and proprietary, use custom and inflexible hardware, and tend to be ‘black boxes’ that equipment manufacturers cannot really customize.”

At a press event Sunday, Jen-Hsun Huang, Nvidia’s CEO, said the devices will provide “more computing horsepower inside a car than anything you have today.”
