Jaguar Demos a Car That Keeps an Eye on Its Driver

A company called Seeing Machines wants to use cameras and software to make sure you’re focused on driving.
January 5, 2015

Many cars already include plenty of sensors—cameras for spotting objects in your blind spot, for instance—but they’re usually keeping an eye on the outside world, not on what’s going on behind the wheel.

An Australian company called Seeing Machines is turning sensing inward, using cameras and software to detect eye and facial movements and alert drivers who have become inattentive, whether from drowsiness or distraction. This kind of technology is set to become more common, especially as cars become more capable of driving themselves on some stretches of road.

So far, the company has focused its technology on drivers of heavy industrial trucks used in the mining industry, but it’s also moving toward the consumer market: it has a deal with auto parts maker Takata to bring its technology to cars and other vehicles. At the International Consumer Electronics Show in Las Vegas this week, Seeing Machines will demonstrate its attention-monitoring sensors in the dashboard of a Jaguar F-Type.

Built-in systems for tracking drivers’ attention are already an option on a small but growing number of cars from companies such as Lexus. Some companies, like Google, envision a future in which cars are completely automated.

Seeing Machines’ technology uses a small infrared camera fitted to the dashboard that works with software running on the vehicle—not on a remote server—to evaluate whether the driver is looking at the road. It evaluates a person’s head position, facial expression, and blinking rate. The camera captures 60 frames per second, and the software analyzes the images to determine the driver’s alertness.
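The article doesn’t detail the software itself, but one of those per-frame measures, blink rate, can be made concrete with a short sketch. The Python below is a minimal illustration that assumes a rolling window over 60-frames-per-second eye-openness scores; the class, the closed-eye threshold, and the window length are invented for the example and are not Seeing Machines’ implementation.

```python
from collections import deque

FPS = 60  # the article: the camera captures 60 frames per second


class BlinkRateEstimator:
    """Rolling blink-rate estimate from per-frame eye-openness scores.

    A hypothetical stand-in for the per-frame analysis described in the
    article (head position, facial expression, blink rate); the names,
    thresholds, and window length are illustrative assumptions.
    """

    def __init__(self, window_seconds: int = 30, closed_threshold: float = 0.2):
        # One boolean per frame: was the eye closed on that frame?
        self.closed = deque(maxlen=window_seconds * FPS)
        self.closed_threshold = closed_threshold

    def update(self, eye_openness: float) -> float:
        """Record one frame (0.0 = fully closed, 1.0 = wide open) and
        return the estimated blinks per minute over the recent window."""
        self.closed.append(eye_openness < self.closed_threshold)
        frames = list(self.closed)
        # A completed blink is a closed-to-open transition.
        blinks = sum(1 for prev, cur in zip(frames, frames[1:]) if prev and not cur)
        seconds = len(frames) / FPS
        return blinks * 60.0 / seconds if seconds else 0.0


est = BlinkRateEstimator()
# Simulate 30 seconds of video: a ~100 ms blink roughly every 3 seconds.
rate = 0.0
for frame in range(30 * FPS):
    blinking = frame % (3 * FPS) < 6  # 6 closed frames is about 100 ms at 60 fps
    rate = est.update(0.05 if blinking else 0.95)
print(f"{rate:.1f} blinks/min")  # 20.0 under this synthetic pattern
```

Counting closed-to-open transitions over a fixed window keeps the estimate responsive to recent behavior, which is the property a drowsiness monitor needs; a real system would combine several such signals rather than rely on one.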

Nick Langdale-Smith, Seeing Machines’ vice president for company partnerships, says the company can get a good read on a driver’s state by tracking the pupils in particular. The software measures eyeball rotation and detects where the driver’s line of sight intersects with objects in and around the car. This lets it determine how much time a driver spends looking at the dashboard, mirrors, road, and elsewhere, which helps it judge whether the driver is paying attention to traffic or starting to doze off.
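As an illustration of that time-accounting, the sketch below assumes the gaze pipeline emits one region label per frame (road, mirror, dashboard, elsewhere) and flags inattention when the share of recent frames spent on the road drops below a cutoff. The window length and the 60 percent threshold are assumptions made up for the example, not the company’s actual parameters.

```python
from collections import deque

FPS = 60  # frames per second, per the article

# Gaze regions the article mentions; in a real system each frame's label
# would come from intersecting the measured line of sight with the car's
# geometry. These names are hypothetical.
ROAD, MIRROR, DASHBOARD, ELSEWHERE = "road", "mirror", "dashboard", "elsewhere"


class AttentionMonitor:
    """Flags inattention when the share of recent frames spent looking
    at the road falls below a threshold.

    An illustrative sketch of the time-accounting the article describes,
    not Seeing Machines' actual algorithm.
    """

    def __init__(self, window_seconds: int = 10, min_road_fraction: float = 0.6):
        self.regions = deque(maxlen=window_seconds * FPS)
        self.min_road_fraction = min_road_fraction

    def update(self, region: str) -> bool:
        """Record one frame's gaze region; return True if the driver
        looks inattentive over the recent window."""
        self.regions.append(region)
        if len(self.regions) < self.regions.maxlen:
            return False  # not enough history to judge yet
        on_road = sum(1 for r in self.regions if r == ROAD)
        return on_road / len(self.regions) < self.min_road_fraction


monitor = AttentionMonitor()
# Ten seconds spent mostly glancing at the dashboard triggers an alert.
alert = False
for frame in range(10 * FPS):
    alert = monitor.update(DASHBOARD if frame % 4 else ROAD)
print("alert" if alert else "ok")  # "alert": only 25% of frames were on the road
```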

If the system senses you’re not paying attention, it will alert you to put your eyes back on the road or pull over. In the trucks that already use the system, the warning comes as a buzz in the driver’s seat, though Langdale-Smith says an eventual consumer version will alert drivers differently. “This is there to save your life,” he says.

To date, Seeing Machines has honed its technology in the mining industry, where Caterpillar and other makers of huge vehicles that transport earth and minerals are using it to monitor drivers. “The shift is long and the task is boring,” says Langdale-Smith. “And when [drivers] fall asleep, the vehicles turn into 450-ton juggernauts.”

Seeing Machines’ deal with Takata, announced in September, calls for installing driver-monitoring systems in cars made by an unnamed major automaker, though it’s not yet clear when that will happen.
