Pattern Recognition Algorithm Recognizes When Drivers Are on the Phone

Using a mobile phone while driving can significantly increase the chances of an accident. Now a dashboard cam can spot when drivers pick up the phone.

By some estimates, 85 percent of drivers in America use a mobile phone while at the wheel. The National Highway Traffic Safety Administration estimates that during daylight hours, 5 percent of cars are being driven by people making phone calls.

That’s not good news. It takes about five seconds to dial a telephone number, during which time a car traveling at 60 miles per hour will have moved about 140 meters. And according to the Virginia Tech Transportation Institute, almost 80 percent of crashes involve drivers who were not paying attention in the three seconds before the event.

It doesn’t take a genius to figure out that using a mobile phone while driving significantly increases the chances of an accident. Which is why various research teams are studying ways of identifying when drivers are on the phone and warning them of the increased danger.

Today, Rafael Berri at Santa Catarina State University in Brazil and a few pals reveal their approach to the problem using a small dashboard camera that watches for the tell-tale signs that the driver is on the phone.

Their approach is relatively straightforward. Berri and co point out that drivers usually scan the road ahead while driving, but when on the phone they tend to fix their gaze straight ahead. This means that a dashboard camera in front of the driver is well positioned to spot cell phone use.

Their system processes the images from this camera in three steps. First, it locates the driver and crops the image to show just the face and the area on either side of it. The idea is to capture the driver’s hands should they be raised next to the ear, holding a phone during a call.

Next, it identifies any skin pixels in the image and assesses the position of these pixels. It then segments the image into areas showing face and hands. Finally, it assesses the likelihood that the driver is on a call and issues a warning accordingly.
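The paper will have its own detectors, thresholds, and classifier; purely as illustration, here is a minimal Python/OpenCV sketch of the three-step idea, in which the stock Haar face detector, the YCrCb skin-color thresholds, and the side-strip ratio test are all stand-in assumptions rather than the authors’ actual method:

```python
import cv2

# Step 1 stand-in: find the face with OpenCV's stock Haar cascade,
# then widen the crop to include the area beside each ear.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_region(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    # Half a face-width of margin on each side, to catch a raised hand.
    x0 = max(0, x - w // 2)
    x1 = min(frame.shape[1], x + w + w // 2)
    return frame[y:y + h, x0:x1], w

# Step 2 stand-in: a common YCrCb skin-color threshold (not necessarily
# the authors' rule) marks likely skin pixels.
def skin_mask(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Step 3 stand-in: if the strips beside the face (where a hand would
# hold a phone to the ear) contain a lot of skin, flag a likely call.
# The threshold is illustrative only.
def looks_like_phone_call(frame, side_skin_ratio_thresh=0.15):
    cropped = crop_face_region(frame)
    if cropped is None:
        return False
    region, w = cropped
    mask = skin_mask(region)
    left, right = mask[:, : w // 2], mask[:, -(w // 2):]
    side_ratio = (left.mean() + right.mean()) / (2 * 255)
    return side_ratio > side_skin_ratio_thresh
```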

Berri and co have tested their algorithm in real time on a set of five videos of a driver, taken with a dashboard camera at 15 frames per second and a resolution of 320 by 240 pixels. Each video is divided into periods of three seconds, and each period is then classified according to whether the driver is using a phone.
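The article doesn’t say how per-frame decisions roll up into a per-period label. Assuming a simple majority vote over each 45-frame window (three seconds at 15 frames per second), the bookkeeping might look like the sketch below; classify_windows and per_frame_flags are hypothetical names:

```python
def classify_windows(per_frame_flags, fps=15, window_seconds=3):
    """Group per-frame phone-use flags into fixed windows and label
    each window by majority vote (an assumed aggregation rule, not
    necessarily the one used in the paper)."""
    n = fps * window_seconds  # 45 frames per 3-second window
    labels = []
    for start in range(0, len(per_frame_flags) - n + 1, n):
        window = per_frame_flags[start:start + n]
        labels.append(sum(window) > n / 2)
    return labels

# Example: flags produced by a per-frame detector such as the
# looks_like_phone_call() sketch above.
# classify_windows([True] * 30 + [False] * 15 + [False] * 45)
# -> [True, False]
```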

The team identified a number of situations in which the accuracy of the algorithm drops dramatically. For example, accuracy suffers when sunlight falls directly on the driver’s skin, creating images of particularly high contrast.

But they say that in general it works well. “Periods of three seconds were correctly classified at 87.43 per cent of cases,” say Berri and co.

Just what this system would do to warn the driver is not yet clear. It could, for example, generate warning noises that might drown out a conversation. And, of course, it would have to know whether the car was moving or not.

A broader question is whether such a system would actually prevent drivers from making phone calls. It is not hard to imagine drivers who might continue to make calls regardless of the warnings they receive. Nor is it difficult to think of ways to fool such an algorithm, using gloves, for example.

And a crucial question for manufacturers is whether anybody would buy a car that spies on them in this way. If not, it’s hard to see a system like this gaining much traction.

A better approach might be to find ways of convincing drivers that using a handheld phone while driving significantly increases the risks of an accident and persuading them either to stop or to make the call later.

Not an easy task but clearly one worth pursuing.

Ref: arxiv.org/abs/1408.0680 : A Pattern Recognition System for Detecting Use of Mobile Phones While Driving
