
CSI: Tech to Automatically Identify the Bad Guy

Researchers hope a new system could automatically scan the hours of CCTV footage police have to comb through to identify suspects without invading privacy.
August 26, 2011

Researchers are devising ways to automatically analyze CCTV and other security footage. The hope is that such technology will help police and other security officials to catch bad guys more quickly and more often, while minimizing the invasion of privacy of law-abiding citizens.

A group from Kingston University’s Digital Imaging Research Center (DIRC) is working on the tech. James Orwell, head of the DIRC’s Surveillance Research Group, told The Engineer, “We plan to develop components to automatically analyse multi-camera networks and footage before and after a trigger incident, such as a riot or fight, to produce a set of video segments relevant to a potential police investigation.” Using visual analysis (the DIRC offers scant details on the exact mechanism), the software can scan all the relevant footage and essentially automate a process that police might spend countless hours doing by hand.

For example, explains Orwell, say a man in a hooded sweatshirt suddenly smashes a window. That’s an event worth noting. The cops are going to want to scan all the footage from CCTV or surveillance cameras not just outside the store, but in the whole town center where the event took place. Maybe the bad guy thought he was out of range when he walked a few blocks away, and pulled down his hood, but little did he know that a camera outside a bank caught his face. If the cops can retrace the suspect’s path, they can potentially crack the case. The DIRC system proposes to do all this automatically. “A simple intruder-detection system could trigger the identification of all video data containing other observations of the intruder,” said Orwell.
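The DIRC has published few details of its mechanism, but the core idea Orwell describes, pulling every cross-camera observation near a trigger incident and keeping only those that resemble the suspect, can be sketched in a few lines. Everything below is an illustrative assumption: the camera names, the appearance descriptor, the similarity test, and the ten-minute window are stand-ins, not details from the actual system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One detection of a person on one camera (hypothetical schema)."""
    camera_id: str
    timestamp: float   # seconds since some shared clock epoch
    appearance: tuple  # stand-in for a real visual descriptor

def similarity(a, b):
    # Toy metric: fraction of matching descriptor components.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def segments_of_interest(archive, trigger, window=600, threshold=0.8):
    """Return every observation, from any camera, within `window` seconds
    of the trigger incident whose appearance resembles the trigger's subject."""
    return [
        obs for obs in archive
        if abs(obs.timestamp - trigger.timestamp) <= window
        and similarity(obs.appearance, trigger.appearance) >= threshold
    ]

# A window-smashing incident seen by one camera, plus later sightings
# elsewhere in the town centre.
trigger = Observation("high-street-cam", 1000.0, ("hoodie", "dark", "tall"))
archive = [
    Observation("high-street-cam", 1000.0, ("hoodie", "dark", "tall")),
    Observation("bank-cam", 1180.0, ("hoodie", "dark", "tall")),    # same figure, blocks away
    Observation("station-cam", 5000.0, ("hoodie", "dark", "tall")), # outside the time window
    Observation("bank-cam", 1200.0, ("suit", "light", "short")),    # different person
]
hits = segments_of_interest(archive, trigger)
print([obs.camera_id for obs in hits])  # the footage worth preserving
```

A real system would replace the toy descriptor and similarity test with learned appearance features and tracking across camera handoffs, but the selection logic, a time window around the trigger plus an appearance match, is the part that lets the rest of the archive be discarded.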

Not only does this make cops’ jobs easier, it also helps assuage some of the concerns of privacy advocates, who are increasingly uncomfortable with the amount of video footage of our behavior that winds up in archives. If a computer could automatically detect and preserve footage “of interest,” the rest could be safely deleted, minimizing the intrusion on our privacy.

The research is reminiscent of similar technology tested in June 2011 at Manchester Airport. That surveillance system, called “Tag and Track” and developed by the security company Ipsotek (tagline: “Recognise. Analyse. Realise”), lets security officials tag suspicious individuals and have the system follow them across multiple cameras. Again, this is something security personnel already do, only in a more analog fashion. It can work in retrospect, for forensic analysis after an event or crime, or in real time. A security worker scanning a series of screens in a control room might, for all his effort and training, simply lose a suspect in a crowd, say, one who has suddenly grabbed a piece of suspicious luggage from a baggage scanner. That’s the last thing you want to happen. Ipsotek’s system can serve as crucial backup in such a case, potentially averting an attack and saving lives.

What of the eternal give-and-take between security and privacy? What’s intriguing about the new system in particular is that, although it crunches more data and has an omnivorous appetite for CCTV footage, it should reduce the rate of false positives, making you less likely to be searched or detained unnecessarily. The dreaded pat-down, enemy of the privacy advocate, would hopefully become a thing of the past. Paradoxically, by increasing surveillance and using new techniques to parse that data effectively, invasions of privacy, while all the more ubiquitous, could become less noticeable and disruptive. Though perhaps that’s what many privacy advocates fear most.

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.