
Controlling When the Cameras Record

If we’re going to require body cameras, we need to be smart about when they’re used.

Around the U.S., the agents that control the public have been observed to beat up, shoot, kill, and arrest members of the public, with a special focus on protesters, members of minority groups, and people making recordings of the actions of those agents. This is often followed by fabricated accusations against the victim, meant to create false justification for the attack itself.

Richard Stallman

To control these abuses, parts of the U.S. have begun ordering these agents to wear body cameras. Body cameras help restrain agents’ violence but create problems of their own. For instance, when should the cameras record?

There are occasions when the cameras should be off, including confidential discussions that are important not to record. Sometimes agents are invited into homes; it would be intrusive for them to make video recordings of everything visible inside the home, because the recordings might be studied later for signs of anything that could be prosecuted.

However, if agents can turn their cameras off, they might do so precisely when they are going to commit violence, as appears to have happened in February 2015.

I propose a technical system to control when these cameras record, removing most of the agents’ discretion.

The idea is that the system records its camera’s video (and its microphone’s audio) all the time, but normally discards all recordings 10 minutes after they are made. Certain events (let’s call them “significant events”) cause those 10 minutes of recording to be saved, and the following 10 minutes as well.
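To make the timing concrete, here is a minimal sketch of the retention logic in Python. The chunk format, the clock handling, and the archive function are assumptions for illustration; a real device would write to sealed, tamper-evident storage.

```python
from collections import deque
from dataclasses import dataclass

BUFFER_SECONDS = 10 * 60      # footage older than this is normally discarded
POST_EVENT_SECONDS = 10 * 60  # footage kept for this long after an event

@dataclass
class Chunk:
    timestamp: float  # capture time, in seconds
    data: bytes       # a short segment of encoded audio/video

class RollingRecorder:
    def __init__(self) -> None:
        self.buffer: deque[Chunk] = deque()  # last 10 minutes, not yet saved
        self.save_until = 0.0                # archive all chunks up to this time

    def on_chunk(self, chunk: Chunk) -> None:
        """Handle one captured chunk: archive it if an event window is open,
        otherwise hold it in the rolling buffer."""
        if chunk.timestamp <= self.save_until:
            archive(chunk)  # within 10 minutes after an event: keep it
        else:
            self.buffer.append(chunk)
        # Discard buffered footage older than 10 minutes.
        cutoff = chunk.timestamp - BUFFER_SECONDS
        while self.buffer and self.buffer[0].timestamp < cutoff:
            self.buffer.popleft()

    def on_significant_event(self, now: float) -> None:
        """Save the preceding 10 minutes and keep saving for 10 more."""
        while self.buffer:
            archive(self.buffer.popleft())
        self.save_until = max(self.save_until, now + POST_EVENT_SECONDS)

def archive(chunk: Chunk) -> None:
    """Placeholder: write the chunk to sealed, access-controlled storage."""
    ...
```

Overlapping events extend the save window rather than restarting it, so back-to-back incidents produce one continuous recording.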

Each agent’s system detects certain significant events automatically. An agent can also manually declare a significant event by pushing a button. Either way, when one agent’s system detects a significant event, it sends a radio signal to report the event to the systems of all agents within a certain reception distance—perhaps 50 meters.
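Here is one way the reporting step might look, with UDP broadcast standing in for the short-range radio. The port number and message fields are made up for illustration; in practice the roughly 50-meter radius would come from the radio's transmit power, not from anything in the message.

```python
import json
import socket
import time

EVENT_PORT = 47000  # hypothetical port; radio power would set the ~50 m range

def broadcast_event(agent_id: str, reason: str) -> None:
    """Report a significant event to every system within reception range."""
    message = json.dumps({
        "agent": agent_id,
        "reason": reason,      # e.g. "gun_drawn", "button_pressed", "gunshot"
        "time": time.time(),
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", EVENT_PORT))

def listen_for_events(recorder: "RollingRecorder") -> None:
    """Run on every agent's system: any nearby report triggers a local save."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", EVENT_PORT))
        while True:
            data, _addr = sock.recvfrom(4096)
            event = json.loads(data)
            recorder.on_significant_event(event["time"])
```

Because each system saves its own footage on hearing a report, one drawn weapon preserves every nearby camera's view of the scene, not just the view of the agent involved.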

Here are proposed criteria for detecting a significant event:

(1) Whenever the agent removes a gun from its holster.

(2) Whenever the agent takes a weapon in hand to use it, including guns, tasers, sticks, and others.

(3) Whenever the agent pushes a button to declare an event. Agents should be trained and required to do this when they see a violent attack or an injury, and then to aim their cameras at least briefly toward whatever they saw.

(4) Whenever the system's microphone detects a gunshot.
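Wiring these criteria to the recorder could be as simple as a dispatch loop like the following, building on the two sketches above. The sensor interface names (holster switch, grip sensor, button, gunshot detector) are hypothetical stand-ins for whatever hardware a real system would use.

```python
import time

def poll_sensors(recorder: "RollingRecorder", sensors, agent_id: str) -> None:
    """Check each detection criterion and declare events as they occur."""
    now = time.time()
    if sensors.holster_opened():    # (1) gun removed from its holster
        declare(recorder, agent_id, "gun_drawn", now)
    if sensors.weapon_in_hand():    # (2) any weapon taken in hand to use it
        declare(recorder, agent_id, "weapon_in_hand", now)
    if sensors.button_pressed():    # (3) agent manually declares an event
        declare(recorder, agent_id, "button_pressed", now)
    if sensors.gunshot_heard():     # (4) microphone detects a gunshot
        declare(recorder, agent_id, "gunshot", now)

def declare(recorder: "RollingRecorder", agent_id: str, reason: str,
            now: float) -> None:
    """Save locally, then notify every system in radio range."""
    recorder.on_significant_event(now)
    broadcast_event(agent_id, reason)
```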

The particulars of each significant event should be posted promptly on a website so citizens can verify that they are not being watched without grounds. An agent who pushes the significant-event button or draws a weapon without good reason should be punished enough to make such abuses rare, and those recordings should be deleted.

Another pertinent question is when the recordings should be made available to agents, prosecutors, a court, or the public. I propose that recordings saved because of a significant event should be made available only when a judge rules that they cover part of an act of violence, or in response to a subpoena about a specific person who appears in a specific video. In particular, agents would have to wait for court approval to view videos of events they participated in, and that approval would come only after they have made their statements about those events.

Richard Stallman leads the free software movement (fsf.org), which campaigns to give users control over their programs. He led development of the free/libre operating system GNU (gnu.org), typically used with the kernel Linux in the combination GNU+Linux. Copyright 2015 Richard ­Stallman. Released under Creative Commons Attribution-NoDerivatives 4.0 license.
