Around the U.S., the agents that control the public have been observed to beat up, shoot, kill, and arrest members of the public, with a special focus on protesters, members of minority groups, and people making recordings of the actions of those agents. This is often followed by fabricated accusations against the victim, meant to create false justification for the attack itself.
To curb these abuses, some parts of the U.S. have begun ordering these agents to wear body cameras. Body cameras help restrain agents’ violence but create problems of their own. For instance, when should the cameras record?
There are occasions when the cameras should be off, including confidential discussions that must not be recorded. Sometimes agents are invited into homes; it would be intrusive for them to make video recordings of everything visible inside the home, because the recordings could later be combed for signs of anything that might be prosecuted.
However, if agents can turn their cameras off, they might do so precisely when they are going to commit violence, as appears to have happened in February 2015.
I propose a technical system to control when these cameras record, removing most of the agents’ discretion.
The idea is that the system records its camera’s video (and its microphone’s audio) all the time, but normally discards all recordings 10 minutes after they are made. Certain events (let’s call them “significant events”) cause those 10 minutes of recording to be saved, and the following 10 minutes as well.
Each agent’s system detects certain significant events automatically. An agent can also manually declare a significant event by pushing a button. Either way, when one agent’s system detects a significant event, it sends a radio signal to report the event to the systems of all agents within a certain reception distance—perhaps 50 meters.
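The retention and relay behavior described above can be sketched in code. This is a minimal illustration, not a real implementation: the class and function names (`BodyCamBuffer`, `broadcast_event`), the chunked recording stream, integer timestamps in seconds, and flat 2-D agent positions are all assumptions made for the sketch. Real hardware would also need secure storage and authenticated radio messages, which are omitted here.

```python
from collections import deque

PRE_EVENT_SECONDS = 600   # retain the 10 minutes before a significant event
POST_EVENT_SECONDS = 600  # and the 10 minutes after it

class BodyCamBuffer:
    """Rolling recorder: each chunk is discarded PRE_EVENT_SECONDS after
    it is made, unless a significant event marks it for retention."""

    def __init__(self):
        self.buffer = deque()  # (timestamp, chunk) pairs not yet expired
        self.saved = {}        # timestamp -> chunk, retained permanently
        self.save_until = -1   # save newly recorded chunks up to this time

    def record(self, timestamp, chunk):
        self.buffer.append((timestamp, chunk))
        if timestamp <= self.save_until:  # inside a post-event window
            self.saved[timestamp] = chunk
        # expire chunks that have fallen out of the rolling window
        while self.buffer and self.buffer[0][0] < timestamp - PRE_EVENT_SECONDS:
            self.buffer.popleft()

    def significant_event(self, timestamp):
        # keep everything still in the rolling window (the last 10 minutes)
        for t, chunk in self.buffer:
            self.saved[t] = chunk
        # and keep the next 10 minutes as they arrive
        self.save_until = max(self.save_until, timestamp + POST_EVENT_SECONDS)

def broadcast_event(timestamp, reporter, systems, positions, radius=50.0):
    """Relay a significant event to every agent's system within `radius`
    meters of the reporting agent, including the reporter's own system."""
    rx, ry = positions[reporter]
    for name, system in systems.items():
        sx, sy = positions[name]
        if ((sx - rx) ** 2 + (sy - ry) ** 2) ** 0.5 <= radius:
            system.significant_event(timestamp)
```

Note that `save_until` uses `max`, so overlapping events extend, rather than truncate, the retention window.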
Here are proposed criteria for detecting a significant event:
(1) Whenever the agent removes a gun from its holster.
(2) Whenever the agent takes a weapon in hand to use it, including guns, tasers, sticks, and others.
(3) Whenever the agent pushes a button to declare an event. Agents should be trained and required to do this when they see a violent attack or an injury, and then to aim their cameras at least briefly toward whatever they saw.
(4) Whenever the system’s microphone detects a gunshot.
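The four criteria combine as a simple disjunction: any one of them suffices to declare a significant event. A sketch, with hypothetical sensor flags invented for illustration (the essay does not specify how each condition would be sensed):

```python
def is_significant_event(gun_drawn,       # (1) holster sensor
                         weapon_in_hand,  # (2) grip sensor on any weapon
                         button_pressed,  # (3) agent's manual declaration
                         gunshot_heard):  # (4) acoustic gunshot detection
    """Return True if any of the four proposed criteria is met."""
    return gun_drawn or weapon_in_hand or button_pressed or gunshot_heard
```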
The particulars of each significant event should be posted promptly on a website so citizens can verify that they are not being watched without grounds. An agent who pushes the significant-event button or draws a weapon without good reason should be punished enough to make such abuses rare, and those recordings should be deleted.
Another pertinent question is when the recordings should be made available to agents, prosecutors, a court, or the public. I propose that recordings saved because of a significant event should be made available only when a judge rules that they cover part of an act of violence, or in response to a subpoena about a specific person who appears in a specific video. In particular, agents would have to wait for court approval to view the videos of events they participated in, and that approval would come only after they had made their statements about those events.
Richard Stallman leads the free software movement (fsf.org), which campaigns to give users control over their programs. He led development of the free/libre operating system GNU (gnu.org), typically used with the kernel Linux in the combination GNU+Linux. Copyright 2015 Richard Stallman. Released under Creative Commons Attribution-NoDerivatives 4.0 license.