When an AI finally kills someone, who will be responsible?

Legal scholars are furiously debating which laws should apply to AI crime.

Here’s a curious question: Imagine it is the year 2023 and self-driving cars are finally navigating our city streets. For the first time one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?

Today, we get an answer of sorts thanks to the work of John Kingston at the University of Brighton in the UK, who maps out the landscape in this incipient legal field. His analysis raises some important issues that the automotive, computing, and legal worlds should be wrestling with in earnest, if they are not already.

At the heart of this debate is whether an AI system could be held criminally liable for its actions. Kingston says that Gabriel Hallevy at Ono Academic College in Israel has explored this issue in detail.

Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.

The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent. But anybody who instructed the mentally deficient person or animal can be held criminally liable: a dog owner who orders the animal to attack another individual, for example.

That has implications for those designing intelligent machines and those who use them. “An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another,” says Kingston.

The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act. Kingston gives the example of an artificially intelligent robot in a Japanese motorcycle factory that killed a human worker. “The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent operating machine,” says Kingston. “Using its very powerful hydraulic arm, the robot smashed the surprised worker into the machine, killing him instantly, and then resumed its duties.”

The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.  

The third scenario is direct liability, and this requires both an action and an intent. An action is straightforward to prove if the AI system takes an action that results in a criminal act or fails to take an action when there is a duty to act.

Intent is much harder to establish, but it is not always required, says Kingston. “Speeding is a strict liability offense,” he says. “So according to Hallevy, if a self-driving car was found to be breaking the speed limit for the road it is on, the law may well assign criminal liability to the AI program that was driving the car at that time.” In that case, the owner may not be liable.

Then there is the issue of defense. If an AI system can be criminally liable, what defense might it use? Kingston raises a number of possibilities: Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?

These kinds of defenses are by no means theoretical. Kingston highlights a number of cases in the UK where people charged with computer-related offenses have successfully argued that their machines had been infected with malware that was instead responsible for the crime.

In one case, a teenage computer hacker, charged with executing a denial-of-service attack, claimed that a Trojan program was instead responsible and had then wiped itself from the computer before it was forensically analyzed. “The defendant’s lawyer successfully convinced the jury that such a scenario was not beyond reasonable doubt,” says Kingston.

Finally, there is the issue of punishment. Who or what would be punished for an offense for which an AI system was directly liable, and what form would this punishment take? For the moment, there are no answers to these questions.

But criminal liability may not apply, in which case the matter would have to be settled with civil law. Then a crucial question will be whether an AI system is a service or a product.

If it is a product, then product design legislation would apply; a claim might rest on a warranty, for example.

If it is a service, then the tort of negligence applies. In this case, the plaintiff would usually have to demonstrate three elements to prove negligence. The first is that the defendant had a duty of care—usually straightforward to show, although the standard of care might be difficult to assess in the case of an AI, says Kingston.

The second element is that the defendant breached that duty. And the third is that the breach caused an injury to the plaintiff.

And if all this weren’t murky enough, the legal standing of AI systems could change as their capabilities become more human-like and perhaps even superhuman.

One thing is for sure: in the coming years, there is likely to be some fun to be had with all this by the lawyers—or the AI systems that replace them.

Ref: arxiv.org/abs/1802.07782: Artificial Intelligence and Legal Liability
