It’s a bumper week for government pushback on the misuse of artificial intelligence.
Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide-ranging, with restrictions on mass surveillance and the use of AI to manipulate people.
But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms.
A number of companies will be running scared right now, says Ryan Calo, a professor at the University of Washington, who works on technology and law. “It’s not really just this one blog post,” he says. “This one blog post is a very stark example of what looks to be a sea change.”
The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency is meant to police unfair and dishonest trade practices. Its remit is narrow—it does not have jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or health-care tools are not biased may now be in the line of fire. “Where they do have power, they have enormous power,” says Calo.
The FTC has not always been willing to wield that power. Following criticism in the 1980s and ’90s that it was being too aggressive, it backed off and picked fewer fights, especially against technology companies. This looks to be changing.
In the blog post, the FTC warns vendors that claims about AI must be “truthful, non-deceptive, and backed up by evidence.”
“For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action.”
The FTC’s action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what resources they needed to do it. “There’s wind behind the sails,” says Calo.
Meanwhile, though they draw a clear line in the sand, the EU’s AI regulations are guidelines only. As with the GDPR rules introduced in 2018, it will be up to individual EU member states to decide how to implement them. Some of the language is also vague and open to interpretation. Take one provision against “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour” in a way that could cause psychological harm. Does that apply to social media news feeds and targeted advertising? “We can expect many lobbyists to attempt to explicitly exclude advertising or recommender systems,” says Michael Veale, a faculty member at University College London who studies law and technology.
It will take years of legal challenges in the courts to thrash out the details and definitions. “That will only be after an extremely long process of investigation, complaint, fine, appeal, counter-appeal, and referral to the European Court of Justice,” says Veale. “At which point the cycle will start again.” But the FTC, despite its narrow remit, has the autonomy to act now.
One big limitation common to both the FTC and European Commission is the inability to rein in governments’ use of harmful AI tech. The EU’s regulations include carve-outs for state use of surveillance, for example. And the FTC is only authorized to go after companies. It could intervene by stopping private vendors from selling biased software to law enforcement agencies. But implementing this will be hard, given the secrecy around such sales and the lack of rules about what government agencies have to declare when procuring technology.
Yet this week’s announcements reflect an enormous worldwide shift toward serious regulation of AI, a technology that has been developed and deployed with little oversight so far. Ethics watchdogs have been calling for restrictions on unfair and harmful AI practices for years.
The EU sees its regulations bringing AI under existing protections for human liberties. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release.
Regulation will also help AI with its image problem. As von der Leyen also said: “We want to encourage our citizens to feel confident to use it.”