Tech policy

This has just become a big week for AI regulation

The EU has unveiled its new AI rules—but an announcement from the FTC may have more teeth.

FTC door concept. Ms Tech | Unsplash

It’s a bumper week for government pushback on the misuse of artificial intelligence.

Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide ranging, with restrictions on mass surveillance and the use of AI to manipulate people.

But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms.

A number of companies will be running scared right now, says Ryan Calo, a professor at the University of Washington, who works on technology and law. “It’s not really just this one blog post,” he says. “This one blog post is a very stark example of what looks to be a sea change.”

The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency is meant to police unfair and dishonest trade practices. Its remit is narrow—it does not have jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or health-care tools are not biased may now be in the line of fire. “Where they do have power, they have enormous power,” says Calo.

Taking action

The FTC has not always been willing to wield that power. Following criticism in the 1980s and ’90s that it was being too aggressive, it backed off and picked fewer fights, especially against technology companies. This looks to be changing.

In the blog post, the FTC warns vendors that claims about AI must be “truthful, non-deceptive, and backed up by evidence.”

“For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action.”

The FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what they needed to do it. “There’s wind behind the sails,” says Calo.

Meanwhile, though they draw a clear line in the sand, the EU’s AI regulations are guidelines only. As with the GDPR rules introduced in 2018, it will be up to individual EU member states to decide how to implement them. Some of the language is also vague and open to interpretation. Take one provision against “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour” in a way that could cause psychological harm. Does that apply to social media news feeds and targeted advertising? “We can expect many lobbyists to attempt to explicitly exclude advertising or recommender systems,” says Michael Veale, a faculty member at University College London who studies law and technology.

It will take years of legal challenges in the courts to thrash out the details and definitions. “That will only be after an extremely long process of investigation, complaint, fine, appeal, counter-appeal, and referral to the European Court of Justice,” says Veale. “At which point the cycle will start again.” But the FTC, despite its narrow remit, has the autonomy to act now.

One big limitation common to both the FTC and European Commission is the inability to rein in governments’ use of harmful AI tech. The EU’s regulations include carve-outs for state use of surveillance, for example. And the FTC is only authorized to go after companies. It could intervene by stopping private vendors from selling biased software to law enforcement agencies. But implementing this will be hard, given the secrecy around such sales and the lack of rules about what government agencies have to declare when procuring technology.

Yet this week’s announcements reflect an enormous worldwide shift toward serious regulation of AI, a technology that has been developed and deployed with little oversight so far. Ethics watchdogs have been calling for restrictions on unfair and harmful AI practices for years.

The EU sees its regulations bringing AI under existing protections for human liberties. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release.

Regulation will also help AI with its image problem. As von der Leyen also said: “We want to encourage our citizens to feel confident to use it.” 

