Tech policy

Who’s going to regulate AI? It might be you.

As legislators struggle to keep pace with technology, experts say the industry needs to take a more active role in keeping things in check.

From Facebook’s role in spreading misinformation to new European copyright laws, it’s the hottest topic in technology right now. How should technology companies be regulated? How does that regulation keep up with emerging technologies like AI? And who will make sure new laws don’t stifle innovation?

It’s true that legislators often struggle to understand basic technical concepts, while companies are advancing technologies much faster than governments and the legal system can keep up with. Speaking at EmTech Digital, MIT Technology Review’s AI conference, a group of leading experts on AI and policy suggested that new standards and cooperation were needed.

While Google policy chief Kent Walker announced the formation of a new external advisory council for AI development, Rashida Richardson, director of policy research at the AI Now Institute, said that the emphasis should be on technologists and leading companies acting to prevent misuse of the systems they are building.

“Who bears the burden for ensuring that emerging technologies are not discriminatory?” she asked.

Unintended consequences—for example, when face recognition systems produce false positives—are too dangerous for many groups of people, she said, and systems trained on bad data only end up reinforcing preexisting bias. But preventing abuses while simultaneously encouraging development is clearly something the law struggles with.

“The companies and individuals responsible for creating emerging technologies have an obligation. They need to do their due diligence—deeply interrogating the context in which a data set was created, for example,” Richardson said. “In other cases, companies may find their technology cannot be made discrimination-proof, and they will have to make a tough decision on whether they should bring that product to market.”

Brendan McCord, an advisor to the US Department of Defense, said that the largest and most influential companies should use their “immense power” and take a more active role in helping shape regulatory efforts.

“Civil society groups are doing a good job in trying to raise awareness of these issues,” he said. “But companies have enormous capacity to drive this conversation.”

McCord, who previously worked on the Pentagon’s controversial Project Maven, suggested that a consortium of leading companies could help establish industry norms or even work with legislators to design future-proof approaches to regulating AI, machine learning, and other fast-evolving technologies.

“I think a good strategy is that companies [like Google] band together with other companies and create momentum, create a push for the right kind of regulation, and have that codified, which drives a virtuous cycle where other companies have to comply with that regulation,” he said.

However, this would require companies to work much harder to put the interest of the public ahead of their own profits, he added.

Google’s Walker said there were lots of examples of companies making good decisions—and that Google itself was considering which elements of Europe’s new data privacy laws it might be able to import into the US.

But the evidence suggests that current approaches to self-regulation have many weaknesses—and that change often comes only in the face of threats from governments or the courts. Facebook announced less than a week ago that it would stop allowing advertisers to target ads by race, gender, and age, for example. That decision, however, came only after a string of lawsuits charging that the company was violating civil rights laws established in the 1960s.

AI Now’s Richardson said it is difficult to regulate emerging technologies because they move so quickly and because regulatory processes often leave out important stakeholders.

“There is very ambiguous rhetoric around equality,” she said. “It’s really hard to say ‘We will not harm people with this technology.’ Who makes that decision?

“It’s harder to regulate, because either you have a full moratorium until we understand it, or you live in the world we live in right now, in which you’re trying to catch up.”

