Don’t Let Regulators Ruin AI

Tech policy scholar Andrea O’Sullivan says the U.S. needs to be careful not to hamstring innovation.
Illustration by Miguel Porlan

The U.S. has so far been relatively permissive toward AI technologies—and we should keep it that way. That permissiveness is why so much innovation happens here rather than in the more restrictive nations of Europe.

The main reason the government hasn’t hampered the industry with regulation is that there’s no overbearing federal agency dedicated strictly to AI. Instead, we have a patchwork of federal and state authorities scrutinizing these technologies. The Federal Trade Commission and the National Highway Traffic Safety Administration, for example, recently hosted a workshop to determine how to oversee automated-car technologies. The Department of Homeland Security has put out reports on potential AI threats to critical infrastructure.

The patchwork approach is imperfect, but it has one big benefit—it constrains the temptation to regulate excessively. Each agency can apply only the policies that fall within its specialized domain.

But now a growing chorus of academics and commentators wants to kill that approach. They’re calling for a whole new regulatory body to control AI technologies. Law professor Frank Pasquale of the University of Maryland has called for a “Federal Search Commission,” similar to the FCC, to oversee Internet queries. Attorney Matthew Scherer, in Portland, Oregon, advocates a specialized federal AI agency. Law professor Ryan Calo of the University of Washington imagines a “Federal Robotics Commission.”

Such ideas are based on the “precautionary principle”—the idea that an innovation must be decelerated or halted altogether if a regulator determines that the associated risks are too much for society to bear.

Of course, as regulatory scholars have long pointed out, the risk analyses that regulators employ can be inadequate. Imagined or exaggerated risks get weighted far more heavily than real benefits, and society is robbed of life-enriching (and in many cases life-saving) developments. Regulators often can’t resist the urge to extend their own authority or budgets, regardless of the benefits or costs to society: give an agency the authority to regulate, and it will regulate. And once you create a federal agency, it’s incredibly difficult to make it go away.

As AI grows to touch more and more domains of existence, a new federal AI agency could have a worryingly large command over American life. Policymakers would need the patience and humility to discern one AI application from another. The social risks from AI assistants, for example, are different from those posed by predictive policing software and “smart weapons.” But an overly zealous regulatory regime might erroneously lump such applications together, stifling beneficial technologies while dedicating fewer resources to the big problems that really matter.

The threat that precautionary regulation poses to our future and well-being, meanwhile, is considerable. AI technologies are poised to generate life-saving advances in health and transportation while modernizing manufacturing and trade. The projected economic benefits reach into the trillions. And on a personal level, AI promises to make our lives simpler and more comfortable.

Policymakers who wish to champion growth should embrace a stance of “permissionless innovation.” Humility, collaboration, and voluntary solutions should trump the outdated “command and control” model of the last century. The age of smart machines needs a new age of smart policy.

Andrea O’Sullivan is a program manager in the Technology Policy Program at the Mercatus Center, a free-market-oriented think tank at George Mason University.
