
Three things to know about how the US Congress might regulate AI

Some key themes are emerging.


This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Last week, Senate majority leader Chuck Schumer (a Democrat from New York) announced his grand strategy for AI policymaking at a speech in Washington, DC, ushering in what might be a new era for US tech policy. He outlined some key principles for AI regulation and argued that Congress ought to introduce new laws quickly.

Schumer’s plan is the culmination of many other, smaller policy actions. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for the content their users create). Last Thursday, the House science committee hosted a handful of AI companies to ask questions about the technology and the various risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, with Republican Ken Buck, proposed a National AI Commission to manage AI policy, and a bipartisan group of senators suggested creating a federal office to encourage, among other things, competition with China.

Though this flurry of activity is noteworthy, US lawmakers are not actually starting from scratch on AI policy. “You’re seeing a bunch of offices develop individual takes on specific parts of AI policy, mostly that fall within some attachment to their preexisting issues,” says Alex Engler, a fellow at the Brookings Institution. Individual agencies like the FTC, the Department of Commerce, and the US Copyright Office have been quick to respond to the craze of the last six months, issuing policy statements, guidelines, and warnings about generative AI in particular.

Of course, we never really know whether talk means action when it comes to Congress. However, US lawmakers’ thinking about AI reflects some emerging principles. Here are three key themes from all this chatter that will help you understand where US AI legislation could be going.

  • The US is home to Silicon Valley and prides itself on protecting innovation. Many of the biggest AI companies are American companies, and Congress isn’t going to let you, or the EU, forget that! Schumer called innovation the “north star” of US AI strategy, meaning regulators will probably be calling on tech CEOs to ask how they’d like to be regulated. It’s going to be interesting to watch the tech lobby at work here. Some of this language arose in response to the latest regulations from the European Union, which some tech companies and critics say will stifle innovation.
  • Technology, and AI in particular, ought to be aligned with “democratic values.” We’re hearing this from top officials like Schumer and President Biden. The subtext here is the narrative that US AI companies are different from Chinese AI companies. (New guidelines in China mandate that outputs of generative AI must reflect “core socialist values.”) The US is going to try to package its AI regulation in a way that maintains its existing advantage over the Chinese tech industry, while also ramping up its production and control of the chips that power AI systems and continuing its escalating trade war.
  • One big question: what happens to Section 230. A giant unanswered question for AI regulation in the US is whether we will or won’t see Section 230 reform. Section 230 is a 1990s internet law that shields tech companies from being sued over the content on their platforms. But should tech companies have that same “get out of jail free” pass for AI-generated content? It’s a big question, and any reform along those lines would require tech companies to identify and label AI-made text and images, which is a massive undertaking. Given that the Supreme Court recently declined to rule on Section 230, the debate has likely been pushed back down to Congress. If and when legislators decide how the law should be reformed, it could have a huge impact on the AI landscape.

So where is this going? Well, nowhere in the short term, as politicians skip off for their summer break. But starting this fall, Schumer plans to kick off invite-only discussion groups in Congress to look at particular parts of AI.

In the meantime, Engler says we might hear some discussions about banning certain applications of AI, like sentiment analysis or facial recognition, echoing parts of the EU regulation. Lawmakers could also try to revive existing proposals for comprehensive tech legislation—for example, the Algorithmic Accountability Act.

For now, all eyes are on Schumer's big swing. “The idea is to come up with something so comprehensive and do it so fast. I expect there will be a pretty dramatic amount of attention,” says Engler.

What else I’m reading

  • Everyone is talking about “Bidenomics,” meaning the current president’s specific brand of economic policy. Tech is at the core of Bidenomics, with billions upon billions of dollars being poured into the industry in the US. For a glimpse of what that means on the ground, it’s well worth reading this story from the Atlantic about a new semiconductor factory coming to Syracuse. 
  • AI detection tools try to identify whether text or imagery online was made by AI or by a human. But there’s a problem: they don’t work very well. Journalists at the New York Times messed around with various tools and ranked them according to their performance. What they found makes for sobering reading. 
  • Google’s ad business is having a tough week. New research reported by the Wall Street Journal found that around 80% of Google ad placements appear to break the company’s own policies, a claim Google disputes.

What I learned this week

We may be more likely to believe disinformation generated by AI, according to new research covered by my colleague Rhiannon Williams. Researchers from the University of Zurich found that people were 3% less likely to identify inaccurate tweets created by AI than those written by humans.

It’s only one study, but if it’s backed up by further research, it’s a worrying finding. As Rhiannon writes, “The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to generate false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns.”
