
An early guide to policymaking on generative AI

How lawmakers are thinking about the risks of the latest tech revolution

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Earlier this week, I was chatting with a policy professor in Washington, DC, who told me that students and colleagues alike are asking about GPT-4 and generative AI: What should they be reading? How much attention should they be paying?


She wanted to know if I had any suggestions, and asked what I thought all the new advances meant for lawmakers. I’ve spent a few days thinking, reading, and chatting with the experts about this, and my answer morphed into this newsletter. So here goes!


Though GPT-4 is the standard-bearer, it’s just one of many high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. In short, generative AI is the thing that everyone is talking about. And though the tech is not new, its policy implications are months if not years from being understood. 

GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict words in a sentence. It generates remarkably fluent text, and it can respond to images as well as word-based prompts. For paying customers, GPT-4 will now power ChatGPT, which has already been incorporated into commercial applications. 

The newest iteration has made a major splash, and Bill Gates called it “revolutionary” in a letter this week. However, OpenAI has also been criticized for a lack of transparency about how the model was trained and evaluated for bias. 

Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific. 

Generative AI tools are also potential threats to people’s security and privacy, and they have little regard for copyright laws. Companies whose generative AI products draw on the work of others without permission are already being sued.

Alex Engler, a fellow in governance studies at the Brookings Institution, has considered how policymakers should be thinking about this and sees two main types of risks: harms from malicious use and harms from commercial use. Malicious uses of the technology, like disinformation, automated hate speech, and scamming, “have a lot in common with content moderation,” Engler said in an email to me, “and the best way to tackle these risks is likely platform governance.” (If you want to learn more about this, I’d recommend listening to this week’s Sunday Show from Tech Policy Press, where Justin Hendrix, an editor and a lecturer on tech, media, and democracy, talks with a panel of experts about whether generative AI systems should be regulated similarly to search and recommendation algorithms. Hint: Section 230.)  

Policy discussions about generative AI have so far focused on that second category: risks from commercial use of the technology, like coding or advertising. So far, the US government has taken small but notable actions, primarily through the Federal Trade Commission (FTC). Last month the FTC issued a warning statement urging companies not to make claims about AI capabilities that they can’t substantiate. This week, on its business blog, it used even stronger language about the risks companies should consider when using generative AI.  


“If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable—and often obvious—ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all,” the blog post reads. 

The US Copyright Office also launched a new initiative intended to deal with the thorny policy questions around AI, attribution, and intellectual property. 

The EU, meanwhile, is staying true to its reputation as the world leader in tech policy. At the start of this year my colleague Melissa Heikkilä wrote about the EU’s efforts to pass the AI Act, a set of rules that would prevent companies from releasing models into the wild without disclosing their inner workings, which is precisely what some critics are accusing OpenAI of with the GPT-4 release. 

The EU intends to separate high-risk uses of AI, like hiring, legal, or financial applications, from lower-risk uses like video games and spam filters, and require more transparency around the more sensitive uses. OpenAI has acknowledged some of the concerns about the speed of adoption. In fact, its own CEO, Sam Altman, told ABC News he shares many of the same fears. However, the company is still not disclosing key data about GPT-4. 

For policy folks in Washington, Brussels, London, and offices everywhere else in the world, it’s important to understand that generative AI is here to stay. Yes, there’s significant hype, but the recent advances in AI are as real and important as the risks that they pose. 

What I am reading this week

Yesterday, the United States Congress called Shou Zi Chew, the CEO of TikTok, to a hearing about privacy and security concerns raised by the popular social media app. His appearance came after the Biden administration threatened a national ban if its parent company, ByteDance, didn’t sell its stake in the app. 

There were lots of headlines, most using a temporal pun, and the hearing laid bare the depths of the new technological cold war between the US and China. For many watching, the hearing was both important and disappointing: some legislators displayed a poor technical understanding, and others criticized how Chinese companies handle privacy even as American companies collect and trade data in much the same ways. 

It also revealed how deeply American lawmakers distrust Chinese tech. Here are some of the spicier takes and helpful articles to get up to speed:


What I learned this week

AI is able to persuade people to change their minds about hot-button political issues like an assault weapon ban and paid parental leave, according to a study by a team at Stanford’s Polarization and Social Change Lab. The researchers compared people’s political opinions on a topic before and after reading an AI-generated argument, and found that these arguments can be as effective as human-written ones in persuading the readers: “AI ranked consistently as more factual and logical, less angry, and less reliant upon storytelling as a persuasive technique.” 

The team points to concerns about the use of generative AI in a political context, such as in lobbying or online discourse. (For more on the use of generative AI in politics, do read this recent piece by Nathan Sanders and Bruce Schneier.)
