The White House has released 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development of the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.
The principles, released by the White House Office of Science and Technology Policy (OSTP), have three main goals: to ensure public engagement, limit regulatory overreach, and, most importantly, promote trustworthy AI that is fair, transparent, and safe. They are intentionally broad, US deputy chief technology officer Lynne Parker said during a press briefing, to allow each agency to create more specific regulations tailored to its sector.
In practice, federal agencies will now be required to submit a memorandum to OSTP to explain how any proposed AI-related regulation satisfies the principles. Though the office doesn’t have the authority to nix regulations, the procedure could still provide the necessary pressure and coordination to uphold a certain standard.
“OSTP is attempting to create a regulatory sieve,” says R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT. “A process like this seems like a very reasonable attempt to build some quality control into our AI policy.”
The principles (with my translation) are:
- Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
- Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
- Scientific integrity and information quality. Policy decisions should be based on science.
- Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
- Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
- Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
- Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
- Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
- Safety and security. Agencies should keep all data used by AI systems safe and secure.
- Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.
The newly proposed plan signifies a remarkable U-turn from the White House’s stance less than two years ago, when people working in the Trump administration said there was no intention of creating a national AI strategy. Instead, the administration argued that minimizing government interference was the best way to help the technology flourish.
But as more and more governments around the world, and especially China, invest heavily in AI, the US has felt significant pressure to follow suit. During the press briefing, administration officials offered a new line of logic for an increased government role in AI development.
“The US AI regulatory principles provide official guidance and reduce uncertainty for innovators about how their own government is approaching the regulation of artificial intelligence technologies,” said US CTO Michael Kratsios. This will further spur innovation, he added, allowing the US to shape the future of the technology globally and counter influences from authoritarian regimes.
There are a number of ways this could play out. Done well, it would encourage agencies to hire more personnel with technical expertise, create definitions and standards for trustworthy AI, and lead to more thoughtful regulation in general. Done poorly, it could give agencies incentives to skirt the requirements or put up bureaucratic roadblocks to the regulations necessary for ensuring trustworthy AI.
Edelman is optimistic. “The fact that the White House pointed to trustworthy AI as a goal is very important,” he says. “It sends an important message to the agencies.”