The US just released 10 principles that it hopes will make AI safer

All future AI regulations will need to clear the checklist.
January 7, 2020
An American flag. Brandon Day/Unsplash

The White House has released 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development of the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.

The principles, released by the White House Office of Science and Technology Policy (OSTP), have three main goals: to ensure public engagement, limit regulatory overreach, and, most important, promote trustworthy AI that is fair, transparent, and safe. They are intentionally broadly defined, US deputy chief technology officer Lynne Parker said during a press briefing, to allow each agency to create more specific regulations tailored to its sector.

In practice, federal agencies will now be required to submit a memorandum to OSTP to explain how any proposed AI-related regulation satisfies the principles. Though the office doesn’t have the authority to nix regulations, the procedure could still provide the necessary pressure and coordination to uphold a certain standard.

“OSTP is attempting to create a regulatory sieve,” says R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT. “A process like this seems like a very reasonable attempt to build some quality control into our AI policy.”

The principles (with my translation) are:

  1. Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
  2. Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
  3. Scientific integrity and information quality. Policy decisions should be based on science. 
  4. Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
  5. Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
  6. Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
  7. Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
  8. Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
  9. Safety and security. Agencies should keep all data used by AI systems safe and secure.
  10. Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

The newly proposed plan signifies a remarkable U-turn from the White House’s stance less than two years ago, when people working in the Trump administration said there was no intention of creating a national AI strategy. Instead, the administration argued that minimizing government interference was the best way to help the technology flourish.

But as more and more governments around the world, and especially China, invest heavily in AI, the US has felt significant pressure to follow suit. During the press briefing, administration officials offered a new line of logic for an increased government role in AI development. 

“The US AI regulatory principles provide official guidance and reduce uncertainty for innovators about how their own government is approaching the regulation of artificial intelligence technologies,” said US CTO Michael Kratsios. This will further spur innovation, he added, allowing the US to shape the future of the technology globally and counter influences from authoritarian regimes.

There are a number of ways this could play out. Done well, the process could encourage agencies to hire more personnel with technical expertise, create definitions and standards for trustworthy AI, and lead to more thoughtful regulation in general. Done poorly, it could give agencies incentives to skirt the requirements or put up bureaucratic roadblocks to the regulations necessary for ensuring trustworthy AI.

Edelman is optimistic. “The fact that the White House pointed to trustworthy AI as a goal is very important,” he says. “It sends an important message to the agencies.”

