The White House has released 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development of the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.
The principles, released by the White House Office of Science and Technology Policy (OSTP), have three main goals: to ensure public engagement, limit regulatory overreach, and, most important, promote trustworthy AI that is fair, transparent, and safe. They are intentionally broad, US deputy chief technology officer Lynne Parker said during a press briefing, to allow each agency to create more specific regulations tailored to its sector.
In practice, federal agencies will now be required to submit a memorandum to OSTP to explain how any proposed AI-related regulation satisfies the principles. Though the office doesn’t have the authority to nix regulations, the procedure could still provide the necessary pressure and coordination to uphold a certain standard.
“OSTP is attempting to create a regulatory sieve,” says R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT. “A process like this seems like a very reasonable attempt to build some quality control into our AI policy.”
The principles (with my translation) are:
- Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
- Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
- Scientific integrity and information quality. Policy decisions should be based on science.
- Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
- Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
- Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
- Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
- Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
- Safety and security. Agencies should keep all data used by AI systems safe and secure.
- Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.
The newly proposed plan marks a remarkable U-turn from the White House's stance less than two years ago, when Trump administration officials said there was no intention of creating a national AI strategy. Instead, the administration argued that minimizing government interference was the best way to help the technology flourish.
But as more and more governments around the world, and especially China, invest heavily in AI, the US has felt significant pressure to follow suit. During the press briefing, administration officials offered a new line of logic for an increased government role in AI development.
“The US AI regulatory principles provide official guidance and reduce uncertainty for innovators about how their own government is approaching the regulation of artificial intelligence technologies,” said US CTO Michael Kratsios. This will further spur innovation, he added, allowing the US to shape the future of the technology globally and counter influences from authoritarian regimes.
There are a number of ways this could play out. Done well, it would encourage agencies to hire more personnel with technical expertise, create definitions and standards for trustworthy AI, and lead to more thoughtful regulation in general. Done poorly, it could give agencies incentives to skirt around the requirements or put up bureaucratic roadblocks to the regulations necessary for ensuring trustworthy AI.
Edelman is optimistic. “The fact that the White House pointed to trustworthy AI as a goal is very important,” he says. “It sends an important message to the agencies.”