Artificial intelligence

America and its economic allies have announced five “democratic” principles for AI

CTO of the United States, Michael Kratsios. Credit: OECD

The Trump administration might be building walls between America and some countries, but it is eager to forge alliances when it comes to shaping the course of artificial intelligence.

The Organization for Economic Co-operation and Development (OECD), a coalition of countries dedicated to promoting democracy and economic development, has announced a set of five principles for the development and deployment of artificial intelligence. The announcement came at the OECD Forum in Paris.

The OECD does not include China, and the principles outlined by the group seem to contrast with the way AI is being deployed there, especially for face recognition and surveillance of ethnic groups associated with political dissent.

Speaking at the event, America’s recently appointed CTO, Michael Kratsios, said, “We are so pleased that the OECD AI recommendations address so many of the issues which are being tackled by the American AI Initiative.”

The OECD Principles on AI read as follows:

1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.

3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

5. Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
