
Artificial-intelligence development should be regulated, says Elon Musk

February 19, 2020
Elon Musk addresses reporters at a SpaceX press conference. (Associated Press)

Regulators, rein us in: Tesla and SpaceX CEO Elon Musk has said that the development of advanced artificial intelligence, including AI created by his own companies, should be regulated. He tweeted the remark in response to an article published this week by MIT Technology Review about OpenAI (which Musk cofounded but has since left) that describes how the organization has drifted from its initial mission of developing AI safely and fairly to become secretive and preoccupied with raising money. Asked whether he meant AI should be regulated by individual governments or on a global scale, for example by the UN, Musk replied: “Both.”

Timely: The European Union unveiled a plan today to regulate “high risk” AI systems, with new draft laws expected to follow at the end of 2020. Last year, 42 countries signed on to a pledge to take steps to regulate AI. The US and China, however, currently appear to be prioritizing innovation and supremacy in the field over regulation and safety concerns.

Long-standing worries: This is far from the first time Musk has expressed concerns about the potential negative consequences of AI development. He’s previously described it as “our biggest existential threat” and “potentially more dangerous than nukes.” In 2018 he told Recode that he thought a government committee should spend a year or two “gaining insight about AI” and then come up with regulations to ensure that it is developed and used safely.

