
The Government Isn’t Doing Enough to Solve Big Problems with AI

Leaders in artificial intelligence say the government should be more influential in shaping the technology’s development.
December 9, 2016

The government should play a bigger role in developing new tools based on artificial intelligence, or we could miss out on revolutionary applications because they don’t have obvious commercial upside.

That was the message from prominent AI technologists and researchers at a Senate committee hearing last week. They agreed that AI is in a crucial developmental moment, and that government has a unique opportunity to shape its future. They also said that the government is in a better position than technology companies to invest in AI applications aimed at broad societal problems.

Today just a few companies, led by Google and Facebook, account for the lion’s share of AI R&D in the U.S. But Eric Horvitz, technical fellow and managing director of Microsoft Research, told the committee members that there are important areas that are rich and ripe for AI innovation, such as homelessness and addiction, where the industry isn’t making big investments. The government could help support those pursuits, Horvitz said.


For a more specific example, take the plight of a veteran seeking information online about medical options, says Andrew Moore, dean of the school of computer science at Carnegie Mellon University. An application that could answer freeform questions, search multiple government data sets at once, and lay out a veteran’s health care options might already exist if it were commercially attractive, he says.

There is a “real hunger for basic research,” says Greg Brockman, cofounder and chief technology officer of the nonprofit research company OpenAI, because technologists understand that they haven’t made the most important advances yet. If the bulk of that research is left to industry, he says, we could miss out not only on useful applications but also on the chance to adequately explore urgent scientific questions about ethics, safety, and security while the technology is still young. Since the field of AI is growing “exponentially,” it’s important to study these things now, he says, and the government could make that a “top line thing that they are trying to get done.”

The hearing was more of a pep rally than a debate. Lawmakers probed the panelists not only on the areas where government might help, but also on the effects AI could have on commerce and on American competitiveness abroad. Senator Ted Cruz of Texas, who convened the hearing, expressed concern that the United States could cede its leadership in AI development to China or another foreign government.

In response to concerns from lawmakers about the technology’s safety and its potential to eliminate jobs, Horvitz and Brockman both said that addressing these long-term questions calls for investing more now in research that is focused on them. The White House made a similar argument in a “National Artificial Intelligence Research and Development Strategic Plan,” which it published in October.

Brockman warns that if the government and other nonprofit entities don’t become bigger players in the field of AI, the danger is that the intellectual property, infrastructure, and expertise needed to “build powerful systems” could become sequestered inside just one or a few companies. AI is going to affect the lives of all of us no matter what, he says. “So I think it’s important that the people who have a say in how it affects us are representative of us all.”
