AI savants, recognizing bias, and building machines that think like people

Despite impressive advances, three speakers at EmTech Digital show how far there is to go in the AI world.
March 26, 2018
Photo: Jeremy Portje

Even with all the amazing examples of progress in artificial intelligence, such as self-driving cars and the victories of AlphaGo, the technology is still very narrow in its accomplishments and far from autonomous. Indeed, says Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, today’s machine-learning systems are “AI savants.”

Etzioni, speaking today at MIT Technology Review’s annual EmTech Digital conference in San Francisco, explained that self-driving cars and speech recognition are built on machine learning. And even today, he said, 99 percent of machine learning is based on human work.

Etzioni pointed out that machine learning needs large amounts of data, all of which needs to be labeled: this is a dog; this is a cat. And then people need to supply the appropriate algorithms. All this relies on manual labor.
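To make that dependence concrete, here is a minimal sketch (mine, not from the talk) of the workflow Etzioni describes, using scikit-learn. The dog/cat measurements, the labels, and the choice of logistic regression are all illustrative assumptions; the point is that the labels and the algorithm are supplied by people.

```python
# A toy supervised-learning pipeline: humans label the examples
# ("this is a dog; this is a cat"), then choose and fit an algorithm.
from sklearn.linear_model import LogisticRegression

# Hand-labeled data (the manual labor): [weight_kg, ear_length_cm]
features = [[30.0, 10.0], [25.0, 9.0], [4.0, 6.5], [3.5, 7.0]]
labels = ["dog", "dog", "cat", "cat"]

model = LogisticRegression()   # the human-supplied algorithm
model.fit(features, labels)    # the pattern-finding step

print(model.predict([[28.0, 9.5]]))  # most likely ['dog']
```

Scale the toy data up to millions of images and the shape of the problem is unchanged: someone still has to produce the labels and pick the algorithm before any learning happens.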

AI is “not something mystical. It’s not something magical,” he said. “It will take a lot of work to go beyond current capabilities.” 

Key to recognizing the limitations of today’s AI is understanding the difference between autonomy and intelligence. People have both. But Etzioni pointed out that AI systems, even if they have very high intelligence for a specific task, tend to be low in autonomy. And that lack of autonomy limits the technology’s ability to take on many broad problems.

Despite the limitations of AI, Etzioni is keen on many potential applications. Near the top of his list are AI-based scientific breakthroughs: machine-learning systems could read millions of scientific papers, looking for trends and forming hypotheses. “Imagine a cure for cancer buried in the thousands and even millions of clinical trials,” he said.

Etzioni also warned about the dangers introduced by machine learning, such as algorithmic bias: “Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models.” He said, “This is a real problem.”
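A toy illustration of the mechanism (my own, not from the talk): if historical labels are skewed against one group, a model trained on them reproduces the skew even for identically qualified cases. The features, labels, and classifier below are all hypothetical.

```python
# Biased training data: equally qualified applicants, but group 1
# was never hired in the historical record the model learns from.
from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, group]; label 1 = hired
X = [[0.9, 0], [0.8, 0], [0.9, 1], [0.8, 1]]
y = [1, 1, 0, 0]

model = DecisionTreeClassifier().fit(X, y)

# Identical qualifications, different group -> different outcome:
print(model.predict([[0.9, 0], [0.9, 1]]))  # [1 0]
```

Since qualification scores don’t separate the classes here, the model latches onto group membership, exactly the pattern-finding Etzioni warns about.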

But in taking on such problems, the strategy is clear. “AI is a tool,” he said. “The choice about how it gets deployed is ours.”

NYU professor Brenden Lake, who works at the intersection of data science and psychology, showed how insights from cognitive science can help build the next generation of artificial intelligence. Model building, rather than pattern recognition, can help AI systems develop new skills, such as playing an unfamiliar game or recognizing new handwritten characters. The study of the human mind “will play a key role in developing this technology in the future,” Lake believes.

Microsoft researcher Timnit Gebru brought a dose of reality to the proceedings with her examples of AI bias in action and a reminder to the audience about the need to develop smart standards for using the data at scientists’ fingertips. She compared the lack of standards in the AI industry to the initial chaos and subsequent regulation of the automobile industry, pointing out that cars were initially believed to be “inherently evil,” much as AI systems are feared today. “AI has a lot of opportunities, but we have to take this idea of a safety standardization process seriously,” she said.
