AI savants, recognizing bias, and building machines that think like people

Even with all the amazing examples of progress in artificial intelligence, such as self-driving cars and the victories of AlphaGo, the technology is still very narrow in its accomplishments and far from autonomous. Indeed, says Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, today’s machine-learning systems are “AI savants.”
Speaking at MIT Technology Review’s annual EmTech Digital conference in San Francisco, Etzioni explained that self-driving cars and speech recognition are both built on machine learning. And even today, he said, 99 percent of machine learning depends on human work.
Machine learning, Etzioni pointed out, requires large amounts of data, all of it labeled: this is a dog; this is a cat. People must then supply the appropriate algorithms. All of this rests on manual labor.
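To make that pipeline concrete, here is a minimal sketch of supervised learning in the sense Etzioni describes, using scikit-learn. The features, labels, and values are hypothetical toy data, not anything from his talk:

```python
# A minimal supervised-learning sketch: humans supply the labeled examples
# and choose the algorithm; the machine only fits patterns inside that frame.
# All values below are hypothetical toy data.
from sklearn.linear_model import LogisticRegression

# Hand-labeled data: each row is a (weight_kg, ear_length_cm) measurement.
X = [[30.0, 8.0], [4.5, 6.5], [25.0, 9.0], [3.8, 7.0]]
y = ["dog", "cat", "dog", "cat"]   # labels supplied by people

clf = LogisticRegression()         # algorithm also chosen by people
clf.fit(X, y)

print(clf.predict([[28.0, 8.5]]))  # -> ['dog'], a pattern learned from the labels
```

The model never sees anything people did not first measure, label, and frame; change the labels and it will just as happily learn a different pattern.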
AI is “not something mystical. It’s not something magical,” he said. “It will take a lot of work to go beyond current capabilities.”
Key to recognizing the limitations of today’s AI is understanding the difference between autonomy and intelligence. People have both. But Etzioni pointed out that AI systems, even if they have very high intelligence for a specific task, tend to be low in autonomy. And that lack of autonomy limits the technology’s ability to take on many broad problems.
Despite the limitations of AI, Etzioni is keen on many potential applications. Near the top of his list are AI-based scientific breakthroughs: machine learning could read millions of scientific papers, looking for trends and forming hypotheses. “Imagine a cure for cancer buried in the thousands and even millions of clinical trials,” he said.
Etzioni also warned about the dangers machine learning introduces, such as algorithmic bias. “Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models,” he said. “This is a real problem.”
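Etzioni’s point can be shown in a few lines. In this hypothetical sketch (my construction, not an example from the talk), a classifier trained on historical decisions that track a sensitive attribute learns to reproduce that pattern exactly:

```python
# A toy illustration of bias inheritance: if historical labels correlate
# with a sensitive attribute, the fitted model reproduces that correlation.
# All values here are fabricated for demonstration.
from sklearn.tree import DecisionTreeClassifier

# Features: [test_score, group], where group (0/1) is a sensitive attribute.
# These hypothetical historical labels favor group 0 regardless of score.
X = [[90, 0], [60, 0], [90, 1], [60, 1]]
y = ["hire", "hire", "reject", "reject"]

model = DecisionTreeClassifier().fit(X, y)

# Two candidates with identical scores, differing only in group membership:
print(model.predict([[90, 0], [90, 1]]))  # -> ['hire' 'reject']
```

The model is not malicious; it simply found the strongest pattern in the data it was given, which is precisely the problem Etzioni describes.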
But in taking on such problems, the strategy is clear. “AI is a tool,” he said. “The choice about how it gets deployed is ours.”
NYU professor Brenden Lake, who works at the intersection of data science and psychology, showed how insights from cognitive science can help build the next generation of artificial intelligence. Model building, rather than pattern recognition, can help AI systems develop new skills, like playing an unfamiliar game or recognizing new handwritten characters. The study of the human mind “will play a key role in developing this technology in the future,” Lake believes.
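One rough way to see the distinction, as a hypothetical sketch rather than Lake’s actual method: a model-based learner can classify a brand-new character from a single example by fitting a simple generative model to it, where a purely pattern-recognition approach would typically demand many labeled instances:

```python
# One-shot classification via a simple per-class generative model.
# This is an illustrative toy, not Lake's method; values are hypothetical
# stand-ins for pen-stroke features of handwritten characters.
import numpy as np

# One labeled example per novel "character", as 2-D feature vectors.
prototypes = {"glyph_A": np.array([1.0, 4.0]),
              "glyph_B": np.array([5.0, 1.0])}

def classify(x):
    # Treat each class as an isotropic Gaussian centered on its single
    # example; maximizing likelihood then means minimizing squared distance.
    return min(prototypes, key=lambda c: float(np.sum((x - prototypes[c]) ** 2)))

print(classify(np.array([1.2, 3.7])))  # -> glyph_A, learned from one example
```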
Microsoft researcher Timnit Gebru brought a dose of reality to the proceedings with her examples of AI bias in action and a reminder about the need to develop smart standards for using the data at scientists’ fingertips. She compared the lack of standards in the AI industry to the initial chaos and subsequent regulation of the automobile industry, pointing out that cars were initially believed to be “inherently evil,” much as AI systems are feared today. “AI has a lot of opportunities, but we have to take this idea of a safety standardization process seriously,” she said.