A Startup Tries to Make a Better Artificial Brain
Vicarious thinks it can mimic the brain to create software that learns to see as we do.
Your eyes work with your brain to teach you about the world. You learn to recognize objects, people, and places, and you learn to imagine new things. A startup called Vicarious thinks computers could learn to do likewise, and it’s building software that tries to process visual information the way the brain does.
Vicarious hopes to combine neuroscience and computer science to create a visual perception system inspired by the neocortex, the wrinkly outer portion of the brain that deals with speaking, hearing, seeing, moving, and other functions.
The idea of a neural network—software that can mimic the way the brain works by building connections between artificial neurons—has been around for decades. But Vicarious says it has refined and improved upon previous techniques.
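To make the basic concept concrete: an artificial neuron weights its inputs, applies a nonlinearity, and adjusts those weights from examples. The minimal sketch below teaches a single neuron the logical AND function; it illustrates only the general idea of learning by adjusting connections, not Vicarious's unpublished architecture.

```python
# A minimal sketch of an artificial neuron that learns from examples.
# This illustrates the general neural-network concept only; it is not
# Vicarious's system, whose details have not been published.
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def train_step(inputs, target, weights, bias, lr=0.5):
    """Nudge weights toward the target (gradient descent on squared error)."""
    out = neuron(inputs, weights, bias)
    grad = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Teach the neuron the logical AND of two binary inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(5000):
    for x, t in data:
        w, b = train_step(x, t, w, b)

print(round(neuron([1, 1], w, b)))  # learned response to (1, 1)
print(round(neuron([0, 1], w, b)))  # learned response to (0, 1)
```

Real systems chain thousands or millions of such units into layers; the training principle, however, is the same repeated adjustment of connection strengths.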
Cofounder Dileep George, who was formerly chief technology officer at an AI company called Numenta, says others have tended to base their neural-network software on the “neocognitron” model first proposed in 1980 (which itself is based on a visual-cortex model devised decades earlier). These systems are typically trained to recognize visual input using random, static images, he adds.
Vicarious, George says, is using a more sophisticated architecture and training its system with a video stream that varies over time. “We’re going back to the drawing board and asking, ‘What is wrong with that architecture people have been building?’” he says.
Vicarious hopes to have a vision system developed and possibly commercialized in the next several years. Cofounder D. Scott Phoenix believes it could have many applications: a computer could analyze diagnostic imagery to determine if a patient has cancer or glance at a dinner plate to let you know how many calories you’re about to consume. “Having a visual perception system that works well would be enormously transformative to anything a person wants to do,” he says.
Phoenix says that Vicarious’s software, like the human brain, essentially learns by seeing a series of images and forming connections in response. This means it’s smart enough to identify an object even if there’s missing information—it will, for example, still recognize an arm even if it’s obscured by paint or a wristwatch.
Vicarious has not published details of its technology. But the company, which was created in 2010, has piqued the interest of some investors. Last month, it raised a $15 million series A round of venture funding from a group of investors that includes Facebook cofounder Dustin Moskovitz.
Andrew Ng, director of the Stanford Artificial Intelligence Laboratory and an associate professor at Stanford University, says that harnessing enough computing power to build accurate simulations of neural processes can be a big challenge to efforts like Vicarious’s. Ng was involved in a recent project at Google in which software watched images randomly chosen from YouTube videos. After a week, the software learned to detect cats, even though it hadn’t been told what a cat is. But the massive neural network behind it required 16,000 computer processors.
Alan Peters, an associate professor of electrical engineering at Vanderbilt University and chief technology officer of Universal Robotics, another company that makes AI software for image classification, is skeptical that the human visual cortex can be mimicked without building a whole system incorporating a body that can move around in its environment. But he still thinks the company’s work could be useful. “Trying to solve these problems in different ways is usually a good thing to do,” he says.
While Ng doesn’t think the technology to build an artificial visual cortex is quite there yet, he notes that he has seen rapid advances in the last few years. “Obviously, if it succeeds, there could be huge economic value,” he says.