Self-driving cars have come a long way, but they still struggle to deal with unexpected situations. Yibiao Zhao wants to change that.
At MIT Technology Review’s EmTech conference on Tuesday, Zhao, the cofounder of iSee, an MIT spin-off, highlighted the limitations of existing machine-learning approaches for self-driving cars—which are based on pattern recognition—and outlined his work on a new set of algorithms that seek to mimic humans’ instinctive understanding of the physical world. (See also our previous coverage of Zhao’s work: “Finally, a Driverless Car with Some Common Sense.”)
“When we see something for just a few seconds as we’re driving, we can quickly deduce the intention of the driver in front of us,” said Zhao. “We want cars to have this same predictive capability.”
To give cars that capability, Zhao and his colleagues have drawn on some of their earlier research inspired by cognitive science. That included teaching a robot how to crack a nut using a hammer, then taking the hammer away and getting the robot to select the next most appropriate tool from a random assortment.
“The aim was to give the robot the same deep understanding of the properties of tools as a human,” said Zhao.
His work could lead to advances in fields beyond autonomous cars. Zhao has also been involved in a project, sponsored by the Defense Advanced Research Projects Agency, to develop robots that can intuitively understand the intentions of the soldiers they’re paired with on the battlefield.