The last time you saw someone walk into a lamppost while focusing intently on a smartphone, you probably thought, “That was dumb!” If you were Juan-David Hincapié-Ramos, though, you might have thought, “There should be an app for this.”
Hincapié-Ramos, a postdoctoral researcher at the University of Manitoba’s human-computer interaction lab in Winnipeg, Canada, is working on just that. Called CrashAlert, his system uses a depth-sensing camera to spot obstacles and pops up a warning on a smartphone screen before you smack into them, allowing you to safely navigate public spaces without taking your eyes from your handset. You might think the best solution would be to put your phone away, but Hincapié-Ramos says that isn’t realistic.
“People aren’t going to just stop texting and walking, and in order to incorporate [cell phones] into our everyday new habits, they have to help with the things they take away from us, like peripheral vision,” he says.
Eventually, Hincapié-Ramos hopes, his technology will be integrated with smartphones, potentially helping to reduce the number of bruised egos and foreheads. And he believes it could be just one way that our phones become increasingly aware of our surroundings. Tests of a prototype are detailed in a short paper coauthored with University of Manitoba associate professor Pourang Irani, which will be presented in May at the Computer-Human Interaction conference in Paris.
The prototype consists of a seven-inch Acer tablet computer with a Microsoft Kinect attached to its back—an easy, inexpensive (albeit clunky) way to add a depth-sensing camera to a mobile device. A laptop and a large battery that powers the Kinect are carried along in a backpack.
To simulate a task that demanded about as much concentration as texting, while also letting the researchers measure how users respond to alerts about approaching obstacles, Hincapié-Ramos built an Android app with a Whac-a-Mole-like game. His eight subjects—all accustomed to texting while walking—played it on the tablet while doing their best to navigate a busy cafeteria. To ensure each subject encountered at least four potential collisions, a volunteer was also instructed to get in their way.
The researchers tried a few different ways of alerting participants to obstacles captured by the depth camera, each displayed within a rectangle across the top of the screen; in every variant, little red squares popped up when an obstacle came within two meters. They logged how long users walked without bumping into anything, how many moles they whacked, and what kinds of impacts they encountered (or avoided). Hincapié-Ramos says that when using CrashAlert, subjects felt safer and got out of the way of obstacles earlier, without compromising their performance in the game.
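The core idea — flag anything in the depth camera's view that comes within two meters, and light up the corresponding part of a strip across the top of the screen — can be sketched in a few lines. This is only a conceptual illustration, not the authors' implementation: the cell count, function names, and the way the frame is sliced are assumptions, and a real system would also need smoothing and filtering of the noisy Kinect depth data.

```python
import numpy as np

ALERT_DISTANCE_M = 2.0  # CrashAlert warns when an obstacle is within two meters
STRIP_CELLS = 8         # hypothetical: number of cells in the on-screen alert strip

def alert_cells(depth_frame, threshold=ALERT_DISTANCE_M, cells=STRIP_CELLS):
    """Map a depth frame to a row of alert flags for the on-screen strip.

    depth_frame: 2D array of distances in meters (e.g. from a Kinect);
    zeros mark pixels with no depth reading and are ignored here.
    Returns one boolean per horizontal cell: True means an obstacle
    closer than `threshold` appears in that slice of the camera's view,
    so a red square would be drawn in that part of the strip.
    """
    valid = np.where(depth_frame > 0, depth_frame, np.inf)  # drop missing readings
    columns = np.array_split(valid, cells, axis=1)          # one vertical slice per cell
    return [bool((col < threshold).any()) for col in columns]

# Example: a fake 4x8 depth frame with one close obstacle on the right side
frame = np.full((4, 8), 5.0)       # everything 5 m away...
frame[:, 6] = 1.2                  # ...except something 1.2 m away on the right
print(alert_cells(frame, cells=4))  # [False, False, False, True]
```

Splitting the frame into vertical slices preserves the obstacle's rough horizontal position, which is what lets the warning strip show *where* the hazard is, not just that one exists.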
Now the researchers are building a self-contained prototype and working to refine the software. Hincapié-Ramos believes it would be easy for phone makers to add an obstacle-sensing feature to handsets. Eventually, he thinks, such computer vision could help make our smartphones much more aware of our surroundings.
Those long-term goals may be useful, but the idea of CrashAlert, at least, is repellent to Clifford Nass, a professor at Stanford University who studies human-computer interaction. He sees it as “the epitome of removal from both the physical and social world.”
“Why do we want to encourage people to be disconnected from the world?” he asks.
But Juan Pablo Hourcade, an associate professor in human-computer interaction at the University of Iowa, says that while people using CrashAlert might have less reason to pay attention to their surroundings, perhaps it could also encourage them to be more social by letting them know when a friend is close by.