You know the looks: the stare that says "I'm bored," or the smile that means "Keep talking." But many people with autism struggle to read the silent cues that tell us how to behave in conversation. Those who miss such cues may act inappropriately, droning on, for instance, when it's time to stop talking, says Rana el Kaliouby, a postdoctoral associate at MIT's Media Lab. With colleagues Alea Teeters, a grad student, and Professor Rosalind Picard, el Kaliouby is developing a teaching tool to help.
The prototype of the ESP, or emotional-social intelligence prosthesis, consists of a small neck-mounted camera and a belt-mounted computer. Autistic people could use the device to learn about faces by watching themselves.
During conversation, the “self-cam” films the wearer’s face. The computer analyzes eye, eyebrow, mouth, and head movements and infers what they mean. It then produces a graph indicating when the wearer appears to be concentrating, thinking, agreeing, disagreeing, or expressing interest or confusion. The user can download the videos and watch them alongside the graphs.
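The pipeline described above can be sketched in miniature: per-frame facial measurements go in, a score for each mental state comes out, and the highest-scoring state per frame forms the timeline the wearer would review. Everything here is an illustrative assumption, including all feature names, states, and weights; it is not the actual MIT system.

```python
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    """Simplified facial measurements for one video frame (all illustrative)."""
    brow_raise: float    # 0..1, eyebrow elevation
    head_nod: float      # 0..1, vertical head movement
    head_shake: float    # 0..1, horizontal head movement
    mouth_open: float    # 0..1, degree of mouth opening
    gaze_hold: float     # 0..1, sustained eye contact

# Toy linear weights mapping features to mental-state scores (assumed values).
STATE_WEIGHTS = {
    "agreeing":      {"head_nod": 1.0},
    "disagreeing":   {"head_shake": 1.0},
    "interested":    {"gaze_hold": 0.6, "brow_raise": 0.4},
    "confused":      {"brow_raise": 0.5, "mouth_open": 0.5},
    "concentrating": {"gaze_hold": 1.0},
}

def infer_states(frame: FrameFeatures) -> dict:
    """Score each mental state for one frame as a weighted sum of features."""
    feats = vars(frame)
    return {
        state: sum(w * feats[f] for f, w in weights.items())
        for state, weights in STATE_WEIGHTS.items()
    }

def state_timeline(frames: list) -> list:
    """Label each frame with its highest-scoring state -- the data behind
    the graph the wearer reviews alongside the video."""
    labels = []
    for f in frames:
        scores = infer_states(f)
        labels.append(max(scores, key=scores.get))
    return labels
```

A real system would replace the hand-set weights with a trained classifier over tracked facial landmarks, but the shape of the computation, features per frame in, a labeled timeline out, is the same.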