MIT News magazine

Help for Autism

A new device teaches the interpretation of facial cues
November 14, 2006

You know the looks: the stare that says “I’m bored” or the smile that means “Keep talking.” But many people with autism struggle to read the silent cues that tell us how to behave in conversation. Those who miss such cues may act inappropriately, for instance by droning on when it’s time to stop talking, says Rana el Kaliouby, a postdoctoral associate at MIT’s Media Lab. With colleagues Alea Teeters, a grad student, and Professor Rosalind Picard, el Kaliouby is developing a teaching tool to help.

Rana el Kaliouby and her camera. (Courtesy of Rana el Kaliouby)

The prototype of the ESP, or emotional-social intelligence prosthesis, consists of a small neck-mounted camera and a belt-mounted computer. Autistic people could use the device to learn about faces by watching themselves.

During conversation, the “self-cam” films the wearer’s face. The computer analyzes eye, eyebrow, mouth, and head movements and infers what they mean. It then produces a graph indicating when the wearer appears to be concentrating, thinking, agreeing, disagreeing, or expressing interest or confusion. The user can download the videos and watch them alongside the graphs.
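The pipeline described above, per-frame facial analysis turned into a reviewable timeline of inferred states, can be sketched roughly as follows. This is an illustrative sketch only, not the Media Lab's actual software: the feature scores, the `infer_state` stand-in, and the smoothing window are all assumptions made for the example.

```python
# Illustrative sketch of the self-cam pipeline: per-frame facial
# features -> an inferred state per frame -> a smoothed timeline
# the wearer can review alongside the recorded video.
from collections import Counter

# The states the article says the prototype's graph reports.
STATES = ["concentrating", "thinking", "agreeing",
          "disagreeing", "interested", "confused"]

def infer_state(features):
    """Stand-in for the real eye/eyebrow/mouth/head analysis:
    here a frame is just a dict of per-state scores, and we pick
    the state with the highest score."""
    return max(features, key=features.get)

def build_timeline(frames, window=3):
    """Smooth per-frame inferences with a majority vote over a
    sliding window, yielding (frame_index, state) pairs."""
    raw = [infer_state(f) for f in frames]
    timeline = []
    for i in range(len(raw)):
        lo = max(0, i - window // 2)
        hi = min(len(raw), i + window // 2 + 1)
        state, _ = Counter(raw[lo:hi]).most_common(1)[0]
        timeline.append((i, state))
    return timeline

# Toy input: three frames with hypothetical per-state scores.
frames = [
    {"agreeing": 0.7, "confused": 0.2},
    {"agreeing": 0.6, "confused": 0.3},
    {"confused": 0.8, "agreeing": 0.1},
]
print(build_timeline(frames))
```

A plot of this timeline over the length of a conversation would correspond to the graph the article describes, which the user can then watch side by side with the downloaded video.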

Recording others’ faces may also be helpful, says el Kaliouby, but uninvited cameras violate others’ privacy. She says that though it is an open question whether people with autism make the same kinds of facial expressions that others do, they may learn the relationship between faces and emotions best by starting with their own. Those who feel anxious when looking at faces may feel most comfortable watching themselves. Parents and friends can also wear self-cams and offer their videos for viewing. Future versions of the system may use two self-cams, each communicating wirelessly with the other, to provide real-time instruction. One person’s ESP might send a message (“She’s losing interest”) to another person’s, which could alert its wearer with, say, a message whispered through an earphone.

The team is now testing whether its device helps high-functioning teens with autism improve their social skills over time.
