The Puzzling Paradox of Sign Language

It takes longer to sign words than to say them. So how is it possible to sign and speak at the same rate?

Here’s a curious paradox related to American Sign Language, the system of hand-based gestures used by around 2 million deaf people in the US and elsewhere to communicate.

Almost 40 years ago, researchers discovered that although it takes longer to make signs than to say the equivalent words, on average sentences can be completed in about the same time. How can that be possible?

Today, Andrew Chong and buddies at Princeton University in New Jersey give us the answer. They say that the information content of the 45 handshapes that make up American Sign Language is higher than the information content of phonemes, the building blocks of the spoken word. In other words, there is greater redundancy in spoken English than in American Sign Language.

In a way, that’s a trivial explanation, a mere restatement of the problem. What’s impressive about the Princeton contribution is the way they have arrived at this conclusion.

The team determined the entropy of American Sign Language experimentally, by measuring the frequency of handshapes in video logs made by deaf people and uploaded to youtube.com, deafvideo.tv and deafread.com, as well as in video recordings of signed conversations made on campus.

It turns out that the information content of handshapes is on average just 0.5 bits per handshape less than the theoretical maximum of log2(45) ≈ 5.5 bits, which would be reached only if every handshape were equally likely. By contrast, the information content per phoneme in spoken English is some 3 bits lower than its maximum.
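
To make the numbers concrete, here is a minimal sketch in Python of the kind of calculation involved: estimate the Shannon entropy of a handshape inventory from observed frequency counts and compare it with the log2(45) maximum. The handshape labels and counts below are invented for illustration; they are not the figures the Princeton team measured.

```python
# Entropy estimate from handshape frequency counts (illustrative only).
import math
from collections import Counter

def entropy_bits(counts):
    """Shannon entropy in bits per symbol, H = -sum(p * log2(p))."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# Hypothetical counts standing in for the video-log measurements;
# a real tally would cover all 45 handshapes.
handshape_counts = Counter({"B": 520, "A": 410, "5": 390, "1": 350, "S": 240})

h = entropy_bits(handshape_counts)
h_max = math.log2(45)        # maximum possible: every handshape equally likely
redundancy = 1 - h / h_max   # share of the channel spent on redundancy

print(f"estimated entropy: {h:.2f} bits per handshape")
print(f"theoretical max:   {h_max:.2f} bits per handshape")
print(f"redundancy:        {redundancy:.0%}")
```

Plugging in the paper's measured shortfall of roughly 0.5 bits against the 5.5-bit maximum gives a redundancy of under 10 per cent for handshapes, while a 3-bit shortfall implies a far larger fraction for spoken phonemes.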

This raises an interesting question. The spoken word has all this redundancy for a reason: it allows us to be understood over a noisy channel. Lessen the redundancy and your capacity to deal with noise is correspondingly reduced.

Why would sign language need less redundancy? “Entropy might be higher for handshapes than English phonemes because the visual channel is less noisy than the auditory channel…so error correction is less necessary,” say Chong and co.

They go on to speculate that signers cope with errors in an entirely different way to speakers. “Difficulties in visual recognition of handshapes could be solved by holding or slowing the transition between those handshapes for longer amounts of time, while difficulties in auditory recognition of spoken phonemes cannot always be easily solved by speaking phonemes for longer amounts of time,” they say.

And why is all this useful? Chong and friends say that if sign language is ever to be encoded and transmitted electronically, a better understanding of its information content will be essential for developing encoders and decoders that do the job. A worthy pursuit by any standards.
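
To see why the frequency statistics matter to an encoder, here is another illustrative Python sketch, not a scheme from the paper: it builds a Huffman code over a handful of hypothetical handshape counts, so that common handshapes get shorter codewords and the average codeword length approaches the measured entropy.

```python
# Huffman coding over handshape frequencies (illustrative only).
import heapq
import itertools
from collections import Counter

def huffman_code(counts):
    """Return {symbol: bitstring}; frequent symbols get shorter codewords."""
    tiebreak = itertools.count()             # avoids comparing dicts in the heap
    heap = [(c, next(tiebreak), {sym: ""}) for sym, c in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        c1, _, codes1 = heapq.heappop(heap)  # two least frequent subtrees
        c2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in codes1.items()}
        merged.update({s: "1" + b for s, b in codes2.items()})
        heapq.heappush(heap, (c1 + c2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical counts; a real encoder would use the measured frequencies
# of all 45 handshapes.
handshape_counts = Counter({"B": 520, "A": 410, "5": 390, "1": 350, "S": 240})

code = huffman_code(handshape_counts)
total = sum(handshape_counts.values())
avg_bits = sum(handshape_counts[s] * len(code[s]) for s in code) / total

print(code)
print(f"average codeword length: {avg_bits:.2f} bits per handshape")
```

The closer that average gets to the entropy of the source, the less transmission capacity is wasted, which is why pinning down the entropy of American Sign Language matters for any practical encoding scheme.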

Ref: arxiv.org/abs/0912.1768: Frequency of Occurrence and Information Entropy of American Sign Language
