Computer scientists at the University of Pittsburgh have developed a way to make e-mail, instant messages, and texts a bit more personalized. Their software lets people use images of their own faces, instead of the more traditional emoticons, to communicate their mood. By automatically warping the facial features in a photo, the software can depict any of a range of animated emotional expressions, such as happy, sad, angry, or surprised.
All that is needed is a single photo of the person, preferably with a neutral expression, says Xin Li, who developed the system, called Face Alive Icons. “The user can upload the image from their camera phone,” he says. Then, by keying in familiar text symbols, such as “:)” for a smile, the user automatically contorts the face to reflect his or her desired expression.
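The keying-in step can be pictured as a simple lookup from typed emoticon shortcodes to expression labels. The sketch below is illustrative only; the shortcodes and label names are assumptions, not taken from Li's system.

```python
# Hypothetical mapping from typed emoticon shortcodes to expression
# labels, as a Face Alive Icons-style client might maintain.
EMOTICON_TO_EXPRESSION = {
    ":)": "happy",
    ":(": "sad",
    ">:(": "angry",
    ":o": "surprised",
}

def expression_for(text: str) -> str:
    """Return the expression label for a typed emoticon, or 'neutral'."""
    return EMOTICON_TO_EXPRESSION.get(text, "neutral")

print(expression_for(":)"))   # happy
```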
“Already, people use avatars on message boards and in other settings,” says Sheryl Brahnam, an assistant professor of computer information systems at Missouri State University, in Springfield. In many respects, she says, this system bridges the gap between emoticons and avatars.
This is not the first time that someone has tried to use photos in this way, says Li, who now works for Google in New York City. “But the traditional approach is to just send the image itself,” he says. “The problem is, the size will be too big, particularly for low-bandwidth applications like PDAs and cell phones.” Other approaches involve having to capture a different photo of the person for each unique emoticon, which only further increases the demand for bandwidth.
Li’s solution is not to send the picture each time it is used, but to store a profile of the face on the recipient device. This profile consists of a decomposition of the original photo. Every time the user sends an emoticon, the face is reassembled on the recipient’s device in such a way as to show the appropriate expression.
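The send-once scheme can be sketched as follows. The class and field names here are my own invention for illustration: the sender transmits a face profile a single time, and every later "face emoticon" message is just a short expression code, so per-message bandwidth stays tiny.

```python
# Illustrative sketch of the store-once, reassemble-locally idea.
from dataclasses import dataclass, field

@dataclass
class Recipient:
    profiles: dict = field(default_factory=dict)  # sender -> face profile

    def receive_profile(self, sender: str, profile: bytes) -> None:
        self.profiles[sender] = profile  # stored once, reused thereafter

    def receive_emoticon(self, sender: str, code: str) -> str:
        # Reassemble the face on the recipient's device: warp the stored
        # profile into the requested expression (warping elided here).
        profile = self.profiles[sender]
        return f"render({len(profile)}-byte profile, expression={code!r})"

bob = Recipient()
bob.receive_profile("alice", b"\x00" * 2048)   # one-time transfer
print(bob.receive_emoticon("alice", "happy"))  # tiny message thereafter
```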
To make this possible, Li first created generic computational models for each type of expression. Working with Shi-Kuo Chang, a professor of computer science at the University of Pittsburgh, and Chieh-Chih Chang, at the Industrial Technology Research Institute, in Taiwan, Li created the models using a learning program to analyze the expressions in a database of facial expressions and extract features unique to each expression. Each of the resulting models acts like a set of instructions telling the program how to warp, or animate, a neutral face into each particular expression.
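One minimal way to picture such a model is as a set of per-landmark displacements that move a neutral face toward an expression. The landmark names and offsets below are invented for illustration and are not the actual models from the paper.

```python
# Neutral-face landmark positions (x, y), assumed for this sketch.
NEUTRAL = {"mouth_left": (40, 80), "mouth_right": (60, 80)}

# A toy "smile" model: push the mouth corners outward and upward.
SMILE_MODEL = {"mouth_left": (-3, -4), "mouth_right": (3, -4)}

def warp(landmarks, model):
    """Apply a model's displacement to each landmark of a neutral face."""
    return {name: (x + model.get(name, (0, 0))[0],
                   y + model.get(name, (0, 0))[1])
            for name, (x, y) in landmarks.items()}

print(warp(NEUTRAL, SMILE_MODEL))
# mouth corners move out and up:
# {'mouth_left': (37, 76), 'mouth_right': (63, 76)}
```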
Once the photo has been captured, the user clicks on a few key areas of the image to help the program identify the facial features. The program can then decompose the image into the sets of features that change during warping and those that remain unaffected.
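The decomposition step might be sketched like this. Which regions count as deformable is an assumption made here for illustration; the point is simply that only the changing regions need per-expression treatment.

```python
# Landmarks the user clicked (names assumed for this sketch).
CLICKED_LANDMARKS = ["left_eye", "right_eye", "mouth", "eyebrows",
                     "nose_tip", "jawline", "forehead"]

# Regions the expression models deform (an assumption in this sketch).
DEFORMABLE = {"left_eye", "right_eye", "mouth", "eyebrows"}

def decompose(landmarks):
    """Split landmarks into those the warp animates and those it reuses."""
    changing = [p for p in landmarks if p in DEFORMABLE]
    static = [p for p in landmarks if p not in DEFORMABLE]
    return changing, static

changing, static = decompose(CLICKED_LANDMARKS)
print(changing)  # parts the warp will animate
print(static)    # parts reused unchanged in every expression
```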
Finally, these “pieces” make up a profile that, although it has to be sent to each of a user’s contacts, needs to be sent only once. This approach means that an unlimited number of expressions can be added to the system without increasing the file size or requiring any additional pictures.
Li says that preliminary evaluations carried out on eight subjects viewing hundreds of faces showed that the warped expressions are easily identifiable. The results of the evaluations are published in the current edition of the Journal of Visual Languages and Computing.
Face Alive Icons has now been incorporated into an application used for distance learning. “The teachers like to see the faces of their students,” says Li. So, rather than seeing a screen filled with identical emoticons, teachers use Face Alive Icons to view each of the virtual pupils in the classroom and observe how he or she is feeling.