One of his most successful inventions, says Georgia Tech professor Thad Starner, is a four-inch strip of Velcro that sticks his “Twiddler” keyboard to the side of his shoulder bag. The Twiddler is a handheld chording keyboard manufactured by the Handkey corporation, and the Velcro lets Starner grab his keyboard and start typing in just two seconds flat.
Indeed, speed of access is one of the determining factors in whether a mobile information device gets used for mundane and casual tasks, according to a paper Starner recently published. Two seconds from storage to use is optimal; if retrieving the device takes more than 10 seconds, it goes unused.
These and other findings are based not only on personal introspection, but on observations of hundreds of human subjects tested by the Contextual Computing Group at Georgia Tech’s Graphics, Visualization and Usability Center, which Starner founded in 1999.
Before the Velcro, Starner’s Twiddler rested in a holster or inside his shoulder bag, and it simply took too much time and effort to get the device out and take a note. After he slapped a strip of hooks on his bag’s flap and a matching strip of loops along the bottom of the Twiddler, he could grab the keyboard and start typing in a snap.
The Velcro epiphany is a simple solution to what looms as the larger usability issue with wearable computing: figuring out how to get people to truly integrate computers into their daily lives.
Now, his wearable computing use has grown. Sitting at a conference, walking down the hallway, or talking with an associate, he might see something he wants to remember. Rather than committing it to memory, he grabs the Twiddler, bangs a few buttons, then sticks the device back in its place. Wham. The words are not preserved through weak and flimsy neurons, but with silicon permanence.
An interesting anecdote to be sure, but just how relevant is some cyborg’s personal experience to the lives of normal computer users? Extraordinarily relevant, it turns out.
At the Computer Human Interaction 2004 conference in April, Starner presented a study that he and his students conducted at Georgia Tech’s Student Center. The researchers asked 138 passing subjects – mostly students – what they used to keep track of their appointments: their memory, scraps of paper, a day planner, or personal digital assistant of some kind. After the initial question, the respondents were asked to schedule a meeting the following Monday. Then, the researchers watched.
With the exception of people who claimed to keep everything in their head, roughly half of the people who said they used one method for tracking their activities actually used a different method to schedule the follow-up meeting.
The most inconsistent were the 44 day-planner users: only 14 actually opened their planners to write down the appointment. The rest either scribbled a note or committed the meeting to memory. They were hardly alone, though. Of the 22 people who claimed they used scraps of paper, nine didn’t bother making note of the meeting. Even the technologically inclined didn’t fare well: six of the 14 PDA users said their device took too long to get ready, and opted for other, simpler methods instead.
The takeaway, Starner says, is that ease of use permeates every interaction we have with wearable and mobile technologies. The easiest solution for remembering, if also the least efficient, remains memory. The Twiddler, though, may help overcome some of those barriers.
When I met with Starner in November, he was constantly grabbing for the Twiddler, jotting down a note, and then putting it back. It was all very unobtrusive – as long as I kept my eyes on his face, rather than on his hands. But what about people who don’t want to learn how to use a Twiddler, or don’t want to walk around with a four-inch piece of Velcro stuck to an ever-present fannypack that’s crammed full of electronics?
One possibility, says Starner, is what he calls “dual-purpose speech.” The idea is to combine a voice recognition system with people’s tendency to repeat words in conversations to trigger responses on the part of their own wearable computers.
Let’s say I’m walking down a hallway and ask Starner if he wants to meet with me next week.
“Next week?” he might say in response. I don’t know it, but just before saying “next week,” Starner pressed a little push-to-talk button, perhaps in his pocket. The computer heard the phrase “next week,” a voice command that brings up next week’s calendar in his head-mounted display.
“How about Monday?” I ask.
“Monday,” he says, pressing the button again. That’s another command, of course. The calendar flashes to the Monday view. “I’m busy all day,” he says. “How about Tuesday?”
By choosing which of the spoken words the computer actually processes, it’s possible to use dual-purpose speech to navigate through and update an electronic day planner. Although speech recognition is still pretty rough for many applications, this sort of low-vocabulary recognition is completely within the capabilities of today’s systems.
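The article doesn’t describe Starner’s actual software, but the mechanics of dual-purpose speech can be sketched in a few lines. This is a hypothetical illustration: the command table, the action strings, and the function name are all invented here. The key idea is that the recognizer only acts on an utterance while the push-to-talk button is held, and only when the phrase matches a small, fixed vocabulary – exactly the kind of low-vocabulary recognition today’s systems handle well.

```python
# Hypothetical sketch of dual-purpose speech routing. The command
# vocabulary and action names are illustrative, not Starner's system.
COMMANDS = {
    "next week": "show_calendar:next_week",
    "monday": "show_calendar:monday",
    "tuesday": "show_calendar:tuesday",
}

def handle_utterance(transcript, push_to_talk):
    """Return a calendar action only when the push-to-talk button is
    held AND the phrase matches a known low-vocabulary command."""
    if not push_to_talk:
        return None  # ordinary conversation: the computer ignores it
    phrase = transcript.strip().lower().rstrip("?.,!")
    return COMMANDS.get(phrase)  # unknown phrases are also ignored
```

So `handle_utterance("Next week?", push_to_talk=True)` triggers the calendar view, while the same words spoken without the button pressed do nothing – the listener across the table never notices the difference.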
Even better, these repetition patterns are already the norm when people speak to each other, which makes dual-purpose speech a potential boon when coupled with a push-to-talk button. This was apparent from one happy accident involving Starner and former student Ben Wong, who had a habit of leaving his microphone open and running a full-text speech recognition system on its input.
On that day, the two were discussing interesting readings when Wong suggested a particular article of interest. Most of what was on the student’s screen was gibberish since full-text systems aren’t very good at transcribing conversational speech. But suddenly Wong stopped and repeated the article’s title slowly and clearly, then resumed his normal speech pattern. The system locked right on and got the title and reference perfectly. Even better, if Starner hadn’t been primed, he wouldn’t have thought anything of the incident since the repetition seemed completely normal.
Voice input is pretty neat, but for speed, accuracy and flexibility it’s still no match for the Twiddler. Although it looks intimidating and hard to use, another paper that Starner and his students published last year found that people can learn it more quickly than they can a QWERTY keyboard, and they can type upwards of 60 words per minute with just a few hours of practice.
Starner goes on to say in the paper that one potential breakthrough application for the Twiddler is helping the deaf, who “have adopted wireless texting as a convenient means of communication within the community.” With a Twiddler connected to a cell phone, texting on a handheld mobile device could rapidly become an almost natural experience.
Meanwhile, I’m going to get my own Twiddler. If I can master the art of one-handed typing, I might just become a cyborg myself.