You’ve probably never heard of Fitts’s Law, but if you’ve used a computer in the past 25 years, you’ve felt its influence. Fitts’s Law mathematically models how quickly you can point at something, whether with your finger or with a device like a mouse. It’s a foundational principle of human-computer interaction in the WIMP era (“windows, icons, menus, pointer”) pioneered by Xerox PARC and made mainstream by the original Macintosh. It says that moving a pointer a short distance to a large target is faster than moving a longer distance to a smaller target. This has a distinctly “no duh” flavor to it, but Fitts’s Law has many fascinating and subtle implications for GUI design. If you’ve ever wondered why Apple puts its menu bar across the top of the screen instead of anchoring menus to individual windows, as Microsoft does, Fitts’s Law is the reason: the screen edge stops the cursor, so an edge-anchored menu behaves like an enormous target you can’t overshoot.
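The “short distance, large target” intuition is usually expressed with the Shannon formulation of Fitts’s Law, MT = a + b · log2(D/W + 1), where D is the distance to the target, W is the target’s width, and a and b are constants fitted from experiments. A minimal sketch (the constant values below are purely illustrative, not measured):

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).

    `a` and `b` are device- and user-specific constants normally fitted
    from pointing experiments; the defaults here are illustrative only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in "bits"
    return a + b * index_of_difficulty  # predicted time, in seconds

# A short move to a big target beats a long move to a small one:
near_big = movement_time(distance=100, width=50)
far_small = movement_time(distance=800, width=10)
assert near_big < far_small
```

The index of difficulty is the only input that matters for comparison: doubling the distance or halving the width both make a target harder to hit in exactly the same logarithmic way.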
But graphical user interfaces are moving beyond the WIMP paradigm, and so-called “gestural interfaces” (which replace buttons and toggles with swipes and other “chromeless” movements) are going mainstream. If there’s no target to “point” to in an app or device, does Fitts’s Law, and the decades of user-experience intelligence it has wrought, still apply?
TR: How relevant is Fitts’ Law to these new kinds of interfaces?
FI: You have a few ways to look at it. One is to look at the actual design of the interface. The other is the size of the screens and devices we’re now using. Just because you use a giant button on a website doesn’t make it good by Fitts’s Law. If the screen is a Retina laptop, it’s still a big distance from wherever your cursor is to the target, and the accuracy might not be great. If we’re talking about a device like an iPhone, the elements are more consolidated and there isn’t as much space for your finger or pointer to roam and miss. I think both the device/screen size and the design of the interface affect how closely we follow the rules.
Talking specifically about interface design: we were used to clicking, not tapping. So Apple (rightly, I believe) gave us the ultimate onboarding sequence into the world of touch: the skeuomorph. Some of this still follows Fitts’s Law, with the targets being larger and mostly obvious. However, it begins to raise other usability issues when everything looks tappable, since people got carried away with this as a style. With these more minimal, content-driven interfaces, we don’t initially follow Fitts’s Law until the targets are discovered.
TR: To use Rise as an example: some of its gestural interactions (like “scrubbing” the screen to select a setting) seem like they could still be modeled by Fitts’s Law, because you are moving your “pointer” to a target. However, that target isn’t a fixed button that you’re aiming at, but an interactive display of content that follows your thumb as you move it across the screen. So are you really “pointing” in a Fitts-ian sense?
FI: Fitts’s Law doesn’t specify the design style; it just focuses on target size and location and then measures your speed and accuracy. The target could be anything, really. It could just be content. The caveat with a full gesture UI (“content is the interface”) is that you need to know which elements are actually targets you can interact with. So, at first glance, Rise is breaking Fitts’s Law, since you can’t directly interact with the target until you understand the behavior of the interface. A lot of the behavior needs to be uncovered and discovered, since you can’t tell just by looking. There is nothing to signal “this is the target.” However, once you discover the targets, that all changes. It begins to follow all the rules.
TR: Invoking the “on/off” functions of setting an alarm in Rise is as simple as swiping leftward or rightward from anywhere on the edge of the screen. There’s no “target”; just a motion. That seems to be outside the purview of Fitts’s Law entirely.
FI: I think it’s actually more inside Fitts’s Law than you think. We’re exploiting the prime pixels: the *entire* screen is being used. If I pull to the right or left anywhere on the screen, it begins the on/off function; the user avoids muscular tension, since they don’t have to reach out to any specific place; and if you consider “the cursor” to be your finger here, the interface minimizes movement by invoking the on/off with a swipe *from wherever* your finger is on the screen. That’s nailing Fitts’s Law!
TR: Fitts’s Law has informed GUI design for decades, but are gestural UIs going to require a new model for analyzing their usability?
FI: I think there are a lot of usability/UX rules and laws that will come into question as we move forward into more of these experimental kinds of interfaces. I know many of them already have been retested/validated by other researchers.
A lot of newer interaction paradigms aren’t as naturally intuitive as we like to think. Tapping and swiping at “pictures under glass” (or, in this case, content) is always going to be a learned thing, just as when we were introduced to the desktop metaphor or icons. They all look fairly “cool” from the outside. For example, the Tom Cruise gesture scene in Minority Report is the most linked and referenced interface in the history of cinema, but it’s really only good for a few kinds of tasks.
I think we should always question these foundational rules from our current place in history, not just allow them to become dogma. That said, we can’t forget what we’ve learned, and in the end we’re still just as human now as when Paul Fitts first did those studies. I agree that rules like this should be used to measure the effectiveness of interfaces where appropriate. Although once you move to an “invisible” interface like a VUI (a voice user interface), we need other ways to measure effectiveness. There might be laws and rules for this, but I haven’t done much research in that area yet.