
Mixing Real and Virtual Controls

A Microsoft project lets a touch screen control other hardware.
April 9, 2009

Large touch-screen tables have emerged as a useful way for several people to collaborate on projects like video editing or graphic design, but these tasks often demand fine adjustments that are hard to make on a touch surface with limited resolution. When a person needs precision, it may be best to use a physical controller instead, says Dan Morris, a researcher at Microsoft.

Halo effect: This MIDI controller is surrounded by virtual controls. Four of the virtual buttons control discrete tasks, including playing or pausing a track. The physical knobs provide finer control of the same functions as the four virtual sliders.

Morris and his colleagues have developed software for touch-screen surfaces that allows physical controls to be added to them. In addition, the software lets people define the functions that each knob, button, and slider on a controller will perform.

The researchers’ system, called Ensemble, was presented on Monday at the Computer-Human Interaction (CHI 2009) Conference in Boston. It consists of a touch table six feet long and four feet wide, built by former Microsoft intern Bjoern Hartmann, and several portable sound-editing controllers that connect to the computer driving the surface. The table is similar to Microsoft’s Surface, but larger. As with Surface, cameras underneath the tabletop sense when a user touches the surface or places an object on top of it.

The idea of combining traditional input devices like mice and keyboards with a touch display is not new, but with Ensemble the Microsoft researchers show that it’s possible to make a piece of hardware do more than a single fixed task.

Cameras within the Ensemble table detect a special tag on the bottom of each audio control box to recognize each box and determine its position on the surface. The software then produces an “aura” around each device, including touch-surface controls like “play,” “pause,” and “stop,” and virtual sliders that correspond to physical knobs on the box.
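
Ensemble’s source isn’t public, but the core loop, recognizing a tagged device and laying out virtual controls around its footprint, can be sketched in a few lines of Python. The tag registry, control names, and grid offsets below are illustrative assumptions, not Ensemble’s actual implementation:

# Illustrative sketch, not Ensemble's code: recognize tagged devices
# on the table and attach a ring of virtual controls to each one.

from dataclasses import dataclass

@dataclass
class Device:
    tag_id: int   # fiducial tag read by the under-table cameras
    x: float      # position of the box on the surface
    y: float

# Hypothetical registry mapping known tags to their virtual controls.
AURA_LAYOUTS = {
    42: ["play", "pause", "stop", "record",      # discrete buttons
         "zoom", "pan", "volume", "balance"],    # virtual sliders
}

def build_aura(device):
    """Place each virtual control at an offset around the device."""
    aura = []
    for i, name in enumerate(AURA_LAYOUTS.get(device.tag_id, [])):
        aura.append({
            "name": name,
            "x": device.x + 1.2 * ((i % 4) - 1.5),  # arbitrary grid offsets
            "y": device.y + (1.0 if i < 4 else -1.0),
        })
    return aura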

A person can then edit a music track, for example, using both the physical device and the touch-surface controls. The virtual sliders can be used to zoom in on the audio waveform of a track, or to go to a different location on the waveform by panning. The physical knobs on the box perform the same function but offer much finer control. The system also allows a person to change the function of the knobs to, say, control the volume of a trumpet track instead.
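
The split between slider and knob amounts to two input channels driving the same parameter at different granularities: the touch slider jumps to an absolute position, while the knob nudges the value by small fixed steps. A minimal sketch, assuming a generic parameter class and an arbitrary tick size (neither is Ensemble’s API):

class Parameter:
    """A bounded value driven by both a virtual slider and a physical knob."""
    def __init__(self, value=0.0, lo=0.0, hi=1.0):
        self.value, self.lo, self.hi = value, lo, hi

    def set_absolute(self, fraction):
        # Touch slider: coarse, limited by the camera's sensing resolution.
        self.value = self.lo + fraction * (self.hi - self.lo)

    def nudge(self, ticks, tick_size=0.001):
        # Physical knob: fine, one small step per detent of the encoder.
        self.value = min(self.hi, max(self.lo, self.value + ticks * tick_size))

pan = Parameter()
pan.set_absolute(0.5)  # slider: jump roughly to the middle of the waveform
pan.nudge(+3)          # knob: then fine-tune by three encoder ticks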

“It’s a software mechanism for telling the hardware what to do,” says Morris. He explains that once a person has mapped different functions onto the controller, she can save that mapping for later or pass it along to someone else who has a similar role in the editing process.
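
Such a mapping is easy to picture as plain data, a table from physical controls to editing functions that can be written to a file and handed to a collaborator. The JSON format and the control and function names below are assumptions for illustration:

import json

# Hypothetical mapping from physical controls to editing functions.
mapping = {
    "knob_1": "trumpet_track.volume",
    "knob_2": "waveform.zoom",
    "slider_1": "waveform.pan",
    "button_1": "transport.play_pause",
}

# Save the mapping for later...
with open("my_mapping.json", "w") as f:
    json.dump(mapping, f, indent=2)

# ...and a colleague with the same controller can load it unchanged.
with open("my_mapping.json") as f:
    shared = json.load(f)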

The paper, presented at CHI 2009 by Rebecca Fiebrink, a graduate student at Princeton University, also describes a study examining how people used the interface. Most of the study participants used the physical controls, favoring the accuracy and responsiveness that they offer. However, these participants also made extensive use of surface controls, choosing them mainly for tasks in which a single touch produced a discrete result, such as playing or stopping a track.

Robert Jacob, a professor of computer science at Tufts University, in Medford, MA, says that the researchers “did a nice job of investigating what users actually did when given both [physical controllers and a touch screen] and the opportunity to switch between them.”

Jacob, who chaired the session in which the paper was presented, acknowledges that bridging the gap between physical and digital objects can be challenging. “It’s a difficult problem with no general solutions, but rather individual interesting designs,” he says. “Ideally, you want the benefits of the digital without giving up those of the physical.”

While Ensemble was designed for sound editing, its underlying technology could find other applications in graphics, gaming, and visual design, says Morris. “It could be used in scenarios where you want people to collaborate on a surface as a group,” he says, but where the resolution of the touch surface limits the precision of the virtual controls.
