Bruno Zamborlin, in collaboration with Norbert Schnell, has used a contact microphone connected to a real-time sound-processing system to turn any rigid surface into a touch interface. There’s no way to explain it adequately in words, so just watch the video:
In Zamborlin’s own words, here’s how the magic is accomplished:
Through gesture recognition techniques we detect different kinds of finger touches and associate them with different sounds. In the video we used two different audio synthesis techniques:
- physical modelling, which consists of generating sound by simulating physical laws;
- concatenative synthesis (audio mosaicing), in which each frame of sound from the contact microphone is matched to the closest frame in a sound database.
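The video doesn’t say which physical model was used, so here is a minimal sketch of the physical-modelling idea using the classic Karplus-Strong plucked-string algorithm — an assumption for illustration, not necessarily the model in Zamborlin’s system:

```python
import numpy as np

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Karplus-Strong plucked-string synthesis: a delay line seeded with
    noise is repeatedly averaged and damped, mimicking a decaying string."""
    period = int(sample_rate / freq_hz)        # delay-line length sets the pitch
    delay = np.random.uniform(-1, 1, period)   # the "pluck": a burst of white noise
    n_samples = int(duration_s * sample_rate)
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = delay[i % period]
        # low-pass filter: average each sample with its neighbour, then damp
        delay[i % period] = damping * 0.5 * (delay[i % period] + delay[(i + 1) % period])
    return out

tone = karplus_strong(220.0, 1.0)  # one second of a decaying pluck near A3
```

The point of the sketch is that the “instrument” is a simulation of physical behaviour (a vibrating string decaying over time), not a recording.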
In other words, the system transforms the vibrations that a touch transmits through a rigid body into waveforms that a computer can, in real time, recognize and either transmute directly into audible sound or use to trigger other sounds.
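The concatenative half of that pipeline can be sketched as a nearest-neighbour lookup: describe each incoming frame with a small feature vector and swap it for the closest-matching frame in a corpus. Everything below — the two-number feature vector, the frame size, the function names — is a hypothetical simplification, not the actual system:

```python
import numpy as np

def frame_features(frame):
    """Tiny descriptor: RMS energy plus spectral centroid. Real mosaicing
    systems use much richer feature sets; this is a stand-in."""
    spectrum = np.abs(np.fft.rfft(frame))
    bins = np.arange(len(spectrum))
    centroid = (spectrum * bins).sum() / (spectrum.sum() + 1e-12)
    rms = np.sqrt(np.mean(frame ** 2))
    return np.array([rms, centroid])

def build_corpus(sound, frame_len=1024):
    """Slice a corpus sound into frames and precompute their features."""
    n = len(sound) // frame_len
    frames = sound[: n * frame_len].reshape(n, frame_len)
    feats = np.array([frame_features(f) for f in frames])
    return frames, feats

def mosaic(live_frame, frames, feats):
    """Replace a live microphone frame with the nearest corpus frame."""
    target = frame_features(live_frame)
    idx = np.argmin(np.linalg.norm(feats - target, axis=1))
    return frames[idx]

# Stand-in corpus: random noise in place of a real recording.
corpus_sound = np.random.default_rng(0).uniform(-1, 1, 1024 * 8)
frames, feats = build_corpus(corpus_sound)
out = mosaic(frames[3], frames, feats)  # a corpus frame maps back to itself
```

Run per frame in real time, this is the “associated with its closest frame present in a sound database” step from the quote above.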
It’s an ingenious approach, especially because Zamborlin has made the system clever enough to recognize the sound of particular gestures, so that the interface can accomplish more than just triggering actions when it “hears” a tap.
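Zamborlin’s actual recognizer is built on far more sophisticated techniques, but the core idea — map a short audio clip to features, then compare against labelled examples — can be sketched with a toy nearest-centroid classifier. The descriptors, labels, and synthetic “tap” and “scratch” clips below are all illustrative assumptions:

```python
import numpy as np

def gesture_features(clip):
    """Two toy descriptors: zero-crossing rate (a brightness proxy) and
    log energy. Real systems model the gesture's evolution over time."""
    zcr = np.mean(np.abs(np.diff(np.sign(clip)))) / 2
    energy = np.log(np.mean(clip ** 2) + 1e-12)
    return np.array([zcr, energy])

class GestureRecognizer:
    """Nearest-centroid classifier over labelled example clips."""
    def __init__(self):
        self.centroids = {}

    def train(self, label, clips):
        self.centroids[label] = np.mean(
            [gesture_features(c) for c in clips], axis=0)

    def classify(self, clip):
        f = gesture_features(clip)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(self.centroids[lbl] - f))

# Synthetic stand-ins: a "tap" is a decaying low-frequency thump,
# a "scratch" is broadband noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 0.05, 2205)
taps = [np.sin(2 * np.pi * 80 * t) * np.exp(-t * 60) for _ in range(5)]
scratches = [rng.uniform(-0.3, 0.3, 2205) for _ in range(5)]

rec = GestureRecognizer()
rec.train("tap", taps)
rec.train("scratch", scratches)
label = rec.classify(rng.uniform(-0.3, 0.3, 2205))
```

Once the recognizer knows *which* gesture it heard, each label can trigger a different synthesis action — which is what lets the interface do more than respond to a generic tap.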
So will touch interfaces of the future rely on sound as well as capacitance? Perhaps sound would be a cheaper, more durable option for certain kinds of interfaces, making touch interactions all the more ubiquitous.