In Zamborlin’s own words, here’s how the magic is accomplished:
Through gesture-recognition techniques we detect different kinds of finger touches and associate them with different sounds. In the video we used two different audio-synthesis techniques:
- physical modelling, which generates the sound by simulating physical laws;
- concatenative synthesis (audio mosaicing), in which the sound from the contact microphone is matched to the closest frame in a sound database.
To put it another way, the system transforms the vibrations that a touch transmits through a rigid body into waveforms that a computer can, in real time, recognize and either transmute into audible sound or use as a trigger for other sounds.
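The concatenative step described above can be sketched in a few lines. This is a simplified illustration, not Zamborlin's actual implementation: it assumes each incoming frame of microphone audio is reduced to a coarse spectral-envelope feature vector, then matched by nearest-neighbor search against a prerecorded sound database. The function names (`frame_features`, `nearest_db_frame`) and the banded-FFT feature are illustrative choices.

```python
import numpy as np

def frame_features(frame, n_bands=16):
    """Coarse spectral envelope: magnitude spectrum averaged into bands.
    (An illustrative stand-in for whatever features the real system uses.)"""
    spectrum = np.abs(np.fft.rfft(frame))
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.mean() for b in bands])

def nearest_db_frame(live_frame, database):
    """Return the index of the database frame whose spectral features
    are closest (Euclidean distance) to the incoming live frame."""
    target = frame_features(live_frame)
    feats = np.array([frame_features(f) for f in database])
    dists = np.linalg.norm(feats - target, axis=1)
    return int(np.argmin(dists))
```

In a real-time system, the matched database frame would then be played back (or cross-faded with neighbors), so that tapping or scraping the surface "plays" the closest-sounding material from the database.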
It’s an ingenious approach, especially because Zamborlin has made the system clever enough to recognize the sound of particular gestures, so that the interface can accomplish more than just triggering actions when it “hears” a tap.
So will touch interfaces of the future rely on sounds as well as capacitance? Perhaps sound would be a cheaper, more-durable option for certain kinds of interfaces, making touch interactions all the more ubiquitous.