The phones have ears: SoundSense listens to a user’s environment through a phone’s microphone and learns to connect certain sounds to activities.
Kurt Partridge, a researcher at Palo Alto Research Center, who has also created cell-phone software that tracks behavior, believes that the SoundSense project exploits an underused resource. “I don’t think the field has really realized both how little power audio-based activity-sensing takes, and how informative it can be,” Partridge says. “Audio can distinguish so many more activities [and] adds a social aspect to contextual sensing that’s not possible otherwise.”
Dan Ellis, an associate professor at Columbia University, who has researched the use of continuous audio recordings, says that this type of “life logging” could someday be used as routinely as the outbox in an e-mail application. “Maybe you don’t look at your outbox very often, but given the right tools to quickly find what you’re looking for, it’s very convenient to keep a record of every e-mail you’ve ever sent,” he says. “A near-continuous, audio-based record collected by a personal device could be similarly desirable.”