Working the night shift can be lonely, especially deep in the forest, so it’s no wonder owls respond when they hear a fellow owl hoot in the distance. That behavior has been a big help to the Maine Audubon volunteers who monitor the state’s owl population. For the past several years, they’ve carried CD players into the woods, played recorded owl noises, and taken notes on the responses they heard. But during this year’s survey period in March and April, Dale Joachim, the Martin Luther King Jr. Visiting Professor at the MIT Media Lab, will send some of the volunteers out with cell phones that can both emit sounds and record them.
A computer in Joachim’s Owl Project lab is programmed to call the phones, which will transmit the sounds of various owl species via speakers. Using four directional microphones arranged in a pyramid, the same phones will then capture the responding hoots. Back at the Media Lab, the computer will process the recorded sounds to determine the number of distinct owl voices and the directions from which they emanated.
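The article doesn’t describe the lab’s actual signal-processing code, but the core idea of finding the direction a hoot came from, using the arrival-time difference between spaced microphones, can be sketched in a few lines. Below is a minimal, hypothetical illustration for just two microphones (the real setup uses four in a pyramid): the sample rate, microphone spacing, and the toy "hoot" signal are all assumptions for the demo, not details from the project.

```python
import numpy as np

C = 343.0          # speed of sound in air, m/s
FS = 44100         # assumed sample rate, Hz
MIC_SPACING = 0.2  # assumed distance between the two microphones, m

def estimate_bearing(sig_a, sig_b):
    """Estimate a sound's angle of arrival (radians) relative to the
    broadside of a two-microphone pair, from the inter-mic time delay."""
    # Cross-correlate the two channels; the offset of the peak gives
    # the delay (in samples) between arrival at mic A and mic B.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    delay = lag / FS
    # Far-field geometry: delay = (spacing / c) * sin(theta)
    sin_theta = np.clip(C * delay / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Toy check: a hoot-like 400 Hz pulse that reaches mic B
# ten samples after it reaches mic A.
t = np.linspace(0, 0.05, int(FS * 0.05))
hoot = np.sin(2 * np.pi * 400 * t) * np.hanning(len(t))
sig_a = np.concatenate([hoot, np.zeros(10)])
sig_b = np.concatenate([np.zeros(10), hoot])
angle = np.degrees(estimate_bearing(sig_a, sig_b))
```

With four microphones arranged in a pyramid, the same delay estimate is computed for each microphone pair, giving enough geometry to resolve a full three-dimensional direction rather than a single angle.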
An electrical engineer with a background in speech processing, Joachim hopes that cell-phone technology will eventually provide more-sophisticated data for studies of owl vocalizations, such as information about the birds’ hearing range or their response rates under particular weather conditions. But another goal of the Owl Project is to see how well data collected by people, in this case the note-taking volunteers in Maine, corresponds with that collected by machines. “We have planned the collaboration very closely so settings of their surveys and ours can be compared,” says Joachim.
Armchair owlers will be able to hear recordings from the spring survey online at owlproject.media.mit.edu, then submit their observations about the number and type of owls they think they hear. Joachim, who is also interested in the study of collective observation, hopes to review the conclusions drawn by the Maine Audubon volunteers, the computer, and the online participants and compare them for accuracy.