
An Indoor Positioning System Based On Echolocation

GPS doesn’t work indoors. Can a bat-like echolocation system take its place?

The satellite-based global positioning system has revolutionised the way humans interact with our planet. But a serious weakness is that GPS doesn’t work indoors. Consequently, researchers and engineers have been studying various ways to work out position indoors that are simple and inexpensive.

That’s easier said than done. Systems that rely on WiFi signals, for example, have limited accuracy because the signal strength varies dramatically throughout a building, making it hard to take repeatable, unambiguous measurements. So researchers are exploring a number of other innovative methods to pinpoint indoor position.

Today, we get an insight into a new approach for indoor localisation based on sound. Ruoxi Jia and pals at the University of California, Berkeley have developed a simple and cheap mechanism that can identify different rooms based on a relatively small dataset gathered in advance.

The new system is essentially a form of echolocation. Emit a sound and then listen for the return, which is distorted in a way that depends on the size and shape of the room, the materials of the walls and floors, and the furniture and people within it.
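The principle is easy to sketch in code. The following is a toy illustration only, not the team’s implementation: the chirp parameters and the two hand-made impulse responses are invented for the example.

```python
import numpy as np

SR = 44100  # sample rate, Hz

def chirp(duration_s=0.1, f0=2000.0, f1=8000.0):
    """A linear frequency sweep: the probe sound the laptop emits."""
    t = np.linspace(0.0, duration_s, int(SR * duration_s), endpoint=False)
    # Instantaneous phase of a linear chirp from f0 to f1
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration_s)))

# Hand-made toy impulse responses: each room reflects the probe with
# its own pattern of delays and attenuations.
room_a = np.zeros(2000); room_a[[0, 400, 900]] = [1.0, 0.5, 0.2]
room_b = np.zeros(2000); room_b[[0, 150, 1400]] = [1.0, 0.7, 0.3]

probe = chirp()
echo_a = np.convolve(probe, room_a)  # what the microphone hears in room A
echo_b = np.convolve(probe, room_b)  # the same probe, heard in room B
```

The same emitted sound comes back differently in each room, and that difference is what the system measures.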

The problem with this technique is that until now it has required special measuring equipment, such as a microphone capable of capturing the sound field accurately. Even then, unwanted noise can significantly confuse matters.

Jia and co get around this by processing the signal in a way that ignores the noise. And that allows them to take data using the built-in microphone and speakers on an ordinary laptop.
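One standard way to achieve this kind of noise rejection (a plausible reading of this step, though the paper’s exact pipeline may differ) is matched filtering plus averaging: cross-correlate each recording with the known probe signal, then average over repeated chirps, so the echo adds up coherently while random noise cancels out.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 44100

# The probe the laptop plays (a 0.1 s, 3 kHz tone for simplicity;
# the actual probe signal used by SoundLoc may differ).
probe = np.sin(2 * np.pi * 3000 * np.arange(int(0.1 * SR)) / SR)

# A toy room echo: direct path plus one small reflection.
clean_echo = np.convolve(probe, [1.0, 0.0, 0.4])

def matched_filter(recording, probe):
    """Cross-correlate with the known probe: energy uncorrelated with
    the probe (footsteps, talking, HVAC hum) is suppressed."""
    return np.correlate(recording, probe, mode="full")

# Average the filtered output over repeated chirps: the echo adds
# coherently while the random noise averages towards zero.
n_repeats = 50
acc = np.zeros(len(clean_echo) + len(probe) - 1)
for _ in range(n_repeats):
    noisy = clean_echo + rng.normal(0.0, 1.0, len(clean_echo))
    acc += matched_filter(noisy, probe)
avg = acc / n_repeats
```

After averaging, `avg` closely tracks the matched-filter response of the clean echo even though each individual recording was buried in noise.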

These guys have tested their system in 10 different rooms on the Berkeley campus. The laptop produces a distinctive set of sound waves and then listens for the echo. They took 50 samples at each location, which included background noise such as footsteps, talking and heating and ventilation sounds. They then processed this data to find the unique echo fingerprint for each room.
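The fingerprint-and-classify step can be sketched as follows. This is a minimal illustration with invented room names, a crude spectral fingerprint standing in for whatever features SoundLoc actually extracts, and a simple nearest-centroid classifier.

```python
import numpy as np

rng = np.random.default_rng(42)

def fingerprint(echo):
    """A crude room fingerprint: the normalised magnitude spectrum."""
    spec = np.abs(np.fft.rfft(echo))
    return spec / np.linalg.norm(spec)

# Toy "rooms": each has a fixed impulse response; a sample is the
# room's echo of a fixed probe plus background noise.
probe = rng.standard_normal(1024)
impulse = {"office": np.array([1.0, 0.6, 0.0, 0.2]),
           "lab":    np.array([1.0, 0.1, 0.8, 0.0]),
           "lobby":  np.array([1.0, 0.0, 0.3, 0.5])}

def sample(room):
    echo = np.convolve(probe, impulse[room])
    return echo + 0.3 * rng.standard_normal(len(echo))

# "Training": average the fingerprints of 50 noisy samples per room.
centroids = {r: np.mean([fingerprint(sample(r)) for _ in range(50)], axis=0)
             for r in impulse}

def classify(echo):
    """Pick the room whose stored fingerprint best matches the echo."""
    f = fingerprint(echo)
    return max(centroids, key=lambda r: f @ centroids[r])
```

A fresh recording is then assigned to whichever room’s stored fingerprint it most resembles.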

The results are interesting. They say they can identify individual rooms with an accuracy of 97.8 per cent. They call their new system SoundLoc.

That opens up a number of potentially important applications. Jia and co are particularly interested in using the technique to reduce the energy consumption in buildings. Some 40 per cent of energy usage in the US comes from commercial and residential buildings. If those buildings are empty, then that represents a significant waste.

The problem, of course, is to determine when specific rooms are not being used and to turn off the lights, heating and so on accordingly.

It’s not hard to imagine how SoundLoc can help if a room’s sound signature can be used to determine whether anyone is in it, although that is not something the team has tested so far. It raises the possibility of buildings filled with computers that are constantly chirping and listening to the results to determine if anyone is around.

Obviously, there are significant challenges ahead to making a system like that work. But the first step is room identification which these guys have shown is a reasonable possibility.

Ref: arxiv.org/abs/1407.4409 : SoundLoc: Acoustic Method for Indoor Localization Without Infrastructure
