First Wi-Fi-Enabled Smart Contact Lens Prototype

A clever way of converting Bluetooth signals into Wi-Fi allows embedded devices to communicate easily with the outside world.

One promise of modern microelectronics is the possibility of embedding sensors in various parts of the human body and using them to monitor everything from blood glucose levels to brain waves. They could even help treat conditions such as epilepsy and Parkinson’s disease.

To carry out this work, these devices need to communicate with the outside world, and that is a power-consuming business. It can be done with bespoke RFID equipment, but this is large, unwieldy, and power-hungry. A better way would be to link with more portable and ubiquitous devices such as smartphones, watches, or tablets.

But there is a problem. Although Bluetooth and Wi-Fi are relatively low-power forms of communication, they are way beyond the power budget of, say, a smart contact lens. Consequently, there is no way to connect an embedded device via Bluetooth or Wi-Fi, and so no way to communicate with it easily on the fly.

That looks set to change thanks to the work of Joshua Smith and pals at the University of Washington in Seattle. These guys have developed a clever way for embedded devices to harvest Bluetooth radio signals and use them to broadcast Wi-Fi transmissions. The team has even built a number of Wi-Fi-enabled prototypes to show off the technique.

At first glance, it’s easy to think that converting Bluetooth signals to Wi-Fi is impossible. Although both operate in the 2.4 GHz band, they occupy different channel bandwidths and use entirely different transmission protocols.

Wi-Fi requires a 22 MHz bandwidth and uses spread spectrum coding, whereas Bluetooth requires up to 2 MHz bandwidth and relies on Gaussian Frequency Shift Keying, in which a one is represented by a positive frequency shift of 250 kHz and a zero by a negative shift of 250 kHz.
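
To see why a steady bit pattern matters for the trick that follows, here is a minimal sketch (not the team’s code) of Bluetooth-style Gaussian Frequency Shift Keying. The sample rate and bit count are arbitrary and the Gaussian pulse shaping is omitted, but it shows that a run of identical bits comes out as a single constant-frequency tone.

```python
import numpy as np

# Illustrative GFSK model: each bit maps to a +/-250 kHz offset from the carrier.
fs = 10e6          # sample rate in Hz, chosen arbitrarily for this sketch
deviation = 250e3  # Bluetooth GFSK frequency deviation in Hz
bit_rate = 1e6     # Bluetooth basic-rate bit rate, 1 Mbps

def gfsk_baseband(bits):
    """Complex baseband GFSK signal for a bit sequence (Gaussian filtering omitted)."""
    samples_per_bit = int(fs / bit_rate)
    # Map 1 -> +deviation, 0 -> -deviation and hold each value for one bit period.
    freq = np.repeat(np.where(np.array(bits) == 1, deviation, -deviation), samples_per_bit)
    phase = 2 * np.pi * np.cumsum(freq) / fs   # integrate frequency to get phase
    return np.exp(1j * phase)

# A long run of ones is simply a tone 250 kHz above the Bluetooth carrier.
tone = gfsk_baseband([1] * 100)
freqs = np.fft.fftshift(np.fft.fftfreq(tone.size, 1 / fs))
spectrum = np.fft.fftshift(np.abs(np.fft.fft(tone)))
print(f"spectral peak at {freqs[np.argmax(spectrum)] / 1e3:.0f} kHz")  # ~ +250 kHz
```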

But Smith and co have come up with a clever trick that allows them to convert Bluetooth signals to Wi-Fi. It relies on making a Bluetooth transmitter broadcast a continuous sequence of either ones or zeros, which, under Gaussian Frequency Shift Keying, comes out as a single constant-frequency tone.

It is this tone that the embedded device picks up, modifies, and rebroadcasts as Wi-Fi through a process called backscattering. The backscattered signal is shifted in frequency onto one of the Wi-Fi channels and modulated in line with the 802.11b Wi-Fi transmission protocol.
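
As a rough illustration of the frequency-shifting step (this is not the paper’s implementation, and the frequencies below are arbitrary stand-ins), the sketch models the backscatter switch as a square wave toggling at a rate f_shift. Multiplying the incident tone by that square wave produces copies of the tone at f_in + f_shift and f_in - f_shift, one of which can be placed inside a Wi-Fi channel.

```python
import numpy as np

# Illustrative frequency-shift backscatter: toggling the tag's antenna between two
# impedance states at rate f_shift multiplies the incident tone by a square wave,
# creating shifted copies of the tone. All values here are chosen for illustration.
fs = 200e6      # simulation sample rate (Hz)
f_in = 5e6      # incident Bluetooth tone, represented at 5 MHz for clarity
f_shift = 20e6  # toggle rate of the antenna switch

t = np.arange(0, 1e-4, 1 / fs)
incident = np.cos(2 * np.pi * f_in * t)             # single tone arriving at the tag
switch = np.sign(np.cos(2 * np.pi * f_shift * t))   # antenna impedance toggled at f_shift
backscattered = incident * switch                   # reflected (re-radiated) signal

spectrum = np.abs(np.fft.rfft(backscattered))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]]) / 1e6
print(f"strongest components near {top_two[0]:.0f} MHz and {top_two[1]:.0f} MHz")
# -> roughly 15 MHz and 25 MHz: the shifted tone plus its mirror image
```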

In tests, this process has turned out to be extremely energy efficient. “In total, generating 2 Mbps 802.11b packets consumes 28 µW,” say Smith and co.

Of course, any electrical engineer will tell you that this process also produces a mirror image signal on the other side of the Bluetooth frequency, which is at best wasted and at worst can interfere with other signals.

Smith and pals have another clever trick to get around this. It involves giving the antenna switch complex-valued impedance states rather than purely real ones, so the backscattered tone can be shifted in one direction only. Mathematically, the mirror image ends up at a negative frequency, which cannot exist in practice, so it is suppressed. The result is the first example of single sideband backscattering.
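
The sketch below illustrates the difference, idealizing the complex-impedance switch as a perfect complex rotation; the paper realizes it with a small set of discrete impedance states, and the frequencies here are arbitrary. A real-valued switch produces both sidebands, while the complex-valued one shifts the tone in a single direction and suppresses the mirror image.

```python
import numpy as np

# Illustrative single-sideband backscatter: compare a real-valued switch (both
# sidebands) with an idealized complex-valued switch (one sideband only).
fs = 200e6       # simulation sample rate (Hz), arbitrary
f_in = 5e6       # incident tone, illustrative value
f_shift = 20e6   # frequency shift applied by the backscatter switch
t = np.arange(0, 1e-4, 1 / fs)

incident = np.exp(2j * np.pi * f_in * t)                            # incident tone (complex view)
real_switch = incident * np.sign(np.cos(2 * np.pi * f_shift * t))   # double sideband
complex_switch = incident * np.exp(2j * np.pi * f_shift * t)        # single sideband

def power_at(x, f):
    """Power in the FFT bin nearest frequency f (illustrative helper)."""
    spec = np.abs(np.fft.fft(x)) ** 2
    freqs = np.fft.fftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

for name, sig in [("real switch", real_switch), ("complex switch", complex_switch)]:
    wanted = power_at(sig, f_in + f_shift)    # the copy aimed at the Wi-Fi channel
    mirror = power_at(sig, f_in - f_shift)    # the unwanted image
    print(f"{name}: mirror-to-wanted power ratio = {mirror / wanted:.1e}")
```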

All this allows the embedded device to communicate with the outside world via backscattered signals.

However, for bidirectional communication, the device also has to receive signals. The team does this by finding a way to make 802.11g Wi-Fi signals look like standard AM-modulated signals, which the embedded device can pick up at a bit rate of 160 kbps. That’s not fast, but the team says this could be significantly improved in future devices.
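
As a rough illustration of why an AM-style downlink suits such a constrained receiver (this is not the team’s waveform; the carrier frequency and sample rate are arbitrary, and only the 160 kbps figure comes from the article), the sketch below on-off keys a carrier and recovers the bits with the kind of rectify, smooth, and threshold envelope detector a very low-power tag could implement.

```python
import numpy as np

# Illustrative AM / on-off-keyed downlink decoded by a simple envelope detector.
fs = 20e6          # simulation sample rate (Hz), arbitrary
f_carrier = 2e6    # stand-in for the Wi-Fi carrier, illustrative value
bit_rate = 160e3   # downlink bit rate reported in the article
samples_per_bit = int(fs / bit_rate)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)

# Transmitter side: the carrier amplitude follows the bit sequence (on-off keying).
envelope_tx = np.repeat(bits, samples_per_bit).astype(float)
t = np.arange(envelope_tx.size) / fs
signal = envelope_tx * np.cos(2 * np.pi * f_carrier * t)

# Tag side: envelope detector -- rectify, low-pass (moving average), then threshold.
rectified = np.abs(signal)
kernel = np.ones(samples_per_bit) / samples_per_bit
smoothed = np.convolve(rectified, kernel, mode="same")
# Sample once in the middle of each bit period and compare against half the peak level.
centers = np.arange(samples_per_bit // 2, smoothed.size, samples_per_bit)
decoded = (smoothed[centers] > 0.5 * smoothed.max()).astype(int)

print("bit errors:", int(np.sum(decoded != bits)))   # expect 0 in this noiseless sketch
```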

Finally, these guys have put all these techniques together to build a variety of technology demonstrators. One is an antenna for a smart contact lens designed to monitor glucose levels in the wearer’s tears. The prototype consists of a 1 cm wire loop embedded in poly-dimethylsiloxane (PDMS) for biocompatibility.

The team tested this by backscattering and modifying Bluetooth signals from a nearby transmitter so that they could be picked up as Wi-Fi by a Samsung Galaxy S4 smartphone. “The plot shows that we can achieve ranges of more than 24 inches, demonstrating the feasibility of a smart contact lens that communicates directly with commodity radios,” say the team.

They also designed an antenna for a neural recording device that could be embedded under the skull to monitor brainwaves. To test this, they embedded it in a pork chop and were again able to receive signals on their Samsung Galaxy S4 smartphone.

That’s interesting work that paves the way for a new generation of embedded devices that can communicate easily with common portable devices. “We build proof-of-concepts for previously infeasible applications including the first contact lens form factor antenna prototype and an implantable neural recording interface that communicates directly with commodity devices such as smartphones and watches, thus enabling the vision of Internet-connected implanted devices,” say Smith and co.

With further optimization, the team should be able to improve performance. And that will make possible a new generation of apps that allow people to interact with and process data from devices embedded in their bodies.

Ref: arxiv.org/abs/1607.04663: Inter-Technology Backscatter: Toward Internet Connectivity for Implanted Devices
