Controlling your gadgets by talking to them is so 2018. In the future, you won’t even have to move your lips.
A prototype device called AlterEgo, created by MIT Media Lab graduate student Arnav Kapur, is already making this possible. With Kapur’s device—a 3-D-printed plastic doodad that looks kind of like a skinny white banana attached to the side of his head—he can flip through TV channels, change the colors of lightbulbs, make expert chess moves, solve complicated arithmetic problems, and, as he recently showed a 60 Minutes crew, order a pizza, all without saying a word or lifting a finger. It can be used to let people communicate silently and unobtrusively with each other, too.
“I do feel like a cyborg, but in the best sense possible,” he says of his experience with the device, which he built as a research project.
AlterEgo does not read minds, though it may sound that way. Rather, it picks up on the itty-bitty electrical signals produced by small movements of our facial and neck muscles when we silently read or talk to ourselves. AlterEgo’s electrodes capture these signals and send them via Bluetooth to a computer, where they can be decoded by algorithms and then acted on (“Turn on the light,” for example). The system includes bone conduction headphones to give you feedback and let you know (in a computerized voice) what other AlterEgo wearers are trying to tell you, without blocking your ears.
It’s like being personally connected to the Internet, and without it, Kapur says, “I feel normal all of a sudden.”
In a world where rapidly improving artificial intelligence is becoming a source of anxiety (in a “robots are going to take over and kill us or at least take our jobs” way), Kapur sees AlterEgo as a sort of antidote. He spent the last year working on the device to show how AI can help augment rather than replace us.
He envisions it as a new kind of computer, which can be used in a way that is less demanding of your attention than tapping and swiping on a smartphone and more intimate (and quiet) than barking commands at Alexa. Though it’s still just an early-stage prototype, he imagines it being helpful for, say, calling an Uber, or making it easier for people with speech impediments and voice disorders to communicate.
Thus far, Kapur and other Media Lab researchers have built several simple applications, including game-playing assistants for chess and Go that suggest the next move, an arithmetic app that gives you the answer to internally vocalized math problems, and an app that lets you essentially become a node on the internet of things.
The researchers also had people test out AlterEgo as a way to communicate silently and inconspicuously; according to a recent paper, it accurately captured what users said 92 percent of the time, on average.
Tanzeem Choudhury, an associate professor at Cornell University who runs the school’s People-Aware Computing Lab, thinks AlterEgo could be particularly helpful in situations where it might be embarrassing or emotionally taxing to talk about certain things.
The challenge, she says, is making the device work well without making the hardware and the interaction itself look weird. She points to Google Glass—the archetypal failed-wearable story—as an example of how interactions between people can go awry when at least one of them has a gadget on their head.
And Kapur, who would like to improve on the device and turn it into a real product, is starting to think about all the issues that need to be fixed first.
For instance, its person-to-person communication function is limited to very simple words and phrases like “Yes,” “No,” “Hello,” “Bye,” and “Do you know this?” And while it can translate silently uttered words from English into languages including Spanish and Japanese, it’s still only able to translate 15 phrases.
That’s because his approach to silent speech is novel, so there aren’t any large data sets the researchers can grab to train AlterEgo’s algorithms (as there would be for, say, a typical speech-recognition app). So the researchers are building their own data set by having people use the device.
Kapur says they’re also setting up a study at hospitals and rehabilitation centers where people with speech impairments will use AlterEgo, though he won’t divulge what, exactly, they’re doing or hoping to find out. Beyond that, the researchers are expanding the vocabulary the system can understand, working on applications, and considering how they can improve AlterEgo’s form factor.
After all, he says, “it’s nice to have all those superpowers.”