Entrepreneurs in Silicon Valley this year set themselves an audacious new goal: creating a brain-reading device that would allow people to effortlessly send texts with their thoughts.
In April, Elon Musk announced a secretive new brain-interface company called Neuralink. Days later, Facebook CEO Mark Zuckerberg declared that “direct brain interfaces [are] going to, eventually, let you communicate only with your mind.” Facebook says it has 60 engineers working on the problem.
It’s an ambitious quest—and there are reasons to think it won’t happen anytime soon. But for at least one small, orange-beaked bird, the zebra finch, the dream just became a lot closer to reality.
That’s thanks to some nifty work by Timothy Gentner and his students at the University of California, San Diego, who built a brain-to-tweet interface that figures out the song a finch is going to sing a fraction of a second before it does so.
“We decode realistic synthetic birdsong directly from neural activity,” the scientists announced in a new report published on the website bioRxiv. The team, which includes Argentinian birdsong expert Ezequiel Arneodo, calls the system the first prototype of “a decoder of complex, natural communication signals from neural activity.” A similar approach could fuel advances towards a human thought-to-text interface, the researchers say.
A songbird’s brain is none too large. But its vocalizations are similar to human speech in ways that make these birds a favorite of scientists studying memory and cognition. Their songs are complex. And, like human language, they’re learned: the zebra finch picks up its song from an older bird.
Makoto Fukushima, a fellow at the National Institutes of Health who has used brain interfaces to study the simpler grunts and coos made by monkeys, says the richer range of birdsong is why the new results have “important implications for application in human speech.”
Brain interfaces tried in humans so far mostly track neural signals that reflect a person’s imagined arm movements, which can be co-opted to move a robot arm or direct a cursor to very slowly peck out letters. So the idea of a helmet or brain implant that effortlessly picks up what you’re trying to say remains far from being realized.
But it’s not strictly impossible, as the new study shows. The team at UCSD used silicon electrodes in awake birds to measure the electrical chatter of neurons in part of the brain called the sensory-motor nucleus, where “commands that shape the production of learned song” originate.
The experiment employed neural-network software, a type of machine learning. The researchers fed into the program both the pattern of neural firing and the actual song that resulted, with its stops and starts and changing frequencies. The idea was to train their software to match one to the other, in what they termed “neural-to-song spectrum mappings.”
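The core idea of such a mapping can be sketched in a few lines of code. This is only a toy illustration, not the team’s actual system: the “neural recordings” here are random synthetic data, and a plain ridge regression stands in for the paper’s neural network. It shows the shape of the problem, though: learn a function from a vector of firing rates at each time bin to the song spectrum at that bin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data (assumptions, not the study's recordings):
# X holds neural firing rates, one 32-channel vector per time bin;
# Y holds the song spectrum at that bin (16 frequency bands).
n_bins, n_channels, n_bands = 2000, 32, 16
W_true = rng.normal(size=(n_channels, n_bands))
X = rng.normal(size=(n_bins, n_channels))
Y = X @ W_true + 0.1 * rng.normal(size=(n_bins, n_bands))

# Fit the neural-to-spectrum mapping with ridge regression --
# the simplest possible stand-in for the paper's neural network.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Reconstruct the spectra from the neural activity and check the fit.
Y_hat = X @ W
r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"correlation between predicted and actual spectra: {r:.2f}")
```

In the real experiment the mapping is nonlinear and the output is a full spectrogram of a learned song, but the training loop is conceptually the same: pair recorded activity with the sound it produced, and fit a model that turns one into the other.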
The team’s main innovation was to simplify the brain-to-tweet translation by incorporating a physical model of how finches make noise. Birds don’t have vocal cords as people do; instead, they shoot air over a vibrating surface in their throat, called a syrinx. Think of how you can make a high-pitched whine by putting two pieces of paper together and blowing at the edge.
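Why does a physical model simplify the decoding? Because the syrinx can be approximated by an oscillator controlled by just a couple of slowly varying parameters, so the decoder only has to predict those parameters rather than an entire spectrogram. The toy model below is an illustrative assumption, not the paper’s equations: it synthesizes a half-second of “song” from two control signals, air pressure (loudness) and labial tension (pitch).

```python
import numpy as np

fs = 8000                      # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)  # half a second of "song"

# Two hypothetical control signals -- all a decoder would need to predict:
pressure = 0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)   # drives amplitude
tension = 600 + 300 * np.sin(2 * np.pi * 2 * t)    # drives pitch, in Hz

# Integrate the instantaneous frequency to get a phase, then synthesize
# the waveform the way a simple driven oscillator would.
phase = 2 * np.pi * np.cumsum(tension) / fs
song = pressure * np.sin(phase)

print(f"{len(song)} samples, peak amplitude {song.max():.2f}")
```

Two numbers per time step is a far smaller decoding target than the dozens of frequency bands in a spectrogram, which is what makes the physical model such a useful shortcut.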
The approach works, the authors report: the system can predict what the bird will sing about 30 milliseconds before it does so.
You can listen to the results yourself in the audio below. Keep in mind that the zebra finch is no nightingale. Its song is more like a staccato quacking.
Songbirds are already an important research model. At Elon Musk’s Neuralink, bird scientists were among the first key hires. And UCSD’s trick of focusing on detecting the muscle movements behind speech may also be a key development.
Facebook has said it hopes people will be able to type directly from their brains at 100 words per minute, privately sending texts whenever they want. A device able to read the commands your brain sends out to muscles while you are engaged in subvocal utterances (silent speech) is probably a lot more realistic than one that reads “thoughts.”
Gentner and his team hope their finches will help make it possible. “We have demonstrated a [brain-machine interface] for a complex communication signal, using an animal model for human speech,” they write. They add that “our approach also provides a valuable proving ground for biomedical speech-prosthetic devices.”
In other words, we’re a little closer to texting from our brains.