ETH Researchers Develop a Thought-Controlled Genetic Interface

Swiss scientists achieve a remarkable mashup of optogenetics, synthetic biology, and brain control.
November 13, 2014

We’ve seen a lot of interesting experiments with EEG recently. Researchers have used these brain signals to fly a toy helicopter, make a rat’s tail twitch, operate a tablet computer, and get a paraplegic wearing an exoskeleton to kick a ball at the World Cup (see “World Cup Mind-Control Demo Faces Deadlines, Critics”).

Now Marc Folcher, Martin Fussenegger, and colleagues at ETH Zurich, in Switzerland, have gone a step further. They used EEG to give human volunteers direct brain control over the activity of living cells, a technology they’re calling the world’s first-ever “mind-genetic interface.”

Using the interface they designed, the ETH team showed that a human volunteer wearing an EEG cap could use his thoughts to trigger production of a particular protein, called SEAP, in human kidney cells growing in a petri dish. He could also activate the same cells after they had been implanted under the skin of lab mice.

The research is interesting because it shows how futuristic brain implants might function, Folcher and company write in this week’s Nature Communications. Such devices, the ETH authors speculate, might sense a person’s feelings of pain (or perhaps an oncoming epileptic seizure) and then automatically trigger brain cells to pump out a helpful biotech drug.

To build their interface, the ETH team combined three technologies, each of which is exciting in its own right: brain-computer interfaces, synthetic biology, and optogenetics.

First, they engineered the kidney cells with bacterial DNA, creating what synthetic biologists like to call a “genetic switch”—a series of genes that, together, work to turn on production of a particular protein, in this case SEAP.  

To trigger this switch to the on position, the ETH gang used optogenetics, adding a gene from the purple bacterium Rhodobacter sphaeroides that produces a light-sensitive molecule. Then, when they shone near-infrared light from an LED onto the cells, the switch flipped and the cells immediately began making SEAP.

So far, so good. But to complete their cybernetic stunt, the scientists had volunteers don EEG caps, electrode-covered headgear that picks up electrical waves from the brain. A person can roughly control these waves by concentrating. In itself, EEG is nothing new; here’s a video of the Beatles’ John Lennon using EEG to control a musical instrument in the 1970s.

But the ETH team wanted to be the first ever to turn human thoughts into electrical pulses, then into light, and finally into proteins. They did it by having the volunteers use their brain waves to switch the LED on, thereby triggering the cells to make the SEAP protein. In summary, say Folcher and company, “we designed a mind-genetic interface that uses brain waves to remotely control target gene transcription wirelessly.”
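To make that chain of events concrete, here is a minimal, purely illustrative Python sketch of such a signal path: an EEG band-power reading is compared against a concentration threshold, and crossing it switches a near-infrared LED on. The function names, threshold, and sampling rate are invented for illustration and do not come from the paper.

```python
import random
import time

# Purely illustrative stand-ins; none of these names or values are from the study.
CONCENTRATION_THRESHOLD = 0.7  # assumed threshold, arbitrary units


def read_eeg_band_power():
    """Stand-in for a real EEG reading; returns a value between 0 and 1."""
    return random.random()


def set_led(on):
    """Stand-in for switching the near-infrared LED that illuminates the cells."""
    print("LED", "on" if on else "off")


def run(duration_s=5, sample_hz=2):
    # When concentration pushes band power past the threshold, the LED turns on,
    # flipping the optogenetic switch so the engineered cells start making SEAP.
    for _ in range(duration_s * sample_hz):
        set_led(read_eeg_band_power() > CONCENTRATION_THRESHOLD)
        time.sleep(1 / sample_hz)


if __name__ == "__main__":
    run()
```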

You might ask what the point is. One answer is that many scientists are now interested in next-generation brain implants. They are hoping to improve deep-brain stimulation, a medical technology widely used to stop the tremors of Parkinson’s disease. These implants quiet tremors using a wire placed in a brain region called the thalamus. When the patient turns the device on, a fairly strong electrical current flows and the tremors immediately cease. It’s a technology that does wonders, even though no one is sure how it works.

Work has already started on tomorrow’s implants, and these may function just as the ETH team envisions. For instance, the Michael J. Fox Foundation for Parkinson’s Research has been funding research on how to replace the electrodes in deep-brain stimulators with fiber optics. Instead of electric shocks, the implant would send out pulses of light to control the neurons that malfunction in Parkinson’s.

Just as important to next-generation implants is the idea of direct brain control. Instead of the patient having to turn the device on manually, a so-called “open loop” system, the goal is to close the loop with an implant that can read brain signals and recognize when a tremor is starting. That way, the implant would react automatically whenever treatment is needed.
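As a rough illustration of the closed-loop idea, the short sketch below monitors a sensed signal and triggers treatment only when it crosses a threshold, with no action from the patient. The names and threshold are hypothetical stand-ins, not details of any real implant.

```python
import random

TREMOR_THRESHOLD = 0.8  # assumed detection threshold, arbitrary units


def sense_tremor_level():
    """Stand-in for an implant reading its own recording electrodes."""
    return random.random()


def deliver_treatment():
    """Stand-in for stimulation or, in the ETH vision, light-triggered drug release."""
    print("treatment delivered")


# Closed loop: the implant watches the signal and reacts on its own,
# instead of waiting for the patient to switch it on.
for _ in range(20):
    if sense_tremor_level() > TREMOR_THRESHOLD:
        deliver_treatment()
```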

That means the ETH team’s crazy mind-DNA interface isn’t so crazy after all.
