Usually when you watch a film, you sit back in your chair, eyes trained on a screen, as the story unfolds. It’s a lot different when you watch one of Richard Ramchurn’s latest films.
Ramchurn, a graduate student at the University of Nottingham in Nottingham, England, is an artist and director who has spent the last several years creating films that you can control with your mind—simply by putting on a $100 headset that detects electrical activity in your brain. With this EEG headset on, a film’s scenes, music, and animation change every time you watch, depending on the meanderings of your mind.
Ramchurn’s latest work, a 27-minute avant-garde tale called The Moment, which (no surprise) explores a dark future where brain-computer interfaces are the norm, is nearly complete. While finishing up editing work, Ramchurn has started screening it in a small trailer around Nottingham, where six to eight people can sit and view it at once. (Just one of them controls it while the others observe.) He will also show it at a film festival in Sheffield, England, in June.
If you’re wearing the headset, a NeuroSky MindWave, while watching The Moment, it will track your level of attention by measuring electrical activity within a frequency range believed to correspond with attentiveness (though it should be noted that there are doubts about how well devices like this can actually do such tracking). The continually computed score is sent wirelessly to a laptop, where Ramchurn’s specially built software uses it to alter the editing of the scenes, the flow of the background music, and more. You don’t have to move a muscle.
Simply getting all this to work is exciting to Ramchurn. But beyond that, he says, allowing the viewer to effectively edit the film—either by consciously thinking about it or by naturally responding to what’s happening on screen—creates a sort of two-way feedback loop. The film changes because of how you feel, and the way you feel changes because of the film.
“It almost becomes part of the system of your mind,” he says.
Ramchurn, 39, spent years making short films, documentaries, and music videos, and experimenting with ways of incorporating technology into his work. He started toying with the idea of a brain-computer interface for film in 2013, when he first tried a NeuroSky headset. He ultimately used it to help make his first brain-controlled film, The Disadvantages of Time Travel, in 2014 and 2015.
That first film is more abstract than The Moment, flitting between the main character’s dream state and reality. The headset monitored the viewer’s blinking to figure out when to cut from one shot to the next, and their attention and meditation levels (meditation is another range of brainwave frequencies that the headset can log and score) to determine when and how to switch between fantasy and real-life modes.
Looking back, Ramchurn says, The Disadvantages of Time Travel was too busy. Blinking-based control actually removed people from the interactive experience by making them aware of their own physiology. Having viewed the director’s cut, a version he manipulated by watching himself, I can confirm it is, at the least, demanding to watch.
Trillions of possibilities
For The Moment, Ramchurn dropped blinking and focused on attention data. It tends to rise and fall like a sine wave as your focus shifts, ebbing about every six seconds. So he used these natural dips to signal a cut to a new shot. At any given point, the film is switching back and forth between two of its three narrative threads, which follow three characters who interact throughout.
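The mechanism Ramchurn describes, a roughly six-second attention cycle whose dips trigger cuts between the two active threads, can be sketched in a few lines. Everything here (the simulated signal, the threshold, the function names) is an illustrative assumption, not code from his software:

```python
# Illustrative sketch of dip-triggered cutting between two active
# narrative threads. The simulated attention signal, threshold, and
# names are assumptions, not Ramchurn's actual implementation.
import math

def attention_signal(t):
    """Simulated attention score in [0, 100], oscillating with a
    roughly six-second period like the dips Ramchurn describes."""
    return 50 + 40 * math.sin(2 * math.pi * t / 6.0)

def cut_points(duration_s, step=0.1, dip_threshold=20):
    """Return times at which attention first dips below the
    threshold, i.e. candidate moments to cut to the other thread."""
    cuts = []
    below = False
    t = 0.0
    while t < duration_s:
        low = attention_signal(t) < dip_threshold
        if low and not below:      # falling edge of a dip
            cuts.append(round(t, 1))
        below = low
        t += step
    return cuts

threads = ["A", "B", "C"]          # three characters' storylines
active = ["A", "B"]                # two of the three on screen
current = 0                        # index into `active`
for t in cut_points(30):
    current = 1 - current          # each dip switches to the other thread
```

In the real system the signal would come from the headset rather than a sine function, but the editing logic follows the same shape: detect a dip in attention, then cut to the other active thread.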
With all the possibilities for mind-directed changes, Ramchurn thinks there are about 101 trillion different versions of the film that you could see. To make this possible in a 27-minute film, he had to create three times as much footage as he would have normally, and gather six times as much audio.
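Ramchurn doesn’t break that figure down, but simple combinatorics shows how quickly version counts explode. This back-of-the-envelope sketch is an illustration under assumed numbers, not his actual arithmetic:

```python
# Back-of-the-envelope sketch (an assumption for illustration, not
# Ramchurn's actual arithmetic): if a 27-minute film cuts roughly
# every 6 seconds, and each cut is a binary choice between the two
# active threads, the version count is 2 to the number of cuts.
cuts = 27 * 60 // 6                 # about 270 cut points
versions = 2 ** cuts                # vastly more than 101 trillion
print(f"{cuts} cuts -> roughly {versions:.3e} possible versions")

# Even a few dozen binary choices suffice: 47 of them already top
# 101 trillion.
assert 2 ** 47 > 101 * 10 ** 12
```

Whatever his exact accounting, the point stands: a handful of viewer-driven choices per minute multiplies into an astronomical space of possible films.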
Since I couldn’t get to the United Kingdom to take charge of the film myself, Ramchurn sent me the next best thing: two recordings of The Moment controlled by two different people.
The differences were mostly subtle, such as variations in the music and in the animation interspersed between shots of real-life actors. But there were also some clear differences: one version let me take a peek inside a notebook that one of the main characters was writing and drawing in, and included more dialogue that helped flesh out the story.
The overall effect of watching a film whose trajectory was controlled somewhat by previous viewers was strange and compelling. I kept wondering what, exactly, they did (or didn’t) have control over, and how much they were thinking about this while they watched. And how did they (or I) know for sure that they were controlling anything at all?
I put this question to Steve Benford, a computer science professor at the University of Nottingham and Ramchurn’s advisor. He agreed that while viewers of The Disadvantages of Time Travel knew their blinks lined up with the film’s cuts, your role in directing The Moment with your brain is fuzzier.
With interactive art like this, Benford explains, “you don’t always know what’s going on. You have to interpret what happens, and the artist has a choice about to what extent they want to make it more or less explicit.”
Ramchurn is not the first person to try to get audiences to interact with movies—the history of cinema is filled with efforts ranging from singalongs to smartphone apps meant to be used while watching.
Jacob Gaboury, an assistant professor of film and media at the University of California, Berkeley, remembers sitting in a theater in the 1990s and using a joystick to choose between two different film endings. Making films that respond to brain activity might lead filmmakers to create different kinds of stories, images, and sounds than they normally would, he says.
“Often, you get bogged down in telling stories in a particular way in the cinema, so it could be interesting to see how that would progress from a director’s perspective,” he says.
But because it’s controlled by a single person, he doesn’t imagine it being the kind of thing you’d watch at a movie theater. Ramchurn says that he has experimented with ways these films could work in front of a larger audience, such as by letting three people compete to be the main controller (by blinking more and earning higher meditation scores), or by taking an average of the reactions to determine what happened on the screen.
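The averaging idea is simple to sketch. The names and the 0-to-100 scale here are hypothetical, for illustration only:

```python
# Hypothetical sketch of the "averaged audience" mode Ramchurn
# mentions: pool several viewers' attention scores (assumed to be on
# a 0-100 scale) into one control signal that drives the film.
def pooled_attention(scores):
    """Average per-viewer attention scores into a single value."""
    return sum(scores) / len(scores)

# Three headsets reporting at the same instant:
print(pooled_attention([70, 40, 55]))  # -> 55.0
```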
In the end, he says, a cooperative mode that made each person responsible for an element of the film—the soundtrack, the cutting of shots, the blending of layers—worked the best.
“The films they made flowed better,” he says.