Conversations between people include a lot more than just words. All sorts of visual and aural cues indicate each party’s state of mind and make for a productive interaction.
But a furrowed brow, a gesticulating hand, and a beaming smile are all lost on computers. Now, researchers at MIT and Tufts are experimenting with a way for computers to gain a little insight into our inner world.
Their system, called Brainput, is designed to recognize when a person’s workload is excessive and then automatically modify a computer interface to make it easier. The researchers used a lightweight, portable brain monitoring technology, called functional near-infrared spectroscopy (fNIRS), that determines when a person is multitasking. Analysis of the brain scan data was then fed into a system that adjusted the user’s workload at those times. A computing system with Brainput could, in other words, learn to give you a break.
There are other ways that a computer could detect when a person’s mental workload is becoming overwhelming. It could, for example, log typing errors or keystroke speed. It could also use computer vision to detect facial expressions. “Brainput tries to get closer to the source, by looking directly at brain activity,” says Erin Treacy Solovey, a postdoctoral researcher at MIT. She presented the results last Wednesday at the Computer Human Interaction Conference in Austin, Texas.
For an experiment, Treacy Solovey and her team incorporated Brainput into virtual robots designed to adapt to the mental state of their human controller. The main goal was for each operator, capped with fNIRS headgear, to guide two different robots through a maze to find a location where a Wi-Fi signal was strong enough to send a message. But here’s what made it tough: the drivers had to constantly switch between the two robots, trying to keep track of both their locations and keep them from crashing into walls.
As the research subjects drove their robots toward the strongest Wi-Fi signal, their fNIRS sensors transmitted information about their mental state to the robots. The robots, for their part, were programmed to focus on a state of mind called branching, in which a person is simultaneously working on two goals that require attention. (Previous studies have correlated certain fNIRS signals to this sort of mental state.) When the robots sensed that the driver was branching, they took on more of the navigation themselves.
The researchers found that when the robots’ autonomous mode kicked in, the overall performance of the human-robot team improved. The drivers didn’t seem to notice or get frustrated by the autonomous behavior of the robot when they were multitasking. The researchers also tried increasing the autonomy of the robots when Brainput did not indicate that users were mentally overloaded. When they did this, they found that overall performance decreased. In other words, increased autonomy only helped when users were struggling to cope.
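The adaptive loop described above can be sketched in a few lines. This is a hypothetical illustration, not the published Brainput implementation: the function names, the threshold, and the idea of reducing the fNIRS signal to a simple average are all assumptions made for clarity.

```python
# Hypothetical sketch of a Brainput-style control loop.
# All names, thresholds, and the signal model are illustrative,
# not taken from the published system.

def detect_branching(fnirs_signal, threshold=0.7):
    """Stand-in for the fNIRS classifier: returns True when the
    averaged signal suggests the operator is 'branching'
    (juggling two attention-demanding goals at once)."""
    return sum(fnirs_signal) / len(fnirs_signal) > threshold

def robot_autonomy_mode(fnirs_signal):
    """Raise robot autonomy only while the operator is branching.
    Otherwise keep full manual control, since the study found that
    extra autonomy hurt performance when users were not overloaded."""
    if detect_branching(fnirs_signal):
        return "autonomous-assist"
    return "manual"

# A high average signal triggers assistance; a low one does not.
print(robot_autonomy_mode([0.9, 0.8, 0.85]))  # autonomous-assist
print(robot_autonomy_mode([0.2, 0.3, 0.25]))  # manual
```

The key design point, reflected in the experiment, is that the assistance is conditional: the robots do not simply become more autonomous, they become more autonomous only while the brain signal indicates overload.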
“A good chunk of computer and human-computing interaction research these days is focused on giving computers better senses so they can either implicitly or explicitly augment our intellect and assist with our tasks,” says Desney Tan, a researcher at Microsoft Research. “This work is a wonderful first step toward understanding our changing mental state and designing interfaces that dynamically tailor themselves so that the human-computer system can be as effective as possible.”
Treacy Solovey suggests that such a system could potentially be used to help drivers, pilots, and supervisors of unmanned aerial vehicles. She says future work will investigate other cognitive states that can be reliably measured using fNIRS.