The need for operating systems to help brains and machines work together.
Ed Boyden, Assistant Professor of Biological Engineering and Brain and Cognitive Sciences at the MIT Media Lab, will give a presentation on using light to study and treat brain disorders at 3:30 p.m. on Wednesday at EmTech 2010.
The last few decades have seen a surge of new technologies that enable the observation or perturbation of information in the brain. Functional MRI, which measures blood flow changes associated with brain activity, is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke. Implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson’s disease, and obsessive-compulsive disorder. And new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely used by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate.
This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient’s environment or the instantaneous state of the patient’s brain.
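The read-compute-deliver loop described above can be sketched as a simple feedback controller. This is a minimal illustration, not a real device interface: the recording and stimulation functions, the proportional control law, and the scalar "activity" variable are all invented for the sketch.

```python
# Hypothetical sketch of a closed-loop "brain coprocessor":
# read out circuit activity, compute a control signal that steers the
# circuit toward a desired state, and deliver that signal back.
import random

random.seed(0)       # deterministic run for the illustration
TARGET_STATE = 0.0   # desired circuit activity level (arbitrary units)
GAIN = 0.5           # proportional control gain (assumed)

def read_activity(state):
    """Stand-in for a recording technology: a noisy observation of state."""
    return state + random.gauss(0.0, 0.05)

def compute_control(observed):
    """Simple proportional controller steering activity toward the target."""
    return GAIN * (TARGET_STATE - observed)

def stimulate(state, control):
    """Stand-in for a perturbation technology: apply the control signal."""
    return state + control

state = 1.0  # circuit starts far from the desired state
for step in range(50):
    observed = read_activity(state)
    control = compute_control(observed)
    state = stimulate(state, control)

print(abs(state - TARGET_STATE) < 0.2)  # loop has driven activity near target
```

In a real system, each stand-in function would be replaced by an actual recording or stimulation technology, and the control law by a model of the circuit being steered; the loop structure is the point of the sketch.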
Some examples of this kind of “brain coprocessor” technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions–providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.
Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
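The idea of common interfaces plus an "operating system" can be made concrete with a small sketch. All class and method names here are invented for illustration: the point is that any recorder or stimulator implementing the shared interface can be swapped in without readapting the rest of the system.

```python
# Hypothetical "common interfaces" for coprocessor components, with a
# Coprocessor class playing the role of the operating system that wires
# interchangeable components together.
from abc import ABC, abstractmethod

class Recorder(ABC):
    @abstractmethod
    def read(self) -> float:
        """Return one sample of observed neural activity."""

class Stimulator(ABC):
    @abstractmethod
    def deliver(self, signal: float) -> None:
        """Deliver a control signal to the neural target."""

class Coprocessor:
    """Wires components together; knows only the interfaces, not the devices."""
    def __init__(self, recorder: Recorder, stimulator: Stimulator, policy):
        self.recorder, self.stimulator, self.policy = recorder, stimulator, policy

    def step(self):
        self.stimulator.deliver(self.policy(self.recorder.read()))

# A new recording technology only needs to implement Recorder:
class MockElectrode(Recorder):
    def read(self) -> float:
        return 0.7

class MockOpticalStimulator(Stimulator):
    def __init__(self):
        self.last_signal = None
    def deliver(self, signal: float) -> None:
        self.last_signal = signal

stim = MockOpticalStimulator()
cpu = Coprocessor(MockElectrode(), stim, policy=lambda x: -x)
cpu.step()
print(stim.last_signal)  # -0.7
```

This mirrors the personal-computer analogy in the text: the Coprocessor class, like an operating system, coordinates components it knows only through their interfaces, so a new recording technology drops in without touching the computation or perturbation code.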
Developing such brain coprocessor architectures would take some work–in particular, it would require technologies standardized enough, or perhaps open enough, to be interoperable in a variety of combinations. Nevertheless, much could be learned from developing relatively simple prototype systems. For example, recording technologies by themselves can report brain activity, but cannot fully attest to the causal contribution that the observed brain activity makes to a specific behavioral or clinical outcome; control technologies can input information into neural targets, but by themselves their outcomes might be difficult to interpret due to endogenous neural information and unobserved neural processing. These scientific issues can be disambiguated by rudimentary brain coprocessors, built with readily available off-the-shelf components, that use recording technologies to assess how a given neural circuit perturbation alters brain dynamics. Such explorations may begin to reveal principles governing how best to control a circuit–revealing the neural targets and control strategies that most efficaciously lead to a goal brain state or behavioral effect, and thus pointing the way to new therapeutic strategies. Miniature, implantable brain coprocessors might be able to support new kinds of personalized medicine, for example continuously adapting a neural control strategy to the goals, state, environment, and history of an individual patient–important powers, given the dynamic nature of many brain disorders.
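A rudimentary coprocessor experiment of the kind described above can be reduced to a record-perturb-record comparison. The data here are simulated and the numbers are arbitrary; no real recording hardware or perturbation method is assumed.

```python
# Sketch of a perturb-and-measure experiment: record baseline dynamics,
# apply a perturbation, record again, and quantify how the perturbation
# altered the circuit.
import statistics

def record_epoch(mean, n=100):
    """Stand-in for a recording session: deterministic samples around a mean."""
    return [mean + 0.01 * ((i % 10) - 4.5) for i in range(n)]

baseline = record_epoch(mean=1.0)   # activity before the perturbation
perturbed = record_epoch(mean=0.4)  # activity after, e.g., optical silencing

effect = statistics.mean(perturbed) - statistics.mean(baseline)
print(round(effect, 2))  # -0.6: the perturbation suppressed activity
```

Repeating such comparisons across neural targets and perturbation parameters is one simple way a prototype system could begin to map which control strategies most efficaciously move a circuit toward a goal state.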
In the future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making. Of course, the augmentation of human intelligence has been one of the key goals of computer engineers for well over half a century. Indeed, if we relax the definition of brain coprocessor just a bit, so as not to require direct physical access to the brain, many consumer technologies being developed today are converging upon brain coprocessor-like architectures. A large number of new technologies are attempting to discover information useful to a user and to deliver this information to the user in real time. Also, these discovery and delivery processes are increasingly shaped by the environment (e.g., location) and history (e.g., social interactions, searches) of the user. Thus we are seeing a departure from the classical view (as initially anticipated by early thinkers about human-machine symbiosis such as J. C. R. Licklider) in which computers receive goals from humans, perform defined computations, and then provide the results back to humans.
Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together–evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.
Ed Boyden is Assistant Professor of Biological Engineering and Brain and Cognitive Sciences at the Media Lab, whose Synthetic Neurobiology group works on neurotechnologies for systematic analysis and control of neural circuits.
Doug Fritz is a Media Lab PhD student in the Fluid Interfaces group, working on extending human capability through just-in-time processing that augments our interface to the world.
Brian Allen is a Media Lab PhD student in the Synthetic Neurobiology group, working to develop new approaches to understanding how the brain gives rise to emotion.