
The Brain Activity Map

Researchers explain the goals and structure of a new brain-mapping project.

A proposed effort to map brain activity on a large scale, expected to be announced by the White House later this month, could help neuroscientists understand the origins of cognition, perception, and other phenomena. These brain activities haven’t been well understood to date, in part because they arise from the interaction of large sets of neurons whose coördinated efforts scientists cannot currently track.

Active links: A fluorescent molecule in the neurons of a mouse brain glows as the brain cells fire.

“There are all kinds of remarkable tools to study the microscopic world of individual cells,” says John Donoghue, a neuroscientist at Brown University and a participant in the project. “And on the macroscopic end, we have tools like MRI and EEG that tell us about the function of the brain and its structure, but at a low resolution. There is a gap in the middle. We need to record many, many neurons exactly as they operate with temporal precision and in large areas,” he says.

An article published Thursday in Science online expands the project’s already ambitious goals beyond simultaneously recording the activity of every individual neuron in a brain circuit. Researchers should also find ways to manipulate the neurons within those circuits and to understand circuit function through new methods of data analysis and modeling, the authors write.

Understanding how neurons communicate with one another across large regions of the brain will be critical to understanding how the brain works, according to participants in the project. Other efforts to map the brain’s physical connections are already under way (see “TR10: Connectomics” and “Mapping the Brain on a Massive Scale”), but those projects examine static brains or capture only a rough view of how brain regions communicate. The new project will probably start by applying its novel, still-to-be-developed technologies to simpler brains, such as those of flies, and will likely take decades to achieve its goals.

Numerous leaders from the fields of neuroscience, nanotechnology, and synthetic biology are expected to collaborate on the effort. “We need something large scale to try to build tools for the future,” says Rafael Yuste, a neurobiologist at Columbia University and a member of the project. “We view ourselves as tool builders. I think we could provide to the scientific community the methods that could be used for the next stage in neuroscience.”

In addition to deepening fundamental understanding of the brain, the project may also lead to new treatments for psychiatric and neurological disorders. “If we truly understand how things like thoughts, cognition, and other features of the brain emerge, then we should have a better understanding of mood disorders, Parkinson’s, epilepsy, and other conditions that are thought to arise from brain-wide circuitry problems,” says Donoghue.

Details about which technology ideas will get the green light and how much money will support their development are expected to be revealed in the forthcoming White House announcement. The project is likely to be supported by the National Institutes of Health, the National Science Foundation, the Defense Advanced Research Projects Agency, the Office of Science and Technology Policy, and private foundations, participants say.

Whichever particular technologies emerge, nanotechnology is likely to be involved, in part because of the need for smaller and faster sensors to record neuronal activity across the brain. Existing sensors can record the electrical activity of neurons, but these chips can typically monitor fewer than 100 neurons at a time and can’t record activity from neighboring neurons, which would be necessary to understand how neurons interact with one another. Paul Weiss, director of the California NanoSystems Institute at the University of California, Los Angeles, a participant in the project, says that nanofabrication techniques could address this problem, with smaller chips bearing smaller electrical and even chemical probes. “We’ve had over a decade a fairly substantial investment in science and technology to develop the capability … to control how what we make interacts with the chemical, physical, and biological worlds,” he says.

Novel optical techniques could also aid the mapping project. Currently, many research groups use calcium-sensitive fluorescent dyes to study neuron firing, but Yuste wants to develop an optical technique that uses voltage-sensitive fluorescent dyes for a faster readout. “Neurons communicate using voltage,” he says. “We would like to develop voltage imaging so we will be able to measure neuronal activity directly.”

While many things about the project are uncertain, one thing is clear: there is going to be a lot of data to store, share, and analyze. “We have just begun to scratch the surface of how you deal with data in high-dimensional spaces,” says Terry Sejnowski, a computational neuroscientist at the Salk Institute. “If you are talking about one million neurons, no one can even imagine what that looks like; it is way beyond what we can perceive in three dimensions.”

The Science article also sketches out a rough timeline. Within five years, it should be possible to monitor tens of thousands of neurons; within 15 years, one million. A fly’s brain has about 100,000 neurons, a mouse’s about 75 million, and a human’s about 85 billion. “With one million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse,” the authors write.
