
Scientists Reveal New Way to Analyse Neural Signals

Neuroscientists discover a new way to determine the information content of neural spike trains

One of the great challenges of modern neuroscience is to understand how neurons transmit and process information. The trouble is that nobody really agrees on how to define the information content of a neural signal, a series of electrical pulses known as a spike train.

One approach is to ask how many bits are required to reproduce the spike train exactly. This gives you the algorithmic content of the signal. But that’s far from satisfactory because neural spike trains seem to contain large amounts of noise. Reproducing that requires a significant number of bits that are presumably irrelevant in the broader scheme of things.
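
To see why that is a problem, consider a rough illustration (my own sketch, not anything from the paper): feeding a binarized spike train through a general-purpose compressor gives a crude upper bound on the bits needed to reproduce it exactly, and a noisy train eats far more of those bits than a regular one, even though most of them describe nothing but noise.

```python
# A rough illustration (not from the paper): lossless compression as a crude proxy
# for the bits needed to reproduce a spike train exactly. The noisy train needs far
# more bits than the regular one, even though most of them just describe noise.
import zlib
import numpy as np

rng = np.random.default_rng(0)

def compressed_bits(spikes: np.ndarray) -> int:
    """Pack a binary (1 ms-binned) spike train into bytes and return its compressed size in bits."""
    return 8 * len(zlib.compress(np.packbits(spikes).tobytes(), level=9))

n_bins = 10_000  # 10 seconds at 1 ms resolution

# Noisy, Poisson-like train: a spike lands independently in each bin with probability 0.05.
noisy_train = (rng.random(n_bins) < 0.05).astype(np.uint8)

# Highly regular train: one spike every 100 ms.
regular_train = np.zeros(n_bins, dtype=np.uint8)
regular_train[::100] = 1

print("noisy train:  ", compressed_bits(noisy_train), "bits")
print("regular train:", compressed_bits(regular_train), "bits")
```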

Today, Robert Haslinger at Massachusetts General Hospital in Charlestown and a few buddies propose a new way to characterise neural signals.

Their approach is to distinguish the random content of a neural signal, which cannot easily be generated by an algorithm, from its statistical structure, which can. They then construct an algorithm that reproduces that structure.

Haslinger and co say this new approach gives a measure of the complexity of the neural signal, a useful quantity for determining the nature of any information processing that must be going on.

The key advantage of this process over other ways of measuring the information content of neural signals is that it clearly shows when and how the computational structure of the signal changes.
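
As a toy illustration of this split between randomness and structure (a sketch under my own assumptions, not the method Haslinger and co actually use), one can bin a spike train and compare block entropies: the entropy rate captures the part that looks irreducibly random, while what is left over reflects statistical structure an algorithm could in principle reproduce.

```python
# A minimal sketch (not the authors' algorithm): estimating how much of a binned
# spike train is irreducible randomness (entropy rate) versus reproducible
# statistical structure (a crude excess-entropy estimate).
from collections import Counter
import numpy as np

def block_entropy(x: np.ndarray, k: int) -> float:
    """Shannon entropy (in bits) of the length-k blocks of a binary sequence."""
    blocks = [tuple(x[i:i + k]) for i in range(len(x) - k + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def randomness_and_structure(x: np.ndarray, k: int = 6):
    """Entropy rate h ~ H(k) - H(k-1); structure ~ H(k) - k * h."""
    h_k, h_km1 = block_entropy(x, k), block_entropy(x, k - 1)
    h_rate = h_k - h_km1            # bits per bin that look random
    structure = h_k - k * h_rate    # bits of predictable, reproducible structure
    return h_rate, structure

rng = np.random.default_rng(1)
noisy = (rng.random(20_000) < 0.2).astype(np.uint8)           # independent spikes
patterned = np.tile([1, 0, 0, 1, 0], 4_000).astype(np.uint8)  # rigid repeating motif

for name, train in [("noisy", noisy), ("patterned", patterned)]:
    h, c = randomness_and_structure(train)
    print(f"{name:10s} entropy rate ~ {h:.3f} bits/bin, structure ~ {c:.3f} bits")
```

On these toy inputs the noisy train shows a high entropy rate and almost no structure, while the patterned train shows the reverse, which is the kind of separation the complexity measure is meant to capture.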

Haslinger and co have already tested the technique using simulated neural signals and real data from rat brains.

This work is another small brick in the giant scientific machine that neuroscientists are creating to crack the problem of neural coding. And while these efforts are laudable, it’s hard to look at the limited progress without thinking that these guys are overlooking some important part of the problem.

One area that may help is the increasingly serious consideration being given to the role of the environment in biological computation. This is the idea that it is only possible to make sense of biological information-processing systems by considering their interaction with the environment and the information it contains.

One good example of the role that the environment plays in computation is the ‘intelligent’ oil drop that can navigate its way through a maze and was developed by a team at Northwestern University a few months ago. A video of this process shows the blob seemingly deliberating at forks in the maze, making various wrong turns then back-tracking and eventually arriving at the centre. It certainly looks intelligent.

It is nothing of the kind, however. The maze is immersed in an alkaline liquid, and a blob of acidic gel is placed at its centre. The chemical gradient this creates changes the surface tension of the droplet as it sits on the liquid surface, and this generates a force that pushes the droplet along.

The droplet, of course, is entirely dumb. Any study of its ability to process information and solve mazes would be futile.

Here, all the information is in the environment and the behaviour of the droplet makes no sense without it.

The importance of the environment in computation is an idea that some roboticists are beginning to get to grips with. They want to use it to design machines that exploit the information content of the environment rather than ignore it. And the success they are having throws into stark relief how thoroughly evolution has exploited this particular trick over the eons.

Perhaps it’s a trick that neuroscientists can learn from too.

Ref: arxiv.org/abs/1001.0036: The Computational Structure of Spike Trains
