While prognostications about “the end of science” might be premature, I think most of us expect that high-school mathematics, and even undergraduate math, will remain pretty much the same for all time. It seems math is just basic stuff that’s true; there won’t be anything new discovered that’s simple enough to teach to us mortals.

But just maybe, this conventional wisdom is wrong. Perhaps sometime soon, a new mathematics will be developed that is so revolutionary and elegantly simple that it will appear in high-school curricula. Let’s hope so, because the future of technology – and of understanding how the brain works – demands it.

My guess is that this new mathematics will be about the organization of systems. To be sure, over the last 50 years we’ve seen lots of attempts at “systems science” and “mathematics of systems.” They all turned out to be rather more descriptive than predictive. I’m talking about a *useful* mathematics of systems.

Currently, many different forms of mathematics are used to model and understand complicated systems. Algebra can tell you how many solutions an equation might have. The algebra of group theory is crucial to understanding the complex crystal structures of matter. The calculus of derivatives and integrals lets you understand the relationships between continuous quantities and their rates of change. Such a calculus is essential to predicting, for example, how long a tank of water will take to drain when the rate of outflow varies with the amount of water still in the tank.
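The tank example can be made concrete with a short sketch. Assuming Torricelli's law (outflow speed proportional to the square root of the water depth, an assumption not stated in the text), the depth obeys dh/dt = -(a/A)·sqrt(2gh), and a simple Euler integration estimates the drain time; the tank and hole dimensions here are invented for illustration:

```python
import math

def drain_time(area_tank, area_hole, h0, dt=0.01):
    """Estimate the time (seconds) for a tank to drain when the
    outflow rate depends on the remaining depth h.
    Torricelli's law: outflow speed v = sqrt(2*g*h), so
    dh/dt = -(area_hole / area_tank) * sqrt(2*g*h)."""
    g = 9.81  # gravitational acceleration, m/s^2
    h, t = h0, 0.0
    while h > 1e-4:  # integrate until the tank is essentially empty
        dh_dt = -(area_hole / area_tank) * math.sqrt(2 * g * h)
        h = max(h + dh_dt * dt, 0.0)
        t += dt
    return t

# Hypothetical numbers: a 1 m^2 tank with a 1 cm^2 hole, 1 m deep.
t = drain_time(area_tank=1.0, area_hole=1e-4, h0=1.0)
```

The closed-form answer for this law is T = (A/a)·sqrt(2·h0/g), roughly 75 minutes for these numbers, and the numerical estimate agrees; this is exactly the kind of prediction the calculus of rates of change makes routine.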

The list goes on: Boolean algebra is the core tool for analyzing digital circuits; statistics provides insight into the overall behavior of large groups with local unpredictability; geometry helps explain abstract problems that can be mapped into spatial terms; the lambda calculus and the pi-calculus enable an understanding of formal computational systems.

Still, all these tools have provided only limited help when it comes to understanding complex biological systems such as the brain or even a single living cell. They are also inadequate for explaining how networks of hundreds of millions of computers work, or how and when artificial evolutionary techniques – applied to fields like software development – will succeed.

These are just a few examples of what are sometimes referred to as complex adaptive systems. They have many interacting parts that change in response to local inputs and as a result change the global behavior of the complete system. The relatively smooth operation of biological systems – and even our human-constructed Internet – is in some ways mysterious. Individual parts clearly do not have an understanding of how other individual parts are going to change their behavior. Nevertheless, the ensemble ends up working.

We need a new mathematics to help us explain and predict the behavior of these sorts of systems. In my own field, we want to understand the brain so we can build more intelligent robots. We have primitive models of what individual neurons do, but we get stuck using the tools of information theory in trying to understand the “information content” that is passed between neurons in the timing of voltage spikes. We try to impose a computer metaphor on a system that was not intelligently designed in that way but evolved from simpler systems.
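To illustrate the kind of information-theoretic tool the passage refers to: one standard move is to bin a spike train into short binary "words" and compute the Shannon entropy of the word distribution, an upper bound on the information the timing patterns could carry. The spike words below are invented for the example; real analyses face severe sampling and interpretation problems, which is Brooks's point:

```python
import math
from collections import Counter

def spike_pattern_entropy(spike_words):
    """Shannon entropy (in bits) of a distribution of binary spike
    'words' -- binned windows of a spike train, '1' = spike in bin.
    This bounds the information capacity of the timing patterns."""
    counts = Counter(spike_words)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical 3-bin spike words observed in repeated trials:
words = ["101", "000", "101", "110", "000", "101", "110", "000"]
h = spike_pattern_entropy(words)
```

The number this produces says how many bits would distinguish the observed patterns, but it says nothing about what those patterns *mean* to downstream neurons, which is where the computer metaphor starts to strain.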

My guess is that a new mathematics for complex adaptive systems will emerge, one that is perhaps no more difficult to understand than topology or group theory or differential calculus and that will let us answer essential questions about living cells, brains, and computer networks.

We haven’t had any new household names in mathematics for a while, but whoever figures out the structure of this new mathematics will become an intellectual darling – and may actually succeed in designing a computer that comes close to mimicking the brain.

*Rodney Brooks directs MIT’s Computer Science and Artificial Intelligence Laboratory.*