
And Now the Power Forecast

A new mathematical technique for measuring how close power grids are to catastrophic failure could help prevent outages in future.

On the afternoon of 14 August 2003, a massive power fluctuation rippled through the grid supplying power to the north-eastern US and Canada. The fluctuation caused more than 500 generators throughout the region to shut down, leaving some 55 million people without power. The blackout left commuter trains stranded, disrupted water supplies as electric pumps stopped working and brought industry across the region to a virtual halt.

In the aftermath of this disaster and numerous others like it around the world, the question being asked of electrical engineers is how to prevent similar blackouts in future. The answer, unfortunately, is far from clear. Power grids are so complex that modelling the behaviour of even relatively small ones is worryingly hard. And predicting the circumstances in which they might fail is harder still.

Today, Michael Chertkov at Los Alamos National Laboratory in New Mexico and a couple of buddies suggest a new way to analyse the limits of power grid performance and to determine how close a grid is to catastrophic failure.

The fundamental problem with power grids is that their reliability depends on two assumptions that once seemed reasonable but are today outdated. The thinking behind these assumptions was that the grid could always be controlled by measuring what was going on and reacting to events, such as changes in demand and generating capacity, as and when they occurred.

The first assumption is that a grid has huge built-in redundancy: things like extra cables to carry power during an overload. For this reason, many grids were originally built with double or even triple redundancy. This once provided ample capacity to cope with overloads but has gradually been eaten away by the massive increase in demand for power since those grids were built. Today, many grids operate close to capacity.

The second assumption is that the grid is fed by a few large power sources that can be quickly and easily switched on and off as needed. But the addition to the grid in recent years of renewable sources such as wind and solar is gradually eroding this element of control.

These changes mean that it is now much harder to react effectively to events on the grid. And that means the chances of a major blackout remain unreasonably high in many parts of the world.

So what to do about it? One widely discussed option is to change from a reactive system of control to a predictive one: in other words, to work out how close to disaster the system is at any given moment and then rewire it in a way that reduces the risk.

That’s easier said than done. The basic rules for calculating how a grid is performing are called Kirchhoff’s circuit laws, after the German physicist Gustav Kirchhoff, who derived them in the 19th century. These laws are deceptively simple. The first is that the sum of the current flowing into any node on a grid must equal the sum of the current flowing out. The second is that the sum of the voltages around any closed loop in the grid must be zero.
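To make the two laws concrete, here is a tiny sanity check in Python on made-up numbers (the currents and voltages below are purely illustrative, not taken from the paper):

# Kirchhoff's first law: current into a node equals current out of it.
currents_in = [3.0, 1.5]       # amps entering a hypothetical node
currents_out = [2.0, 2.5]      # amps leaving it
assert abs(sum(currents_in) - sum(currents_out)) < 1e-9

# Kirchhoff's second law: voltage rises and drops around a closed loop sum to zero.
loop_voltages = [12.0, -7.0, -5.0]   # one source rise, two resistive drops
assert abs(sum(loop_voltages)) < 1e-9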

Any decent-sized grid operator will always be measuring what’s going on and calculating, using Kirchhoff’s laws, how best to balance generation against demand. These solutions are usually easy to find. They are also usually quite stable, meaning that any small change in the operating conditions leads to another stable state.
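As a rough illustration of what such a calculation involves, the sketch below solves the standard “DC” power-flow approximation, a linearised form of Kirchhoff’s laws, on a hypothetical three-bus network. The buses, reactances and injections are invented for the example; real operators use far larger models and the full nonlinear equations.

# A sketch of an operator-style balance calculation on a hypothetical 3-bus grid,
# using the linearised "DC" power-flow approximation of Kirchhoff's laws.
import numpy as np

# Lines: (from_bus, to_bus, reactance in per-unit) -- all made up for illustration.
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.2)]
injections = np.array([1.0, -0.6, -0.4])   # generation (+) and demand (-), sums to zero

n = 3
B = np.zeros((n, n))                        # susceptance (weighted Laplacian) matrix
for i, j, x in lines:
    B[i, i] += 1 / x; B[j, j] += 1 / x
    B[i, j] -= 1 / x; B[j, i] -= 1 / x

# Fix bus 0 as the reference (slack) bus and solve B * theta = P for the phase angles.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

# Line flows follow from the angle differences -- Kirchhoff's laws in matrix form.
for i, j, x in lines:
    print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")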

However, there are conditions in which Kirchhoff’s laws give no solution. When that happens, the grid needs to be rewired, and if that isn’t done quickly, various built-in safety systems start taking generators offline.

The difficulty is in knowing how close to disaster the system is at any instant. That’s hard because it requires a detailed search of a huge parameter space.

That’s where Chertkov and co have made their breakthrough. They’ve used a technique borrowed from other areas of physics to search the important regions of this parameter space. This allows them to separate the regions in which the grid can operate stably from the forbidden regions in which it cannot. They then define an “error surface” as the boundary that divides one region from the other.
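The sketch below is a toy version of this idea, not the authors’ algorithm: using the same hypothetical three-bus DC model as above, it scales demand along a single “stress” direction and bisects to the point where a made-up line limit is first violated, i.e. where that one path through parameter space crosses the error surface.

# Toy illustration of locating the "error surface" along one stress direction.
# The network, line limits and stress direction are all hypothetical.
import numpy as np

lines = [(0, 1, 0.1, 1.0), (1, 2, 0.2, 0.3), (0, 2, 0.2, 0.5)]  # (i, j, reactance, flow limit)
base_demand = np.array([0.0, 0.6, 0.4])        # bus 0 is the generator/slack bus
stress = np.array([0.0, 1.0, 1.0])             # direction in which demand grows

def feasible(scale):
    demand = base_demand + scale * stress
    injections = np.array([demand[1:].sum(), -demand[1], -demand[2]])
    B = np.zeros((3, 3))
    for i, j, x, _ in lines:
        B[i, i] += 1/x; B[j, j] += 1/x; B[i, j] -= 1/x; B[j, i] -= 1/x
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
    return all(abs((theta[i] - theta[j]) / x) <= limit for i, j, x, limit in lines)

# Bisect between a feasible and an infeasible scaling to locate the boundary.
lo, hi = 0.0, 5.0
assert feasible(lo) and not feasible(hi)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"the grid crosses the error surface at roughly {lo:.3f} x the stress direction")

In the real problem the boundary has to be mapped over a huge, high-dimensional space of possible loads rather than a single direction, which is what makes the calculation so hard.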

The task for grid operators is to determine how far from this error surface the grid is at any instant and to work out how to move it away from the forbidden region if it gets too close.

To be sure, this is a fiendish calculation. But the significance of the work is being able to pose it as a well-defined problem at all.

Chertkov and co have even tested their method on a model of the relatively small power grid in operation on the Pacific island of Guam and also on the IEEE Reliability Test System-96, a system designed to benchmark power grid reliability strategies.

They say the technique identifies weak links in a grid as well as over- and under-used generators. This allows operators to anticipate and avoid problems before they occur. Obviously, operators will need to be aware of the potential weaknesses in this approach, one of which is that it performs only a selective search of the parameter space, which means that certain failure modes could go unnoticed.
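To give a rough sense of what a “weak link” ranking might look like, the hypothetical three-bus model from the sketches above can be ranked by how much headroom each line has left; the line with the least headroom is the first candidate to bind. This is only an illustrative heuristic on invented numbers, not the ranking procedure described in the paper.

# Rank the toy network's lines by remaining headroom (limit minus current flow).
import numpy as np

lines = [(0, 1, 0.1, 1.0), (1, 2, 0.2, 0.3), (0, 2, 0.2, 0.5)]  # (i, j, reactance, limit)
injections = np.array([1.0, -0.6, -0.4])

B = np.zeros((3, 3))
for i, j, x, _ in lines:
    B[i, i] += 1/x; B[j, j] += 1/x; B[i, j] -= 1/x; B[j, i] -= 1/x
theta = np.zeros(3)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

margins = []
for i, j, x, limit in lines:
    flow = abs((theta[i] - theta[j]) / x)
    margins.append((limit - flow, f"line {i}-{j}"))
for margin, name in sorted(margins):            # smallest headroom first
    print(f"{name}: {margin:.3f} p.u. of headroom")

Any such ranking, of course, inherits the blind spots of the search that produced it.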

However, a reliable predictive approach will clearly be hugely useful. The next stage will be to apply it to larger and more complex power grids using more powerful computers. And the ultimate goal will be a kind of power forecast that could forever prevent the sort of outage that paralysed the north-eastern US and Canada back in 2003.

Ref: arxiv.org/abs/1006.0671: Predicting Failures in Power Grids
