Scheduling Wind Power

Better wind forecasts could prevent blackouts and reduce pollution.
April 17, 2008

As wind power becomes more common, its unpredictability becomes more of a problem. Sudden drops in wind speed can send grid operators scrambling to cover the shortfall and even cause blackouts; unexpected surges can leave conventional power plants idling, incurring costs and spewing pollution to no purpose.

Ripe for the harvest: Power-grid operators are using wind-clocking anemometers and weather stations installed at wind farms to predict wind power production hours or days in advance.

To address the problem, power-grid operators are combining hyper-local meteorological data and artificial intelligence to predict when the wind turbines installed on their networks will turn. This month, New York’s Independent System Operator (NYISO) announced plans to integrate wind modeling into its grid control schemes by the summer, and the Electric Reliability Council of Texas (ERCOT) plans to fire up a similar system this summer, if not sooner. The California Independent System Operator (Cal-ISO), meanwhile, plans to expand a forecasting program that already covers about a quarter of the state’s wind-power capacity.

What makes these modeling systems accurate and affordable is real-time data supplied by the wind farms themselves: wind speed and direction, plus, in many cases, local temperature, barometric pressure, and humidity. Companies that specialize in weather modeling provide software that, over time, learns to correlate this data with power output and recognize the weather conditions that signal more or less power output in the near future. One of these companies, Albany’s AWS Truewind, is working with California, New York, and Texas, but its competitors include 3 Tier Environmental Forecast Group; Garrad Hassan, in the United Kingdom; and WindLogics, based in St. Paul.

When wind farms were less common, grid controllers could essentially ignore their varying output, as it was all but indistinguishable from natural fluctuations in consumer use. Throttling conventional power plants up or down kept supply and demand balanced. But those days are passing fast. Take NYISO, which had virtually no wind power to contend with five years ago. Today, it has more than 500 megawatts on its grid and proposals pending that would push that to almost 7,000 megawatts. That’s about 17 percent of its current power base.

Texas, which had 4,446 megawatts of wind on its grid by the end of 2007, more than any other state, has already discovered what large-scale wind-power ebbs and flows can do if controllers aren’t watching. “We’ve had some instances recently where we’ve either had some very high prices in the short-term market because of our inability to forecast the wind, or where we’ve actually had to declare emergencies because we were concerned about reliability, in part because we couldn’t see how much wind was on the system,” says Jess Totten, director of electric industry oversight for Texas’s Public Utility Commission.

A sharp drop in wind power was cited as a major cause of emergency power outages ordered by ERCOT on the evening of February 26, for example. Consumers drew far more power than ERCOT had projected, and several conventional power plants did not run as scheduled, but the wind-power shortfall was the last straw.

Ironically, a wind-forecasting pilot project that ERCOT had initiated with AWS Truewind predicted the wind drop more than a day earlier. “The system operators didn’t know that was coming, but the forecasters did, which is a little frustrating,” says Michael Goggin, electric-industry analyst for the American Wind Energy Association, a Washington, DC, trade group. “They just didn’t walk it over to the right person. If they had integrated it into their system operation, things would have gone very differently.”

Such forecasting will become far more critical. Earlier this month, a report by General Electric, commissioned by the state, predicted that when Texas’s wind capacity hits 15,000 megawatts, wind-induced power drops on the order of 2,400 megawatts in less than half an hour will be an annual occurrence. For context, the drop that caught operators short on February 26 was just 80 megawatts.

Forecasting is not only a way to ensure system reliability. Cal-ISO and the California Energy Commission have determined that it’s also critical to minimizing costs while achieving the pollution reductions anticipated by the state’s renewable portfolio standard, which requires utilities to derive 20 percent of their power from renewable sources by 2010, and 33 percent by 2020. Cal-ISO has to guard against wind-power shortages by contracting for backup power with conventional power plants on its network. To provide effective backup, some of those conventional plants would have to idle, generating pollution even if they are never called on to deliver megawatts. Better wind forecasting will ensure that fewer of those backup plants have to gear up in the first place.

Cal-ISO plans to beef up its current wind-forecasting system, which predicts wind power over the next hour, so that it includes a forecast for the day to come, the time scale on which it contracts for backup power. Stretching out forecasts to a day will likely increase their average error rate to 15 percent or more, compared with 7 percent or less for a one-to-four-hour forecast, according to figures provided by AWS Truewind. But reports prepared by the state in 2007 suggest that even relatively inaccurate day-ahead forecasts can make a big difference.

If 5,000 megawatts of wind power is forecast, an error of 20 percent would mean that wind farms could actually provide anywhere between 4,000 and 6,000 megawatts of power. In this case, Cal-ISO’s backup power order could routinely be as much as 1,000 megawatts too high or too low. But without a forecast, the backup order would always be at least 4,000 megawatts too high, because the operator would have to contract for backup as if no wind at all were coming.
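The arithmetic in that example works out as follows (the numbers come from the paragraph above; the helper function is just illustrative):

```python
def wind_bounds_mw(forecast_mw, error_rate):
    """Range of actual wind output implied by a forecast and its error rate."""
    low = forecast_mw * (1 - error_rate)
    high = forecast_mw * (1 + error_rate)
    return low, high

forecast = 5000.0  # MW of wind forecast for the day ahead
low, high = wind_bounds_mw(forecast, 0.20)

# With the forecast, the backup order can miss by at most the error band:
max_miss_with_forecast = forecast - low
# Without a forecast, the operator must contract as if no wind will show up,
# so the order is too high by however much wind actually arrives:
min_excess_without_forecast = low

print(low, high)                    # 4000.0 6000.0
print(max_miss_with_forecast)       # 1000.0
print(min_excess_without_forecast)  # 4000.0
```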
