The director of the National Oceanic and Atmospheric Administration’s forecasting lab says the federal government is still relying partly on computer models designed for much larger weather systems to predict what hurricanes will do.
Alexander E. (“Sandy”) MacDonald, Director of NOAA’s Forecast Systems Laboratory in Boulder, CO, says a more precise model is in the works that would allow for sharper predictions.
But getting it ready for deployment will require more computation and research dollars. “We have a ways to go,” he tells Technology Review’s Chief Correspondent, David Talbot, who interviewed him this week.
TR: The Katrina forecast was extremely accurate, but Rita wound up farther north than initially predicted. This meant the evacuation of Houston was perhaps not as necessary as the evacuation of New Orleans. Weren’t the same computer models used for both forecasts?
AM: Forecasters use as many as ten different models – that’s called a model ensemble – to try to determine what the hurricane track and intensity are going to be. It’s sort of like you call ten stockbrokers and say “What’s the best stock?” You use all of that information to come up with the best forecast. Hurricane Katrina was a very accurate forecast, partly because the models were very accurate. For Hurricane Rita, the models were quite widely varied in their predictions, so that was a harder forecast. This showed us that we still have improvements to make to the models.
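The ensemble idea MacDonald describes can be sketched in a few lines: combine several models' track predictions into a consensus, and use their spread as a confidence signal. This is a minimal illustration, not NOAA's method; the model names and landfall longitudes are invented for the example.

```python
# Hypothetical ensemble-forecast sketch: combine several models'
# predicted landfall longitudes into a consensus and a spread.
# Model names and values are invented for illustration.
from statistics import mean, stdev

# Each entry: (model name, predicted landfall longitude in degrees west)
track_predictions = [
    ("Model A", 89.4),
    ("Model B", 90.1),
    ("Model C", 89.8),
    ("Model D", 91.0),
]

longitudes = [lon for _, lon in track_predictions]
consensus = mean(longitudes)   # simple ensemble mean
spread = stdev(longitudes)     # disagreement among the models

print(f"Consensus landfall: {consensus:.1f} degrees W")
print(f"Ensemble spread:    {spread:.2f} degrees")
```

A small spread, as with Katrina, signals a high-confidence track; a large spread, as with Rita, signals a harder forecast.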
TR: What accounts for the fact that the models agreed with each other more closely for Katrina than for Rita?
AM: The differing levels of atmospheric stability. A hurricane can become trapped between two high pressure systems, which creates a stable “chute.” An unstable situation is one where there’s no “chute” – just an open area without high pressure systems, so the hurricane can go in any direction. Katrina was more trapped – it had to go the direction it was going. Rita’s path depended on pretty small differences in the pressure around it.
TR: Improving models starts with collecting more hurricane data. How can this be improved?
AM: Right now we get measurements of a hurricane every six hours with a manned plane that carries “dropsondes” – similar to weather balloons, except they measure winds, temperature, and pressure as they fall from the plane to the surface. But you could actually have an unmanned aircraft system, a UAS, ride along above the eye of the hurricane, at 65,000 feet, and it could release a dropsonde every hour, providing almost continuous measurements in the center of the storm. That is something that we can’t do now. The UAS could have instruments, either microwave or radar, that could tell us continuously the surface winds based on the waves and other ocean signatures. That is an example of something that would be possible.
UASs are one tool, but there are a number of others: more buoys with weather and ocean sensors on the water’s surface, more manned aircraft, better use of satellites. We could also use Doppler radar on the manned airplanes to measure the hurricane eyewall wind structure, which can be inserted into the model to improve prediction.
TR: You mention the dynamics of the eye and the eyewall. How well are these dynamics understood?
AM: Hurricanes can change pretty rapidly, turn in a different direction, or go from a Category 4 to a 1. Hurricanes will go through eyewall cycles. As new eyewalls grow and take the place of old eyewalls, we see these kinds of intensity changes. The eyewall is where you get 150-200 mph winds. Someone described Hurricane Andrew as a 30-mile-wide tornado. So we want our models to incorporate eyewall dynamics correctly. We don’t understand everything that causes eyewall cycles; if we are going to predict those, we want to be able to see short-term changes. If we want to learn what’s causing them, we have to take more measurements. We have quite a ways to go. There are lots of things we can do to improve accuracy – like better models not only of where the hurricane is going, but of where the storm surge is going to hit.
TR: More data and higher resolution mean more computing horsepower, right?
AM: Until the last few years, weather prediction models were built for geographically large storms, like the standard low pressure systems we see on the weather maps. They did not resolve the most important weather, such as tropical storms and thunderstorms. Right now, we are testing new hurricane models, not in use yet, that run at resolutions as high as 1 to 4 km (compared with the current global models that run with 40 km grid meshes), and have much more realistic hurricane dynamics. But in order to run those we need bigger, faster computers. They should help improve the hurricane forecasts.
TR: We’re trying to predict hurricanes with modeling tools meant to predict larger weather systems?
AM: We do have a model developed especially for hurricanes that is used as one of the operational ensemble models. It was developed in the early 1990s by NOAA’s Geophysical Fluid Dynamics Laboratory, but it is lower resolution and does not accommodate some of the crucial physical processes. Since modeling techniques and computer speeds have advanced, we are developing a new model called the Hurricane Weather Research and Forecast model.
TR: How much computation do you have, and what do you need, exactly?
AM: The operational weather prediction system is an IBM massively parallel supercomputer in the Washington DC area, at the National Weather Service’s National Center for Environmental Prediction. A weather model typically uses 500 to 1,000 processors in parallel. A faster computer is crucial, because then you can represent the real dynamics of a hurricane. To represent what is happening in the eyewall you need a very high resolution model and a very fast computer.
TR: The benefits seem obvious – tighter predictions can save lives and avoid needless evacuations.
AM: They always used to say an evacuation alone costs a million dollars a mile. If you warn 50 miles of coastline, it’s going to cost $50 million. I think that was pre-Katrina. We are going to look at Rita and Katrina and say they cost perhaps as much as $10 million per mile, and we are going to need much higher accuracy. A warning of 100 miles of coastline would cost $1 billion. There are lots of things we can do to improve our forecast so we can improve on our evacuation accuracy.
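The arithmetic behind MacDonald's figures is straightforward; a quick sketch using the post-Katrina estimate he quotes:

```python
# Evacuation-cost arithmetic from the figures quoted above
# (a rough post-Katrina estimate, per MacDonald).
cost_per_mile = 10_000_000   # dollars per mile of warned coastline
miles_warned = 100

total = cost_per_mile * miles_warned
print(f"${total:,}")   # $1,000,000,000 for a 100-mile warning
```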
TR: What does the federal government spend on hurricane forecasting now, and what’s needed?
AM: We put about $50 million total into everything from the hurricane center to the models. I think that if you really said “You know, this is an extraordinary problem that is going to cost, like Katrina did, $100 billion,” you’d want to spend an additional couple hundred million a year to really improve as fast as possible.