How two new supercomputers will improve weather forecasts
Each of the new machines at the US National Weather Service is about the size of 10 refrigerators, has a capacity of 12.1 petaflops, and will help predict storms made worse by climate change.
When Hurricane Michael made landfall on the Gulf Coast of Florida in October 2018, it was a category 5 storm, with wind speeds over 150 miles per hour. The US National Hurricane Center had initially predicted its winds would reach less than half that speed.
Michael went through a process called rapid intensification, where a hurricane develops massively higher wind speeds in a short time. And the experts didn’t see it coming.
Predicting the chaos at the center of a hurricane, and understanding how storms strengthen, remains a challenge for forecasters. But armed with better models and more experience, they accurately predicted that Hurricane Ida, which hit New Orleans in late August this year, would rapidly intensify, although the storm strengthened even more than they had expected.
Supercomputers have been part of these improvements in predicting where, when, and how storms might hit. And by the end of 2021, the US National Weather Service (NWS) will receive two brand-new supercomputers. It’s an upgrade they hope will continue the steady march toward more accurate forecasts, which will become even more essential as climate change continues to fuel more intense storms.
The agency will use the new machines in operational forecasting—the system that forecasters use to make predictions like the ones on the nightly news. Once the agency has fully vetted them, probably in July 2022, the new supercomputers should help meteorologists better predict everything from the chance of rain in Denver to the odds that a hurricane will hit Miami.
Each supercomputer (one in Virginia and one in Arizona, so there’s always a backup) is about the size of 10 refrigerators and has a capacity of 12.1 petaflops. “Flops” stands for “floating point operations per second,” so 12.1 petaflops means the supercomputers can make just over 12 quadrillion calculations every second. It’s a huge upgrade—nearly triple the size of the old system—and will cost roughly $300 million to $500 million over the next decade.
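The petaflops-to-calculations conversion above is just unit arithmetic, sketched here as a quick back-of-the-envelope check (the figures come from the article; the calculation itself is not an official NWS one):

```python
# 1 petaflop = 10^15 floating point operations per second.
petaflops = 12.1                     # capacity of each new supercomputer
ops_per_second = petaflops * 1e15    # = 1.21e16, i.e. just over
                                     # 12 quadrillion calculations/second

print(f"{ops_per_second:.3e} operations per second")
```

"Quadrillion" here is the US short-scale 10^15, which is why 12.1 petaflops reads as "just over 12 quadrillion calculations every second."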
Computing capacity upgrades are a big piece of recent improvements in forecasts of hurricane path and intensity, says Michael Brennan, head of the Hurricane Specialist Unit at the National Hurricane Center.
Forecasts like the ones Brennan’s team releases are made by humans who sort through different models and decide how to synthesize the information.
Projections of hurricane paths have gotten steadily more accurate over the past 30 years as large-scale weather models, and the computers running them, have improved. Average errors in hurricane path predictions dropped from about 100 miles in 2005 to about 65 miles in 2020. The difference might seem small when storms can be hundreds of miles wide, but when it comes to predicting where the worst effects from a hurricane will hit, “every little wiggle matters,” Brennan says.
Understanding and predicting hurricanes’ intensity has been more challenging than predicting their paths, since the strength of a hurricane is driven by more local factors, like wind speed and temperature at the center of a storm. Still, intensity predictions have also started to improve in the past decade. Errors in the intensity forecast within 48 hours decreased by 20% to 30% between 2005 and 2020.
When building models to predict something as complicated as the weather, “it’s easy to suck up additional computer resources,” says Brian Gross, director of the NWS Environmental Modeling Center.
Models can benefit from computing power in multiple ways, and each model can quickly slurp up huge amounts of capacity. A model can get more complex by digesting more information or by using more complicated physics to better represent the world. In a weather model, this might mean more details about processes in the ocean when considering the frequency of hurricanes.
More computing power might also allow a model to get more geographically precise. Weather models work by splitting the globe up into a bunch of pieces and trying to calculate what will happen in each of them. A higher-resolution model will break up the globe into smaller fragments, which means there are more of them to consider.
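A rough way to see why higher resolution is expensive: the number of grid cells grows with the inverse square of the cell size. This toy calculation assumes a simple tiling of Earth's surface by square cells (real weather models use far more sophisticated grids, and also stack many vertical levels), using the 34 km and 25 km resolutions mentioned below for the GEFS:

```python
import math

EARTH_SURFACE_KM2 = 510_072_000  # approximate surface area of Earth

def grid_cells(resolution_km: float) -> int:
    """Rough count of square cells needed to tile the globe at a
    given horizontal resolution. A simplification for illustration."""
    return math.ceil(EARTH_SURFACE_KM2 / resolution_km**2)

coarse = grid_cells(34)  # ~441,000 cells per model level
fine = grid_cells(25)    # ~816,000 cells per model level

print(f"{fine / coarse:.2f}x more cells")  # ~1.85x
```

Going from 34 km to 25 km nearly doubles the number of cells to compute at every time step, before accounting for vertical levels or the shorter time steps finer grids typically require.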
Finally, researchers can put together what’s called an ensemble model, which they run as many as 20 or 30 times. Each of those runs is performed under slightly different conditions to see how the predictions differ. The results are then tallied up and considered together.
You’ve probably seen ensemble models in hurricane predictions. Consider a storm that starts in the Atlantic Ocean. If an ensemble forecast contains a wide variety of results, with the storm heading to Texas in some and skirting up the East Coast in others, it has many plausible paths. But if the ensemble members all show the storm hitting Florida’s Gulf Coast, forecasters can be more certain about where it will land.
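The ensemble idea can be sketched in a few lines. Everything below is hypothetical: `toy_track` is a stand-in for a full weather model, and the names, headings, and perturbation size are invented for illustration. The point is the structure: run the same model many times from slightly perturbed initial conditions, then look at how much the results spread out.

```python
import random

def toy_track(initial_heading_deg: float, steps: int = 10) -> float:
    """Hypothetical stand-in for one model run: evolve a storm
    heading with a small deterministic drift."""
    heading = initial_heading_deg
    for _ in range(steps):
        heading += 0.5  # purely illustrative dynamics
    return heading

def ensemble_forecast(base_heading: float, members: int = 30,
                      perturbation: float = 2.0) -> list:
    """Run the model `members` times from slightly perturbed
    initial conditions, as an ensemble forecast does."""
    rng = random.Random(42)  # fixed seed for reproducibility
    return [toy_track(base_heading + rng.uniform(-perturbation,
                                                 perturbation))
            for _ in range(members)]

tracks = ensemble_forecast(base_heading=290.0)
spread = max(tracks) - min(tracks)
# Small spread: the members agree, and forecasters can be more
# confident. Large spread: many plausible paths remain.
print(f"ensemble spread: {spread:.1f} degrees")
```

In a real ensemble the perturbations reflect uncertainty in the observations feeding the model, and forecasters read the spread of the members as a measure of confidence, which is exactly the Texas-versus-Florida intuition described above.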
Previous supercomputer upgrades have led to improvements in multiple areas for some models, like the all-purpose Global Ensemble Forecast System (GEFS). In 2018, the last time the system was upgraded, NWS increased its resolution from 34 kilometers to 25 kilometers and the number of members in the ensemble from 21 to 31.
The changes, and the resulting forecast accuracy, have made life easier for officials like Jim Stefkovich, a meteorologist for the Alabama Emergency Management Agency who helps the state government prepare for weather risks. “Early in my career, a lot of times things would strike without warning, because the technology wasn’t that good, and the warnings weren’t always that accurate,” Stefkovich says. “You don’t see that as much anymore.”
But even today, signals from weather forecasters aren’t always clear to the public, which may not be tuned in for every storm, or may not understand the difference between a storm watch (hazardous conditions are possible) and a storm warning (they are expected or already occurring). If people don’t know how to react, a better forecast won’t really help them stay safe, Stefkovich says.
This happened during Ida, he says. The forecasts for the storm’s path and its strength were accurate a few days before. But partially because of poor communication, dozens of people lost their lives, and millions lost power or saw their homes or cars damaged.
Climate change means the risk of extreme weather will likely get worse. Forecasters hope that better predictions, communicated in the right way, can help people make better decisions when the storms come.