Personalized Weather Forecasts
IBM has launched a new weather service called Deep Thunder that can predict rain, wind, and temperature conditions down to a one-kilometer resolution. In time, IBM researchers say, they should even be able to narrow the resolution down to individual streets.
The idea is to provide weather-sensitive businesses in metropolitan areas with information that’s more accurate than what government agencies are capable of providing, says Lloyd Treinish, a researcher at IBM’s TJ Watson Research Center, in Yorktown Heights, NY.
At a local submetropolitan level, the weather really can vary quite significantly, says Treinish. Yet typical forecasts will often slap a single simplistic symbol, such as the sun, a cloud, or a snowflake, on an area representing a small city.
A huge number of businesses really depend on accurate weather forecasts, says Stephen Lord, director of the National Oceanic and Atmospheric Administration’s Environmental Modeling Center, in Camp Springs, MD. Transportation, energy distribution, shipping, and even sporting events are all at the mercy of the weather, he says. Indeed, as much as $1 trillion worth of the U.S. economy is weather sensitive, says Treinish. By providing more-detailed forecasts, IBM hopes to help businesses streamline their operations and save money.
For example, local municipal services such as snowplowing could be deployed more efficiently with more-detailed information about precisely where snow will fall. Similarly, by being more prepared, utility companies could better manage energy demand and better cope with outages caused by severe weather. Even airports and postal services would benefit: they could plan and schedule operations around weather conditions.
Government agencies, such as the National Weather Service (NWS), are currently unable to provide the same level of detail. This is partly because they don’t have the technical resources, but it’s also because they are mandated to offer a uniform level of service across the nation, preventing them from providing higher resolutions for some areas and not others. Even at a metropolitan level, where local meteorological services try to improve on NWS forecasts by factoring in local measurements and conditions, the resolution is rarely much better than eight kilometers, says Treinish.
When combined, all these factors represent a gap in the market that companies like IBM could fill by tailoring their services to individual businesses. “We want to think about the information in relation to solving particular business problems,” says Treinish.
Deep Thunder increases the resolution by incorporating additional information about the local area that can affect weather conditions. Geographical information (such as the topographic layout of the metropolitan area) and coastal information (such as the city’s exposure) are fed into the model. “The differences are subtle, but at a local level they may be important,” says Treinish. “The way thunderstorms occur in New York is different from the way they occur in Miami.”
This information is then fed into one of IBM’s pSeries Cluster 1600 computers, effectively a mini-supercomputer, to calculate forecasts every 30 minutes. Trials are currently being run in several cities, including Miami, FL; New York, NY; Kansas City, KS; and Baltimore, MD. “We are running a four-dimensional physics model for each of these metropolitan areas,” says Treinish.
Having one-kilometer resolution does not necessarily mean you can resolve detail at that level, says Clive Wilson, manager of the mesoscale modeling group at the United Kingdom’s Met Office, in Exeter. The Met Office currently provides the United Kingdom with up to four-kilometer resolution but is conducting trials of models that work at higher resolutions. The advantage of running models at these finer scales is that it improves the overall accuracy of weather forecasts for a larger area.
IBM’s Anthony Praino, who also works on Deep Thunder, agrees. There is a trade-off, he says, between increasing the resolution and maintaining accuracy. Much as raising the pixel count of a display sharpens the overall picture even though individual pixels vary more dramatically, the forecast for any single one-kilometer cell may be less reliable than a coarser forecast, while the aggregate forecast for the larger region becomes more detailed and more accurate.
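That trade-off can be seen in a toy calculation. The sketch below is a hypothetical illustration, not IBM’s model: it assumes each one-kilometer cell’s forecast carries an independent random error, and shows that averaging those cells into coarser eight-kilometer blocks cancels much of the error, so the coarse-region forecast is more accurate than any individual fine cell.

```python
import random

random.seed(0)

CELLS = 64        # 64 one-km cells along a transect (illustrative numbers)
BLOCK = 8         # group them into 8-km blocks
TRUE_TEMP = 20.0  # true temperature, taken as uniform for simplicity

# Per-cell forecasts: truth plus a random error of up to +/- 2 degrees
forecast = [TRUE_TEMP + random.uniform(-2, 2) for _ in range(CELLS)]

# Mean absolute error of the individual 1-km cells
fine_err = sum(abs(f - TRUE_TEMP) for f in forecast) / CELLS

# Average each 8-cell block, then measure the block-level error
blocks = [sum(forecast[i:i + BLOCK]) / BLOCK
          for i in range(0, CELLS, BLOCK)]
coarse_err = sum(abs(b - TRUE_TEMP) for b in blocks) / len(blocks)

print(f"1-km cell mean error:  {fine_err:.2f} degrees")
print(f"8-km block mean error: {coarse_err:.2f} degrees")
```

With independent cell errors, the block-level error shrinks roughly with the square root of the number of cells averaged, which is one statistical reason a finer grid can improve the forecast for a region even when single cells are noisy.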
Nevertheless, there is a fundamental problem with making claims about accuracy when using such high resolutions, says Lord: “How do you verify it?” The only way would be to create a network of ground-based sensors covering one-kilometer grids.