But there is reason for hope. A combination of new leadership and reorganization may turn things around during the next few years. The old saying, it is darkest before the dawn, may well prove true for operational numerical weather prediction in NOAA and the National Weather Service.
As I have described in many previous blogs, the U.S. is lagging behind in operational global weather prediction. Today, as for many years, the U.S. global modeling system, the NOAA/NWS GFS (Global Forecast System) model, has trailed the world leader, the European Center Model, and is consistently less skillful than the UKMET office model run by the British. We are usually tied for third with the Canadian Model (CMC). And we lag behind the others even though the U.S. has the largest meteorological research community in the world.
To illustrate the problems, here are the latest comparative statistics (anomaly correlations!) for the global skill of the 5-day forecast at 500 hPa (about 18,000 ft up) for a variety of models. A value of 1 represents a perfect forecast. The best forecast is the European Center's (an average of .915), next is the UKMET office's (the British folks, at .897), then the U.S. GFS at .869, with the Canadian CMC at .773.
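For readers curious what those anomaly correlation scores actually measure, here is a minimal Python sketch (my own illustration, not NOAA's or ECMWF's verification code): it correlates the forecast and the verifying analysis for a field like 500 hPa height after removing the climatological mean from both. A real verification system would also weight grid points by latitude and aggregate over many forecasts.

```python
import numpy as np

def anomaly_correlation(forecast, verifying_analysis, climatology):
    """Centered anomaly correlation of a forecast field against the
    verifying analysis, with the climatological mean removed from both.
    1.0 is a perfect forecast; around 0.6 is often taken as the lower
    limit of a useful forecast. No latitude weighting is applied here."""
    f = forecast - climatology            # forecast anomaly
    a = verifying_analysis - climatology  # observed anomaly
    f = f - f.mean()
    a = a - a.mean()
    return float(np.sum(f * a) / np.sqrt(np.sum(f**2) * np.sum(a**2)))
```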
It is no secret why the GFS is behind: an aging model, inferior data assimilation and use of observational assets, and relatively primitive model physics (e.g., how cloud processes, thunderstorms, and turbulence are described). Inadequate computer resources have contributed as well. Data assimilation is the step in which a wide variety of observational data is quality controlled and used to create a physically realistic three-dimensional description of the state of the atmosphere. The European Center does a very good job at this.
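To make the data assimilation idea concrete, here is a toy, hedged sketch (illustrative variable names, nothing like the operational NWS or European Center systems): a single linear analysis step blends the model's short-term forecast (the "background") with new observations, weighting each according to its estimated error.

```python
import numpy as np

def analysis_step(x_b, y, H, B, R):
    """Blend a background forecast x_b with observations y.
    B and R are the background- and observation-error covariances;
    H maps the model state into observation space.
    Returns the analysis: the best estimate of the current atmosphere."""
    innovation = y - H @ x_b                      # observation minus background
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain: how much to trust the obs
    return x_b + K @ innovation
```

Operational centers solve a vastly larger version of this problem (millions of model variables and observations) with variational and ensemble methods, and doing that well is where the European Center excels.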
The inferiority of the U.S. global model has gotten a lot of press over the last six years, particularly after the GFS proved clearly less skillful than the European Model for Hurricane Sandy. The hue and cry in the media resulted in a computer upgrade for the National Weather Service and the acquisition of a new global model, the NOAA Geophysical Fluid Dynamics Lab (GFDL) FV3. This new model has been running in parallel with the GFS for nearly a year now.
But there are problems with the new FV3. Its verification scores are only slightly better than the GFS's, something shown in the statistics above (the FV3 came in at .881, good for third place). Part of the problem is that the FV3 uses the same data assimilation system as the GFS, which is not as advanced as the one used by the European Center.
But there is something else: during the cold periods of the past winter, the FV3 predicted crazy, excessive snow amounts. And more detailed verification indicated that the FV3 was too cold in the lower atmosphere. Disturbingly, the NWS evaluation protocols had failed to catch these problems earlier.
Coastal California was predicted by the FV3 to be snowbound in February. It didn't happen.
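As a rough illustration of what more detailed verification looks like (hypothetical arrays, not the actual NWS evaluation protocol), one can compute the mean forecast-minus-observed temperature at each pressure level; a systematic low-level cold bias then shows up directly instead of being hidden inside a single global skill score.

```python
import numpy as np

def mean_bias_by_level(forecast_temps, observed_temps, levels):
    """forecast_temps and observed_temps have shape (n_samples, n_levels),
    e.g. forecasts matched to radiosonde observations. Returns the mean
    forecast-minus-observed temperature per level; persistently negative
    values at the lowest levels would indicate a low-level cold bias."""
    bias = np.nanmean(forecast_temps - observed_temps, axis=0)
    return dict(zip(levels, bias))
```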
In some ways, this is NOAA's version of the Boeing 737 MAX disaster: in the hope of beating the competition, a software system was rushed into operations without sufficient testing and evaluation.
Another major problem? It appears that there aren't enough people inside the National Weather Service (NWS) who actually understand the new FV3 model.
The FV3 was developed outside the NWS by a team under a very capable weather modeler, S.-J. Lin of the NOAA Geophysical Fluid Dynamics Lab. In essence, the model was "thrown over the fence" to the Environmental Modeling Center (EMC) of the NWS, and few people there actually understand the FV3 in any depth. About three, according to my sources. And S.-J. Lin has recently moved back to Taiwan and is no longer available.
In addition to lacking deep knowledge of the core FV3 modeling system, the NWS has little effort underway to improve the FV3's physics, such as the microphysics that describes how cloud and precipitation processes work in the atmosphere. Physics is one of the key deficiencies of the U.S. models. And the data assimilation system was simply carried over from the inferior GFS.
But the situation is even worse than that. The FV3 was supposed to be a community modeling system, one that could easily be run outside of the National Weather Service, including at universities and in the private sector. Having others use the model is essential: instead of only a handful of folks inside the NWS working on and testing the model, you get hundreds or thousands doing so. You end up with a much better prediction system that way.
But the NWS has put virtually no effort or resources into making the FV3 a community modeling system, TWO YEARS after making the decision to use it. I have tried to use the latest release myself. There is no support, no tutorial, no help desk. Nothing. The code release is incomplete and poorly documented. The model code is hardwired for NOAA computers, and some of my department's most accomplished IT people can't get it to run. Not good.
In contrast, the major U.S. competition to the FV3, the NCAR MPAS model (NCAR is managed by a consortium of U.S. universities, many with atmospheric sciences departments), is easy to run and has lots of support. One of my students got it running in days.
The bottom line in all this is that the U.S. move to improved global prediction using FV3 is not going well.
The NWS has made the right move in holding off on implementation until the FV3 is at least as good as the old GFS, considering the critical role the U.S. global model plays in American weather prediction.
But the dawn still beckons...
Things are pretty dark for U.S. global prediction right now. But there are some reasons for optimism.
First, the FV3 is a better designed and more modern weather modeling system than the old GFS, including being more amenable to running on large numbers of processors. It can be the basis for improvement.
Second, NOAA/NWS leadership accepts that there are problems and wants to fix them.
Of particular importance, the key person responsible for U.S. operational prediction and observation, the Assistant Secretary of Commerce for Environmental Observation and Prediction and acting NOAA administrator, is Dr. Neil Jacobs, an extremely capable and experienced weather modeler who led the successful modeling effort at Panasonic before moving to NOAA. Dr. Jacobs knows the issues and wants to deal with them. Furthermore, there is a relatively new and highly capable head of the NOAA/NWS Environmental Modeling Center (where U.S. operational weather prediction takes place), Dr. Brian Gross.
Dr. Neil Jacobs is now acting Administrator of NOAA
Add to that the fact that the new Presidential Science Adviser, Dr. Kelvin Droegemeier, is an expert in high-resolution numerical weather prediction from the University of Oklahoma.
And consider that the U.S. Congress knows about the problem and has passed two pieces of legislation, the Weather Research and Forecasting Innovation Act of 2017 and the National Integrated Drought Information System Reauthorization Act of 2018, that highlight problems with U.S. weather prediction and provide some needed resources. Another positive: the leaders of the NOAA Earth System Research Lab (ESRL), a group responsible for developing new U.S. models, are now committed to working closely with the NWS operational folks. Five years ago this was not the case.
So we have extremely capable leadership in NOAA that wants to fix the problem and a Congress that wants to help. That is good, but it is not enough.
Now we come to the real problem, and to why I am, for the first time in years, really optimistic.
The key problem with U.S. operational numerical weather prediction has never been resources; it has always been organization: too many groups, with too many resources, working on similar projects in an uncoordinated way. Furthermore, the universities and the Federal government have rarely worked together effectively.
But this may all be changing. NOAA leadership, with support from Congress, is about to set up an entity that will be the central development center of U.S. numerical weather prediction.
This center is called EPIC (Environmental Prediction Innovation Center) and would combine the efforts of both NOAA and the universities (NCAR). Done correctly, EPIC could lead to a much more effective and coordinated approach to developing a new U.S. global modeling capability. A modular, unified national modeling system shared between government, academia, and the private sector.
Will the U.S. FINALLY organize itself properly to regain leadership in global numerical weather prediction? Time will tell. But I am more optimistic today than I have been in years.
Whatever happened to the Panasonic model?
The Europeans have always had the edge over the USA in weather satellites. Just smarter, and they've had the funding and backing of their government. We worry too much about cost rather than safety. Typical USA.
Unknown - that's VERY wrong. The GOES-16/17 satellites are FAR more advanced than the European ones.
Cliff,
What good news! I appreciate your explanation of the situation.
Cliff, I love these insights. Thank u!
One question: in your view, what is the best weather model and app to use for weather in the mountains of Washington State? My priorities are accuracy on precipitation, and then cloud level; I don't like hiking in the rain, and the less cloudy the better.
I saw your old blog post from May 2018, and it talks about Accuweather, but Accuweather does not give weather conditions on mountain peaks and in mountain valleys; it's mostly about cities.