June 21, 2016

U.S. Numerical Weather Prediction is Falling Further Behind: What is Wrong and How Can It Be Fixed Quickly?

Updated (see addition at the end)

It is disappointing.  The U.S. has the largest meteorological community in the world and led the development of numerical weather prediction for decades.  The National Weather Service, stung by its relatively poor performance on Hurricane Sandy and publicity about inferior computers, was given tens of millions of dollars to purchase a world-class weather prediction system and to support forecast model development.

But the latest forecast statistics reveal an unfortunate truth:  U.S. operational weather prediction, located in NOAA's National Weather Service (NWS), is progressively falling behind the leaders in the field.  Even worse, a private sector firm, using the National Weather Service's own global model, is producing superior forecasts.

Something is very wrong and this blog will analyze why NWS global models are losing the race and what can be done to turn this around.  As I will show, this situation could be greatly improved within a year, but to do so will require leadership, innovation, and a willingness to partner with others in new ways.  I will also highlight a critical NOAA/NWS decision that will be made in the next several weeks, one that will decide the future of US weather forecasting for decades.

The Problem

A number of media reports and several of my blogs have described the fact that U.S. numerical weather prediction (NWP) has fallen behind other nations and is a shadow of what this nation is capable of.  Global NWP is the foundation of all weather forecasts, so it is critical to get this right.  As we will see, it is not that U.S. global NWP is getting less skillful, but that other nations are innovating and pushing ahead faster.

For most of the last few years, U.S. operational global weather prediction, completed at the National Weather Service's Environmental Modeling Center (EMC) of NCEP (National Centers for Environmental Prediction), has been in third place: behind the world leader ECMWF (the European Centre for Medium-Range Weather Forecasts) and the UKMET Office (the Brits).  During the past several months, we have fallen further behind ECMWF and, to add insult to injury, the Canadians (the Canadian Meteorological Center, CMC) have moved ahead of us as well.  US global weather prediction is now in fourth place, with substantial negative implications for our country.  Let me demonstrate this to you.

One measure of forecast skill is anomaly correlation (AC), a measure of how well a forecast matches observations (it ranges up to 1, a perfect forecast).  Below is the AC for the Northern Hemisphere for the day-5 forecast, evaluated in the mid-troposphere (500 hPa, around 18,000 ft).
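To make the metric concrete, here is a minimal sketch of the centered anomaly correlation in Python.  This is an illustration with a made-up toy field, not the operational computation; real verification uses area-weighted statistics over the 500 hPa height field, but the core idea is the same:

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Centered anomaly correlation coefficient (ACC).

    Anomalies are departures from climatology; the ACC measures how
    well the forecast anomaly pattern matches the verifying analysis,
    ranging from -1 to 1 (1 = a perfect forecast).
    """
    fa = forecast - climatology       # forecast anomaly
    va = verification - climatology   # verifying anomaly
    fa = fa - fa.mean()               # remove the mean (centered ACC)
    va = va - va.mean()
    return np.sum(fa * va) / np.sqrt(np.sum(fa**2) * np.sum(va**2))

# Toy example: a forecast that reproduces the anomaly pattern exactly
clim = np.zeros(10)
truth = np.sin(np.linspace(0, np.pi, 10))
print(anomaly_correlation(truth, truth, clim))  # 1.0 for a perfect forecast
```

Note that because the climatology is removed first, the ACC rewards getting the weather *pattern* right, not just being close to the seasonal average.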

The ECMWF is the best (red triangles), with the UKM (yellow) second best.  Black is the US global model (GFS).  Note that the US GFS not only has generally lower skill, but sometimes has serious dropouts, periods of MUCH worse skill.  The legend has summary numbers for the period, showing that the GFS is in fourth place, and the Canadians in third place (light green).  These statistics are from a NOAA/NWS website.

Let's compare this to the situation a year ago. Last June's statistics for the 5-day, Northern Hemisphere forecasts are shown below.  We were ahead of the Canadians then.  Look closely and you will see that the difference between the US and ECMWF was smaller.  I could show you many more plots like this that demonstrate that the US has fallen behind the leaders in global weather modeling.

During the past few months both the US and ECMWF upgraded their global models, but clearly the ECMWF upgrade was more effective, with ECMWF pulling further ahead.

A more detailed comparison (from WeatherBell Analytics) of the US and ECMWF performance for 2016 is shown below (still the 5-day forecast at 500 hPa) using the same verification measure (anomaly correlation).

ECMWF (blue color) is better nearly every day.   Importantly, the ECMWF forecast is much more consistent, without the frequent (and substantial) dropouts of the US GFS.  The U.S. (red colors) frequently declines to 0.8 or below, indicating periods of large declines in skill.  These are serious failure periods.

The bottom line is that Europeans and Canadians are pulling ahead of the U.S. National Weather Service in global weather prediction. I have a LOT more statistics to back this up if anyone has any doubts.

But it is worse than that.   A private sector firm, Panasonic, has gone into the global weather prediction business using the US global model (GFS) as a starting point.   Panasonic scientists have worked on fixing some of the obvious weaknesses in the U.S. modeling system and report they have dramatically improved the forecasts over National Weather Service performance (GFS model).  They claim that their forecasts are not only better than the official US GFS model, but nearly equal to that of the vaunted ECMWF.

I have talked to the chief scientist at Panasonic, Neil Jacobs, and he has shared some of the verification statistics, which look good.  I told him the only way to prove that they have the world's best global model would be to share the forecasts and let a neutral third party verify them.  He agreed to do so, including sharing the forecasts with the University of Washington.   I doubt he would do that if their forecasts weren't as skillful as they claim.

Even worse?  The US Air Force has abandoned the US GFS model, saying that it was inferior to the UKMET Office model, to which the Air Force will switch.

So the National Weather Service's global model is falling behind international leaders AND a private sector firm starting with the same NWS model.  Even the US military is abandoning it.   Can it get any worse?

It can.  The U.S. Congress gave the National Weather Service tens of millions of dollars for superb new computers, two CRAY XC-40s: one used for operations, and the other for development and backup.   Unfortunately, the operational computer is only being lightly used, with its vast capacity not being applied effectively to make critically needed improvements in U.S. NWP.

Key Deficiencies in U.S. Global Modeling

So why is US operational global weather prediction falling behind the leaders? Some of the problems with U.S. global weather predictions are well known and the essential "fixes" effected by Panasonic are no secret (and Panasonic should be commended for letting the community know what they are doing).  To list only a few:

1.   The National Weather Service GFS has starkly inferior physics, meaning the descriptions of essential physical processes in the atmosphere.  For example, the GFS model uses a primitive, two-decade-old microphysics scheme (the software describing how clouds and precipitation work).  As a result, there are serious errors in precipitation amounts and clouds, which in turn influence the evolution of the forecasts.

They are also using a very old and primitive cumulus parameterization, which describes the impacts of cumulus clouds and thunderstorms (called convection).  This results in poor prediction of convection, including critical features in the tropics (like the Madden-Julian Oscillation, MJO), which in turn undermines extended-range forecasts.

A plot of precipitation rate versus time and longitude for a portion of the western tropical Pacific (5N to 5S) for a two week period in April to early May 2016.  Above the line are observations, and below the line is the US GFS model.  Note how the character of the precipitation radically changes after the switch to the model.  The model is doing a very poor job forecasting the character, amplitude, and movement of convection in the tropics.  The ECMWF model is far better because they use a better cumulus parameterization (image courtesy of Michael Ventrice, the Weather Company, and University of Albany).

Importantly, the National Weather Service has few people working on model physics and no strategic plan for how to improve it.  Other centers (like ECMWF) have placed great emphasis on physics and devoted substantial scientific resources to it.  Furthermore, the NWS has not entrained the expertise of the large US research community to help.

2.   The National Weather Service uses lower model resolution than its competitors.  The high-resolution ECMWF model has a grid spacing of 9 km compared to the 13 km used by the US GFS. More importantly, the ECMWF global ensemble system has TWICE the resolution of the American system (18 km grid spacing for ECMWF, 35 km for the US GFS).  Ensemble systems play a critical role in data assimilation and probabilistic prediction.  Considering the new computers acquired by the National Weather Service, this resolution gap is inexcusable.

3.  The ECMWF, UKMET Office, and Panasonic have far superior quality control of observations.  Quality control reduces the amount of bad data used in the forecast processes.
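To illustrate what quality control means in practice, one standard QC step is the "background check": an observation is rejected when it departs too far from the model's short-term forecast, relative to the expected errors of both.  The sketch below is a simplified, hypothetical version of that idea (the function name and numbers are illustrative, not any center's actual QC code):

```python
import numpy as np

def background_check(obs, background, obs_error, bg_error, threshold=3.0):
    """Simple gross-error (background) check, a common QC step.

    An observation is rejected when its departure from the model
    background exceeds `threshold` times the combined expected error
    of the observation and the background.
    """
    departure = np.abs(obs - background)
    tolerance = threshold * np.sqrt(obs_error**2 + bg_error**2)
    return departure <= tolerance  # True = accept the observation

# Four temperature obs (K); the third is obviously bad
obs = np.array([280.1, 279.8, 295.0, 280.3])
bg = np.full(4, 280.0)
mask = background_check(obs, bg, obs_error=0.5, bg_error=0.5)
print(mask)  # [ True  True False  True]
```

Real QC systems layer many such checks (climatological limits, buddy checks against nearby observations, blacklists of known-bad stations), but letting even a few bad observations through can corrupt the initial state over a wide area.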

4.  ECMWF, UKMET, and the Canadians use a superior data assimilation system called 4DVAR.  Data assimilation uses observations and the model to produce the best possible initial state (the initialization) for the forecast.  Better initial states produce better forecasts.   ECMWF has been using 4DVAR since 1997.
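In its standard textbook form, 4DVAR finds the initial state $\mathbf{x}_0$ that minimizes a cost function penalizing both the departure from the background (prior) state $\mathbf{x}_b$ and the misfit to all observations $\mathbf{y}_i$ distributed through the assimilation window:

$$
J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\sum_{i=0}^{N}\big[\mathbf{y}_i - H_i\!\big(M_{0\to i}(\mathbf{x}_0)\big)\big]^{\mathrm T}\mathbf{R}_i^{-1}\big[\mathbf{y}_i - H_i\!\big(M_{0\to i}(\mathbf{x}_0)\big)\big]
$$

Here $\mathbf{B}$ and $\mathbf{R}_i$ are the background and observation error covariances, $H_i$ maps model state to observed quantities, and $M_{0\to i}$ is the forecast model integrated to observation time $i$.  The "4D" advantage is that last term: because the model itself propagates information in time, observations anywhere in the window constrain the initial state dynamically, rather than being treated as if they were all valid at one instant.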

5.  The other leading weather modeling centers use a greater range and volume of observations in their data assimilation systems.  ECMWF, for example, assimilates a far greater range of satellite observations than the US, and Panasonic has great volumes of aircraft data (called TAMDAR) that the National Weather Service has been unwilling to purchase.

6.  The other major weather forecasting centers have detailed strategic plans and visions of their future directions.   The National Weather Service has no real strategic plan for global weather prediction.  Or any weather prediction.  Recently, they began a process to acquire their next-generation global model (called NGGPS, the Next Generation Global Prediction System), something I will talk more about below.

TAMDAR data on short-haul aircraft, collected by Panasonic, can enhance the quality of forecasts.

7.  Other major centers have entrained the help of the research community in an effective way.  The National Weather Service, until very recently, was isolated and had a go-it-alone attitude towards global weather prediction.  Even today, they have no rational, organized way to encourage and reap the benefits of academic community research.  Trust me, this is something I know about.

8. Until last year, the National Weather Service had starkly inferior computing resources compared to ECMWF, UKMET, and other major centers, which provided an excuse for NWS prediction being second rate.  Today, the National Weather Service has first-class computing, and Congress wants to keep it that way.  So that excuse is gone.  The National Weather Service has the computing power to push forward rapidly and innovate, if it has the will to do so.

The Big Decision:  The New NWS Global Model--MPAS or FV3?

The National Weather Service is about to make a critical decision regarding the replacement of its out-of-date GFS global weather prediction model.  And this decision is a huge one, deciding the fate of US global weather prediction for the next several decades.

As noted above, this decision is part of a process called NGGPS, an attempt to rationally decide on the guts of the next US global model, something called its dynamical core.  After testing a number of candidates, the choice is down to two.

The first is the MPAS model, developed by the National Center for Atmospheric Research, a consortium of US universities involved in atmospheric research.  The second is the FV-3 model developed by the NOAA/NWS GFDL laboratory.   As I have described in a previous blog, the clear choice is MPAS for many reasons.

MPAS uses an innovative geometry (hexagonal grid) that solves age-old model problems at the poles, while FV-3 uses a more traditional grid geometry.  MPAS uses a superior grid structure (the "C" grid) that will produce far better high-resolution predictions than the problematic "D" grid of FV-3.  And moving to high resolution is where global prediction is going.

MPAS allows local refinement of resolution without adding additional "nested grids", as shown by the figure below.  And MPAS' superior numerics offer better inherent resolution for a particular grid spacing, so one can run with coarser grids than FV-3 and secure equally good results (which reduces computer demands).
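To see why grid staggering matters, here is a toy Python sketch of divergence on an Arakawa C grid (array shapes and values are purely illustrative, taken from neither model).  On the C grid, winds live on cell faces rather than cell centers, so divergence (the quantity that drives gravity waves and, ultimately, resolved convection) is a compact difference of the fluxes through a single cell's own faces, which is what gives C-grid models their good behavior at the smallest resolved scales:

```python
import numpy as np

def divergence_c_grid(u, v, dx, dy):
    """Horizontal divergence on an Arakawa C grid.

    u lives on east-west cell faces (shape ny x nx+1) and v on
    north-south faces (shape ny+1 x nx), so the divergence at each
    of the ny x nx cell centers is a compact, one-cell difference
    of the fluxes through that cell's faces.
    """
    dudx = (u[:, 1:] - u[:, :-1]) / dx  # flux difference across e-w faces
    dvdy = (v[1:, :] - v[:-1, :]) / dy  # flux difference across n-s faces
    return dudx + dvdy

ny, nx, dx = 3, 4, 1.0
u = np.tile(np.arange(nx + 1, dtype=float), (ny, 1))  # u increases eastward
v = np.zeros((ny + 1, nx))                            # no meridional flow
print(divergence_c_grid(u, v, dx, dx))  # uniform divergence of 1.0 everywhere
```

On an unstaggered (or D-grid) arrangement the equivalent difference spans a wider stencil, which damps the shortest waves, exactly the scales that matter most as global models push toward convection-allowing resolution.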

But there is something that goes beyond grids and model numerics.  Something even more important.  By picking MPAS, the National Weather Service will combine efforts with the huge US atmospheric sciences research community, with that community's model innovations (including physics and data assimilation) flowing into the National Weather Service.   The isolation of NWS global prediction efforts would end.  

But it is better than that.  NWS research dollars could then help support global model research efforts that benefit both the operational and research communities.  Other entities, such as the National Science Foundation, would be able to help support research and development as well, which would, in turn, improve operational skill; and a resurgent US global model would, hopefully, bring the Air Force back into the fold.

But it is even better than that.  A regional version of MPAS can be created and eventually replace the current regional model favored by the academic community, WRF, which was also developed at NCAR.  So there is the potential for a national UNIFIED modeling system that could concentrate US weather modeling efforts, producing even more rapid advancement.
FV-3 grid

In contrast, the less innovative FV-3 model was developed by a small group in NOAA/GFDL with little experience in outreach and interaction with the university/research community.  

You would think the global model decision is obviously in favor of MPAS, but there are powerful voices inside NOAA that are pushing for an in-house solution.

The final decision on the future NWS global model will be made by Dr. Louis Uccellini, head of the National Weather Service.  It will be one of the most important decisions he makes during his tenure.  One choice, MPAS, will lead to a creative engagement with the US weather research community and the potential for the US to move rapidly into a leadership position in global weather forecasting.  The other, FV-3, will continue and deepen National Weather Service isolation from the US academic community and continued mediocrity in global weather prediction.

In the meantime....

Even if MPAS is selected as the new U.S. global prediction model, it will take several years before the complete system is ready to go operational.  As demonstrated by Panasonic, there are steps the National Weather Service can take during the next six months to rapidly improve US global weather prediction. If I were the US weather prediction "czar", this is what I would do:

1.  Start using the extraordinary capabilities of the new NOAA/NWS operational computers.

Increase the resolution of the US global ensemble system to 18 km (like ECMWF), increase the number of members to 50-75, and add physics diversity using stochastic physics.   This will greatly improve US data assimilation and probabilistic prediction.

By increasing the resolution and quality of the global ensemble, the NWS can drop the redundant North America-only SREF (Short-Range Ensemble Forecast system), releasing more computer power for useful work.
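To illustrate what "physics diversity using stochastic physics" means, here is a hypothetical toy sketch in the spirit of stochastically perturbed physics tendencies (SPPT): each ensemble member scales its parameterized physics tendency by a random factor, so the ensemble samples the uncertainty in the physics instead of every member repeating the same systematic error.  The function name, numbers, and scalar perturbation (real SPPT uses smoothly varying space-time patterns) are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sppt_step(state, tendency, dt, sigma=0.3, n_members=20):
    """One SPPT-style ensemble step (toy version).

    Each member advances the state using the physics tendency scaled
    by (1 + r), where r is a random perturbation, producing ensemble
    spread that reflects physics uncertainty.
    """
    members = []
    for _ in range(n_members):
        r = rng.normal(0.0, sigma)  # toy: one scalar perturbation per member
        members.append(state + (1.0 + r) * tendency * dt)
    return np.array(members)

# A single temperature value warmed by a parameterized tendency of 0.5 K/step
ens = sppt_step(state=np.array([280.0]), tendency=np.array([0.5]), dt=1.0)
print(ens.mean(), ens.std())  # mean near 280.5; spread scales with sigma
```

The resulting spread is what makes the ensemble's probabilities meaningful: without some form of physics perturbation, members stay unrealistically similar and the system is overconfident.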

2.  Fix the obvious physics problems.

Update the model microphysics (moist physics) parameterization to something modern, like the well-regarded Thompson scheme used in WRF.  Replace the old SAS convective scheme as well.

3.   Improve quality control.   

Follow the lead of Panasonic and upgrade the NCEP QC system.

4.  Work with the rest of the atmospheric science community (academia, private sector) to develop a detailed strategic plan for US numerical weather prediction and follow it.

5.  Rework the structure and personnel of EMC, NCEP and NOAA labs to build coherent teams to work on key model issues (such as physics).

Final Comments

Numerical weather prediction is one of the most complex activities done by our species, requiring billions of dollars of hardware, understanding and modeling of physical processes from the microscale to the planetary scale, complex computer science issues, and much more.    World leaders in numerical weather prediction understand this challenge and know that it requires organization, planning, coherence, a long-term view, and innovation.

For too long, the National Weather Service has developed its models in a disorganized, ad hoc way, in isolation from the US research community.   They have learned the hard way that one cannot do state-of-the-art weather prediction development and operations that way.

NOAA and the NWS must change the way they do global modeling if they are to provide the nation and the world with the best global weather prediction.  The opportunity and resources are now in place.  The question is whether NOAA/NWS leadership will take the right path.

Important Addendum:  June 22

I was disappointed by a NOAA presentation this morning regarding testing between the two global model finalists: the NOAA/GFDL FV3 and the NCAR MPAS.   I will blog further about this, but a few major points:

1.  NCAR has pulled out because they feel the testing is inappropriate, and I have to agree.
2. All test models had to use the old GFS (current model) physics, which are completely inappropriate at high resolution.  In fact, GFS physics doesn't work well at any resolution.  It's like testing new racing cars on a muddy road--you can't do it.
3. The future of global prediction is at convection-allowing resolution (4 km or less grid spacing).   But these resolutions were hardly tested (48 out of the 50 tests were at 13 km grid spacing or more).
4.  Some of the results were clearly bogus, like the radically poor results of a 13-km forecast run and a hurricane simulation that had rain in the eye of the MPAS hurricane.  Something was clearly wrong with the tests.
5.  The testing had no vision of testing a configuration that might be used operationally in ten years (e.g., convection allowing over the globe).  It was all about testing a configuration nearly identical to the current GFS.


  1. .. Like the "hexagonal grid" patterning for basic parameterization, quite a bit.

  2. Sometimes the natural world gives us a "hint"

  3. Cliff, I wish you had been at the Ensemble Users Workshop last week when we discussed the future configuration of the GEFS. The next GEFS will include stochastic physics (I am testing it myself). We are also currently testing resolution and membership increases (though neither are as aggressive as you would like: TL670 and/or 10-20 additional members). Also, you mention improving DA here, but the EnKF component already uses 80 members and uses stochastic physics (which I'm sure you know, being on the UMAC), so I'm not sure how those are related.

    As for the model physics, I know there are several things being investigated right now that will hopefully go into production over the next couple years, including a SHOC parameterization scheme and improved microphysics. You are correct that there is a long way to go here.

    I know your point here is more the culture and policies (which are above my pay grade), but I wanted to chime in with some of the technical stuff.

  4. The US EPA is working with MPAS to use it as the meteorological driver for its next-generation air quality model. It certainly would be nice to have the NWS as a partner in the development of MPAS!

  5. Cliff Mass to run the NWS! Except that will never happen. Because: politics.

  6. I know this is irrelevant to the above discussion, but yay! Seattle managed to get over half an inch of rain last night! That's quite an accomplishment, given the recent paucity of precip. Keep it comin'!

  7. Cliff,

    I just look at our United States Congress for leadership.

    Then I puke.

8. Well...Maybe we should just contract with ECMWF. While we're at it, hire the French to provide us with nuclear power. Both seem to have it figured out. God Bless America.

  9. You should submit this post as an opinion piece to major news media, like the New York Times. It's likely that very few citizens realize the importance of this issue.

10. Cliff,
    One comment I'd like to add is, in addition to improvements in model physics, the forward model used by GSI, the Community Radiative Transfer Model, does a poor job of accurately computing radiances in cloudy and precipitating scenes. This, in turn, results in serious deficiencies in satellite data assimilation. Combined with excessive data thinning, poor QC, and generally slow uptake of new sensors, the observationally driven analyses and forecasts aren't up to snuff compared to competitors. To top it off, funding for satellite data assimilation research at NOAA is drying up rapidly with the end of the SCITECH-II contract, and internal organizational changes.

  11. Thanks Cliff.
    My recollection is that several years ago the national political party hostile to science wanted to 'contract out' at least part of the weather forecasting process. Data gathering, modeling, forecasting, computer maintenance ... Don't recall. I'm generally unhappy with this as the govt needs to retain critical functions and knowledge to make far reaching important decisions.

    My guess is that we've already gone over that waterfall. Is there a practical way to contract with say Panasonic to provide the superior service AND adjust the science and models. AND/OR It seems a performance comparison between the models without preconception is the way to go ... Like fly offs at early stages of future military aircraft development..
    Dude you are one of our heroes. Best.
    Bob and Jeri

12. The canaries in the GFS coal mine have been dropping like flies to warn of this problem for quite some time. Now that it's here, what are we to do but outsource. If this does come about with Panasonic, it will only be the beginning.

  13. @ Walter Kolczynski ... parameterization
    To paraphrase Von Neumann: With four parameters I can fit an elephant, and with five I can make him wiggle his beak [trunk in the original, oddity intended here]. Many suspect Von Neumann was cautioning against using too many parameters. Possibly the worst aspect of using too many parameters may be the increasing difficulty of tracing them back to any of the underlying science.

  14. Cliff, could you post some of the cloud images you mentioned in today's KPLU show?


