January 27, 2015

Forecast Lessons from the Northeast Snowstorm

The complaints swelled quickly this morning, both on social media and in the press:

National Weather Service forecasters had predicted two to three feet of snow over New York City and adjacent suburbs for Tuesday, but only about 8-10 inches showed up.

The city had been shut down overnight--travel banned on major roadways, mass transportation systems (e.g., subways) closed, schools and businesses closed--and all for a minor snow event!  A few samples from the press illustrate some of the commentary:


And then a National Weather Service forecaster even apologized for a "blown forecast", something that doesn't happen very often.


Fortunately, NY Governor Cuomo had the right attitude!


And you had to expect that some global warming critic would use the forecast trouble to cast doubt on global warming predictions.

So what is the truth about this forecast event?   As I will describe below, although the forecast "bust" was not as bad as it might appear, it did reveal some significant weaknesses in how my profession makes and communicates forecasts, weaknesses that National Weather Service director Louis Uccellini says he recognizes and will attempt to fix.  

The general forecast situation was well understood and skillfully forecast starting on Saturday.   A low center (a midlatitude cyclone) would develop off the SE U.S. and then move northward up the East Coast--a storm commonly called a Nor'easter.  Here is a surface weather map at 4 AM PST this morning, when the storm was near its height.  In such a location, the storm can pull cold air off the continent while swirling in moisture from off the ocean.  The result is moderate to heavy snow to the west and north of the low center, as well as strong winds over the same areas.  Thirty years ago we could not forecast these storms with any skill. That has changed.



Although our models had been suggesting the development of a low center along the coast earlier in the week, it was not until Saturday that most of the models converged on essentially the same solution: a low forming rapidly along the mid-Atlantic coast and deepening into an intense cyclone off Cape Cod.  And that is what happened.

Virtually all of the models indicated that the precipitation (snow) would swing around the low, with a relatively sharp cut-off to the west of the cyclone center (see figure).


As the weekend progressed it was clear that although all the models had a similar idea, the position of the low center and associated precipitation varied.  The European Center (ECMWF) model, which did so well during Superstorm Sandy and is on average the most skillful global model, was taking the low closer to the coast, pushing the heavy snowfall over the NY metropolitan area.  For much of the weekend, the National Weather Service's main high-resolution model, the NAM, did the same thing.  On the other hand, the National Weather Service's recently upgraded global model, the GFS, was taking the center farther offshore and predicted far less snow over NY City.  Here are the 30-h forecasts of the European Center and GFS models (solid lines are sea level pressure) valid 10 AM PST Tuesday...the difference in location is clear.



The Canadian and UKMET office models had solutions similar to the U.S. GFS.   

NWS forecasters, mindful of the general superiority of the European Center model and comfortable with the workhorse NAM model, went with the European Center solution, which implied heavy snowfall over New York.  Their forecast on Monday afternoon was for 24-36 inches over New York City, eastern New Jersey, and into Long Island and Connecticut.  Not shown here are the heavy snows forecast for southeast New England.



The problem for the NWS was that they bet on the wrong horse, and the low followed a track similar to that predicted by the GFS.  With the storm farther offshore, the sharp gradient in snowfall moved eastward, resulting in NY City getting far less snow...roughly 10 inches in total.

Here is the New York Times snow total map showing more details.  New York City got about one-third to one-half of the predicted amount (although 10 inches is quite significant).  New Jersey got far less than predicted.  But eastern Long Island and SE New England were right on target at 2-3 feet.


So there is no way one could call this forecast a major failure.  The NWS predicted a strong cyclone moving up the coast, and they were right.  They predicted heavy snow over eastern Long Island and New England and were correct.  The snow forecast over New York was substantially overblown, but the 9-10 inches that fell is still a significant event.

But the forecast could have been much better and far more useful.  The forecasters failed to communicate the level of forecast uncertainty and did not change the forecast rapidly enough over New York when it became clear the storm was moving farther offshore.  The episode also revealed major deficiencies in how forecasts are made and communicated in the U.S.

Let me explain my logic.

National Weather Service (NWS) forecasters have the benefit of many weather prediction models, and in this case there was significant disagreement among them regarding the low's position as it passed northward east of Long Island.

The European Center (ECMWF) model and the NWS NAM model had a more westward track and lots of snow over New York City.  

In contrast, the NWS GFS model, recently upgraded as new computer resources became available to the NWS, had a more eastward track and far less snow.  It is well known that the GFS is generally far more skillful than the NAM; in fact, many of us in the field believe the NAM should be retired.

The UKMET office model, number two in the world (after ECMWF), was like the GFS.  So was the Canadian global model.

So there were considerable differences among the major models, which suggested a lot of uncertainty about the exact location of the low center and the western edge of the heavy snow.

NWS forecasters also have large ensemble forecast systems, in which many different U.S. models are run with different starting points and model physics (e.g., how convection and clouds work).  These ensemble systems were also indicating SUBSTANTIAL uncertainty in the snow forecasts over New York.  Let me show you.

Here are the snow total forecasts over New York from the NWS SREF (Short-Range Ensemble Forecast system), using forecasts starting at 09 GMT on January 26 (1 AM PST Monday).  These plots show the cumulative snow total predictions.  There is a HUGE spread in the forecasts: some are as high as 35-40 inches, while others are around 5 inches.  The National Weather Service at that time was going for 24-36 inches over NY, a range achieved by only a minority of the forecasts (perhaps 30% of them).


To put it another way, if these forecasts were equally probable (and they probably aren't), there was roughly a 70% chance that the official forecast was too high.  Instead of predicting 2-3 feet, consider if forecasters had said there was a 30% chance of more than 2 feet, a 35% chance of 1-2 feet, and a 35% chance of less than a foot.  No one would be writing critical headlines if they had done so, and decision makers would have gotten far better information.  And the fact that forecasters used terms such as "historic storm" was guaranteed to push the media into a feeding frenzy.
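To make the arithmetic concrete, here is a minimal sketch (in Python) of how equally weighted ensemble members translate into the kind of probability bins described above.  The snow totals in it are invented for illustration; they are not the actual SREF plume values.

```python
# Sketch: turning equally weighted ensemble members into probability bins.
# The totals below are invented placeholders, not actual SREF output.
import numpy as np

# Hypothetical 21-member ensemble of storm-total snowfall for NYC (inches).
totals = np.array([5, 6, 7, 8, 9, 10, 10, 11, 12, 13, 14,
                   15, 17, 19, 21, 24, 27, 30, 33, 36, 40])

# The mean of a boolean array is the fraction of members satisfying it,
# which is the probability if every member is equally likely.
bins = {
    "less than 1 foot": np.mean(totals < 12),
    "1 to 2 feet":      np.mean((totals >= 12) & (totals < 24)),
    "more than 2 feet": np.mean(totals >= 24),
}

for label, p in bins.items():
    print(f"{label}: {100 * p:.0f}% chance")
```

With these made-up members the bins come out near 38%, 33%, and 29%, close to the split suggested above; the point is that the plume diagram already contains everything needed for such a statement.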

There is something else.  By late Monday it was clear that the European Center and NAM forecasts were going wrong, and other U.S. modeling systems, like the newly operational High-Resolution Rapid Refresh (HRRR), indicated that NY would not get the big snow dump.

Want to see it?  Here is the HRRR 15-h total snow accumulation over the Northeast ending 4 AM Tuesday.  Just a wonderful short-term forecast that verified well.  Such predictions were consistent in time and across virtually all modeling systems initialized during Monday evening...the heavy snow predictions should have been dropped sooner.


So what needs to be done now to ensure this failure mode does not happen again?

First, the U.S. needs to enhance its ensemble forecast systems, the systems that facilitate the creation of probabilistic predictions.  If you have, say, 100 equally likely forecasts, and half go for 2 feet of snow, then the probability of 2 feet of snow might be 50% (this is simplistic, but you get the idea).  The National Weather Service has sponsored a number of studies by the National Academy of Sciences, studies that have strongly advised that the NWS enhance its ensemble systems and move vigorously to probabilistic prediction.  Unfortunately, the NWS has not followed this advice: its only "high-resolution" ensemble, the Short-Range Ensemble Forecast system (SREF), is poorly supported and runs at a coarse 16-km grid spacing.  The U.S. needs a large convection-resolving (2-3 km grid spacing) ensemble, with sophisticated statistical post-processing, to give reliable probabilities for snow and other important quantities.
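In code, that basic step is one line: with N equally likely members, the chance of exceeding a threshold is simply the fraction of members at or above it.  A minimal sketch (the member values are again invented):

```python
# Exceedance probability from an ensemble of equally likely forecasts.
import numpy as np

def exceedance_probability(member_totals, threshold):
    """Fraction of equally weighted ensemble members >= threshold (inches)."""
    return np.mean(np.asarray(member_totals, dtype=float) >= threshold)

# The toy case from the text: 100 members, half of them going for 2 feet.
members = np.concatenate([np.full(50, 30.0), np.full(50, 10.0)])
print(exceedance_probability(members, 24.0))  # -> 0.5, i.e., a 50% chance
```

Real post-processing must also correct for ensembles that are biased or under-dispersive, which is why the statistical calibration mentioned above matters as much as raw ensemble size.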

Second, the NWS has to move to a much more probabilistic form of forecast preparation and dissemination, one in which forecast uncertainties are made clear to users.  The computer workstations used by NWS forecasters and NWS websites are not designed to facilitate probabilistic prediction.  This needs to change.
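To illustrate what such a product might look like, here is a hypothetical sketch that turns ensemble percentiles into a worded forecast with an explicit confidence level.  The percentile choices and spread thresholds are my own invented placeholders, not any NWS specification.

```python
# Hypothetical probabilistic product: ensemble percentiles -> worded forecast.
import numpy as np

def worded_forecast(member_totals):
    """Summarize ensemble snow totals (inches) as a public-facing statement."""
    totals = np.asarray(member_totals, dtype=float)
    low, mid, high = np.percentile(totals, [10, 50, 90])
    spread = high - low
    # Invented thresholds: tighter ensembles earn higher stated confidence.
    confidence = "high" if spread < 6 else "medium" if spread < 15 else "low"
    return (f"Most likely: {mid:.0f} inches "
            f"(range {low:.0f}-{high:.0f} inches; confidence: {confidence})")

print(worded_forecast([5, 8, 10, 12, 14, 18, 24, 30, 36, 40]))
# -> Most likely: 16 inches (range 8-36 inches; confidence: low)
```

A low-confidence range like this is exactly the signal New Yorkers never received.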

Superstorm Sandy led to a widespread understanding that NOAA had let its computer resources decline to third-tier status, and thus the problem was addressed.

Perhaps the January 27, 2015 snowstorm issues over New York will lead to a realization that forecast uncertainties need to be communicated to the public and that the NWS has to dedicate sufficient resources to make this happen.





14 comments:

  1. Excellent, thanks for giving us the whole picture. My job (custodian in a NYC school) depends heavily on weather forecasts (snow removal).

  2. Was waiting for this. Good read!

  3. Thanks for the explanation. So as a complement to the upgraded technical abilities that are clearly needed, is there upgraded communication skills training that goes with it? I've always wondered whether there are communication professionals and graphic designers who might find new, innovative, and clearer ways to talk about and present the uncertainties and even the certainties. Of course, we are human beings, a very strange species indeed, as well as Americans in a culture that wants certainty, definitiveness, precision, and black and whiteness in a world that is very much anything but.

  4. I hope we'll get a more risk-based forecast approach soon. Communication of flood risk from FEMA and the USACE has improved by shifting away from talking about the "100-year flood" and towards terms like the "1% annual chance exceedance flood event".

  5. This was a communication failure; the divergence between the models was well stated if people bothered to listen. However, I believe not all of the blame should be borne by the NWS, but by a certain for-profit weather source widely available on cable networks.

  6. The Capital Weather Gang in Washington, DC, not only got this exactly right for DC but also had hints that it wasn't right for New York and Philly. Why didn't people listen to them? They do nowcasting and also express snow forecasts in a format you'd like:
    Boom: high-range forecast
    Expected: forecast snow
    Bust: low-range forecast
    Confidence: low-medium-high
    This is easy for most people to understand. Their forecast for this snow event ended up being the "boom" scenario, but with their Medium confidence level, nobody was mad at them.
    In contrast, the NWS forecast for New York was the "boom" scenario, and what New York got was a "bust" but nobody indicated that there was only Medium confidence in the forecast, so people were inconvenienced and angry.
    As Gordon Ramsay would say, "What a shame." With just a bit more information, people in NYC could have planned according to the storm that was REALLY likely to happen.

  7. Bravo. Bravo. Some of the solutions for improving forecast communication seem so, well, common sense.

  8. Here's the DC forecast analysis by the Capital Weather Gang, and from there, you can look at the rest of their coverage, including several "what went wrong in NYC" posts that echo Cliff's. http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2015/01/27/how-much-snow-fell-in-d-c-and-how-was-the-forecast/

  9. A fine overview, and I don't feel that forecasters owed anyone an apology. It's no different from posting a Tornado Watch and then not having a twister drop out of a cloud. It is far better to be safe than sorry; if you wait until 'IT' happens, there's no time to prepare.

  10. I think splitting the forecast probability into about an even 30% chance for over 2', 1'-2', and under 1' is technically accurate, but it actually does little for a real life forecast. That much uncertainty is a problem in and of itself, and produces a virtually meaningless forecast of an equal chance of between 6"-30" of snow. I think you have to narrow it down further than that, but perhaps acknowledge the uncertainty.

  11. C.P.O.: That attitude, which is shared by many decision makers, is part of the problem. Neglecting to provide uncertainty or providing a false representation of uncertainty because decision makers don't want to deal with it will consistently lead to sub-optimal decisions. Decision makers need to learn how to incorporate the uncertainty into their risk assessments, not stick their heads in the sand. Everyone would love to have a definite answer on what is going to happen and only have to consider one scenario, but that isn't realistic.

    (All opinions are my own and do not necessarily represent the opinion of NOAA, NWS, or NCEP.)

  12. Ask "What's the Distribution of Your Forecast?"

    https://medium.com/message/ask-not-will-it-snow-ask-whats-the-distribution-of-your-forecast-b7131f5783a9

  13. Thanks for all the detail Cliff. Good to understand.

  14. I would love to see confidence attached to forecasts, but I fear those will have to be made available somewhere between the current popular weather media and the technical forecast graphics. Ask someone what a "50% chance of rain" means, and it becomes clear that presenting distributions might just create more anger: "It's so complicated, I never know what's going to happen", and so forth.

    Also, thanks for that note about the NAM. I was worried "workhorse" implied "preferred".

