March 07, 2013

The D.C. Snowstorm Forecast Failure

It was going to be one of the largest snowstorms to hit Washington, D.C. in years.  The event was termed Snowquester, in honor of the latest D.C. budgetary failure.

Official National Weather Service (NWS) forecasts, amplified by media outlets, were calling for as much as 8 inches in the D.C. Metro area, with greater amounts to the west.  Government offices and schools were closed, hundreds of flights were cancelled, and the region hunkered down for a once-in-a-decade snow event.

But the big storm never came, bringing substantial embarrassment to my profession.  Here are the snowfall totals.  Virtually nothing in the city, a few inches in the western and southern suburbs, trending to zero east of the Beltway.  Perhaps 6-9 inches in the Appalachian foothills to the west.


The cost of closing down D.C. and its suburbs was huge, certainly in the tens of millions of dollars.

Why was this storm so poorly forecast? 
Could we have done better?
What lessons could be derived from this failure?

I will try to answer these questions, and the nature of the main failure mode may be something you didn't expect.

 Let me begin by saying this was a difficult forecast-- the celebrated Superstorm Sandy prediction was a walk in the park in comparison.  There are few more problematic meteorological challenges than forecasting snow under marginal temperature conditions.

First, you have to get the AMOUNT of precipitation right, something my profession is not good at.  A rule of thumb is that you multiply the amount of liquid precipitation by ten to get the amount of snow (although that ratio can vary as well).  So if your storm total is off by 0.3 inches of liquid precipitation, you could have an error of 3 inches of snow.  But an error in rain total of 0.3 inches would hardly be noticed.
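To make that arithmetic concrete, here is a minimal sketch in Python.  The liquid amounts are invented, and the 10:1 ratio is just the rule of thumb above; the point is simply how a modest liquid-precipitation error gets amplified tenfold in the snow forecast:

    # Illustrative only: how a small liquid-precipitation error becomes a
    # large snow error under a nominal 10:1 snow-to-liquid ratio.
    def snow_from_liquid(liquid_in, ratio=10.0):
        """Convert liquid-equivalent precipitation (inches) to snow (inches)."""
        return liquid_in * ratio

    forecast_liquid = 0.8   # hypothetical forecast liquid precip (inches)
    observed_liquid = 0.5   # hypothetical observed liquid precip (inches)

    snow_error = snow_from_liquid(forecast_liquid) - snow_from_liquid(observed_liquid)
    print("Liquid error: %.1f in -> snow error: %.1f in"
          % (forecast_liquid - observed_liquid, snow_error))
    # Liquid error: 0.3 in -> snow error: 3.0 in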

But it is worse than that.  Under marginal conditions, the intensity of the precipitation can decide whether you get rain or snow at the surface.  Heavy precipitation means there is a lot of snow aloft to melt as it descends into the warmer air below.  The melting cools the air, allowing the freezing level (and the snow level, generally about 1000 ft below the freezing level) to descend.  So messing up the intensity forecast can mess up the forecast of whether you will get rain or snow!
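Here is a deliberately crude sketch of that rain/snow logic.  The coefficient that lowers the freezing level with precipitation rate is made up purely for illustration (a real forecast uses full model soundings), but it shows how precipitation intensity alone can flip the surface precipitation type:

    # Toy illustration: heavier precipitation -> more melting snow aloft ->
    # more cooling -> the freezing level (and snow level) descend.
    def snow_level_ft(freezing_level_ft, precip_rate_in_per_hr,
                      melt_lowering_ft_per_in_hr=800.0):   # invented coefficient
        lowered_freezing = (freezing_level_ft
                            - melt_lowering_ft_per_in_hr * precip_rate_in_per_hr)
        # Snow typically survives roughly 1000 ft below the freezing level.
        return max(lowered_freezing - 1000.0, 0.0)

    station_elev_ft = 100.0   # near sea level, like much of D.C.
    for rate in (0.02, 0.10, 0.30):   # light, moderate, heavy precip (in/hr)
        level = snow_level_ft(freezing_level_ft=1200.0, precip_rate_in_per_hr=rate)
        ptype = "snow" if level <= station_elev_ft else "rain"
        print("rate %.2f in/hr -> snow level %4.0f ft -> %s" % (rate, level, ptype))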

Snow depth analysis from the NWS, Thursday

And then there is the impact of evaporation, which can also cool the air. To get that right you need to forecast the humidity structure of the atmosphere correctly and how that will evolve in time.

Still think this is easy?  You also have to predict the temperatures of the air approaching the region through its depth, predict the evolution of the ground temperature (so you know how much will accumulate), and account for the effects of solar radiation during the day.  Not easy.


So let's understand some of the failure modes.   The U.S. NAM model, the main U.S. high resolution model, produced too much precipitation and so did the U.S. global GFS (but to a lesser extent).   Guess what modeling system did far better than either?  Yes, the European Center (ECMWF) model did a superior job. Importantly, it realistically predicted less precipitation than the U.S. models.   The graphic of storm total precipitation below, provided courtesy of WeatherBell, Inc., shows DC getting about 1 inch in the ECMWF model and 1.7 inches in the U.S. GFS model.  The U.S. NAM model produced even more.


Both the EC and U.S. models were bringing in above-freezing air aloft. The 24-h EC model temperature forecast at 850 hPa (around 5000 ft), valid at 1200 UTC (7 AM DC time) on Wednesday (see below), shows above-freezing air entering the southern Chesapeake and moving towards DC (the tan to light green transition is 0°C).
The 850 hPa temperatures from the NAM model showed a similar pattern...something that should have worried NWS forecasters.  Only very heavy precipitation intensity could overwhelm such a flux of warm air from off the ocean, particularly in March when the sun is getting stronger.


As I have mentioned before in this blog, an increasingly important tool for forecasters is ensemble prediction, in which we run our models many times to get a handle on forecast uncertainties.   The U.S. ensemble systems were screaming that the forecast uncertainties were very large.  For example, this figure shows the spread of the 24-h snowfall forecasts for the ensembles started at 4 PM EST on 5 March.  The yellow colors show that the uncertainties were 5-10 inches!  And the solid lines show the mean of the ensemble forecasts:  there was a very large gradient of predicted snowfall just east of D.C.
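For readers unfamiliar with how such spread is computed, here is a minimal sketch with entirely fabricated member values (not actual ensemble output): each member is one model run, the mean is the single "best guess," and the spread among members is one simple measure of the uncertainty:

    import numpy as np

    # Ten fabricated ensemble-member snowfall forecasts (inches) for one point.
    members_snow_in = np.array([0.0, 1.0, 2.0, 3.5, 5.0, 6.5, 8.0, 9.5, 10.5, 12.0])

    mean_snow = members_snow_in.mean()
    spread    = members_snow_in.std(ddof=1)               # standard deviation
    rng       = members_snow_in.max() - members_snow_in.min()

    print("ensemble mean %.1f in, spread %.1f in, range %.1f in"
          % (mean_snow, spread, rng))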
Also worrying was that the NWS statistical postprocessing system (called MOS, for Model Output Statistics) applied to the GFS model for Washington National Airport was not indicating much of a snow event (see below), with a low temperature of 35, a high of 41, and rain most of the day. (HR is in UTC; TMP is temperature; POS is probability of snow.)
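For those unfamiliar with MOS, the idea is simple: a statistical relationship, trained on past cases, corrects raw model output toward what was actually observed at a station.  Here is a toy sketch with invented numbers (operational MOS uses many predictors and years of data, not a one-variable regression):

    import numpy as np

    # Invented training data: past raw model 2-m temperatures vs. what was
    # actually observed at the station.
    model_temp_f = np.array([30.0, 33.0, 35.0, 38.0, 41.0, 44.0])
    obs_temp_f   = np.array([33.0, 35.5, 37.0, 40.0, 42.5, 45.0])

    slope, intercept = np.polyfit(model_temp_f, obs_temp_f, 1)   # fit correction

    raw_forecast = 32.0                                   # today's raw model value
    mos_forecast = slope * raw_forecast + intercept
    print("raw model %.1f F -> MOS-corrected %.1f F" % (raw_forecast, mos_forecast))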


The screaming message in all of this (and I am leaving a lot out) was that there was HUGE uncertainty in this forecast, uncertainty that was not communicated to the public by my profession or the media.  Would decision makers have sent government workers home or cancelled schools if they knew that the chances of a big snow were marginal?   I don't know...but they deserved to have that information, and I believe they could have made better decisions with it.

I don't want to sound like a broken record, but with a little investment we can fix this.  The U.S. global model should not only equal, but surpass, the European Center's.  We need to run state-of-the-art high-resolution ensemble predictions over the U.S. at 2-4 km resolution to give us reliable uncertainty estimates and probabilistic forecasts. The NWS cannot do any of this with its current, inadequate computer resources (see my previous blogs documenting this).  NOAA management has given weather prediction computation low priority...this needs to change, and it will only change if the American people demand it.  Computer resources are, of course, only the first step.  With better model guidance, my profession must move to providing the public comprehensive probabilistic forecasts for all weather parameters (e.g., temperature, wind, snow, etc.). 
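As a simple illustration of the kind of probabilistic product I mean, here is a sketch using the same fabricated ensemble members as above: the probability of exceeding a snowfall threshold is just the fraction of members above it (an operational system would calibrate these probabilities rather than use raw member counts):

    import numpy as np

    members_snow_in = np.array([0.0, 1.0, 2.0, 3.5, 5.0, 6.5, 8.0, 9.5, 10.5, 12.0])

    for threshold in (1.0, 4.0, 8.0):
        prob = (members_snow_in >= threshold).mean()      # fraction of members
        print("P(snow >= %.0f in) = %.0f%%" % (threshold, 100 * prob))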

Enhanced computer resources at the National Weather Service could have paid for themselves in this one storm.  Think about that.  The national media is thinking about it:  here are two segments from the NBC Nightly News in which the reporter notes both the forecast failure and the contribution of inferior NWS computers.   When NBC Nightly News is telling you to get a new computer, you know you have a problem...

A special segment on Friday night:

http://www.nbcnews.com/video/nightly-news/51108647/#51108647

and this video from two days ago:


(embedded NBC News video)

13 comments:

  1. I'm not sure I follow your prescription that "with a little investment we can fix this."

    You outline a failure of communication, then suggest better computers are the answer?

    True, better computers could reduce the uncertainty, but the failure here was not that uncertainty existed, but that, as you admitted, neither meteorologists nor the media communicated that uncertainty to the general public.

    Both industries kept assuring us this *was* happening.

    Better computers won't improve communication methods, and uncertainty will always exist. I'd suggest looking more at the way both industries handled the knowledge they had for improvements before calling for additional resources that seem just as likely to be poorly utilized.

  2. Epic forecast failure, fully agreed! But the DC area still has something we don't - an active, frequently updated weather fanatics' site. The Capital Weather Gang's (http://www.washingtonpost.com/blogs/capital-weather-gang) forecast also failed, BUT, unlike around here where you might wait hours for an update, they had real-time updating that captured the storm underperforming and kept adjusting the forecasts. That frequency of reporting and the latest data (even if the data and supporting technology need to be better) carries an enormous value. As of yet, we have nothing comparable out here (which is sorely missed during our own on-the-edge snow events). As much as I love this blog, you need a team to provide the above. :(

  3. Reading C.M. Reese's comments: again, the need for a team and real-time forecasting.

  4. Cliff's implicit claim is that bigger computers, from running higher precision calculations, would have smaller uncertainties and therefore be closer to the right answer.

    It would help to either explicitly show the snow prediction and its uncertainty from the superior European calculations or state that claiming an improvement is just a guess, albeit a reasonable one.

    The call for public pronouncement of uncertainty is a tough issue. Public understanding of, and decision-making based on, uncertainties is not good - most emergency managers suggest telling people what to do rather than asking people to weigh uncertainty.


  5. I think this is also partly to be blamed on the NWS warning philosophy and the deterministic nature of its warning products. At this point, their rules do not really allow a Winter Storm Warning to be issued saying 0 to 10 inches are possible during highly uncertain events. They have to pick warning or no-warning snow amounts and go with it.

    They are also supposed to warn the public of weather that is a threat to life and property, so there's a tendency to err on the side of caution, and over-warn, as the organization probably should. However, I think the NWS often forgets that conveying uncertainty is as important as, or more important than, just relaying the message of an impending snowstorm or any other threat. The public loses confidence in the forecast and warnings when false alarms occur and stops taking warnings seriously, rendering the warnings useless. I think the confidence lost from false alarms would not be nearly as large if this uncertainty were conveyed. There are ways of doing this through their forecast discussions, weather story graphics and YouTube briefings, but I think this information can sometimes either be lost in all the information available or it's simply not conveyed in their warnings, weather stories and YouTube videos as the office chooses to focus on the impending doom and getting the warning out. Admittedly, how this uncertainty is handled is somewhat office-to-office dependent, and probably, for that matter, forecaster-to-forecaster dependent.

  6. Cliff,

    I appreciate your analysis and the posts you've provided in recent months on this topic. I have been sharing them throughout and having discussions and debate with colleagues of mine in the operational meteorology field about it too. However, I have to disagree with you on the Euro being the best performer during this storm.

    In the timeframe that the model is supposed to perform best, it completely blanked New England from this storm. It may have done an ok job with QPF amounts in and around DC, but it did an absolutely abysmal job further up the coast. It also did a horrible job with snow totals in DC proper. The Euro algorithm produced 4-8"+ in DC-Baltimore, with a bullseye west of DC (which it did actually get close to accurate). The Euro IS a superior model to the GFS, and I can't agree with you more that the field needs an infusion of research and funding to get better at this. But sometimes, I think we overstate HOW good the Euro is. Weather modeling in general needs improving...not just the US version of it (which is an argument for why we can and should be doing it better).

    No model won this storm. No meteorologist won either. This was an impossible storm from the get-go...and the end result was one of the worst across-the-board snow forecasts in the Northeast in recent history (perhaps since March 2001?).

  7. I'd like to think that a big forecast failure that directly impacts DC would bring the funding issue to the forefront with our government.

    What I expect to happen, though, is we'll hear bombastic rhetoric about how we've wasted money on a weather service that can't even predict a snow storm despite spending millions of dollars, accompanied by calls to gut what current spending exists.

  8. Note how the same system was mishandled in Boston, where instead of a few inches they got up to a few feet. An amazing quasi-stationary "spiral band" set up with the low centered way offshore.

  9. Hi Cliff,
    I've just heard today about NASA's GEOS-5 model and I don't recall you mentioning it.
    The output is given with a 0.25 degree resolution and updated twice a day.
    They say it's experimental, but I was wondering how accurate the model is.
    Have you had a look at it? What do you think?

    Forecasts are available here :
    http://gmao.gsfc.nasa.gov/products/

    Cheers

    Victor

  10. Cliff, FYI Ricky Rood referenced your blog today in a post at Capital Weather Gang on the need for better US forecasting:

    http://www.washingtonpost.com/blogs/capital-weather-gang/post/to-be-the-best-in-weather-forecasting-why-europe-is-beating-the-us/2013/03/08/429bfcd0-8806-11e2-9d71-f0feafdd1394_blog.html

  11. Saw you on NBC Nightly News tonight Cliff! Good going! Hope you can get some more funding to help fix the problem!

  12. You are dead on with the lack of communication. The uncertainty should have been communicated better.

    I think we are all in agreement that the models were not great. But let's forget the models. If forecasters would have just followed the system from the midwest they would have realized that it wasn't a particularly cold system. By time the system reached the area and some warmer air was brought in, the temps really weren't close to 32. Let's get the paper and pencil out and get back to doing our own hand analysis to get a good feel for weather.

  13. Thanks, is there any site where some of the uncertainty is published? One that can be interpreted by a novice?


