Devised several decades ago, the idea behind this graphic was sound. Forecast uncertainty for hurricane tracks increases with time and that information needs to be communicated. The width of the cone in these graphics is based on historical track errors: two-thirds of historical official forecast errors over a 5-year sample fall within the width of the cone.
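To make that definition concrete, here is a minimal sketch, in Python with made-up error numbers, of how a cone built this way could be derived: the two-thirds criterion amounts to taking roughly the 67th percentile of historical track errors at each forecast lead time. The actual NHC procedure differs in its details; this is only an illustration.

```python
# Minimal sketch of constructing an NHC-style cone: the radius at each
# forecast lead time is set so that roughly two-thirds of historical track
# errors at that lead time fall inside it. Error values are made up.
import numpy as np

# Hypothetical historical track errors (nautical miles) from a 5-year sample,
# keyed by forecast lead time in hours.
historical_errors_nm = {
    12:  np.array([20, 35, 28, 45, 15, 30, 40, 25]),
    24:  np.array([45, 60, 55, 80, 35, 70, 50, 65]),
    48:  np.array([90, 120, 105, 150, 80, 130, 95, 110]),
    72:  np.array([140, 190, 160, 230, 120, 200, 150, 175]),
    120: np.array([260, 340, 300, 420, 220, 380, 280, 310]),
}

# The cone radius at each lead time is the ~67th percentile of the errors,
# so about two-thirds of past forecasts verified inside the circle.
cone_radius_nm = {
    lead: float(np.percentile(errors, 67))
    for lead, errors in historical_errors_nm.items()
}

for lead, radius in sorted(cone_radius_nm.items()):
    print(f"{lead:>4} h cone radius: {radius:.0f} n mi")
```

Note that the radii depend only on the historical sample, not on anything about the current storm, which is exactly the limitation discussed below.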
This approach may have been a reasonable thing to do twenty or thirty years ago, but it is NOT state of the science today because we have far more sophisticated tools to quantify and present the uncertainty of the track forecasts.
Uncertainty in hurricane tracks varies by storm, location, date, forecast situation, and forecast period. One size does not fit all. And we now have much more sophisticated capabilities that can produce storm-specific track uncertainty with a very different structure than the simple cone method.
To put it another way, the cone approach is out-of-date and should be dropped for next hurricane season.
The key technology that has changed the story is ensemble prediction, whereby operational meteorological centers make many predictions during each forecast cycle, each forecast starting with a slightly different initial state or modestly different physics (e.g., how clouds form). Thus, we get an array of tracks that gives us an idea of the potential routes of the storm. If the tracks are very different, then uncertainty is large.
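To illustrate what ensemble "spread" means quantitatively, here is a schematic Python sketch, using a synthetic 20-member ensemble rather than real GEFS output, that measures how far the members stray from the ensemble-mean position at each forecast hour; the faster that distance grows with lead time, the larger the track uncertainty for that particular storm and cycle.

```python
# Schematic sketch (not any operational center's code) of summarizing ensemble
# track spread: at each forecast hour, measure how far the individual member
# positions sit from the ensemble-mean position. Member tracks are synthetic.
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers between lat/lon points (degrees)."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat = np.radians(lat2 - lat1)
    dlon = np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Synthetic ensemble: 20 members, positions every 12 h out to 96 h,
# with perturbations that grow with lead time.
rng = np.random.default_rng(0)
n_members, n_times = 20, 9
base_lat = 25 + 1.5 * np.arange(n_times)   # storm drifting north
base_lon = -75 + 0.8 * np.arange(n_times)  # and east
lats = base_lat + rng.normal(0, 0.3 * (1 + np.arange(n_times)), (n_members, n_times))
lons = base_lon + rng.normal(0, 0.4 * (1 + np.arange(n_times)), (n_members, n_times))

# Spread = mean distance of members from the ensemble-mean position.
mean_lat, mean_lon = lats.mean(axis=0), lons.mean(axis=0)
spread_km = np.array([
    great_circle_km(lats[:, t], lons[:, t], mean_lat[t], mean_lon[t]).mean()
    for t in range(n_times)
])

for t, s in enumerate(spread_km):
    print(f"hour {12 * t:>3}: mean distance from ensemble mean = {s:6.0f} km")
```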
Here is an example of the ensemble of forecast tracks of Superstorm Sandy from the NOAA/NWS GEFS ensemble system for runs starting at 1200 UTC 24 October 2012. You can see that the uncertainty was initially small (the ensemble members were close together), but later they splayed out quite a bit, suggesting major uncertainty. This does not look like a National Hurricane Center cone.
But let's examine a very recent example, Hurricane Joaquin, back in September. You remember, the storm that was initially forecast to hit the East Coast, but went out to sea? The observed track (courtesy of the Weather Channel) is shown below.
On September 29th at 2 AM PDT, the cone track diagram showed the storm heading for the NY area, with the entire cone reaching the NE US. No possibility of a miss on the region!
At 11 PM PDT September 30th, the threat was even worse, with the storm heading directly towards Washington DC (maybe that would have been a good thing considering what is going on there these days). Somewhere along the central eastern seaboard would get it.
Let's compare this official cone track diagram to the ensembles initialized a few hours earlier, at 5 PM PDT 30 Sept 2015. They tell a very different story. There is MUCH more uncertainty in the forecasts than suggested by the cone, with many of the forecast storms going out to sea. This was an extremely uncertain forecast, and the cone figure did not communicate that.
By 5 PM PDT on October 1 the forecast tracks had shifted eastward, with the most probable track just offshore.
Unfortunately, the cone was not communicating the true uncertainty. As shown by the ensemble forecasts produced at 11 PM on October 1, there was a huge spread of possibilities, with some tracks going into the southeast U.S. and many heading out to sea. Very few were following the path of the cone.
And finally, the National Hurricane Center cone for 2 PM on October 2 was taking the storm offshore, and that is what occurred (as shown by the observed track earlier). The ensembles had shown this possibility days before.
So what can we conclude from these examples (and I could show you many, many more from this and other storms, including Superstorm Sandy in 2012)?
1. The cone of uncertainty often does not encompass the true path of the storm. Thus, folks outside the boundaries of the cone may not be prepared.
2. The actual uncertainty is often larger than indicated by the cone, giving the population false confidence in the path of the storm.
3. We have substantial information from ensembles that cannot be expressed by a cone, such as when they reveal two possible families of tracks for a storm (inland towards the U.S. or out to sea, as in Joaquin and Sandy); see the toy sketch after this list.
4. The cone is based on the average errors over years. Ensemble tracks are appropriate for the specific storm and date in question.
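As a toy illustration of point 3, here is a short Python sketch that sorts hypothetical ensemble endpoints into two families, toward the coast or out to sea; both the endpoint longitudes and the "coastline" threshold are invented for the example.

```python
# Toy sketch of the "two families of tracks" point: an ensemble can split into
# coastal and out-to-sea groups, something a single cone cannot express.
# Endpoint longitudes and the -75 degree coastline threshold are made up.
import numpy as np

# Hypothetical final-position longitudes (degrees east; negative = west)
# for 21 ensemble members.
final_lon = np.array([
    -78.2, -77.5, -76.9, -76.1, -79.0, -77.8,            # members hugging the coast
    -71.4, -69.8, -68.5, -70.2, -66.9, -72.1, -67.5,     # members recurving to sea
    -70.8, -69.1, -65.4, -73.0, -68.8, -71.9, -66.2, -64.7,
])

coast_lon = -75.0  # crude coastline longitude for this latitude band
toward_coast = final_lon <= coast_lon

print(f"{toward_coast.sum()} of {final_lon.size} members end up near or over the coast "
      f"({100 * toward_coast.mean():.0f}%); the rest head out to sea.")
```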
In short, using track cones is out of date and technologically backward, and it does not reflect the latest tools for calculating and displaying the uncertainties in hurricane tracks.
Track cones should be dropped by the National Hurricane Center and new approaches for displaying ensemble-based tracks should be developed. Here is one possibility.
Of course, there might be some folks who would be unhappy if we dropped the use of hurricane track cones...
Other than to say, "We don't know," it is difficult to communicate uncertainty to the general public in a way that fits into a 30-second video bite. My vote is to publish the ensemble forecast as it gives the public all the information available, and over time they will decide how best to use it.
There may be some reluctance inside the NWS to let the public know how uncertain their forecasts are.
Perhaps private users (airlines for example) have ways of presenting data and making decisions that we should look at.
I think communicating weather and storm uncertainty would be a great doctoral candidate cross-collaboration project between the various schools at UW. Cool post.
The Coneheads caught me by surprise while I was scrolling down and reading the article. It was very funny.
The article certainly made me smarter, but there's not much an average person can do about this, so I'm guessing much of the article is aimed at the weather community that reads everyone else's blogs.
I found your suggested formats very easy to understand.
I've watched the cones as well as the ensembles. I know the ensembles are closer to reality, and the cones are just a simplified, minimal possibility that may not happen. And regardless of that, they don't use the same cone; they change them as well.
They need to make The Cone Of Uncertainty look more like an explosion.
I don't see how they could go to a forecast with more uncertainty. The media and the climate alarmism movement need the cones to breathlessly report every storm as sensational and caused by climate change. A more reasonable, measured and scientific approach does not fit the scare tactic agenda.
I like the 'cone' a lot. I have always interpreted it as something like a 90% confidence interval, with the other 10% outside the cone. I could believe that somebody looking in a newspaper wouldn't think that. Also, looking at cones compared to ensemble forecasts makes me think it's not at all responsive to probability density, so maybe the way they've been doing it is really bad after all. But I think the cone concept could be salvaged by making it large enough to encompass actual uncertainty (even if that makes it look like the image-makers truly don't know, which they don't). You could also have a 2-step cone, one including all models and an inner one including X% of them.
But one or 2 cones is definitely better than a zillion lines each with no error. Plus, you get cone art:
http://www.vancitybuzz.com/2015/10/big-storm-pound-bc-coast-thanksgiving-weekend/
I think blending the cone and the ensemble tracks is pretty straightforward. The cone would cover the max edges of the ensembles and have a color gradient showing the averages of the ensembles: red in the middle if/where they are more average, through to violet on the outer edges. They also could fade from red through violet for future days' uncertainties. That's sort of what I think the NHC's new approach graphic tried to do, but it's... muddy.
That would work quite well for the 10/1 11 PM Joaquin ensemble, where you can see most of the tracks are similar. Visually I can see 2 similar sets of average courses, and the cone would be maybe 20 degrees wide.
It would also really show how uncertain things were for the 9/30 5PM ensembles. The cone would be near 90 degrees, and almost no similar tracks to average to a 'hotter' color.
I think the cone is useful for those who want a simple answer (the voting public), but the ensemble tracks are obviously much better for those of us who like to get at least a glimpse at some data.
Showing both would be great, perhaps for a 5-10 yr transition period.
Thanks Cliff