Monday, October 16, 2017

The Real Story Behind the California Wildfires

There has been a huge amount of media coverage regarding the tragic northern California fires, documenting the terrible loss of life and billions of dollars of damage to buildings, infrastructure, and the economy.  As I write this, the death toll has risen to 41, over 5000 buildings have been destroyed or damaged, and the estimates of the financial loss are in the tens of billions of dollars.

Media stories have blamed the catastrophic fires on many things:  a dry environment after the typical summer drought, unusual warmth the past several months, excessive rainfall producing lots of flammable grass, strong winds, global warming, and the lack of vegetative maintenance (clearing of the power line rights-of-way) by the local utility (PG&E).

But none of the stories I have read get at what I believe is the real truth behind this unprecedented, severe, and explosively developing wildfire event:

A unique mountain-wave windstorm produced the strongest winds in the historical record at some locations.  It was an event produced by the unlucky development of just the right flow regime, one that interacted with regional mountains to produce extreme winds beyond contemporary experience.

In short, this blog will make the case that the extreme nature of the wildfires was the result of a very unusual weather event, one that our weather models had the ability to forecast and warn about, if only their output had been applied more effectively.  The blog also suggests that better use of state-of-the-art weather prediction offers the hope of preventing a similar tragedy.

The Unique Wind Event

Although there have been a lot of media reports about windy conditions, few have described the extreme, often unprecedented, nature of the winds on Sunday night and Monday morning (October 8/9th).   Some have even mocked PG&E's claims of hurricane-force winds, suggesting wind speeds of 30-40 mph.

Let's clarify a few things.  There was a wide range of winds that night, with the strongest winds on ridge tops and on the upper lee slopes of terrain.  Some of the winds were startling.

For example, at 10:30 PM on 9 Oct 2017 the wind gusted to 96 mph on a 3400-foot peak NE of Geyserville, about 20 miles NNW of downtown Santa Rosa.  That site reported sustained winds of 74 knots (85 mph).  Those are hurricane-force winds (sustained winds of 64 knots or more).

At the Santa Rosa RAWS station (U.S. Forest Service and Bureau of Land Management) at 576 ft elevation, the wind accelerated rapidly Sunday night to 68 mph (see below).

A few miles to the NNW and a bit higher (2000 ft), winds at the Hawkeye site accelerated abruptly to 79 mph.


What is really amazing about the winds at these sites is that they were unprecedented:  the strongest winds on record, with records going back to 1991 (Santa Rosa) or 1993 (Hawkeye).  And we are not talking about the strongest winds during the fall, but the strongest winds any time during the year, even during the stormy winter season when powerful storms can cross the region.

At low-levels, the situation was more mixed.  For example, at Napa Valley Airport (36 ft), the sustained winds at 11:15 PM October 9 (37 knots) were the strongest observed (looking back to 2001) at that location from July 1- November 30, while at the Santa Rosa Airport (KSTS) the sustained winds only reached 28 mph, with 40 mph gusts.

So why were the winds so strong and unprecedented at higher levels in the hills?  These winds were key for causing the wildfires to explode and to quickly move into populated regions.  And the winds undoubtedly damaged power transmission lines and thus helped start electrical fires, which may, in fact, have initiated the big wildfire runs.  And why were the lower-level winds less severe?  What can explain such differences?

Mountain Wave/Downslope Winds

When strong flow interacts with terrain, the air can be greatly accelerated.  The schematics below show you some situations, with air accelerated over and downstream of mountain crests.

Such acceleration is well known in Washington State, with some locations experiencing huge winds (like Enumclaw, where winds reached 120 mph on Dec. 24, 1983, while it was calm in Seattle).

That night (Sunday evening), strong to moderate northeasterly/easterly flow was approaching the terrain north of San Francisco, something shown by the 6-hr forecast of height (like pressure) and winds at 850 hPa (about 5000 ft)--a forecast valid at 11 PM Sunday night (see below).  The strong winds and their orientation were the result of cooler air and high pressure moving into the Northwest during the previous day.

So we had modestly strong winds (30-50 knots) approaching the terrain.  A very favorable situation for strong mountain-wave winds is a stable layer at or just above crest level.  A stable layer can be noted when temperature is steady with height or increases with height (an inversion).   The nearest vertical sounding (radiosonde-launched weather sensors) was at Oakland, CA.  The sounding at 5 PM Sunday DOES show an inversion at the right levels (see plot), roughly between 850 and 800 hPa (roughly 5-6 thousand feet ASL).
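As an illustration of the kind of check a forecaster (or a script scanning sounding data) might run, here is a minimal sketch that flags layers where temperature holds steady or increases with height.  The numbers are invented for illustration, not the actual Oakland sounding:

```python
def find_stable_layers(pressure_hpa, temp_c):
    """Return (p_bottom, p_top) pairs for layers where temperature
    does not decrease with height (pressure decreases upward)."""
    layers = []
    for i in range(len(pressure_hpa) - 1):
        if temp_c[i + 1] >= temp_c[i]:  # no cooling with height: stable layer/inversion
            layers.append((pressure_hpa[i], pressure_hpa[i + 1]))
    return layers

# Hypothetical sounding levels (hPa) and temperatures (deg C):
p = [1000, 925, 850, 800, 700]
t = [18.0, 14.0, 10.0, 11.5, 4.0]   # warming between 850 and 800 hPa
print(find_stable_layers(p, t))      # -> [(850, 800)]
```

A real application would read the levels from radiosonde data, but the test is the same: look for an inversion at or just above crest level.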

A group at the Desert Research Institute runs a forecast model (WRF) at very high resolution (2-km grid spacing).  Here is their 6-h forecast for sustained surface winds at 11 PM Sunday.

OMG...there it is.  You can see the banded structure of strong winds over and immediately downstream of major terrain features, with lower speed winds near sea level.  I inserted a terrain map so you can see how the wind maxima were oriented the same way as the ridge lines.  And the model reveals something else:  the enormous horizontal variability of the winds during such events.

Other major modeling systems also predicted the strong mountain-wave winds.  For example, here are the max wind gusts predicted by the NOAA/NWS High Resolution Rapid Refresh model for 2 AM Monday.  Same banded structure, with gusts near Santa Rosa of 50-55 knots (58-63 mph).

Professor Rob Fovell of the University of Albany completed another high-resolution simulation, one initialized at 5 PM on Thursday.  Here is a vertical cross section through the Tubbs fire that affected Santa Rosa.  You can see the acceleration of winds (sustained) on the slopes.

The predicted winds at the Tubbs site were scary strong, with max winds around 70 mph.

The creation of such downslope mountain-wave windstorms is very sensitive to the characteristics of the air moving towards the mountains.  You not only need strong approaching flow, but also the proper vertical structure of temperature and winds.  Clearly such conditions don't happen often--otherwise similarly strong winds would have occurred before.  There is no reason to expect that such extreme wind conditions were made more probable by global warming.

So I think we can outline what happened Sunday/Monday of last week.

The vegetation was dry after little rain over the summer (quite normal).  The ground vegetation was perhaps drier than normal because the summer had been unusually warm (by 1-4 F, as shown by the NOAA Western Region Climate Center map for the last 90 days).

On Sunday afternoon, winds approaching the mountains of northern CA increased, and the vertical structure of an inversion over cooler air was established.  A strong mountain wave/downslope wind event was initiated, bringing winds of 60-90 mph to the crests and upper lee slopes of the regional terrain.  Such winds helped initiate the fires (possibly due to interaction with power lines) and then caused the resulting fires to explode.  The fires, driven by the strong, gusty winds, pushed very rapidly into populated areas.

The good news in all this?  Our models seemed to be able to simulate this event, providing some warning of the imminent wind acceleration.

What could be done with such information?  Much better warnings of a potential blow up?   Shutting off the power to threatened communities?  There are lots of possibilities.   But one thing is for sure:  we cannot let this happen again.  And the first step is to really understand what happened, without assuming we know the answer beforehand.

Too many people are suggesting the wildfire event is all about climate change, when it may prove to reflect a severe weather event unrelated to global warming.  Similarly, some of the same folks claimed that the great rainfall with Hurricane Harvey was all about global warming, when a stalled storm was probably more to blame.  Only by knowing the true cause of disasters and acting on that information can we protect people in the future.

Saturday, October 14, 2017

Cold Weather and Snow Hits the Pacific Northwest

Snow has hit the Cascade passes and below-normal temperatures have spread over the Pacific Northwest, with temperatures dropping into the single digits in portions of eastern Oregon.

And ironically such cold temperatures are a bad sign for those battling the wildfires north of San Francisco.

To "warm up" this blog, let's start with the latest cam shots at Snoqualmie and Stevens Passes.  White stuff.  Enough to make folks think about the upcoming winter season (which may be a good one because of La Nina).

During the last day, many NW folks have observed frost, with temperatures dropping below freezing on both sides of the Cascades.  The map of minimum temperatures for the 24-h ending 8 AM Saturday (below, click to expand), shows 20s in eastern Washington, with teens and even single digits in the valleys of the high plateau of Oregon.  Klamath Marsh RAWS site east of Crater Lake dropped to 5F and Burns, Oregon set a new record for the date (10F).

The below-normal temperatures were clearly evident at Sea-Tac and Pasco, WA--here are plots for the past two days at these locations, with average maxima (purple) and minima (light blue).    At Sea-Tac, Thursday was crazy cold (about 12F below normal) and this morning's low temperatures were clearly the coldest so far this fall.
And many days dropping below normal at Pasco.
The air moving into our region is near record cold, something shown by comparing the incoming temperature at 5000 ft (850 hPa) at Quillayute (WA coast) with climatology (see below).  The gray dot shows the observation at 5 PM Friday and the blue line shows the record low for the date.

The cold air that moved into the Pacific Northwest is associated with higher pressure, something shown by the 12-h forecast for 5 AM today (Saturday) for sea level pressure (solid lines) and lower atmosphere temperature (color shading).  At the leading edge of the cold/high pressure there is a large change in pressure (pressure gradient) that is associated with strong winds.  Unfortunately, some of that pressure-change zone is now over northern CA, which is revving up the winds, particularly over the northern Sierra.   Not as bad as Sunday/Monday, but enough to bring concerns of reinvigorated fires.

The winds yesterday were mostly northerly (from the north) over northern CA, and that blew the smoke southward toward San Francisco (see MODIS image).

I am working on an analysis of winds during this fire event, particularly an evaluation of how unusual they were...stay tuned.

Thursday, October 12, 2017

Could the Northern California Wildfires Have Been Prevented Using Preemptive Power Outages and High-Resolution Weather Forecasts?

The catastrophic fires in northern California are still burning, with the death toll rising to 21 and damage estimates ranging into the tens of billions of dollars.

Is it possible that this tragedy could have been prevented or minimized by cutting the power to threatened areas before the fires started, using the best available weather forecast models to guide decision making?

This question is explored in this blog.

The proximate cause of the explosive fires was discussed in my previous blog.  We started with a very dry landscape, following the typical rain-free summer.  Winds picked up dramatically on Sunday evening, with gusts to 40-70 mph over northern CA.  With offshore flow, relative humidities were very low.  And temperatures had been above normal.

But something initiated the fires and did so at multiple locations within a period of a few hours during the late evening on Sunday.  Although there is the possibility of arson, the most probable fire starter was arcing power lines damaged or shorted by falling trees and branches.

There is a history of California wildfires started by falling trees/branches during strong wind events, such as the 2015 Butte Fire near Sacramento that killed two and destroyed 550 homes.  And there were reports of downed and arcing power lines Sunday evening prior to the wildfire conflagration.

So if downed or arcing power lines were the key initiator of Sunday/Monday's fires, what can we do to lessen the chances of a repeat of the tragedy?

Some media reports have suggested that the relevant power company (PG&E) was not effective in trimming vegetation around its power lines.   Certainly, an effective program of vegetation control around power lines is essential.  And burying power lines in vulnerable areas would be wonderful, but very expensive (I have seen estimates of $1 million a mile)--but perhaps a reasonable investment.

But perhaps there is something else that can be done, which could provide substantial protection:  cutting the power to regions that are directly and immediately threatened by powerline-induced wildfires.

This is how it would work. 

This approach would only take place in regions and periods in which there are threatening amounts of dry fuels on the ground, making wildfires possible.

Thus, portions of California near vegetated areas during the dry season (late summer, early fall) would be candidates.

Only areas with appreciable population would be candidates, thus remote areas would not be considered.

Only when dry atmospheric conditions and high winds are imminent and threatening, would pre-emptive blackouts be considered.  I would suggest using periods when gusts are predicted to exceed 40 mph (35 knots) with relative humidities less than 30% as a potential criterion.  If those conditions are forecast to exist within 6 hours or if they are observed, the power would be cut for the affected areas.
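As a sketch of how such a criterion might be encoded, here is a minimal (hypothetical) decision function using the 40 mph gust and 30% relative humidity thresholds suggested above; the function name and inputs are illustrative, not part of any actual utility system:

```python
GUST_THRESHOLD_MPH = 40.0   # suggested gust criterion
RH_THRESHOLD_PCT = 30.0     # suggested relative humidity criterion

def should_cut_power(forecast_gust_mph, forecast_rh_pct,
                     observed_gust_mph=None, observed_rh_pct=None):
    """True if the pre-emptive blackout criterion is met, either in the
    6-hour forecast or in current observations."""
    forecast_hit = (forecast_gust_mph > GUST_THRESHOLD_MPH
                    and forecast_rh_pct < RH_THRESHOLD_PCT)
    observed_hit = (observed_gust_mph is not None
                    and observed_rh_pct is not None
                    and observed_gust_mph > GUST_THRESHOLD_MPH
                    and observed_rh_pct < RH_THRESHOLD_PCT)
    return forecast_hit or observed_hit

print(should_cut_power(55, 12))   # -> True  (strong, dry forecast)
print(should_cut_power(30, 45))   # -> False (below both thresholds)
```

A real system would of course also apply the fuel-dryness, season, and population screens described above before any forecast trigger is even consulted.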

A very promising modeling system for such a purpose is the NOAA/NWS HRRR (High-Resolution Rapid Refresh) high-resolution model, which is run every hour, out to 18 h.    Here are the ten- and four-hour forecasts of wind gusts valid at midnight Sunday/Monday (0700 UTC) for central CA.  Both forecasts are threatening, with predicted winds over 40 mph in many of the areas north of San Francisco where they were, in fact, observed.

Both showed the winds revving up above 40 mph around 8 PM and dropping below that value around 9 AM Monday morning. 

So the pre-emptive black out would run for 11 hours (8 PM to 9 AM) and folks would get a series of warnings that it would occur.   With modern numerical prediction, warnings of a potential blackout would be given a few days before, with a penultimate warning 6 hrs before, and a final warning an hour before.

Yes, there would be some inconvenience, but that would be minor compared to the benefits.  In the present case we are talking about saving roughly two-dozen lives and tens of billions of dollars of economic impacts.  And we haven't even touched on the negative impact on air quality for the heavily populated San Francisco metro area.

And there is the issue of false warnings or the "crying wolf" syndrome.  But a casual look at the climatology at a few locations around the area suggests that this was an unusual event, and that picking a realistic criterion (e.g., 40 mph gusts, August through October only, relative humidity below 30%) would produce very few blackout events.   I will explore this more during the next few weeks.

Is this a crazy idea?  If so, why?

Tuesday, October 10, 2017

The Northern California Fires: Driven by the Diablo Winds That Were Predicted Days Before

A large area north of San Francisco was devastated Sunday night/Monday morning by explosive wildfires that have killed at least 13 individuals, destroyed over 1500 structures, and burned over a hundred thousand acres. Over one-hundred people are missing.

Picture courtesy of KRDO.

This sudden catastrophic event eclipses the damage of the highly publicized Hurricane Nate, during which no person is known to have lost their life.

As we shall see, the northern California wildfires were produced by the rapid development of strong winds, which gusted to 50-70 mph in places.   Importantly, the winds were highly predictable, being forecast by current operational weather models days in advance.

Here are the max winds (in knots) north of San Francisco during the event (provided by UW grad student Conor McNicholas).  Some locations had winds over 55 knots (orange and red), and lots of places reached 30-45 knots (green colors).

High-resolution NASA MODIS satellite imagery shows the explosive development of the fires.  Around noon on Sunday, October 8th, California is clear, with little evidence of smoke.

One day later, massive smoke plumes are moving westward from a series of fires north of San Francisco.

The key element of this event was the rapid development of very strong offshore (northeasterly) winds, with gusts to 40-70 mph, and rapidly declining humidity during Sunday evening and the early morning hours of Monday. 

To illustrate, here are the observations at Napa Valley Airport (KAPC) from 1154 PDT (1854 UTC) Sunday through 2:24 AM PDT on Monday.  Temperatures early in the day were in the 70s F, with southerly winds and moderate dew point (upper 40s).  But subsequently the winds switched to northeasterly, the dew point dropped into the teens (very dry), and the winds gusted to 35-40 knots (40-46 mph).

Why did dry winds pick up so rapidly?  Ironically, it was due to high pressure, associated with cold air, passing to the north and east of northern California.

Looking at National Weather Service large-scale pressure analyses, one can view the changes.  The lines are isobars of constant sea level pressure.  On Saturday at 5 AM, there was high pressure over the Pacific and a cold front was moving into the Northwest.  Everything was fine in California.

By 5 AM on Sunday, the front had reached northern CA and high pressure was pushing eastward over Oregon, Idaho, and northern Nevada.  A trough of lower pressure was beginning to develop over coastal CA.

By 2 AM Monday morning, the world had changed:  a very strong pressure difference had formed over northern CA, as high pressure pushed east and southward to the east of the Sierra Nevada and a trough of lower pressure intensified along the coast.  There was a big change of pressure with distance, which meant strong winds.

The UW WRF 12-km forecast of wind gusts and sea level pressure for 2 AM Monday shows a huge pressure gradient over northern CA, with areas of strong winds, and very strong winds over the eastern Pacific as well.

And the humidity forecast for the same time shows very dry conditions over northern CA (dark brown color).

A higher resolution (4-km grid spacing) model run by the CA CANSAC group at the same time shows powerful northeasterly sustained winds (not gusts) north of San Francisco.

So the set up was the following.   Northern California was at the climatologically driest point of the year, after a summer of little rain (which is normal).  In fact, the latest official U.S. Drought Monitor graphic did not show particularly unusual dry conditions (see below).

High pressure then built in north and east of northern California, forcing strong offshore (easterly) flow.  As the flow descended the western slopes of the regional terrain, it was compressed and warmed (see schematic).  This warming resulted in reduced relative humidity and pressure falls at the base of the terrain (warm air is less dense than cool air).
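The compressional warming of descending air can be estimated from the dry adiabatic lapse rate, about 9.8 C per kilometer of descent.  A minimal sketch, with an illustrative (not observed) descent depth:

```python
DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8  # warming rate of unsaturated sinking air

def descent_warming_c(descent_km):
    """Temperature rise of unsaturated air descending dry-adiabatically."""
    return DRY_ADIABATIC_LAPSE_C_PER_KM * descent_km

# Air sinking roughly 1.5 km from crest level to the lowlands (illustrative):
print(round(descent_warming_c(1.5), 1))   # -> 14.7 (deg C of warming)
```

That order of warming, with no added moisture, is why relative humidity crashes in downslope flow.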

In fact, the strong downslope wind over northern California has a name:  the Diablo or Devil's Wind.

The pressure falls associated with the Diablo wind helped rev up the horizontal pressure gradients, and thus the surface wind speeds.      So you had antecedent dry conditions, strong winds, warm temperatures and low humidity--all the ingredients needed for explosive fire growth.

And then we have the other issues:  human initiation of fire, mismanaged local forests and grasslands, and folks living too close to fire-prone vegetation.

This event should be considered a severe storm situation, driven by a well-forecast weather phenomenon.   In fact, the forecast models were predicting this event many days before.  To show this, here are forecasts of sea level pressure and temperature made for 2 AM on Monday, started 9 hr before (Sunday at 5 PM) and 117 hr before (5 AM Wednesday).   VERY similar, and both were predicting the strong winds.  The strong winds and dry conditions should have surprised no one.

Could we have warned people better of the upcoming wind event and the potential for fire blow up?  Could we have saved lives if we had done so?

Should the powerlines have been de-energized when the winds exceeded some threshold?  Was poor maintenance of powerlines (e.g., lack of trimming vegetation) a major issue?

I will let others answer these important questions.

Sunday, October 8, 2017

U.S. Numerical Weather Prediction Is Still Behind and Not Catching Up: What Is Wrong and How Can It Be Fixed?

The skill of U.S. global weather prediction still trails behind major international centers, such as the European Center and the UKMET office.  And we are not catching up.

The U.S. National Weather Service is failing to run state-of-the-art high resolution ensemble forecasting systems over the U.S. and there is no hint when we will do better in the near future.

Why is U.S. operational numerical weather prediction, the responsibility of NOAA and the National Weather Service, lagging behind? 

It is not because NOAA doesn't have good scientists.
It is not because NOAA administrators don't care.
It is not because NOAA unions or employees are dragging their feet.
It is not because NOAA lacks financial resources or the support of Congress.
And it is not because the U.S. lacks the scientific infrastructure and human resources.

The reason for U.S. lagging performance?

A dysfunctional, disorganized, and fragmented organizational structure for U.S. operational numerical weather prediction and associated research that makes it impossible for NOAA's weather prediction to be world class. 

Things won't get better until that structure is replaced with an intelligently designed, rational organizational structure that effectively uses both governmental and non-governmental resources to give Americans state-of-the-science weather forecasts.

The Current Situation

Ever since Hurricane Sandy in 2012, when the European Center model did far better in predicting landfall than the U.S. GFS model, there has been a national recognition that U.S. numerical weather prediction, the foundation of all U.S. weather forecasting, has fallen behind.  Story after story has appeared in the national media.  Congressional committees held hearings.  And Congress, wishing to address resource issues, provided substantial funding in what is known as the "Sandy Supplement."  Six years before, after the devastating landfall of Hurricane Katrina, Congress had provided similarly large amounts to improve hurricane forecasting and warnings, creating the HFIP program (Hurricane Forecast Improvement Project).

The HFIP program funding led to the development of a new hurricane modeling system (HWRF, the Hurricane Weather Research and Forecasting model), and the Sandy money went for a new computer, support of extramural (outside of NOAA) research, and the start on a new global modeling system (NGGPS, the Next Generation Global Prediction System).

So with huge public investments in 2006 and 2012, where are we today?

The unfortunate answer is: still behind, with little evidence we are catching up.

Let me demonstrate this to you with hard numbers, many from NOAA's own web sites.  First, here is a measure of the skill of major global models verified over the entire planet for five-day forecasts at one level, 500 hPa, for the past month (anomaly correlation is shown, with 1 being perfect).

For virtually every forecast the European Center (red dashed) is the best, with the U.S. GFS model (the black line) indicating lower skill.   Second best is generally the UKMET office model (yellow) run by the British, and for this month the U.S. is even behind the Canadians (green line, CMC).  The overall summary of skill is found in the lower left corner.  The U.S. Navy also does global prediction (FNO), and it is behind the others.  I could show you other months, but the results don't really change.
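For readers unfamiliar with the metric, the anomaly correlation is just the correlation between forecast and observed departures from climatology.  A minimal sketch with invented 500 hPa heights (not real verification data):

```python
import math

def anomaly_correlation(forecast, observed, climatology):
    """Pearson correlation between forecast and observed anomalies
    (departures from climatology); 1.0 would be a perfect score."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den

# Illustrative 500 hPa heights (meters) at four grid points -- made-up numbers:
clim = [5700.0, 5750.0, 5800.0, 5850.0]
obs  = [5680.0, 5760.0, 5790.0, 5870.0]
fcst = [5692.0, 5753.0, 5797.0, 5858.0]
print(round(anomaly_correlation(fcst, obs, clim), 3))   # -> 0.995
```

Operational verification does this over the whole hemisphere with area weighting, but the idea is the same: a model that merely reproduces climatology scores zero, not one.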

But this is just a snapshot.  What about a longer-term view?  The top figure below is the skill of the U.S. (GFS, red) and European Center (ECM, black) global models for the past 20 years for the five-day forecasts over the northern hemisphere at 500 hPa.   The bottom shows the difference between the modeling systems, with negative indicating that the U.S. model is behind.

The bottom line:  (1) the U.S. model skill has been lower than the European model's, and (2) the U.S. has made little progress in catching up during the last ten years, when a huge investment has been made.  To be fair, one should note that both the U.S. and European models have gotten slightly better during the period.  U.S. skill is not declining.  But both are improving at about the same rate.

Hurricane forecasting is a critical prediction responsibility of NOAA, and it has certainly been in the news of late, with the landfalls of Harvey, Irma, Maria, and Nate.  The evaluation of hurricane forecasts by numerical prediction models can be separated into track and intensity error.   Track error is clearly the key parameter, since a skillful intensity forecast is of little value if the storm is in the wrong place.  And bad tracks inevitably degrade intensity prediction.  There have been huge gains in reducing track error, but only modest improvement in intensity forecasts.

Below are the 48-h track errors for Atlantic Basin tropical cyclones/hurricanes for the past 22 years for a number of models and prediction systems (this evaluation was done by the NOAA/NWS National Hurricane Center).  The official forecast (humans using all the model guidance) is shown in black.
There has been notable improvement in 48-h track errors, from roughly 150 nautical miles to about 70 for the better systems.  Impressive.  But all models are not equally skillful.  The best over the last few years has been the European Center's global model (light blue, EMXI).  The NOAA/NWS GFS global model is not as good, and the NOAA hurricane model (HWRF) is inferior as well.

What about recent hurricanes in 2017?  Hurricane Irma, which hit the Caribbean and Florida very hard, was forecast far better by the European Center than by the U.S. global or hurricane models.  To prove this, here are the mean absolute errors of the track forecasts (in km) for Irma from Professor Brian Tang's website at the University of Albany.   The European Center is far superior for all shown projections (12-120 hr) to both the U.S. global model (indicated by AVNO) and the U.S. hurricane models (HWRF and the new HMON).
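Track error of this kind is simply the great-circle distance between the forecast and observed storm centers, averaged over many verifying forecasts.  A minimal sketch with hypothetical storm positions (not actual Irma data):

```python
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def mean_track_error_km(forecast_positions, observed_positions):
    """Mean great-circle distance between paired forecast/observed centers."""
    errors = [great_circle_km(fl, fo, ol, oo)
              for (fl, fo), (ol, oo) in zip(forecast_positions, observed_positions)]
    return sum(errors) / len(errors)

# Hypothetical 48-h forecast vs. observed storm centers (lat, lon):
fcst = [(25.0, -75.0), (27.0, -77.0)]
obs  = [(25.5, -75.5), (26.5, -77.5)]
print(round(mean_track_error_km(fcst, obs), 1))   # mean error in km
```

The verification sites referenced above do essentially this, stratified by forecast lead time (12 h, 24 h, ... 120 h).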

What about Hurricane Harvey?   Same thing...the EC is the best. And the new NOAA hurricane model was very poor.

Now to be fair, although the European Center is generally superior to the American models in terms of track, there are some exceptions.  For example, Hurricane Maria, which devastated Puerto Rico.    In this case, the European Center was better at 12 and 24h, about the same at 48h, and modestly worse at 72-120 hours.

But Maria is the exception; generally, the European Center has better track predictions.  To prove this, here is a graphic from Albany's Brian Tang for many of the significant storms of the past few years, showing the 5-day forecast track error.  For most storms, the European Center is best.  On average, the European Center 5-day track error is around 280 km, while the U.S./NOAA GFS error is about 420 km.  A significant difference.

There is a new player in global modeling, one run by a private sector firm (Panasonic).  They started with the U.S. global model (GFS) and assigned a team of about ten people to improve it.   The results have been impressive--they appear to do consistently better than the NOAA GFS model, both in general and for major storms.

Here is the proof for Hurricane Irma.  The PWS forecasts were startlingly good and MUCH better than the NOAA GFS for all projections beyond two days.  Even better than the European Center beyond 85 hr!

This has major implications.   A modest size effort by a private sector firm was able to substantially improve NOAA's own model.  Why were such improvements not made by NOAA itself?

As noted above, NOAA is now developing a new global modeling system, based on the NOAA GFDL FV-3 model, which will go operational in two years.  The latest version of this model, with improved physics, was tried on a large number of hurricanes/tropical storms from 2015-2016.  The results below, for storm track error, are sobering.  The new model's track forecasts are only slightly better than the current NOAA global model's (GFS) and far worse than ECMWF's.  Just as disturbing, the NOAA high-resolution hurricane model (HWRF) has worse track forecasts than the others.

The bottom line in all of this is that after years of investment, U.S. global prediction and hurricane track forecasts are lagging the European Center and are not catching up.  For those knowledgeable about the technical details of weather prediction, this is not surprising.   U.S. global data assimilation is not as good as the European Center's, and U.S. model physics (e.g., the description of clouds, precipitation, convection, radiation, etc.) are generally inferior.

But the lack of NOAA progress is worse than that.   Report after report, workshop after workshop, advisory group after advisory group, has recommended that NOAA/NWS get serious about ensemble-based (many forecasts, each slightly different) probabilistic prediction, and particularly that it field a large convection-allowing ensemble with grid spacing of 4 km or less.  Such an ensemble system is critical for prediction of severe thunderstorms, heavy mountain precipitation, and more.  But little happens at the NWS.  In desperation, the university-governed National Center for Atmospheric Research established its own high-resolution ensemble system as a demonstration of what the NWS should be doing.  It is very popular among NWS forecasters, but cannot be maintained indefinitely.  The NOAA/NWS Storm Prediction Center kludged together a Storm-Scale Ensemble of Opportunity with 6-7 members--too small and ad hoc to address the needs.
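The basic product of such an ensemble is a probability: the fraction of members exceeding some threshold of interest.  A minimal sketch with a hypothetical 10-member gust forecast at one location:

```python
def exceedance_probability(member_values, threshold):
    """Fraction of ensemble members at or above a threshold -- the simplest
    probabilistic product derived from an ensemble forecast."""
    hits = sum(1 for v in member_values if v >= threshold)
    return hits / len(member_values)

# Hypothetical 10-member forecast of peak wind gusts (mph) at one point:
gusts = [28, 35, 41, 44, 38, 52, 47, 33, 45, 50]
print(exceedance_probability(gusts, 40))   # -> 0.6
```

A forecaster would read that as a 60% chance of gusts reaching 40 mph at that point, which is far more useful for decisions like pre-emptive blackouts than a single deterministic number.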

And NOAA has lagged in the area of statistical post-processing, improving model predictions by combining several models and observations using advanced statistical techniques.   The private sector, using university/NCAR research, has surged ahead in this, leaving National Weather Service forecasts in the dust.

Want some proof?  Go to and pick your favorite city.  Here are the results from Chicago and New York.  The NWS predictions are far behind companies that use multi-model approaches with more advanced statistical techniques.

Why is NOAA/NWS lagging?

The evidence for the lagging performance of NOAA/NWS weather prediction is overwhelming, and even my lengthy description above hardly scratches the surface of the problems (e.g., major deficiencies in their seasonal prediction system (CFS), duplicative models, very poor performance of their new hurricane model (HMON), and much more).

How can this all be happening with many good scientists, concerned and interested administrators, and NWS personnel who want to do an excellent job?

I believe the fundamental problem is a deficient organizational structure that has grown increasingly incapable of supporting a complex modeling/prediction effort.  And until the structural deficiencies are dealt with, U.S. weather and environmental prediction will be second (or third) rate.

The essential problem:  responsibilities for numerical weather prediction are scattered around NOAA.  

Operational numerical weather prediction is the responsibility of the Environmental Modeling Center (EMC) of the National Centers for Environmental Prediction (NCEP), which is part of the NWS, which in turn is part of NOAA (see the NWS org chart below).    But the heads of EMC and NCEP, responsible for running the weather models, do not control the folks developing the models, who are elsewhere in NOAA (in the ESRL and GFDL labs).

Imagine being responsible for winning a race, but having to accept the car given to you by others.

And responsibilities for model development/application inside the NWS are shared with offices outside of NCEP, including the Office of Science and Technology Integration (OSTI).   Responsibility for post-processing of model output lies not with EMC/NCEP but with the MDL lab in OSTI.  And responsibility for hydrological forecasting, which requires high-resolution model simulations, is in ANOTHER office (the Office of Water Prediction).

But it is worse than that.  The heads of EMC/NCEP, and even of the NWS, don't control the folks working on developing new models or the science and technology they require.  Those folks are in NOAA's Office of Oceanic and Atmospheric Research (OAR), in the ESRL and GFDL labs.   Historically, this has been a major problem, with OAR developing models that have never been used by the NWS; at times NOAA ESRL even planned to compete in operational NWP.

There is no central point of responsibility for U.S. numerical weather/environmental prediction--no individual or group with whom the "buck stops," no one who controls the resources needed to be the best in the world.  And there has been a lack of integration of modeling systems--combining atmospheric, ocean, hydrological, air quality, and land surface modeling.  The future is in integrated environmental prediction, not weather prediction alone.

The lack of central responsibility for U.S. numerical weather prediction has led to duplication of effort, competition rather than cooperation at some times, development that has never gone into operations, lack of coordination, and waste of public resources.

Solving the problem

The key to fixing NOAA's problems is to prune and reorganize, to create one entity responsible for U.S. environmental and weather prediction.

A NOAA environmental prediction and research center should be established, located within NOAA, not the NWS.  There should be one director, with responsibility for ocean/atmosphere/hydrological and other environmental prediction issues.  That person would control both operations and research, with resources for both.  The fragmented, ineffective current system, which has grown haphazardly over the past half-century, must be replaced.

A natural location for the center would be Boulder, Colorado--home of NOAA ESRL, the National Center for Atmospheric Research, and the University of Colorado.  It is an attractive, centrally positioned location, which is important, since the new center must attract the nation's best scientists and modelers, whether for visits or for permanent positions.

The new center would be responsible not only for creating and running an integrated modeling system, but for model post-processing as well.  It would sponsor regular workshops, conferences, and tutorials.  The center would be a central point of engagement with the academic and private sector communities.  It would take on critical tasks that have been neglected by NOAA, such as extensive model verification and the creation of actionable strategic and implementation plans.
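Model verification, one of those neglected tasks, is conceptually simple: pair forecasts with verifying observations and compute skill statistics. A minimal sketch with made-up station data:

```python
import numpy as np

# Hypothetical paired forecasts and verifying observations (deg F) at one station.
forecast = np.array([61.0, 58.5, 64.0, 70.2, 55.1, 62.3])
observed = np.array([59.0, 60.0, 63.5, 67.8, 56.0, 61.0])

err = forecast - observed
bias = err.mean()                   # systematic over/under-forecasting
rmse = np.sqrt((err ** 2).mean())   # typical magnitude of the error
print(f"bias = {bias:+.2f} F, RMSE = {rmse:.2f} F")
```

Doing this systematically--every model, every region, every season, against independent observations--is what allows honest comparisons with the European Center and tells developers where the models are actually failing.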

Why today is different

The nation now understands that the U.S. has fallen behind in numerical weather prediction.  New NOAA administrators, previously from the private sector, may be willing to take a fresh look at the problems.  Both sides of the political spectrum want U.S. weather prediction to be the best in the world.  Perhaps, at a divisive time, there is an opportunity for us to come together for an effort that will benefit all Americans.

There is no reason that U.S. numerical weather prediction cannot be far better than the European Center's, particularly since the U.S. research establishment is far larger than Europe's.  The benefit of even incrementally improved weather prediction is immense; we just lack the organization and will to make it happen.  That can change.