September 14, 2011

Resolution

Numerical weather prediction models are generally solved on a three-dimensional grid, with the distance between the points--the grid spacing--being a measure of the resolution of the model.

We often talk about the horizontal resolution of a weather forecasting model ... the horizontal distance between the grid points. This resolution has greatly improved over the years as greater computer power has become available--from roughly 600 km in the early 1950s to 12 to 4 km today.
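
To put those numbers in perspective, here is a quick back-of-the-envelope sketch of how many horizontal grid points it takes to cover a single domain at various grid spacings (the 5000 km square domain is just an illustrative assumption, not any particular model's actual grid):

    # Horizontal grid points needed to cover an assumed 5000 km x 5000 km domain
    # at various grid spacings.  Purely illustrative numbers.
    domain_km = 5000
    for spacing_km in (600, 36, 12, 4, 4 / 3):
        per_side = domain_km / spacing_km
        print(f"{spacing_km:7.2f} km spacing: {per_side:7.0f} points per side, "
              f"{per_side ** 2:13,.0f} horizontal points")

Going from 600 km spacing to 4/3 km is roughly a 200,000-fold increase in horizontal grid points alone, before counting extra vertical levels or the shorter time steps finer grids require.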

Let me show you the implications of resolution using the WRF model run at the UW.

Here is 36-km resolution--representative of the best we had roughly in the late 1990s--for surface air temperature.
Really just the major features: no hint of the valleys, no Puget Sound, no volcanic peaks.

Here is 12-km, the main resolution run by the National Weather Service today.  A bit better, with more definition.  No Puget Sound really.


Next, 4-km. Far more structure, and the valleys are much better defined.



And finally, the best--4/3 km. Run here at the UW for Washington and nearby states, this resolution provides extraordinary detail. Much of Puget Sound is defined. Individual river drainages are clearly resolved. You need this resolution to get realistic flow in the Columbia Gorge.



Mount Rainier and other volcanic peaks are clearly evident.  In five years or so such resolution will be operational throughout the nation.  This image is really beautiful too....you could hang it on the wall as art.

Right now the National Weather Service runs a global model at roughly 25-km resolution, and regional models at 4-km grid spacing. To explicitly simulate thunderstorms one needs to do better than 4-km grid spacing, and the finer the better.

In the vertical, the resolution depends on the number of layers. For our local WRF model we use 37, with more layers near the surface where we really need the detail.
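
Here is a toy sketch of what such a stretched set of layers looks like (this is not the actual WRF level formula--just a simple quadratic stretching, with an assumed 20 km model top, that packs levels toward the surface):

    import numpy as np

    # Toy stretched vertical grid: 37 levels between the surface and an assumed
    # 20 km model top, packed toward the surface.  Not WRF's real level definition.
    n_levels = 37
    model_top_km = 20.0
    s = np.linspace(0.0, 1.0, n_levels)   # evenly spaced in [0, 1]
    heights_km = model_top_km * s ** 2    # squaring concentrates levels near 0

    print("lowest five levels (km): ", np.round(heights_km[:5], 3))
    print("highest five levels (km):", np.round(heights_km[-5:], 1))

With this kind of stretching the lowest few layers are only tens of meters apart, while the layers near the model top are separated by a kilometer or more.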

It takes a lot more computer power to make even modest improvements in resolution--every time you double the horizontal resolution you need roughly 8 times more computer power, since there are twice as many grid points in each horizontal direction and the time step must be roughly half as long. It is not surprising that weather forecasting uses some of the most powerful computers in the world.
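
The arithmetic behind that factor of 8 is simple enough to write down (assuming the cost scales with the number of horizontal grid points times the number of time steps, and that the time step must shrink in proportion to the grid spacing):

    # Why doubling horizontal resolution costs roughly 8x: twice the grid points
    # in x, twice in y, and a time step about half as long -- 2 * 2 * 2 = 8.
    def relative_cost(old_dx_km, new_dx_km):
        refinement = old_dx_km / new_dx_km
        return refinement ** 3   # (x points) * (y points) * (time steps)

    print(relative_cost(36, 18))     # doubling: ~8x
    print(relative_cost(36, 12))     # 36 km -> 12 km: ~27x
    print(relative_cost(12, 4))      # 12 km -> 4 km:  ~27x
    print(relative_cost(4, 4 / 3))   # 4 km -> 4/3 km: ~27x

Each step down our 36/12/4/(4/3) km chain is a factor of three in grid spacing, so each nest costs roughly 27 times more per unit area than the one above it.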

Here in the Pacific Northwest we run 36, 12, 4, and 4/3 km resolution forecasts twice a day on clusters of commodity, off-the-shelf processors (Intel Nehalem cores). This activity is sponsored by the Northwest Modeling Consortium, a group of federal, state, and local agencies and some private-sector firms.

11 comments:

  1. To what extent (if any) can this process be parallelized in a way that does not require low-latency interprocess communication?

    I assume your simulations just boil down to stepping a set of partial differential equations forward in time subject to some forcing function. I seem to recall from something I read in college that you can parallelize this kind of process somewhat by chopping the domain up into cubes, simulating each cube on a separate compute node, and transmitting boundary conditions every few steps between nodes that handle neighboring cubes.

    If you can do that, you might be able to scale up your simulations by using Amazon's EC2, which has computing power available that is, I suspect, several orders of magnitude higher than anything NWS owns. (Disclaimer: I work there).

    A post about if and how the NWS might leverage the possibilities offered by cloud computing -- renting computers by the hour in numbers far beyond what any agency could afford to own -- would be very interesting.
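
    Something like this toy sketch is what I have in mind--a 1-D smoothing problem standing in for a real model, with slabs instead of cubes and plain array copies standing in for the network traffic between nodes:

        import numpy as np

        # Toy halo exchange: split a 1-D field into slabs, give each slab
        # one-cell "halo" copies of its neighbors' edges, and refresh those
        # halos between steps.  A real model would do the refresh over a
        # network (e.g. MPI), which is where the latency question comes in.
        n, steps, alpha = 120, 50, 0.25
        field = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
        slabs = [np.concatenate(([0.0], s, [0.0]))       # add halo cells
                 for s in np.array_split(field, 4)]      # 4 pretend nodes

        for _ in range(steps):
            # "communication": copy edge values into neighboring halo cells
            for i in range(len(slabs) - 1):
                slabs[i][-1] = slabs[i + 1][1]
                slabs[i + 1][0] = slabs[i][-2]
            # "computation": each node updates only its own interior points
            for s in slabs:
                s[1:-1] += alpha * (s[:-2] - 2.0 * s[1:-1] + s[2:])

        print("max value after smoothing:", max(s[1:-1].max() for s in slabs))

    The question is how often those halo copies really have to happen and how much data they carry--that is what would determine whether a cloud cluster without a low-latency interconnect could keep up.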

  2. How many cores does it take to run the model? How long? Just curious - I used to work with some global circ folk at Georgia Tech back when that first pic was all we had... we ran ours on a single RS-6000 RISC core, and it took a fair chunk of a day...

  3. Are the high-resolution 4/3km runs published somewhere public in GRIB format? I'm a sailor and would love to have more detailed predictions for wind and weather.

  4. Bruce - My understanding of the WRF model is that every process (core) operates in lock-step with its neighbors because of the boundary conditions it needs from them at each time step. If you could predict a chunk of the problem space for all times at once, then it would be much less dependent on its neighbors, but unfortunately this is not the case.

    Glenn - The WRF model can output to GRIB, NETCDF, GRADS and a multitude of other formats.

    I'm not associated with the original author, but I worked at the National Center for Atmospheric Research when they were writing WRF in the early 2000s.

  5. I wonder if idle processing power available by donation on the Net (such as what was done for the SETI project) could expand the "cores" available for processing weather data.....

  6. Re: Processing power.

    The NCAR-Wyoming Supercomputing Center (NWSC) is nearing completion in Cheyenne, WY.
    http://nwsc.ucar.edu/

    "Scientists rely on advanced computing to understand complex processes in the atmosphere and across the Earth system. Researchers from a broad range of disciplines—including meteorology, climate science, oceanography, air pollution and atmospheric chemistry, space weather, aviation safety, seismology, wildfire management, computational science, energy production, and carbon sequestration—will have research access to the NWSC."

    "The NWSC will be a facility capable of housing a petaflops supercomputer (a petaflops is a thousand trillion mathematical “flops” operations per second). When fully operational in 2012, the NWSC will likely rank among the world’s fastest supercomputers dedicated to Earth science research. The rankings of supercomputers are constantly changing as ever-faster machines are developed."
    http://www2.ucar.edu/news/4735/ncar-wyoming-supercomputing-center-enters-new-phase

  7. Kenna,

    Now that is a great idea. Wonder if the NWS would consider that.

    Secondly, Prof. Mass, do you have any update on the public release/access of KLGX data?

    I assume we're getting close. With this weekend's downturn in the weather, the images would be interesting/useful.

    Lastly, I hope you get some good news on your dog soon.

  8. Hey, Cliff, you'd probably enjoy this short CBS early 1950s clip about the UNIVAC computer, touting the machine's capability by discussing its utility in assisting with weather forecasts:

    UNIVAC & the Weather

    2,000 calculations per second!

  9. Well... those radars definitely look like a man whose head is catching on fire. And I'm probably going to photoshop it as such and make a post of it, if there are no objections. ^_^



    http://meteoroflgy.blogspot.com

  10. Kenna, SWSDuvall, and others...

    The procedures for running simulations of the atmosphere are more complicated than you might imagine--I'm sure they are more complicated than I can imagine, and I've been watching these models since they came across a large-format facsimile printer. Distributing the work among a slew of processors probably buys you nothing.

    The initialization of the models, using new data every six hours--as well as data generated from previous simulations--is a pretty big deal. There might be some stuff on the NCEP website:
    http://www.nco.ncep.noaa.gov/

    What I would like to see is the UW run a mesoscale model initialized from the global ECMWF. Maybe one of the ensemble members is close to the ECMWF? I don't know; maybe one of the computer guys or a grad student at the UW can do that.

    Another thing to mention is that global ensembles are run at NCEP. Maybe the best thing would be to find the best global ensemble member and then run the mesoscale model initialized from that. It would change from day to day, though...the model of the day?

    As far as the UW models being more precise--mesoscale models are sometimes very precise fictions. The 1-km UW model will teach us some things, but here is a for-instance: I've watched the lowest-level cloud product for marine stratus and fog, and it is confounding. It is the best thing I have seen, very realistic, and yet it is often wrong. No doubt simulating fog is among the hardest things to do, but I would have said the 1-km model was going to make statistical guidance moot. Now I'm not so sure. I've used it for aviation forecasts and MOS guidance did better. Choosing which guidance to use in any given situation soon buries forecasters--too much information to assimilate.

  11. PS--I found a PowerPoint file about NCEP computing; here is the URL:
    www.weather.gov/datamgmt/slide.../Daniel%20Starosta%20062911.pptx

    Or google for ncep computing Starosta and it ought to be the first hit.

    Anyway, I misspoke in that post--I meant to say 'distributing the work among a slew of computers' rather than processors. My point being that spreading out something enormously complex (like home PCs helping the SETI search) probably would not help NCEP. NCEP already has at least three supercomputers with, it looks like, nearly 9,000 CPUs among them. The PPT might be worth a look for PC enthusiasts to see the scale of things.


