December 05, 2022

How can we predict the climate 50 years from now if we can't forecast the weather next week?

This is a question asked by a commenter, Skeptic517, on my blog yesterday.

It is a good question, one that I suspect most climate activists cannot answer.

Certainly, none of the responders knew the answer.

So what do you think?   Are long-term climate predictions hokum?

And now the answer.

Climate prediction is possible and reasonable because the nature of the prediction is very different from weather forecasting.

If you like technical terms, forecasting weather is an initial value problem, while climate prediction is predominantly a boundary value problem.  I will explain this in a second.

For weather prediction, we predict the exact state of the atmosphere at a certain time.  The high in Spokane will be 68F next Thursday.   The low center will be 978 hPa and located in central Iowa.    Specific in time and space.

An 84-h weather forecast of sea level pressure (solid lines) and precipitation (shading)

For climate prediction, we don't forecast the surface weather map for 4 AM on January 4, 2090.  That would be nonsensical.

Instead, we forecast mean conditions, often over a broader area.   Will the average temperatures in spring over Washington State be warmer or cooler than current values?  Will the precipitation averaged over ten years of winter be greater at the end of the century than the recent ten-year average?

Projected change in annual mean surface air temperature from the late 20th century to the middle 21st century


That kind of thing.

The essential insight you need is to understand the different natures of weather versus climate prediction.

Weather forecasting:  an initial value problem

Weather prediction, an initial-value problem, starts with a comprehensive, 3-D description of the atmosphere called the initialization.   Then large supercomputers are used to solve the equations describing atmospheric physics to forecast the exact state of the atmosphere in the future at specific times. 

Forecast accuracy declines with time and by roughly two weeks nearly all predictability is lost.....something described theoretically by Professor Edward Lorenz of MIT.    
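Lorenz's result can be illustrated with a toy chaotic system (a classroom sketch, not a weather model): integrate his famous 1963 equations from two nearly identical starting states and watch the "forecasts" part ways.

```python
# Toy illustration of sensitivity to initial conditions (not a real
# forecast model): the Lorenz '63 system, integrated with a simple
# forward-Euler scheme from two nearly identical initial states.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run(x0, steps=3000):
    """Integrate from (x0, 1, 1.05) and record the x-coordinate."""
    state = (x0, 1.0, 1.05)
    xs = []
    for _ in range(steps):
        state = lorenz_step(*state)
        xs.append(state[0])
    return xs

a = run(1.0)
b = run(1.0 + 1e-8)  # perturb the start by one part in a hundred million

# Early in the "forecast" the two runs are indistinguishable...
print(abs(a[100] - b[100]))   # still tiny
# ...but the error grows until the two runs are completely unrelated.
print(max(abs(a[i] - b[i]) for i in range(2500, 3000)))
```

This is the initial-value problem in miniature: the smaller you make the starting error, the longer the forecast stays useful, but the divergence always wins eventually.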

Here is a plot of the loss of forecast skill over time for the U.S. models (for around 18,000 ft over the northern hemisphere).  The blue color is for the leading U.S. global model, the GFS. Forecast skill drops rapidly between 5 and 10 days.


Climate forecasting:  a boundary value problem

Forecast skill for specific weather features is lost after roughly 2 weeks because the atmosphere essentially loses memory of the initial observed state of the atmosphere.

In climate forecasts for extended periods of time, the key constraint is not the initial conditions, but the amount of radiation coming into and out of the atmosphere.    If we know how much radiation is coming into and out of the top of the atmosphere, climate models can produce a realistic average climate for those conditions.

The amount of radiation emitted and absorbed by the atmosphere is greatly controlled by the composition of the atmosphere....which we have to assume (e.g., how much CO2, methane, and particles in the atmosphere).

Such projections are only as good as our estimate of the amount of greenhouse gases in the atmosphere in 50 or 100 years.   Big uncertainty!  But we do the best we can.
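To make the boundary-value idea concrete, here is a minimal zero-dimensional energy-balance sketch (illustrative only; the albedo and effective emissivity are round textbook values, not output from any real climate model). Whatever temperature the run starts from, it settles onto the equilibrium set by the radiation budget:

```python
# Minimal zero-dimensional energy-balance sketch (illustrative only):
# absorbed solar radiation in = emitted infrared out fixes the equilibrium
# temperature, regardless of the temperature we start from.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.30        # planetary albedo (round textbook value)
EMISSIVITY = 0.61    # effective emissivity (crude greenhouse stand-in)

def equilibrate(T0, years=200):
    """March temperature forward until absorbed and emitted radiation balance."""
    C = 2.0e8          # effective heat capacity, J m^-2 K^-1 (ocean mixed layer)
    dt = 86400.0 * 30  # one-month time step, in seconds
    T = T0
    for _ in range(years * 12):
        absorbed = S0 * (1.0 - ALBEDO) / 4.0
        emitted = EMISSIVITY * SIGMA * T**4
        T += dt * (absorbed - emitted) / C
    return T

# Start the two runs 40 K apart; both settle onto the same equilibrium,
# because the boundary condition (the radiation budget), not the initial
# condition, controls the long-term answer.
cold = equilibrate(260.0)
warm = equilibrate(300.0)
print(round(cold, 1), round(warm, 1))  # nearly identical
```

Change the composition (here, crudely, the effective emissivity) and the equilibrium moves; that is the sense in which climate projection is a boundary value problem.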




So climate prediction DOES make sense.

I am skipping some subtleties: for example, the initial state can have some influence on climate simulations.   But you can try to deal with that issue by running an ensemble of many climate predictions each starting slightly differently.
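The ensemble strategy can be sketched with a toy model (purely illustrative; the relaxation rate and noise level are made-up numbers): individual runs wander because of "weather" noise and slightly different starts, but the ensemble average homes in on the forced answer.

```python
import random

# Sketch of the ensemble idea: run the same toy "climate" many times from
# slightly different starting points, then average. The toy model is just
# a noisy relaxation toward a forced equilibrium of 1.0 (arbitrary units).

def toy_run(T0, forcing=1.0, years=100, seed=None):
    rng = random.Random(seed)
    T = T0
    series = []
    for _ in range(years):
        # relax toward the forced equilibrium, plus weather-like noise
        T += 0.2 * (forcing - T) + rng.gauss(0.0, 0.3)
        series.append(T)
    return series

# 30 members, each with a slightly perturbed start and its own noise
members = [toy_run(0.01 * i, seed=i) for i in range(30)]

# Any single member wanders, but the ensemble mean over the last 50
# "years" sits close to the forced equilibrium (1.0 here).
late_means = [sum(m[50:]) / 50 for m in members]
ensemble_mean = sum(late_means) / len(late_means)
print(round(ensemble_mean, 2))
```

Real climate ensembles (e.g., the NCAR large ensemble mentioned in the comments below) are vastly more sophisticated, but the averaging logic is the same.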

Anyway,  now you know the answer!




34 comments:

  1. I will assert that weather prediction is 99% outstanding, thanks to the collective diligence and innovation of scientists such as Cliff. Significant misses are rare, and we should all be extremely thankful for all the great forecasting information we can rely on. Deaths due to unexpected weather are extremely rare compared to a world without great weather forecasting.

    1. I second the motion. Weather forecasting is greatly improved over what it was thirty years ago. As for the occasional Big Miss Forecast (BMF), anyone who lives for any length of time in a lowland area of the US Northwest eventually learns that it's next to impossible in this region to accurately predict the occurrence and the magnitude of a large-scale snow event 100% of the time. You have to be prepared for anything and not be surprised when it happens.

  2. Cliff, is there a good source of upcoming seasonal predictions? Just knowing "will this winter be a cold one" would be extremely useful.

    1. We do: they are called La Nina, El Nino, and La Nada. Knowing which one is in play gives us a baseline and a more probable range of outcomes for the season.

    2. The US folks make an attempt, here:
      https://www.weather.gov/mhx/longtermoutlook

      A different approach is found here – for Oregon:
      https://www.oregon.gov/ODA/programs/NaturalResources/Documents/Weather/dlongrange.pdf

    3. Thanks John, that's really handy.

  3. How reliable were the 'climate prediction' models used 10, 20, 30 years ago at predicting today's climate?

    1. Doesn't matter; if they are wrong, they will just say that the computing power and models have become much more powerful, so the new model forecasts will be right and the old models were just doing the best they could with the technology available at the time.

    2. Who is "they"? Climate scientists like Cliff? I doubt he would participate so fully and successfully in a field that was built on a conspiratorial house of lies.

      To the question about models, the evidence suggests they're historically pretty good. See this link for a 2019 paper on the topic that includes a plain language summary. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2019GL085378

      It's also kind of wild that Trevor thinks it's dubious that climate models would improve over time. Wouldn't it be weirder if 1980s climate models were somehow still in use and considered a gold standard?

  4. I've asked a number of AGW proponents a question which may sound simple to many but has always remained unanswered by its adherents - "how much weather is climate?" IOW, at what point do we consider enough weather predictions and/or recorded weather data to accrue into accurate climate predictions? Is it ten years? Fifty years? Tree rings? Since we don't have many verifiable sources of objective data that go back hundreds of years, this is part and parcel of why I remain a skeptic of the Global Warming crowd. Richard Lindzen (one of the fathers of climate research at MIT) has often voiced similar concerns, and has been harassed repeatedly and vilified for this apparently heretical POV.

    1. I appreciate that asking when weather becomes climate is a fair and somewhat interesting ontological question, but it's not an unsolvable puzzle either. I have to wonder how hard you've looked for an answer to this, given that a peer-reviewed paper literally named "When Does Weather Become Climate" was published in 2019 (to say nothing of the many other resources that exist addressing this question) and shows up at the top of a Google search of your question.

      Anyway, the usual rule of thumb is 30 years. See here: https://www.climate.gov/maps-data/climate-data-primer/whats-difference-between-climate-and-weather

      And here is that paper I mentioned above:
      https://www.researchgate.net/publication/335176583_When_Does_Weather_Become_Climate

      You can disagree with the 30 year standard; it is arbitrary after all. But all the same, your question has been answered repeatedly.

    2. Answering this question with such dubious and (as you said, arbitrary) standards is ludicrous in the extreme. That's not science in any way, shape or form - it's propaganda, straight up. They use the 30 years as a way of completely sidestepping the question in and of itself. FAIL.

    3. One needs to establish some criteria, even if through consensus. It's really no different from other categorization schemes used in science (e.g., thresholds for statistical significance, effect size magnitude, etc.). The question isn't being sidestepped at all; you just don't like the answer.

      I don't know what to tell you here. I feel like you're asking something akin to when does a child become an adult or stage 2 cancer turn into stage 3, and then refusing to accept any workable answer as unscientific.

      If you don't think 30 years is acceptable to make inferences about climate (i.e., generalized trends in weather) then what is your answer?

      And how is establishing a heuristic that 30 years provide enough data to describe climate "propaganda"? I don't understand that objection at all. I want to engage seriously here, but you don't seem willing to have a good-faith discussion, and that's disappointing.

  5. Are climate predictions falsifiable? If so, under what circumstances, roughly, would you decide that a model or prediction had been falsified? Under what conditions, roughly, would you decide that the broader AGW hypothesis had been falsified?
    https://plato.stanford.edu/entries/popper/#BasiStatFalsConv

    1. Plainly so. They literally make predictions that are inevitably tested by the passage of time.

  6. Cliff, you wrote, "In climate forecasts for extended periods of time, the key constraint is not the initial conditions, but the amount of radiation coming into and out of the atmosphere. If we know how much radiation is coming into and out of the top of the atmosphere, the climate models can produce a realistic average climate for those conditions."

    Two things. First, the initial value problem remains in climate simulations because incorrect physical theory means that each iteration begins with a physically incorrect prior state, and then iteratively projects it incorrectly. Uncertainty can only grow with the number of iterations.

    Second, we know the climate can occupy a number of energy states -- the distribution of available energy flux within the climate subsystems -- for any given TOA energy flux. Knowing the TOA flux doesn't provide any information about the climate state.

    1. Pat... I don't think you are correct in this. The radiation balance is constraining on the evolution of the climate state. If there are multiple attractors that can be selected by the initial state, the way to deal with this issue is with an ensemble of climate simulations...with the NCAR large ensemble being a good example. The bottom line---global climate modeling does have merit if done properly...cliff

    2. Certainly if we could forecast the TOA imbalance for fifty years with reasonable fidelity we could accurately forecast the average global temperature change. But can we accurately forecast what are most likely nonlinear knock-on effects of changes in water vapor concentration and other basic variables? Accumulated error from initialization inaccuracy may not be as important as error introduced by poorly extrapolating the effects of changes in the water vapor cycle in a hotter world. A few more clouds in the right places and the heating goes away. That makes me uncomfortable.

    3. Cliff, we know from the several coolings and warmings during the Holocene that multiple climate states are possible without any change in TOA.

      Second, TOA is not known to better than ±4 W/m². The annual average increase in forcing from CO₂ emissions is 0.035 W/m². How is it possible to detect a change in TOA that is ~100 times smaller than its uncertainty?

      Finally, climate models themselves are unable to correctly partition the available energy flux into the climate sub-systems. The errors run to 10s of W/m², if not 100s of W/m². There's just no way climate models can resolve an annual 0.035 W/m² perturbation

  7. The problem is that we really don't understand the sun as a system and the sun's place in the galaxy. Energy input into our system is important. It's not entropic. Doesn't the particular location of our star in the galaxy make a difference? Is this taken into account in the climate models? We just don't understand the solar and the galactic system very well. Could we have predicted the Little Ice Age? Or coming out of it? How long does it take to come out of a mini-"ice age"? Yes, the Milankovitch Cycles play a role, but there's more to it. Just look at the flux in Mars's frozen pole. Planetary bodies radiate in flux. Is there additional energy input? What systems don't have energy input? Entropy is a weak solution.

  8. Regarding weather forecasts: I would like to see more focus on incorrect forecasts and a post mortem of what can be improved, specifically mountain weather, because getting it wrong usually results in accidents, especially in our local passes. A few hundred feet of freezing-level error can easily result in huge snowfall and pass closures.

  9. "Then large supercomputers are used to solve the equations describing atmospheric physics to forecast the exact state of the atmosphere in the future at specific times."

    Top500.org disagrees. According to them, we use 0 large supercomputers in forecasting. Germany leads the way with 4, and India has 1. We do have 2 research machines, at NOAA & NCAR. I'm aware that there are problems, likely dating back to that old IBM contract, when they quit the business. Is the problem really completely unfixed, after all this time?

    1. I am sorry, but you are in error. Operational weather prediction computers are on the list.

  10. I am not sure I have ever heard whether the climate models used to predict the future can reproduce the past accurately. It seems that if the models are correct, or reasonably accurate, then running them back in time should result in a reasonably accurate picture of what actually occurred, as a calibration point for the model. Is that correct? If so, how have those simulations turned out?

    1. The trouble is that the models are "tuned" in a number of ways to match the past. That is a major issue. Yes...they can do a decent job in simulating the past century....but that doesn't mean much with the tuning.

  11. Having spent part of my time in the nuclear industry assigned to software QA tasks, I've been watching with some interest the ongoing technical and scientific debates concerning the accuracy and reliability of the climate modeling codes.

    A basic question about these modeling codes concerns their ability to reliably simulate the complex physics of the earth's atmosphere over long timeframes ranging from ten years to one-hundred years.

    Are the dynamic cores of these climate modeling codes actually capable of simulating the real-time operation of the earth's atmosphere in the presence of ever-increasing concentrations of carbon GHGs?

    Back in 2010 over on the WUWT blog, I asked this question: 'Why do we need these modeling codes? Why do we need to make assumptions about how the atmospheric physics actually operate? Why do we need to parameterize these assumptions in various ways for use inside the modeling codes, as opposed to using direct dynamic simulation for each physical process?'

    A larger question presents itself. Rather than relying on these complex codes, why don't we just observe the real-time physical processes as they occur in the earth's atmosphere and then draw our conclusions from those direct observations, ones being made in real time as the physical processes themselves are happening?

    Said differently, the earth's atmosphere itself as it exists in the real world might become the 'computational computer' which predicts where global mean temperature will be going over the next eighty years.

    The response to my question was that it is not possible at the current state of science to directly observe the assumed physical processes as they might operate in real time inside of the earth's atmosphere. Their presence, and the modes in which they operate, must be inferred from other kinds of observations and from other kinds of scientific analysis.

    It has been demonstrated fairly conclusively by various authors posting on WUWT that the mainstream climate models, for all their massive internal complexity, do little more than transform the initial parameterized inputs into predictable temperature trend outputs -- trend outputs which are fairly well correlated with the parameterized inputs.

    If that is the case, then we might as well just assume a range of climate sensitivities to the presence of carbon GHGs and then use simple linear extrapolation methods based on each assumed sensitivity.

    For one example, if we look at the ups and downs of the HadCrut4 global mean temperature data set since 1850; and if we assume that the variations in GMT trend patterns seen over the last 170 years will continue for another eighty years, then we might extrapolate the Year 2100 GMT anomaly as being simply 0.08 C per decade times 25 decades yielding a +2C rise in GMT over 1850 pre-industrial.

    Or, for another example, we might assume that the greater concentration of carbon GHG's seen more recently has increased the rate of GMT rise, and assume that the GMT trend pattern in HadCrut4 seen between the year 1980 and the year 2020 will continue for another eighty years. In this example, we extrapolate the Year 2100 GMT anomaly as being the sum of: a) the roughly +1C rise seen between 1850 and 2020; and b) +0.2 C per decade times 8 decades. Thus yielding a total rise by the Year 2100 of +2.6C over 1850 pre-industrial.

    IMHO, the most likely outcome is a +2C rise in GMT over 1850 pre-industrial by the year 2100, occurring as a consequence of some combination of natural and anthropogenic climate change processes.

    If the climate activists believe that even a +2C rise over pre-industrial is highly dangerous and must be prevented, the ball is in their court to produce a credible plan of action for just how it can be prevented. As it stands today, neither the Biden administration nor anyone else in the climate activist community has presented such a credible plan.

    -----------------------------
    Disclosure: I post as 'Beta Blocker' on Judith Curry's blog, on Watts Up with That, and on The Manhattan Contrarian blog.
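The back-of-envelope arithmetic in the commenter's two scenarios above can be spelled out in a few lines (the trend values are the commenter's assumptions drawn from HadCrut4, not measurements endorsed here):

```python
# Checking the back-of-envelope arithmetic in the two extrapolation
# scenarios above. The trend numbers are the commenter's assumptions.

# Scenario 1: the 1850-2020 average trend continues through 2100.
trend_low = 0.08               # C per decade (commenter's assumed trend)
decades_1850_to_2100 = 25
rise_low = trend_low * decades_1850_to_2100
print(round(rise_low, 1))      # total rise over 1850 pre-industrial

# Scenario 2: the faster 1980-2020 trend continues from 2020 to 2100.
rise_so_far = 1.0              # C, assumed rise 1850-2020
trend_high = 0.2               # C per decade (commenter's assumed trend)
decades_2020_to_2100 = 8
rise_high = rise_so_far + trend_high * decades_2020_to_2100
print(round(rise_high, 1))     # total rise over 1850 pre-industrial
```

The two scenarios work out to roughly +2.0 C and +2.6 C, matching the figures quoted in the comment; the interesting question, of course, is whether simple linear extrapolation is a defensible substitute for physical modeling.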

  12. Beta Blocker is making the same point that I made, but in a much more articulate way. While I never built complex models myself, I did extensive work with modelers trying to troubleshoot model inadequacies. In my experience, modelers are acutely aware of the dangers of accumulating measurement error and have numerous very clever ways of dealing with that. Most of the problems I encountered stemmed from a different root cause, as articulated by Beta Blocker. The challenge is to create control functions within the model that adequately extrapolate from known relationships to different relationships in a different regime, outside of the envelope in which we have adequate data (in this case, recent past climate). We simply do not know how the nonlinear control functions for variables like water vapor (by far the most important!) and ocean heat content will change in a warmer world with more CO2. One possibility is an "adaptive iris" situation in which the warming is mitigated by a damping function, an example being more clouds in areas where more clouds will reflect more heat.

    An example of this sort of model inadequacy was the spectacular failure of the Northwest smoke models a few years ago. The problem had nothing to do with initial values or with measurement error. The first couple days of bad smoke, the models kept predicting that incoming air would scour out the smoke. Didn't happen. The modelers knew that the existing smoke layer itself would retard the scouring, but like all good modelers, they built that control function conservatively. The smoke ended up being far more resistant to scouring than they predicted. No doubt the model was tweaked to fix the problem and will do better next time. This fixing of problems, known as model validation, has barely begun for climate models and I am very dubious that problems like this do not lurk within.

    We are now seeing papers that discuss 20 years of CERES data. This is the sort of data-driven critique of climate models that is essential for improvement, and already we are seeing major problems, such as the puzzling decrease in shortwave earthshine rather than longwave, as predicted by the basic physics of global warming.

  13. The best analogy I've been able to come up with is a roulette wheel. For the sake of argument, assume that half the numbers are odd, and half even. Over time, you will see the ball bounce at roughly 50-50. The more times you spin it, the closer it gets to 50-50.

    Now assume that you alter things, so that "even" has an advantage, say 51-49. If you are following the action, you notice nothing. You could watch it all day and it looks the same. Sometimes there are five even rolls in a row, but the same thing happens with odd. You certainly can't tell if anything is off, unless you look at lots and lots of spins, and add them up.

    Weather is a single spin. Climate is a huge number of spins.
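The analogy is easy to simulate (the 51/49 bias is the commenter's number; this is just an illustration of the law of large numbers):

```python
import random

# Simulating the roulette analogy above: a wheel nudged to 51/49 toward
# "even" looks indistinguishable from fair over a handful of spins, but
# the bias emerges clearly over very many spins.

def even_fraction(spins, p_even=0.51, seed=1):
    """Fraction of spins landing 'even' on a slightly biased wheel."""
    rng = random.Random(seed)
    evens = sum(1 for _ in range(spins) if rng.random() < p_even)
    return evens / spins

print(even_fraction(20))         # one evening at the table: could be anything
print(even_fraction(1_000_000))  # the long run: close to 0.51
```

In the analogy, a single spin is a day's weather and the million-spin average is climate: the small forced bias is invisible in any one outcome but unmistakable in the statistics.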

  14. I sent you, Cliff, a paper with ample citations, many of which show that for almost all details of the climate, climate models are not skillful. If we know the TOA radiation balance, and tune the models to get that right, and ocean heat uptake is roughly right, then average temperature will look pretty good.

    But we don't know TOA balance into the future so that means the models might be wrong on future temperatures too.

    1. David, I don't think you are correct in this. Your unpublished paper makes some good points, but it is fundamentally wrong in suggesting climate models have no value. They are useful tools for getting a view on the impacts of increasing greenhouse gases. Don't get me wrong, current climate models have major flaws. But they give insights into the impacts of increasing greenhouse gases...cliff

    2. Dr. Mass, let's address the question of what value today's mainstream climate models have as scientific products used in support of public policy decision making.

      The short term variations seen in the climate model outputs and in the GMT observational data sets have no meaning in themselves for use in assessing the accuracy and validity of these climate models.

      For purposes of discussion, let us mentally ignore the short-term up & down squiggles seen in both the climate model outputs and in the real-world GMT observational data sets. Let us instead mentally linearize these data sets into prediction envelope trend periods of 30 years or longer.

      That some of the climate models do produce longer term linearized trends which generally correspond with the linearized trends of real-world GMT observations is the basis for claiming that all of the models accurately simulate the actual physical processes driving atmospheric warming in the presence of various lower to higher future concentrations of carbon GHGs.

      In the context of today's public policy debate, the mainstream climate models should be identified for what they actually are. These models are primarily narrative influence tools used for highlighting the alleged dangers which carbon emissions might pose to human health and the environment.

      As long as the 30-year running average of GMT continues its upward trend -- however small that upward trend might be over the next 80 years -- the mainstream climate science community will continue to claim that all of their models have been fully verified by real world observations. Including those models which predict a +4C, a +5C, or even a +6C increase in GMT by 2100 under ever-higher CO2 concentrations.

      Here is my advice to those skeptics who believe that the climate models are running too hot. Exposing the numerous technical and scientific issues with the climate models will not be one of the more influential factors in shaping the public policy debate over climate change.

      What will become most influential over the next decade in shaping the public policy debate will be the growing cost of all forms of energy for the average Joe and Jane on Main Street, and the growing lack of adequate supplies of energy as the decade of the 2020's progresses.

    3. I did not say climate models have no value. I do believe, though, that we need to redirect most of the money toward theoretical work on nonlinear systems and attractors and toward generating more high-quality data. I also believe that many legacy climate models use very outdated numerical methods. But on the grids they currently use, that may not matter too much, as the results are dominated by numerical error.

      Now on the point of skill. You didn't really respond to my list of things that are not skillfully predicted. The paper is long with lots of references, so I don't blame you for shying away from a long slog into math and fluid dynamics theory and practice.

      1. ECS in coupled atmosphere/ocean models is too high compared to energy-balance methods using the last 150 years of data. There is a whole raft of papers, which you can easily look up, attributing this issue to the lack of skill in predicting changes in SSTs and their pattern of change. This has happened in the last decade or so. If the models are too high on ECS, the single most important number needed to guide our thinking on climate change, that's a big failure. CMIP6 is a lot worse than previous model intercomparisons, too.
      2. The current post at Climate Etc. is also very good on this issue. CMIP6 has significantly increased model ECS by "improving" cloud models. This is not credible. It now looks like with a 4.5 W/m2 increase in forcing we may warm only a little more than 1.5 C.
      3. Regional rainfall patterns, regional climate, convective aggregation, vertical distribution of temperature in the tropics, cloud fraction as a function of latitude, and the average temperature of the current climate are all things that most models miss, many of them badly. The tropics is an area where the models are almost certainly badly wrong, because the climate there is dominated by massive energy transfers due to storms, precipitation, etc. Modelers themselves admit they don't have the data to tune these models. See the famous Zhou et al. paper on ECS sensitivity to poorly constrained constants in the cloud microphysics models (reference provided in my paper).
      4. The resolution issue is huge. On the grids used the numerical truncation errors are quite large, perhaps equivalent to trying to model a boundary layer on a wing with 3 points. The answer would be totally wrong, perhaps off by a factor of 2 in drag.


      What is disappointing to me is that I think modelers have known this for a long time but wanted to downplay it because they feared some of the wrongness might rub off on the whole field in the eyes of the public. But the whole field is shot through with models of questionable validity. Just to name one, the moist adiabat theory used with tropical convection is quite wrong. These storms are quite far from adiabatic.


