Guest Essay by Kip Hansen — 12 November 2024 — 3000 words — Very Long Essay
There has been a great deal of “model bashing” here and elsewhere in the blogs whenever climate model predictions are mentioned. This essay is a very long effort to cool off the more knee-jerk segment of that recurring phenomenon.
We all use models to make decisions; most often just tossed-together mental models along the lines of: "I don't see any cars on the road, I don't hear any cars on the road, and I have looked both ways twice; therefore my mental model tells me that I will be safe crossing the road now." Your little 'safe to cross the road?' model is perfectly useful and (barring evidence unknown or otherwise not taken into account) can be depended upon for personal road-crossing safety.
It is not useful or correct in any way to say “all models are junk”.
Here, at this website, the models we talk about are "numerical climate models" [or a broader search of references here], which are commonly run on supercomputers. Here's what NASA says:
“Climate modelers run the climate simulation computer code they’ve written on the NASA Center for Climate Simulation (NCCS) supercomputers. When running their mathematical simulations, the climate modelers partition the atmosphere into 3D grids. Within each grid cell, the supercomputer calculates physical climate values such as wind vectors, temperature, and humidity. After conditions are initialized using real observations, the model is moved forward one “time step”. Using equations, the climate values are recalculated creating a projected climate simulation.”
“A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth’s atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.”
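To make the "grid cells plus time steps" idea concrete, here is a deliberately trivial sketch in Python. It is my own toy illustration, not NASA's code and nothing remotely like a real GCM: one made-up temperature field on a tiny grid, nudged forward one time step at a time by a simple mixing rule. Real models carry dozens of coupled 3-D fields and vastly more physics in every cell.

```python
import numpy as np

# Toy "climate model": one field (temperature, degrees C) on a tiny 2-D grid.
# A real GCM carries dozens of coupled 3-D fields and far more physics;
# this only illustrates the grid-cell / time-step structure NASA describes.

nx, ny = 10, 10          # grid cells (real models: hundreds per dimension, in 3-D)
dt = 0.1                 # time step (arbitrary toy units)
diffusivity = 0.2        # made-up mixing coefficient

# "Conditions are initialized using real observations" -- here, just a warm blob.
temperature = np.full((nx, ny), 15.0)
temperature[4:6, 4:6] = 25.0

def step(field):
    """Advance one time step: each cell is nudged toward its neighbours' mean."""
    neighbours = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                  np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
    return field + dt * diffusivity * (neighbours - field)

for n in range(1000):    # "the model is moved forward one time step", repeatedly
    temperature = step(temperature)

print(round(temperature.max(), 3), round(temperature.min(), 3))  # the blob has mixed away
```

That loop, repeated over enormous 3-D grids with far more complicated equations, is what "running a climate simulation" means in practice.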
I am open to other definitions for the basic GCM. There are, of course, hundreds of different “climate models” of various types and uses.
But let us just look at the general topic that produces the basis for claims that start with the phrase: “Climate models show that…”
Here are a few from a simple Google search on that phrase:
Climate Models Show That Sea Level Rise from Thermal Expansion Is Inevitable
Climate models show that global warming could increase from 1.8 to 4.4°C by 2100.
Climate models show that Cape Town is destined to face a drier future
Let’s try “climate science predicts that”
There are innumerable examples. But let’s ask: “What do they mean when they say ‘Climate science predicts…’?”
In general, they mean one of the following two things:
1) That some climate scientist, or the IPCC, or some group in some climate report, states [or is commonly believed to have stated, which is very often not exactly the case] that such a future event/condition will occur.
2) Some climate model [or some single run of a climate model, or some number of particular climate model outputs which have been averaged] has predicted/projected that such a future event/condition will occur.
Note that the first case is often itself based on the second.
Just generally dismissing climate model results is every bit as silly as just generally dismissing all of climate skepticism. A bit of intelligence and understanding is required to make sense of either. There are some climate skepticism points/claims made by some people with which I disagree and there are climate crisis claims with which I disagree.
But I know why I disagree.
Why I Don’t Accept Most Climate Model Predictions or Projections of Future Climate States
Years ago, on October 5, 2016, I wrote Lorenz validated, which was published on Judith Curry's blog, Climate Etc. It is an interesting read, and important enough to re-read if you are truly curious about why numerical climate modeling has problems so serious that it has come to be seen by many, myself included, as giving valid long-term projections only accidentally. I say 'accidentally' in the same sense that a stopped clock shows the correct time twice a day, or that a misadjusted clock, running at slightly the wrong speed, gives the correct time only occasionally and accidentally.
I do not say that a numerical climate model does not and cannot ever give a correct projection.
Jennifer Kay, of the University of Colorado Boulder, and Clara Deser, of NCAR/UCAR [the National Center for Atmospheric Research / University Corporation for Atmospheric Research], with 18 others, did experiments with climate models back in 2016 and produced a marvelous paper titled: "The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability".
The full paper is available for download here [.pdf].
Here is what they did (in a nutshell):
“To explore the possible impact of miniscule perturbations to the climate — and gain a fuller understanding of the range of climate variability that could occur — Deser and her colleague Jennifer Kay, an assistant professor at the University of Colorado Boulder and an NCAR visiting scientist, led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. With each simulation, the scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree, touching off a unique and chaotic chain of climate events.” [ source ]
What are Deser and Kay referring to here?
“It’s the proverbial butterfly effect,” said Clara Deser… “Could a butterfly flapping its wings in Mexico set off these little motions in the atmosphere that cascade into large-scale changes to atmospheric circulation?”
Note: The answer to the exact original question posed by Edward Lorenz is "No", for a lot of reasons that have to do with the scale and viscosity of the atmosphere, and it is a topic argued endlessly. But the principle of the matter, extreme sensitivity to initial conditions, is true and correct, and it is demonstrated in practical use, in a real climate model, in Deser and Kay's study. – kh
What happened when Deser and Kay ran the Community Earth System Model (CESM) forty times, repeating the exact same model run each time, using all the same inputs and parameters with the exception of one input: the Global Atmospheric Temperature? This input was modified for each run by:
less than one-trillionth of one degree
or
< 0.000000000001 °C
And that one change resulted in the projections for “Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012”, presented as images:
First, notice how different the 30 projections are from one another. Compare #11 to #12 right beside it: #11 has a cold northern Canada and Alaska whereas #12 has a hot northern Canada and Alaska. Then look down at #28.
Compare #28 to OBS (observations, the reality, actuality, what actually took place). Remember, these are not temperatures but temperature trends across 50 years. Not weather but climate.
Now look at EM, next to OBS in the bottom row. EM = Ensemble Mean – they have AVERAGED the output of 30 runs into a single result.
They set up the experiment to show whether or not numerical climate models are extremely sensitive to initial conditions. They changed a single input by an infinitesimal amount, far below real-world measurement precision (far below our ability to measure ambient air temperatures, for that matter). That amount? One-trillionth of a degree Celsius, 0.000000000001 °C. To be completely fair, the change they made was even less than that.
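To put that number in perspective, here is a quick back-of-the-envelope check in Python. It is my own illustration, and it assumes a nominal global-mean surface temperature near 288 K (a figure I have supplied, not one taken from the paper): a nudge of about one-trillionth of a degree is only a small multiple of the smallest increment a standard double-precision number of that size can even represent.

```python
import numpy as np

# How small is a ~1e-12 K nudge compared with what a double-precision
# number can even resolve?  (Assumes a nominal global-mean temperature
# near 288 K -- my own illustration, not a figure from the paper.)

T = 288.0                                   # nominal global-mean temperature, K
perturbation = 1e-12                        # "about one-trillionth of a degree"

ulp = np.nextafter(T, np.inf) - T           # smallest representable increment near T
print(f"spacing of doubles near {T} K : {ulp:.3e} K")
print(f"perturbation                  : {perturbation:.3e} K")
print(f"perturbation in 'ulps'        : {perturbation / ulp:.1f}")
# The nudge is only a small multiple of the finest steps the computer can
# represent at this magnitude -- essentially round-off-level noise.
```

In other words, the perturbation sits right down at the level of the computer's own round-off noise, which, as I read it, is precisely the point of the experiment.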
In the article the authors explain that they are fully aware of the extreme sensitivity to initial conditions in numerical climate modelling. In fact, in a sense, that is their very reason for doing the experiment. They know they will get chaotic results (chaotic in the Chaos Theory sense), and they do get chaotic results. None of the 30 runs matches reality. The 30 results all differ from one another in substantial ways. The Ensemble Mean is quite different from the Observations, agreeing only that winters will be somewhat generally warmer, and that only because the models are explicitly told it will be warmer if CO2 concentrations rise (which they did).
But what they call those chaotic results is internal climate variability.
That is a major error. Their pretty little pictures represent the numerically chaotic results of nonlinear dynamical systems represented by mathematical formulas (most of which are themselves highly sensitive to initial conditions), each result fed back into the formulas at each succeeding time step of their climate model.
Edward Lorenz showed in his seminal paper, "Deterministic Nonperiodic Flow", that numerical weather models would produce results extremely sensitive to initial conditions, and that the further into the future one runs them (the more time steps calculated), the wider and wider the spread of chaotic results becomes.
What exactly did Lorenz say? “Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible….In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.”
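Lorenz's point is easy to reproduce for yourself. Below is a minimal sketch (my own, in Python, using the standard Lorenz-63 equations and their classic parameters, not anything taken from the CESM paper) that integrates two trajectories whose starting points differ by one part in a trillion and prints how far apart they drift as the time steps pile up.

```python
import numpy as np

# Lorenz-63 system with the classic parameters (sigma=10, rho=28, beta=8/3).
# Two runs start a mere 1e-12 apart in x; watch the separation grow.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-12, 0.0, 0.0])   # "differing by imperceptible amounts"

for step in range(1, 5001):           # 5000 time steps = 50 model-time units
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

Within a few dozen model-time units the two runs are effectively unrelated; the one-part-in-a-trillion difference has grown to the full size of the attractor. That, in miniature, is what the 40 CESM runs did.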
These numerical climate models cannot help but fail to predict or project accurate long-term climate states. This situation cannot be obviated. It cannot be 'worked around'. It cannot be solved by finer and finer gridding.
Nothing can correct for the fact that sensitivity to initial conditions — the primary feature of Chaos Theory’s effect on climate models — causes models to lose the ability to predict long-term future climate states.
Deser and Kay clearly demonstrate this in their 2016 and subsequent papers.
What does that mean in the practice of climate science?
That means exactly what Lorenz found all those years ago — quoting the IPCC TAR: “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Deser and Kay label the chaotic results found in their paper as “internal climate variability”. This is entirely, totally, absolutely, magnificently wrong.
The chaotic results, which they acknowledge are chaotic results due to sensitivity to initial conditions, are nothing more or less than: chaotic results due to sensitivity to initial conditions. This variability is numerical – the numbers vary and they vary because they are numbers [specifically not weather and not climate].
The numbers that vary in climate models vary chaotically because they come out of calculations of nonlinear partial differential equations, such as the Navier–Stokes equations, a system of partial differential equations that describes the motion of a fluid in space, such as the atmosphere or the oceans. Navier–Stokes plays a major role in numerical climate models. "The open problem of existence (and smoothness) of solutions to the Navier–Stokes equations is one of the seven Millennium Prize problems in mathematics"; a solution to the posed problem will get you $1,000,000. For that reason, a linearized version of Navier–Stokes is used in models.
How does this play out then, in today’s climate models – what method is used to try to get around these roadblocks to long-term prediction?
“Apparently, a dynamical system with no explicit randomness or uncertainty to begin with, would after a few time steps produce unpredictable motion with only the slightest changes in the initial values. Seen how even the Lorenz equations (as they have become known over time) present chaotic traits, one can just imagine to what (short, presumably) extent the Navier-Stokes equations on a grid with a million points would be predictable. As previously mentioned, this is the reason why atmospheric models of today use a number of simplifying assumptions, linearizations and statistical methods in order to obtain more well-behaved systems.” [ source – or download .pdf ]
In other words, the mantra that climate models are correct, dependable and produce accurate long-term predictions because they are based on proven physics is false: the physics is treated to subjective assumptions that 'simplify' the physics, to linearizations of the known mathematical formulas (which make the unsolvable solvable), and then to statistical methods in order to "obtain more well-behaved systems".
Natural variability can only be seen in the past. It is the variability seen in nature – the real world – in what really happened.
The weather and climate will vary in the future. And when we look back at it, we will see the variability.
But what happens in numerical climate models is the opposite of natural variability. It is numerical chaos. This numerical chaos is not natural climate variability – it is not internal climate variability.
But, how can we separate out the numerical chaos seen in climate models from the chaos clearly obvious in the coupled non-linear chaotic system that is Earth’s climate?
[and here I have to fall back on my personal opinion – an informed opinion but only an opinion when all is said and done]
We cannot.
I can show (and have shown) images and graphs of the chaotic output of various formulas that demonstrate numerical chaos. You can glance through my Chaos Series here, scrolling down and looking at the images.
It is clear that the same type of chaotic features appear in real world physical systems of all types. Population dynamics, air flow, disease spread, heart rhythms, brain wave functions….almost all real world dynamical systems are non-linear and display aspects of chaos. And, of course, Earth’s climate is chaotic in the same Chaos Theory sense.
But, doesn’t that mean that the numerical chaos in climate models IS internal or natural variability? No, it does not.
A perfectly calculated trajectory of a cannonball's path, based on the best Newtonian physics, will not bring down a castle's wall. It is only an idea, a description. The energy calculated from the formulas is not real. The cannonball described is not a thing. And, to use a well-worn adage: the map is not the territory.
In the same way, the numerical chaos churned out by climate models is similar in appearance to the type of chaos seen in the real world’s climate but it is not that chaos and not the future climate. Lorenz’s ‘discovery’ of numerical chaos is what led to the discoveries that Chaos Theory applies to real world dynamical systems.
Let’s take an example from this week’s news:
Hurricane Rafael’s Path Has Shifted Wildly, According to Tracker Models
Shown are the projected paths produced by our leading hurricane models as of 1200 UTC on 6 November 2024. The messy black smudge just above western Cuba is the 24-hour point, where the models begin to diverge wildly.
Why do they diverge? All of the above; everything in this essay applies. These hurricane path projections are a down-and-dirty sample of what chaos does to weather prediction, and thus to climate prediction. At just 24 hours into the future the projections begin to diverge. By 72 hours, the hurricane could be anywhere from just northwest of the Yucatan to already hitting the coast of Florida.
If you had a home in Galveston, Texas, what use would these projections be to you? If NCAR had "averaged" the paths to produce an "ensemble mean", would it be more useful?
Going back up to the first image of 30 projected winter temperature trends (a very vague metric): is the EM (ensemble mean) of those particular model runs, created using one of the methods suggested by the Copernicus Climate Change Service, more accurate than any of the other futures? Or is it just accidentally 'sorta like' the observations?
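As a crude illustration of what averaging chaotic runs does, here is one more toy sketch (my own, in Python, using the simple logistic map rather than anything from the CESM): thirty runs started one part in a trillion away from a "truth" run, iterated forward, then averaged. The members scatter across the system's whole range of behaviour, and the mean sits somewhere in the middle, telling you essentially nothing about where the "truth" run ended up.

```python
import numpy as np

# Toy ensemble: 30 runs of a chaotic system (the logistic map, r = 3.9),
# each started a mere 1e-12 away from a "truth" run.  My own toy, not the
# CESM; it only illustrates what averaging chaotic runs does and does not tell you.

r = 3.9
n_steps = 80

x_true = 0.5                                   # the "observed" run's starting value
x_ens = 0.5 + 1e-12 * np.arange(1, 31)         # 30 infinitesimally perturbed starts

for _ in range(n_steps):                       # feed each result back in, step by step
    x_true = r * x_true * (1.0 - x_true)
    x_ens = r * x_ens * (1.0 - x_ens)

print(f"truth at final step   : {x_true:.4f}")
print(f"ensemble mean (EM)    : {x_ens.mean():.4f}")
print(f"member spread         : {x_ens.min():.4f} .. {x_ens.max():.4f}")
# After ~80 steps the perturbed runs have decorrelated from "truth": the
# members have spread out across the attractor, and the mean is just the
# centre of that spread, not a forecast of where the "truth" run actually went.
```

Scaled up by many orders of magnitude in complexity, that is the question hanging over the EM panel above.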
# # # # #
Author’s Comment:
This is not an easy topic. It produces controversy. Climate scientists know about Lorenz, chaos, sensitivity to initial conditions, non-linear dynamical systems and what that means for climate models. The IPCC used to know but ignores the facts now.
Some commenter here will cry that "It is not an initial conditions problem but a boundary conditions problem" – as if that makes everything OK. You can read about that in a very deep way here. I may write about that attempt to dodge reality in a future essay.
I will end with a reference to the eclectic R G Brown’s comments which I sponsored here, in which he says:
“What nobody is acknowledging is that current climate models, for all of their computational complexity and enormous size and expense, are still no more than toys, countless orders of magnitude away from the integration scale where we might have some reasonable hope of success. They are being used with gay abandon to generate countless climate trajectories, none of which particularly resemble the climate, and then they are averaged in ways that are an absolute statistical obscenity as if the linearized average of a Feigenbaum tree of chaotic behavior is somehow a good predictor of the behavior of a chaotic system! … This isn’t just dumb, it is beyond dumb. It is literally betraying the roots of the entire discipline for manna.”
And so say I.
Thanks for reading.
# # # # #
