Inside The Bayesian Priory

Here are a few random quotes and thoughts about the paper called “An observation-based scaling model for climate sensitivity estimates and global projections to 2100”. This was the first statement that caught my eye:

We estimate the model and forcing parameters by Bayesian inference which allows us to analytically calculate the transient climate response and the equilibrium climate sensitivity as: 1.7 (+0.3/−0.2) K and 2.4 (+1.3/−0.6) K respectively (likely range).

I always get nervous when someone says that they are using “Bayesian inference”. The problem is not with the Bayesian theory itself, which is correct. It basically says that your estimate of the probability of something depends in part on what you believed before you saw the data, beliefs which are encoded as the “Bayesian priors”. And clearly, in many situations this is a sensible way to reason.

The problem is in the choice of the priors. This depends on human judgement, plus some pre-existing theory as to what is going on … I’m sure you can see the problems with that. First, human judgement is … well … let me call it “not universally correct” and leave it at that. And next … just what pre-existing theory of climate are we supposed to use when we are investigating the theory of climate?
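To make that concern concrete, here is a minimal sketch, with every number invented purely for illustration, of how the choice of prior moves a Bayesian estimate even when the data are identical. It uses the textbook normal-normal conjugate update, nothing from the paper itself:

```python
# A toy illustration (all numbers invented) of how the prior shapes a Bayesian
# estimate of a "sensitivity" when the observations are exactly the same.
# Standard conjugate update: normal prior, normal likelihood with known variance.

def normal_posterior(prior_mean, prior_var, data_mean, data_var, n):
    """Posterior mean and variance for n observations averaging data_mean."""
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

# The same hypothetical data in both cases: 10 estimates averaging 1.8 K
data_mean, data_var, n = 1.8, 0.5, 10

for label, prior_mean, prior_var in [
    ("weak prior centred on 3 K  ", 3.0, 4.00),
    ("strong prior centred on 3 K", 3.0, 0.25),
]:
    mean, var = normal_posterior(prior_mean, prior_var, data_mean, data_var, n)
    print(f"{label}: posterior mean = {mean:.2f} K, sd = {var ** 0.5:.2f} K")
```

Same data, different priors, different answer … and which prior is the “right” one is exactly the human-judgement problem described above.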

Then came this statement.

The ecological consequences of global warming could be dire; therefore, better constraining climate sensitivity is of utmost importance in order to meet the urgency of adjusting economical and environmental policies.

This one makes the sweat break out on my forehead, because it is one of the fundamental building blocks of a whole host of theories … but it is rarely supported by even the flimsiest of evidence. It is just stated as undeniable truth, as in this paper. They do not make even the slightest attempt to justify it. 

Every realistic description that I’ve read about the gradual warming over the last three centuries or so contains some verifiable facts:

  • Things are warmer now than during the Little Ice Age.
  • In general that increased warmth has been a benefit for man, beasts, and plants alike.
  • There have been no “climate catastrophes” from that warming.

Couple that with the following data …

… and it becomes very difficult to believe that “the ecological consequences of global warming could be dire”.

Confronted with these facts, the fallback position of the alarmists is usually “But mah sea level! Mah sea level is gonna drown everyone” … however, as I’ve shown over and over, even the longest sea level records don’t show any acceleration due to the warming.

So we are starting out from way behind, crippled by false assumptions. And these false assumptions, these “Bayesian priors”, are driven by the initial mistake made by the IPCC, what I’ve termed the “Picasso Problem”. Picasso said:

“What good are computers? They can only give you answers …”

At first I neither understood this nor believed it. I mean, I’m a computer guy; why is a painter questioning my computer use? … but eventually I saw that looking for the right answers is not what we should be doing. What we should be focusing on instead is looking for the right questions. As the old saying goes, “Ask a stupid question, get a stupid answer.”

And tragically, the IPCC was asked the wrong question at its inception. When the IPCC was first set up, it was tasked with answering the question:

“What level of CO2 is dangerous to humanity?”

In fact, they should have been tasked to answer the question:

“Is increasing CO2 a danger to humanity?”

And obviously, the current paper suffers from the same problem—it is answering the wrong question … and not only that, it is answering the question with computers …

For the next issue, let me preface it with a few definitions from the paper:

Emergent properties of the Earth’s climate, i.e. properties which are not specified a priori, are then inferred from GCM simulations. The equilibrium climate sensitivity (ECS) is such a property; it refers to the expected temperature change after an infinitely long time following a doubling in carbon dioxide (CO2) atmospheric concentration. Another is the transient climate response (TCR), which is defined as the change in temperature after a gradual doubling of CO2 atmospheric concentration over 70 years at a rate of 1% per year.
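(As a quick sanity check of that definition, and not something from the paper: at 1% compound growth per year, the concentration doubles in just under 70 years.)

```python
import math

# Why the TCR definition lands on ~70 years: CO2 growing at 1% per year,
# compounded, doubles in ln(2)/ln(1.01) years.
print(math.log(2) / math.log(1.01))  # ~69.7 years to double
print(1.01 ** 70)                    # ~2.007, i.e. just past a doubling
```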

With those definitions, they go on:

Since the United States National Academy of Sciences report (Charney et al. 1979), the likely range for the ECS has not changed and remains [1.5, 4.5] K.

The problem with this is that they say it as though it were a good thing … when in fact it is a clear indication that they are operating on false assumptions. Indeed, as they are clearly unwilling to admit, the likely range has increased, not stayed the same. You can see this issue below:

And this increase in the range of the equilibrium climate sensitivity should be a huge danger signal. I mean, in what other field of science has there not only been no advance on a central question in forty years, but the uncertainty has actually increased? I know of no scientific field other than climate science in which this is true.

They continue:

Future anthropogenic forcing is prescribed in four scenarios, the Representative Concentration Pathways (RCPs), established by the IPCC for CMIP5 simulations: RCP 2.6, RCP 4.5, RCP 6.0 and RCP 8.5 (Meinshausen et al. 2011). They are named according to the total radiative forcing in W/m² expected in the year 2100 and are motivated by complex economic projections, expected technological developments, and political decisions.

There are a couple of problems with this. First, these assumptions of future forcings form another part of the Bayesian priors discussed above. And given that these priors are based on “complex economic projections, expected technological developments, and political decisions”, they are infinitely adjustable to match the desires and theories of the investigators.

A more fundamental problem, however, is the relationship between the forcings and the response of the models. As Kiehl pointed out over a decade ago in “Twentieth Century Climate Model Response and Climate Sensitivity”:

It is found that the total anthropogenic forcing for a wide range of climate models differs by a factor of two and that the total forcing is inversely correlated to climate sensitivity. Much of the uncertainty in total anthropogenic forcing derives from a threefold range of uncertainty in the aerosol forcing used in the simulations.
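In plain terms (my shorthand, not Kiehl’s notation): if every model is tuned to reproduce roughly the same observed twentieth-century warming $\Delta T_{obs}$ from its own assumed total anthropogenic forcing $F$, then

$$\Delta T_{obs} \approx \lambda_{eff}\,F \quad\Longrightarrow\quad \lambda_{eff} \approx \frac{\Delta T_{obs}}{F},$$

so a model that assumes twice the forcing needs only about half the effective sensitivity to match the same temperature record.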

Kiehl’s is a crucial paper, and one which has been conspicuously ignored by scientists in the field. It says that if you assume larger forcings, the model shows a smaller climate sensitivity, and vice versa. Not only that, but as I showed in Life Is Like A Black Box Of Chocolates, the relationship between the forcings and the model output is both linear and ridiculously simple, viz:

The outputs of the climate models are very well emulated by a simple lagged and scaled version of the inputs. 

Here is an example of my analysis showing how well the model output can be emulated by that absurdly simple formula:

This formula has only three tuned parameters—lambda (the scaling factor), tau (the lag factor), and a volcanic adjustment. (Curiously, the current paper agrees that there needs to be a volcanic adjustment, saying “the instantaneous temperature response [to the eruptions] is weaker than expected using linear response theory.”)
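For the curious, here is a minimal sketch of a lagged-and-scaled emulator of that general kind. It is my own illustrative reconstruction, not the exact code behind the figure; the discrete relaxation step and the way the volcanic adjustment is applied are assumptions made purely for illustration:

```python
import math

# A toy "lagged and scaled" one-box emulator: temperature relaxes toward
# lambda times the forcing with an e-folding lag of tau years, and volcanic
# forcing is simply damped as a crude stand-in for a volcanic adjustment.

def emulate(total_forcing, volcanic_forcing, lam=0.5, tau=3.0, volc_scale=0.5):
    """Emulate annual temperature anomalies (K) from annual forcings (W/m^2).

    lam        -- scaling factor (K per W/m^2)
    tau        -- lag factor (years)
    volc_scale -- fraction of the volcanic forcing retained (the "adjustment")
    """
    alpha = 1.0 - math.exp(-1.0 / tau)               # yearly relaxation fraction
    temps, t = [], 0.0
    for f_tot, f_volc in zip(total_forcing, volcanic_forcing):
        f_eff = f_tot - (1.0 - volc_scale) * f_volc  # damp the volcanic spikes
        t += alpha * (lam * f_eff - t)               # relax toward lam * f_eff
        temps.append(t)
    return temps
```

Three tuned numbers (lambda, tau, and the volcanic scaling) and nothing more.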

This shows that, for all of their complexity, the output of the climate models can be almost perfectly emulated by a simple formula. Another implication is that the calculations of equilibrium climate sensitivity (ECS) and transient climate response (TCR) from the outputs of these models are completely meaningless … as supported by the fact that there has been no progress in refining the estimates of ECS and TCR over the last forty years.

Reading on, I finally lost both the plot and all further interest in the paper when they said (emphasis mine):

In order to make progress, Hasselmann et al. (1997) proposed a response function consisting of a sum of N exponentials – effectively an N-box model (although without using differential equations: the boxes were only implicit). Nevertheless, they ultimately chose N = 3 out of practical necessity—so as to fit GCM outputs.
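For reference, a response of that sum-of-exponentials kind can be written (my notation, not necessarily theirs) as

$$\Delta T(t) \;=\; \sum_{k=1}^{N} \int_{0}^{t} \frac{c_k}{\tau_k}\, e^{-(t-t')/\tau_k}\, F(t')\, dt',$$

where each term behaves like one implicit “box” with amplitude $c_k$ and time constant $\tau_k$, and Hasselmann et al. settled on $N = 3$.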

To translate their approach: they are defining reality on the basis of whether or not their description of reality matches the output of computer models … and at that point, I quit reading.

Seriously, I could go no further. Any study claiming that a description of reality is only worthwhile if it matches the output of absurdly simplistic climate models is not worth my time to investigate.

Best holiday regards to all on this day after Christmas,

w.


via Watts Up With That?

https://ift.tt/3hsk4sk

December 26, 2020 at 04:52PM
