Gleick: What’s Not to Like?

If there is something that the climate change debate is certainly not lacking, it is the ad hominem, for whilst it is universally disapproved of, it is also ubiquitous to the point of being de rigueur. Take, for example, Peter Gleick’s recent critique of Michael Shellenberger’s latest book. Peter does not waste any time in accusing Michael of stooping to ad hominem, before joining in with the ethical limbo dancing by delivering his own ad hominem in excelsis. Furthermore, in responding to Mike Dombroski’s excellent post on Peter’s critique, I also chose to throw in my own brand of personal attack when criticising Peter’s accusations. It was something along the lines of ‘How can anyone so honoured be such an idiot?’ Whilst appearing to be a perfectly reasonable question, it was not actually a reasonable accusation. The point is, Peter, I’m so very, very sorry to have called you an idiot. It was wrong of me, and I am writing this post not only as a penance, but also to explain how such a clever person as yourself could have said such an apparently stupid thing.

The Offending Remark

The statement that had me slapping my forehead in disbelief reads as follows:

“Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.”

So, according to Gleick, there is scientific uncertainty, and then there is the colloquial concept of uncertainty that only cornucopians and dumb, anti-science deniers use. Scientific uncertainty is all about understanding nature and the range of possibilities it encompasses. As such, uncertainty is the product of knowledge, and the greater the known uncertainty, the greater the imperative to act. Colloquial uncertainty, on the other hand, is all about stressing ignorance and the importance of not acting until one knows what one is dealing with. Fortunately, we can ignore the colloquial argument because it isn’t scientific, and only an ignorant anti-scientist could be impressed by it.

My initial reaction to all of this was to allow my gob to be well and truly smacked, before reaching for the special keyboard I use to construct my finest invective. However, my subsequent and more considered response is to try unpicking Peter’s statement, so that one might get a better idea of where these ideas are coming from.

Foolishness in Good Company?

Firstly, it has to be conceded that if one searches on the internet for ‘uncertainty in science’, one is presented with the following headline summary:

“But uncertainty in science does not imply doubt as it does in everyday use. Scientific uncertainty is a quantitative measurement of variability in the data. In other words, uncertainty in science refers to the idea that all data have a range of expected values as opposed to a precise point value.”

This is essentially what Gleick appears to be claiming in his Shellenberger critique – almost to the extent that it is tempting to speculate that Gleick got his views by searching for ‘uncertainty in science’ on the internet. The quote actually comes from a website called ‘Visionlearning: Your insight into science’, and is to be found within an article written by a couple of PhDs. So, the first thing to conclude here is that Peter Gleick is certainly not on his own in believing in the concept of ‘scientific uncertainty’ – an uncertainty that, presumably, has to be distinguished from the unscientific variety. Having made this discovery, it behoved me to ask: ‘Where does this idea of a high-standing, scientific uncertainty come from?’ A quick read of the Visionlearning article provided the answer.

Scientific uncertainty, according to the two PhDs, is all about variability in nature and the consequent problems of accuracy and precision in the data that scientists collect and analyse in order to understand the natural world. In summary, theirs is an article on measurement theory and they are alluding to aleatory uncertainty, i.e. uncertainty that reflects natural variability. Such uncertainty is distinct from epistemic uncertainty, which reflects a level of ignorance. Aleatory uncertainty is objectively calculable, and hence supposedly scientific. Epistemic uncertainty is subjective, and so it appears in the eyes of at least some to be an uncertainty unworthy of the epithet ‘scientific’. Heaven forfend that uncertainty in the scientific mind should be interpreted as ‘we don’t know’. That interpretation, surely, would be a classic Shellenberger gaffe!
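
To make the distinction concrete, here is a minimal Python sketch; it is my own illustration, with invented numbers, and is not anything taken from the Visionlearning article. The first simulation has only aleatory uncertainty: the coin’s bias is known exactly and the spread of outcomes is objectively calculable. The second adds epistemic uncertainty by admitting that the bias itself is not known:

```python
import numpy as np

rng = np.random.default_rng(42)

# Aleatory uncertainty: outcomes vary, but the coin's bias p is known
# exactly, so the spread of outcomes is objectively calculable (the
# measurement-theory picture).
p_known = 0.5
heads = rng.binomial(n=100, p=p_known, size=10_000)
print(f"aleatory spread (std of heads per 100 flips): {heads.std():.2f}")

# Epistemic uncertainty: we do not know p itself. One (subjective)
# device is to spread p over an assumed range; the extra variability
# in the outcomes now reflects our ignorance, not nature's.
p_unknown = rng.uniform(0.3, 0.7, size=10_000)  # assumed prior over p
heads2 = rng.binomial(n=100, p=p_unknown)
print(f"aleatory + epistemic spread: {heads2.std():.2f}")
```

The second spread is noticeably wider than the first, even though nature’s coin-flipping behaviour is unchanged; the difference is entirely a reflection of what we do not know.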

Putting the Record Straight

If it is Gleick’s view that the aleatory uncertainty underpinning measurement theory is the one true scientific uncertainty, then it is difficult to know where to even begin criticising him. However, let me try by first pointing out that it is rarely the case that uncertainty neatly, and obligingly, falls fully into one or other of the two categories, aleatory or epistemic. In practice, much uncertainty is a hybrid of the two, for which probabilities cannot be reliably assigned – the situation known as Knightian uncertainty. In fact, if one is lucky enough to be dealing with pure aleatory uncertainty, in which probabilities can be objectively and reliably calculated, this means one is able to reliably calculate the risk – so much so that people would no longer talk about making a decision under uncertainty; the preferred expression becomes ‘decision-making under risk’.

The essential point to note is that, in the real world of science, data is often missing and expert opinions are rife. These are circumstances in which the probability distributions associated with aleatory uncertainty cannot hope to fully capture the ambiguities, because those distributions are not themselves reliably known. If the only true scientific uncertainty is aleatory, as Gleick appears to be suggesting, then I’m afraid it occupies a regrettably restricted domain – it is that relatively rare situation in which one can be certain just how uncertain one is, and it is the one in which notions of risk become sufficient motivators.
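
To illustrate how the decision problem changes once the probabilities themselves are not reliably known, here is a toy Python sketch; all payoffs, probabilities and decision rules are invented purely for the purpose:

```python
# Toy contrast between decision-making under risk and under Knightian
# uncertainty. All payoffs and probabilities are invented.
payoffs = {"act": (10.0, -5.0), "wait": (2.0, 2.0)}  # (good state, bad state)

# Under risk: the probability of the good state is reliably known,
# so maximising expected value is straightforward.
p_good = 0.7
ev = {a: p_good * v[0] + (1 - p_good) * v[1] for a, v in payoffs.items()}
print("under risk:", max(ev, key=ev.get))  # 'act' (EV 5.5 beats 2.0)

# Under Knightian uncertainty: p_good is only known to lie in a range,
# so a cautious rule such as maximin over the set of priors applies.
p_range = (0.2, 0.7)
worst = {a: min(p * v[0] + (1 - p) * v[1] for p in p_range)
         for a, v in payoffs.items()}
print("under uncertainty:", max(worst, key=worst.get))  # 'wait'
```

The same payoffs yield different recommendations, which is precisely why it matters whether one’s probabilities are objectively calculable or merely assumed.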

The IPCC’s Treatment of Uncertainty

Even if Gleick were to think that his idea of scientific uncertainty is so prevalent in the real world of science that he can accuse others of colloquialism, or of classic errors, whenever they allude to epistemic uncertainty, he would certainly have no excuse for failing to note that no one at the IPCC agrees with him. In fact, it cannot have escaped anyone’s attention (including Gleick’s) that the IPCC captures the uncertainty associated with its statements by using expressions of likelihood caveated by expressions of confidence. For example:

“Past emissions alone are unlikely to raise global-mean temperature to 1.5°C above pre-industrial levels but past emissions do commit to other changes, such as further sea level rise (high confidence).”

This is, in effect, the probability of a probability (e.g. there is less than a 33% probability of this happening and the probability that we are right about this is believed to be 80%). In so doing, the IPCC is playing the sleight of hand trick of using two concepts of probability simultaneously, i.e. they are expressing the Baconian probability of a Pascalian probability [1]. More to the point, both aleatory and epistemic concepts of uncertainty are being invoked, since the Pascalian probability relates to variability in the real world and the Baconian probability relates to subjective levels of confidence associated with evidential weight in support of a hypothesis [2].
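
As a rough numerical illustration of what stacking the two probabilities does, consider the following sketch; ‘unlikely’ is the IPCC’s calibrated below-33% likelihood, but the mapping of ‘high confidence’ to a specific number is my own assumption, made purely for illustration:

```python
# A crude numerical reading of "unlikely ... (high confidence)".
# 'Unlikely' is the IPCC's calibrated <33% likelihood; mapping 'high
# confidence' to an 80% chance that the likelihood statement is right
# is my own assumption, purely for illustration.
p_event_if_right = 0.33   # Pascalian: bound on the event's probability
p_statement_right = 0.80  # Baconian: confidence in the statement itself

# If the statement were wrong we would know little; suppose
# (arbitrarily) the event's probability could then be as high as 1.0.
p_event_bound = (p_statement_right * p_event_if_right
                 + (1 - p_statement_right) * 1.0)
print(f"effective upper bound on P(event): {p_event_bound:.2f}")  # 0.46
```

Under these (admittedly crude) assumptions, the headline ‘less than 33%’ dilutes to something nearer 46% once the confidence caveat is folded in, which is the sense in which the two layers of probability cannot simply be read off the likelihood statement alone.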

There is nothing really wrong with this as long as everyone is aware of what is going on [3]. It simply demonstrates that (at least in the context of climate science) a properly scientific statement of uncertainty cannot be restricted to the question of the ‘range of possibilities’ that nature seems to be allowing, but must also involve considerations of evidential weight and residual ignorance. Confidence levels stemming from such considerations are far from colloquialisms; they are core to any concept of scientific uncertainty.

Finally, the readiness with which Pascalian probabilities can be objectively quantified (they are, after all, supposed to be capturing nature in the raw) is no reason to put them on a scientific pedestal. Evidential weighting can also be quantified in an objective fashion, though perhaps not so successfully if one restricts oneself to probability. Fortunately, however, there are such things as evidence theories, though you would not think so listening to the IPCC [4].

So What is There to Like?

Gleick says that scientists use uncertainty to refer to ‘a range of possibilities’ and contrasts this with the supposedly unscientific notion of ‘We don’t know’. He can only, therefore, be referring to the range of possibilities afforded by the inherent variability of nature (including tipping points and other fat-tailed distribution hobgoblins). As such, he is alluding to aleatory uncertainty, as encountered in measurement theory, and declaring it to be the true scientific conception of uncertainty. I hope I have done enough to persuade the reader that this is, at best, a simplistic view and, at worst, a naïve and ill-informed one. But this does not make Gleick an idiot. On the contrary, he is actually being quite cunning in claiming that the really scientific thing to do is to consider all the possibilities, and the really unscientific thing to do is to point to the huge epistemic uncertainties that such leaps of imagination often entail. As such, he chooses to misrepresent the concept of scientific uncertainty, not perhaps because he fails to understand it, but because it suits his advocacy of the precautionary approach. I don’t like it, but I begrudgingly admire the craftiness.

Notes:

[1] Pascalian probability is based on likelihood relative to a criterion of truth and Baconian probability is based on evidential support relative to a criterion of justified belief.

[2] The Visionlearning article also talks of confidence but theirs is entirely in keeping with their aleatory conception of uncertainty:

“Confidence statements do not, as some people believe, provide a measure of how “correct” a measurement is. Instead, a confidence statement describes the probability that a measurement range will overlap the mean value of a measurement when a study is repeated.”
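
That frequentist reading of confidence is easily demonstrated by simulation. The following Python sketch is mine, with arbitrary parameters; it repeats a ‘study’ many times and counts how often the computed interval overlaps the true mean:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 10.0, 2.0, 25, 10_000

# Repeat the 'study' many times; a 95% confidence interval should
# overlap the true mean in roughly 95% of repetitions. The statement
# concerns the procedure's long-run behaviour, not any one interval.
hits = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    hits += abs(sample.mean() - true_mean) <= half_width
print(f"coverage: {hits / trials:.3f}")  # close to 0.95
```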

[3] In fact, if I have a problem with the IPCC approach it is that they use levels of consensus as a dimension relevant to the calculation of confidence levels, and this is stretching the credibility of the knowledge hypothesis to breaking point. Worse still, they compound the error by combining consensus levels with an evidential weighting that includes the strength and quality of expert opinion. As a result, the impact of consensus levels is double-counted. But I digress. The full account of these misgivings can be found here.

[4] If you look, you can find the application of evidence theories such as Dempster-Shafer theory in climate and environmental science, but they are thin on the ground. I could say a whole lot more regarding the IPCC and their failure to properly acknowledge the non-complementarity of evidence, but perhaps another day.
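
For the curious, here is a minimal Python sketch of Dempster’s rule of combination; the frame of discernment and the two mass functions are invented purely for illustration. Note how the combined belief in H and in not-H need not sum to one – the non-complementarity just mentioned:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (frozenset -> mass) using Dempster's
    rule, renormalising away the mass assigned to conflicting evidence."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Frame of discernment: hypothesis H versus not-H. Mass on the whole
# frame expresses uncommitted belief (ignorance), not a 50/50 split.
H, NOT_H = frozenset({"H"}), frozenset({"~H"})
BOTH = H | NOT_H

# Two invented bodies of evidence, each leaving some mass uncommitted:
m1 = {H: 0.6, BOTH: 0.4}
m2 = {H: 0.5, NOT_H: 0.2, BOTH: 0.3}
print(dempster_combine(m1, m2))
# Belief in H (~0.77) and in not-H (~0.09) need not sum to one.
```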


