“It is impossible as I state it, and therefore I must in some respect have stated it wrong.”
If I have gained any reputation at all on this website, I would like to think that it is for being a person who has been rightfully critical of the IPCC’s treatment and understanding of risk and uncertainty. This subject concerns me because my professional background required that I develop a firm grasp of the conceptual framework for risk and uncertainty, and I happen to believe that this is equally important if one is to present a case for or against the actions proposed to address climate change. I have made the point on more than one occasion that the analysis and management of risk and uncertainty does not necessarily fall within the range of expertise that may be assumed for the average climate scientist, nor indeed for the vast majority who profess on their behalf. In that important respect, we should not be looking to the IPCC as an expert authority.
On the eve of the publication of the Sixth Assessment Report, Working Group 1 (AR6 WG1), I wrote a series of articles drawing attention to the fact that the IPCC had already outlined how risk perception can and should be manipulated in order to facilitate public acceptance of climate change policies (ref. AR5, WG3, Chapter 2). In particular, the exploitation of the availability heuristic was openly advocated for this purpose, resulting in a greater focus upon extreme weather event attribution studies. I suggested in my closing remarks that much more of this could be expected in AR6, and I was not to be disappointed. In particular, I was not at all surprised to see the re-framing of risk as being predominantly an issue of low likelihood, high impact events. Even so, I wasn’t quite prepared for the profound subject-matter ignorance that was often on show, and so I am left with the inescapable conclusion that a collection of risk management amateurs is in charge of framing one of the world’s most important risk management challenges. No one should be taking any encouragement from this.
The interval between the ears
The IPCC’s new approach to risk assessment is introduced very early in the document and is heralded using suitably grandiose phrases such as ‘unified framework of climate risk’ and ‘systematic risk framing’. I’ve no idea what they mean by this, but there can be no doubt regarding their intentions when they say that the AR6 framework is ‘supported by an increased focus in WGI on low-likelihood, high-impact events.’ This has been a battleground for some time now, with many arguing that the ‘true’ risk lies in that domain. That may be so, but it is equally true that the greater uncertainties also lie in that domain, and uncertainty has the nasty habit of distorting the perception of risk, normally in the direction of overstatement. If the IPCC is turning its attention to the low-likelihood, high-impact (LLHI) end of the risk curve, one would hope that it knows what it is talking about. Unfortunately, that hope is dashed very early on, once the reader has encountered the following statement in Chapter 1’s discussion of uncertainty:
“Further, even though it is objectively more probable that wide uncertainty intervals will encompass true values, wide intervals were interpreted by lay people as implying subjective uncertainty or lack of knowledge on the part of scientists (Løhre et al., 2019).”
Everything that is wrong, dangerous and stupid about the IPCC’s treatment of risk and uncertainty is encapsulated in that one statement. Firstly, you will note the author’s arrogant assumption that uncertainty is only properly understood by scientists – contrast the foolish lay person. And yet it is clear that the author hasn’t got the first clue how objectivity and subjectivity work in an uncertain world. It is Gleick’s folly, but on steroids. Whether or not a wide interval is an expression of subjective or objective uncertainty is entirely down to the relative contributions made by the aleatory and epistemic components. If a ‘lay person’, as they put it, interprets a wide interval as implying subjective uncertainty, it is more than likely that this is perfectly well justified. In fact, a probability distribution that is a representation of pure aleatory (objective) uncertainty is relatively rare in the real world. The suggestion that the lay populace fails to understand how uncertainty works because they can’t see the objective reality (a wide range of possibilities) that results from subjective ignorance is just arrant nonsense.
I am not sure why the statement has been made but I suspect it stems from the notion of uncertainty serving as ‘actionable knowledge’, as Stephan Lewandowsky would put it. When a scientist hasn’t got a clue what the ‘true’ value is, then, ‘objectively’, any value is still possible; and it seems to the IPCC that the possibility is all that matters. The author(s) would wish the reader to think that scientists are always objective, and that only a lay person would make the mistake of misinterpreting their subjective ignorance as a sign of non-objectivity.
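The point can be made concrete with a minimal Monte Carlo sketch (my own toy numbers, nothing from the report). The aleatory component is a process whose randomness is fully known; the epistemic component is the analyst’s ignorance of that process’s true mean, represented here, purely for illustration, as a second Gaussian. Mixing the two widens the interval, so a wide interval can indeed be a perfectly objective symptom of subjective ignorance:

```python
import random

random.seed(42)

def interval_width(samples, lo=0.025, hi=0.975):
    """Width of the central 95% interval of a sample."""
    s = sorted(samples)
    n = len(s)
    return s[int(hi * n)] - s[int(lo * n)]

N = 100_000

# Aleatory only: the process is genuinely random, but its
# parameters are known exactly (mean 0, sd 1).
aleatory = [random.gauss(0, 1) for _ in range(N)]

# Aleatory plus epistemic: the same random process, but the analyst
# is also unsure of the true mean (a subjective judgement, here
# crudely modelled as a Gaussian with sd 2).
mixed = [random.gauss(random.gauss(0, 2), 1) for _ in range(N)]

print(interval_width(aleatory))  # ~3.9: the objective spread alone
print(interval_width(mixed))     # much wider: ignorance inflates the interval
```

The lay person looking at the second, wider interval and muttering ‘they don’t really know’ would be exactly right.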
I was trained as a scientist and that training left me believing that I knew all there was to know about risk and uncertainty. It wasn’t until I entered the domains of engineering and quality management, and left the purity of theoretical physics behind me, that I came to realize that I had known diddly squat. There is so much more to it than probability distributions and Monte Carlo methods. I’m afraid that the IPCC’s statement is just the sort of asinine remark that I might have come out with back in the day. It’s disappointing to see that such ignorance and arrogance is still informing attitudes at the centre of the IPCC.
I’d like to say that the above quote is a one-off that poorly represents the remainder of the document but, unfortunately, there is also this to be found later in the same chapter:
“When uncertainty is large, researchers may choose to report a wide range as ‘very likely’, even though it is less informative about potential consequences. By contrast, high-likelihood statements about a narrower range may be more informative, yet also prove less reliable if new evidence later emerges that widens the range. Furthermore, the difference between narrower and wider uncertainty intervals has been shown to be confusing to lay readers, who often interpret wider intervals as less certain (Løhre et al., 2019).”
It is a basic tenet of sampling theory that, for a given sample size, either imprecise statements can be made with confidence or precise statements can be made with incertitude – there is only so much that can be gleaned without increasing the sample size. But that does not appear to be what the author is saying here. He seems to be contrasting high likelihood, imprecise statements with high likelihood, precise statements. You can only go from the former to the latter by obtaining more information. And yet the author chooses to portray the latter, better informed statement as less reliable and more liable to change in the light of additional information! He seems to be trying to argue that, for example, the wide range of values that are stated for ECS is a good thing because it is, at least, a reliable statement. And then, of course, there is the repeated accusation that lay people simply don’t understand how to interpret wide intervals. Shockingly, lay people think that they imply uncertainty.
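That tenet is easily put in numbers. A short sketch using the textbook normal confidence interval (the figures are purely illustrative): for a fixed sample size, demanding more confidence forces a wider interval, and the only way to be both confident and precise is to gather more information:

```python
from statistics import NormalDist
import math

def ci_halfwidth(confidence, sd, n):
    """Half-width of a normal confidence interval for a sample mean."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sd / math.sqrt(n)

sd, n = 1.0, 25
for conf in (0.80, 0.95, 0.99):
    print(f"{conf:.0%} CI half-width: ±{ci_halfwidth(conf, sd, n):.3f}")

# The only escape from the trade-off is more data:
print(f"95% CI, n quadrupled: ±{ci_halfwidth(0.95, sd, 4 * n):.3f}")
```

Confidence and precision trade off against one another at fixed n; quadrupling the sample halves the interval at the same confidence. Nothing in that arithmetic makes the better-informed, narrower statement the less reliable one.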
Actually, they would be right. This uncertainty is sometimes referred to as probabilistic discord and the relevant formula is:
H = −Σ p · ln(p)
This is not the first time I have presented this formula here at Cliscep, and it is hardly unfamiliar to aficionados of uncertainty analysis. Surely, it isn’t asking too much of an IPCC lead author to have heard of it. Even so, one has to be careful here, because it is in the nature of epistemic uncertainty that probabilistic discord can increase as the epistemic uncertainty decreases, i.e. the curve can widen as understanding increases. Maybe that is the detail that the IPCC, in its high-handedness, assumes that a ‘lay person’ can’t understand.
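For anyone meeting the formula for the first time, it is easy to put to work. A minimal sketch (my own toy distributions): a wide, flat distribution over a set of outcome bins carries more probabilistic discord than a narrow, peaked one over the same bins, which is precisely the ‘uncertainty’ the lay reader is supposedly too dim to perceive:

```python
import math

def discord(probs):
    """Probabilistic discord (Shannon measure): H = -sum p*ln(p)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A narrow, confident distribution over five outcome bins...
narrow = [0.02, 0.06, 0.84, 0.06, 0.02]
# ...versus a wide, flat one over the same bins.
wide = [0.2] * 5

print(discord(narrow))  # low
print(discord(wide))    # ln(5) ≈ 1.609, the maximum for five bins
```

The flat distribution attains the maximum discord for five bins, ln(5); the peaked one sits well below it. Wider really does mean more uncertain, just as the lay person supposed.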
Learning the lingo
None of AR6’s garbled explanation of the relationship between uncertainty, interval width and subjectivity inspires confidence. Nevertheless, it is clear from the general tone of the report that confidence is exactly what the reader is expected to be brimming with. After all, as breathlessly explained in section 1.1:
“IPCC reports undergo one of the most comprehensive, open, and transparent review and revision processes ever employed for science assessments.”
Moreover, when it comes down to the communication of uncertainty, AR6 WG1 is supposed to benefit from the application of the ‘IPCC calibrated uncertainty language’, first developed by Mastrandrea et al. in 2010. For the sake of those who are not already familiar, the details of the process by which uncertainties are agreed and communicated are repeated in Box 1.1 of AR6 WG1.
Fortunately, I am already familiar with the IPCC’s calibrated uncertainty language. Unfortunately, all I can say is that it was fundamentally flawed back in 2010 and it remains fundamentally flawed today. As I have explained before, it conflates levels of agreement with evidential weight in a way that not only double-counts confidence but also fails to acknowledge the methods by which consensus can be illegitimately formed in the absence of a narrow uncertainty range (e.g. through groupthink). AR6 WG1 crows about how its application by all authors has led to a consistent approach, from which I can only assume the report is now at least consistently wrong.
The sceptic’s only cause for reassurance regarding the over-hyped IPCC calibrated uncertainty language is that even the report’s authors concede that its application has not had the desired effect: even if the language of uncertainty is being used consistently by the scientists, the way it is read by that damned ignorant lay person has remained stubbornly perverse. As the report explains:
“…lay readers systematically misunderstood IPCC likelihood statements. When presented with a ‘high likelihood’ statement, they understood it as indicating a lower likelihood than intended by the IPCC authors. Conversely, they interpreted ‘low likelihood’ statements as indicating a higher likelihood than intended.”
Even without such lay perversity, there is already enough wrong with the idea to justify dropping it altogether:
“Specific concerns include, for example, the transparency and traceability of expert judgements underlying the assessment conclusions (Oppenheimer et al., 2016) and the context-dependent representations and interpretations of probability terms (Budescu et al., 2009, 2012; Janzwood, 2020).”
Nevertheless, the IPCC soldiers on gamely, satisfied that, even if the calibration of their language is founded upon false assumptions and is patently failing in its purpose, “a consistent and systematic approach across Working Groups to communicate the assessment outcomes is an important characteristic of the IPCC”.
Another story to tell
Arrogance and ignorance make a heady cocktail that we have all imbibed from time to time. However, one has good reason to expect that an organisation such as the IPCC would have in place the required processes to ensure that such drunkenness does not get out of hand. One would certainly hope that it would not institutionalise a drinking culture. Part of that sobriety would entail the realisation that if one is failing to properly explain oneself, it could very well be because one has not thoroughly understood the subject. So when I see a lead author struggling with basic subject-matter whilst making sweeping statements regarding the shortcomings of the ‘lay person’, I am inclined to lose confidence in the whole set-up. Such confidence is important because, in promoting the importance of low likelihood, high impact events, AR6 has chosen to turn its focus to a subject area that demands a sound grasp of the underlying principles of risk and uncertainty analysis. I will be saying a lot more about that in my next article. For the time being, however, I will leave you with another nugget taken from AR6 WG1, which suggests there may very well be something rotten in the state of Denmark. It is a statement made with regard to what the IPCC is calling the ‘storylines’ approach to risk assessment. Amongst other benefits attributed to this approach, the report says of storylines that they:
“…can also help in assessing risks associated with [Low Likelihood High Impact] LLHI events (Weitzman, 2011; Sutton, 2018), because they consider the ‘physically self-consistent unfolding of past events, or of plausible future events or pathways’ (Shepherd et al., 2018b), which would be masked in a probabilistic approach.”
Risk assessment necessitates the determination of scale for both likelihood and impact. Quantifying probabilities (or employing an alternative means of evaluating likelihood) is therefore an essential part of assessment. One might then ask how the IPCC arrived at the conclusion that ‘a probabilistic approach’ could be masking anything. To answer that question, it helps to understand what is wrong with the way climate science handles the probabilistic approach. However, it is even more important to understand how a new approach peddled by a small group of individuals operating on the periphery of Detection & Attribution studies, and bitterly rejected by the vast majority of practitioners, could have, nevertheless, become the central idea behind AR6’s approach to risk assessment. One has to wonder what this says about the supposed ‘openness’ and ‘transparency’ of the IPCC’s self-proclaimed, world-beating review process. More of this next time…
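As a parting illustration of the point: a probability distribution does not mask its tail; it quantifies it. A toy risk register (my own illustrative numbers, nothing from AR6) shows the LLHI contribution sitting in plain sight:

```python
# Toy risk register: (annual probability, impact) pairs.
# Illustrative numbers only.
events = [
    (0.50,   1.0),   # routine, low impact
    (0.10,  10.0),   # uncommon, moderate impact
    (0.001, 500.0),  # the LLHI tail event
]

# Expected annual loss: likelihood times impact, summed.
expected_loss = sum(p * x for p, x in events)

# The tail is not 'masked': its contribution is right there.
tail_share = (0.001 * 500.0) / expected_loss

print(expected_loss)  # 2.0
print(tail_share)     # 0.25: a quarter of the risk sits in the tail
```

In this toy register the near-impossible event carries a quarter of the expected loss. Far from hiding the tail, the probabilistic approach is the only way of saying how much of the risk lives there.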
via Climate Scepticism
September 12, 2021 at 01:15PM