The Advantages of Being Vague

Some background

Consult any scientist, or indeed anyone who claims to be a critical thinker, and they will be only too eager to warn against the dangers of being vague. For example, if cutting the blue wire is necessary to defuse a bomb, you are not going to be too impressed by receiving an instruction to cut the coloured one. Imprecision matters, and the uncertainty it introduces can be the source of significant risk.

From this observation you would be forgiven for concluding that vagueness should be avoided at all costs. So why isn’t it? Why, for example, is language founded upon so many words that are patently vague? Why have we allowed ourselves to develop a means of communication that renders so much of what we say open to interpretation?

To answer that question one has to appreciate the value of vagueness in general discourse. For example, words such as ‘small’ are commonly used to convey a sense of scale, but they are deliberately vague. Why? Because then you only need the one word to cover a whole range of possibilities. Furthermore, size is context-specific, and we don’t want to have to use a different word when the context changes. So, linguistic vagueness enables semantic utility, albeit at the expense of precision. You can finesse your statements by using degree adjectives such as ‘very’, but these adjectives are themselves vague for the same reason. At the end of the day, we can’t go around being totally precise unless we are prepared to use an impractically large and cumbersome lexicon.

That said, the exploitation of linguistic vagueness is not entirely without its rules, one of which is the maxim of quantity, as devised by the philosopher Paul Grice. According to this maxim, you should express yourself as strongly as your information allows, but no more so. By following rules such as these, we are perfectly capable of constructing a reasoned argument and acting upon it, enabled rather than challenged by the subtleties and mysteries of linguistic vagueness. Our love of vague terms is not a mistake. On the contrary, it provides utility that enables us to operate effectively and efficiently in an uncertain and changing world.

Now I am sure there must be some of you at this stage who are saying to yourselves, ‘Yes, but what about numbers? Surely they are the solution for those requiring both precision and utility.’

Well, not really. The problem with numbers is that they are often used to quantify in areas for which there remains an epistemic deficit, and this deficit re-introduces vagueness. Take, for example, the case in which the value of a variable is estimated from an incomplete sampling of a population (e.g. when trying to determine the percentage of support for a policy when sampling only a section of the population). The question asked is how likely it is that the true value is captured by the estimate. The expression of this uncertainty requires two elements, each of which can be traded off against the other. Firstly, there is the precision (or vagueness) with which the value is stated. Secondly, there is the confidence that the value as stated is correct (i.e. accuracy). If one wants to increase that confidence one can do so, but only by relaxing the precision. Conversely, one can make a more precise estimate, but only by reducing one’s level of confidence in that estimate. And this decision as to how to apportion the uncertainty is down to the individual. At one extreme one may choose to make a very precise statement even though one’s confidence in its accuracy is very low. On the other hand, one may choose to make a statement in which one can be very confident, even though it is so vague as to be practically useless. Put simply, the wider the stated range of possibilities, the more confident one can be that the true value lies within that range. Vagueness can be very useful, if you always want to be proven right.
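The trade-off just described can be sketched numerically. Assuming a purely hypothetical poll of 1,000 respondents in which 52% express support, the standard normal-approximation confidence interval for a proportion shows how, for a fixed sample, greater confidence can only be bought with a wider (i.e. vaguer) stated range:

```python
# Sketch of the precision/confidence trade-off for a polled proportion,
# using the normal-approximation confidence interval.
# The sample size and observed support figure are illustrative only.
from statistics import NormalDist

n = 1000          # respondents sampled (hypothetical)
p_hat = 0.52      # observed fraction supporting the policy (hypothetical)
se = (p_hat * (1 - p_hat) / n) ** 0.5   # standard error of the estimate

for confidence in (0.50, 0.80, 0.95, 0.99):
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    margin = z * se
    print(f"{confidence:.0%} confident: {p_hat - margin:.3f} "
          f"to {p_hat + margin:.3f} (width {2 * margin:.3f})")
```

The same data support every one of those statements; the 99% claim is simply vaguer than the 50% claim. Which one you choose to report is exactly the apportionment decision described above.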

From the above, one can see that there is a numerical equivalent to the linguistic maxim of quantity, i.e. there is a limit to the combined levels of precision and confidence, and this limit is determined by the sample size. The only way of increasing one (let’s say, confidence) whilst holding the other constant (let’s say, precision) is by increasing the size of the sample and thereby reducing the epistemic deficit.
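A small illustration of the same point, again with hypothetical figures: holding confidence fixed at 95%, enlarging the sample is the only way to narrow the stated range.

```python
# Sketch: at a fixed 95% confidence level, the margin of error shrinks
# only as the sample grows. Figures are illustrative, not real poll data.
from statistics import NormalDist

p_hat = 0.52                        # observed support (hypothetical)
z95 = NormalDist().inv_cdf(0.975)   # two-sided 95% critical value

margins = []
for n in (100, 1_000, 10_000):
    margin = z95 * (p_hat * (1 - p_hat) / n) ** 0.5
    margins.append(margin)
    print(f"n = {n:>6}: 52% ± {margin:.1%}")
```

Each tenfold increase in sample size shrinks the margin by a factor of roughly √10, which is why reducing the epistemic deficit gets progressively more expensive.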

All of this raises the question regarding which is the true expression of uncertainty. Is it the imprecision or the confidence level? To which the obvious answer is both. It is the combination of the two that captures the uncertainty since it is only that combination that correlates with the scale of epistemic deficiency.

Now to the climate science

So what does any of this mean when it comes to making climate predictions? To answer that question I feel I need to return to a remark made by hydrologist and acclaimed climate science communicator, Peter Gleick, when he heavily criticized a book written by Michael Shellenberger. According to Gleick:

Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.

You may recall that I was particularly scathing of this remark at the time, largely because I felt that Gleick was wrong in insisting that there exists a distinction between a scientist’s and a layperson’s conception of uncertainty, and that he was particularly wrong in suggesting that ‘We don’t know’ is the non-scientific expression. After all, is that not just the expression of the epistemic uncertainty that haunts all scientific ventures? However, having recently returned to what I had written at the time, I am no longer so sure that I had properly understood what Gleick was saying; although in my defence I have to say that the award-winning communicator had done a terrible job of explaining himself.

At the time, I had assumed that Gleick was drawing a distinction between aleatory and epistemic uncertainty (i.e. between objective variability and subjective incertitude) and that by declaring the latter as being a layperson’s colloquial misconception of uncertainty he was declaring the former to be the scientific conception. However, upon reflection, I suspect that Gleick may have been claiming that the layperson mistakenly focuses upon confidence levels as the expression of uncertainty, whereas the scientist focuses instead upon the imprecision. This is certainly what you find when googling ‘uncertainty in science’:

But uncertainty in science does not imply doubt as it does in everyday use. Scientific uncertainty is a quantitative measurement of variability in the data. In other words, uncertainty in science refers to the idea that all data have a range of expected values as opposed to a precise point value.

The argument also seems to go further by suggesting that a risk assessment should be based purely upon the confident statements that are supposedly enabled by ‘scientific uncertainty’. The argument seems to be that we can be objectively confident that the risk is high if we can be confident that those possibilities entailing disaster lie within the accepted range of possibilities. And since imprecision leads to greater levels of confidence in the statements made, it follows that imprecision is exposing the true risk in some objective way. As Erik Løhre et al. put it:

The use of interval forecasts allows climate scientists to issue predictions with high levels of certainty even for areas fraught with uncertainty, since wide intervals are objectively more likely to capture the truth than narrow intervals.

These arguments are all very well but they ignore the fact that the uncertainty manifests itself as a combination of imprecision and lowered confidence, neither of which can lay claim to any scientific status that the other lacks. Whether one sees the dichotomy in terms of the aleatory versus the epistemic, or precision versus confidence, scientists gain no superior grasp of the concept of uncertainty by focusing upon just one side of the dichotomy. The assessment of uncertainty requires consideration of both the confidence in which statements are made and the imprecision of those statements. Anyone who claims that a scientific approach to uncertainty assessment requires one to focus purely upon imprecision, and then suggests this ‘scientific uncertainty’ can be used to confidently predict high risk, is just using vagueness as a ploy to predict whatever they want whilst being proven right irrespective of what happens.

To conclude

So what do I want you to take away from this discussion? Well, firstly, be wary of climate science communicators who tell you that there is a scientific notion of uncertainty that is to be contrasted with colloquial misconceptions. The reality is that the concepts of risk and uncertainty are deceptively difficult to pin down and it has been my experience that climate scientists are no better at it than you or I.

Secondly, how one chooses to express the uncertainty is a political decision and there is no scientific basis for preferring one way over the other, apart from the scientists’ preference for only making statements that meet certain confidence levels. As long as one abides by the maxim of quantity there may be many equally valid expressions at your disposal depending upon whether you want to be precise or circumspect.

Thirdly, to dismiss ‘We don’t know’ as a colloquial misconception of uncertainty is surely the classic mistake. On the contrary, it is such epistemic deficiency that dictates the limits within which precision and confidence can be traded off against each other. Effective risk management entails the reduction of this deficit whenever possible. It doesn’t require having a judicious preference for imprecision over inaccuracy just so one can confidently proclaim the possibility of disaster. The ‘layperson’ desire to see the uncertainty reduced before costly and potentially damaging decisions are taken is not a scam, it is basic risk management practice – or at least it is outside of the politico-scientific arena of climate science.

And finally, it turns out that being vague isn’t quite the anathema to scientists and critical thinkers that I made it out to be in my opening paragraph. The truth is that vagueness can be exploited to open up a world of utility. Climate scientists in particular see the benefit of vagueness. However, I suspect for them the concern isn’t utility in communication but the political utility bestowed by emphasising the possibilities lurking at the extremes. It’s a political utility that finds its zenith in the conviction that mankind is confronted with an existential threat that justifies a radical degradation of our way of life. Sceptics can easily demonstrate that the UK’s frantic transition to net zero is unachievable, pointless and certain to prove disastrous, but that is unlikely to result in a change of direction whilst vague forecasts leave the fear of extinction on the table.

via Climate Scepticism

https://ift.tt/DVTm5gA

December 7, 2024 at 07:40AM
