A Brief Primer on Causation

So much of the debate surrounding climate change seems to hinge on questions of causation. In particular, the two principal causation questions of interest are:

  • Is the anthropogenic emission of carbon dioxide the cause of global warming?
  • Is global warming the cause of the current incidence of extreme or environmentally damaging weather-related events?

The stock response to the first question is to say that anthropogenic emission is a cause to a certain extent, but with the extent theoretically determined and subject to scientific consensus. The stock response to the second question is to say that we cannot directly attribute any single weather event to global warming but we can say by how much the risk of such an event has increased.

In this article I do not intend to throw my hat into the ring on any of the arguments that have taken place, but I do note that many such arguments seem to be hampered by the fact that those involved often appear to be treating the concept of causation differently when they state their case. The result is that many such arguments are doomed to remain unresolved as individuals continue to argue in circles. For that reason, I thought it might be an idea to present here a brief primer on the subject of causation and its relevance to the matters in hand.

Don’t worry. I do not intend boring you with a dissertation on the history behind the philosophy of causation. Instead, I will attempt a more pragmatic discourse by concentrating upon the nature of causal questions and what is required in order to answer them. None of this will actually require me to come up with a definition of causation – which is a good thing, because no-one has yet agreed upon one.

The questions I have in mind include the following:

  • What will happen if I do this?
  • Why did that happen?
  • To what extent can an event be attributed to a single precursor when there are multiple precursors?
  • Is a given precursor necessary and is it sufficient?
  • What should my policy be, given my current understanding of the causation of events and the likely outcome of any interventions?

The Limitations of Data and Probabilities

The traditional approach towards answering such questions is to collect data, look for patterns within the data and from them infer associations that could be causative. The central assumption here is that if the probability of observing Y is increased by observing X, then X may be posited as a causal agent for Y. In the language of conditional probabilities, X is a cause of Y if:

P(Y|X) > P(Y)

Of course, one can immediately see the difficulty with this approach. Simply observing a correlation between X and Y does not prove a causal link. In fact, the same effect might be observed if a common cause, Z, had been driving trends in both X and Y, irrespective of any causal links between the two. In such situations, Z is said to be a confounder. To circumvent this problem, one compares situations in which Z is held constant (i.e. one conditions upon Z) to see if the effect persists, i.e.

P(Y|X, Z=k) > P(Y|Z=k)

Well that’s fine, but it does rather raise the question as to how one could possibly know that all the potential confounders have been identified. Furthermore, whenever one conditions upon a potential confounder, one runs the risk of holding constant an important causal agent in order to focus upon a relatively unimportant one. Such a focus will have the effect of exaggerating the importance of the agent under investigation. The fact is, one can observe probabilities until one is blue in the face but one will never properly tease out the causal story by doing that alone because causation isn’t fully captured by probabilities and it never will be.
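The confounding story above can be made concrete with a short simulation. The following sketch is purely illustrative (all probabilities are invented): a confounder Z drives both X and Y, with no causal link between X and Y themselves, yet P(Y|X) > P(Y) until one conditions upon Z.

```python
import random

random.seed(0)

# Toy simulation: Z raises the chance of both X and Y;
# X has no influence on Y at all.
N = 100_000
samples = []
for _ in range(N):
    z = random.random() < 0.5                   # the confounder
    x = random.random() < (0.8 if z else 0.2)   # Z raises the chance of X
    y = random.random() < (0.8 if z else 0.2)   # Z raises the chance of Y
    samples.append((x, y, z))

p_y = sum(y for x, y, z in samples) / N
p_y_given_x = sum(y for x, y, z in samples if x) / sum(x for x, y, z in samples)

# Conditioning on Z (here, Z = True) makes the spurious association vanish:
z_true = [(x, y) for x, y, z in samples if z]
p_y_given_z = sum(y for x, y in z_true) / len(z_true)
p_y_given_x_z = sum(y for x, y in z_true if x) / sum(x for x, y in z_true)

print(round(p_y, 2), round(p_y_given_x, 2))            # P(Y|X) > P(Y), no causal link
print(round(p_y_given_x_z, 2), round(p_y_given_z, 2))  # roughly equal once Z is fixed
```

Note that the simulation can only demonstrate the problem because we built the model and therefore know Z exists; an analyst armed with data on X and Y alone would have no such luxury.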

This obsession with observed probabilities as the basis for inferring causation has been a problem for many years. Until relatively recently, probability was the only game in town for analysing uncertainty, and so it is little wonder that its position as the concept underpinning causation went unchallenged. However, in the last decade advances in the fields of AI and machine learning have provided important insights into the information and concepts required to enable causal thinking. The most important insight is that one cannot think causally without a causal model, founded on an ability to conjecture upon the consequences of interventions and the implications of a counterfactual history. It is no good making field observations and trying to infer the effect of an intervention; one has to explicitly model the intervention and then analyse the outcome in terms of altered observation. Only then can one talk about causation.

Causal Inference – The Very Basics

In order to address causal questions one first has to construct a structural causal model (SCM). An SCM can be formalised in many ways, but the most accessible and intuitive is the causal network, in which arrows are used to indicate a direction of influence. For example, take the following:

X → Y → Z

This is a simple network (known as a chain) demonstrating that X is a cause of Z, with Y acting as a mediator. Another simple construct might be:

X ← Z → Y

In this case Z is a confounder providing a common cause for X and Y, and whilst X and Y are associated they are not causally linked. This is known as a fork. Another simple construct might be:

X → Y ← Z

In this case X and Z are both causal factors for Y. This is known as a collider.
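The three constructs can be distinguished mechanically by counting a node's parents and children. The following minimal Python sketch (the edge-list representation and classify helper are illustrative inventions, not any standard library) makes the distinction explicit:

```python
# Represent each three-node construct as a list of directed edges (a, b),
# meaning a -> b, and classify a node by its parent and child counts.
def classify(edges, node):
    parents = [a for a, b in edges if b == node]
    children = [b for a, b in edges if a == node]
    if len(parents) == 2:
        return "collider"   # X -> Y <- Z
    if len(children) == 2:
        return "fork"       # X <- Z -> Y (common cause)
    if len(parents) == 1 and len(children) == 1:
        return "mediator"   # X -> Y -> Z (chain)
    return "other"

chain    = [("X", "Y"), ("Y", "Z")]
fork     = [("Z", "X"), ("Z", "Y")]
collider = [("X", "Y"), ("Z", "Y")]

print(classify(chain, "Y"))     # mediator
print(classify(fork, "Z"))      # fork
print(classify(collider, "Y"))  # collider
```
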

It should not be difficult for the reader to imagine that some quite intricate networks could be constructed using a combination of mediators, forks and colliders. However, it is important to note that the result is only a model, sharing all of the simplifications and assumptions that any model has. Be that as it may, once one has such a framework in place, and data has been collected to put flesh on the relationships (i.e. data that indicates the probability rule or function that specifies how Y varies with X) one is then in a position to ask causal queries and see what answers are provided by the model. These queries fall into three categories:

Association

Here one asks what the expectation would be of observing Y given that X has been observed. These questions can be answered by simply referencing the conditional probabilities linking the two variables, i.e. P(Y|X). To that extent, causal networks are similar in purpose to Bayesian belief networks.

Intervention

Here one asks what the expectation would be of observing Y following an intervention in which X is forced to take on a particular value. Formally, this is expressed by applying the so-called do-operator, i.e. one determines P(Y|do(X)). These are basically questions of prediction. If X is a cause of Y then one would expect P(Y|do(X)) > P(Y). If this condition is not met then we need to rethink our causal model. For example, Y could be the observation of a cure and do(X) could be the mandated administration of a drug. If the condition is not met, then the drugs don’t work. The purpose of the do-operator is to exclude influences that could act as confounders with respect to X and thereby confuse the issue.
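The difference between observing X and doing X can be illustrated with a hedged toy SCM (all probabilities invented): a confounder Z influences both X and Y, inflating the observed association, while the simulated intervention severs Z's influence on X and recovers the modest true effect.

```python
import random

random.seed(1)

N = 100_000

def draw(intervene_x=None):
    # Toy SCM: Z -> X, Z -> Y, X -> Y, with illustrative probabilities.
    z = random.random() < 0.5
    if intervene_x is None:
        x = random.random() < (0.9 if z else 0.1)   # Z strongly influences X
    else:
        x = intervene_x                             # do(X): sever Z -> X
    base = 0.6 if z else 0.2                        # Z influences Y
    y = random.random() < (base + (0.1 if x else 0.0))  # X nudges Y by 0.1
    return x, y

obs = [draw() for _ in range(N)]
p_y_given_x = sum(y for x, y in obs if x) / sum(1 for x, y in obs if x)

p_y_do_x1 = sum(y for _, y in [draw(intervene_x=True) for _ in range(N)]) / N
p_y_do_x0 = sum(y for _, y in [draw(intervene_x=False) for _ in range(N)]) / N

# Observation conflates Z's influence with X's; intervention isolates X's:
print(round(p_y_given_x, 2))            # inflated by the confounder (~0.66)
print(round(p_y_do_x1 - p_y_do_x0, 2))  # the true causal effect (~0.10)
```

Notice that P(Y|X) is dramatically larger than P(Y|do(X=1)) − P(Y|do(X=0)) would suggest, because in the observational data X also acts as a proxy for Z.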

Counterfactual

Here one accepts the observation Y but conjectures upon how things might otherwise have been with counterfactual values of X. This enables one to explore the broader implications regarding causation. For example, knowing the efficacy of the drug, one can imagine withholding treatment in order to see what impact that would have in the context of other posited causal links. As with intervention, the query is graphically equivalent to removing arrows of influence and setting node values to posited values (in this case counterfactual ones).

The detailed mathematics and methods behind the above analyses need not concern us here. Suffice it to say, the game is all about determining the extent to which nodes within the network are ‘listening’ to others, i.e. whether changing the value of one node alters another. If there is such an ‘information flow’, then there is a causal link between the nodes, even if they appear remote within the network. It is worth emphasising that the same cannot be said of a Bayesian belief network, since no directions of influence are encoded into such a model – new probabilities may propagate as a result of updated information, but no causal inference may be drawn from such updating.

The Importance of the Counterfactual

Of the three basic forms of causal investigation (associational, interventional and counterfactual), those premised upon the counterfactual are perhaps the most insightful. Indeed, the central importance of counterfactuality is captured in a definition of causality offered by the philosopher David Hume in 1748:

“We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. In other words, where, if the first object had not been, the second never had existed.”

This is something of a hybrid definition since it starts out by stressing the importance of regularity (which, as I have explained, could be a spurious correlation) before then introducing the much more impressive evidence of the counterfactual. The cock crowing can appear to cause the sunrise, but one has to imagine the cock remaining silent before drawing any conclusions. In fact, it is this ability to imagine the counterfactual that sets the human race apart and enables its capacity for causal reasoning.

With an SCM, one can model the counterfactual simply by altering the values associated with a causal agent to see what the impact is. This is precisely what happens when climate models are re-run with anthropogenic emissions removed, in order to see how a prediction or retrodiction changes. The difference is interpreted causally, and there is nothing wrong with that. However, there are two important details regarding the modelling of the counterfactual that need to be mentioned here.

Putting Models on a Witness Stand

The first detail I have in mind was hinted at when Professor Robert Muir-Wood, a former IPCC lead author, wrote the following:

“Environmental Lawyers are following the attribution studies with great interest. If you can show that an event has doubled in probability, it may be possible to find some greenhouse gas emitters on whom to pin liability. But would the evidence withstand courtroom cross-examination and questions such as: Who exactly built this climate model? How do you know it is reliable?”

That is indeed a very good question, which many sceptics suspect should be answered in the negative. The problem with such attribution studies is that they are premised upon models that are notoriously compromised in their role as material witness. If one turns a blind eye to their structural uncertainties one can confidently draw causal inferences by playing the counterfactual game. But is it wise to turn a blind eye to structural uncertainty in a structural causal model? If one is going to evaluate as one would in a legal case, then one must forensically examine the evidential weight being offered before making a judgement. This detail often seems to be conveniently overlooked by those who rely heavily upon the credibility of attribution studies. Such studies carry a great deal of scientific kudos, but one has to wonder what a good lawyer could do in a courtroom.

Causality and Culpability

The second detail also has a legal dimension. In law there are two concepts that are important when deciding an individual’s culpability: the probability of necessary cause and the probability of sufficient cause.

In causal inference, the probability of necessary cause is measured by calculating the Probability of Necessity (PN). It relates to the legal expression ‘but-for causation’ since it expresses the likelihood that a known outcome would not have happened were it not for the defendant’s actions. For example, the defendant shoots at the victim and the victim dies from the resulting bullet wound. Here PN is high since it is highly likely that the victim would still be alive were it not for the shooting (i.e. the shooting was necessary for the death to have occurred).

In causal inference, the probability of sufficient cause is measured by calculating the Probability of Sufficiency (PS). It relates to the legal expression ‘proximate cause’ and expresses the likelihood that the defendant’s actions would lead to the known outcome. For example, suppose that the defendant had shot and missed, encouraging the victim to flee into the street, only to be knocked down by a herd of stampeding camels that just happened to be passing by. Here PN is still high (no shot means no fleeing) but PS is low, reflecting the fact that the direct cause of death was very unlikely and had very little to do with the defendant’s actions (it’s the camels wot dunnit). A high PN would normally result in a conviction, but not when combined with a low PS. Of course, in our example the PS might have been a lot higher had the incident taken place next to a busy motorway and no camels were involved.

The reason why I’m telling you all of this is because the probability of necessity (PN) and the probability of sufficiency (PS) have a great deal to do with the attribution of specific weather events to climate change. Take, for example, the following attribution statement made by Myles Allen and Peter Stott of the Met Office in the wake of France’s heatwave of 2003:

“It is very likely that over half of the risk of European summer temperature anomalies exceeding a threshold of 1.6°C is attributable to human influence.”

This statement (essentially a quantification of the Fraction of Attributable Risk (FAR)) relates to the probability of necessity, since it focuses on the climatic trends that provide the context for the weather experienced that year. It is saying that a ‘shot was fired’, and without it the possibility of the heatwave would have been diminished significantly. However, it wasn’t the climate that killed; it’s the weather wot dunnit. When one looks at the same problem from the perspective of the weather conditions, contingent as they are on several factors that have nothing to do with levels of CO2, the probability of sufficiency is actually very small – it takes an awful lot more than the anthropogenic influence to create incidents such as the French heatwave. To put figures to this, a study conducted by Alexis Hannart of the Franco-Argentine Institute on the Study of Climate, using the structural causal modelling techniques outlined above, led to a determination of PN = 0.9 and PS = 0.007. There is not a court in the land that would convict with such figures.
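For the record, in the simplest binary and monotonic setting, PN and PS reduce to elementary formulas in the event probabilities with and without the posited cause. The sketch below is a hedged back-of-envelope illustration only – the probabilities p1 and p0 are invented, chosen merely to reproduce the orders of magnitude quoted above, and this is not Hannart’s actual calculation:

```python
# Assuming a binary outcome and a monotonic exposure (the exposure never
# prevents the event), PN and PS have the standard closed forms below.
def pn(p1, p0):
    # Probability of Necessity: the fraction of the factual risk that
    # would vanish without the exposure ('but-for' causation).
    return (p1 - p0) / p1

def ps(p1, p0):
    # Probability of Sufficiency: how likely the exposure alone is to
    # produce the outcome where it would not otherwise have occurred.
    return (p1 - p0) / (1 - p0)

p1 = 0.0078   # event probability with anthropogenic forcing (illustrative)
p0 = 0.00078  # event probability without it (illustrative)

print(round(pn(p1, p0), 2))   # ~0.9: the forcing was almost certainly necessary
print(round(ps(p1, p0), 3))   # ~0.007: but nowhere near sufficient
```

The point of the example is that a tenfold increase in the risk of a still-rare event yields a very high PN alongside a tiny PS, which is exactly the combination at the heart of the dispute.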

The fact that causality has this duality (probability of necessary cause and probability of sufficient cause) leads to many differences of opinion when attribution statements are discussed, with the alarmed usually focusing upon PN and sceptics focusing upon PS. Worse still, the individuals concerned are often unaware that this is the true nature of their dispute. Even when the science is agreed upon, the conclusions can look very different depending upon which facet of causality represents the major concern. So who is right and who is wrong?

It’s All About Policy

So far I have discussed the issue as if it were a case of providing proof beyond reasonable doubt for each individual case. However, this is not really the issue. Instead, one should be looking at the long-term risk and establishing policies to manage that risk. Seen in this light, PS = 0.007 still looks very significant. Even though it stresses the relatively minor role played by climate in a specific instance, the dice are still loaded in such a manner that the risk of such an event increases significantly when longer timescales are considered. And it is this long-term risk that drives policies such as those required for insurance: the fewer times we shoot our gun in public spaces, the less we have to insure against the habits of passing camels. The reality is that both PN = 0.9 and PS = 0.007 are significant from the point of view of politics and risk management. As Hannart put it:

“PS is the appropriate focus for the planner when assessing the future costs that inaction will imply, but PN is at stake when assessing the future benefits of enforcing strong mitigation actions. Policy elaboration requires both sides of this assessment; thus both PN and PS are of interest here.”

Even so, one cannot help but suspect that the reason why the climate-concerned seem so obsessed with PN is that it provides for bigger numbers, and bigger numbers are more scary, aren’t they? Well, they certainly are if one ignores alternative big numbers. For example, when analysing causation relating to bushfires, the PN for the climate change contribution might look impressive, but what about the PN values associated with negligent forest management, or trends in arson? In fact, forget about their PN values. Think about the PS values. When it comes to starting bushfires, there is nothing quite as sufficient as a lighted match!

So the answer to my question as to who is right and who is wrong is that this is the wrong question to ask. A much better question would be the one asked by Professor Robert Muir-Wood when he drew attention to the matter of model reliability. After all, Judea Pearl, inventor of Bayesian belief networks and father of modern causal inference, had this to say about climate models:

“Though they are excellent at forecasting weather conditions a few days ahead, they have never been verified in a prospective trial over century-long timescales and so could still contain systematic errors that we don’t know about.”

Thus speaks possibly the world’s leading expert on causal inference.

And then, of course, there is the old chestnut regarding the recalcitrant uncertainty over ECS. But that’s quite another story…

via Climate Scepticism

https://ift.tt/2IN5lrf

March 14, 2020 at 12:46PM
