Richard Lindzen–Part II

via NOT A LOT OF PEOPLE KNOW THAT
http://ift.tt/16C5B6P

By Paul Homewood


Continuing with Richard Lindzen’s article “Thoughts on the Public Discourse over Climate Change”:

 

The ‘warmest years on record’ meme.


This simple claim covers a myriad of misconceptions, and it is sometimes difficult to know where to begin. As in any demonization project, it begins with the ridiculous presumption that any warming whatsoever (and, for that matter, any increase in CO2) is bad, and proof of worse to come. Neither presumption is true: people retire to the Sun Belt rather than to the Arctic, and CO2 is pumped into greenhouses to enhance plant growth.

The emphasis on 'warmest years on record' appears to have been a response to the observation that the warming episode from about 1978 to 1998 had ceased and temperatures have remained almost constant since 1998. Of course, if 1998 was the hottest year on record, all the subsequent years will also be among the hottest years on record. None of this contradicts the fact that the warming (i.e., the increase of temperature) has ceased. Yet, somehow, many people have been led to believe that the two statements cannot be simultaneously true. At best, this assumes a very substantial level of public gullibility. The potential importance of the so-called pause (for all we know, this might not be a pause, and the temperature might even cool) is never mentioned and rarely understood. Its existence means that there is some process at least comparable in magnitude to anthropogenic forcing. However, the IPCC's attribution of most of the recent (and only the recent) warming episode to man depends on the models' assumption that no such competing process exists.

The focus on the temperature record itself is worth delving into a bit. What exactly is this temperature that is being looked at? It certainly can't be the average surface temperature: averaging temperatures from places as disparate as Death Valley and Mount Everest is hardly more meaningful than averaging phone numbers in a telephone book (for those of you who still remember phone books). What is done instead is to average what are called temperature anomalies. One takes a thirty-year average at each station and records each observation's deviation from that average; these deviations are the anomalies, and it is the anomalies that are averaged over the globe.

The only attempt I know of to illustrate the steps in this process was by the late Stan Grotch at the Lawrence Livermore Laboratory. Figure 1a shows the scatter plot of the station anomalies, and Figure 1b shows the result of averaging them: a remarkable, almost complete degree of cancellation. To make the minuscule changes in Figure 1b look more significant, however, the temperature scale is then stretched by almost a factor of 10; the result is shown in Figure 1c. There is quite a lot of random noise in Figure 1c, and this noise is a pretty good indication of the uncertainty of the analysis (roughly +/- 0.2C). The usual presentations show something considerably smoother, sometimes as a result of smoothing the record with what are called running means. It is also the case that Grotch used data from the UK Meteorological Office, which came from land-based stations. Including data from the ocean leads to smoother-looking series, but the absolute accuracy of the data is worse, since the ocean data mixes very different measurement techniques (buckets in old ship data, ship intakes after WW1, satellite measurements of skin temperature (which is quite different from surface temperature), and buoy data).
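The two-step procedure described above (per-station anomalies, then a global average) can be sketched in a few lines. This is a minimal illustration with synthetic data, not Grotch's actual method or any agency's processing chain; the station counts, baseline length, and noise levels are assumptions chosen only to show why the averaged series is so much smaller than the station scatter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 100 stations, 50 years of annual-mean temperatures.
# Each station has its own climatology (Death Valley vs. Mount Everest),
# plus local year-to-year noise and a small shared trend.
n_stations, n_years = 100, 50
climatology = rng.uniform(-15.0, 30.0, size=(n_stations, 1))  # station means, degC
noise = rng.normal(0.0, 1.0, size=(n_stations, n_years))      # local variability
trend = np.linspace(0.0, 0.5, n_years)                        # shared 0.5C rise
temps = climatology + noise + trend

# Step 1: per-station anomalies = deviation from that station's
# thirty-year base period, which removes the disparate climatologies.
base = temps[:, :30].mean(axis=1, keepdims=True)
anomalies = temps - base

# Step 2: average the anomalies across stations to get a "global" series.
global_anomaly = anomalies.mean(axis=0)

# The station scatter (cf. Fig. 1a) is several degrees wide; the averaged
# series (cf. Fig. 1b) is an order of magnitude smaller, which is why the
# y-axis must be stretched to make the changes visible (cf. Fig. 1c).
print(float(anomalies.std()))       # spread of individual station anomalies
print(float(global_anomaly.std()))  # much smaller after averaging
```

The point of the sketch is only the ratio between the two printed numbers: averaging many noisy stations cancels most of the local variability, leaving a small residual series whose scale is easily exaggerated by the plot axes.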

Figure 2

These issues are summarized in Figure 2, which presents an idealized schematic of the temperature record and its uncertainty. Because the rise in the schematic ceases in 1998, 18 of the 18 warmest years on record (in the schematic presentation) have occurred during the last 18 years. We also see that the uncertainty, together with the smallness of the changes, offers ample scope for adjustments that dramatically alter the appearance of the record (note that uncertainty is rarely indicated on such graphs).
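The "18 of 18" arithmetic is purely a consequence of a rise followed by a plateau, and is easy to check mechanically. The snippet below uses a hypothetical schematic series (a decade of warming, then 18 flat years), mirroring Figure 2 rather than any actual temperature data.

```python
# Schematic series: ten years of warming, then an 18-year plateau at the peak.
rise = [0.1 * i for i in range(10)]   # e.g. 1988-1997 in the schematic
plateau = [rise[-1]] * 18             # e.g. 1998-2015, flat at the 1998 peak
series = rise + plateau

# For each year, compute its rank among all years observed so far
# (rank 1 = warmest on record to date).
ranks = []
for i, temp in enumerate(series):
    years_so_far = sorted(series[: i + 1], reverse=True)
    ranks.append(years_so_far.index(temp) + 1)

# Every plateau year ties the record set at the peak, so each one is
# "the warmest year on record" to date, even though warming has ceased.
print(all(rank == 1 for rank in ranks[len(rise):]))  # True
```

This is the sense in which "warmest years on record" and "the warming has ceased" can both be true at once: the first statement is about ranks, the second about trends.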

At this point, one is likely to run into arguments over the minutiae of the temperature record, but these would simply muddy the waters, so to speak. Nothing can alter the fact that the changes in question are small. Of course, 'small' is relative. Consider three measures of smallness.

Figure 3

Figure 3 shows the variations in temperature in Boston over a one-month period. The dark blue bars show the actual range of temperatures for each day. The dark gray bars show the climatological range of temperatures for that date, and the light gray bars show the range between the record-breaking low and record-breaking high for that date. In the middle is a red line whose width corresponds to the range of the global mean temperature anomaly record for the past 175 years. The temperature change we are discussing is thus small compared to our routine sensory experience. Keep this in mind when someone claims to 'feel' global warming.

 

The next measure is how the observed change compares with what we might expect from greenhouse warming. Now, CO2 is not the only anthropogenic greenhouse gas.

Figure 4. Red bar represents observations. Gray bars show model predictions.


When all of them are included, the UN IPCC finds that we are just about at the greenhouse forcing of climate that one expects from a doubling of CO2, and the temperature increase has been about 0.8C. Even if man's emissions were responsible for all of the temperature change over the past 60 years, this would still point to a lower sensitivity (sensitivity, by convention, generally refers to the temperature increase produced by a doubling of CO2 when the system reaches equilibrium) than that of the least sensitive models (which claim sensitivities of 1.5 to 4.5C for a doubling of CO2). And the lower sensitivities are understood to be unproblematic. Since the IPCC only claims man is responsible for most of the warming, the sensitivity might be lower still. Of course, the situation is not quite so simple, but calculations do show that for higher sensitivities one has to cancel some (and often quite a lot) of the greenhouse forcing with what was assumed to be unknown aerosol cooling in order for the models to remain consistent with past observations (a recent article in the Bulletin of the American Meteorological Society points out that there are, in fact, quite a number of arbitrary adjustments made to models in order to get some agreement with the past record). As the aerosol forcing becomes less uncertain, high sensitivities become untenable. This is entirely consistent with the fact that virtually all models used to predict 'dangerous' warming over-predict observed warming after the 'calibration' periods; that is to say, observed warming is small compared to what the models on which the concerns are based predict. This is illustrated in Figure 4.

As I have mentioned, the uncertainties allow for substantial adjustments in the temperature record. One rather infamous case involved NOAA's adjustments, in a paper by Karl et al., that replaced the pause with continued warming. But it was easy to show that, even with this adjustment, models continued to show more warming than even the 'adjusted' time series. Moreover, most papers since have rejected the Karl et al. adjustment (which, just coincidentally, came out with much publicity just before the Paris climate conference).
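The scaling argument above can be made explicit as back-of-envelope arithmetic. The 0.8C figure and the "forcing of about one doubling" premise come from the text; the canonical 3.7 W/m^2 forcing for a CO2 doubling is a standard value I am supplying, and the whole calculation assumes equilibrium (it ignores the lag from ocean heat uptake), so it is an illustration of the argument's logic, not a sensitivity estimate.

```python
# Convention: equilibrium warming = sensitivity * (forcing / forcing of 2xCO2).
F_2XCO2 = 3.7      # W/m^2, standard value for a CO2 doubling (assumed here)
F_observed = 3.7   # the text's premise: total GHG forcing ~ one doubling
dT_observed = 0.8  # degC, observed warming cited in the text

# If ALL the observed warming were forced and the system were at equilibrium,
# the implied sensitivity equals the observed warming itself:
implied_sensitivity = dT_observed * (F_2XCO2 / F_observed)
print(implied_sensitivity)  # ~0.8 degC per doubling

# If man were responsible for only a fraction f of the warming (the IPCC
# claims "most"), the implied sensitivity scales down proportionally:
f = 0.5  # illustrative fraction, not a sourced number
implied_if_fraction = dT_observed * f * (F_2XCO2 / F_observed)
print(implied_if_fraction)  # ~0.4 degC per doubling
```

Both implied values sit below the 1.5 to 4.5C model range quoted in the text, which is the comparison the paragraph is making; how much the equilibrium assumption understates the true sensitivity is exactly what the aerosol and ocean-uptake uncertainties are about.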

The third approach is somewhat different. Instead of arguing that the change is not small, it argues that the change is 'unprecedented.' This is Michael Mann's infamous 'hockey stick.' Here, Mann used tree rings from bristlecone pines to estimate Northern Hemisphere temperatures back hundreds of years. This was done by calibrating the tree-ring data against surface observations over a thirty-year period, and using this calibration to estimate temperatures in the distant past in order to eliminate the medieval warm period. Indeed, this reconstruction showed flat temperatures for the past thousand years. The usual test for such a procedure would be to see how the calibration performs on observations after the calibration period. Unfortunately, the results failed to show the warming found in the surface data. The solution was starkly simple and stupid: the tree-ring record was cut off at the end of the calibration period and replaced by the actual surface record. In the Climategate emails (Climategate refers to a huge release of emails from various scientists supporting alarm, in which the suppression of opposing views, the intimidation of editors, the manipulation of data, etc. were all discussed), this was referred to as Mann's trick.

The whole point of the above was to make clear that we are not concerned with warming per se, but with how much warming. It is essential to avoid the environmentalist tendency to regard anything that may be bad in large quantities as something to be avoided at any level, however small. In point of fact, small warming is likely to be beneficial on many counts. If you have assimilated the above, you should be able to analyze media presentations like the one linked below to see that, amidst all the rhetoric, the author is pretty much saying nothing, while even misrepresenting what the IPCC says.

http://ift.tt/2oJHtZe

May 2, 2017 at 10:09PM
