Month: September 2017

Frequently Asked Questions 9.1*: A Critique

[Image: NCAR]

Guest Post by Clyde Spencer

Because of recent WUWT guest posts, and their comments, I decided to do what I have been putting off for too long – reading what the IPCC has to say about climate modeling. The following are my remarks regarding what I thought were some of the most important statements found in FAQ 9.1; it asks and answers the question “Are Climate Models Getting Better, and How Would We Know?”

IPCC AR5 FAQ 9.1 (p. 824) claims:

“The complexity of climate models…has increased substantially since the IPCC First Assessment Report in 1990, so in that sense, current Earth System Models are vastly ‘better’ than the models of that era.”

They are explicitly defining “better” as more complex. However, what policy makers need to know is whether the predictions are more precise and more reliable! That is, are they usable?

FAQ 9.1 further states:

“An important consideration is that model performance can be evaluated only relative to past observations, taking into account natural internal variability.”

This is only true if model behavior is determined by tuning to match past weather, particularly temperature and precipitation. However, this pseudo model-performance is little better than curve fitting with a high-order polynomial. What should be done is to minimize the reliance on historical data, lean on first principles more heavily than is currently done, make a projection, and then wait 5 or 10 years to see how well the projection forecasts the actual future temperatures. The way things are done currently – although first principles are used – may not be any better than a ‘black box’ neural network approach to prediction, because of the reliance on what engineers call “fudge factors” to tune against history.

FAQ 9.1 goes on to say:

“To have confidence in the future projections of such models, historical climate—and its variability and change—must be well simulated.”

It is obvious that if models didn’t simulate historical climate well, there would be no confidence in their ability to predict. However, a good historical fit alone isn’t sufficient to guarantee that projections will be correct. Polynomial fits to data can have high correlation coefficients, yet are notorious for flying off into the Wild Blue Yonder when extrapolated beyond the data sequence. That is why I say above that the true test of skill is to let the models actually forecast the future. Another approach would be to tune the models not to all historical data, but only to the pre-industrial or pre-World War II segment, and then let them demonstrate how well they reproduce the last half-century.
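To see why a flattering historical fit proves little, here is a toy sketch with entirely synthetic numbers (nothing here is drawn from a real climate model or dataset): a deliberately over-fitted polynomial is “tuned” to the early part of a noisy, gently trending series and then scored on the held-back later years, which is the same hold-out logic as the pre-industrial/pre-war tuning suggestion above.

```python
import numpy as np

# Toy illustration only: a synthetic "temperature-like" series (linear trend + slow cycle + noise).
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
series = (0.007 * (years - 1900)
          + 0.2 * np.sin(2 * np.pi * (years - 1900) / 60.0)
          + rng.normal(0.0, 0.1, years.size))

fit_mask = years <= 1970                 # "tune" only to the earlier segment
x = (years - 1935.0) / 35.0              # centre and scale so the high-order fit stays well conditioned
coeffs = np.polyfit(x[fit_mask], series[fit_mask], 9)   # deliberately over-fitted polynomial
fitted = np.polyval(coeffs, x)

def skill(obs, model):
    """Coefficient of determination: fraction of variance matched; negative means worse than the mean."""
    return 1.0 - np.sum((obs - model) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("In-sample skill (1900-1970):    ", round(skill(series[fit_mask], fitted[fit_mask]), 3))
print("Out-of-sample skill (1971-2000):", round(skill(series[~fit_mask], fitted[~fit_mask]), 3))
```

The in-sample number looks impressive; the out-of-sample number collapses (typically going strongly negative) as the polynomial flies off beyond the fitting window, which is the behavior being warned about.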

One of the problems with tuning to historical data is that if the extant models don’t include all the factors that influence weather (and they almost certainly don’t), then the influence of the missing parameter(s) is inappropriately proxied by other factors. That is to say, if there was a past ‘disturbance in the force(ing)’ of unknown nature and magnitude, then correcting for it would require adjusting the variables that are in the models by trial and error. Can we be certain that we have identified all exogenous inputs to climate? Can we be certain that all feedback loops are mathematically correct?

Inconveniently, it is remarked in Box 9.1 (p. 750):

“It has been shown for at least one model that the tuning process does not necessarily lead to a single, unique set of parameters for a given model, but that different combinations of parameters can yield equally plausible models (Mauritsen et al., 2012).”

These models are so complex that it is impossible to predict how an infinitude of combinations of parameters might influence the various outputs.
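The Mauritsen et al. point can be sketched with a toy example (made-up numbers, not taken from any real model): two different parameter pairs can reproduce the same historical record yet diverge once the compensating term stops growing.

```python
import numpy as np

# Toy energy-balance-style model: a warming driver offset by an aerosol-like cooling term.
years = np.arange(1900, 2101)
ghg = 0.01 * (years - 1900)                                       # steadily rising warming driver
aerosol = np.where(years <= 2015, 0.005 * (years - 1900), 0.575)  # cooling driver that flattens after 2015

def toy_model(sensitivity, aerosol_factor):
    return sensitivity * ghg - aerosol_factor * aerosol

# Two "tunings" that match the historical period equally well:
t_high = toy_model(2.0, 2.0)   # high sensitivity, strong aerosol offset
t_low  = toy_model(1.5, 1.0)   # lower sensitivity, weak aerosol offset

hist = years <= 2015
print("RMS difference over 1900-2015:", np.sqrt(np.mean((t_high[hist] - t_low[hist]) ** 2)))
print("Difference by 2100:           ", round(t_high[-1] - t_low[-1], 3))
```

Against history the two tunings are indistinguishable, yet their 21st-century paths separate, which is exactly the non-uniqueness the chapter concedes.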

The kind of meteorological detail available in modern data is unavailable for historical data, particularly prior to the 20th Century! Thus, it would seem to be a foregone conclusion that missing forcing-information is assigned to other factors that are in the models. To make it clearer: in historical time, we know when most volcanoes erupted. However, the density of ash in the atmosphere can only be estimated, at best, whereas the ash and aerosol density of modern eruptions can be measured. Historical eruptions in sparsely populated regions may be known only through speculation based on a sudden decline in global temperatures that lasts for a couple of years. We only have qualitative estimates of exceptional events such as the Carrington Coronal Mass Ejection of 1859. We can only wonder what such a massive injection of energy into the atmosphere is capable of doing.

Recently, concern has been expressed about how ozone depletion may affect climate. In fact, some have been so bold as to claim that the Montreal Protocol has forestalled some undesirable climate change. We can’t be certain that some volcanoes, such as Mount Katmai (Valley of Ten Thousand Smokes, AK), which are known to have had anomalous hydrochloric and hydrofluoric acid emissions (see page 4), haven’t had a significant effect on ozone levels before we were even aware of variations in ozone. For further insight on this possibility, see the following:

http://ift.tt/2k1WjNE

Continuing, FAQ 9.1 remarks:

“Inevitably, some models perform better than others for certain climate variables, but no individual model clearly emerges as ‘the best’ overall.”

This is undoubtedly because modelers make different assumptions regarding parameterizations and the models are tuned to their variable of interest. This suggests that tuning is overriding the first principles and dominating the results!

My supposition is supported by their subsequent FAQ 9.1 remark:

“…, climate models are based, to a large extent [my emphasis], on verifiable physical principles and are able to reproduce many important aspects of past response to external forcing.”

It would seem that tuning is a major weakness of current modeling efforts, along with the necessity for parameterizing energy exchange processes (convection and clouds), which occur at a spatial scale too small to model directly. Tuning is ‘the elephant in the room’ that is rarely acknowledged.

The authors of Chapter 9 acknowledge in Box 9.1 (p. 750):

“…the need for model tuning may increase model uncertainty.”

Exacerbating the situation is the remark in this same section (Box 9.1, p. 749):

“With very few exceptions … modelling centres do not routinely describe in detail how they tune their models. Therefore the complete list of observational constraints toward which a particular model is tuned is generally not available.”

Lastly, the authors clearly question how tuning impacts the purpose of modeling (Box 9.1, p. 750):

“The requirement for model tuning raises the question of whether climate models are reliable for future climate projections.”

I think that it is important to note that buried in Chapter 12 of AR5 (p. 1040) is the following statement:

“In summary, there does not exist at present a single agreed on and robust formal methodology to deliver uncertainty quantification estimates of future changes in all climate variables ….”

This is important because it implies that the quantitative correlations presented below are nominal values with no anchor to inherent uncertainty. That is, if the uncertainties are very large, then the correlations themselves have large uncertainties and should be accepted with reservation.

Further speaking to the issue of reliability are this quote and the following illustration from FAQ 9.1:

“An example of changes in model performance over time is shown in FAQ 9.1, Figure 1, and illustrates the ongoing, albeit modest, [my emphasis] improvement.”

Generally, one should expect a high, non-linear correlation between temperatures and precipitation. It doesn’t rain or snow a lot in deserts, or at the poles (effectively cold deserts). Warm regions, i.e. the tropics, allow for abundant evaporation from the oceans and transpiration from vegetation, and provide abundant precipitable water vapor. Therefore, I’m a little surprised that the following charts show a higher correlation between temperature and spatial patterns than is shown for precipitation and spatial patterns. To the extent that some areas have model temperatures that are higher than measured temperatures, there have to be areas with model temperatures lower than what is measured, in order to meet the tuning constraints of the global average. Therefore, I’m not totally convinced by the claims of high correlations between temperatures and spatial patterns. Might it be that the “surface temperatures” include the ocean temperatures, and because the oceans cover more than 70% of the Earth and don’t have the extreme temperatures of land, the temperature patterns are weighted heavily by sea surface temperatures? That is, would the correlation coefficients be nearly as high if only land temperatures were used?

[Figure: FAQ 9.1, Figure 1 – pattern correlations between models and observations for annual mean surface temperature and precipitation, CMIP3 vs CMIP5]

The reader should note that the claimed correlation coefficients for both the CMIP3 and CMIP5 imply that only about 65% of the precipitation can be predicted by the location or spatial pattern. If precipitation patterns are so poorly explained compared to average surface temperatures, it doesn’t give me confidence that regional temperature patterns will have correlation coefficients as high as the global average.

To read any or all of the IPCC AR5, go to the following hyperlink: http://ift.tt/18ksY34


*Intergovernmental Panel on Climate Change, Fifth Assessment Report: Working Group 1; Climate Change 2013: The Physical Science Basis: Chapter 9 – Evaluation of Climate Models

via Watts Up With That?

http://ift.tt/2yFK0K0

September 27, 2017 at 04:06AM

Rupert Darwall: The Winner Of The Wind-Power Game Won’t Be The Consumer

Offshore wind bidders are in the game to get their hands on expected subsidies of around £300 million a year, totalling more than £4 billion over the 15-year contract period.

“What we’re saying to the politicians, regulators and customers is: let’s keep going – this [wind power] has been a huge success,” Scottish Power’s Keith Anderson gushed yesterday. It certainly has been for Mr Anderson and Scottish Power’s Spanish parent Iberdrola SA. You’ll be hard pressed to find more expensive electricity today than that being produced by Mr Anderson’s wind farms.

Last year, the average selling price that the Big Six energy companies got for electricity from their gas and coal-fired power stations was £45.49 per MWh. By contrast, the average price of electricity from Scottish Power’s wind farms was £117.14 per MWh – more than two and a half times as much – enabling Scottish Power to make a stonking £42.35 profit per MWh, almost as much as the selling price of conventional electricity. Small wonder Mr Anderson wants more wind.

Is wind-generated electricity so much better than conventional electricity as to justify such a huge price premium? As Matt Ridley and John Constable brilliantly explain in The Scottish Wind-Power Racket, Scottish wind power is like the sausage factory that only makes sausages when it wants to, and has to be compensated when it makes sausages you don’t want or the roads are too congested for the sausages to make it to your front door.

The bad news doesn’t end there. On top of the 157 per cent mark-up on the wholesale price of conventionally generated electricity, you have to pay additional delivery charges for the privilege. National Grid is spending nearly £2 billion on extra grid infrastructure to transport Scottish wind power southwards, enabling National Grid to grow its profits and forcing us to pay even more for high-cost renewable energy.
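Checking the arithmetic against the prices quoted above:

$$ \frac{117.14}{45.49} \approx 2.58, \qquad \frac{117.14 - 45.49}{45.49} \approx 1.57 = 157\% $$

so the “more than two and a half times” ratio and the 157 per cent mark-up are two ways of stating the same gap.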

All this is important to bear in mind as we risk being swamped with renewable energy propaganda claiming that we’re on the verge of a wind bonanza. Indeed, one normally level-headed commentator talked of Britain swapping places with Saudi Arabia to become the energy sheikhdom of the northern seas, claiming that the economic argument over wind power had been settled. It hasn’t.

The offshore wind excitement – Keith Anderson is an onshore man – was triggered by the results of the second round of offshore wind contracts. These showed bid prices of between £57.50 and £74.75 per MWh compared to £114-150 per MWh for the projects in the previous round, hence the outpouring of joy at the apparent fall in costs.

Only the story is a little more complicated. According to a timely study by Gordon Hughes, Capell Aris and John Constable, there has been a real, but modest rate of technological improvement, which is only to be expected of a mature technology such as wind power. However, this improvement is offset by the trend towards building wind farms in deeper water as the cheaper, shallower sites get built out.

The key point, though, is that the bid numbers aren’t committed and don’t represent actual costs. As the authors explain, offshore wind bidders are in the game to get their hands on expected subsidies of around £300 million a year, totalling more than £4 billion over the 15-year contract period.

But even more attractive than the expected subsidy stream is the way the deals have been structured. They are one-sided deals, where there is almost no walk-away penalty for non-delivery by the contractor but where the Government has put the customer on the hook for 15 years.

The prospect of a huge upside and a negligible downside is a formula for encouraging what is politely called “strategic bidding”, where bidders sprinkle their bids with fairy dust in the knowledge that they won’t suffer the consequences when their bid numbers turn out to have been too optimistic.

A similar dynamic was at work in bidding for rail franchises. Because of the asymmetric upside and downside risk profile, the process meant that the Government ended up choosing the riskiest bid. In 2012, this led to the collapse of the West Coast franchise award, when the Department for Transport had to rescind its decision to award the franchise to First Group in the face of an action in the High Court by Virgin, who demonstrated how First Group’s numbers didn’t stack up.

The West Coast fiasco should have led the Government to have binned a system where irresponsible bidding is rewarded. Instead, it commissioned a report from a former managing director of a rival train operator who argued that the government should embrace the prospect of failure. “Government should tolerate the idea that a franchise may default,” the Brown Report says. “For franchising to function effectively and for the market to function competitively, Government should accept that there can be failure.” This is a bad approach to running a railway, as it systematically favours the lowest quality, highest risk bidder.

In energy, this is even more irresponsible as it means gambling with the future security of Britain’s energy supply. Only a weak government would have gone ahead with the disastrous Hinkley nuclear deal. It sent a signal around the world that the British government preferred a bad deal to no deal. In these circumstances, it would be hardly surprising that offshore wind developers are queueing up to low-ball bids.

Having won control of vast acreages of the sea, like players on a Monopoly board, wind farm developers can be fairly sure that if construction costs start turning out to be higher than assumed in their bid numbers, the Government will have nowhere else to go. Having bid Old Kent Road and Whitechapel prices, they’ll tell the Government that if it wants to ensure that Britain has enough generating capacity, consumers will have to end up paying Park Lane and Mayfair prices.

Full post

via The Global Warming Policy Forum (GWPF)

http://ift.tt/2hxhaHH

September 27, 2017 at 03:38AM

Ross McKitrick: Despite Denial, Climate Models Are Running Too Hot

Millar et al. attracted controversy for stating that climate models have shown too much warming in recent decades, even though others (including the IPCC) have said the same thing. The model–observation discrepancy is real, and needs to be taken into account, especially when using models for policy guidance.

[…] A number of authors, including the IPCC, have argued that climate models have systematically overstated the rate of global warming in recent decades. A recent paper by Millar et al. (2017) presented the same finding in a diagram of temperature change versus cumulative carbon emissions since 1870.

The horizontal axis is correlated with time, but by using cumulative CO2 instead of time the authors can draw a policy conclusion. The line with circles along it represents the CMIP5 ensemble mean path outlined by climate models. The vertical dashed line represents a carbon level where two thirds of the climate models say that much extra CO2 in the air translates into at least 1.5 °C of warming. The black cross shows the estimated historical cumulative total CO2 emissions and the estimated observed warming. Notably, it lies below the model line. The models show more warming than observed at lower emissions than have occurred. The vertical distance from the cross to the model line indicates that once the models have caught up with observed emissions they will have projected 0.3 °C more warming than has been seen, and will be very close (only seven years away) to the 1.5 °C level, which they associate with 615 GtC. With historical CO2 emissions adding up to 545 GtC, that means we can only emit another 70 GtC, the so-called “carbon budget.”
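The budget arithmetic, with the only added assumption being a round figure of roughly 10 GtC per year for current emissions (the article itself gives only the “seven years” result):

$$ 615\,\mathrm{GtC} - 545\,\mathrm{GtC} = 70\,\mathrm{GtC}, \qquad \frac{70\,\mathrm{GtC}}{\approx 10\,\mathrm{GtC/yr}} \approx 7\ \text{years} $$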

Extrapolating forward based on the observed warming rate suggests that the 1.5 °C level would not be reached until cumulative emissions are more than 200 GtC above the current level, and possibly much higher. The gist of the article, therefore, is that because observations do not show the rapid warming shown in the models, this means there is more time to meet policy goals.

As an aside, I dislike the “carbon budget” language because it implies the existence of an arbitrary hard cap on allowable emissions, which rarely emerges as an optimal solution in models of environmental policy, and never in mainstream analyses of the climate issue except under some extreme assumptions about the nature of damages. But that’s a subject for another occasion.

Were the Millar et al. authors right to assert that climate models have overstated recent warming? They are certainly not the first to make this claim. Fyfe et al. (2013) compared the Hadley Centre temperature series (HadCRUT4) to the CMIP5 ensemble and showed that most models had higher trends over the 1998-2012 interval than were observed:

Original caption: a, 1993–2012. b, 1998–2012. Histograms of observed trends (red hatching) are from 100 reconstructions of the HadCRUT4 dataset. Histograms of model trends (grey bars) are based on 117 simulations of the models, and black curves are smoothed versions of the model trends. The ranges of observed trends reflect observational uncertainty, whereas the ranges of model trends reflect forcing uncertainty, as well as differences in individual model responses to external forcings and uncertainty arising from internal climate variability.

The IPCC’s Fifth Assessment Report also acknowledged model over-estimation of recent warming in their Figure 9.8 and accompanying discussion in Box 9.2. I have updated the IPCC chart as follows. I set the CMIP5 range to gray, and the thin white lines show the (year-by-year) central 66% and 95% of model projections. The chart uses the most recent version of the HadCRUT4 data, which goes to the end of 2016. All data are centered on 1961-1990.

Even with the 2016 El Niño event, the HadCRUT4 series does not reach the mean of the CMIP5 ensemble. Prior to 2000 the longest interval without a crossing between the red and black lines was 12 years, but the current one now runs to 18 years.
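A minimal sketch of how such a crossing interval can be computed, assuming two aligned annual-anomaly arrays; the names `obs` and `model_mean` and the sample values are placeholders, not McKitrick’s actual code or data:

```python
import numpy as np

def longest_run_without_crossing(obs, model_mean):
    """Longest run of consecutive years in which the observed anomaly stays
    on the same side of the ensemble-mean anomaly (i.e., no crossing)."""
    sign = np.sign(np.asarray(obs) - np.asarray(model_mean))
    longest = current = 0
    prev = 0
    for s in sign:
        current = current + 1 if (s != 0 and s == prev) else (1 if s != 0 else 0)
        prev = s
        longest = max(longest, current)
    return longest

# Hypothetical usage with made-up numbers, just to show the call:
obs = np.array([-0.10, -0.20, -0.15, -0.05, -0.02, -0.01])
model_mean = np.zeros_like(obs)
print(longest_run_without_crossing(obs, model_mean))   # -> 6
```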

This would appear to confirm the claim in Millar et al. that climate models display an exaggerated recent warming rate not observed in the data.

Full post

via The Global Warming Policy Forum (GWPF)

http://ift.tt/2ysgYMW

September 27, 2017 at 03:17AM

Wake in Fright: Wind Turbine Infrasound Causes Panic, Fear & Nightmares

The evidence proving the unnecessary damage done to wind farm neighbours by the noise generated by giant industrial wind turbines is mounting by the day: Germany’s Max Planck Institute has identified sub-audible infrasound as the cause of stress, sleep disruption and more (see our post here); and a Swedish group have shown that it’s the …

via STOP THESE THINGS

http://ift.tt/2wTxpWo

September 27, 2017 at 02:31AM