Month: February 2020

Weekend Unthreaded

via JoNova

https://ift.tt/2Uk3SQa

February 1, 2020 at 09:31PM

Fewer Recessions Thanks to the Shale Revolution

Originally published in WorldNetDaily. Guest post by Steve Goreham The United States economy currently enjoys the longest period of expansion in history. The economy has been growing for more than ten and a half years, since the end of the Great Recession of 2007-2009. Behind the current expansion is the rise of the US to…

via Watts Up With That?

https://ift.tt/2OjYhpj

February 1, 2020 at 08:39PM

All gone by the year 2020: is observed glacier melt the basis for the prediction?

This is part 6 in the series on the prediction that glaciers in Glacier National Park will be gone by 2020. You might want to read part 1, part 2, part 3, part 4 and part 5 if you haven’t already.

In a previous post, I walked down memory lane to the time I was about 19 years old. In this post, I will go back to the more tender age of 14 and my first trip abroad. It was a trip to Switzerland, paid for by my parents’ health insurance plan and intended to give working-class youth the opportunity to breathe in the pure mountain air of Switzerland. We were a bunch of kids of the same age and, as you would expect, there were a lot of outdoor activities like sports that we could choose from.

Not being that much into sports and being a more nature-minded boy, I often joined the hiking group that did nature walks in the vicinity of the center where we stayed. At one point, there was the opportunity to see a glacier. I had learned about glaciers in school and was eager to see one. It was not that close by, and after a stiff walk we arrived at the glacier. Boy, was I disappointed. We saw some melting ice chunks and a small patch of ice stretching along the mountain. Although it was nice to see some ice at that time of the year, it was a real anti-climax.

One of the guides then said that a lot of glaciers were shrinking and that this glacier was no exception. This was in the mid 1970s.

I was reminded of this scene from my youth when reading this paragraph in the Hall & Fagre paper “Modeled Climate-Induced Glacier Change in Glacier National Park, 1850–2100” (my emphasis):

On the basis of tree-ring analysis in the forest fronting the Agassiz and Jackson Glaciers, Carrara and McGimsey (1981) estimated that within Glacier National Park the maximum glacial advances during the Little Ice Age occurred just before 1860. Retreat rates derived from their tree-ring data showed that before 1910 glaciers retreated at a modest rate (< 7 m per year). That rate increased dramatically between 1917 and 1926, reaching more than 40 m per year. Above the tree line, Carrara and McGimsey used terminal moraines, naturalists’ notes, photographs, and park records to deduce that the glaciers retreated rapidly (> 100 m per year) between 1926 and 1932 and continued to retreat at more than 90 m per year until 1942. This period of accelerated retreat corresponds to a period of above-average summer temperatures in the climatic record of the region (figure 4). After the mid-1940s the rate slowed, but ablation continued.

That was an interesting read. So the glaciers in Glacier National Park were receding well before the 1970s and apparently even had their biggest melt rate in the mid-1920s and 1930s. It was also interesting because actual numbers were used: less than 7 m/year from just before 1860 until 1910, more than 40 m/year between 1917 and 1926, then more than 100 m/year until 1932, 90 m/year until 1942, and slowing afterwards. This period of accelerated melt occurred well before we put significant amounts of CO2 into the atmosphere.

However, the paper stayed rather vague about what happened after 1942. It only mentioned that there was a slowdown after 1942, but not what the actual rates were between 1942 and the year that the paper was published (2003). Wondering about the current rate, I searched for the missing numbers, but in vain.

Luckily, I found historical data on the area of the named glaciers in Glacier National Park. The data is not optimal though. Only the area of the glaciers was measured and there were only five measurements (“just before 1860”, 1966, 1998, 2005 and 2015). The periods in between the measurements were also vastly different (116, 32, 7 and 10 years), making a comparison rather difficult. There are however some interesting things to extract from it.

A first thing is that, looking at the number of glaciers, I found an explanation for the difference between the number of active glaciers reported around 2010 (25) and now (26). I was wondering about this in the first post in this series. One of my guesses was that one glacier might have been below the threshold of 25 acres and, by growing, came above that threshold in a more recent measurement. However, I could not find a glacier that was measured below the threshold in 2005 and at more than 25 acres in 2015. The explanation for the difference seems to be that two glaciers separated into two masses each. I already knew about the Blackfoot and the Jackson glaciers, but the Grinnell glacier also separated into two masses (the Grinnell and the Salamander glaciers). If those are counted as separate masses, then one arrives at 26; otherwise it would be 25.

A second thing that became clear from the area information is that the glaciers in Glacier National Park have experienced a significant area loss since 1850. On closer inspection, those glaciers had lost more than half of their area by 1966. According to the 2003 paper, most of that loss occurred in the 1920s, 1930s and early 1940s. Apparently, those glaciers were already well on their way to disappearing by the time we started to put significant amounts of CO2 into the atmosphere (1950s).

Last but not least, something I wanted to look at more closely is the retreat in more recent times, specifically in relation to the 2020 prediction that was made in 2009. There was some controversy about why the 2020 revision seemed necessary. Most articles describing the decision claimed that observational evidence of melt was the direct cause. For example, this was given as the reason for the 2020 estimate in an interview with a colleague of Fagre:

After publication of that report, field observations showed glacier melt to be years ahead of the projections, causing scientists in 2010 to revise their “end date” to 2020.

This somehow suggests that it was the observation of glacier melt between the publication of the report (2003) and 2010 that led to the 2020 estimate (it was in fact March 2009 when the 2020 estimate was broadcast). However, Fagre told the journalists something else: the global temperature increase projections from the IPCC were used as input to their model, but the observed temperature increase in the park turned out to be twice as great, and therefore a revision to 2020 (from 2030) was needed. Now that we know the 2020 estimate was wrong, this theory of increasing melt in that period seems very unlikely.

So I wondered whether any increasing area loss would show up in the historical data just before the end-date revision was made. Meaning: is there an increasing loss when comparing 1966 → 1998 with 1998 → 2005?

When I look at the average annual decrease rate between 1966 and 1998, I find an average loss of 0.88% per year. Between 1998 and 2005, that number was 0.76% per year. Contrary to what I expected, the area loss did the opposite and decreased slightly in the period just before the revision was made. That does not bode well for the theory of “field observations of glacier melt years ahead of the projections”. That would mean there had been a strong increase in the melt rate between 2005 and 2020, which seems not to be the case: the next measurement, in 2015, showed that the average loss merely reverted back to the rate seen between 1966 and 1998.

But then, maybe we should look specifically at the two glaciers that were investigated in the 2003 paper that put forward the 2030 estimate: the Blackfoot and Jackson glaciers. When I do that, the difference is even more profound. The Blackfoot glacier even grew slightly (from 1,625,124 m2 measured in 1998 to 1,630,173 m2 in 2005). Its rate went from a loss of 0.37% per year between 1966 and 1998 to a very tiny increase of 0.04% per year between 1998 and 2005. The Jackson glacier went from a decrease rate of 1.41% per year to a decrease rate of 0.15% per year over the same periods (linear trend):

Glacier National Park area Blackfoot and Jackson 1850-2005
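The rate arithmetic above can be sketched in a few lines. One assumption on my part: the post does not spell out its method, so a simple linear rate relative to the starting area is used here, not a compound rate.

```python
# Average annual area change as a fraction of the starting area.
# Assumption: a simple linear rate, since the post does not state its method.
def annual_rate(area_start, area_end, years):
    return (area_end - area_start) / area_start / years

# Blackfoot glacier, areas in m^2 as quoted in the text
rate = annual_rate(1_625_124, 1_630_173, 2005 - 1998)
print(f"{rate:+.2%} per year")  # → +0.04% per year, the "very tiny increase"
```

Applied to the 1966 and 1998 areas, the same function should give the 0.37% per year loss quoted above.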

That is a decrease in the rate since the measurement of 1966 (Blackfoot glacier) and 1998 (Jackson glacier). What was not known in 2005 is that this slower trend continued, as shown by the measurement of 2015. If that trend persisted until the present, then these glaciers would not be gone any time soon (linear trend):

Glacier National Park area Blackfoot and Jackson 1850-2015

But then, if the two studied glaciers were unlikely to be gone by 2020, how many of the other glaciers were projected to disappear based on the data known at the time (up to 2005)? To figure that out, I entered the other values in Calc, calculated the linear trend line for the data between 1850 and 2005, and made the projection for 2020 (linear trend):

Glacier National Park area overview 1850-2005
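The Calc procedure described above (fit a linear trend through the area measurements and extrapolate to 2020) can be sketched in Python. The area values below are made-up placeholders for illustration, since the post does not reproduce the full dataset:

```python
import numpy as np

# Measurement years and areas (acres). These numbers are illustrative
# placeholders, NOT the actual Glacier National Park dataset.
years = np.array([1850.0, 1966.0, 1998.0, 2005.0])
areas = np.array([400.0, 120.0, 60.0, 50.0])

# Least-squares linear trend, extrapolated to 2020
slope, intercept = np.polyfit(years, areas, 1)
projected_2020 = slope * 2020 + intercept
print(f"trend: {slope:.2f} acres/year, projected 2020 area: {projected_2020:.1f} acres")
```

With real data, a glacier counts as "projected gone by 2020" when the extrapolated area falls to zero (or below the 25-acre threshold, depending on the definition of "gone").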

I ended up with two glaciers that were projected to be gone by 2020: the Boulder glacier and the Thunderbird glacier. However, the Boulder glacier had already dropped below the 25-acre threshold somewhere between 1966 and 1998, and had therefore not been considered an active glacier for decades by then.

The Thunderbird glacier was, just barely, considered an active glacier in 2005, but it also showed a slowing trend in recent times. It started as a rather large glacier according to the measurement of 1850 and had lost 7/8 of its area by the measurement of 1966; based on the later measurements, the decrease then went much more slowly, almost flatlining between 1966 and 2005 (linear trend):

Glacier National Park area Thunderbird 1850-2005

Considering that flat trend, it should have been clear in 2005 that this glacier was not likely to be gone by 2020. Later, the 2015 measurement showed that trend continuing at the same rate, so the Thunderbird glacier was also unlikely to be gone by 2020, unless there was a drastic change after 2015.

That is still zero glaciers gone by 2020 according to the data that was known at the time of the prediction in 2009.

But then, their definition of “glaciers gone” could be all glaciers dropping below 25 acres (the threshold for an active glacier). There are 11 glaciers that would be projected smaller than 25 acres in 2020, but only two of them were still considered active in 2005: the Two Ocean glacier and the Whitecrow glacier. The former showed the same slowing of the melting trend (since 1998); if that decrease rate stayed the same, the Two Ocean glacier would also not be projected gone by 2020. The Whitecrow glacier, however, was not much above the threshold in 2005 and showed a much less pronounced slowing at the end, so that glacier would be projected below the active-glacier threshold in 2020 based on its trend until 2005 (also, judging by the 2015 data, it scraped just above the threshold, so it is very likely to be below it in later measurements).

I think it is safe to conclude that observed melting between 2003 and 2009 was not the basis of the prediction that “all glaciers would be gone by 2020” (even if we define “gone” as “less than 25 acres”), at least when it comes to the area measurements and what was known by 2005. Something else seems to have been the trigger, which makes Fagre’s explanation (that it was the, ahem, correction from global to local temperature increase) the more likely reason for the revision.

In closing: although the data is far from optimal and not suitable for the purposes I originally wanted to use it for, I learned a lot by looking at it.

via Trust, yet verify

https://ift.tt/38TKpd8

February 1, 2020 at 05:41PM

Analysis of a carbon forecast gone wrong: the case of the IPCC FAR

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on January 31, 2020 by curryja |

by Alberto Zaragoza Comendador

The IPCC’s First Assessment Report (FAR) made forecasts or projections of future concentrations of carbon dioxide that turned out to be too high.

From 1990 to 2018, the increase in atmospheric CO2 concentrations was about 25% higher in FAR’s Business-as-usual forecast than in reality. More generally, FAR’s Business-as-usual scenario expected much more forcing from greenhouse gases than has actually occurred, because its forecast for the concentration of said gases was too high; this was a problem not only for CO2, but also for methane and for gases regulated by the Montreal Protocol. This was a key reason FAR’s projections of atmospheric warming and sea level rise likewise have been above observations.

Some researchers and commentators have argued that this means FAR’s mistaken projections of atmospheric warming and sea level rise do not stem from errors in physical science and climate modelling. After all, emissions are for climate models an input, not an output. Emissions depend largely on economic growth, and can also be affected by population growth, intentional emission reductions (such as those implemented by the aforementioned Montreal Protocol), and other factors that lie outside the field of physical science. Under this line of reasoning, it makes no sense to blame the IPCC for failing to predict the right amount of atmospheric warming and sea level rise, because that would be the same as blaming it for failing to predict emissions.

This is a good argument regarding Montreal Protocol gases, as emissions of these were much lower than forecasted by the IPCC. However, it’s not true for CO2: the over-forecast in concentrations happened because in FAR’s Business-as-usual scenario over 60% of CO2 emissions remain in the atmosphere, which is a much higher share than has been observed in the real world. In fact, real-world CO2 emissions were probably higher than forecasted by FAR’s Business-as-usual scenario. The only reason one cannot be sure of this is that there is great uncertainty around emissions of CO2 from changes in land use. For the rest of CO2 emissions, which chiefly come from fossil fuel consumption and are known with much greater accuracy, there is no question that they were higher in reality than projected by the IPCC.

In the article I also show that the error in FAR’s methane forecast is so large that it can only be blamed on physical science – any influence from changes in human behaviour or economic activity is dwarfed by the uncertainties around the methane cycle. Thus, errors or deficiencies in physical science are to blame for the over-estimation in CO2 and methane concentration forecasts, along with the corresponding over-estimation in forecasts of greenhouse gas forcing, atmospheric warming, and sea level rise. Human emissions of greenhouse gases may indeed be unpredictable, but this unpredictability is not the reason the IPCC’s projections were wrong.

Calculations regarding the IPCC’s First Assessment Report

FAR, released in 1990, made projections according to a series of four scenarios. One of them, Scenario A, was also called Business-as-usual and represented just what the name implies: a world that didn’t try to mitigate emissions of greenhouse gases. In FAR’s Summary for Policymakers, Figure 5 offered projections of greenhouse-gas concentrations out to the year 2100, according to each of the scenarios. Here’s the panel showing CO2:

I’ve digitized the data, and the concentration in the chart rises from 354.8ppm in 1990 to 422.75 by 2018; that’s a rise of 67.86 ppm. Please notice that slight inaccuracies are inevitable when digitizing, especially if it’s a document, like FAR, that was first printed, then scanned and turned into a PDF.

For emissions, the Annex to the Summary for Policymakers offers a not-very-good-looking chart; a better version is this one (Figure A.2(a) page 331, the Annex to the whole report):

Some arithmetic is needed here. The concentrations chart is in parts per million (ppm), whereas the emissions chart is in gigatons of carbon (GtC); one gigaton equals a billion metric tons. The molecular mass of CO2 (44) is 3.67 times that of carbon (12). Using C or CO2 as the unit is merely a matter of preference – both measures represent the same thing; figures expressed as C are simply 3.67 times smaller than the same figures expressed as CO2. This means that, while one ppm of CO2 corresponds to approximately 7.81 gigatons of said gas, if we express emissions as GtC rather than GtCO2 the equivalent figure is 7.81 / 3.67 = 2.13.

Under FAR’s Business-as-usual scenario, cumulative CO2 emissions between 1991 and 2018 were 237.61GtC, which is equivalent to 111.55ppm. Since concentrations increased by 67.86ppm, that means 60.8% of CO2 emissions remained in the atmosphere.
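The arithmetic of the last two paragraphs, reproduced as a quick sketch (figures as quoted in the text):

```python
# 1 ppm of CO2 weighs about 7.81 GtCO2, i.e. 7.81 / 3.67 ≈ 2.13 GtC
GTC_PER_PPM = 2.13

far_bau_emissions_gtc = 237.61   # FAR Business-as-usual, cumulative 1991-2018
far_conc_rise_ppm = 67.86        # digitized rise in FAR's concentration chart

emissions_ppm = far_bau_emissions_gtc / GTC_PER_PPM
airborne_fraction = far_conc_rise_ppm / emissions_ppm
print(f"{emissions_ppm:.2f} ppm emitted, airborne fraction {airborne_fraction:.1%}")
# → 111.55 ppm emitted, airborne fraction 60.8%
```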

Now, saying that a given percentage of emissions “remained in the atmosphere” is just a way to express what happens in as few words as possible; it’s not a literally correct statement. Rather, all CO2 molecules (whether released by humankind or not) are always being moved around in a very complex cycle: some CO2 molecules are taken up by vegetation, others are released by the ocean into the atmosphere, and so on. There is also some interaction with other gases; for example, methane has an atmospheric lifespan of only a decade or so because it decays into CO2. What matters is that, without man-made emissions, CO2 concentrations would not increase. Whether the CO2 molecules currently in the air are “our” molecules, the same ones that came out of burning fossil fuels, is irrelevant.

And that’s where the concept of airborne fraction comes in. The increase in concentrations of CO2 has always been less than man-made emissions, so it could be said that only a fraction of our emissions remains in the atmosphere. Saying that “the airborne fraction of CO2 is 60%” may be technically incorrect, but it rolls off the keyboard more easily than “the increase in CO2 concentrations is equivalent to 60% of emissions”. And indeed the term is commonly used in the scientific literature.

Anyway, we’ve seen what FAR had to say about CO2 emissions and concentrations. Now let’s see what nature said.

Calculations regarding the real world

Here I use two sources on emissions:

  • BP’s Energy Review 2019, which has data up to 2018.
  • Emission estimates from the Lawrence Berkeley National Laboratory. These are only available until 2014.

BP counts only emissions from fossil fuel combustion: the burning of petroleum, natural gas, other hydrocarbons, and coal. Both sources are in very close agreement as far as emissions from fossil fuel combustion are concerned: for the 1991-2014 period, LBNL’s figures are 1% higher than BP’s. The LBNL numbers also include cement manufacturing, because the chemical reaction necessary for producing cement releases CO2; I couldn’t find a similarly authoritative source with more recent data for cement.

There is also the issue of flaring, or burning of natural gas by the oil-and-gas industry itself; these emissions are included in LBNL’s total. BP’s report does not feature the word “flaring”, and it seems unlikely they would be included, because BP’s method for arriving at global estimates of emissions is by aggregating national-level data on fossil fuel consumption. Now, I’ll admit I haven’t emailed every country’s energy statistics agency to be sure of the issue, but flared gas is by definition gas that did not reach energy markets; it’s hard to see why national agencies would include this in their “consumption” numbers, and many countries would have trouble even knowing how much gas is being flared. For what it’s worth, according to LBNL’s estimate flaring makes up less than 1% of global CO2 emissions.

For concentrations, I use data from the Mauna Loa Observatory. CO2 concentration in 1990 was 354.39ppm, and by 2014 this had grown to 398.65 (an increase of 44.26ppm). By 2018, concentrations had reached a level of 408.52 ppm, which meant an increase of 54.13 ppm since 1990.

It follows that the airborne fraction according to these estimates was:

  • In 1991-2014, emissions per LBNL were 182.9GtC, which is equivalent to 85.88 ppm. Thus, the estimated airborne fraction was 44.26 / 85.88 = 51.5%
  • In 1991-2018, emissions according to BP were 764GtCO2, equivalent to 97.82ppm. We get an airborne fraction of 54.13 / 97.82 = 55.3%
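The two bullet points can be reproduced with the same conversion factors as before:

```python
GTC_PER_PPM = 2.13      # GtC per ppm of CO2
GTCO2_PER_PPM = 7.81    # GtCO2 per ppm of CO2

# LBNL, 1991-2014: 182.9 GtC emitted, concentrations rose 44.26 ppm
af_lbnl = 44.26 / (182.9 / GTC_PER_PPM)
# BP, 1991-2018: 764 GtCO2 emitted, concentrations rose 54.13 ppm
af_bp = 54.13 / (764 / GTCO2_PER_PPM)

print(f"airborne fraction: LBNL {af_lbnl:.1%}, BP {af_bp:.1%}")
# → airborne fraction: LBNL 51.5%, BP 55.3%
```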

Unfortunately, there is a kind of emissions that isn’t counted by either LBNL or BP. So total emissions have necessarily been higher than estimated above, and the real airborne fraction has been lower – which is what the next section is about.

Comparison of FAR with observations

This comparison has to start with two words: land use.

Remember what we said about the airborne fraction of CO2: it’s simply the increase in concentrations over a given period, divided by the emissions that took place over that period. If you emit 10 ppm and concentrations increase by 6ppm, then the airborne fraction is 60%. But if you made a mistake in estimating emissions and those had been 12ppm, then the airborne fraction in reality would be 50%.

This is an issue because, while we know concentrations with extreme accuracy, we don’t know emissions nearly that well. In particular, there is great uncertainty around emissions from land use: carbon released and stored due to tree-cutting, agriculture, etc. The IPCC itself acknowledged in FAR that estimates of these emissions were hazy; on page 13 it provided the following emission estimates for the 1980-89 period, expressed in GtC per year:

  • Emissions from fossil fuels: 5.4 ± 0.5
  • Emissions from deforestation and land use: 1.6 ± 1.0

So, even though emissions from fossil fuels were believed to be three-and-a-half times higher than those from land use, in absolute terms the uncertainty around land use emissions was double that around fossil fuels.

(FAR didn’t break down emissions from cement; these were a smaller share of total emissions in 1990 than today, and presumably were lumped in with fossil fuels. By the way, I believe the confidence intervals reflect a 95% probability, but haven’t found any text in the report actually spelling that out).

Perhaps there was great uncertainty around land-use emissions back in 1990, but has this since been reduced? Well, the IPCC’s Assessment Report 5 (AR5) is a bit old now (it was published in 2013), but it didn’t look like the uncertainty had been reduced much. More specifically, Table 6.1 of the report gives a 90% confidence interval for CO2 emissions from 1980 to 2011, and the confidence interval is the same in every period: ± 0.8GtC/year.

Still, it’s possible to make some comparisons. Let’s go first with LBNL: for 1991-2014, emissions according to FAR’s Business-as-usual scenario would be 196.91GtC, which is 14.17GtC more than LBNL’s numbers show. In other words: if real-world land use emissions over the period had been 14.17GtC, then emissions according to FAR would have been the same as according to LBNL. That’s only 0.6GtC/year, which is well below AR5’s best estimate of land use emissions (1.5GtC/year in the 1990s, and about 1GtC/year in the 2000s).

For BP, emissions of 764.8GtCO2 convert to 208.58GtC. Now, to this figure at a minimum we’d have to add cement emissions from 1991-2014, which were 7.46GtC. By 2014 emissions from cement were well above 0.5GtC, so even a conservative estimate would put the additional emissions until 2018 at 2GtC, or 9.46GtC in total. This would mean BP’s figures, when adding cement production, give a total of 218.04GtC. I don’t consider flaring here, but according to LBNL those emissions were only about 1GtC.

Therefore BP’s fossil-fuel-plus-cement emissions would be 19.57 GtC lower than the figure for FAR’s Business-as-usual scenario (237.61GtC). For BP’s emissions to have matched FAR’s, real-world land-use emissions would have needed to average 0.7 GtC/year. Again, it seems real-world emissions exceeded this rate, and indeed the figures from AR5’s Figure 6.1 suggest total emissions for 1991-2011 alone were around 25GtC. But just to be clear: it is only likely that real-world emissions exceeded FAR’s Business-as-usual scenario. The uncertainty in land-use emissions means one can’t be sure of that.
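A sketch of the bookkeeping in the last two paragraphs: the gap between FAR's Business-as-usual emissions and the fossil-fuel-plus-cement estimates, expressed as the land-use emission rate that would close it. (Small differences from the article's quoted figures come from rounding in the conversion factor.)

```python
# FAR Business-as-usual cumulative emissions (GtC)
far_1991_2014 = 196.91
far_1991_2018 = 237.61

# LBNL comparison, 1991-2014
gap_lbnl = far_1991_2014 - 182.9              # ≈ 14 GtC
per_year_lbnl = gap_lbnl / (2014 - 1990)      # ≈ 0.6 GtC/year

# BP comparison, 1991-2018: fossil fuels converted to GtC, plus cement
bp_gtc = 764.8 / 3.67 + 9.46                  # ≈ 218 GtC
gap_bp = far_1991_2018 - bp_gtc               # ≈ 19.6 GtC
per_year_bp = gap_bp / (2018 - 1990)          # ≈ 0.7 GtC/year

print(f"land use would need to average {per_year_lbnl:.1f} (LBNL) "
      f"or {per_year_bp:.1f} (BP) GtC/year to match FAR")
```

Both rates sit below AR5's best estimates of land-use emissions, which is the basis for the conclusion that real-world emissions likely exceeded FAR's scenario.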

I’ll conclude this section by pointing out that FAR didn’t break down how many tons of CO2 would come from changes in land use as opposed to fossil fuel consumption, but its description of the Business-as-usual scenario says “deforestation continues until the tropical forests are depleted”. While this statement isn’t quantitative, it seems FAR did not expect the apparent decline in deforestation rates seen since the 1990s. If emissions from land use were lower than expected by FAR’s authors, yet total emissions appear to have been higher, the only possible conclusion is that emissions from fossil fuels and cement were greater than FAR expected.

The First Assessment Report greatly overestimated the airborne fraction of CO2

The report mentions the airborne fraction only a couple of times:

  • For the period from 1850 to 1986, airborne fraction was estimated at 41 ± 6%
  • For 1980-89, its estimate is 48 ± 8%

So according to the IPCC itself, the airborne fraction of CO2 in observations at the time of the report’s publication was 48%, with a confidence interval going no higher than 56%. But the forecast for the decades immediately following the report implied a fraction of 60 or 61%. There is no explanation or even mention of this discrepancy in the report; the closest the IPCC came is this line:

“In model simulations of the past CO2 increase using estimated emissions from fossil fuels and deforestation it has generally been found that the simulated increase is larger than that actually observed”

Further evidence of FAR’s over-estimate of the airborne fraction comes from looking at Scenario B. Under this projection, CO2 emissions would slightly decline from 1990 on, and then make a likewise slight recovery; in all, annual emissions over 1991-2018 would be on average lower than in 1990. But even under this scenario CO2 concentrations would reach 401 ppm by 2018, compared with 408.5ppm in reality and 422ppm in the Business-as-usual scenario.

So real-world CO2 emissions were probably higher than under the IPCC’s highest-emissions scenario, yet concentrations ended up closer to a different scenario in which emissions declined from their 1990 level.

The error in the IPCC’s forecast of methane concentrations was enormous

In this case the calculations I’ve done are rougher than for CO2, but you’ll see it doesn’t really matter. This chart is from FAR’s Summary for Policymakers, Figure 5:

From a 1990 level just above 1700 parts per billion (ppb), concentrations reach about 2500 ppb by 2018. Even in Scenario B methane reaches 2050 ppb by that year. In the real world concentrations were only 1850 ppb. In other words:

  • The increase in concentrations in Scenario B was about two-and-a-half times larger than in reality
  • For Scenario A, the concentration increase was five or six times bigger than in the real world
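The two ratios can be checked with the approximate levels read off the charts (1990 ≈ 1700 ppb; by 2018 ≈ 1850 ppb observed, ≈ 2050 ppb in Scenario B, ≈ 2500 ppb in Scenario A):

```python
# Methane concentration rises 1990-2018, in ppb (approximate, read off charts)
observed_rise = 1850 - 1700      # real world: ≈ 150 ppb
scenario_b_rise = 2050 - 1700    # FAR Scenario B: 350 ppb
scenario_a_rise = 2500 - 1700    # FAR Scenario A (Business-as-usual): 800 ppb

print(round(scenario_b_rise / observed_rise, 1))  # ≈ 2.3, "about two-and-a-half times"
print(round(scenario_a_rise / observed_rise, 1))  # ≈ 5.3, "five or six times"
```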

The mismatch arose because methane concentrations were growing very quickly in the 1980s, though a slowdown was already apparent; this growth slowed further in the 1990s, and essentially stopped in the early 2000s. Since 2006 or so methane concentrations have been growing again, but at nowhere near the rates forecasted by the IPCC.

Readers may be wondering if perhaps FAR’s projections of methane emissions were very extravagant. Not so: the expected growth in yearly emissions between 1990 and 2018 was about 30%, far less than for CO2. See Figure A.2(b), from FAR’s Annex, page 331:

There’s an obvious reason the methane miss is even more of a head-scratcher. One of the main sources of methane is the fossil fuel industry: methane leaks out of coal mines, gas fields, etc. But fossil fuel consumption grew very quickly during the forecast period – indeed faster than the IPCC expected, as we saw.

It’s also interesting that the differences between emission scenarios were smaller for methane than for CO2. This may reflect a view on the part of the IPCC (which I consider reasonable) that methane emissions are less actionable than those of CO2. If you want to cut CO2 emissions, you burn less fossil fuel: difficult, yet simple. If by contrast you want to reduce methane emissions, it probably helps to reduce fossil fuel consumption, but there are also significant methane emissions from cattle, landfills, rice agriculture, and other sources; even with all the uncertainty around total methane emissions, more or less everybody agrees that non-fossil-fuel emissions are a more important source for methane than for CO2. And it’s not clear how to measure non-fossil-fuel emissions, so it’s far more difficult to act on them.

CO2 and methane appear to account for most of the mistake in FAR’s over-estimate of forcings

Disclosure: this is the most speculative section of the article. But as with land-use emissions before, it’s a case in which one can make some inferences even with incomplete data.

Let’s start with a paper by Zeke Hausfather and three co-authors; I hope the co-authors don’t feel slighted – I will refer simply to “Hausfather” for short.

Hausfather sets out to answer a question: how well have projections from old climate models done, when accounting for the differences between real-world forcings and projected forcings? This is indeed a very good question: perhaps the IPCC back in 1990 projected more atmospheric warming than has actually happened only because its forecast of forcing was too aggressive. Perhaps the IPCC’s estimates of climate sensitivity, which is to say how much air temperature increases in response to a given level of radiative forcing, were spot on.

(Although Hausfather’s paper focuses on atmospheric temperature increase, the over-projection in sea level rise has been perhaps worse. FAR’s Business-as-usual scenario expected 20 cm of sea level rise between 1990 and 2030, and the result in the real world is looking like it will be about 13 cm).

Looking at the paper’s Figure 2, there are three cases in which climate models made too-warm projections, yet after accounting for differences in realized-versus-expected forcing this effect disappears; the climate models appear to have erred on the warm side because they assumed excessively high forcing. Of the three cases, the IPCC’s 1990 report has arguably had the biggest impact on policy and scientific discussions. And for FAR, the authors estimate (Figure 1) that forecasted forcing was 55% greater than realized: the trend is 0.61 watts per square meter per decade, versus 0.39 in reality. Over the 1990-2017 period, the difference in trends adds up to 0.59 watts per square meter.
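In numbers, using the trends just quoted:

```python
far_trend, real_trend = 0.61, 0.39      # W/m^2 per decade (Hausfather, Figure 1)
decades = (2017 - 1990) / 10            # the 1990-2017 period

excess = far_trend / real_trend - 1      # ≈ 0.56; the paper rounds to "55% greater"
forcing_gap = (far_trend - real_trend) * decades   # ≈ 0.59 W/m^2
print(f"forecasted forcing trend about {excess:.0%} greater; "
      f"cumulative gap {forcing_gap:.2f} W/m^2")
```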

Now, there is a lot to digest in the paper, and I hope other researchers dig through the numbers as carefully as possible. I’m just going to assume the authors’ calculations of forcing and temperature increase are correct, but I want to mention why a calculation like this (comparing real-world forcings with the forcings expected by a 1990 document) is a minefield. Even if we restrict ourselves to greenhouse gases, ignoring harder-to-quantify forcing agents such as aerosols, there are at least three issues which make an apples-to-apples comparison difficult. (Hausfather’s supplementary information seems to indicate they didn’t account for any of this; they simply took the raw forcing values from FAR.)

First, some greenhouse gases simply weren’t considered in old projections of climate change. The most notable case in FAR may be tropospheric ozone. According to the estimate of Lewis & Curry (2018), forcing from this gas increased by 0.067 w/m2 between 1990 and 2016, the last year for which they offer estimates (over the last decade of data, forcing was still rising by about 0.002 w/m2 per year). Just to be sure, you can check Figure 2.4 in FAR (page 56), as well as Table 2.7 (page 57). These do not include tropospheric ozone, yet you’ll see the sum of the different greenhouse gases featured equals the total greenhouse forcing expected in the different scenarios: the IPCC did not account for tropospheric ozone at all.
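Carrying the Lewis & Curry figures one year forward to 2017 (a crude extrapolation, using only the numbers quoted above) puts the omitted tropospheric ozone forcing at roughly:

```python
# Tropospheric ozone forcing omitted by FAR, per Lewis & Curry (2018) as quoted above
increase_1990_2016 = 0.067  # W/m2
recent_rate = 0.002         # W/m2 per year, over the last decade of data

# crude one-year extrapolation to cover the full 1990-2017 period
increase_1990_2017 = increase_1990_2016 + recent_rate
print(round(increase_1990_2017, 2))  # ~0.07 W/m2
```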

Second, the classification of forcings is somewhat subjective and changes over time. For example, the depletion of stratospheric ozone, colloquially known as the ‘ozone hole’, has a cooling effect (a negative forcing). So, when you see an estimate of the forcing of CFCs and similar gases, you have to ask: is it a gross figure, looking at CFCs only as greenhouse gases? Or is it a net figure, accounting for both their greenhouse effect and their impact on the ozone layer? In modern studies stratospheric ozone has normally been accounted for as a separate forcing, but I’m not sure how FAR did it (no, I haven’t read the whole report).

Finally, even when greenhouse gases were considered and their effects had a more-or-less-agreed classification, our estimates of their effect on the Earth’s radiative budget change over time. For the best-understood forcing agent, CO2, FAR estimated a forcing of 4 w/m2 for a doubling of atmospheric concentration (the forcing from CO2 is approximately the same each time concentration doubles). In 2013, the IPCC’s Assessment Report 5 estimated 3.7 w/m2, and some studies now say it’s actually 3.8 w/m2. These differences may seem minor, but they’re yet another way the calculation can go wrong. And for smaller forcing agents the situation is worse: methane forcing, for example, suffered a major revision just three years ago.
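For CO2 these revisions are easy to see with the widely used simplified forcing expression dF = alpha * ln(C/C0) from Myhre et al. (1998); the coefficient alpha is what shifts between assessments. A minimal sketch:

```python
import math

def co2_forcing(c, c0, alpha=5.35):
    """Simplified CO2 forcing (Myhre et al. 1998): dF = alpha * ln(C/C0), in W/m2."""
    return alpha * math.log(c / c0)

# Forcing from a doubling of CO2 under the post-Myhre coefficient
print(round(co2_forcing(2, 1), 2))  # ~3.71 W/m2, close to AR5's 3.7

# FAR's 4 W/m2 per doubling corresponds to alpha = 4 / ln(2)
print(round(4 / math.log(2), 2))    # ~5.77
```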

Is there a way around the watts-per-square-meter madness? Yes. While I previously described climate sensitivity as the response of atmospheric temperatures to an increase in forcing, in practice climate models estimate it as the response to an increase in CO2 concentrations, and this is also the way sensitivity is usually expressed in studies estimating its value in the real world. Imagine the forcing from a doubling of atmospheric CO2 is 3.8w/m2 in the real world, but some climate model, for whatever reason, produces a value of 3w/m2. Obviously, then, what we’re interested in is not how much warming we’ll get per w/m2, but how much warming we’ll get from a doubling of CO2.

Thus, for example, the IPCC’s Business-as-usual forecast of 9.90 w/m2 in greenhouse forcing by 2100 (from a 1990 baseline) could instead be expressed as the equivalent of 2.475 doublings of CO2 (the result of dividing 9.90 by 4). Hausfather’s paper, or a follow-up, could then apply this conversion to all models. Just using some made-up numbers as an illustration, it may be that FAR’s Business-as-usual forecast expected forcing between 1990 and 2017 equivalent to 0.4 doublings of CO2, while the realized forcing was equivalent to 0.26 doublings. FAR would still have overshot real forcings by around 55%, but the figure would be easier to interpret than a raw w/m2 measure.
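The conversion is trivial but worth making explicit; this sketch uses FAR’s 4 w/m2 per doubling and the made-up illustrative numbers from the paragraph above:

```python
def equivalent_doublings(forcing_wm2, f2x=4.0):
    """Express a cumulative forcing as equivalent doublings of CO2 (f2x = W/m2 per doubling)."""
    return forcing_wm2 / f2x

# FAR Business-as-usual forcing by 2100 (from a 1990 baseline)
print(equivalent_doublings(9.90))  # 2.475 doublings

# Made-up illustrative comparison from the text: 0.4 vs 0.26 doublings
print(round(0.4 / 0.26 - 1, 2))    # ~0.54, i.e. a ~55% overshoot
```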

Now, even with all these caveats, one can make some statements. First, there are seven greenhouse gases counted by FAR in its scenarios, but one of them (stratospheric water vapor) is created through the decay of another (methane). I haven’t checked whether water vapor forcing according to FAR was greater than in the real world, but if it was, the blame lies with FAR’s inaccurate methane forecast; in any case, stratospheric H2O is a small forcing agent and did not play a major role in FAR’s forecasts.

Then there are three gases regulated by the Montreal Protocol, which I will consider together: CFC-11, CFC-12, and HCFC-22. That leaves us with four sources to be considered: CO2, methane, N2O, and the Montreal Protocol gases. Previous sections of the article already covered CO2 and methane, so let’s turn to the two remaining sources of greenhouse forcing. I use 2017 as the finishing year, for comparison with Hausfather’s paper. The figures for real-world concentrations and forcings come from NOAA’s Annual Greenhouse Gas Index (AGGI).

For N2O, Figure A.3 on FAR’s page 333 shows concentrations rising from about 307 ppb in 1990 to 334 ppb by 2017. This is close to the observed level (2018 concentrations averaged about 332 ppb). And even a big deviation in the forecast of N2O concentration wouldn’t have a major effect on forcing; FAR’s Business-as-usual scenario expected forcing of only about 0.036 w/m2 per decade from this gas, which would mean roughly 0.1 w/m2 for the whole 1990-2017 period. Deviations in the N2O forecast may account for about 0.01 w/m2 of the error in FAR’s forcing projection, so there’s no need to dwell on this gas.
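In numbers (just restating the figures above):

```python
# N2O per FAR's Business-as-usual scenario, as quoted above
trend = 0.036              # W/m2 per decade
years = 2017 - 1990
total = trend * years / 10
print(round(total, 2))     # ~0.1 W/m2 over the whole period
```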

Finally, we have Montreal Protocol gases and their replacements: CFCs, HCFCs, and in recent years HFCs. To get a sense of their forcing effect in the real world, I check NOAA’s AGGI and sum the columns for CFC-11, CFC-12, and the 15 minor greenhouse gases (almost all of that is HCFCs and HFCs). The forcing thus aggregated rises from 0.284 w/m2 in 1990 to 0.344 w/m2 in 2017; in other words, the increase in forcing from these gases between those years was 0.06 w/m2.

Here’s where Hausfather and co-authors have a point: the world really did emit far smaller quantities of CFCs and HCFCs than FAR’s Business-as-usual projection assumed. In FAR’s Table 2.7 (page 57), the aggregated forcing of CFC-11, CFC-12 and HCFC-22 rises by 0.24w/m2 between 2000 and 2025. And the IPCC expected accelerating growth: the sum of the forcings from these three gases would then increase by 0.28w/m2 between 2025 and 2050.

A rough calculation of what this implies for forcing between 1990 and 2017 follows. Over 2000-2025, FAR expected Montreal Protocol gases to add 0.0096 w/m2 of forcing per year (0.24 divided by 25); multiplied by the 27 years we’re analysing, that would mean 0.259 w/m2. However, forcing growth was supposed to be slower over the first period than later, as we’ve seen; Table 2.6 on FAR’s page 54 also implies smaller growth in 1990-2000 than after 2000. So I round the figure down to 0.25 w/m2; this is probably still higher than the increase FAR was actually forecasting, but I cannot realistically make an estimate down to the last hundredth of a watt, so it will have to do.

If FAR expected 1990-2017 forcing from Montreal Protocol gases of 0.25w/m2, that would mean the difference between the real world and FAR’s Scenario A was 0.25 – 0.06 = 0.19w/m2. I haven’t accounted here for these gases’ effect on stratospheric ozone, as it wasn’t clear whether that effect was already included in FAR’s numbers. If stratospheric ozone depletion hadn’t been accounted for, then the deviation between FAR’s numbers and reality would be smaller.
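Putting the last three paragraphs together, as a rough sketch using only the figures quoted above:

```python
# FAR's implied 1990-2017 forcing from Montreal Protocol gases
far_rate = 0.24 / 25            # W/m2 per year expected over 2000-2025
naive_total = far_rate * 27     # ~0.259 W/m2 over 1990-2017
far_estimate = 0.25             # rounded down, since pre-2000 growth was slower

# Real-world forcing increase from NOAA's AGGI (CFC-11 + CFC-12 + 15 minor gases)
observed = 0.344 - 0.284        # 0.06 W/m2

difference = far_estimate - observed
print(round(naive_total, 3), round(difference, 2))  # 0.259, 0.19
```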

Readers who have made it to this part of the article probably want a summary, so here it is:

  • Hausfather estimates that FAR’s Business-as-usual scenario over-projected forcings for the 1990-2017 period by 55%. This would mean a difference of 0.59 w/m2 between FAR and reality.
  • Lower-than-expected concentrations of Montreal Protocol gases explain about 0.19 w/m2 of the difference. With the big caveat that Montreal Protocol accounting is a mess of CFCs, HCFCs, HFCs, stratospheric ozone, and perhaps other things I’m not even aware of.
  • FAR didn’t account for tropospheric ozone, and this ‘unexplains’ about 0.07 w/m2. So there’s still 0.45-0.5 w/m2 of forcing overshoot coming from something else, if Hausfather’s numbers are correct.
  • N2O is irrelevant in these numbers.
  • CO2 concentration was significantly over-forecasted by the IPCC, and that of methane grossly so. It’s safe to assume that methane and CO2 account for most or all of the remaining difference between FAR’s projections and reality.
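The bullet points above amount to a simple budget (the signs follow the text: Montreal Protocol gases explain part of the overshoot, while the tropospheric ozone omission ‘unexplains’ part of it):

```python
# Budget of FAR's forcing overshoot, per the summary bullets above
total_overshoot = 0.59   # W/m2, FAR minus reality, 1990-2017
montreal = 0.19          # explained by lower-than-expected CFC/HCFC forcing
trop_ozone = 0.07        # real forcing FAR omitted; it 'unexplains' part of the gap

remaining = total_overshoot - montreal + trop_ozone
print(round(remaining, 2))  # ~0.47 W/m2, within the 0.45-0.5 range quoted
```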

Again, this is a rough calculation. As mentioned before, an exact calculation would have to account for many issues I didn’t consider here. I really hope Hausfather’s paper is the beginning of a trend in properly evaluating the climate models of the past, and that means properly accounting for (and documenting) how expected forcings and actual forcings differed.

By the way: this doesn’t mean climate action failed

There is a tendency to say that, since emissions of CO2 and other greenhouse gases are increasing, policies intended to reduce or mitigate emissions have been a failure. The problem with such an inference is obvious: we don’t know whether emissions would have been even higher in the absence of emissions reductions policies. Emissions may grow very quickly in an economic boom, even if emission-mitigation policies are effective; on the other hand, even with no policies at all, emissions obviously decline in economic downturns. Looking at the metric tons of greenhouse gases emitted is not enough.

Dealing specifically with the IPCC’s First Assessment Report, its emission scenarios used a common assumption about future economic and population growth; however, the description is so brief and vague as to be useless.

“Population was assumed to approach 10.5 billion in the second half of the next century. Economic growth was assumed to be 2-3% annually in the coming decade in the OECD countries and 3-5 % in the Eastern European and developing countries. The economic growth levels were assumed to decrease thereafter.”

So it’s impossible to say how much emission FAR expected per unit of economic or population growth. The question ‘are climate policies effective?’ cannot be answered from FAR.

Conclusions

The IPCC’s First Assessment report greatly overestimated future rates of atmospheric warming and sea level rise in its Business-as-usual scenario. This projection also overestimated rates of radiative forcing from greenhouse gases. A major part of the mis-estimation of greenhouse forcing happened because the world clamped down on CFCs and HCFCs much more quickly than its projections assumed. This was not a mistake of climate science, but simply a failure to foresee changes in human behaviour.

However, the IPCC also made other errors or omissions, which went the other way: they tended to reduce forecasted forcing and warming. Its Business-as-usual scenario featured CO2 emissions probably lower than those that have actually taken place, and its forcing estimates didn’t include tropospheric ozone.

This means that the bulk of the error in FAR’s forecast stems from two sources:

  • The fraction of CO2 emissions assumed to remain in the atmosphere was much higher than has been observed, whether at the time of the report’s publication or since. There are uncertainties around the real-world airborne fraction, but the IPCC’s figure of 61% is about one-third higher than emission estimates suggest. As a result, CO2 concentrations grew 25% more in FAR’s Business-as-usual projection than in the real world.
  • The methane forecast was hopeless: methane concentrations in FAR’s Business-as-usual scenario grew five or six times more than has been observed. It’s still not clear where exactly the science went wrong, but a deviation of this size cannot be blamed on some massive-yet-imperceptible change in human behaviour.

These are purely problems of inadequate scientific knowledge, or a failure to apply scientific knowledge in climate projections. Perhaps by learning about the mistakes of the past we can create a better future.

Data

This Google Drive folder contains three files:

  • BP’s Energy Review 2019 spreadsheet (original document and general website)
  • NOAA’s data on CO2 concentrations from the Mauna Loa observatory (original document)
  • My own Excel file with all the calculations. This includes the raw digitized figures on CO2 emissions and concentrations from the IPCC’S First Assessment Report.

The emission numbers from LBNL are available here. I couldn’t figure out how to download a file with the data, so these figures are included in my spreadsheet.

NOAA’s annual greenhouse gas index (AGGI) is here. For comparisons of methane and N2O concentrations in the real world with the IPCC’s forecasts, I used Figure 2.

The IPCC’s First Assessment Report, or specifically the part of the report by Working Group 1 (which dealt with the physical science of climate change), is here. The corresponding section of Assessment Report 5 is here.

via Watts Up With That?

https://ift.tt/2OjC4Yt

February 1, 2020 at 04:43PM