From the Journal of International Climatology and the “if you can’t beat ’em, join ’em” department.
To me, this feels like vindication. For years, I’ve been pointing out just how bad the U.S. and global surface monitoring network has been. We’ve seen stations sited on or near pavement, at airports collecting jet exhaust, with failing instruments reading high, and placed right next to the heat output of air conditioning systems.
We’ve been told it “doesn’t matter” and that “the surface monitoring network is producing good data”. Behind the scenes though, we learned that NOAA/NCDC scrambled when we reported this, quietly closing some of the worst stations, while making feverish and desperate PR pitches to prop up the narrative of “good data”.
Read my report from 2009 on the state of the US Historical Climate Network:
That 2009 report (published with the help of the Heartland Institute) spurred a firestorm of criticism, and an investigation and report by the U.S. Office of the Inspector General who wrote:
Lack of oversight, non-compliance and a lax review process for the State Department’s global climate change programs have led the Office of the Inspector General (OIG) to conclude that program data “cannot be consistently relied upon by decision-makers” and it cannot be ensured “that Federal funds were being spent in an appropriate manner.”
More recently, I presented at AGU15: Watts at #AGU15 – The quality of temperature station siting matters for temperature trends
And showed just how bad the old surface network is in two graphs:
Now, some of the very same people who have scathingly criticized my efforts, and the efforts of others, to bring these weaknesses to the attention of the scientific community have essentially done an about-face. They have authored a paper calling for a new global climate monitoring network modeled on the United States Climate Reference Network (USCRN), which I have endorsed as the only suitable way to measure surface temperature and extract long-term temperature trends.
During my recent trip to Kennedy Space Center (thanks to generous donations from WUWT readers), I spotted an old-style airport ASOS weather station right next to one of the new USCRN stations at the Shuttle Landing Facility runway, presumably placed there to study the difference between the two. Or, possibly, they just couldn’t trust the ASOS station when they most needed it: during a Shuttle landing, where accurate temperature is of critical importance in calculating density altitude, and therefore the glide ratio. Comparing the data between the two is something I hope to do in a future post.
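To see why a biased temperature reading matters for a Shuttle landing, consider the common aviation rule of thumb (a simplification, not NASA’s actual landing procedure): density altitude rises roughly 120 feet for every degree Celsius the outside air temperature exceeds the ISA standard temperature at that pressure altitude. A minimal sketch:

```python
def isa_temp_c(pressure_alt_ft):
    """ISA standard temperature: 15 degC at sea level, lapsing ~2 degC per 1000 ft."""
    return 15.0 - 2.0 * pressure_alt_ft / 1000.0

def density_altitude_ft(pressure_alt_ft, oat_c):
    """Rule-of-thumb density altitude: ~120 ft per degC above ISA temperature."""
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c(pressure_alt_ft))

# Near sea level on a hot Florida day (30 degC), density altitude is ~1800 ft:
da = density_altitude_ft(0.0, 30.0)  # -> 1800.0
# A sensor reading just 2 degC high shifts the answer by ~240 ft:
da_biased = density_altitude_ft(0.0, 32.0)  # -> 2040.0
```

The 240-foot error from a 2 °C sensor bias illustrates why a station warmed by nearby asphalt or exhaust is unacceptable when the number actually matters.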
Here is the aerial view showing placement:
Clearly, with its careful selection of locations and its triple-redundant, state-of-the-art aspirated air temperature sensors, the USCRN station platform is the best possible way to measure long-term trends in 2-meter surface air temperature. Unfortunately, the public never sees the temperature reports from it in NOAA’s “State of the Climate” missives; instead, those rely on the antiquated and buggy COOP and GHCN surface networks and their highly biased, and then adjusted, data.
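The value of triple redundancy is easy to illustrate. With three independent sensors, a single drifting instrument can be detected and outvoted by taking the median; a single sensor, like those at most COOP stations, can fail high silently. This is a hypothetical sketch of the idea, not USCRN’s actual quality-control algorithm, and the tolerance value is my assumption:

```python
def triple_redundant_temp(t1, t2, t3, tolerance_c=0.3):
    """Combine three independent sensor readings (degC).

    Returns (median, suspect): the median is robust to one bad sensor,
    and `suspect` flags a reading far from the median, suggesting a
    failing instrument. Tolerance is an illustrative assumption.
    """
    readings = sorted([t1, t2, t3])
    median = readings[1]
    suspect = any(abs(r - median) > tolerance_c for r in readings)
    return median, suspect

# One sensor drifting 5 degC high is outvoted and flagged:
temp, flag = triple_redundant_temp(20.0, 20.1, 25.0)  # -> (20.1, True)
```

A single-sensor station has no way to self-diagnose such a failure; the bad reading simply enters the record.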
So, for this group of people to call for a worldwide USCRN-style temperature monitoring network is not only a step in the right direction, but a clear indication that, even though they won’t publicly admit the unreliable and uncertain existing COOP/USHCN-style networks worldwide are “unfit for purpose,” they are in fact endorsing the creation of a truly “fit for purpose” global system to monitor surface air temperature: one that won’t be highly biased by location or sensor/equipment issues, and will have no need at all for adjustments.
I applaud the effort, and I’ll get behind it. Because by doing so, it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. To Phil Jones’s credit, he is a co-author of this new paper. Gavin Schmidt, predictably, is not.
This is something both climate skeptics and climate alarmists should be able to get behind and promote. More on that later.
Here’s the paper: (note they reference my work in the 2011 Fall et al. paper)
Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema, K. M. Willett
There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change concluded that the evidence for warming was unequivocal. However, owing to imperfect measurements and ubiquitous changes in measurement networks and techniques, there remain uncertainties in many of the details of these historical changes. These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future. A set of high-quality long-term fiducial reference measurements of essential climate variables will enable future generations to make rigorous assessments of future climate change and variability, providing society with the best possible information to support future decisions. Here we propose that by implementing and maintaining a suitably stable and metrologically well-characterized global land surface climate fiducial reference measurements network, the present-day scientific community can bequeath to future generations a better set of observations. This will aid future adaptation decisions and help us to monitor and quantify the effectiveness of internationally agreed mitigation steps. This article provides the background, rationale, metrological principles, and practical considerations regarding what would be involved in such a network, and outlines the benefits which may accrue. The challenge, of course, is how to convert such a vision to a long-term sustainable capability providing the necessary well-characterized measurement series to the benefit of global science and future generations.
INTRODUCTION: HISTORICAL OBSERVATIONS, DATA CHALLENGES, AND HOMOGENIZATION
A suite of meteorological parameters has been measured using meteorological instrumentation for more than a century (e.g., Becker et al., 2013; Jones, 2016; Menne, Durre, Vose, Gleason, & Houston, 2012; Rennie et al., 2014; Willett et al., 2013, henceforth termed “historical observations”). Numerous analyses of these historical observations underpin much of our understanding of recent climatic changes and their causes (Hartmann et al., 2013). Taken together with measurements from satellites, weather balloons, and observations of changes in other relevant phenomena, these observational analyses underpin the Intergovernmental Panel on Climate Change conclusion that evidence of historical warming is “unequivocal” (Intergovernmental Panel on Climate Change, 2007, 2013).
Typically, individual station series have experienced changes in observing equipment and practices (Aguilar, Auer, Brunet, Peterson, & Wieringa, 2003; Brandsma & van der Meulen, 2008; Fall et al., 2011; Mekis & Vincent, 2011; Menne, Williams Jr., & Palecki, 2010; Parker, 1994; Sevruk, Ondrás, & Chvíla, 2009). In addition, station locations, observation times, instrumentation, and land use characteristics (including in some cases urbanization) have changed at many stations. Collectively, these changes affect the representativeness of individual station series, and particularly their long-term stability (Changnon & Kunkel, 2006; Hausfather et al., 2013; Karl, Williams Jr., Young, & Wendland, 1986; Quayle, Easterling, Karl, & Hughes, 1991). Metadata about changes are limited for many of the stations. These factors impact our ability to extract the full information content from historical observations of a broad range of essential climate variables (ECVs) (Bojinski et al., 2014). Many ECVs, such as precipitation, are extremely challenging to effectively monitor and analyse due to their restricted spatial and temporal scales and globally heterogeneous measurement approaches (Goodison, Louie, & Yang, 1998; Sevruk et al., 2009).
Changes in instrumentation were never intended to deliberately bias the climate record. Rather, the motivation was to either reduce costs and/or improve observations for the primary goal(s) of the networks, which was most often meteorological forecasting. The majority of changes have been localized and quasi-random in nature and so are amenable to statistical averaging of their effects. However, there have been regionally or globally systemic transitions specific to certain periods of time whose effect cannot be entirely ameliorated by averaging. Examples include:
- Early thermometers tended to be housed in polewards facing wall screens, or for tropical locales under thatched shelter roofs (Parker, 1994). By the early 20th century better radiation shielding and ventilation control using Stevenson screens became ubiquitous. In Europe, Böhm et al. (2010) have shown that pre-screen summer temperatures were about 0.5 °C too warm.
- In the most recent 30 or so years a transition to automated or semi-automated measurements has occurred, although this has been geographically heterogeneous.
- As highlighted in the recent World Meteorological Organization (WMO) SPICE intercomparison (http://www.wmo.int/pages/prog/www/IMOP/intercomparisons/SPICE/SPICE.html) and the previous intercomparison (Goodison et al., 1998), measuring solid precipitation remains a challenge. Instrument design, shielding, siting, and transition from manual to automatic all contribute to measurement error and bias and affect the achievable uncertainties in measurements of solid precipitation and snow on the ground.
- For humidity measurements, recent decades have seen a switch to capacitive relative humidity sensors from traditional wet- and dry-bulb psychrometers. This has resulted in a shift in error characteristics that is particularly significant in wetter conditions (Bell, Carroll, Beardmore, England, & Mander, 2017; Ingleby, Moore, Sloan, & Dunn, 2013).
As technology and observing practices evolve, future changes are inevitable. Imminent issues include the replacement of mercury-in-glass thermometers and the use of third party measurements arising from private entities, the general public, and non-National Met Service public sector activities.
From the perspective of climate science, the consequence of both random and more systematic effects is that almost invariably a post hoc statistical assessment of the homogeneity of historical records, informed by any available metadata, is required. Based on this analysis, adjustments must be applied to the data prior to use. Substantive efforts have been made to post-process the data to create homogeneous long-term records for multiple ECVs (Mekis & Vincent, 2011; Menne & Williams, 2009; Rohde et al., 2013; Willett et al., 2013, 2014; Yang, Kane, Zhang, Legates, & Goodison, 2005) at both regional and global scales (Hartmann et al., 2013). Such studies build upon decades of development of techniques to identify and adjust for breakpoints, for example, the work of Guy Callendar in the early 20th century (Hawkins & Jones, 2013). The uncertainty arising from homogenization using multiple methods for land surface air temperatures (LSAT) (Jones et al., 2012; Venema et al., 2012; Williams, Menne, & Thorne, 2012) is much too small to call into question the conclusion of decadal to centennial global-mean warming, and commensurate changes in a suite of related ECVs and indicators (Hartmann et al., 2013, their FAQ2.1). Evidence of this warming is supported by many lines of evidence, as well as modern reanalyses (Simmons et al., 2017).
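The breakpoint identification the authors describe can be illustrated with a toy example (this is my own sketch, not any of the cited methods, which are far more sophisticated): given a difference series between a candidate station and a stable reference, a station move or instrument change shows up as a step, found by the split that maximizes the jump in segment means.

```python
def find_breakpoint(diff_series, min_seg=3):
    """Toy changepoint finder on a candidate-minus-reference series.

    Scans every split point with at least `min_seg` values on each side
    and returns (index, jump) for the split with the largest difference
    in segment means. Illustration only; real homogenization methods
    (e.g. pairwise comparison against many neighbors) are much richer.
    """
    n = len(diff_series)
    best_k, best_jump = None, 0.0
    for k in range(min_seg, n - min_seg + 1):
        left_mean = sum(diff_series[:k]) / k
        right_mean = sum(diff_series[k:]) / (n - k)
        jump = abs(left_mean - right_mean)
        if jump > best_jump:
            best_k, best_jump = k, jump
    return best_k, best_jump

# A 0.5 degC step after year 10 is located at index 10:
series = [0.0] * 10 + [0.5] * 10
k, jump = find_breakpoint(series)  # -> (10, 0.5)
```

Once located, the pre-break segment would be adjusted by the estimated jump; the paper’s point is that a reference network would make such post hoc surgery unnecessary.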
The effects of inhomogeneities are stronger at the local and regional level, may be impacted by national practices complicating homogenization efforts, and are more challenging to remove for sparse networks (Aguilar et al., 2003; Lindau & Venema, 2016). The effects of inhomogeneities are also manifested more strongly in extremes than in the mean (e.g., Trewin, 2013) and are thus important for studies of changes in climatic extremes. State-of-the art homogenization methods can only make modest improvements in the variability around the mean of daily temperature (Killick, 2016) and humidity data (Chimani et al., 2017).
In the future, it is reasonable to expect that observing networks will continue to evolve in response to the same stakeholder pressures that have led to historical changes. We can thus be reasonably confident that there will be changes in measurement technology and measuring practice. It is possible that such changes will prove difficult to homogenize and would thus threaten the continuity of existing data series. It is therefore appropriate to ask whether a different route is possible to follow for future observational strategies that may better meet climate needs, and serve to increase our confidence in records going forwards. Having set out the current status of data sets derived from ad hoc historical networks, in the remainder of this article, we propose the construction of a different kind of measurement network: a reference network whose primary mission is the establishment of a suite of long-term, stable, metrologically traceable, measurements for climate science.
Each site will need to be large enough to house all instrumentation without adjacent instruments interfering with one another, with no shading or wind-blocking vegetation or localized topography, and at least 100 m from any artificial heat sources. Figure 2 provides a site schematic for USCRN stations that meets this goal. The siting should strive to adhere to Class 1 criteria detailed in guidance from the WMO Commission for Instruments and Methods of Observation (World Meteorological Organization, 2014, part I, chap. I). This serves to minimize representativity errors and associated uncertainties. Sites should be chosen in areas where changes in siting quality and land use, which may impact representativity, are least likely for the next century. The site and surrounding area should further be selected on the basis that its ownership is secure. Thus, site selection requires an excellent working and local knowledge of items such as proposed land/site ownership, geology, regional vegetation, and climate. As it cannot be guaranteed that siting shall remain secure over decades or centuries, sites need to be chosen so that a loss will not critically affect the data products derived from the network. A partial solution would be to replace lost stations with new stations with a period of overlap of several years (Diamond et al., 2013). It should be stressed that sites in the fiducial reference network do not have to be new sites and, indeed, there are significant benefits from enhancing the current measurement program at existing sites. Firstly, co-location with sites already undertaking fiducial reference measurements, either for target ECVs or other ECVs, such as GRUAN or GCW, would be desirable. Secondly, co-location with existing baseline sites that already have long records of several target ECVs has obvious climate monitoring, cost, and operational benefits.
For a reference grade installation, an evaluated uncertainty value should be ascertained for representativeness effects which may differ synoptically and seasonally. Techniques and large-scale experiments for this kind of evaluation and characterization of the influences of the siting on the measured atmospheric parameters are currently in progress (Merlone et al., 2015).
Finally, if the global surface fiducial reference network ends up consisting of two or more distinct set-ups of instrumentation (section 4.1), there would be value in side-by-side operations of the different configurations in a subset of climatically distinct regions to ensure long-term comparability is assured (section 3). This could be a task for the identified super-sites in the network.
There are many possible metrics for determining the success of a global land surface fiducial reference climate network as it evolves, such as the number and distribution of fiducial reference climate stations or the percentage of stations adhering to the strict reference climate criteria described in this article. However, in order to fully appreciate the significance of the proposed network, we need to imagine ourselves in the position of those working in the latter part of the 21st century and beyond: not just scientists, but also politicians, civil servants, and citizens faced with potentially difficult choices in the face of a variable and changing climate. In this context, we need to act now with a view to fulfilling their requirement for a solid historical context they can utilize to make scientifically vetted decisions on climate adaptation. We should care about this now because those future scientists, politicians, civil servants, and citizens will be, collectively, our children and grandchildren, and it is, to the best of our ability, our obligation to pass on to them the possibility of making decisions with the best possible data. Having left a legacy of a changing climate, this is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.
Read the full open access paper here, well worth your time: http://onlinelibrary.wiley.com/doi/10.1002/joc.5458/full
h/t to Zeke Hausfather for notice of the paper. Zeke, unlike some of his co-authors, actually engages me with respect. Perhaps his influence will help them become not just civil servants, but civil people.
via Watts Up With That?
March 2, 2018 at 12:37PM