DO-IT-YOURSELF TEMPERATURE RECONSTRUCTION

Guest essay by Dr Michael Chase

SCOPE

This article describes a simple but effective procedure for regional average temperature reconstruction, a procedure that you, yes you, dear reader, can fully understand and, if you have some elementary programming skills, can implement yourself.

To aid readability, and to avoid the risk of getting it wrong, no attempt is made in this article to give proper attribution to the previous work of others; a link is provided at the end to where a list of references can be found.

INPUTS/OUTPUTS

The inputs are records of raw monthly average surface air temperature for a region of interest, plus any station history information (metadata) that is available. Monthly rainfall totals are sometimes helpful in the analysis of temperature changes.

The outputs are, separately for each month, regional averages of two quantities:

· A: The variations of “typical” (moving-average) temperature, relative to an arbitrary reference year. The moving averages typically span around 11 to 15 years.

· B: The fluctuations of temperature relative to A.

The two outputs are usually plotted together as A+B (temperature variations) and A (moving average variations). Note that there is no concept here of a regional average absolute temperature.

There is some arbitrariness in the definition of the moving averages A, and thereby in the definition of B (= RAW – A), but the arbitrariness cancels in the sum A+B.
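To make the two outputs concrete, here is a minimal sketch in Python of how A and B might be computed for a single station and a single calendar month. All names are illustrative, and the simple shortened-window smoother is a stand-in for whatever the actual software uses, not a description of it:

    import numpy as np

    def moving_average(series, window_years=13):
        # Centred moving average over an odd window (here 13 years),
        # ignoring NaNs; an illustrative stand-in for the real smoother.
        # Near the record ends the window shortens, so estimates continue
        # up to the boundaries (the real software refines this, see below).
        half = window_years // 2
        out = np.full(len(series), np.nan)
        for i in range(len(series)):
            chunk = series[max(0, i - half):i + half + 1]
            if np.isfinite(chunk).sum() > half:  # require a mostly full window
                out[i] = np.nanmean(chunk)
        return out

    # One station, one calendar month (e.g. all Januaries) as annual values:
    raw = 0.01 * np.arange(120) + 0.5 * np.random.randn(120)  # synthetic data
    A = moving_average(raw)  # output A: "typical" (moving-average) variations
    B = raw - A              # output B: fluctuations relative to A

Each calendar month is processed separately, so January fluctuations are never mixed with July fluctuations.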

PROCEDURE OUTLINE

The procedure takes a democratic (equally weighted) average across stations of temperature changes and deviations, excluding periods deemed by the analyst to be anomalous due to causes such as station moves, equipment changes and observer errors.

Initially, when presented with raw data carrying no indication of anomalous periods, the procedure simply produces an average over the entire station records, which acts as a reference for the visual detection/confirmation of station inhomogeneities. Each station record is then analysed in turn, looking for (or confirming metadata indications of) periods of anomalous temperature change; any such periods are marked in files, which cause the software to exclude them from subsequent recalculations of the regional averages. When all stations have been processed, the final output is an estimate of the true regional averages, assuming that there were no systematic weather station changes; any such systematic changes must be dealt with by additional (bolted-on) processing.
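The exclusion step lends itself to a very simple implementation. The sketch below assumes a plain-text file with one “station start_year end_year” entry per line; the file format and function names are my illustration, not a description of the author’s files:

    import numpy as np

    def apply_exclusions(data, years, exclusion_file):
        # Set analyst-marked anomalous periods to NaN so that subsequent
        # recalculations of the regional averages exclude them.
        # 'data' maps station name -> annual series aligned with 'years'.
        # File format (hypothetical): station_name start_year end_year
        with open(exclusion_file) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 3:
                    continue  # skip blank or malformed lines
                name, start, end = parts[0], int(parts[1]), int(parts[2])
                if name in data:
                    data[name][(years >= start) & (years <= end)] = np.nan
        return data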

KEY CONCEPTS

The procedure is described with the aid of a set of actual software outputs for a set of synthetic data inputs. The synthetic station records all have the same base “moving-averages”, to which is added a common set of temperature fluctuations (referred to as “weather” in the figure titles). Each station record usually also has its own uncorrelated set of random deviations. One synthetic input has a persistent step change in temperature, and a large transient perturbation.

The following figure, showing monthly data, illustrates the averaging methods used to obtain the two outputs of the procedure:

[Figure: monthly synthetic station data, illustrating the moving-average (output A) and fluctuation (output B) averaging methods]

Regional average temperature fluctuations (output B) are estimated by median averaging across all stations with valid moving averages, the median giving resilience to data errors and local weather extremes.

Besides being one of the desired outputs (output B), the regional average temperature fluctuations play a key role in detecting/confirming inhomogeneities: subtracting the fluctuations from station raw data enhances the signal-to-noise ratio.
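Assuming the station fluctuation series are stacked into a 2-D array (rows = stations, columns = time), both the median average and the weather correction are one-liners. This is a sketch under those assumptions, not the author’s code:

    import numpy as np

    # rows = stations, columns = time; each row holds one station's RAW - A
    station_fluct = np.random.randn(10, 120)  # synthetic stand-in
    station_fluct[3, 40:60] = np.nan          # a station with missing data

    # Output B: the median across stations is resilient to data errors
    # and to local weather extremes at individual stations.
    B = np.nanmedian(station_fluct, axis=0)

    # Weather-corrected series (RAW - B) for one station, as used in the
    # visual detection/confirmation of inhomogeneities described below.
    raw = np.random.randn(120)                # one station's raw series
    corrected = raw - B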

The regional-average moving averages are obtained by the “First Difference” (FD) method applied to the valid periods of station moving averages. Inhomogeneities are excluded simply by making them produce periods of invalid moving averages, effectively chopping station records into separate segments. The FD method forms inter-annual temperature differences, averages them across stations, then integrates those average differences forwards and backwards in time from an arbitrary reference year.
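A minimal sketch of the FD step, assuming the station moving averages are stacked into a 2-D array with NaNs outside the valid segments (so segment breaks drop out of the differencing automatically); the function and variable names are mine:

    import numpy as np

    def first_difference_average(moving_avgs, years, ref_year):
        # First Difference (FD) averaging: form year-to-year differences of
        # each station's segmented moving averages, average them across
        # stations, then integrate outwards from the reference year.
        # 'moving_avgs' is stations x years, NaN outside valid segments,
        # so differences across a segment break are automatically NaN.
        diffs = np.diff(moving_avgs, axis=1)
        mean_diff = np.nanmean(diffs, axis=0)   # democratic station average
        mean_diff = np.nan_to_num(mean_diff)    # treat data-free gaps as no change

        regional = np.zeros(len(years))         # zero at the reference year
        ref = int(np.where(years == ref_year)[0][0])
        for i in range(ref + 1, len(years)):    # integrate forwards in time
            regional[i] = regional[i - 1] + mean_diff[i - 1]
        for i in range(ref - 1, -1, -1):        # and backwards from the reference
            regional[i] = regional[i + 1] - mean_diff[i]
        return regional

Because only year-to-year differences are averaged, stations can enter and leave the average freely, with no need for temperature offsets or a common overlap period.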

Note in the figure above that the moving averages continue right up to the data boundaries, a feature that allows all data to be used; there is more on this feature below.

The figure above does not show any sources of error, such as from non-climatic temperature shifts caused by station moves and equipment changes. The following figure shows the main data display used in the visual detection/confirmation of anomalous temperature changes:

[Figure: 12-month moving averages of weather-corrected (RAW – B) station data (blue), with station and regional (red) moving averages; a step change is visible around year 50]

The figure above shows 12-month moving averages of regional-weather-corrected temperature variations (RAW – B) for a station (in blue), together with similar data for station and regional (red) moving averages. There is a step change in temperature around year 50, but the exact date of the step is unclear.

The following figure illustrates the method used to estimate the month of step changes in temperature; it is essentially the multi-month version of the figure above:

[Figure: multi-month display of the weather-corrected data, locating the step change at January of year 50]

The monthly data figure above reveals that the step change occurred early in January of year 50, with all data before then being “high”, and all data after being “low”. The figure was produced with the step change marked in a file, which caused the software to break the moving averages (the red curves) at that point, thereby preventing the step change from distorting the regional averages. The data display above is also used to check for inhomogeneities with a strong seasonality; sometimes step changes are visible in only a few months of the data.

So far we appear to be having an almost free algorithmic lunch: no mention of auto-detection or of temperature adjustments, a one-size-fits-all reference (the regional averages), and no need for correlation analysis or for the calculation of temperature offsets to allow different station records to be averaged. Now is the time to mention the downside of the First Difference method of averaging temperature records: end-point errors associated with truncated transient perturbations.

Transient perturbations that both start and stop within a segment of station data are not much of a problem. Even if they survive the outlier-resistant moving-average estimation algorithm, they will have matching temperature up and down shifts, so they have almost no net impact on trends (not exactly zero, due to the time-varying weights applied to stations in the regional average). A potentially large problem arises when transient perturbations occur at station record start/stop times, at internal boundaries caused by periods of missing data, or at boundaries created when records are chopped into separate segments. When a transient perturbation is truncated by a boundary it no longer has matching up and down shifts, so it may produce trend distortion.

The following figure illustrates the resilience of the procedure to truncated transient perturbations:

[Figure: two synthetic stations (blue and purple), identical up to year 50; the blue station has a transient perturbation truncated by the transition marked at year 50]

The blue and purple data are identical leading up to year 50, but the blue data has a transient perturbation just before its step change at that time. The transition defined for the blue data at year 50 creates a truncated perturbation, which would have distorted the regional average trend if the First Difference method had been applied to unsmoothed temperature data. The procedure avoids major trend distortion in this example through the smoothing inherent in moving averages, and through the use of extrapolated station data derived from the regional moving averages. The extrapolated data allow moving averages to be computed accurately right up to boundaries, with greater resilience to transient perturbations.
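The extrapolation idea can be sketched as follows: beyond a boundary, let the station follow the shape of the regional moving average, offset so that the two agree at the boundary. This is my reading of the description above, not the author’s implementation:

    import numpy as np

    def extrapolate_past_boundary(station_ma, regional_ma, last_valid):
        # Extend a station's moving-average series beyond its last valid
        # index by following the shape of the regional moving average,
        # offset to match the station at the boundary. A sketch of the
        # idea only; the start-of-record boundary is handled analogously.
        out = station_ma.copy()
        offset = station_ma[last_valid] - regional_ma[last_valid]
        out[last_valid + 1:] = regional_ma[last_valid + 1:] + offset
        return out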

QUALITY CONTROL

In general there is no need for the extensive Quality Control adjustments that feature in some methods, but in some cases a small amount is beneficial when dealing with sparse data periods. Data deemed to be invalid, and which might lead to errors if left in place, can be set manually to NaN (Not a Number). The NaNs created, and many of those present originally, are auto-infilled using valid data on either side, together with the latest estimate of the regional average weather fluctuations.
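A hedged sketch of the infilling step, assuming the station’s typical level is bridged across the gap by linear interpolation of its moving average A, with the regional fluctuation B supplying the month-to-month variation; names and details are illustrative only:

    import numpy as np

    def infill_nans(raw, A, B):
        # Infill missing values: the station's own typical level
        # (moving average A, linearly interpolated across gaps) plus the
        # regional fluctuation B for that time step. Illustrative only;
        # assumes A has some valid points and B is valid where infilling.
        out = raw.copy()
        idx = np.arange(len(raw))
        valid = np.isfinite(A)
        A_interp = np.interp(idx, idx[valid], A[valid])  # bridge gaps in A
        missing = ~np.isfinite(out)
        out[missing] = A_interp[missing] + B[missing]
        return out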

MORE INFORMATION

There is a dedicated website for this procedure, providing more information on the algorithms and software, together with real data examples:

https://diymetanalysis.wordpress.com

The website also provides references for the original First Difference method, which was just a data averaging method, and for its use in conjunction with the removal of periods of anomalous temperature change.

BIOGRAPHICAL INFORMATION

Dr Michael Chase has a PhD and several years of postdoctoral research experience in theoretical physics. He also has around 30 years of experience in developing signal processing algorithms for acoustic sensor systems.

