Guest post by Nick Stokes.

Every now and then in climate blogging, one hears a refrain that the traditional min/max daily temperature record can’t be used because it “violates Nyquist”. In particular, an engineer, William Ward, writes occasionally about this at WUWT; the latest is here, with an earlier version here. But there is more to it.

Naturally, the more samples you can get, the better. But limited sampling carries a finite cost; it is not a sudden failure because of a “violation”. And when the data is used to compile monthly averages, that cost is actually very small, contrary to the notion promoted by William Ward that many samples per hour are needed. Willis Eschenbach, in comments to that Nyquist post, showed that for several USCRN stations there was little difference even in a daily average whether samples were taken every hour or every five minutes.

The underlying criticism is of the prevailing method of assessing temperature at locations by a combined average of Tmax and Tmin = (Tmax+Tmin)/2. I’ll call that the min/max method. That of course involves just two samples a day, but it actually isn’t a frequency sampling of the kind envisaged by Nyquist. The sampling isn’t periodic; in fact we don’t know exactly what times the readings correspond to. But more importantly, the samples are determined by value, which gives them a different kind of validity. Climate scientists didn’t invent the idea of summarising the day by the temperature range; it has been done for centuries, aided by the min/max thermometer. It has been the staple of newspaper and television reporting.

So in a way, fussing about regular sample rates of a few per day is theoretical only. The way it was done for centuries of records is not periodic sampling, and for modern technology, much greater sample rates are easily achieved. But there is some interesting theory.

In this post, I’d like to first talk about the notion of aliasing that underlies the Nyquist theory, and show how it could affect a monthly average. This is mainly an interaction of sub-daily periodicity with the diurnal cycle. Then I’ll follow Willis in seeing what the practical effect of limited sampling is for the Redding CA USCRN station. There isn’t much until you get down to just a few samples per day. But then I’d like to follow an idea for improvement, based on a study of that diurnal cycle. It involves the general idea of using anomalies (from the diurnal cycle) and is a good and verifiable demonstration of their utility. It also demonstrates that the “violation of Nyquist” is not irreparable.

Here is a table of contents:

- Aliasing and Nyquist
- USCRN Redding and monthly averaging
- Using anomalies to gain accuracy
- Conclusion

#### Aliasing and Nyquist

Various stroboscopic effects are familiar – this wiki article gives examples. The math comes from this. If you have a sinusoid of frequency f Hz, sin(2πft), sampled at s Hz, the samples are sin(2πfn/s), n=0,1,2… But this is indistinguishable from sin(2π(fn/s+m*n)) for any integer m (positive or negative), because you can add a multiple of 2π to the argument of sin without changing its value.

But sin(2π(fn/s+m*n)) = sin(2π(f+m*s)n/s); that is, the samples representing the sine also represent a sine to which any multiple of the sampling frequency s has been added, and you can’t distinguish between them. These are the aliases. But if f is small relative to s, the aliases all have higher frequency, so you can pick out the lowest frequency as the one you want.

This, though, fails if f>s/2, because then subtracting s from f gives a lower (absolute) frequency, so you can’t use frequency to pick out the one you want. This is where the term aliasing is more commonly used, and s=2f is referred to as the Nyquist limit.
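As a quick numerical check of that identity (the frequencies here are arbitrary choices of mine), a sinusoid and its alias produce identical samples:

```python
import numpy as np

# A sinusoid of frequency f and its alias f + s, both sampled at s Hz,
# give identical sample sequences (f = 3 Hz and s = 10 Hz are arbitrary).
f, s = 3.0, 10.0
n = np.arange(20)                      # sample indices
t = n / s                              # sample times
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * (f + s) * t)

print(np.allclose(a, b))               # True
```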

I’d like to illuminate this math with a more intuitive example. Suppose you observe a running track, a circle of circumference 400 m, from a height, through a series of snapshots (samples) 10 s apart. There is a runner, who appears as a dot. He appears to advance 80 m in each frame, so you might assume that he is running at a steady 8 m/s.

But he could also be covering 480 m, running a lap plus 80 m between shots. Or 880 m, or even 320 m the other way. Of course, you’d favour the initial interpretation, as the alternatives would be faster than anyone can run.

But what if you sample every 20 s? Then you’d see him cover 160 m, or 240 m the other way, which is not quite so implausible. Or sample every 30 s: then he would seem to progress 240 m, but if running the other way, would cover only 160 m. If you favour the slower speed, that is the interpretation you’d make. That is the aliasing problem.

The critical case is sampling every 25 s. Then every frame seems to take him 200 m, or halfway around. It’s 8 m/s, but could be in either direction. This is sampling at the Nyquist rate: the sampling frequency (0.04 Hz) is double the lap frequency of 0.02 Hz that goes with a speed of 8 m/s.

But there is one other critical frequency – 0.02 Hz, or sampling every 50 s. Then the runner would appear not to move. The same is true for multiples of 50 s.
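The runner example can be sketched in a few lines. The snapshots record only position around the track, so opposite speeds, and speeds differing by whole laps per frame, are indistinguishable:

```python
# Snapshots of a runner on a 400 m circular track record only
# position mod 400; speed and direction must be inferred.
def positions(speed, dt, frames=8):
    return [(speed * dt * k) % 400.0 for k in range(frames)]

# Sampling every 25 s: +8 m/s and -8 m/s give identical snapshots.
print(positions(8.0, 25) == positions(-8.0, 25))    # True
# Sampling every 50 s (the lap time): the runner never appears to move.
print(positions(8.0, 50))                           # all 0.0
```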

Here is a diagram in which I show some paths consistent with the sampled data, over just one sample interval. The basic 8 m/s is shown in black, the next highest forward speed in green, and the slowest path the other way in red. Starting point is at the triangles, ending at the dots. I have spread the paths for clarity; there is really only one start and end point.

All this speculation about aliasing only matters when you want to make some quantitative statement that depends on what he was doing between samples. You might, for example, want to calculate his long term average location. Now all those sampling regimes will give you the correct answer, track centre, except the last where sampling was at lap frequency.

Now coming back to our temperature problem, the reference to exact periodic processes (sinusoids or lapping) relates to a Fourier decomposition of the temperature series. And the quantitative step is the inferring of a monthly average, which can be regarded as long term relative to the dominant Fourier modes, which are harmonics of the diurnal cycle. So that is how aliasing contributes error: it comes when one of those harmonics is a multiple of the sample rate, so that it aliases to zero frequency.
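A minimal illustration of that mechanism (the 3-degree amplitude is invented): a diurnal harmonic whose frequency matches the sample rate aliases to a constant, biasing the monthly mean:

```python
import numpy as np

# A pure 2-cycle/day harmonic has a monthly mean of essentially zero,
# but sampled twice a day it aliases to a constant: every sample lands
# on the same phase. The 3.0 degree amplitude is invented.
days = 30
t_fine = np.arange(days * 288) / 288.0            # 5-minute steps, in days
t_sparse = np.arange(days * 2) / 2.0              # 2 samples per day

harm = lambda t: 3.0 * np.cos(2 * np.pi * 2 * t)  # 2nd diurnal harmonic
print(abs(harm(t_fine).mean()) < 1e-9)            # True: true mean ~ 0
print(round(harm(t_sparse).mean(), 6))            # 3.0: aliased to DC
```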

#### USCRN Redding and monthly averaging

Willis linked to this NOAA site (still working) as a source of USCRN 5-minute AWS temperature data. Following him, I downloaded data for Redding, California. I took just the years 2010 to present, since the files are large (13 MB per station per year) and I thought the earlier years might have more missing data. Those years were mostly gap-free, except for the last half of 2018, which I generally discarded.

Here is a table for the months of May. The rows correspond to sampling intervals of 1/12, 1, 2, 6, 12, and 24 hours – that is, 288, 24, 12, 4, 2, and 1 samples per day. The first row shows the actual mean temperature for the month, from 288 samples per day. The other rows show the discrepancy of each lower sampling rate, for each year.

| Hours per sample | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|
| 1/12 | 13.611 | 14.143 | 18.099 | 18.59 | 19.195 | 18.076 | 17.734 | 19.18 | 18.676 |
| 1 | -0.012 | 0.007 | -0.02 | -0.002 | -0.021 | -0.014 | -0.007 | 0.002 | 0.005 |
| 2 | -0.004 | 0.013 | -0.05 | -0.024 | -0.032 | -0.013 | -0.037 | 0.011 | -0.035 |
| 6 | -0.111 | -0.03 | -0.195 | -0.225 | -0.161 | -0.279 | -0.141 | -0.183 | -0.146 |
| 12 | 0.762 | 0.794 | 0.749 | 0.772 | 0.842 | 0.758 | 0.811 | 1.022 | 0.983 |
| 24 | -2.637 | -2.704 | -4.39 | -3.652 | -4.588 | -4.376 | -3.982 | -4.296 | -3.718 |

As Willis noted, the discrepancy for sampling every hour is small, suggesting that very high sample rates aren’t needed, even though the lower rates are said to “violate Nyquist”. But the discrepancies get up towards a degree for sampling twice a day, and once a day is quite bad. I’ll show a plot:

The interesting thing to note is that the discrepancies are reasonably constant, year to year. This is true for all months. In the next section I’ll show how to calculate that constant, which comes from the common diurnal pattern.
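The subsampling comparison itself is easy to reproduce in outline. This sketch uses a synthetic month with an invented diurnal shape, not the actual Redding file, but the mechanics are the same: average every k-th 5-minute reading and compare with the full 288/day average:

```python
import numpy as np

# Synthetic month of 5-minute "temperatures": an invented diurnal cycle
# (harmonics at 1 and 2 cycles/day) plus noise. Not real Redding data.
rng = np.random.default_rng(0)
t = np.arange(31 * 288) / 288.0                    # days, 5-minute steps
cycle = (5.0 * np.cos(2 * np.pi * (t - 0.625))
         + 1.5 * np.cos(4 * np.pi * (t - 0.55)))
temps = 18.0 + cycle + rng.normal(0, 0.5, t.size)

full_mean = temps.mean()                           # the 288/day reference
for step, label in [(12, "1 h"), (72, "6 h"), (144, "12 h")]:
    disc = temps[::step].mean() - full_mean
    print(label, round(disc, 3))                   # discrepancy vs 288/day
```

In this toy version, as in the table, hourly and 6-hourly sampling land close to the full average, while 12-hourly sampling picks up a bias of order a degree from the 2-cycle/day harmonic.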

#### Using anomalies to gain accuracy

I talk a lot about anomalies in averaging temperature globally, and there is a general principle that they use. If you have a variable T that you are trying to average, or integrate, you can split it:

T = E + A

where E is some kind of expected value, and A is the difference (or residual, or anomaly). Now if you do the same linear operation on E and A, there is nothing gained. But it may be possible to do something more accurate on E. And A should be smaller, already reducing the error; more importantly, it should be more homogeneous. So if the operation involves sampling, as averaging does, then getting the sampling right is far less critical.
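A toy example of the split (all cycle shapes and amplitudes here are invented): average the expected part E exactly, and sample only the anomaly A sparsely:

```python
import numpy as np

# T = E + A: handle the expected part E exactly, sample only A.
# The cycle shapes and amplitudes are invented for illustration.
t = np.arange(288) / 288.0                        # one day, 5-minute steps
E = 4.0 * np.cos(2 * np.pi * t) + 1.5 * np.cos(4 * np.pi * t)
A = 0.3 * np.sin(4 * np.pi * t + 1.0)             # small residual
T = 15.0 + E + A

truth = T.mean()                                  # full-resolution average
naive = T[::144].mean()                           # 2 samples/day of raw T
split = 15.0 + E.mean() + A[::144].mean()         # exact E, sparse A
print(round(abs(naive - truth), 3), round(abs(split - truth), 3))
```

The sparse samples of raw T pick up a large bias from the diurnal harmonics, while the split version's error is limited to the (much smaller) sparsely sampled anomaly.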

With the global temperature average, E is the set of averages over a base period, and the treatment is simply to omit it and use the anomaly average instead. For this monthly average task, however, E can actually be averaged. The right choice is some estimate of the diurnal cycle. What helps is that it is just one day of numbers (for each month), rather than a month of them. So it isn’t too bad to get 288 values for that day – ie use high resolution – while using lower resolution for the anomalies A, which are new data for each day.

But it isn’t that important to get E extremely accurate. The idea of subtracting E from T is to remove the daily cycle component that interacts most strongly with the sampling frequency. If you remove only most of it, that is still a big gain. My preference here is to use the first few harmonics of the Fourier series approximation of the daily cycle, worked out at hourly resolution. The range 0–4 day⁻¹ can do it.

The point is that we know exactly what the averages of the harmonics should be. They are zero, except for the constant. And we also know what the sampled value should be. Again, it is zero, except where the frequency is a multiple of the sampling frequency, when it is just the initial value. This is just the Fourier series coefficient of the cos term.
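Here is a sketch of the whole correction on a synthetic hourly month, rather than the real USCRN data (the function name, amplitudes, and noise level are my own inventions): fit harmonics 0–4 to the mean diurnal cycle, subtract, and average the anomalies at 2 samples/day:

```python
import numpy as np

def diurnal_fit(day24, nharm=4):
    """Least-squares fit of harmonics 0..nharm to a 24-value diurnal cycle."""
    h = np.arange(24) / 24.0
    cols = [np.ones(24)]
    for k in range(1, nharm + 1):
        cols += [np.cos(2 * np.pi * k * h), np.sin(2 * np.pi * k * h)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, day24, rcond=None)
    return X @ coef                                # fitted cycle, 24 values

# Synthetic month of hourly readings: invented diurnal shape plus noise.
rng = np.random.default_rng(1)
h = np.arange(30 * 24) / 24.0
cycle = 5.0 * np.cos(2 * np.pi * h) + 1.2 * np.cos(4 * np.pi * h - 0.4)
temps = 16.0 + cycle + rng.normal(0, 0.4, h.size)

# E: the fitted diurnal cycle, tiled over the month; A = temps - E.
E = np.tile(diurnal_fit(temps.reshape(30, 24).mean(axis=0)), 30)
raw = temps[::12].mean() - temps.mean()            # 2/day, uncorrected
corrected = E.mean() + (temps - E)[::12].mean() - temps.mean()
print(round(raw, 3), round(corrected, 3))
```

The uncorrected 2/day average is biased by the 2-cycle/day harmonic aliasing to a constant; subtracting the fitted harmonics removes almost all of that bias.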

Here are the corresponding discrepancies of the May averages for different sampling rates, to compare with the table above. The numbers for 2-hour sampling have not changed. The reason is that the error there comes from the 12th diurnal harmonic and above (the multiples of that sampling rate), and I only resolved the diurnal cycle up to the 4th harmonic.

| Hours per sample | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | -0.012 | 0.007 | -0.02 | -0.002 | -0.021 | -0.014 | -0.007 | 0.002 | 0.005 |
| 2 | -0.004 | 0.013 | -0.05 | -0.024 | -0.032 | -0.013 | -0.037 | 0.011 | -0.035 |
| 6 | 0.014 | 0.095 | -0.07 | -0.1 | -0.036 | -0.154 | -0.016 | -0.058 | -0.021 |
| 12 | -0.062 | -0.029 | -0.075 | -0.051 | 0.019 | -0.066 | -0.012 | 0.199 | 0.16 |
| 24 | 1.088 | 1.021 | -0.665 | 0.073 | -0.864 | -0.651 | -0.258 | -0.571 | 0.007 |

And here is the comparison graph. It shows the uncorrected discrepancies with triangles, and the diurnally corrected ones with circles. I haven’t shown the one-sample/day case, because the scale required makes the other numbers hard to see. But you can see from the table that with only one sample/day, the average is still accurate to within a degree or so with diurnal correction. I have only shown May results, but other months are similar.

#### Conclusion

Sparse sampling (eg 2/day) does create aliasing to zero frequency, which affects the accuracy of monthly averaging. You could attribute this to Nyquist, although some would see it as just a poorly resolved integral. But the situation can be repaired without resort to high frequency sampling. The reason is that most of the error arises from trying to sample the repeated diurnal pattern. In this analysis I estimated that pattern just from the Fourier series of hourly readings over a set of base years. If you subtract a few harmonics of the diurnal cycle, you get much improved accuracy for sparse sampling of each extra year, at the cost of just hourly sampling of a reference set.

Note that this is true for sampling at prescribed times. Min/max sampling is something else.

via Watts Up With That?

January 25, 2019 at 08:01AM