With Tree Rings On Their Fingers

CDN

There’s a lot of apparently confident talk about how current temperatures compare with those in the past, including claims that 2023 was the “hottest year ever,” or at least the hottest in the last 125,000 years. But how do we actually know, and how much do we actually know, about historic and prehistoric temperatures? In this Climate Discussion Nexus “Backgrounder” video, John Robson examines the uses, and abuses, of various temperature proxies.

Transcript below [apologies for any misspellings of proper nouns~cr]


You’ve probably heard the claim that the Earth today is the warmest it’s been in a thousand years, or 10,000, or even 125,000 years. But how do they know, when the earliest modern thermometers were invented by German physicist Daniel Fahrenheit in 1709, and we have very few systematic weather records anywhere before the mid-1800s, and few or none in most of the world until the mid-20th century? So how can anyone claim to know temperatures anywhere, let alone around the world, in, say, 1708, or even further back? How do we know that it’s warmer today in Scotland than it was in 1314, the year Robert the Bruce defeated the English army at Bannockburn, or that Rome is warmer today than in 306, when Constantine became emperor, or 410 AD, when Alaric the Visigoth invaded and sacked it, or that Israel is warmer today than in 587 BC, when King Nebuchadnezzar of Babylon destroyed Jerusalem and led the Jews into captivity? He didn’t confiscate their thermometers—there weren’t any. So how can we say anything definitive, or even plausible, about a single location, never mind the whole world, 70% of which is open ocean, where nobody was keeping even anecdotal records?

Obviously, we don’t have satellite data to make up for the lack of thermometers. Instead, scientists use indirect measures called proxies. These are features of the geological record that we believe correlate fairly well with temperature: things like tree ring widths, different isotopes of oxygen in ice core layers, and the kind and quantity of shells, pollen, and other remains of living creatures found in sediments at the bottom of the ocean. If a proxy record goes back thousands of years and we think we know fairly precisely when a given part of it was created, then, according to the theory, it can be used to estimate what the local temperature probably was back then compared to today. Now, we’re not criticizing proxies in principle; on the contrary, they represent an ingenious way to get important data that we can’t measure directly—or at least they can.

But when you look closely, as we’re about to do, you find that the estimates can be rough, very uncertain, and often no better than sheer guesswork. In fact, sometimes they’re much worse than guesswork. What you have is researchers who know what they want to find and deliberately select only the kind of proxy, or only the specific proxy data series, that says what they want to hear. And far too many scientists who work with these proxies have actually gone to great lengths not to disclose the uncertainties but to hide them, to make sure the public never hears about how imprecise, or sometimes even dubious, their reconstructions are—which is where we come in.

For the Climate Discussion Nexus, I’m John Robson, and this is a CDN backgrounder on proxy reconstructions of the Earth’s temperature history. But before we plunge into the past, let’s look at how temperatures are measured, or not measured, more recently. Because if you’re going to compare modern records with older ones, it matters how both are generated. Systematic weather records from around the world since the mid-1800s are archived at the Global Historical Climatology Network and elsewhere. So, if, for instance, we pick a fairly recent year, like 1965, we can see that records were available on land from most countries around the world, although many places only had partial records from a handful of stations, and the annual average had to be based on estimating the missing numbers. And of course, there’s always the question of how good the measurements were, where the instruments were situated, how well they were maintained, and how carefully they were read. And if 1965 is shaky, take a look 40 years further back, in 1925—there was hardly any data from Africa, South America, and vast regions of Asia. Yet we’re now confidently told that, say, the Central African Republic was hotter in 2023 than in 1923. And if we go back another 40 years to 1885, we see that basically there was no data at all, other than the US, Europe, and a few places in India and Australia.
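The coverage problem described here is easy to make concrete. The sketch below builds a toy station-by-year archive with gaps (entirely synthetic numbers; the real GHCN data and formats are far more involved) and counts reporting stations per year along with the overall missing share:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy station-by-year temperature archive, 1880-2000: rows are stations,
# columns are years, NaN means "no report that year".
years = np.arange(1880, 2001)
n_stations = 200
data = rng.normal(14.0, 1.0, (n_stations, years.size))

# Simulate patchy coverage: each station only reports over a random window.
for i in range(n_stations):
    start, stop = sorted(rng.integers(0, years.size, 2))
    data[i, :start] = np.nan
    data[i, stop:] = np.nan

reporting = np.sum(~np.isnan(data), axis=0)   # stations reporting, per year
missing_frac = float(np.isnan(data).mean())   # share of all entries missing
print(missing_frac)
```

Every NaN in a matrix like this is an entry that has to be imputed before a “global average” can be computed from it.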

Now, here’s a surprise: from 1885 to 1965, the record gets more complete, but after 1965, it thins out again. As of 2006, the sample looked much the way it had early in the 20th century. And if we chart the number of locations supplying data to the global climate archive over the years from 1900 to 2008, it rather unexpectedly looks like this. So, as you can see, the sample size has been constantly changing, which ought always to make us uneasy about precise findings, or more exactly, claims of precise findings. And when scientists construct those famous charts of global average temperature back to the mid-1800s, they quietly admit among themselves that over half the data is missing and has to be imputed, which is a fancy way of saying ‘made up.’ But they don’t draw this issue to the attention of the public, and journalists don’t ask about it—or at least, they don’t ask the scientists who would insist on bringing it up. The coverage is fragmentary over the entire period, which is a major statistical challenge: over half of the data entries, 53%, are missing, most of them at the poles and over Africa, and the coverage generally worsens further back in time, with notable gaps during the two World Wars. And that survey just covers the modern data, which is supposedly the best part of the record and is at least in part based on thermometers. Prior to about 1850, we have to resort to proxies to get temperature estimates. And while there are many potential proxy records, most attention is paid to tree rings, ice core layers, and marine sediments. So, obviously, it’s important to ask how reliable they are. In 2006, the US National Academy of Sciences did just that, conducting a review of all these methods in light of the controversies that had arisen concerning the IPCC hockey stick graph, which was mostly based on tree rings.

In general, that review said the proxies sometimes contain useful information, but scientists have to be careful about how they use them, and they need to be honest about the uncertainties; the panel specifically cautioned that the uncertainties of published reconstructions have been underestimated. So how are proxy-based reconstructions done? Let’s start with tree rings. As trees grow, they add a ring of new wood around their trunks every year. Scientists measure the width and density of these rings by taking small, pencil-like cores out of the trunk, and the general principle is that trees grow faster and add more wood in good years than in bad, so thick rings mean favorable conditions, which certainly would include warmth. So variations in these rings might, in some cases, correlate with variations in temperature. The first problem, which is obvious to anyone who’s ever seen a tree stump, is that ring width patterns can be completely different depending on which side of the tree you take the core from. And the National Academy’s panel noted that many things other than temperature affect tree ring growth, such as precipitation, disease, fire, and competition from other trees. Scientists need to try to find locations where they are sure temperature is the main controlling factor, but even if they are diligent, it’s not always possible to know if that’s the case.

They also emphasized that it’s not enough to look at a single tree. If a pattern found in a tree core is truly a climate signal, it should be seen in cores taken from at least 10 to 20 trees in an area, because a single tree can suffer storm damage or be attacked by pests. So whenever you see a tree-ring-based reconstruction, the first question you need to ask is how many trees were sampled. But good luck finding out. One of the problems we run into when we look at these studies is the number of times scientists rely on insufficiently large samples, or worse, take a large sample and then throw out the cores that don’t tell them what they want to see, or simply refuse to say how many trees they examined. Canadian researcher Stephen McIntyre spent about 15 years blogging at the site ClimateAudit.org, detailing his efforts to get tree ring researchers to report these things, often without success. If they’re not deliberately hiding something, they’re sure doing a good imitation. Another problem with tree rings is that as a tree gets older and its trunk widens, the same volume of annual growth gets spread around a larger circumference, so ring widths get narrower even if temperatures stay the same. Scientists use statistical models to remove this age trend from the data, but every time you manipulate data, even carefully and for valid reasons, it creates further uncertainties. So it’s far from straightforward. For instance, the National Academy’s panel focused attention on two issues that arose during the debates about the Michael Mann hockey stick graph.
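The age-trend removal just described is often done by fitting a declining growth curve, classically a negative exponential, to each core and dividing it out, leaving a dimensionless ring-width index. Here is a minimal numpy-only sketch on synthetic data (real dendrochronology uses specialized tools and more careful curve choices):

```python
import numpy as np

def fit_neg_exp(t, w):
    """Fit w ~ a*exp(-b*t) + c by a grid search over the decay rate b,
    solving for a and c by linear least squares at each candidate b."""
    best = None
    for b in np.logspace(-4, 0, 200):
        X = np.column_stack([np.exp(-b * t), np.ones_like(t)])
        coef = np.linalg.lstsq(X, w, rcond=None)[0]
        sse = float(np.sum((X @ coef - w) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], b, coef[1])
    _, a, b, c = best
    return a, b, c

def ring_width_index(widths):
    """Divide raw ring widths by the fitted growth curve, leaving a
    dimensionless index near 1 with the age trend removed."""
    t = np.arange(len(widths), dtype=float)
    a, b, c = fit_neg_exp(t, widths)
    return widths / (a * np.exp(-b * t) + c)

# Synthetic 300-year core: a declining age trend modulated by a small
# hypothetical "climate" wiggle, plus measurement noise.
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
climate = 0.1 * np.sin(2 * np.pi * t / 60)
widths = (2.0 * np.exp(-0.02 * t) + 0.5) * (1 + climate) + rng.normal(0, 0.02, t.size)

index = ring_width_index(widths)
print(np.corrcoef(t, widths)[0, 1])   # strongly negative: the age trend dominates
print(np.corrcoef(t, index)[0, 1])    # much closer to zero: trend removed
```

Note that every choice here, the curve family, the fitting method, even the grid, is a judgment call, and each one changes the “climate signal” that comes out the other end.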

First, they pointed out that the underlying theory assumes the correlation between temperature and tree ring widths is constant over time: if wide rings in the 20th century mean temperatures were high, then narrower rings hundreds of years ago must mean it was cooler back then.

But what if this sub-theory doesn’t hold? What if something else changes the growth pattern from time to time? It might sound like a weird thing to worry about, but when you start checking tree rings against actual recent thermometer data, you find significant evidence that it does happen. For instance, after 1960, tree rings in many locations around the world started getting narrower even while thermometers said local temperatures were rising. Scientists gave this a fancy label—the Divergence Problem—waved it away by saying it was probably a one-off occurrence, and then started deleting the post-1960 data so that people wouldn’t notice it. And we discussed a particularly glaring example of this approach in our video on Hiding the Decline. Unfortunately, as Rudyard Kipling once said, giving something a long name doesn’t make it better. On the contrary, the Divergence Problem undermines the whole field, or forest, because if trees aren’t picking up the warming happening now, how do we know they didn’t also fail to pick it up then? If narrow tree rings are happening during a warm interval today, how can scientists insist that narrow tree rings prove it was cold in the past? And worse, instead of being honest about the question, scientists simply resorted to hiding the decline, hoping no one would notice. It didn’t work.
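With synthetic numbers, the kind of check that exposes divergence looks like this: calibrate against the early part of the thermometer overlap, then test whether the relationship still holds afterwards. (Everything below is illustrative, not a real site record.)

```python
import numpy as np

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# A hypothetical site: a thermometer record for 1900-2000 and a ring-width
# index that tracks temperature until 1960, then "diverges": rings narrow
# while measured temperatures keep rising.
rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
temp = 0.01 * (years - 1900) + rng.normal(0, 0.05, years.size)

rings = temp + rng.normal(0, 0.05, years.size)
post = years > 1960
rings[post] -= 0.015 * (years[post] - 1960)   # the built-in divergence

early = correlation(temp[~post], rings[~post])   # calibration era: strong
late = correlation(temp[post], rings[post])      # afterwards: it breaks down
print(early, late)
```

A proxy that passes the first test and fails the second is exactly the situation described above, and deleting the failing years hides the failure rather than fixing it.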

Another issue the National Academy pointed to, still on the tree ring proxy, was that some kinds of tree are definitely not good for recording temperatures and should be avoided. They particularly singled out bristlecone pines. These are small conifers that grow to a great age, which of course makes them superficially appealing. Unfortunately, over their long lives, they form twisted, bunched-up trunks with ring width patterns that have nothing to do with temperature. And one of the discoveries made by Stephen McIntyre in his analysis of the Mann hockey stick was that its shape depended entirely on a set of 20 bristlecone pine records from Colorado that have a 20th-century trend of bigger rings despite, awkwardly, coming from a region where thermometers say no warming took place. This figure shows in the top panel the result of applying Mann’s statistical method to a collection of over 200 tree ring proxies, including the 20 bristlecone series, using a flawed method that puts most of the emphasis on those bristlecones. It has a compelling hockey stick shape. The bottom panel shows the same calculation after removing just the 20 bristlecone pine records. It’s clear that the hockey stick shape is entirely due to tree rings that experts have long known are not valid for showing temperature. What’s worse, as McIntyre has pointed out, Mann himself computed the bottom graph but hid the results instead of showing them to his readers.

This pattern is far too common. When we look hard at paleoclimate reconstructions, they fall apart on close inspection, but the scientists who do them almost never tell you about their weaknesses upfront. In fact, it’s happened so often that you’re justified in assuming it’s the rule, not the exception.

Another series that used to be popular in climate reconstructions was a collection of Russian tree rings from the Polar Urals region, introduced in a 1995 journal article by the late British climatologist Keith Briffa and his co-authors. They argued that their tree ring reconstruction showed the 20th century was quite warm compared to the previous 1,100 years, and they specifically identified the years around AD 1000 as among the coldest of the millennium. Here’s that chart.

This Briffa Polar Urals data series naturally became very popular in other tree ring reconstructions. But the problem was that the early part of the data was only based on three trees, which is not enough for confident conclusions. In 1998, some other scientists obtained more tree ring samples from the same areas, and suddenly the picture looked completely different. Instead of AD 1000 being super cold, it was right in the middle of the hottest period of all—the supposedly non-existent Medieval Warm Period—and the 20th century was no longer the least bit unusual.

So what did Briffa and his colleagues do? Did they publish a correction or let people know that they’d actually found evidence of a Medieval Warm Period? No, of course not. They just quietly stopped using Polar Urals data and switched to a new collection of tree rings from the nearby Yamal Peninsula that had the right shape. Now that switcheroo was bad enough, but the story gets worse. The dogged Steve McIntyre asked Briffa to release his Yamal data, but Briffa steadfastly refused. Eventually, after nearly a decade, the journal where he published his research ordered Briffa to release it, and McIntyre promptly made two remarkable discoveries. First, the number of trees in the 20th-century segment dropped off to only five near the end, which clearly fails the data quality standard. Second, McIntyre found that another scientist, Fritz Schweingruber, who happened to be a co-author of Briffa, had already archived lots of tree ring data from the same area, and while it looked similar to Briffa’s up to the year 1900, instead of going up in the 20th century, it went down. Briffa, surprise, surprise, hadn’t used it. So it’s not just incomplete data that happens to have a bias; it’s data that’s been deliberately chosen to introduce one.

This graph from McIntyre’s ClimateAudit website shows a close-up of the 20th-century portion. The red line is the data Briffa used, the black line is the Schweingruber data, and the green line is the result of combining all the data together. Clearly, when you include the more complete data, the blade of the hockey stick disappears, and the 20th century shows a slight cooling, not warming, which is kind of important to the story. Another source of bias in tree ring reconstructions comes from a practice called pre-screening. Recall that the National Academy of Sciences panel said that researchers should sample a lot of trees at a location, and that if there is a climate signal, it should be common to all of them. If modern temperatures line up with some of the tree cores but not others, the match might be a spurious correlation, which would mean the early portion of the record is not reliable for temperature reconstructions. In a 2006 study, Australian statistician David Stockwell illustrated the problem by using a computer to generate a thousand sets of random numbers, each one 2,000 numbers long. He selected the ones whose last 100 numbers happened to correlate with the orthodox 20th-century global average temperature series, threw out the rest, then combined the survivors the way paleoclimatologists do. The result was an impressive hockey stick, which according to common practice would lead to the conclusion that today’s climate is the warmest in the past millennium. The problem is that the graph contains absolutely no information about the past climate, true or false.

Instead, it was constructed using random numbers that were then pre-screened to fit modern temperatures and then spliced to the modern temperature record to create the illusion of providing information about the past, which is exactly what far too many tree ring researchers are doing now. One way to guard against generating spurious results like this one is to use all the data from a sampling location, but researchers on a mission don’t do so. Instead, they pre-screen and may even end up throwing out most of the data they’ve collected if it’s what it takes to get the result they wanted. In 1989, American climate scientist Gordon Jacoby and his co-author Rosanne D’Arrigo published a reconstruction of northern hemisphere temperatures that had the usual hockey stick shape, although it only went back to 1670. In the article, the authors said they sampled data from 36 sites but only kept data from 10 of them. So McIntyre emailed Jacoby and asked for the others, and Jacoby, unsurprisingly, refused to show them. What is surprising is the frankness of his explanation: “Sometimes, even with our best efforts in the field, there may not be a common low-frequency variation among the cores or trees at a site. This result would mean that the trees are influenced by other factors that interfere with the climate response. There can be fire, insect infestation, wind or ice storm, etc., that disturb the trees, or there can be ecological factors that influence growth. We try to avoid the problems but sometimes cannot.

If we get a good climatic story from a chronology, we write a paper using it. That is our funded mission. It does not make sense to expend efforts on marginal or poor data, and it is a waste of funding agency and taxpayer dollars.” The rejected data are set aside and not archived. And you can guess what makes a good climatic story. McIntyre eventually gave up trying to get the 26 datasets Jacoby threw away. But Jacoby died in 2014, and that same year his university archived a lot of his data; then, in the fall of 2023, McIntyre noticed that buried in the archive was one of the series Jacoby had rejected, from Sukakpak Peak in Alaska. Even though it was close to two of the sites that Jacoby and D’Arrigo had retained, and had at least as many individual tree cores as other sites, it was rejected as poor data. And here’s what it looks like: the ring widths, if they’re a temperature proxy, show that the Medieval period was very warm, then there were a couple of very cold periods, and the 20th century was nothing unusual. That is not a good climate story, which is apparently what “poor data” now means to these sorts. So they threw it out. You can see how the game works: when they get a hockey stick shape, they say it’s based on good data, and when we ask how they define good, the answer is, if it’s shaped like a hockey stick. QED.
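The screening effect Stockwell demonstrated is easy to reproduce: generate proxies that are pure random noise, keep only those whose recent portion happens to match a rising “modern temperature” series, and average the survivors. A sketch (the series counts and cutoff below are illustrative, not Stockwell’s exact settings):

```python
import numpy as np

rng = np.random.default_rng(42)
N_SERIES, LENGTH, MODERN = 1000, 1000, 100

# A rising "modern instrumental temperature" series for the final MODERN steps.
instrumental = np.linspace(0.0, 1.0, MODERN)

# Pseudo-proxies that are pure random walks: zero climate information.
proxies = rng.normal(0, 1, (N_SERIES, LENGTH)).cumsum(axis=1)

# Pre-screening: keep only the series whose modern tail happens to
# correlate with the instrumental record.
def tail_corr(series):
    return np.corrcoef(series[-MODERN:], instrumental)[0, 1]

kept = np.array([p for p in proxies if tail_corr(p) > 0.5])

# Average the survivors, each centred on its own mean, the way composite
# reconstructions are built.
recon = (kept - kept.mean(axis=1, keepdims=True)).mean(axis=0)

# The "blade": the modern end of the composite tracks the instrumental
# ramp closely, even though every input series was random noise.
print(len(kept), np.corrcoef(recon[-MODERN:], instrumental)[0, 1])
```

Hundreds of the thousand noise series survive the screen, and their average ends in a convincing upswing manufactured entirely by the selection step.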

Now let’s look at a very different type of temperature proxy: marine sediments. Here scientists look at organic compounds called alkenones, which are produced in the ocean by tiny creatures called phytoplankton and which settle in layers on the ocean floor. Since alkenones have chemical properties that correlate with temperature, by drilling cores out of the ocean floor and examining the changing composition of alkenones in layers that they estimate to have been formed at various times, scientists can say something about the past climate. Once again, there’s a lot more uncertainty than we often hear about, because the layers form very slowly. Unlike tree rings, alkenone layers don’t pick up year-by-year changes, only average changes over multiple centuries. A single data point will represent the alkenone composition in a thin layer of a core sample, but it might, at best, indicate not a single year but average climate conditions over several hundred years.
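That loss of resolution can be illustrated numerically: a sharp warm spike lasting a few decades nearly vanishes once the record is averaged into multi-century slices (the numbers below are purely illustrative):

```python
import numpy as np

# An annual "true" temperature history: flat, except for one sharp
# spike of +1.5 degrees lasting 40 years.
years = np.arange(5000)
true_temp = np.zeros(years.size)
true_temp[2000:2040] = 1.5

# What a slow-deposition proxy records: one value per 300-year slice,
# i.e. the average temperature over that slice.
SLICE = 300
n = years.size // SLICE
proxy = true_temp[:n * SLICE].reshape(n, SLICE).mean(axis=1)

print(true_temp.max())        # 1.5 -- the spike as it actually happened
print(round(proxy.max(), 2))  # 0.2 -- the spike as the sediment record sees it
```

A spike comparable in size to the entire modern warming shrinks to a barely visible bump in the sliced record.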

As a result, they can’t be used for comparing modern short-term warming and cooling trends to the past; the appropriate comparison would be a single data point representing average temperature from 1823 to 2023. On the plus side, because thin layers cover long periods, a single sediment core can provide information a long way into the past, even 10,000 years or more. And thus it was that in March 2013, headlines around the world announced that the Earth was now warmer than at any time in the past 11,000 years, based on a new proxy reconstruction published in Science magazine by a young scientist named Shaun Marcott, built mostly from a global sample of alkenone cores collected by other scientists in previous years. The graph showed that the climate had warmed after the end of the last glaciation, 11,000 years ago, stayed warm for millennia, then cooled gradually until the start of the 20th century, after which it warmed at an exceptional rate, undoing 8,000 years of cooling in only one century. Gotcha, right? Except a reader at Climate Audit soon noticed something odd. Marcott had just finished his PhD at Oregon State University, and the paper in Science was based on one of his thesis chapters, which was posted online, and, drum roll, please, in that version there was no uptick at the end, no 20th-century warming, no hockey stick. So where did the blade of the stick come from in the version published in Science? While climate scientists were busy proclaiming the Marcott result as more proof of the climate crisis, it fell to outsiders, once again Steve McIntyre and his readers at Climate Audit, to dig into the details. In this case, McIntyre was able to obtain the Marcott data promptly, and to show that the big jump at the end was based on just one single data point.

As a mining consultant, McIntyre also knew something important about drill cores: the topmost layer, which represents the most recent data, can be contaminated during the drilling process. He wanted to know how the various scientists who collected the alkenone samples dealt with that issue, so he looked up the original studies, and to his surprise, he found that they didn’t consider the core tops to be reliable measures of recent temperatures. Most of them only reported temperature proxies starting centuries in the past, even a thousand years or more. Marcott and his co-authors had redated the cores to the present; had they used the dates assigned by the original authors, there would have been no uptick at the end. After being confronted with this evidence, Marcott and his co-authors put out a posting on the web in which they made a startling admission: “The 20th-century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.” But of course, the damage had been done. How many news stories that pounced on the original even mentioned this critical correction, let alone made a big fuss over it?

Now let’s look at another popular type of proxy, the one that comes from drilling out cores in large ancient ice caps like the ones over Greenland and Antarctica. These cylinders are believed to provide evidence of temperatures back hundreds of thousands of years, because every year, a layer of snow becomes ice, and the chemical composition of the ice contains clues about temperature. One of the most famous of these reconstructions is the Vostok ice core from Antarctica.

It shows that most of the past half-million years have been spent in Ice Age conditions, interrupted only by short interglacial periods. The last 10,000 years, our current interglacial, has been longer than the previous three, but colder than the previous four. The ice core record also shows that changes in and out of ice ages are extremely rapid. When we start diving into the next glaciation, we may not have much time to prepare, assuming, of course, that ice cores are reliable. It’s no good for us to point to methodological uncertainties when proxies confirm the orthodox and then tout their precision when they challenge it. And one important point about ice cores is that, as with sediment layers, the bubbles don’t take definitive shape in just one year or a couple of years, so there’s a certain degree of blurring.

That means that they can miss significant spikes or dips in temperature if they’re sudden and brief. “Brief” here being a word that can even extend to a century. Which isn’t to say that proxies are inherently useless, or even disreputable. On the contrary, as we said at the outset, we applaud the ingenuity of researchers who look for indirect ways of measuring things that matter when the direct ones aren’t available. But we insist that they be honest in how they collect and sort the data, and how they present it, including how much certainty they claim that it carries. Oh, there’s one more key point that we need to make about the whole business of using proxy data to reconstruct past temperatures. During the overlap period when we have both thermometer and proxy data, the challenge is to construct a statistical model connecting them.

And the problem is that, mathematically speaking, it’s well known that many different models can be constructed with no particular reason to favor one over the others. In a 2011 study in the Annals of Applied Statistics, two statisticians, Blakely McShane and Abraham Wyner, demonstrated this by constructing multiple different models using the Mann hockey stick data and showed that while they implied completely different conclusions about how today’s climate compares to the past, they all fit the data about the same. While climatologists tend to produce results like the red line, it would be just as easy and just as valid to produce the green line from the same data. So the uncertainties in these kinds of reconstructions go way beyond the little error bars climatologists like to draw around their reconstructions, because in truth, they can’t be certain of the shape of the reconstruction to begin with.
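The point holds even in the simplest setting. In the sketch below (synthetic data, not McShane and Wyner’s actual models), two standard calibration choices, regressing temperature on the proxy versus regressing the proxy on temperature and inverting, fit the overlap period comparably well yet give systematically different reconstructions once the proxy moves outside the calibration range:

```python
import numpy as np

rng = np.random.default_rng(7)

# Overlap period: 150 "years" with both a proxy and a thermometer record.
# True relation (unknown to the analyst): temp = 0.5 * proxy + noise.
proxy = np.linspace(0.0, 1.0, 150)
temp = 0.5 * proxy + rng.normal(0, 0.1, proxy.size)

# Model 1, "direct" calibration: regress temperature on the proxy.
b_dir, a_dir = np.polyfit(proxy, temp, 1)

# Model 2, "inverse" calibration: regress the proxy on temperature, then
# invert the fitted line to predict temperature from the proxy.
c, d = np.polyfit(temp, proxy, 1)
b_inv, a_inv = 1.0 / c, -d / c

def rmse(b, a):
    return float(np.sqrt(np.mean((a + b * proxy - temp) ** 2)))

# Over the calibration period the two models fit comparably well...
fit_dir, fit_inv = rmse(b_dir, a_dir), rmse(b_inv, a_inv)

# ...but backcast at a proxy value outside the calibration range,
# standing in for the deep past, they disagree systematically.
past = -1.0
recon_dir = a_dir + b_dir * past
recon_inv = a_inv + b_inv * past
print(fit_dir, fit_inv, recon_dir, recon_inv)
```

The in-sample errors differ only slightly, but the two backcasts disagree by a substantial fraction of the reconstructed range, a structural uncertainty that error bars drawn around either single model never capture.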

So yes, by all means apply proper scientific methods to reconstructing the past climate, but proper ones, handling data honestly and recognizing the often very large amount of uncertainty. For the Climate Discussion Nexus, I’m John Robson, and that’s our backgrounder on temperature reconstruction from the pre-metered era.

via Watts Up With That?

https://ift.tt/YfK3cP9

April 22, 2024 at 12:03PM
