Guest Post by Willis Eschenbach
Over at the marvelous KNMI website, home of all kinds of climate data, they’re just finishing their transfer to a new server. I noticed that they’ve completed the migration of the Coupled Model Intercomparison Project Phase 6 (CMIP6) data to the new server, so I downloaded all of the model runs.
I thought I’d take a look at the future scenario that has the smallest increase in CO2 emissions. This is the “SSP126” scenario. KNMI has a total of 222 model runs using the SSP126 scenario. Figure 1 shows the raw model runs with the actual temperatures.
Figure 1. Raw results, 222 model runs, CMIP6 models, SSP126 scenario
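For anyone who wants to follow along, loading a directory of downloaded per-run files into one table might look something like this. This is just a sketch in Python/pandas; the file names, the `tas` column, and the CSV layout here are my assumptions for illustration, not KNMI’s actual export format, so adjust the parser to whatever you actually downloaded:

```python
import glob
import os
import tempfile

import pandas as pd

# Hypothetical layout: one CSV per model run, with "year" and "tas" columns
# (tas = surface air temperature in °C). KNMI's real export format differs.
def load_runs(directory: str) -> pd.DataFrame:
    """Read every per-run CSV in `directory` into one year-indexed
    DataFrame, one column per model run."""
    runs = {}
    for path in sorted(glob.glob(os.path.join(directory, "*.csv"))):
        name = os.path.splitext(os.path.basename(path))[0]
        runs[name] = pd.read_csv(path, index_col="year")["tas"]
    return pd.DataFrame(runs)

# Tiny demonstration with two fake runs written to a temp directory
with tempfile.TemporaryDirectory() as d:
    for i in range(2):
        pd.DataFrame({"year": [1850, 1851], "tas": [13.0 + i, 13.1 + i]}).to_csv(
            os.path.join(d, f"run{i}.csv"), index=False)
    runs = load_runs(d)
```

With all 222 runs in one DataFrame like this, plotting the raw temperatures in Figure 1 is a one-liner.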
So here we have the first problem. The various models can’t even decide how warm the historical period was. Modeled 1850-1900 mean temperatures range all the way from 12.5°C up to 15.5°C … hardly encouraging. I mean, given that the models can’t replicate the historical temperature, what chance do they have of projecting the future?
Next, I converted each run to anomalies, using the early period 1850-1880 as the baseline. That gives them all the same starting point, so I could see how they diverged over the 250-year period.
Figure 2. Anomalies, 222 model runs, CMIP6 models, SSP126 scenario
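The anomaly step is simple: for each run, subtract that run’s own mean over the baseline years. Here’s a minimal sketch of the calculation with three made-up runs (the temperatures and trends are invented purely to show the mechanics, not taken from the CMIP6 data):

```python
import numpy as np
import pandas as pd

def to_anomaly(series: pd.Series, base_start: int, base_end: int) -> pd.Series:
    """Subtract the series' mean over the (inclusive) baseline years."""
    return series - series.loc[base_start:base_end].mean()

# Toy runs: very different absolute temperatures, identical 0.005 °C/yr trend
yrs = np.arange(1850, 2101)
runs = pd.DataFrame(
    {f"run{i}": 12.5 + i + 0.005 * (yrs - 1850) for i in range(3)},
    index=pd.Index(yrs, name="year"),
)

# After re-baselining, all three runs collapse onto the same starting point
anomalies = runs.apply(to_anomaly, base_start=1850, base_end=1880)
```

Note that this hides the absolute-temperature disagreement from Figure 1 entirely, which is exactly why Figure 2 looks so much tidier than Figure 1.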
This brings up the second problem. As the density of the results on the right side of the graph shows, the models roughly divide into three groups. Why? Who knows. And by the end of the period, they project temperature increases above what is called the “pre-industrial” temperature ranging from 1.3°C up to 3.1°C … just which number are we supposed to believe?
Finally, the claim is that we can simply average the various models in the “ensemble” to find the real future temperature. So I compared the average of the 222 models to observations. I used an anomaly period of 1950-1980 so that the results wouldn’t be biased by differences or inaccuracies in the early data. And I used the Berkeley Earth and the HadCRUT surface temperature data. Figure 3 shows that result.
Figure 3. Global surface temperature observations from Berkeley Earth (red) and HadCRUT (blue), along with the average of the 222 climate models.
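A minimal sketch of that comparison, with made-up numbers standing in for the model runs and the observational series (the 0.02 °C/yr and 0.015 °C/yr trends below are illustrative only, not the actual CMIP6, Berkeley Earth, or HadCRUT values):

```python
import numpy as np
import pandas as pd

def rebaseline(df: pd.DataFrame, base_start: int, base_end: int) -> pd.DataFrame:
    """Anomalies relative to each column's mean over the baseline years."""
    return df - df.loc[base_start:base_end].mean()

yrs = np.arange(1950, 2023)
idx = pd.Index(yrs, name="year")

# Fake ensemble: 5 runs warming at 0.02 °C/yr; fake obs warming at 0.015 °C/yr
models = pd.DataFrame(
    {f"run{i}": 14.0 + 0.02 * (yrs - 1950) for i in range(5)}, index=idx)
obs = pd.Series(13.8 + 0.015 * (yrs - 1950), index=idx, name="obs")

# Put models and observations on the same 1950-1980 anomaly baseline,
# then average across the ensemble and measure the gap at the end
models_anom = rebaseline(models, 1950, 1980)
obs_anom = rebaseline(obs.to_frame(), 1950, 1980)["obs"]
ensemble_mean = models_anom.mean(axis=1)
gap_2022 = ensemble_mean.loc[2022] - obs_anom.loc[2022]
```

The point of the shared 1950-1980 baseline is that any gap that opens up afterward reflects a difference in warming rate, not a difference in absolute temperature.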
This brings us to the third and the biggest problem. In only a bit less than a quarter-century, the average of the models is already somewhere around 0.5°C to 0.7°C warmer than the observations … YIKES!
And they’re seriously claiming they can actually use these models to tell us what the surface temperatures will be in the year 2100?
I don’t think so …
I mean seriously, folks, these models are a joke. They are clearly not fit to base trillion-dollar public decisions on. They can’t even replicate the past, and they’re way wrong about the present. Why should anyone trust them about the future?
Here on our forested hillside, rain, beautiful rain, has come just after I finally finished pressure-washing all the walls, including the second story … timing is everything, the rain is rinsing it all down.
My warmest regards to all, and seriously, if you believe these Tinkertoy™ climate models are worth more than a bucket of bovine waste products, you really need to sit the climate debate out …
w.
Further Reading: In researching this I came across an excellent open-access study entitled “Robustness of CMIP6 Historical Global Mean Temperature Simulations: Trends, Long-Term Persistence, Autocorrelation, and Distributional Shape”. It’s a very thorough, in-depth examination of some of the many problems with the models. TL;DR version: very few of the model results are actually similar to real observational data.
In addition, there’s a good article in Science magazine entitled Earning The Public’s Trust on why people don’t trust science so much these days. Spoiler Alert: climate models get an honorable mention.
As Always: When you comment please quote the exact words that you are discussing. This avoids many of the misunderstandings that plague the intarwebs.
via Watts Up With That?
March 16, 2022 at 12:03PM
