Month: September 2023

True, Indianapolis Star, “Climate Change is not ‘Theoretical,’” but Its Connection to Extreme Weather Is

From ClimateREALISM

The Indianapolis Star (IndyStar) published a story claiming climate change is causing dangerous weather changes in Indiana. Climate change is a fact, but the data show, contrary to what is implied by the expert interviewed by the IndyStar, that neither flooding nor hot temperatures have become more extreme or more common in Indiana than they have been historically.

IndyStar reporter Karl Schneider interviewed Gabe Filippelli, executive director of Indiana University’s Environmental Resilience Institute, for his story, “Climate change not ‘theoretical, or even debatable,’ an IU expert says. What’s the solution?”

“Climate change already is affecting the everyday lives of Hoosiers and experts at Indiana University are exploring potential solutions,” writes Schneider.

“We do have some significant short-term challenges and one of them is flooding,” Filippelli told Schneider. “We’re already seeing a lot more extreme rainfall events and … we’re going to see even more into the future.

“The other issue is extreme heat. We see an increase in extreme heat events globally, but also right here in Indiana …,” Filippelli continued to opine.

I say opine because the data on flooding and extreme heat in Indiana refute his claims.

Concerning extreme rainfall and flooding, data from the National Centers for Environmental Information (NCEI) at the National Oceanic and Atmospheric Administration show that Indiana’s single-day record for precipitation was set in August 1905, more than 118 years of global warming ago. Also, although records do indicate the Midwestern United States is receiving modestly more precipitation on average during the present period of modest warming than it did in the early and mid-20th century, this has not led to a worsening of flooding. In fact, in its most recent report, the U.N. Intergovernmental Panel on Climate Change says it has detected no changes in flooding and that it can attribute no particular floods or patterns in floods to human-caused climate change.

Concerning extreme heat, once again data from the NCEI refute Filippelli’s uncorroborated assertion that extreme heat is increasing. Indiana’s single-day maximum temperature record of 116°F was set in 1936, 87 years of climate change ago. Indeed, more record hot temperatures were set in the United States during the Dust Bowl decade of the 1930s than in any other decade before or since, and far more record high temperatures for the country were set prior to 1950, before the recent period of modest warming, than have been set or tied in the 70-plus years of climate change since. Only six state high-temperature records have been set or tied since 2000, during what most alarmists call the hottest two decades on record. By contrast, 25 state maximum temperature records set in the 1930s still stand as records today.

Neither flooding nor extreme heat have worsened during the recent period of modest warming. Had Schneider checked the data, rather than relying on “expert opinion,” he could easily have established this fact and produced an accurate story, rather than one more in a long line of climate crisis scare stories.

H. Sterling Burnett

H. Sterling Burnett, Ph.D., is the Director of The Heartland Institute’s Arthur B. Robinson Center on Climate and Environmental Policy and the managing editor of Environment & Climate News. He also edits Heartland’s Climate Change Weekly email and hosts the Environment & Climate News Podcast.

via Watts Up With That?

https://ift.tt/3v8Pc2V

September 16, 2023 at 08:07PM

A Stunningly Good Hurricane Forecast

From the Cliff Mass Weather Blog

Cliff Mass

Numerical weather prediction has improved dramatically over the past decades, providing potent warnings for extreme weather, such as hurricanes.

There are few better examples than the prediction of Hurricane Lee, which will make landfall near the Maine/New Brunswick border late Saturday.

The U.S. global model, the GFS, has been spectacularly skillful in predicting this storm, well more than a week ahead.

The latest forecast run shows the storm making landfall near the international border around 5 PM PDT on Sunday. That is a 54-hour prediction. This is close enough in time…and so consistent with other model forecasts…that you can be assured that this is close to what will happen.

But how did extended forecasts do?

The 72-h prediction is pretty much the same.

The 126-h prediction is nearly identical in position.

The 198-h (8.25-day) forecast has a strong hurricane in pretty much the same location.

Folks, this is a stunningly good forecast for over a week ahead.

Professor Brian Tang of the University at Albany has a wonderful website that verifies the hurricane track (position) forecasts of major modeling/forecasting systems. The results for Hurricane Lee are shown below for forecasts of 120 hours (5 days) or less.

In general,  the track accuracy gets better for shorter forecasts…which makes sense. But let’s compare the American model (blue color, AVNO), the European Center model (red color), and the UKMET office model (green color).  The human (official) forecast is shown in black.

Wow.  The American model is STUNNINGLY accurate at all projections in time.  

It is FAR better than the nominally top two global modeling systems in the world:  the European Center and UKMET.  The forecast error is under 100 km (60 miles) for all projections shown. 

Extraordinary.  

The model forecasts are better than the official Hurricane Center forecasts….I suspect that humans are probably hedging their bets with the European Center model solution.😅
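
For reference, the track error plotted on such verification sites is essentially the great-circle distance between a forecast storm center and the observed one. Here is a minimal Python sketch of that calculation, using hypothetical coordinates rather than Lee’s actual verification data:

```python
from math import radians, sin, cos, asin, sqrt

def track_error_km(lat_f, lon_f, lat_o, lon_o):
    """Great-circle (haversine) distance in km between a forecast
    storm center and the observed (verified) center."""
    R = 6371.0  # mean Earth radius, km
    dlat = radians(lat_o - lat_f)
    dlon = radians(lon_o - lon_f)
    a = sin(dlat / 2)**2 + cos(radians(lat_f)) * cos(radians(lat_o)) * sin(dlon / 2)**2
    return 2 * R * asin(sqrt(a))

# Hypothetical example: a forecast center vs. a verified center near the
# Maine/New Brunswick border (illustrative coordinates only).
print(f"track error: {track_error_km(44.5, -67.0, 44.8, -66.6):.0f} km")  # ~46 km
```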

This was a truly excellent forecast and not the only success for the American model this season.  Hopefully, this extraordinary performance will be persistent for future storms, perhaps reflecting recent improvements in the U.S. global modeling system.

Finally, I should note there are real policy implications of the rapidly advancing weather prediction skill now available to decision-makers.  Excellent forecasts can help protect people and economic assets from extreme weather.

Better forecasts are the first line of defense against severe weather. 

Better forecasts have great potential for reducing the negative impacts of global warming.

This is one of the reasons I have spent some time trying to calm down those who are panicking over global warming and extreme weather.

Climate-related deaths are down…and I mean WAY down.  Better adaptation and a richer world have contributed, but so have better forecasts.

Importantly, we have only begun taking advantage of improved forecast skill. 

The winds on Maui were nearly perfectly predicted on August 7-8 of this year, yet 115 people died and nearly 10 billion dollars in damage was done. We could easily have stopped the carnage by shutting off the power and effectively evacuating the population.

Most major wildfires are related to strong winds and such winds are often forecast with great skill.  Few should be a surprise.

In summary, coastal New England has had nearly a week to prepare for strong winds and heavy precipitation (over northern Maine), and we can be proud of the technological advances and investments in NOAA and in other government agencies that made such forecasting prowess possible.

via Watts Up With That?

https://ift.tt/4YtK5B9

September 16, 2023 at 04:03PM

Into the Unknown

If the mainstream media is to be believed, the climate “crisis” is accelerating fast. Everything is so much worse than last year, and next year will be much worse than this year, and so on.

But just how much more potent is the greenhouse effect now than it was last year? It is not something made obvious in discussions of emissions (which, according to EDGAR, are seemingly flatlining globally), or even in the CO2 concentrations of the atmosphere.

The consequences of climate change arise downstream of the physical drivers. The most important driver of present interest is the CO2 concentration, to which the global atmospheric temperature increase is related by a logarithmic function. As we know, CO2 concentrations are inching up year by year, but the thing that is dinned into us more often than anything else is the importance of the CO2 emissions themselves: in other words, our current-account spending, when the problem is in fact the overdraft.

This way of thinking only gets you so far, because if this “pollutant” were cleansed from the atmosphere, every living thing on Earth would die. Some pollutant. The situation we consider “normal” is 280 parts per million of CO2 in the atmosphere by count of molecules, and it is against this benchmark that the climate “crisis” is judged.

Well, I decided to crunch the numbers to find out just how objectively worse this year’s climate is compared to last year’s in terms of the potency of the CO2 in the atmosphere.

I also wanted to answer a subsidiary question: How much better would things have been this year, if the UK had not existed at all?

Some interesting numbers 1: the mass of the atmosphere

According to Wiki, the mass of the atmosphere is 5.15 × 10¹⁸ kg, or 5.15 petatonnes. Of that, about 3.13 × 10¹⁵ kg is CO2, or 3.13 teratonnes. [Remembering that CO2 is heavier than the other molecules.]

In Denierland, I said that the present mass of C [not CO2] in the atmosphere is 850 Gt, which equals 3.12 × 10¹⁵ kg CO2, close to Wiki’s figure.

Taking this mass to correspond to 0.04% CO2 (400 ppm), you get 1 ppm CO2 ≈ 8 Gt (billion tonnes).
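
As a sanity check, here is a minimal Python sketch of that conversion, using the figures quoted above; the molar-mass ratio accounts for CO2 being heavier than the average air molecule:

```python
# Convert an atmospheric CO2 concentration (ppm by molecule count) to mass.
M_ATM = 5.15e18   # mass of the atmosphere, kg (Wiki figure)
M_AIR = 28.97     # mean molar mass of dry air, g/mol
M_CO2 = 44.01     # molar mass of CO2, g/mol

ppm = 400         # ~0.04% of molecules are CO2
mass_co2_kg = M_ATM * ppm * 1e-6 * (M_CO2 / M_AIR)

print(f"CO2 mass: {mass_co2_kg:.2e} kg")              # ~3.13e+15 kg
print(f"1 ppm ≈ {mass_co2_kg / 1e12 / ppm:.1f} Gt")   # ~7.8 Gt, i.e. ≈ 8
```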

Some interesting numbers 2: global emissions of CO2

According to EDGAR, the total quantity of CO2 emitted globally in 2022 was 38.5 Gt. This is 3.85 × 10¹³ kg, or roughly 1% of the quantity of CO2 already in the atmosphere.

However, not all our emissions end up getting added to the atmospheric stock. Again referring to Wiki I find that 57% of humanity’s emissions were absorbed in 2012. I’m going to say that is reduced to 50% today, which is the same figure I used in Denierland (benefitting the alarmist case). So of the 38.5 Gt emitted, 19.25 Gt net is added to the atmosphere.

Net added last year / Existing stock = Proportion added last year

19.25 / 3120 ≈ 0.006

This is about 1 part in 167, so it would take a further 167 years for CO2 levels to double from today’s, given no change in our behaviour. [We are already well on the way to doubling from 280. This illustration would take us from 420 to 840 in 2190 AD, although we would probably run out of fossil fuels before then.]

We already have the estimate that 1 ppm CO2 ≈ 8 Gt, so we estimate too that the 19.25 Gt we added last year would have increased the CO2 concentration by 19.25 / 8 ≈ 2.4 ppm.

In May last year (Wiki again) the concentration of CO2 was 421 ppm. So on these numbers, it should now be 423.4 ppm.
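
Here is a short sketch pulling the arithmetic together (note that the unrounded ratio gives about 162 years to double; the 167 above comes from rounding the proportion to 0.006 before inverting):

```python
stock_gt = 3120            # CO2 already in the atmosphere, Gt
emitted_gt = 38.5          # global CO2 emissions in 2022 (EDGAR), Gt
net_gt = emitted_gt * 0.5  # assume ~50% is absorbed by sinks
gt_per_ppm = 8             # from the estimate above

print(f"proportion added:  {net_gt / stock_gt:.4f}")              # ~0.0062
print(f"years to double:   {stock_gt / net_gt:.0f}")              # ~162
print(f"ppm added in 2022: {net_gt / gt_per_ppm:.1f}")            # ~2.4
print(f"new concentration: {421 + net_gt / gt_per_ppm:.1f} ppm")  # ~423.4
```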

What about the UK’s contribution to this increase? According to EDGAR, the UK emitted 0.34 Gt of CO2 last year. This is 0.88% of global emissions, meaning that we are responsible for 0.0088 × 2.4 ppm, or 0.0212 ppm, of the increase of 2.4 ppm.

Some interesting numbers 3: global temperature change

We know that every time you double the CO2 concentration, you get the same increase in global temperature. What this means is that every ppm you add is worth less in terms of warming potential than the previous. It also means that, if you have a baseline level of CO2 (280 ppm) it is easy to calculate the temperature difference resulting from any other concentration of CO2, with a couple of caveats.

Caveat 1: you have to put in a number for transient climate response (TCR). TCR is the fast-acting part of climate change, the bit that is relevant to a human timescale, before long-term feedbacks have reached equilibrium. Lewis & Curry (2014, Climate Dynamics) estimated this to be 1.33 K per doubling using an energy budget approach. Big computers give higher numbers. I am, very generously (for the alarmist case), going to call TCR = 2 K here. However, TCR is not instantaneous; it is relevant to 2100. So the temperature change reported here would be in train but not wholly realised.

Caveat 2: there are things going on other than CO2 concentrations going up. There are other greenhouse gases (in particular methane), natural cycles, volcanoes, etc. This elementary analysis ignores all of those.

The function you end up with (I don’t know if this is standard, or idiosyncratic) to calculate the temperature change from the baseline is:

ΔT = TCR × log₂(C / C₀)

Or in words, the temperature change from the baseline (ΔT) equals TCR times the logarithm base 2 of the present CO2 concentration (C) divided by the baseline concentration (C₀ = 280 ppm).

We can now estimate the temperature increase from the baseline by plugging in TCR = 2 K and CO2 = 423.4 ppm. The answer you get is that the global climate should be an average of

1.193 K

warmer than it was in the pre-industrial baseline situation (or it will be once the short-term feedbacks are settled).

If you plug in last year’s concentration (421 ppm) you get

1.177 K

So the annual increase in temperature in this idealised situation would be 0.016 K between 2022 and 2023, or a sixtieth of a kelvin.
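
For anyone who wants to reproduce these numbers, here is a minimal sketch of the calculation:

```python
from math import log2

TCR = 2.0         # transient climate response, K per doubling (generous)
BASELINE = 280.0  # pre-industrial CO2, ppm

def delta_t(ppm, tcr=TCR, baseline=BASELINE):
    """Warming over the pre-industrial baseline for a given CO2 level."""
    return tcr * log2(ppm / baseline)

print(f"2023 (423.4 ppm): {delta_t(423.4):.3f} K")                  # 1.193
print(f"2022 (421.0 ppm): {delta_t(421.0):.3f} K")                  # 1.177
print(f"annual increase:  {delta_t(423.4) - delta_t(421.0):.3f} K") # 0.016
```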

What about the UK’s contribution to the temperature increase over the baseline? What if we had disappeared last year? What if, in 2022, the UK’s 0.34 Gt had not been emitted?

With the UK, the estimated increase over the baseline was 1.193 K. Without the UK, the estimate is

1.193 K

(Yes, it’s the same number because of insufficient significant figures.) In fact, on these estimates, the UK’s contribution to climate change last year was:

0.00014 K

Or roughly 1/7000th of a kelvin.
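
Reusing the delta_t function from the sketch above, the UK counterfactual works out as:

```python
# Remove the UK's ~0.0212 ppm share of last year's rise.
uk_share_ppm = 0.0212
print(f"UK contribution: {delta_t(423.4) - delta_t(423.4 - uk_share_ppm):.5f} K")
# ~0.00014 K, i.e. roughly 1/7000th of a kelvin
```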

Bottom line

Although temperatures are rising, they (or the part driven by CO2 emissions, at any rate) are rising at a non-threatening, incremental rate. With parameter estimates favouring the alarmist cause, the effect of CO2 emissions last year was of the order of a sixtieth of a kelvin. The UK’s contribution to global warming from CO2 emissions last year was about 1/7000th of a kelvin, again with parameter estimates favouring alarm.

If these temperature changes drive other variables and give rise to dangerous weather etc, then it seems unlikely that any such changes would be noticeable on a year-to-year basis.

The climate “crisis” is not accelerating.

Editorial

Oh, for a leader who would stop dictating every aspect of our lives, and let us get on with them.

via Climate Scepticism

https://ift.tt/bzCgAiL

September 16, 2023 at 03:49PM

IPCC Guilty of “Prosecutor’s Fallacy”

The IPCC made an illogical argument in a previous report, as explained in a new GWPF paper, The Prosecutor’s Fallacy and the IPCC Report. Excerpts in italics with my bolds and added images.

London, 13 September – A new paper from the Global Warming Policy Foundation reveals that the IPCC’s 2013 report contained a remarkable logical fallacy.

The author, Professor Norman Fenton, shows that the authors of the Summary for Policymakers claimed, with 95% certainty, that more than half of the warming observed since 1950 had been caused by man. But as Professor Fenton explains, their logic in reaching this conclusion was fatally flawed.

“Given the observed temperature increase, and the output from their computer simulations of the climate system, the IPCC rejected the idea that less than half the warming was man-made. They said there was less than a 5% chance that this was true.”

“But they then turned this around and concluded that there was a 95% chance that more than half of observed warming was man-made.”

This is an example of what is known as the Prosecutor’s Fallacy, in which the probability of a hypothesis given certain evidence is mistakenly taken to be the same as the probability of the evidence given the hypothesis.

As Professor Fenton explains:

“If an animal is a cat, there is a very high probability that it has four legs.
However, if an animal has four legs, we cannot conclude that it is a cat.
It’s a classic error, and is precisely what the IPCC has done.”

Professor Fenton’s paper is entitled The Prosecutor’s Fallacy and the IPCC Report.

What the number does and does not mean

Recall that the particular ‘climate change number’ that I was asked to explain was the number 95: specifically, relating to the assertion made in the IPCC 2013 Report of ‘at least 95% degree of certainty that more than half the recent warming is man-made’.  The ‘recent warming’ related to the period 1950–2010. So, the assertion is about the probability of humans causing most of this warming.

Before explaining the problem with this assertion, we need to make clear that (although superficially similar) it is very different to another more widely known assertion (still promoted by NASA) that ‘97% of climate scientists agree that humans are causing global warming and climate change’. That assertion was simply based on a flawed survey of authors of published papers and has been thoroughly debunked.

The 95% degree of certainty is a more serious claim.
But the  case made for it in the IPCC report is also flawed.

[Comment: In the short video above, Norman Fenton explains the fallacy the IPCC committed. Synopsis of the example: A man dies in a very rowdy gathering of young men. A size 13 footprint is found on the body. Fred is picked up by the police. He admits to being there but not to killing anyone, despite wearing size 13 shoes. Since statistics show that only 1% of young men have size 13 feet, the prosecutor claims a 99% chance that Fred is guilty. But the crowd was reported to be on the order of 1,000, so roughly 10 men there would have had size 13 shoes, and Fred is just one of them. So in fact there is only about a 10% chance that Fred is guilty.]
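
Here is a minimal sketch of the footprint arithmetic from the video example (not from the IPCC report itself):

```python
crowd = 1000        # young men at the gathering
p_size13 = 0.01     # fraction of young men with size 13 feet

expected_size13 = crowd * p_size13   # ~10 men with size 13 shoes
p_fred_guilty = 1 / expected_size13  # Fred is just one of them

print(f"prosecutor's claim: {1 - p_size13:.0%}")   # 99%
print(f"correct posterior:  {p_fred_guilty:.0%}")  # 10%
```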

The flaw in the IPCC summary report

It turns out that the assertion that ‘at least 95% degree of certainty that more than half the recent warming is man-made’ is  based on the same fallacy. In my article about the programme, I highlighted this concern as follows:

The real probabilistic meaning of the 95% figure: in fact it comes from a classical hypothesis test, in which observed data are used to test the credibility of the ‘null hypothesis’. The null hypothesis is the ‘opposite’ of the statement believed to be true, i.e. ‘Less than half the warming in the last 60 years is man-made’. If, as in this case, there is only a 5% probability of observing the data when the null hypothesis is true, statisticians equate this figure (called a p-value) to a 95% confidence that we can reject the null hypothesis.

But the probability here is a statement about the data given the hypothesis. It is not generally the same as the probability of the hypothesis given the data (in fact, equating the two is often referred to as the ‘prosecutor’s fallacy’, since it is an error often made by lawyers when interpreting statistical evidence).

The IPCC defined ‘extremely likely’ as at least 95% probability. The basis for the claim is found in Chapter 10 of the detailed Technical Summary, which describes various climate simulation models that reject the null hypothesis (that more than half the warming was not man-made) at the 5% significance level. Specifically, in the simulation models, if you assumed that there was little man-made impact, then there was less than a 5% chance of observing the warming that has been measured. In other words, the models do not support the null hypothesis of little man-made climate change. The problem is that, even if the models were accurate (and it is unlikely that they are), we cannot conclude that there is at least a 95% chance that more than half the warming was man-made, because doing so is the fallacy of the transposed conditional.

The illusion of confidence in the coin example comes from ignoring the ‘prior probability’, i.e. how rare double-headed coins are. Similarly, in the case of climate change, no allowance is made for the prior probability of man-made climate change, i.e. how likely it is that humans, rather than other factors such as solar activity, caused most of the warming. After all, previous periods of warming certainly could not have been caused by increased greenhouse gases from humans, so it seems reasonable to assume, before we have considered any of the evidence, that the probability that humans caused most of the recent increase in temperature is very low.
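
To see how much the prior matters, consider a small sketch. The 0.05 is the likelihood of the observed warming under the null hypothesis; the 0.95 likelihood under the man-made hypothesis is a purely illustrative assumption, not an IPCC figure:

```python
def posterior(prior, p_data_given_h=0.95, p_data_given_not_h=0.05):
    """P(mostly man-made | data) via Bayes' rule."""
    num = p_data_given_h * prior
    return num / (num + p_data_given_not_h * (1 - prior))

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior {posterior(prior):.2f}")
# prior  0.5: posterior 0.95  <- the 95% conclusion needs a 50:50 prior
# prior  0.1: posterior 0.68
# prior 0.01: posterior 0.16
```

Only with a 50:50 prior does the posterior come out at 95%; the lower the prior probability of the hypothesis, the further the posterior falls below the headline figure.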

Only the assumptions of the simulation models are allowed,
and other explanations are absent.

In both of these circumstances, classical statistics can then be used to deceive you, presenting an illusion of confidence when it is not justified.

See Also 

Beliefs and Uncertainty: A Bayesian Primer


You pick one unopened door. Monty opens one other door. Do you stay with your choice or switch?

Monty Hall Problem Simulator
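
For readers who want to check the counterintuitive answer for themselves, here is a minimal standalone Monte Carlo sketch (independent of the linked simulator):

```python
import random

def play(switch):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that hides a goat and is not the player's pick.
    monty = next(d for d in doors if d != pick and d != car)
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

N = 100_000
print(f"stay:   {sum(play(False) for _ in range(N)) / N:.3f}")  # ~0.333
print(f"switch: {sum(play(True) for _ in range(N)) / N:.3f}")   # ~0.667
```

Staying wins about a third of the time; switching wins about two thirds.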


via Science Matters

https://ift.tt/lumgQKs

September 16, 2023 at 02:56PM