Month: May 2023

Obama Longs for the Days When Establishment Media Set the National Agenda

Essay by Eric Worrall

“… Now people will say … I don’t care about the science … cause they’re just all liberals …”

Obama concerned that Americans ‘almost occupy different realities’

BY LAUREN SFORZA – 05/16/23 2:45 PM ET

Former President Obama said in a new interview that what most worries him is that Americans are occupying “different realities.”

“The thing that I’m most worried about is the degree to which we now have a divided conversation, in part because we have a divided media, a splintered media,” Obama told “CBS Mornings” host Nate Burleson.

“Today what I’m most concerned about is the fact that, because of the splintering of the media we almost occupy different realities, right? If something happens that, you know, in the past everybody could say, ‘All right, we may disagree on how to solve it, but at least we all agree that, yeah, that’s an issue,’” he said. 

“Now people will say, ‘Well, that didn’t happen,’ or, ‘I don’t believe that,’ or, ‘I don’t care about the science,’ or, ‘I’m not concerned about these experts, you know, ’cause they’re just all liberals’ or, you know, ‘That’s just conservative propaganda,’” he continued.

Read more: https://thehill.com/blogs/blog-briefing-room/4006754-obama-concerned-that-americans-almost-occupy-different-realities/

The one question Obama doesn’t ask is why the conversation fractured. If Obama wants to understand who caused the problem, perhaps he should try looking in the mirror.

The answer to why is obvious – noble cause corruption. Mainstream media abandoned their original mission of objectively reporting the news, and embraced an alternative mission of trying to re-shape society.

Consider the furore over Trump’s CNN town hall. After Trump’s outstanding CNN performance, critics erupted in anger that CNN had given President Trump a platform.

But millions of people want to hear what Trump has to say.

Why do many mainstream media figures believe CNN should have denied President Trump a platform, when so many of their potential audience wanted to see Trump speak? Were mainstream media objections based on their mission to uphold objective, unbiased reporting? Or were they angry that CNN had deviated from a liberal mission to re-shape society, by silencing voices which dispute the liberal narrative?

When Obama talks about healing the fracture, does he mean giving airtime to other points of view? Or is his idea of healing the fractured national voice an intention to silence those voices he doesn’t like?

Too many people in politics and mainstream media seem to believe it is their mission to protect the public from dangerous ideas and “misinformation”, to be the arbiters of truth, to decide which voices people should hear.

Did people like Obama really expect people would just sit still and take the abuse?

Whether it’s suppressing climate skeptics, refusing to debate them, or deplatforming politicians like Trump, the mainstream media and establishment’s intolerable attempts to shape the conversation through heavy-handed censorship are, in my opinion, the cause of the fracture.


Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be “cured” against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals. – CS Lewis

via Watts Up With That?

https://ift.tt/odOLf2I

May 16, 2023 at 08:47PM

More CO2 Good, Less CO2 Bad

Gregory Wrightstone explains at CO2 Coalition More Carbon Dioxide Is Good, Less Is Bad.  Excerpts in italics with my bolds and added images.

People should be celebrating, not demonizing, modern increases in atmospheric carbon dioxide (CO2). We cannot overstate the importance of the gas. Without it, life doesn’t exist.

First, a bit of history: During each of the last four glacial advances, CO2’s concentration fell below 190 parts per million (ppm), less than 50 percent of our current concentration of 420 ppm. When glaciers began receding about 14,000 years ago – a blink in geological time – CO2 levels fell to 182 ppm, a concentration thought to be the lowest in Earth’s history.

Line of Death

Why is this alarming? Because below 150 ppm, most terrestrial plant life dies. Without plants, there are no animals.

In other words, the Earth came within 30 ppm of the atmospheric CO2 concentration at which most land-based plants, and with them all higher terrestrial life-forms, would have gone extinct – nearly a true climate apocalypse. Before industrialization began adding CO2 to the atmosphere, there was no telling whether the critical 150-ppm threshold would be breached during the next glacial period.

Contrary to the mantra that today’s CO2 concentration is unprecedentedly high, our current geologic period, the Quaternary, has seen the lowest average levels of carbon dioxide since the end of the Pre-Cambrian Period more than 600 million years ago. The average CO2 concentration throughout Earth’s history was more than 2,600 ppm, nearly seven times current levels.

Beneficial CO2 Increases

CO2 increased from 280 ppm in 1750 to 420 ppm today, most of it after World War II as industrial activity accelerated. The higher concentration has been beneficial because of the gas’s role as a plant food in increasing photosynthesis.

Its benefits include:

— Faster plant growth with less water and larger crop yields.

— Expansion of forests and grasslands.

— Less erosion of topsoil because of more plant growth.

— Increases in plants’ natural insect repellents.

A summary of 270 laboratory studies covering 83 food crops showed that increasing CO2 concentrations by 300 ppm boosts plant growth by an average of 46 percent. Conversely, many studies show adverse effects of low-CO2 environments.

For instance, one indicated that plant growth before the Industrial Revolution, when the concentration was only 280 ppm CO2, was eight percent less than it is today.

Therefore, attempts to reduce CO2 concentrations are bad for plants, animals and humankind.

Data reported in a recent paper by Dr. Indur Goklany, and published by the CO2 Coalition, indicates that up to 50 percent of Earth’s vegetated areas became greener between 1982-2011.

Researchers attribute 70 percent of the greening to CO2 fertilization from fossil fuel emissions. (Another nine percent is attributed to fertilizers derived from fossil fuels.)

Dr. Goklany also reported that the beneficial fertilization effect of CO2 – along with the use of hydrocarbon-dependent machinery, pesticides and fertilizers – has saved at least 20 percent of Earth’s land area from being converted to agricultural use – an area 25 percent larger than North America.

The amazing increase in agricultural productivity, partly the result of more CO2, has allowed the planet to feed eight billion people, compared to the fewer than 800 million inhabitants a short 300 years ago.

More CO2 in the air means more moisture in the soil. The major cause of water loss in plants is transpiration, in which the stomata, or pores, on the undersides of the leaves open to absorb CO2 and expel oxygen and water vapor.

With more CO2, the stomata are open for shorter periods, the leaves lose less water, and more moisture remains in the soil. The associated increase in soil moisture has been linked to global decreases in wildfires, droughts and heat waves.

Exaggeration of CO2’s Warming Effect

Alarm over global warming stems from exaggerations of CO2’s potential to retain heat that otherwise would radiate to outer space. Like water vapor, methane and nitrous oxide, CO2 retains heat in the atmosphere by absorbing in the infrared portions of the electromagnetic spectrum.

However, the gas has saturated to a large extent within the infrared range, leaving relatively little potential for increased warming.

Both sides of the climate debate agree that the warming effect of each molecule of CO2 decreases significantly (logarithmically) as the concentration increases.

This is one reason why there was no runaway greenhouse warming when CO2 concentrations approached 20 times that of today. This inconvenient fact, despite its importance, is rarely mentioned because it undermines the theory of a future climate catastrophe.
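A common simplified expression for this logarithmic dependence – the Myhre et al. (1998) approximation, which is my own addition here, not something from the excerpt – is ΔF = 5.35 × ln(C/C₀) W/m². A minimal sketch:

```python
import math

def co2_forcing(c_new, c_ref, alpha=5.35):
    """Simplified CO2 radiative forcing in W/m^2:
    dF = alpha * ln(C/C0), the common logarithmic approximation."""
    return alpha * math.log(c_new / c_ref)

# Each successive doubling adds the same increment (~3.71 W/m^2),
# so the warming effect per added molecule keeps shrinking.
print(round(co2_forcing(560, 280), 2))    # 280 -> 560 ppm: 3.71
print(round(co2_forcing(1120, 560), 2))   # 560 -> 1120 ppm: 3.71
```

Whatever climate sensitivity one attaches to that forcing, the diminishing increment per doubling is the point on which, as the author notes, both sides agree.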

A doubling of CO2 from today’s level of 420 ppm – an increase estimated to take 200 years to attain – would have an inconsequential effect on global temperature.

Pennsylvania’s solar-powered fossil fuels

CO2 being liberated today from Pennsylvania coal was removed from the atmosphere by the photosynthesis of trees that fed on sunlight and carbon dioxide and then died, their remains accumulating in the vast coal swamps of the Carboniferous Period.

Pennsylvania Marcellus and Utica shale hydrocarbons being exploited today were also the likely hydrocarbon source of shallower reservoirs producing since the late 1800s.

The source of those hydrocarbons was algae remains that gathered on the bottom of the Ordovician and Devonian seas.

Like the coal deposits, the algae used solar-powered photosynthesis and CO2 (the algal blooms were likely fueled by regular dust storms) to remove vast amounts of CO2 from the air and lock it up as carbon-rich organic matter.

The provenance of these hydrocarbons spawns two novel ideas. First, there is a strong case that these are solar-powered fuels.

Second, the sequestering of carbon during the creation of the hydrocarbons lowered atmospheric concentrations of CO2 to sub-optimum levels for plants. Therefore, the combustion of today’s coal and gas is liberating valuable CO2 molecules that are turbocharging plant growth.

The plain fact of the matter is that the modest warming of less than one degree Celsius since 1900, combined with increasing CO2, is allowing ecosystems to thrive and humanity to prosper.

Additional information on CO2’s benefits and related topics are available at CO2Coalition.org, which includes a number of publications and resources of interest.

 

via Science Matters

https://ift.tt/0eJuvGQ

May 16, 2023 at 06:41PM

The World’s Smartest Person

Michael Mann must be the world’s smartest person, because he observed a history of earth’s climate which no one else was able to see. In his world ice melts when it is cold, and freezes when temperatures are warm.

via Real Climate Science

https://ift.tt/WG92UeV

May 16, 2023 at 05:46PM

Averaging Last Seconds Versus Bureau Peer-Review

From Jennifer Marohasy’s Blog

Jennifer Marohasy

Averaging by its very nature smooths, removing peaks and troughs. Temperature data tends to be cyclical, whether on a one-minute or a thousand-year scale. The Australian Bureau of Meteorology has made a habit of smoothing when it is convenient and using extreme values otherwise. Take their one-minute temperature data from Canberra Airport: super-sensitive electronic equipment now records the highest, lowest, and last second of each minute, and reports the highest second as the daily maximum temperature. Back in 2019 I purchased some of this data to test the Bureau’s claim that averaging the data would make no difference. I found that averaging the last one-second of each minute always gave me a lower maximum temperature. This is because the difference between the highest and the last second could typically be 0.7 degrees Celsius, as shown in Figure 17, which is from a comprehensive report I co-authored in 2020. I have so far been unable to get this report published in a suitable peer-reviewed journal, perhaps because it contradicts the Bureau’s much-lauded Ayers and Warne (2020) analysis, which comes to the opposite conclusion.

From an unpublished report that I co-authored back in 2020, entitled ‘One Minute Surface Air Temperature Observations – Canberra, Melbourne and Adelaide’

The Bureau claims there is no need to average all 60 seconds in each minute, as recommended by the World Meteorological Organisation, when using resistance probes hooked up to data loggers. Ideally the Bureau would at least collect each of these seconds and test this claim, but they never do. It was after meeting Carl Otto Weiss for a drink at the Sunshine Beach Surf Club back in 2017 that I decided to at least test the concept by averaging the last second of each minute. This data can be purchased from the Bureau, at some cost and with some delay.

In September 2017, I did meet with Carl Otto Weiss. He is an Advisor to the European Institute for Climate and Energy and a former President of the German Meteorological Institute, Braunschweig. He was not particularly interested in my work on how the Australian Bureau of Meteorology measures temperatures; he had come to Noosa to meet with me and John Abbot to discuss our research, newly published in the journal GeoResJ, on the application of artificial intelligence to evaluating anthropogenic versus natural climate change (GeoResJ, Vol. 14, Pgs 36-46, published in July 2017).

Our GeoResJ paper had been pilloried on Twitter, and we had been defamed by Graham Readfearn in The Guardian. So it was a relief that Otto Weiss praised it, when everyone else in mainstream climate science at the time wanted our GeoResJ paper retracted/destroyed/burnt.

He thought it a most wonderful contribution to science, showing not only what many suspect – that natural climate cycles drive the more significant changes in temperature over hundreds and thousands of years – but, most importantly, how the latest advances in artificial intelligence could be used to quantify these effects.

I knew that Otto Weiss had a particular interest in measurement, after all, he had just attended the Australasian Measurement Conference (MSA2017) in Brisbane with Jane Warne from the Australian Bureau of Meteorology.

I wanted to know what he thought about the Bureau recording Australian temperatures as the highest, lowest and last second in every minute rather than taking the average of all the seconds over each minute.

The World Meteorological Organisation recommends that, with the transition to more sensitive resistance probes hooked up to data loggers, sampling be averaged over at least one minute, to maintain some consistency with temperatures historically measured by mercury thermometers, which have more inertia.

At that time the Bureau had just finished and published its ‘Review of the Bureau of Meteorology’s Automatic Weather Stations’, in direct response to a front-page article by Graham Lloyd in The Australian newspaper on 1st August 2017. That article, with a photograph of Lance Pidgeon and me at the Goulburn airport, explained that the Bureau had been forced to admit it had been caught out setting a limit of minus 10 degrees Celsius on how cold temperatures could be recorded. The limit had been in place for some 15 years, ever since the transition to data loggers and the many ways their algorithms can be pre-programmed.

Side-stepping the issue of the cold limits, Otto Weiss queried whether it really was the case that the Bureau took spot-readings, rather than numerically averaging. I showed him the Bureau’s newly published AWS review and quoted from pages 22 where it explains:

One-minute maximum air temperature is the maximum valid one-second temperature value in the minute interval.

I also explained that the Bureau takes the lowest one-second spot reading as the minimum, but that until recently the Bureau had set a limit of minus 10 degrees Celsius on how cold a temperature could actually be recorded.

I explained that the Bureau also records the last one-second temperature value in each minute interval. Otto Weiss suggested this was perhaps the most useful of the three values. If the Bureau’s new resistance probes with data loggers had time constants that accurately mimicked mercury thermometers, as the Bureau claimed, he reasoned, then this could be tested by averaging the last second recorded in each minute.

Perhaps the highest of these would then be recorded as the maximum temperature for each day? This is the method since used to calculate the daily maximum temperatures in the much-quoted paper by Jane Warne and Greg Ayers, published in the Journal of Southern Hemisphere Earth Systems Science (Vol 70, Pgs 160-165) in 2020.

Except there are three key problems with their method, never mind the dearth of data they actually compare:

1. Ayers and Warne claim to compare this last second with the average of all 60 seconds in each minute, except they actually compared the last second with just five one-second values from each one-minute interval, incorporating the highest and lowest.

2. They used data from Darwin Airport (Site No. 14015), which is one of the 38 sites that still have mercury thermometers recording temperatures. So why not compare the last second from the probes with the value recorded from the mercury thermometer? If the objective of the Ayers and Warne study was to determine whether the time constant of the resistance probes is equivalent to that of a mercury thermometer, why not make a direct comparison?

3. While Ayers and Warne conclude that it is appropriate for the Bureau to record the value at the last second of each minute as satisfying WMO requirements, the Bureau doesn’t ever actually use this value. To reiterate, the Bureau uses the highest one-second and the lowest one-second. It is nonsense and dishonest for Ayers and Warne to suggest otherwise.

According to page 17 of the AWS review:

The Almos DAS can provide one-second, one-minute, and 10-minute messages, as well as various other standard format meteorological messages.

So, the probe at Darwin Airport could have been reprogrammed to record a true one-minute average of all 60 one-second measurements. Then the comparison would at least have been consistent with WMO guidelines. This average could then have been compared with the manual readings from the mercury thermometer at Darwin Airport, at least as a check of the WMO guidelines. Alas, and to reiterate, to justify the method currently used by the Bureau, Greg Ayers and Jane Warne would also have needed to make the comparison with the highest and lowest one-second values in each 24-hour period. Ayers and Warne never did this.

Yet, the Ayers and Warne paper has been held up as proof that temperature measurements from the Bureau’s probes in automatic weather stations are equivalent to readings from traditional mercury thermometers. Further, for me to suggest otherwise has been labelled a conspiracy theory.

Meanwhile, I can only characterise the Ayers and Warne paper as a ‘fake’, because it uses a different method of recording temperatures (taking the highest of all the last-seconds each day as the maximum) while claiming to be using the Bureau’s method, which records the highest second within any minute of each day as the maximum temperature. Detail can be tedious, but in this case it is important. So I reiterate.

Nevertheless, Ayers and Warne are cited by The Guardian, the Australian Broadcasting Corporation and Agence France-Presse as reason to disregard my concerns about the Bureau hyping maximum temperatures.

It was two years after Otto Weiss’s visit, in 2019, after purchasing batches of daily one-second data for Canberra, Adelaide and Melbourne from the Bureau, that I tested Otto Weiss’s hypothesis – essentially the Ayers and Warne methodology of using the last second in each minute.

I co-authored a 27-page report that sets out our method, results and conclusions. We tested many more data points than Ayers and Warne, and in different ways. We could not calculate a proper one-minute average, because the Bureau never collects every second of each minute. And we were unable to compare against a mercury thermometer, because the Bureau will not provide us with the parallel data for Canberra Airport, or for any of the other locations.

Like the Ayers and Warne paper, our analysis was ready for publication in 2020. Entitled ‘One Minute Surface Air Temperature Observation – Adelaide, Canberra, Melbourne’, it, however, remains unpublished. Unlike Ayers and Warne, I no longer have any colleagues willing to risk publishing me in a mainstream climate science journal. The last editor who published me had his journal shut down: GeoResJ was discontinued in 2018.

My co-author of this report, testing the last one-second hypothesis discussed with Otto Weiss all those years ago, cannot be named. My co-author lives in Australia, which is purportedly a secular democracy, but he risks losing his day job for assisting me with the analysis and report, given there is no tolerance of dissent in Australia when it comes to issues of science and climate change.

Our unpublished manuscript begins:

Resistance temperature detectors (RTDs) in the Australian Bureau of Meteorology (BoM) automatic weather station (AWS) network provide temperature data at a rate of 1 Hz (sample per second). For every clock minute, three surface air temperature (SAT) observations are recorded:
• T, the last one-second reading (taken at 00 seconds of each minute)
• Tmax, the highest one-second reading over the last 60 seconds
• Tmin, the lowest one-second reading over the last 60 seconds

The BoM, however, only publishes the daily extreme values and associated statistics, e.g. the monthly and annual means. The one-minute data can be requested from the BoM for a given station, typically at a cost and processing delay.

The BoM has published statements indicating that their RTD and historical liquid-in-glass (LiG) measurements are equivalent, and specifically that the response times are similar. Every one-second reading is viewed as a time-averaged value (integrating over the past 40 to 80 seconds), effectively describing the moving-average temperature leading up to the given second, due to the design of the RTD. High-frequency temperature fluctuations should therefore not be seen from second to second in the data, and also not from minute to minute (although more fluctuation could be expected at longer time scales).

Evidence that high-frequency fluctuations are indeed present in the measurements is given in this report, questioning the equivalence between RTD and LiG data. This can be seen by evaluating the time series consisting of all the last-second observations (a temperature series with a constant sample spacing of 60 seconds), and also the difference between the last-second and extreme measurements (Tmin and Tmax) for every minute, which indicates the extent of fluctuation possible within one minute, as measured with an RTD. ENDS.

This is technical speak for let’s compare the last second reading from the resistance probe (RTD) with the highest and lowest reading each minute and the average.
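As a rough sketch of that comparison, using entirely synthetic data (the noise model and numbers below are mine, not the Bureau’s): with a spiky signal, the maximum of the per-minute highest seconds sits above the maximum of the last seconds, which in turn sits above the maximum of true one-minute averages.

```python
import random

random.seed(1)

# Entirely synthetic 1 Hz "temperature" trace for one hour,
# standing in for a spiky RTD signal: steady 20 C plus noise.
seconds = [20.0 + random.gauss(0, 0.3) for _ in range(3600)]
minutes = [seconds[i:i + 60] for i in range(0, 3600, 60)]

# The three per-minute values described in the excerpt.
highest = [max(m) for m in minutes]          # archived as the minute's Tmax
last = [m[-1] for m in minutes]              # last one-second reading
minute_avg = [sum(m) / 60 for m in minutes]  # WMO-style one-minute mean

# The "daily" maximum depends on which series you take it from.
print(f"max of highest seconds: {max(highest):.2f}")
print(f"max of last seconds:    {max(last):.2f}")
print(f"max of minute averages: {max(minute_avg):.2f}")
```

Because each minute’s last second can never exceed that minute’s highest second, and a minute’s mean can never exceed its highest second, the ordering of the first and third lines is guaranteed; the gap between them is the kind of difference the report quantifies.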

When we did the analysis for Canberra airport – the example I am using in this note – we found that within the one-minute interval the difference between the last second reading and the highest second reading (maximum temperature archived by the Bureau) in any one minute interval was often more than 0.5C, and sometimes as high as 2.1C, as shown in Figure 2.3 chart B and table bottom left.

Figure 2.3 is from page 6 of my unpublished report. I am keen to get this published, should a reputable journal editor be prepared to take it on.

We concluded our analysis of the Canberra, Melbourne and Adelaide one minute data with comment:

The approach of the BoM to measure SAT [surface air temperature] is to record the highest, lowest and last second of every minute, as discussed before. The last-second data with the daily extremes are published and updated every 10 minutes on the “Latest Weather Observations” page for a given AWS [automatic weather station]. The data from the last 72 hours are updated every 30 minutes. The one-minute Tmin and Tmax data are also used to determine the daily ADAM Tmin and Tmax.

The WMO recommends averaging RTD [resistance probe] data over one minute. However, the BoM does not average at all, which is the reason for the spikiness of the data analysed in this report. Another example is shown in Fig. 16, displaying the last-second data observed at Canberra Airport (70351) on 17 Jan 2019.

If the WMO recommendation were followed, the BoM would provide the mean of 60 values — instead of only the single last value — for each minute. This would smooth the time series, similarly to what the averaging process depicted in Fig. 16 would do.

For illustration purposes, the moving average (MA) series over the last 5 samples (or 5 minutes, with only 1 sample per minute) is shown over the spiky last-second data. Although this illustration is not perfect (more data is needed to smooth over every minute), it does show that the daily Tmax would likely be lower, as it would be based on an average and not an instantaneous observation. ENDS.
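The smoothing effect the excerpt describes can be sketched with a trailing 5-sample moving average (the numbers below are illustrative only, not Bureau data):

```python
def moving_average(series, window=5):
    """Trailing moving average over up to `window` previous samples."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A spiky last-second-style series: steady 25 C with occasional jumps.
spiky = [25.0, 25.1, 26.2, 25.0, 24.9, 25.1, 27.0, 25.0, 25.1, 25.0]
smooth = moving_average(spiky)

# The smoothed maximum is lower than the spiky maximum,
# which is the point the Figure 16 illustration makes.
print(max(spiky))             # 27.0
print(round(max(smooth), 2))  # 25.64
```

Any averaging window will pull the recorded maximum down relative to a single instantaneous spike; the longer the window, the larger the reduction.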

Numerical averaging will drop the daily maximum temperature by almost a full one degree Celsius relative to taking the last second in each minute and by more than one degree when recording the highest one-second in each minute.

It is Bureau policy to record the highest one-second in each minute and the highest of these becomes the maximum temperature for that day for that location.

ADAM is the value archived by the Bureau as the maximum for that day. Data Tmax is the last one-second reading for those minutes, and the red line (MA) is the moving average over the last five of those last-second readings. This is from page 22 of my report, which I would like to see published in the peer-reviewed climate science literature.

This last chart (Figure 16) from my unpublished report shows that, contrary to the hypothesis of Carl Otto Weiss – which is also a central thesis of the fake paper by Greg Ayers and Jane Warne – recording just the last second of the minute is not equivalent to the numerical average of even just the five last one-second readings. At least this was the situation at Canberra on 17th January 2019.

Lance Pidgeon, who often signs comments at blog threads as Siliggy, with me at the Goulburn Airport weather station late July 2017.

This is part 6 of ‘Jokers, Off-Topic Reviews and Drinking from the Alcohol Thermometer’. In part 7 I will explain why it is imperative that Greg Ayers and Jane Warne provide the A8 reports for Darwin Airport for April 2018 – that is, the parallel data on which their analysis is based. The highest, lowest and last-second records for each minute for the months of March, April and May of 2018 also need to be made public. You can read some of my criticism of Ayers and Warne at the popular climate blog WattsUpWithThat.com. I am grateful to Anthony Watts and Charles Rotter for republishing this series.

via Watts Up With That?

https://ift.tt/soC3XYA

May 16, 2023 at 05:02PM