Month: February 2020

Big Trouble with Spiders

Guest Essay by Kip Hansen — 6 February 2020

 

How deeply have you considered the social life of spiders?  Are they social animals or solitary animals?  Do they work together?  Do they form social networks?  Does their behavior change, as in the “adaptive evolution of individual differences in behavior”?

In yet another blow to the sanctity of peer-reviewed science and a simultaneous win for personal integrity and self-correcting nature of science, there is an ongoing tsunami of retractions in a field of study of which most of us have never even heard.

Science magazine online covers part of the story in “Spider biologist denies suspicions of widespread data fraud in his animal personality research”:

“It’s been a bad couple of weeks for behavioral ecologist Jonathan Pruitt—the holder of one of the prestigious Canada 150 Research Chairs—and it may get a lot worse. What began with questions about data in one of Pruitt’s papers has flared into a social media–fueled scandal in the small field of animal personality research, with dozens of papers on spiders and other invertebrates being scrutinized by scores of students, postdocs, and other co-authors for problematic data.

Already, two papers co-authored by Pruitt, now at McMaster University, have been retracted for data anomalies; Biology Letters is expected to expunge a third within days. And the more Pruitt’s co-authors look, the more potential data problems they find. All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.”

The story is both a cautionary tale and an inspiring lesson of courage in the face of professional setbacks — one of each for the different players in this drama.

I’ll start with Jonathan Pruitt, who is described as “a highly productive researcher who specialized in social spiders”. Pruitt was a rising star in his field, and his success led to his being offered “one of the prestigious Canada 150 Research Chairs”. He has established himself at McMaster University in Hamilton, Ontario, Canada, where he is listed in the psychology department as the Principal Investigator at “The Pruitt Lab”.  The Pruitt Lab’s home page tells us:

“The Pruitt Lab is interested in the interactions between individual traits and the collective attributes of animal societies and biological communities. We explore how the behaviors of individual group members contribute to collective phenotypes, and how these collective phenotypes in turn influence the persistence and stability of collective units (social groups, communities, etc.). Our most recent research explores the factors that lead to the collapse of biological systems, and which factors may promote systems ability to bounce back from deleterious alternative persistent states.”

This field of study is often referred to as behavioral ecology. In terms of research methodology, it is a difficult field — one cannot, after all, simply administer a series of personality tests to various groups of spiders or fish or birds or amphibians.  Experimental design is difficult and not standardized within the field; observations are in many cases, by necessity, quite subjective.

We have seen a recent example in the Ocean Acidification (OA) papers concerning fish behavior, in which a three-year effort failed to replicate the alarming findings about effects of ocean acidification on fish behavior.  The team attempting the replication took care to record and preserve all the data. As Science reports: “‘It’s an exceptionally thorough replication effort,’ says Tim Parker, a biologist and an advocate for replication studies at Whitman College in Walla Walla, Washington.  Unlike the original authors, the team released video of each experiment, for example, as well as the bootstrap analysis code. ‘That level of transparency certainly increases my confidence in this replication,’ Parker says.”

The fish behavior study is of the same nature as the Pruitt studies involving social spiders.  Someone has to watch the spiders under the varied conditions, make decisions about perceived differences in behavior, record differences in behavior, in some cases time behavioral responses to stimuli.  The results of these types of studies are in some cases entirely subjective — thus, in the OA replication, we see the care and effort to video the behaviors so that others would be able to make their own subjective evaluations.

The trouble for Pruitt came about when one of his co-authors was alerted to possible problems with data in a paper she wrote with Pruitt in 2013 (published in the Proceedings of the Royal Society B in January 2014) titled “Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in asocial spider“.

That co-author is Dr. Kate Laskowski, who now runs her own lab at the University of California, Davis.   She was, at the time the paper was written, a PhD candidate.  I’ll let you read her story — it is inspiring to me — as she tells it in a blog post titled “What to do when you don’t trust your data anymore”.  Read the whole thing; it might restore your faith in science and scientists.

Here’s her introduction:

“Science is built on trust. Trust that your experiments will work. Trust in your collaborators to pull their weight. But most importantly, trust that the data we so painstakingly collect are accurate and as representative of the real world as they can be.”

“And so when I realized that I could no longer trust the data that I had reported in some of my papers, I did what I think is the only correct course of action. I retracted them.”

“Retractions are seen as a comparatively rare event in science, and this is no different for my particular field (evolutionary and behavioral ecology), so I know that there is probably some interest in understanding the story behind it. This is my attempt to explain how and why I came to the conclusion that these papers needed to be removed from the scientific record.”

How did this happen?  The short story: after meeting and talking with Jonathan Pruitt at a conference in Europe, Laskowski received from Pruitt “a datafile containing the behavioral data he collected on the colonies of spiders testing the social niche hypothesis.”  Laskowski relates how the data looked good, offering what appeared to be “strong support for the social niche hypothesis”.  With such clear data, she easily wrote a paper.

“The paper was published in Proceedings of the Royal Society B (Laskowski & Pruitt 2014). This then led to a follow-up study published in The American Naturalist showing how these social niches actually conferred benefits on the colonies that had them (Laskowski, Montiglio & Pruitt 2016). As a now newly minted PhD, I felt like I had successfully established a productive collaboration completely of my own volition. I was very proud.”

The situation was a dream come true for a young researcher — and her subsequent excellent work brought her to UC Davis, where she established her own lab.  Then….

“Flash forward now to late 2019. I received an email from a colleague who had some questions about the publicly available data in the 2016 paper published in Am Nat. In this paper we had measured boldness 5 times prior to putting the spiders in their familiarity treatment and then 5 times after the treatment.

The colleague noticed that there were duplicate values in these boldness measures. I already knew that the observations were stopped at ten minutes, so lots of 600 values were expected (the max latency). However, the colleague was pointing out a different pattern – these latencies were measured to the hundredth of a second (e.g. 100.11) and many exact duplicate values down to two decimal places existed. How exactly could multiple spiders do the exact same thing at the exact same time?”
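The check the colleague ran is simple to sketch. Here is a hypothetical version in Python (the function, toy data, and handling of the 600-second ceiling are my illustration only; the real analysis is in Laskowski’s blog post):

```python
from collections import Counter

def suspicious_duplicates(latencies, ceiling=600.0):
    """Count exact repeats among latencies recorded to 0.01 s,
    ignoring the expected ceiling value (trials stopped at 10 min)."""
    counts = Counter(round(v, 2) for v in latencies if v != ceiling)
    return {v: n for v, n in counts.items() if n > 1}

# Toy data: the 600.0s are legitimate repeats; 100.11 recurring is the red flag.
data = [100.11, 600.0, 237.48, 100.11, 600.0, 100.11, 83.02]
print(suspicious_duplicates(data))  # → {100.11: 3}
```

The point of the check is exactly the colleague’s question: values measured to the hundredth of a second should almost never repeat exactly, so any duplicates below the ceiling deserve scrutiny.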

Laskowski performed a forensic deep-dive into the data and discovered problems such as these (highlights indicate unlikely duplications of exact values; see Laskowski’s blog post for larger images and more information):

[Image: spreadsheet excerpts with suspect duplicate values highlighted]

Remember, Laskowski’s paper was not based on data that she had collected herself, but on data provided to her by a respected senior scientist in the field, Jonathan Pruitt.  It was data collected by Pruitt personally, not as part of a research team, but by himself.  And that point turns out to be pivotal in this story.

Let me be clear, I am not accusing Jonathan Pruitt of falsifying or manufacturing the data contained in the data file sent to Laskowski — I have not investigated the data closely myself.  Pruitt is reported to be doing field work in Northern Australia and Micronesia currently and communications with him have been sketchy — inhibiting full investigations by the journals involved.   Despite his absence, there are serious efforts to look into all the papers that involve data from Pruitt. Science magazine reports “All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.” [ source ]

A blog that covers this field of science, Eco-Evo Evo-Eco, has posted a two-part series related to data integrity:  Part 1 and Part 2.  In addition, there are two specific posts on the “Pruitt retraction storm” [ here and here ], both written by Dan Bolnick, editor-in-chief of The American Naturalist.   That journal has already retracted one paper based on data supplied by Pruitt, at Laskowski’s request.

In one of the discussions this situation has spawned, Steven J. Cooke, Institute of Environmental and Interdisciplinary Science, Carleton University, Ottawa, Canada opined:

“As I reflect on recent events, I am left wondering how this could happen.  A common thread is that data were collected alone.  This concept is somewhat alien to me and has been throughout my training and career.  I can’t think of a SINGLE empirically-based paper among those that I have authored or that has been done by my team members for which the data were collected by a single individual without help from others.  To some this may seem odd, but I consider my type of research to be a team sport.  As a fish ecologist (who incorporates behavioural and physiological concepts and tools), I need to catch fish, move them about, handle them, care for them, maintain environmental conditions, process samples, record data, etc – nothing that can be handled by one person without fish welfare or data quality being compromised.” 

It wasn’t long ago that we saw this same element in another retraction story — that of Oona Lönnstedt, who was found to have “fabricated data for the paper, purportedly collected at the Ar Research Station on Gotland, an island in the Baltic Sea.”  Science Magazine quotes  Peter Eklöv, Lönnstedt’s supervisor and co-author in this Q & A:

Q: The most important finding in the new report is that Lönnstedt didn’t carry out the experiments as described in the paper; the data were fabricated. How could that have happened?

A: It is very strange. The history is that I trusted Oona very much. When she came here she had a really good CV, and I got a very good recommendation letter—the best I had ever seen.

In the case of Jonathan Pruitt, the evidence is not yet all in.  Pruitt has not had a chance to fully give his side of the story, or to explain how the data he collected alone could reasonably contain so many implausible duplications of overly exact measurements.  I have no wish to convict Jonathan Pruitt in this brief overview essay.

But the issue raised is important and generalizes widely.  It alerts us to a great danger to the reliability of scientific findings and the integrity of science in general.

When a single researcher works alone, without the interaction and support of a research team, there is a danger that shortcuts will be taken, with justifying excuses made to oneself, leading to data that is inaccurate or even simply filled in with expected results for convenience.  Dick Feynman’s “fooling themselves” with a twist.

Detailed research is not easy — and errors can be and are made.  Data files can become corrupted and confused.  The accidental slip of a finger on a keyboard can delete an hour’s careful spreadsheet reformatting or cast one’s carefully formatted data into oblivion.  And scientists can become lazy and fill in data where none was actually generated by experiment.  A harried researcher might find himself “forced” to “fix up” data that isn’t returning the results required by his research hypothesis, which he “knows” perfectly well is correct.  In other cases, we find researchers actively hiding data and methods from review and attempted validation by others, out of fear of criticism or failure to replicate.

There are major efforts afoot to reform the practice of scientific research in general.  Suggestions include requiring pre-registration of studies, covering their designs, methodologies, statistical methods, end points, and hypotheses to be tested, with all of these posted to online repositories that can be reviewed by peers even before any data is collected.  Searching the internet for “saving science”, “research reform” and the “reproducibility crisis” will get you started.  Judith Curry, at Climate Etc., has covered the issue over the years.

Bottom Line:

Scientists are not special and they are not gods — they are human just like the rest of us.  Some are good and honorable, some are mediocre, some are prone to ethical lapses.  Some are very careful with details, some are sloppy, all are capable of making mistakes.  This truth is contrary to what I was led to believe as a child in the 1950s, when scientists were portrayed as a breed apart — always honest and only interested in discovering the truth.  I have given up that fairy-tale version of reality.

The fact that some scientists make mistakes and that some scientists are unethical should not be used to discount or dismiss the value of Science as a human endeavor.  Despite these flaws, Science has made possible the advantages of modern society.

Those brave men and women of science that risk their careers and their reputations to call out and retract bad science, like Dr. Laskowski,  have my unbounded admiration and appreciation.

# # # # #

Author’s  Comment:

I hope readers can avoid leaving an endless stream of comments about how this-that-and-the-other climate scientist has faked or fudged his data.  I don’t personally believe that we have had many proven cases of such behavior in the field.   Climate Science has its own problems: data hiding and unexplained or unjustified data adjustments among them.

The desire to “improve the data” must be tremendously tempting for researchers who have spent their grant money on a lengthy project only to find the data barely adequate or inadequate to support their hypothesis.  I sympathize but do not condone acting on that temptation.

I would appreciate it if researchers and other professionals would share their stories and personal experiences that bear on the issue raised.

Begin your comments with an indication of whom you are addressing.  Begin with “Kip…” if speaking to me.  Thanks.

# # # # #

via Watts Up With That?

https://ift.tt/385Urb1

February 6, 2020 at 08:17AM

Are Earth’s obliquity and axial precession in a long-term 5:8 ratio?

Earth’s tilt moves back and forth between about 22 and 24.5 degrees

If there is a mean ratio of 5:8 it would be linked to the known variation of Earth’s tilt, which in turn causes variation in the precession and obliquity periods.

Encyclopedia Britannica’s definition says:
Precession of the equinoxes, motion of the equinoxes along the ecliptic (the plane of Earth’s orbit) caused by the cyclic precession of Earth’s axis of rotation…The projection onto the sky of Earth’s axis of rotation results in two notable points at opposite directions: the north and south celestial poles. Because of precession, these points trace out circles on the sky.

(Axial precession is another term for ‘precession of the equinoxes’).

Our 2016 unified precession post started with this quote from Wikipedia (bolds added):
Because of apsidal precession the Earth’s argument of periapsis slowly increases; it takes about 112000 years for the ellipse to revolve once relative to the fixed stars. The Earth’s polar axis, and hence the solstices and equinoxes, precess with a period of about 26000 years in relation to the fixed stars. These two forms of ‘precession’ combine so that it takes about 21000 years for the ellipse to revolve once relative to the vernal equinox, that is, for the perihelion to return to the same date (given a calendar that tracks the seasons perfectly).

Three linked precessions

In units of 1,000 years:
21 * (16/3) = 112
112 * (3/13) = 25.846~ (near 26)
25.846~ * (13/16) = 21
That was the number theory of the ‘unified precession’ post, i.e. a 3:13:8*2 ratio.
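The products above are easy to verify. A few lines of Python (my illustration, not the original post’s) confirm the chain, within floating-point rounding:

```python
# Verify the 'unified precession' chain, in units of 1,000 years.
apsidal = 21 * (16 / 3)        # apsidal precession vs the fixed stars
axial = apsidal * (3 / 13)     # axial precession ('of the equinoxes')
climatic = axial * (13 / 16)   # combined (climatic) precession

assert abs(apsidal - 112) < 1e-9       # the ~112,000-year figure
assert abs(axial - 25.846) < 1e-3      # the ~26,000-year figure
assert abs(climatic - 21) < 1e-9       # back to the ~21,000-year figure
print(round(axial, 3))  # → 25.846
```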

Where might the obliquity period, known to be somewhere near 41,000 years, fit into that?

Referring to the chart (above, right) and converting decimals to whole numbers:
AY – SY = 328 = 109*3, +1
SY – TY = 1417 = 109*13
AY – TY = 1745 (328 + 1417) = 109*16, +1
[327:1417:1744 = 3:13:16]

So that supports the number theory.

Starting out, I just updated the chart to include an entirely theoretical obliquity period of 8/5 times axial precession, linking it to the other known cycles as suggested by my 2016 comment to the unified precession post, here.

That post was a follow-up to: Why Phi? – some Moon-Earth interactions, which showed how:
The period of 6441 tropical years (6440.75 sidereal years) is one quarter of the Earth’s ‘precession of the equinox’.
Multiplying by 4: 25764 tropical years = 25763 sidereal years.
The difference of 1 is due to precession.

[NB Wikipedia quotes 25772 years (‘disputed – discuss’) for this precession cycle, but as it’s not a fixed number the question is: what is the mean period? Earth is currently around the mid-point of the tilt variation, moving towards minimum tilt, i.e. a shorter precession period.]

But then I came across two things: a paper by EPJ van den Heuvel, cited in Wikipedia, and another entry in Wikipedia (see below), that together suggested viable alternative numbers but with the same 5:8 ratio.

On the Precession as a Cause of Pleistocene Variations of the Atlantic Ocean Water Temperatures
— E. P. J. van den Heuvel (1965)

From the summary:
‘The Fourier spectrum (Fig. 8) shows two significant main periods, P1 = 40000 years and P2 = 12825 years*. The first period agrees well with the period of the oscillations of the obliquity of the ecliptic. The second period corresponds very well with the half precession period.’
[*But the specific periods found were: 42857, 39474 and 12825 years]

From Wikipedia – Axial tilt – long term (Wikipedia):
‘For the past 5 million years, Earth’s obliquity has varied between 22° 2′ 33″ and 24° 30′ 16″, with a mean period of 41,040 years. This cycle is a combination of precession and the largest term in the motion of the ecliptic.’

41040:12825 = 16:5 exactly. Since 12825 is the half precession period, the full period ratio is 8:5 as in the chart, but with slightly different numbers.

If this is correct, the 25764y period in the chart would need adjusting by a factor of 225/226:
25764 * (225/226) = 25650 = 2 * 12825

The Wikipedia obliquity period of 41040 years is divisible by 19, so is an exact number of Metonic cycles (2160), as is the revised axial precession of 25650 years (1350). So the alternative period equals a reduction of 6 Metonic cycles of axial precession. The idea of a role for the Moon in Earth’s obliquity has been put forward before.

Of course 225/226 represents less than half a percent of correction, so could be argued to be negligible.
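The divisibility claims above can be checked directly; a short Python sketch (mine, added for illustration):

```python
# Check the Metonic arithmetic behind the proposed 5:8 ratio.
obliquity = 41040                       # Wikipedia's mean obliquity period, years
half_precession = 12825                 # van den Heuvel's P2
precession_adj = 25764 * 225 // 226     # axial precession after the 225/226 tweak

assert precession_adj == 25650 == 2 * half_precession
assert obliquity * 5 == precession_adj * 8            # the 5:8 full-period ratio
assert obliquity % 19 == 0 and obliquity // 19 == 2160        # Metonic cycles
assert precession_adj % 19 == 0 and precession_adj // 19 == 1350
print((25764 - precession_adj) // 19)   # → 6, the six-Metonic-cycle reduction
```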
– – –
Now something else has turned up, written around the same time as two Talkshop posts already referred to:
The Secret of the Long Count, by John Martineau

In the ‘Long Count’ section of the article the writer also puts forward an argument for a (mean) 5:8 ratio of obliquity and axial (equinoctial) precession, using some historical context (see below).

So at least one other person has been thinking along the same lines. Note that 2,3,5,8 and 13 are Fibonacci numbers.


– – –
The Secret of the Long Count

In the summer of 2012 I visited Carnac, accompanied by Geoff Stray. Howard Crowhurst runs an annual midsummer conference there and we had been invited to speak at the 2012-themed event. Halfway through his presentation, Crowhurst was describing his hunches surrounding megalithic awareness of the 41,000-year cycle, when he casually mentioned a startling fact:

The 41,000-year cycle very precisely consisted of eight Mayan Suns.

I did a double take. Eight suns, but five made precession! Startled, I cornered Geoff Stray. He had already come across the eight Suns figure for the obliquity cycle, but not realised the significance of 5:8, while Howard Crowhurst had been unaware of the fact that five Suns gave a value for Precession. We had cracked it.

One Mayan Sun is 5,125 years.

Five Suns give the Precessional Cycle

5 x 5125 = 25,625 years (current value 25,700 years, 75 years out)

Eight Suns give the Earth’s Obliquity Cycle.

8 x 5125 = 41,000 years (current value 41,040 years, 40 years out)

Five and eight! The two long cycles that most affect the Earth relate as 5:8 and are both encoded by the Long Count. The Maya must have known. No wonder they drew so many pictures of jawbones. Five and eight! The same two numbers displayed by human teeth are the same two numbers as those used by the plants all around us, and these are the same two numbers that connect us with our closest neighbour Venus, and the same two numbers that relate the two long cycles that affect Earth-bound astronomy.

[emphasis by the author]

From: The Secret of the Long Count, by John Martineau
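Martineau’s Long Count arithmetic is at least internally consistent, as a quick check shows (Python, my illustration):

```python
# One Mayan Sun = 5,125 years; five and eight Suns vs the two long cycles.
sun = 5125
five_suns, eight_suns = 5 * sun, 8 * sun

assert five_suns == 25625       # vs the 25,700-year value cited: 75 years out
assert eight_suns == 41000      # vs the 41,040-year value cited: 40 years out
assert eight_suns * 5 == five_suns * 8   # exact 5:8 by construction
print(25700 - five_suns, 41040 - eight_suns)  # → 75 40
```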

via Tallbloke’s Talkshop

https://ift.tt/2SlgrZ3

February 6, 2020 at 06:52AM

Even Privatisation May Come Too Late To Save An Increasingly Irrelevant BBC

The BBC is no longer a national broadcaster: it is at odds (and even at war) with a majority on social issues, is losing younger viewers to rival platforms and its grip on the commanding heights of British culture is steadily loosening.

For decades, foreigners have been baffled by just how emotionally attached we are to our health service and national broadcaster, and their outsized role in our national story. Other countries define themselves by their language, food, culture, history or constitution, but the British have long cited the NHS and the BBC as central to who we are.

This unusual love for two state-owned, technocratic organisations and their egalitarian financing mechanisms goes to the heart of the UK’s complex Left-Right, collectivist-individualistic identity. Even Margaret Thatcher didn’t dare take them on, fearing the electorate’s wrath.

Yet even Britain can change, as the BBC’s panjandrums at Broadcasting House are about to discover. The electorate, regrettably, still considers the NHS, for all its appalling failures, to be one of the gods in Britain’s unofficial polytheistic secular religion, which is why Boris Johnson has promised to spend billions more on it. The BBC, by contrast, has been slowly falling out of favour for the past decade, its popularity eroded by technological, cultural and political forces, just as minor deities were sometimes superseded by others in pre-Christian and pre-Islamic civilisations.

The internet, as ever, is the most important reason: not merely the rise of streaming services such as Netflix but also of social media. The NHS remains unassailable in the public psyche because its monopoly means it treats more people than ever before and its share of GDP keeps on rising; the BBC, by contrast, is losing the relentless battle for time and attention to hyper-dynamic competitors.

Even its huge budget, financed by a poll tax enforced by the threat of prison, is no longer enough to save it. Most people don’t actively dislike the BBC: they just don’t care as much about it and are increasingly unwilling to pay for it. It is clear that the private sector can and will provide every kind of “public service” broadcasting.

Yet the BBC’s problems are greater than apathy. The politically engaged cannot stand it any longer: the hard-Left sees the BBC (absurdly) as a pro-capitalist, pro-Tory plot and blames it (spuriously) for Jeremy Corbyn’s implosion.

But it’s the anger of the Brexiteers and of conservatives which will ultimately lead to its disestablishment. They consider it to be hopelessly biased, not just in its news coverage but also in its entertainment programming, which they find infuriatingly woke, anti-business, preachy and engaged in full-on cultural war against conservative values. Its news division found it impossible to predict Brexit, and then appeared to feel guilty for having “allowed” it to happen. A lack of political, class and geographic diversity means it does not understand the country it is meant to serve.

Someone who only watched Today or Newsnight would have been shocked when Johnson won his 80-seat majority and would be ignorant of the nation’s hierarchy of concerns. The corporation’s defensive attempts at encouraging its senior staff to engage with social media have merely shown up Left-wing bias – and confirmed that the few Tory Brexiteers are in hiding.

The BBC is no longer a national broadcaster: it is at odds (and even at war) with a majority on social issues, is losing younger Left-wing viewers to rival platforms and its grip on the commanding heights of British culture is steadily loosening, with no way back in a fractured, heterogeneous society. It is no wonder that commercial radio, led by LBC-owner Global, is on a roll: its finger is on the national pulse.

Full post (£)

The post Even Privatisation May Come Too Late To Save An Increasingly Irrelevant BBC appeared first on The Global Warming Policy Forum (GWPF).

via The Global Warming Policy Forum (GWPF)

https://ift.tt/3bjfN6P

February 6, 2020 at 05:09AM

Global Fossil Fuel Emissions Up 0.6% In 2019

By Paul Homewood

 

h/t Dennis Ambler

[Note that this article was written in December 2019]


After increasing at the fastest rate for seven years in 2018, global CO2 emissions are set to rise much more slowly this year – but will, nevertheless, reach another record high.

Emissions from fossil fuel and industry (FF&I) are expected to reach 36.81bn tonnes of CO2 (GtCO2) in 2019, up by only 0.24GtCO2 (0.6%) from 2018 levels, according to the latest estimates from the Global Carbon Project (GCP).

The data is being published in Earth System Science Data Discussions, Environmental Research Letters and Nature Climate Change to coincide with the UN’s COP25 climate summit in Madrid, Spain.

The growth of global emissions in 2019 was almost entirely due to China, which increased its CO2 output by 0.26GtCO2. The rest of the world actually reduced its emissions by -0.02GtCO2, thanks to falling coal use in the US and Europe, as well as much more modest increases in India and the rest of the world, compared to previous years.

The GCP researchers say that “a further rise in emissions in 2020 is likely” as global consumption of natural gas is “surging”, oil use continues to increase and, overall, energy demand rises.

Despite the rapid rise and falling costs of renewables in many parts of the world, the majority of increases in energy demand continue to be met by fossil fuels. For example, gas met around two-fifths of the increase in demand in 2018, against just a quarter coming from renewables.

Overall, human-caused CO2 emissions, including those from FF&I and land use, are projected to increase by 1.3% in 2019. This is driven by a 0.29GtCO2 (5%) increase in land-use emissions – including deforestation –  which is the fastest rate in five years. While land use only represents around 14% of total 2019 emissions, it will contribute more than half the increase in emissions in 2019.

While more modest than in recent years, the increase in emissions in 2019 puts the world even further away from meeting its climate change goals under the Paris Agreement.


https://www.carbonbrief.org/analysis-global-fossil-fuel-emissions-up-zero-point-six-per-cent-in-2019-due-to-china 
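As a back-of-envelope check (my own sketch, using only the figures quoted in the excerpt above):

```python
# GCP 2019 figures quoted above, all in GtCO2.
ffi_2019, ffi_rise = 36.81, 0.24   # fossil fuel & industry level and increase
land_rise = 0.29                   # land-use emissions increase
land_share = 0.14                  # land use ~14% of total 2019 emissions

total_2019 = ffi_2019 / (1 - land_share)   # roughly 42.8 GtCO2
total_rise = ffi_rise + land_rise          # 0.53 GtCO2

# FF&I growth rate: ~0.66%, consistent with the quoted "0.6%"
print(round(100 * ffi_rise / (ffi_2019 - ffi_rise), 2))        # → 0.66
# Total growth rate: ~1.3%, matching the quoted figure
print(round(100 * total_rise / (total_2019 - total_rise), 1))  # → 1.3
# Land use supplies more than half of the 2019 increase
print(round(100 * land_rise / total_rise))                     # → 55
```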

 

Zeke Hausfather, who wrote this piece, makes the same mistake as most of us do (me included!), in concentrating on the minutiae of small increases in emissions.

Some years the number is small, and sometimes bigger. But the real significance, for those interested in such matters, is the absolute level of emissions. Quite simply there is no sign at all of total global emissions falling substantially, never mind being eliminated totally.

Given that global carbon dioxide emissions are now 14% higher than when the abortive Copenhagen summit was held in 2009, and when we were told we had ten years to save the planet, the odd half a percent change up or down is utterly irrelevant.

And the reason is perfectly clear – the world needs fossil fuels, for the simple reason that nothing else can replace them.

via NOT A LOT OF PEOPLE KNOW THAT

https://ift.tt/2vVQKGT

February 6, 2020 at 04:06AM