Month: August 2024


via JoNova

https://ift.tt/4R3HEvA

August 27, 2024 at 09:59AM

Ed Miliband Lies About Energy Price Cap

By Paul Homewood

OFGEM have just announced the Energy Price Cap will rise by £149 a year from October 1st, and Ed Miliband says it is all the fault of the wicked Tories.

In a prepared statement, he writes:

Today’s announcement from Ofgem will be worrying news for families across Doncaster. The expected rise in the price cap is yet another consequence of the toxic legacy left by the previous government.

Their failure over 14 years to secure our energy system has left families paying more for their energy bills, and left us at the mercy of international markets controlled by dictators. The new Government is determined to change this.

The only long term solution to achieve energy independence is to sprint towards clean, homegrown power.

That’s why the Government is moving at pace to deliver on our mission for clean power, by lifting the onshore wind ban, consenting solar and getting more renewable projects built.

The Labour manifesto is even clearer, stating that the Conservatives’ ban on new onshore wind has led to some of the highest energy bills in Europe. In other words, Miliband is claiming that those bills would have been lower if we had built more wind and solar farms.

But is this true?

Even Miliband himself cannot deny how obscenely expensive all of the renewable energy subsidised under the Renewables Obligation (RO) continues to be. This was the scheme introduced by the Labour Government under Tony Blair. According to the OBR, it will add £7.9 billion to energy bills this year. The RO scheme was replaced for new projects in 2016 by Contracts for Difference (CfDs), but wind and solar farms that had already qualified under the RO continue to receive subsidies, which, to make matters worse, automatically increase year on year.

When indirect renewable subsidies, such as providing standby capacity and grid balancing, are added on, the average household is paying over £500 pa for the privilege of having renewable energy.

For all of this, of course, we have Ed Miliband’s 2008 Climate Change Act to thank, the biggest act of national self-harm ever inflicted on the UK. His concern for people’s high energy bills rings rather hollow!

But Miliband is specifically referring to David Cameron’s “ban” on onshore wind in 2016. The BBC and the Labour Party often call it a ban, or more recently a de facto ban. In reality there never was a ban. Cameron merely announced the end of subsidies for onshore wind and solar farms, whilst also making them subject to local planning approval in England. Unable to make a profit without generous subsidies, developers understandably stopped building new wind farms.

There were, however, some onshore wind and solar farms already approved at the time for CfD subsidies, and on average these now receive guaranteed prices of £113.48 and £110.25/MWh respectively. The current market price, according to OFGEM, is £86.75/MWh. Clearly then, if the Tories had indeed built more wind and solar farms, electricity prices would have been even higher than they already are. And this does not even take into account all of the extra costs incurred to handle the problems of intermittency.
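To make the comparison concrete, here is a back-of-the-envelope sketch, my own arithmetic using only the figures quoted above, of the per-MWh top-up consumers fund for each technology under a CfD:

```python
# Illustrative arithmetic only, using the figures quoted above (all GBP/MWh).
cfd_onshore_wind = 113.48   # average guaranteed CfD strike price, onshore wind
cfd_solar = 110.25          # average guaranteed CfD strike price, solar
market_price = 86.75        # current market price, per OFGEM

# Under a CfD, for every MWh generated, consumers top up the difference
# between the guaranteed strike price and the market price.
wind_premium = cfd_onshore_wind - market_price
solar_premium = cfd_solar - market_price

print(f"Onshore wind top-up: £{wind_premium:.2f}/MWh")
print(f"Solar top-up:        £{solar_premium:.2f}/MWh")
```

On those figures the top-up is roughly £27/MWh for onshore wind and £24/MWh for solar, i.e. around 30% above the market price, which is the sense in which more such capacity would have raised, not lowered, bills.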

As for Miliband’s “dictators”, they’ll be loving his decision to put an end to North Sea oil and gas!

https://dp.lowcarboncontracts.uk/dataset/actual-cfd-generation-and-avoided-ghg-emissions

via NOT A LOT OF PEOPLE KNOW THAT

https://ift.tt/h3RbNpo

August 27, 2024 at 08:10AM

AI-Model Collapse

Guest Opinion by Kip Hansen — 27 August 2024 — 1200 words

Last week I wrote an article here titled “Illogically Facts — “Fact-Checking” by Innuendo”. One of my complaints about the fake fact-check performed by three staff at Logically Facts was that it read suspiciously like it had been written by an AI chat-bot, a suspicion bolstered by the fact that Logically Facts’ claim to fame is that it is an AI-based effort.

I made the following statements:

“Logically Facts is a Large Language Model-type AI, supplemented by writers and editors meant to clean up the mess returned by this chat-bot type AI. Thus, it is entirely incapable of making any value judgements between repeated slander, enforced consensus views, the prevailing biases of scientific fields and actual facts. Further, any LLM-based AI is incapable of Critical Thinking and drawing logical conclusions.”

“Logically Facts and the rest of the Logically empire, Logically.ai, suffer from all of the major flaws in current versions of various types of AIs, including hallucination, break-down and the AI-version of “you are what you eat”.”

The article is very well written and exposes one of the many major flaws of modern AI Large Language Models (AI LLMs). AI LLMs are used to produce text responses to chat-bot type questions, power internet “searches”, and build images on request.

It has long been known that LLMs can and do “hallucinate”. The Wiki gives examples here. IBM gives a very good description of this problem, which you should read right now; at least the first half-dozen paragraphs give a moderate understanding of how these hallucinations can occur:

“Some notable examples of AI hallucination include:

  • Google’s Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system. [NB:  I was unable to verify this claim – kh]
  • Microsoft’s chat AI, Sydney, admitting to falling in love with users and spying on Bing employees
  • Meta pulling its Galactica LLM demo in 2022, after it provided users inaccurate information, sometimes rooted in prejudice.”

So, the fact that AI LLMs can and do  return not only incorrect, non-factual information, but entirely “made up” information, images, and even citations to non-existent journal articles, should shatter any illusion you might have as to the appropriate uses of chat-bot and AI search engine responses, even to fairly simple inquiries. 

Now we add another layer of reality to the lens through which you should view AI-LLM-based responses to questions you might pose. Remember, AI LLMs are currently being used to write thousands of “news articles” (like the suspect Logically Facts “analysis” of climate denial), journal papers, editorials, and scripts for TV and radio news.

AI LLMs:  They are What They Eat

This latest article in the New York Times [repeating the link] does a good job of describing and warning us of the dangers of LLMs being trained on their own output.

What is LLM training?

“As they (AI companies) trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.”

The Times presents a marvelous example of what happens when an AI-LLM is trained on its own output, in this case, hand-written digits it should be able to read and reproduce:

One can see that even in the first round of training on self-generated data, the LLM returns incorrect data, the wrong digits: the upper-left 7 becomes a 4, the 3 below that becomes an 8, etc. As that incorrect data is used to train the LLM further, after 20 iterations of re-training the data (the digits returned) is entirely undependable. After 30 iterations, all of the digits have become homogenized, basically representing nothing at all: no discernible digits, all the same.

The Times article, which was written by Aatish Bhatia, quite cleverly dubs this “Degenerative A.I.”

Think of the implications of this training now that it has already become impossible for humans to easily distinguish between AI-generated output and human-written output. In AI training, it is only words (and pixels, in the case of images) that are included in the probability determination that results in the output; the AI is asking itself: “What is the most likely word to use next?”

You really must see the examples of “Distribution of A.I.-generated data” used in the Times article. As an AI is trained on its own previous output (“eats itself” – kh), the probability distributions become narrower and narrower and the data less diverse.
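The narrowing can be sketched with a toy simulation (my own illustration, not the Times’ code): treat the “model” as nothing more than the empirical distribution of its training tokens, then repeatedly retrain it on its own generated samples. Any token that happens not to be sampled in one generation can never reappear, so diversity can only shrink.

```python
import random

def retrain_on_own_output(vocab_counts, n_samples, generations, seed=0):
    """Toy model collapse: each generation, the 'model' is just the
    empirical distribution of its training tokens; it generates
    n_samples tokens from that distribution, and those tokens become
    the next generation's entire training set."""
    rng = random.Random(seed)
    counts = dict(vocab_counts)
    history = [set(counts)]          # surviving vocabulary per generation
    for _ in range(generations):
        tokens = list(counts)
        weights = [counts[t] for t in tokens]
        sample = rng.choices(tokens, weights=weights, k=n_samples)
        counts = {}
        for t in sample:             # "retrain" on the model's own output
            counts[t] = counts.get(t, 0) + 1
        history.append(set(counts))
    return history

# Start with a diverse 'vocabulary' of 20 equally likely tokens.
start = {f"word{i}": 1 for i in range(20)}
history = retrain_on_own_output(start, n_samples=20, generations=300)
print(f"distinct tokens: {len(history[0])} at start, "
      f"{len(history[-1])} after {len(history) - 1} generations")
```

With a small sample size and enough generations the surviving vocabulary typically collapses to a handful of tokens, mirroring the homogenized digits in the Times example: the support of the distribution is monotonically non-increasing, so the drift runs in only one direction.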

I wrote previously that “The problem is immediately apparent: in any sort of controversy, the most “official” and widespread view wins and is declared “true”, and contrary views are declared “misinformation” or “disinformation”. Individuals representing the minority view are labelled “deniers” (of whatever) and all slander and libel against them is rated “true” by default.”

With today’s media outlets all being generally biased in the same direction, towards the left, liberalism, progressivism and in favor of a single party or viewpoint (slightly different in each nation), AI LLMs become trained and thus biased to that viewpoint – the major media outlets being pre-judged as “dependable sources of information”.  By the same measure, sources with opinions, viewpoints or facts contrary to the prevailing bias are pre-judged to be “undependable sources of information, mis- or disinformation”.

AI LLMs are thus trained on stories mass-produced by AI LLMs, slightly modified by human authors to read as less machine-generated, and subsequently published in major media outlets. Having “eaten” their own output repeatedly, AI LLMs give narrower and less diverse answers to questions, which are less and less factual.

This leads to:

As an LLM is trained on its own data, “the model becomes poisoned with its own projection of reality.”

Consider the situation we find in the real world of climate science.  The IPCC reports are generated by humans from the output of climate scientists and others in trusted peer-reviewed science journals.  It is well known that non-conforming papers are almost entirely excluded from journals because they are non-conforming.  Some may sneak through, and some may find a pay-to-play journal that will publish these non-conforming papers, but not many see the light of day.

Thus, only consensus climate science enters the “trusted literature” of the entire topic.  John P.A. Ioannidis has pointed out that “Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

AI LLMs are thus trained on trusted sources that are already distorted by publication bias, funding bias, the prevailing biases of their own fields, fear of non-conformity and group-think. Worse yet, as AI LLMs then train themselves on their own output, or the output of other AI LLMs, the results become less true, less diverse and less dependable, potentially poisoned by their own false projections of reality.

In my opinion, many sources of information are already seeing the effects of impending AI LLM collapse – the subtle blurring of fact, opinion and outright fiction.

# # # # #

Author’s Comment:

We live in interesting times. 

Be careful what information you accept – read and think critically, educate yourself from original principles and basic science. 

Thanks for reading.

# # # # #

via Watts Up With That?

https://ift.tt/AO3Uhfy

August 27, 2024 at 08:03AM

USA ELECTION – DEMOCRATS KEEPING QUIET ABOUT CLIMATE POLICY

Just as in the recent UK election, there has been no mention of costly climate policies by the Democrats in the US election. The difference in the USA is that the other party, the Republicans, do have a different policy, and they should mention it and not fear the Green Blob. We will see in the coming weeks whether Trump has the courage to speak up.

Election Deception: Democrats won’t mention the climate, but The Greens are happy anyway « JoNova (joannenova.com.au)

via climate science

https://ift.tt/0pwKYVr

August 27, 2024 at 07:43AM