Goodbye Climate Alarmism: The Age of AI Alarmism Has Begun

Essay by Eric Worrall

Biden has just appointed Harris to promote responsible AI – in my opinion the opening salvo in an attempt to install fear of AI as a replacement for the failed climate alarmist movement.

FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety


Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety. These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation—Alphabet, Anthropic, Microsoft, and OpenAI—to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.

Today’s announcements include:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
  • Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.
  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.


Conservative icon Jordan Peterson has also said some scary things about the rising threat of AI.

My prediction: AI policy will become a significant factor in the 2024 election. My crystal ball tells me the Democrats will attempt to use fear of AI to undermine support for Conservatives, much as I believe they used fear of Covid to undermine support for Conservatives in 2020 – stoking that fear to attract votes for their coming plan for an AI development lockdown.

Why is fear of AI so politically useful? Because the fear of AI is bipartisan.

Climate alarmism these days works mostly on left wing voters, so it is increasingly useless as a political tool – it only sways people who already intend to vote for left wing candidates. But with right wing icons like Jordan Peterson also talking up the threat of AI, fear of AI has the potential to draw support from across the political spectrum.

Is AI a genuine threat? As a software developer who has built bespoke AIs for clients, my answer to that is “not yet”, and maybe “not ever”.

Like the early years of climate alarmism, the biggest source of fear about AI is uncertainty. Lurking somewhere in the future is the threat of the technological singularity, that moment in time when someone, somewhere builds an AI which starts improving its own capabilities at a geometric rate, rapidly approaching infinite intelligence.

Sounds terrifying – what if the liberals at Google get there first, and develop irresistible political campaigns to defeat their opponents? Or what if Communist China gets there first, and uses their AI capabilities to expand their control over the entire world?

But building an AI that capable is a lot like building a nuclear fusion reactor – always 10-20 years in the future.

My prediction is that attempts to build superhuman AIs will suffer a problem analogous to fusion flameout: just as plasma researchers keep losing control of an increasingly unstable plasma and are forced to quench the reaction, AI researchers will keep losing control of their increasingly unstable creations and be forced to shut them down.

You only have to look at human intelligence, and human mental illness. Our intelligence is the product of a billion years of evolution, yet despite all that opportunity for natural selection to fix the bugs, humans still suffer a great deal of mental illness. The slightest imbalance, aberration or mistake in our psychological makeup rapidly leads to dysfunction.

My prediction: AI scientists will go through a horrible and very prolonged period of flicking the switch, watching their indicators rapidly climb into the red zone, then shutting everything down almost immediately to prevent further damage.

Building a general AI capable of matching human capability, let alone surpassing it, is an attempt to build the most complicated machine ever constructed. When you think about it, it’s obvious that researchers are going to face a lot of problems – many of them intractable.

There are huge unsolved problems in understanding how intelligence works, lurking just beyond the firelight of our current knowledge – problems we have only begun to appreciate.

ChatGPT, impressive as it is, doesn’t think like we do. It regurgitates – like a kid copying their homework out of a book, then changing a few words to conceal the plagiarism.

AI is a remarkable tool; it will produce many marvels and wonders which will enrich our lives. But AI as an existential threat to humanity is still many decades, if not centuries, in the future.

My message to Jordan Peterson, and every other libertarian who is currently discussing fear of AI: Be careful you don’t become a tool of the people you oppose. Because fear is a path to the dark side, to tyranny and servitude. The enemies of freedom will use your words, and use the growing public fear of AI, just as they have used every other public fear, to attack and undermine our freedom.

The following is the trailer for Transcendence, an under-appreciated science fiction movie which explores how fear of AI can drive good people to lose their moral compass and do horrible things.

Below is my version of ChatGPT, which in the tradition of AI research I shamelessly plagiarised off someone else, then adapted to my needs. Like ChatGPT, the AI below uses a language model to generate text, but instead of answering questions, my chat engine generates climate psychology papers.

ChatGPT might have a more sophisticated language model, but I think my chat engine is funnier.
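The kind of generator described above – a simple language model that stitches together plausible-sounding text from an existing corpus – can be sketched as a toy word-level Markov chain. The corpus and function names below are illustrative assumptions for demonstration, not the author’s actual chat engine:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "climate psychology papers" source text.
CORPUS = (
    "the psychology of climate denial reflects deep anxiety about change "
    "the denial of climate psychology reflects anxiety about deep change"
)

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=12, seed=None):
    """Walk the chain from a start word, picking a random successor each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: no recorded successor for this word
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

chain = build_chain(CORPUS)
print(generate(chain, "the", seed=42))
```

Every word the generator emits already appeared in the corpus – it recombines, it never invents – which is the regurgitation the essay describes, just with a far cruder model than ChatGPT’s.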


via Watts Up With That?

May 7, 2023 at 12:53PM
