Model Uncertainties Too Great To Reliably Advise Policymakers, German Scientists Say

German scientists Dr. Sebastian Lüning and Prof. Fritz Vahrenholt, together with recent studies, show climate models have a long way to go before they can be used to advise policymakers.

Climate science cannot reliably advise policymakers as long as model uncertainties cannot be reduced

By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt
(German text translated by P Gosselin)

On November 3, 2017, the Institute of Atmospheric Physics, Chinese Academy of Sciences issued a press release on quality-checking climate models:

A new method to evaluate overall performance of a climate model

Many climate-related studies, such as detection and attribution of historical climate change, projections of future climate and environments, and adaptation to future climate change, heavily rely on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially when more and more climate models participate in international model intercomparison projects.

Most current model evaluation metrics, e.g., root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating an individual variable. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.

“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation of model performance,” says XU, the first author of the study from the Institute of Atmospheric Physics, Chinese Academy of Sciences. The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level of metrics, comprising four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level of metrics, the multivariable integrated evaluation index (MIEI), further summarizes the three statistical quantities of the second level into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as model performance improves.

According to the study, each higher level of metrics is derived from, and concisely summarizes, the level below it. “Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics,” XU notes, therefore suggesting, “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”
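To make the layered scheme concrete, here is a minimal Python sketch that computes the level-one metrics the press release names (correlation, RMS, RMSE) for two toy variables and then rolls them up into a single score. The roll-up used here is a deliberately naive placeholder, not the published MIEI formula (see the Geosci. Model Dev. paper for the actual definitions); the toy data and variable names are likewise assumptions for illustration only.

import numpy as np

def level_one_metrics(model, obs):
    # Level-one metrics from the press release: correlation, RMS value, RMSE.
    corr = np.corrcoef(model, obs)[0, 1]
    rms = np.sqrt(np.mean(model ** 2))
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return corr, rms, rmse

# Toy "observed" and "modelled" fields for two variables (hypothetical data).
rng = np.random.default_rng(0)
obs_t = rng.normal(15.0, 5.0, 1000)            # temperature-like field
obs_p = rng.normal(800.0, 200.0, 1000)         # precipitation-like field
mod_t = obs_t + rng.normal(0.0, 1.0, 1000)     # model output with small errors
mod_p = obs_p + rng.normal(0.0, 80.0, 1000)

# Naive multivariable roll-up (NOT the published MIEI): average each
# variable's RMSE normalized by its spread, so smaller means a better model.
scores = []
for mod, obs in ((mod_t, obs_t), (mod_p, obs_p)):
    corr, rms, rmse = level_one_metrics(mod, obs)
    scores.append(rmse / np.std(obs))
print("combined score:", np.mean(scores))

The exercise mirrors XU’s caveat above: the single number is convenient for ranking models, but all the per-variable detail is lost in the roll-up.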

It is highly satisfying to see that climate modelers are now taking quality checks of their models seriously. For the period before the Little Ice Age, the models unfortunately have practically no skill at all, a fact that effective quality checks quickly make clear.

Ancell et al. (2018) have shown that some models are dominated by chaos: small changes in the input values lead to very different results.

Seeding Chaos: The Dire Consequences of Numerical Noise in NWP Perturbation Experiments
“Perturbation experiments are a common technique used to study how differences between model simulations evolve within chaotic systems. Such perturbation experiments include modifications to initial conditions (including those involved with data assimilation), boundary conditions, and model parameterizations. We have discovered, however, that any difference between model simulations produces a rapid propagation of very small changes throughout all prognostic model variables at a rate many times the speed of sound. The rapid propagation seems to be due to the model’s higher-order spatial discretization schemes, allowing the communication of numerical error across many grid points with each time step. This phenomenon is found to be unavoidable within the Weather Research and Forecasting (WRF) Model even when using techniques such as digital filtering or numerical diffusion. These small differences quickly spread across the entire model domain. While these errors initially are on the order of a millionth of a degree with respect to temperature, for example, they can grow rapidly through nonlinear chaotic processes where moist processes are occurring. Subsequent evolution can produce within a day significant changes comparable in magnitude to high-impact weather events such as regions of heavy rainfall or the existence of rotating supercells. Most importantly, these unrealistic perturbations can contaminate experimental results, giving the false impression that realistic physical processes play a role. This study characterizes the propagation and growth of this type of noise through chaos, shows examples for various perturbation strategies, and discusses the important implications for past and future studies that are likely affected by this phenomenon.”

Also see the discussion of this paper at WUWT.
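The mechanism behind the Ancell et al. result, exponential growth of tiny differences in a nonlinear system, can be reproduced with a far simpler toy than WRF. The Python sketch below integrates the classic Lorenz-63 equations twice, with initial states differing by one part in a million (comparable to the millionth-of-a-degree perturbations the abstract describes), and prints how fast the two runs diverge; the integrator and step size are illustrative choices, not those of any operational model.

import numpy as np

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system, a standard chaotic toy model.
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),        # dx/dt
        x * (rho - z) - y,      # dy/dt
        x * y - beta * z,       # dz/dt
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # perturb one variable by one part in a million

for step in range(25001):            # 25,000 steps of 0.002 = 50 model time units
    if step % 5000 == 0:
        print(f"t = {step * 0.002:5.1f}   |a - b| = {np.linalg.norm(a - b):.3e}")
    a = lorenz_step(a)
    b = lorenz_step(b)

Within a few tens of model time units the separation grows from 1e-6 to the size of the attractor itself, after which the two runs are effectively unrelated; this is the same qualitative behavior the abstract reports for numerical noise in WRF.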

Eos also looked at the limits of climate modeling on February 26, 2018. A team led by Kenneth Carslaw wrote:

Climate Models Are Uncertain, but We Can Do Something About It
Model simulations of many climate phenomena remain highly uncertain despite scientific advances and huge amounts of data. Scientists must do more to tackle model uncertainty head-on.

“Model uncertainty is one of the biggest challenges we face in Earth system science, yet comparatively little effort is devoted to fixing it. A well-known example of persistent model uncertainty is aerosol radiative forcing of climate, for which the uncertainty range has remained essentially unchanged through all Intergovernmental Panel on Climate Change assessment reports since 1995. From the carbon cycle to ice sheets, each community will no doubt have its own examples. We argue that the huge and successful effort to develop physical understanding of the Earth system needs to be complemented by greater effort to understand and reduce model uncertainty. Without such reductions in uncertainty, the science we do will not, by itself, be sufficient to provide robust information for governments, policy makers, and the public at large.”

Read more at Eos.

via NoTricksZone
