Climate model abuse

Roger Pielke Sr. reviews another very important new paper showing the abuse of models.

In the opinion of the editor Kundzewicz (who has served prominently on the IPCC), climate models were only designed to provide a broad assessment of the response of the global climate system to greenhouse gas (GHG) forcings, and to serve as the basis for devising a set of GHG emissions policies. They were not designed for regional adaptation studies.

To expect more from these models is simply unrealistic, at least for direct application to regional water management problems. The conclusions of Anagnostopoulos et al. negate the value of spending so much money on regional climate predictions decades into the future, for example on the South Eastern Australian Climate Initiative and the Queensland Climate Change Centre of Excellence.

Kundzewicz distances the professionals from such efforts:

They are not climate sceptics, but are sceptical of the claims of some climatologists and hydroclimatologists that these models are well suited for water management applications.

Hydrologists and water management professionals (hydrological and hydraulic engineers) have entered the scientific debate in force, because the GCMs are being advocated for purposes they were not designed for, i.e. watershed vulnerability assessments and infrastructure design.

As I showed in Critique of the DECR, this is not a matter of opinion, but a matter that can be decided by applying basic validation tests in each instance. To the detriment of the field, tests that would justify the use of the models do not seem to be applied, or if they are, the results are not being made available. Such testing is regarded as good and standard practice elsewhere.

The recent surge of these sorts of papers suggests I am not the only one to think it is time for the field to pay the piper.

Drought predictions for this century

In The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill Roger Pielke Sr. links GCM skill at predicting drought with natural variation:

2. “Future efforts to predict drought will depend on models’ ability to predict tropical SSTs.”

In other words, there is NO way to assess the skill of these models at predicting drought, as they have not yet shown any skill in SST predictions on time scales longer than a season, nor in natural climate cycles such as El Niño [or the PDO, the NAO, etc.].

This seems a convoluted turn of phrase. There are ways to assess the skill of these models: by comparing them with past drought frequency and severity. Such assessments show the models have NO skill at predicting droughts.

The assumption is that IF they were able to predict cycles like the PDO, then they would be able to predict droughts. But even if we average over these cycles, there remains the little problem of overall trends in extreme phenomena, which accuracy at the PDO et al. would not necessarily capture.

His argument that drought skill hinges on PDO prediction is useful, however, as a basis for excluding models from applications that depend on skills they have not demonstrated.

Roger is perhaps being polite about misleading policymakers when he continues:

Funding of multi-decadal regional climate predictions by the National Science Foundation which cannot be verified in terms of accuracy is not only a poor use of taxpayer funds, but is misleading policymakers and others on the actual skill that exists in predicting changes in the frequency of drought in the future.

The review by Dai favours the PDSI drought index:

The PDSI was created by Palmer [22] with the intent to measure the cumulative departure in surface water balance. It incorporates antecedent and current moisture supply (precipitation) and demand (PE) into a hydrological accounting system. Although the PDSI is a standardized measure, ranging from about −10 (dry) to +10 (wet)…

I always search for the assessment of accuracy first, and as usual the skill of the models gets very little, non-quantitative coverage. Climate scientists are loath to judge the models, preferring to cloak their results in paragraphs of uncertainty and present “dire predictions” of GCMs in garish figures (his Figure 11).

They need to start acting like scientists and stop these misleading practices until it is shown by rigorous empirical testing, and for fundamental reasons, that the current GCMs are fit for the purpose of drought modelling.

Just to show I am not always negative, this recent report has a lot to recommend it. The authors of “Climate variability and change in south-eastern Australia” do quite a good job of describing the climatological features impacting the area, and putting technical issues, climate, hydrology and social impact together in an informative report.

While they say:

The current rainfall decline is apparently linked (at least in part) to climate change, raising the possibility that the current dry conditions may persist, and even possibly intensify (as has been the case in south-west Western Australia).

They also admit they don’t know how to combine the output of multiple models:

Some research (Smith & Chandler, 2009) suggests that uncertainties in climate projections can be reduced by careful selection of the global climate models, with less weight being given to models that do not simulate current climate adequately. Other work suggests that explicit model selection may not be necessary (Watterson, 2008; Chiew et al., 2009c). Further research is being done to determine how to combine the output of global climate models to develop more accurate region-scale projections of climate change.

I would fault the report for suggesting nothing other than GCMs might be used, and for offering no evidence that the GCMs perform better than a mean value. If a model does no better than the long-term average, there is good reason to suppose it has no skill and to throw it out. This is called ‘benchmarking’, but rejecting any IPCC GCM appears to be an alien concept.
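The benchmarking idea above can be sketched in a few lines. This is an illustrative sketch with synthetic data; the numbers and variable names are invented for demonstration, not taken from the DECR study or any GCM output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 30-year record: observed annual rainfall (mm) and a
# noisy, biased "model" hindcast of the same quantity.
obs = rng.normal(600, 120, size=30)
model = obs + rng.normal(50, 150, size=30)

# Benchmark: simply predict the long-term climatological mean every year.
benchmark = np.full_like(obs, obs.mean())

mse_model = np.mean((model - obs) ** 2)
mse_bench = np.mean((benchmark - obs) ** 2)

# Skill score relative to the benchmark: positive means the model beats
# the long-term mean; zero or negative means it adds nothing and, on the
# argument above, should be thrown out.
skill = 1.0 - mse_model / mse_bench
print(f"skill score vs climatology: {skill:.2f}")
```

A model that cannot achieve a positive score against so trivial a benchmark has no claim to skill, whatever its physical realism.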

Hal Lewis’ Resignation

The APS reminds me of the Australian Prime Minister’s Standing Committee on Climate Change. Some of the more choice parts of Hal’s resignation letter are extracted below.

Everything that has been done in the last year has been designed to silence debate

The appallingly tendentious APS statement on Climate Change

The original Statement, which still stands as the APS position, also contains what I consider pompous and asinine advice to all world governments, as if the APS were master of the universe. It is not, and I am embarrassed that our leaders seem to think it is. This is not fun and games, these are serious matters involving vast fractions of our national substance, and the reputation of the Society as a scientific society is at stake.

It is of course, the global warming scam, with the (literally) trillions of dollars driving it, that has corrupted so many scientists, and has carried APS before it like a rogue wave. It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist.

APS management has gamed the problem from the beginning, to suppress serious conversation about the merits of the climate change claims. Do you wonder that I have lost confidence in the organization?

There are indeed trillions of dollars involved, to say nothing of the fame and glory (and frequent trips to exotic islands) that go with being a member of the club. Your own Physics Department (of which you are chairman) would lose millions a year if the global warming bubble burst.

I want no part of it, so please accept my resignation. APS no longer represents me, but I hope we are still friends.

Hal

Number of resignations by Australian scientists? Zero?

Show us your tests – Australian climate projections

My critique of models used in a major Australian drought study appeared in Energy and Environment last month (read Critique-of-DECR-EE here). It deals with validation of models (the subject of a recent post by Judith Curry), and regional model disagreement with rainfall observations (see post by Willis here).

The main purpose is summed up in the last sentence of the abstract:

The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

It is well known that, despite persistent attempts and claims in the press, general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% CL of the simulations (see Figure).
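The 95% CL check mentioned above amounts to asking whether the observed statistic falls inside the central 95% band of the ensemble of simulations. A minimal sketch, with entirely made-up numbers standing in for ensemble output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: simulated drought frequency (% of area in drought) from
# 20 ensemble members, and an observed value well below them.
ensemble = rng.normal(12.0, 2.0, size=20)
observed = 5.0

# Central 95% band of the simulations (2.5th to 97.5th percentile).
lo, hi = np.percentile(ensemble, [2.5, 97.5])
consistent = lo <= observed <= hi
print(f"95% band: [{lo:.1f}, {hi:.1f}], observed {observed} -> "
      f"{'consistent' if consistent else 'inconsistent'}")
```

When the observation falls outside the band, as it does here by construction, the simulations are statistically inconsistent with reality for that statistic.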

Rainfall has increased in tropical and subtropical areas of Australia since the 1970s, while some areas of the country, particularly the major population centres to the south-east and south-west, have experienced multi-year deficits of rainfall. Overall, Australian rainfall is increasing.

The larger issue is how to acknowledge that there will always be worthless models, and that it is the task of genuinely committed modellers to identify and eliminate them. It is not convincing to argue that validation is too hard for climate models, that they are justified by physical realism, or to use the calibrated eyeball approach. The study shows that obvious testing regimes, had they been performed, would have eliminated these drought models from contention.

While scientists are mainly interested in the relative skill of models, where statistical measures such as root mean square (RMS) error are appropriate, decision-makers are (or should be) concerned with whether the models should be used at all (are fit-for-use). Because of this, model testing regimes for decision-makers must have the potential to completely reject some or all models if they do not rise above a predetermined standard, or benchmark.
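The distinction between relative skill and fit-for-use can be made concrete: rank models by RMS error, then apply an absolute gate that can reject all of them. A sketch with invented model names and numbers:

```python
import numpy as np

# Hypothetical observations and two made-up "GCM" hindcasts (mm rainfall).
obs = np.array([480.0, 510.0, 455.0, 600.0, 530.0])
models = {
    "GCM-A": np.array([700.0, 650.0, 690.0, 710.0, 680.0]),
    "GCM-B": np.array([500.0, 520.0, 470.0, 580.0, 540.0]),
}

def rms_error(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Absolute benchmark: the RMS error of predicting the climatological mean.
threshold = rms_error(np.full_like(obs, obs.mean()), obs)

# Relative skill: a ranking, which always produces a "best" model.
ranked = sorted(models, key=lambda m: rms_error(models[m], obs))

# Fit-for-use: only models that beat the benchmark survive; the list
# can legitimately be empty.
accepted = [m for m in ranked if rms_error(models[m], obs) < threshold]
print("ranked:", ranked, "accepted:", accepted)
```

The point of the gate is that ranking alone always anoints a winner, even among uniformly worthless models; only an absolute benchmark can send all of them back.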

There are a number of ways that benchmarking can be set up, which engineers or others in critical disciplines would be familiar with, usually involving a degree of independent inspection, documentation of expected standards, and so on. My study makes the case that climate science needs to start adopting more rigorous validation practices. Until they do, regional climate projections should not be taken seriously by decision-makers.

It is up to the customers of these studies not to rely on the say-so of the IPCC, the CSIRO and the BoM, and to ask “Show me your tests”, as would be expected with any economic, medical or engineering study where the costs of making the wrong decision are high. Their duty of care requires them to be confident that all reasonable means have been taken to validate all of the models that support the key conclusions.