Errors of Global Warming Effects Modeling

Since 2006, in between promoting numeracy in education, I have used examples of simple statistics drawn from topical issues in the theory of Anthropogenic Global Warming (AGW) to illustrate points, asking the question “Have these models been validated?” in blog posts and occasionally in submissions to journals. This post summarizes these efforts.

Species Extinctions

Predictions of massive species extinctions due to AGW came into prominence with a January 2004 paper in Nature called Extinction Risk from Climate Change by Chris Thomas et al., who made the following prediction:

“we predict, on the basis of mid-range climate-warming scenarios for 2050, that 15–37% of species in our sample of regions and taxa will be ‘committed to extinction’.”

Subsequently, three communications appeared in Nature in July 2004. Two raised technical problems, including one by the eminent ecologist Joan Roughgarden. Opinions ranged from “Dangers of Crying Wolf over Risk of Extinctions”, concerned with the damage done to conservationism by alarmism and by poorly written press releases from the scientists themselves, to “Extinction risk [press] coverage is worth the inaccuracies”, which stated “we believe the benefits of the wide release greatly outweighed the negative effects of errors in reporting”.

Being among those who believe gross scientific inaccuracies are not justified, and that such attitudes diminish the standing of scientists, I was invited to a meeting of a multidisciplinary group of 19 scientists, including Dan Botkin from UC Santa Barbara, mathematician Matt Sobel, Craig Loehle and others, at the Copenhagen base of Bjørn Lomborg, author of The Skeptical Environmentalist. This resulted in Forecasting the Effects of Global Warming on Biodiversity, published in BioScience in 2007. We were particularly concerned by the cavalier attitude to model validation in the Thomas paper, and in the field in general:

Of the modeling papers we have reviewed, only a few were validated. Commonly, these papers simply correlate present distribution of species with climate variables, then replot the climate for the future from a climate model and, finally, use one-to-one mapping to replot the future distribution of the species, without any validation using independent data. Although some are clear about some of their assumptions (mainly equilibrium assumptions), readers who are not experts in modeling can easily misinterpret the results as valid and validated. For example, Hitz and Smith (2004) discuss many possible effects of global warming on the basis of a review of modeling papers, and in this kind of analysis the unvalidated assumptions of models would most likely be ignored.

The paper observed that few mass extinctions have been seen over recent rapid climate changes, suggesting something must be wrong with models that produce such high extinction rates. They speculated that species may survive in refugia: suitable habitats below the spatial scale of the models.

Another example of an unvalidated assumption that could bias results in the direction of extinctions is described in chapter 7 of my book Niche Modeling.

[Figure: range shift, showing the three possible responses of a species’ range to a shifted niche]

When climate change shifts a species’ niche over a landscape (dashed to solid circle), the response of that species can be described in three ways: dispersing to the new range (migration), local extirpation (intersection), or expansion (union). Given that the probability of extinction is correlated with range size, there will be either no change (migration), an increase (intersection), or a decrease (union) in extinctions, depending on the dispersal type. Thomas et al. failed to consider range expansion (union), a behavior that predominates in many groups. Consequently, the methodology was inherently biased towards extinctions.
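The size of this bias is easy to explore. Below is a minimal sketch in R, assuming circular ranges on a grid; the grid, circle positions and the idea of counting cells are illustrative, not taken from Thomas et al.:

    # Range size under the three dispersal assumptions; extinction risk is
    # taken to decline with range size, so ignoring 'union' overstates risk
    # for species that expand their ranges.
    in_circle <- function(x, y, cx, cy, r) (x - cx)^2 + (y - cy)^2 <= r^2
    g <- expand.grid(x = seq(0, 10, 0.1), y = seq(0, 10, 0.1))
    old <- in_circle(g$x, g$y, 4, 5, 2)          # current range (dashed circle)
    new <- in_circle(g$x, g$y, 6, 5, 2)          # range of the shifted niche (solid circle)
    c(migration    = sum(new),                   # full dispersal to the new range
      intersection = sum(old & new),             # no dispersal: overlap only
      union        = sum(old | new)) / sum(old)  # expansion: old plus new range

The relative range sizes show the point directly: the intersection is smaller than the original range, the union larger, so an intersection-only analysis inflates extinction estimates.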

One of the many errors in this work was a failure to evaluate the impact of such assumptions.

The prevailing view now, according to Stephen Williams, coauthor of the Thomas paper, Director of the Centre for Tropical Biodiversity and Climate Change, and author of such classics as “Climate change in Australian tropical rainforests: an impending environmental catastrophe”, may be found here:

Many unknowns remain in projecting extinctions, and the values provided in Thomas et al. (2004) should not be taken as precise predictions. … Despite these uncertainties, Thomas et al. (2004) believe that the consistent overall conclusions across analyses establish that anthropogenic climate warming at least ranks alongside other recognized threats to global biodiversity.

So how precise are the figures? Williams suggests we should simply trust the beliefs of Thomas et al., an approach referred to disparagingly in the forecasting literature as a judgmental forecast rather than a scientific forecast (Green & Armstrong 2007). These simple models gloss over numerous problems in validating extinction models, including the propensity of so-called extinct species to quite often reappear. Usually they are small, hard to find, and no-one is really looking for them.

Hockey-stick

One of the pillars of AGW is the view that 20th-century warmth is exceptional in the context of the past 1200 years, as illustrated by the famous hockey-stick graph, seen in movies and in government reports to this day.

Claims that 20th-century warming is ‘exceptional’ rely on selection of so-called temperature ‘proxies’ such as tree rings, and statistical tests of the significance of changes in growth. I modelled the proxy selection process here and showed that you can get a hockey-stick shape using random numbers (with serial correlation). When the numbers trend, and are then selected based on correlation with recent temperatures, the result is inevitably ‘hockey stick’ shaped: a distinct uptick where the random series correlate with recent temperatures, and a long straight shaft as the series revert to the mean. My reconstruction was similar to many other reconstructions, with a low-variance medieval warm period (MWP).

[Figure: reconstruction built from selected random series, showing the hockey-stick shape]
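For readers who want to try this, here is a hedged sketch of the experiment in R (not the original code from the AIG article; the AR coefficient and selection threshold are illustrative):

    # Generate serially correlated random series, keep those that correlate
    # with a rising 'instrumental' period, and average the selected series.
    set.seed(1)
    sims <- replicate(1000, arima.sim(list(ar = 0.95), 1000))  # red-noise 'proxies'
    temp <- seq(0, 1, length.out = 50)            # rising recent temperatures
    recent <- 951:1000                            # the 'calibration' period
    keep <- apply(sims, 2, function(s) cor(s[recent], temp) > 0.4)
    plot(rowMeans(sims[, keep]), type = "l")      # long flat shaft, sharp uptick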

It is an error to underestimate the effect on uncertainty of ex-post selection based on correlation, or ‘cherry picking’. Cherry picking has been much criticised on ClimateAudit. In February 2009, Steve McIntyre and Ross McKitrick published a comment criticizing an article by Michael Mann, citing my AIG article and saying:

Numerous other problems undermine their conclusions. Their CPS reconstruction screens proxies by calibration-period correlation, a procedure known to generate ‘‘hockey sticks’’ from red noise (4).

The response by Michael Mann acknowledged that such screening was common and was used in their reconstructions, but claimed the criticism was ‘unsupported’ in the literature.

McIntyre and McKitrick’s claim that the common procedure (6) of screening proxy data (used in some of our reconstructions) generates ‘‘hockey sticks’’ is unsupported in peer-reviewed literature and reflects an unfamiliarity with the concept of screening regression/validation.

In fact, it is supported in the peer-reviewed literature: Gerd Bürger raised the same objection in a 29 June 2007 Science comment on “The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years” by Timothy Osborn and Keith Briffa, finding 20th-century warming not exceptional.

However, their finding that the spatial extent of 20th-century warming is exceptional ignores the effect of proxy screening on the corresponding significance levels. After appropriate correction, the significance of the 20th-century warming anomaly disappears.

The National Academy of Sciences agreed that the uncertainty was greater than appreciated, and shortened the hockey-stick of the time by 600 years (contrary to assertions in the press).

Long Term Persistence (LTP)


Here is one of my first PHP applications: a fractional differencing climate simulation. Reload to see a new simulation, together with measures of correlation (r2 and RE) with some monthly climate figures of the time.

This little application gathered a lot of interest, I think because fractional differencing is an inherently interesting technique: it creates realistic temperature simulations, and is a very elegant way to generate series with long term persistence (LTP), a statistical property that generates natural ‘trendiness’. One of the persistent errors in climate science has been the failure to take this autocorrelation in climate data into account, leading to inflated significance values.
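The original application was in PHP; below is a minimal sketch of the same idea in R. The parameter d and the filter truncation are illustrative assumptions:

    # Fractional differencing: filter white noise with the MA weights of
    # (1 - B)^(-d); 0 < d < 0.5 gives a stationary series with long term persistence.
    set.seed(42)
    n <- 1200; m <- 200; d <- 0.3   # series length, filter truncation, LTP parameter
    psi <- cumprod(c(1, (seq_len(m - 1) - 1 + d) / seq_len(m - 1)))  # psi_k = psi_{k-1}*(k-1+d)/k
    x <- stats::filter(rnorm(n + m), psi, method = "convolution", sides = 1)
    x <- as.numeric(x)[-seq_len(m)]  # drop burn-in with incomplete filter support
    plot(x, type = "l")              # note the natural 'trendiness'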

It has been noted that there is no requirement of verified accuracy for climate models to be incorporated into the IPCC reports. Perhaps if I got my random model published it would qualify. It would be a good benchmark.

Extreme Sensitivity

“According to a new U.N. report, the global warming outlook is much worse than originally predicted. Which is pretty bad when they originally predicted it would destroy the planet.” –Jay Leno

The paper by Rahmstorf et al. must rank as one of the most quotable of all time.

The data available for the period since 1990 raise concerns that the climate system, in particular sea level, may be responding more quickly to climate change than our current generation of models indicates.

This claim, made without the benefit of any statistical analysis or significance testing, is widely quoted to justify claims that the climate system is “responding more strongly than we thought”. I debated this paper with Stefan at RealClimate, and succeeded in demonstrating that they had grossly underestimated the uncertainty.

His main defense was that the end-point uncertainty would only affect the last 5 points of the smoothed trend line with an 11-point embedding. Here the global temperatures were smoothed using a complex method called Singular Spectrum Analysis (SSA). I gave examples of SSA and other methods where the end-point uncertainty affected virtually ALL points in the smoothed trend line, and certainly more than the last 5 points. Stefan clearly had little idea of how SSA worked. His final message, without an argument, was:

[Response: If you really think you’d come to a different conclusion with a different analysis method, I suggest you submit it to a journal, like we did. I am unconvinced, though. -stefan]
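The end-point behaviour is easy to demonstrate. Below is a hedged illustration using smooth.spline as a stand-in for SSA (which is not in base R): append one observation, re-smooth, and count how many of the earlier smoothed points change.

    set.seed(1)
    y <- cumsum(rnorm(120))                  # a random-walk 'temperature' series
    fit <- function(y) smooth.spline(seq_along(y), y)$y  # smoothed trend line
    before <- fit(y[1:119])                  # smooth without the newest point
    after  <- fit(y)[1:119]                  # smooth with it, same points
    sum(abs(after - before) > 1e-9)          # typically far more than 5 points change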

But to add insult to injury, this paper figured prominently in the Interim Report of the Garnaut Review, to which I put in a submission.

“Developments in mainstream scientific opinion on the relationship between emissions, accumulations and climate outcomes, and the Review’s own work on future business-as-usual global emissions, suggest that the world is moving towards high risks of dangerous climate change more rapidly than has generally been understood.”

As time moves on and more data become available, a trend line computed using the same technique is regressing to the mean. It is increasingly clear that the apparent upturn was probably due to the 1998 El Niño. It is an error to regard a short-term deviation as an important indication of heightened climate sensitivity.

More Droughts

The CSIRO Climate Adaptation Flagship produced a Drought Exceptional Circumstances Report (DECR), suggesting among other things that droughts would double in the coming decades. Released in the middle of a major drought in Southern Australia, this glossy report had all the hallmarks of promotional literature. I clashed with CSIRO, first over the release of their data, and then in attempting to elicit a formal response to the issues raised. My main concern was that there was no apparent attempt to demonstrate that the climate models used in the report were fit for the purpose of modeling drought, particularly rainfall.

One of the main results of my review of the data is summed up in the following graph, comparing the predicted frequency and severity of low rainfall over the last hundred years with the observed frequency and severity. It is quite clear that the models are inversely related to the observations.

[Figure: observed vs. modelled frequency and severity of low rainfall, 1900-2007]

A comment submitted to the Australian Meteorological Magazine was recently rejected. Here I tested the models and observations following the approach of Rybski et al. of analyzing differences between the discrete periods 1900-1950 and 1951-2007. The table below shows that while observed drought decreased significantly between the periods, modeled drought increased significantly.

Table 1: Mean percentage area of exceptionally low rainfall over time periods suggested by KB09. A Mann-Whitney rank-sum test shows significant differences between periods.

                          1900-2007   1900-1950   1951-2007   P (1900-2007 vs. 1951-2007)   P (1900-1950 vs. 1951-2007)
Observed % Area Drought   5.6±0.5     6.2±0.7     4.9±0.6     0.10                          0.004
Modelled % Area Drought   5.5±0.1     4.8±0.2     6.2±0.2     0.006                         <0.001
(Test: Mann-Whitney rank-sum test, wilcox.test(x,y) in R, in all cases.)
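A minimal sketch of this comparison, assuming a vector area of yearly mean percentage area in drought and a matching vector year (the names are illustrative, not from the original script):

    early <- area[year >= 1900 & year <= 1950]
    late  <- area[year >= 1951 & year <= 2007]
    wilcox.test(early, late)   # Mann-Whitney rank-sum test of the two periods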

Moreover, I showed that while similar results were reported for temperature in the DECR (where models and observations are more consistent), they were not reported for rainfall.

The reviewers did not comment on the statistical proof that the models were useless at predicting drought. Instead, they pointed to Fig 10 in the DECR, a rough graphic, claiming “the models did a reasonable job of simulating the variability”. I am not aware of any statistical basis for model validation by the casual matching of the variability of observations to models. The widespread acceptance of such low standards of model validation is apparently a feature of climate science.

Former Head of the Australian Bureau of Statistics, Ian Castles, solicited a review by independent Accredited Statisticians at the ANU, Brewer and Other. They concurred that the models in the DECR required validation (along with other interesting points).

Dr Stockwell has argued that the GCMs should be subject to testing of their adequacy using historical or external data. We agree that this should be undertaken as a matter of course by all modelers. It is not clear from the DECR whether or not any such validation analyses have been undertaken by CSIRO/BoM. If they have, we urge CSIRO/BoM make the results available so that readers can make their own judgments as to the accuracy of the forecasts. If they have not, we urge them to undertake some.

A persistent error in climate science is using models when they have not been shown to be ‘fit for purpose’.

Miskolczi

Recently a paper came out that potentially undermines the central assumptions of climate modeling. Supported by extensive empirical validation, it suggests that ‘optical depth’ in the atmosphere is maintained at an optimal, constant value (on average over the long term). Finding an initially negligible sensitivity of 0.24C surface temperature increase to a doubling of CO2, it goes on to suggest constraints that ensure equilibrium will eventually be established, giving no increase in temperature, due to reversion to the constant optical depth. The paper by Ferenc Miskolczi (2007), called Greenhouse effect in semi-transparent planetary atmospheres, was published in the Quarterly Journal of the Hungarian Meteorological Service, January–March 2007.

I was initially impressed by the extensive validation of his theory using empirical data. Despite a furious debate online, there has been no peer-reviewed rebuttal to date. The pro-AGW blog site RealClimate promised a rebuttal by “students” but to date has made none. This suggests either that it is being carefully ignored, or that it is regarded as transparently flawed.

Quite recently Ken Gregory encouraged Ferenc to run his model using actual recorded water vapor data, which decline in the upper atmosphere over the last few decades. While there are large uncertainties associated with these data, they do show a decline consistent with Ferenc’s theory that water vapor (a greenhouse gas) will decline to compensate for increased CO2. The results of Miskolczi’s calculations using his line-by-line HARTCODE program are given here.

The theoretical aspects of Ferenc’s theory have been furiously debated online. I am not sure that any conclusions have been reached, but nor has his theory been disproved.

Conclusions

What often happens is that a publication appears and gets a lot of excited attention. Then some time later, rather quietly, subsequent work is published that questions the claim or substantially weakens it. But that does not get any headlines, and the citation rate typically runs 10:1 in favor of the alarmist claims. It does not help that the IPCC report selectively cites studies and presents unvalidated projections as ‘highly likely’, which shows they are largely expert forecasts, not scientific forecasts.

All of the ‘errors’ here can be attributed to exaggeration of the significance of the findings, due to inadequate rigor in the validation of models. The view that this is an increasing problem is shared by new studies of rigor from the intelligence community, and applies even more to results derived so easily from computer modeling.

The proliferation of data accessibility has exacerbated the risk of shallowness in information analysis, making it increasingly difficult to tell when analysis is sufficient for making decisions or changing plans, even as it becomes increasingly easy to find seemingly relevant data.

I also agree with John P. A. Ioannidis, who in a wide-ranging study of medical journals found that Most Published Research Findings Are False. To my mind, when the methodologies underlying AGW are scrutinized, the findings seem to match the prevailing bias. To make matters worse, in most cases the response of the scientific community has been to carefully ignore, dissemble, or launch ad hominem attacks at dissenters, instead of initiating vigorous programs to improve rigor in problem areas.

We need to adopt more practices from clinical research, such as the structured review, whereby the basis for evaluating evidence for or against an issue is well defined. In this view, the IPCC is simply a review of the literature, one among reviews by competing groups (such as the 2008 NIPCC report Nature, Not Human Activity, Rules the Climate). In other words, stop pretending scientists are unbiased, and put systems in place to help prevent ‘group-think’ and promote more vigorous testing of models against reality.

If the very slow to nonexistent rate of increase in global temperature continues, we will be treated to the spectacle of otherwise competent researchers clinging to extreme AGW, while the public becomes more cynical and uninterested. This could have been avoided if researchers had been confronted with: “Are these models validated? If they are, by all means make your forecasts; if not, don’t.”

Jan Pompe Science Project

Some time ago I had a brief discussion with Leif Svalgaard on the ClimateAudit blog, inspired by an exchange between Leif and David Archibald in which the latter complained that Leif’s TSI reconstruction was “too flat”.

The sunspots exhibit cyclic variability in the frequency of their cycles. Most thermostats work by pulse width modulation, and some digital music by pulse frequency modulation. Both work in a similar manner: the thermal inertia of whatever the thermostat is controlling smooths the temperature variability, and the pulse frequency demodulator is a simple low-pass filter, often just a series resistor and shunt capacitor. In both cases only the duty cycle or the frequency varies, not the amplitude. Below is a description of how this behaviour can be simulated with an electrical circuit emulator called ‘qucs’.
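As a hedged sketch of the same idea in R rather than qucs (all component values are illustrative): a fixed-amplitude, fixed-width pulse train whose frequency is slowly modulated, demodulated by a first-order RC low-pass filter.

    dt <- 1e-4; RC <- 0.05; w <- 0.01      # time step (s), filter constant (s), pulse width (s)
    t <- seq(0, 2, by = dt)
    f <- 20 + 10 * sin(2 * pi * 0.5 * t)   # modulating signal: pulse frequency (Hz)
    phase <- (cumsum(f) * dt) %% 1         # fractional cycle count
    u <- as.numeric(phase < f * w)         # pulses of fixed width and amplitude
    v <- numeric(length(t))                # RC filter: v[i] = v[i-1] + (u - v[i-1])*dt/RC
    for (i in seq_along(t)[-1]) v[i] <- v[i - 1] + (u[i - 1] - v[i - 1]) * dt / RC
    plot(t, v, type = "l")                 # the smoothed output tracks the pulse frequency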


Biased Research Studies

Detecting bias in research is not so difficult when you know what to look for. The conclusions are not justified by the data. Instead, the data may confirm, be consistent with (or not inconsistent with) the conclusions. Working against this, however, are basic human motives on the part of the writer: to find novel and interesting approaches, to find significant results where nothing is there, to be accepted by colleagues, to get grants, and to be published.

According to Geoffrey Miller (author of The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature):

Geoffrey Miller: I think the interesting thing about human intelligence and capacities for abstract reasoning, and metaphor and analogy, is how very poor most people are at being evidenced based and sceptical. What we love to do is pick up little factoids and half-understood theories and repeat them to others to be interesting. Particularly on first dates. So we try to be interesting, we don’t really much care about the truth of what we’re saying, and scientists have to be extremely self conscious about this: not just to be interesting but to be right. Most humans most of the time though adopt ideologies and beliefs that are there principally to make their minds attractive to others, not because those beliefs actually correspond to the world.

John P. A. Ioannidis provides the proof of widespread research bias in Why Most Published Research Findings Are False (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1182327).

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

The factors identified by Ioannidis as contributing to bias include:

  • when the studies conducted in a field are smaller;
  • when effect sizes are smaller;
  • when there is a greater number and lesser preselection of tested relationships;
  • where there is greater flexibility in designs, definitions, outcomes, and analytical modes;
  • when there is greater financial and other interest and prejudice;
  • and when more teams are involved in a scientific field in chase of statistical significance.

All of these factors apply to global warming and global warming effects science: the small effects, the significance chasing, the ad hoc methodologies, the competition, and of course the financial and other interests. All add up to an increased probability of Type 1 error: accepting a difference where none actually exists. So the strategy of skeptics is invariably constrained to saying ‘hold on, you have inflated certainty here, or made this mistake there’. But of the factors above, the only one really amenable to change that could counteract human bias is the fourth: greater standardization in designs and analytical modes.

This is where replication, checking, data access, and the services of the Accredited Statistician come to the fore. This view is promoted in the recent article by McCullough and McKitrick entitled Check the Numbers: The Case for Due Diligence in Policy Formation. This is something Steve McIntyre, Ian Castles, myself and others have been harping on for years, and its value should be a slam-dunk in the current spate of investment frauds. It’s all about the numbers. Notably, their 44-page report has a section on droughts in Australia (p. 27).

In July 2008, the Australian Bureau of Meteorology and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) released a report entitled An Assessment of the Impact of Climate Change on the Nature and Frequency of Exceptional Climatic Events. It received considerable media attention for what appeared to be predictions of a dramatic increase in drought. News coverage by the Australian Broadcasting Corporation began, “A new report is predicting a dramatic loss of soil moisture, increased evaporation and reduced ground water levels across much of Australia’s farming regions, as temperatures begin to rise exponentially” (ABC Rural, July 7, 2008).

Shortly after its release, David Stockwell, an ecological systems modeler and Australian expatriate living in San Diego, became doubtful about whether the models had any demonstrated ability to predict known past events and whether the forecast changes were statistically significant–i.e., distinguishable from random guesses. However, neither the data nor the methodology were sufficiently well described in the report to allow him to investigate. Stockwell emailed CSIRO to request the data used for the claims in the report. The request was promptly refused. He was told on July 15, 2008, that the data would not be sent to him “due to restrictions on Intellectual Property” (Niche Modeling, July 15, 2008). About a month after Stockwell’s requests began to get media and Internet attention, CSIRO changed course and released their data. Stockwell quickly found that the models were unable to replicate observed historical trends, typically generating patterns that were opposite to those in the data. Hence their predictions of future trends did not have the credibility CSIRO had claimed (Niche Modeling, August 28th, 2008). By this time, however, media interest in the report had substantially died away so the initial impression was not corrected.

Drought Exceptional Circumstances

Ian Castles organized a review of the Drought Exceptional Circumstances Report by two Accredited Statisticians, who also reviewed my first report on the skill of the climate models.

The statisticians found inadequate validation of the models of drought, as well as suboptimal regionalization, in the DECR. They also found my analysis lacked force, and so I have done additional analysis in line with their suggestions.

The last few posts in this series consisted of reviews of an unsuccessful submission to the Australian Meteorological Magazine (AMM), showing how contradictions between models and observations were suppressed from the conclusions of the DECR. These reviews cover similar ground from a different angle: the skill of the climate models in the DECR, failing to identify any real skill in the predictions of drought, and ways of showing the divergence between the models (increasing drought) and the real-world observations (decreasing drought) at the climatic time scale.

Below are the abstracts:

Some comments on the Drought Exceptional Circumstances Report (DECR) and on Dr David Stockwell’s critique of it

K.R.W. Brewer1 and A.N. Other1
28 January, 2009

1. K.R.W. Brewer is an Accredited Statistician of the Statistical Society of Australia Inc. (SSAI) and a long term Visiting Fellow at the School of Finance and Applied Statistics within the College of Business and Economics at the Australian National University.
2. A.N. Other is a pseudonym for another Accredited Statistician of the SSAI who prefers to remain anonymous. Full responsibility for the content is taken by K.R.W. Brewer.

Abstract

The Drought Exceptional Circumstances Report (DECR) was authored by a team drawn from the CSIRO and Australia’s Bureau of Meteorology, and was publicly released in July 2008. Almost immediately it became a source of controversy. This evaluation, both of the Report itself and of the critique of it written by Dr David Stockwell, finds good mixed with less than good in both. The DECR itself is criticized for its poor delineation of Regions within Australia, for the choices made of statistics to be constructed, for the manners of their construction, and for not getting the best out of the relevant available data. Dr Stockwell is criticized for his inappropriate choices of methodology and of time periods for analysis, and also for misunderstanding some parts of what the DECR’s authors had chosen to do. Nevertheless, both the Report itself and Dr Stockwell’s critique of it are welcome stimuli to further investigate a serious issue within the climate change debate.

Validation of Climate Effect Models: Response to Brewer and Other

David R.B. Stockwell
February 4, 2009

Abstract

A review by independent Accredited Statisticians, Brewer and Other [KB09], suggested that some claims in the report “Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report” [DS08] were premature. Additional tests suggested by KB09 support the claim made in the original report of “no credible basis for the claims of increasing frequency of Exceptional Circumstances declarations”. The contributions of KB09 and DS08 to the evaluation of skill of climate model simulations with, arguably, weakly validated idiosyncratic statistics are discussed. These include recommendations for greater rigor in evaluating the performance of climate effects simulations, such as those used in standardized forecasting practices [AG09].

One thing is clear: the climate models that all of these predictions rely on have not been validated to accepted standards. That is a major lapse on the part of the climatologists, who nonetheless use the models to influence public opinion and action.

Contrast the quality and professionalism of the review by the statisticians with the error-ridden, categorical reviews of the AMM article by climate scientists. The greater rigor of the statisticians is clearly evident.


Validation of Climate Effect Models: Response to Brewer and Other

David R.B. Stockwell
February 4, 2009

Abstract

A review by independent Accredited Statisticians, Brewer and Other [KB09], suggested that some claims in the report “Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report” [DS08] were premature. Additional tests suggested by KB09 support the claim made in the original report of “no credible basis for the claims of increasing frequency of Exceptional Circumstances declarations”. The contributions of KB09 and DS08 to the evaluation of skill of climate model simulations with, arguably, weakly validated idiosyncratic statistics are discussed. These include recommendations for more rigor in evaluating the performance of climate effects simulations, such as those used in standardized forecasting practices [AG09].

Introduction

As part of a review of the support to farmers and rural communities provided under the Exceptional Circumstances (EC) arrangements and other drought programs, the Australian Federal Government Department of Agriculture, Fisheries and Forestry (DAFF) commissioned a study from the CSIRO Climate Adaptation Flagship and the Australian Bureau of Meteorology (BoM) to examine the future of EC declarations under climate change scenarios. The DECR examined the yearly percentage area affected by exceptional temperature, rainfall, and soil moisture levels for each of seven Australian regions from 1900 to 2007, for both recorded observations and climate model simulations of historic drought, concluding:

DECR: Under the high scenario, EC declarations would likely be triggered about twice as often and over twice the area in all regions.

The interpretation of such statements by their client, DAFF, is illustrated by a press release of 6 July 2008 (DAFF08/084B) stating:

DAFF: Australia could experience drought twice as often and the events will be twice as severe within 20 to 30 years, according to a new Bureau of Meteorology and CSIRO report.

After the summary data used in the DECR were made freely available on the BoM website, an assessment of the validity of the climate models was circulated [DS08], examining the skill of the climate models and concluding there was “no basis for belief in the claim of increasing frequency of EC declarations”. At the instigation of Dr Ian Castles, independent Accredited Statisticians reviewed the DECR and DS08 [KB09]. This study provides some additional analysis in response to suggestions in KB09, and addresses other questions regarding DS08. The analysis is available in an R script [R09].

Why Validate?

A model or simulation such as a global climate model (GCM) is a surrogate for an actual climate system. If the model does not provide a valid representation of the actual system, any conclusions derived from the model or simulation are likely to be erroneous and may result in poor decisions being made. Validation of models is expected practice throughout society, similar to the business concept of ‘fitness for use’. Specifically:

Validation is the process of determining the degree to which a model or simulation is an accurate representation of the real world from the perspective of the intended uses of the model or simulation.

Validation consists of comparing simulation and system output data on one or more statistics with a formal statistical procedure. Examples of statistics that might be used include the mean, the trend, and correlation or confidence intervals. It is important to specify which statistics are most important, as some statistics may be more relevant than others. In the case of forecasting the effects of CO2 on drought, the overall trend is regarded as more important than the patterns of correlation, because climate is a longer-term phenomenon.

A model’s results have credibility if they satisfy additional factors: demonstration that the simulation has been validated and verified, and general understanding of and agreement with the simulation’s assumptions. If validity has not been demonstrated adequately, or the model ‘fails’ in key ways, then it is not ‘fit for use’. If it fails all tests, then it is accurately described as ‘useless’, and certainly cannot be regarded as credible.

Discussion

KB09 agrees with DS08 on the need for more effective validation of models of drought at regional scales.

6. Dr Stockwell has argued that the GCMs should be subject to testing of their adequacy using historical or external data. We agree that this should be undertaken as a matter of course by all modellers. It is not clear from the DECR whether or not any such validation analyses have been undertaken by CSIRO/BoM. If they have, we urge CSIRO/BoM make the results available so that readers can make their own judgments as to the accuracy of the forecasts. If they have not, we urge them to undertake some.

7. If any such re-evaluation is to be carried out, however, it should be done using two separate time periods, namely 1900-1950 (during which the rainfall trend was generally upwards) and 1950-2007 (where it was generally downwards.) This would allow the earlier period to provide internal validations and the later period external validations. However, if and when these analyses are repeated, the raw data used should be compiled not for the existing seven Regions, but for more homogeneous Regions, as suggested in item 1 above.

Note that the thrust of DS08 was that the models failed a range of validation tests, so there was no credible basis for the DECR claims. Additional analyses follow, using the time periods suggested in KB09, and reporting the normality of distributions and residuals. These analyses use robust tests on the mean values of each year over all regions and models, in order to improve the normality of the distribution by filling in most of the zero (no drought) years. While there remain deficiencies in the approach (e.g. the mean is over regions of unequal size, as we were not supplied with the grid cells needed for making true means), it is argued the result is robust.

[Figure 1]

Fig 1. The area of each of the seven regions under exceptionally low rainfall (colors), and the mean (black).

Difference of means 1900-1950 vs. 1950-2007

Table 1 is similar to the mean comparison analysis in RBHS06. Here, for observations, the mean droughted area over all 7 regions, and for model projections, the mean over all 7 regions of the means of all 13 models, were compared over half-century periods. The mean areal extent of observed exceptionally low rainfall years decreased from 1900-1950 to 1951-2007, while the simulated area of exceptionally low rainfall years increased over the same period. The p values for a non-parametric Mann-Whitney test, used because the observations are not normally distributed, indicate the differences between the periods are highly significant.

Table 1: Mean percentage area of exceptionally low rainfall over time periods suggested by KB09. A Mann Whitney rank-sum test shows significant differences between periods.

                          1900-2007   1900-1950   1951-2007   P (1900-2007 vs. 1951-2007)   P (1900-1950 vs. 1951-2007)
Observed % Area Drought   5.6±0.5     6.2±0.7     4.9±0.6     0.10                          0.004
Modelled % Area Drought   5.5±0.1     4.8±0.2     6.2±0.2     0.006                         <0.001
(Test: Mann-Whitney rank-sum test, wilcox.test(x,y) in R, in all cases.)

Trends in 1900-1950 vs. 1950-2007

Table 2 below shows two analyses related to trends on the entire data set, together with the p-value from a Shapiro test for normality of residuals. A significant negative coefficient in LM Obs vs. Exp 1900-2007 indicates an inverse relationship between observations and forecasts, while the significant p-value in the Shapiro test indicates the residuals are not normally distributed. While the trend of the observed drought area over the 1951-2007 period is not significantly different from zero in this test, the trend of the projections over the same period is positive and significant. The Shapiro tests are significant, indicating non-normality of residuals.

These results are consistent with those obtained by a different method in Table 1. The models forecast increasing drought areas, but the trend in the observed drought extent is mildly to significantly decreasing. Taking the mean of all models and regions did not correct the departure of residuals from normality, due to the highly non-normal original data distribution. Normal residuals may only be obtained with a greatly improved statistical modelling approach, beyond the scope of this study.

Table 2. Linear regression test and residual normality of (1) all observed and forecast data and (2) trends of mean of observed and forecasts:

                           Linear model                                   Shapiro test
LM Obs vs. Exp 1900-2007   Obs = -0.6*Exp + 8.9  (r2 = 0.04, p = 0.04)    P < 0.001
LM of Obs 1951-2007        Obs = -0.02*Exp + 6.3 (r2 = -0.01, p = 0.78)   P < 0.001
LM Forecast 1951-2007      Obs = 0.04*Exp + 2.7  (r2 = 0.07, p = 0.06)    P < 0.001
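A hedged sketch of the Table 2 style tests, assuming a data frame d with yearly columns obs and exp (observed and modelled percentage area); the names are illustrative, not from the original R script [R09]:

    fit <- lm(obs ~ exp, data = d)   # e.g. LM Obs vs. Exp 1900-2007
    coef(fit)                        # a negative slope indicates an inverse relationship
    summary(fit)$r.squared           # variance explained
    shapiro.test(residuals(fit))     # normality of residuals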

Moving 30 year Averages

Another approach to evaluating climatic trends was illustrated in the DECR and in Fig 1 of DS08. Fig 2 below shows the overall 30-year running mean of percentage area of exceptionally low rainfall decreasing in almost all areas for observations, and increasing in all areas for forecasts. Further visual evidence of the significance of the difference between model projections and observations is the lack of overlap in the spread of results at 1990.

[Figure 2]

Figure 2. Overall average (green thick line) of the 30-year running average of percentage area of exceptionally low rainfall for observations is decreasing, in almost all areas (red lines), while models (black lines) are increasing in all areas.
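A running mean of this kind can be computed with a simple moving-average filter; a minimal sketch, assuming a yearly series x of percentage area in exceptionally low rainfall:

    run30 <- stats::filter(x, rep(1 / 30, 30), sides = 2)  # centered 30-year running mean
    plot(run30, type = "l")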

No doubt other statistics could be used to compare the difference between observed and modelled drought trends, with greater confidence if normality could be achieved. The most accurate conclusion, then, is that while it may be premature to say the models are entirely ‘useless’ at simulating drought, they have not been shown to be ‘fit for use’ for forecasting trends, and so are not credible.

Weather, Climate and Chaos

KB09 suggested there were misunderstandings of the DECR in the DS08 review (without being specific, as the word is not used elsewhere in the report). Possibly KB09 refer to a distinction between ‘weather’ and ‘climate’ in the sense used by Andrew Ash below (pers. comm.).

AA: The correlation and efficiency tests are based on an assumption that the climate models are being used to hindcast historical weather. This assumption is incorrect.

Their argument is that the failure of validation at the shorter time scales of weather does not preclude the fitness of the model at a 30-year time scale. It was because of this distinction that more emphasis was placed on the trends in DS08 (see Fig 2), as shown by the statement in the abstract of DS08:

DS: The most worrying failure was that simulations showed increases in droughted area over the last century in all regions, while the observed trends in drought decreased in five of the seven regions identified in the CSIRO/Bureau of Meteorology report.

Further, Fig 10 performs a crude validation by showing that the variability of low rainfall lies within the range simulated by the multi-model ensemble (MME). The rationale is that because the observed temperature and rainfall are random instantiations of highly chaotic trajectories, the observations are not comparable with any specific model simulation. There are a number of problems with this view:

  1. The selection of the 13 climate models is ad hoc, and hence there is no assurance the MME properly samples the relevant state space. As a result, MMEs are sometimes referred to pejoratively as “ensembles of opportunity” [PD08].
  2. Even if the MME can be regarded as ‘skillful’ by virtue of containing the observations, this test does not demonstrate skill of the models at forecasting trends. For that, one would need to demonstrate the models can match the trends in the observations.
  3. If validation only requires that the observations stay within the full range of all individual model simulations, where the models are of unknown accuracy, then skillful models are indistinguishable from unskillful ones.
  4. As the correlation of trends in CO2 and temperature over the last 50 years is widely regarded as evidence of warming due to CO2, it is inconsistent to claim that a difference in the trends of warming and drought over the same time scale is inconsequential.
  5. Fig 2 suggests that the range predicted by the models and the range of observed drought frequency may in fact have diverged significantly.

Some of these issues are highly technical, but require closer evaluation to see what is actually being validated in an MME. If observations such as rainfall must only lie within the range simulated by the models, all that is being tested is the ability of the models to simulate the range (or variance) of the observations. Therefore, one cannot presume such models can also successfully simulate other features, such as the mean value of the observations, the change in the mean value, the trend of the observations, and so on.

It is crucial that if the intended ‘fitness for use’ of a model is to forecast trends, then validation must consist of demonstrated skill at modelling trends in historic data. This is the conventional view, and the view expressed in both DS08 and KB09: that evidence is necessary to support claims. One should also remark on the wisdom of the old saw that extraordinary claims require extraordinary evidence. The DECR made an extraordinary claim about the change in the trend of the observations:

EC declarations would likely be triggered about twice as often and over twice the area in all regions

Not even ordinary evidence of skill at the models’ intended use has been demonstrated. Another way to say this is that a validation consisting of the MME enclosing the range of observations is a very weak test, so weak that very little can be reliably inferred from it.

Tests of Individual Models

KB09 are concerned with the force of arguments in DS08, especially the use of the word ‘significant’ where residuals may not have been normal. In retrospect, I would have qualified the word more. It is not clear to what extent lack of normality of residuals undermines the results, and departure from normality is quite common. Unfortunately, normality of residuals may be difficult to achieve with this type of data without using much more sophisticated approaches.

KB09 outlined a preferred approach but performed no analysis:

9. A Possible Alternative to OLS Regression. It is at least possible that forecasting using simple ARIMA modelling [3], [4], might prove to be just as accurate and far easier to justify than OLS regression.
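A hedged sketch of this suggestion, assuming a yearly observed series x; the (1,0,0) order is an illustrative choice, not KB09’s specification:

    fit <- arima(x, order = c(1, 0, 0))   # a simple ARIMA model of drought area
    predict(fit, n.ahead = 10)            # forecasts with standard errors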

The DECR rainfall analysis uses a peculiar metric: the percentage of area with rainfall below the 5th percentile. In formal terms, this appears to be a ’bounded extreme value, peaks over threshold’ statistic. The distribution resembles a Pareto (power) law, but due to the boundedness where the predicted extent of drought approaches 100%, the distribution becomes more like a beta distribution.

KB09 believe a statistic such as average annual rainfall may preserve more information. In this case, the residuals of the standard tests in DS08 might also improve. By way of explanation for the use of standard tests, there are ‘hard yards’ in developing a formal statistical model such as KB09 propose on such an idiosyncratic statistic. The pragmatic approach of DS08 was to use a number of statistics, breaking down the relevant elements into the following questions (a sketch of these statistics in R follows the list):

  1. How much of the variation in the observations is explained by the models? – r2 correlation.
  2. Does the trend in drought severity and frequency match the models? – slope of linear regression.
  3. Do the models agree with the historic severity of droughts? – Nash-Sutcliffe coefficient.
  4. Do the models agree with the historic frequency of droughts? – return period.
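As promised above, a hedged sketch of these four checks, assuming vectors yr (year), obs and sim (observed and simulated percentage area in drought); the names and the drought threshold are illustrative:

    r2  <- cor(obs, sim)^2                                     # 1. variance explained
    trends <- c(coef(lm(obs ~ yr))[2], coef(lm(sim ~ yr))[2])  # 2. trend slopes to compare
    nse <- 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)   # 3. Nash-Sutcliffe coefficient
    rp  <- function(a, thresh = 5) length(a) / sum(a > thresh) # 4. crude return period (years)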

Below are more specific notes on each of their concerns, largely framed in the form of an if-then-maybe hypothetical:

If certain changes were made to the DECR analysis, then maybe the results might not have been so bad.

DS08 assessed the DECR as received, while KB09 is more constructive and speculative. This raises a larger question of how we go about assessing an idiosyncratic model, a topic expanded on in the conclusions.

Trends

Claims of significance for idiosyncratic data with non-normal distributions and autocorrelation do need to be treated with caution. KB09 take umbrage at a linear fit to the whole time period from 1900 to 2007, and argue that if a shorter time frame such as the period 1950-2007 were examined, then maybe the results might improve. One might also argue this approach is arbitrary and informed by prior examination, or ‘data snooping’.

In defense of using the full period, CO2 and temperatures are both generally increasing from 1900 to 2007. Hence, any CO2- or temperature-related correlations with drought should be discovered. Nevertheless, Fig 1 and Tables 1 and 2 show consistent results: the inverse relationship between trends in the models and observations exists at different time intervals.

Regarding the comment in KB09 that the p values seem too low: while standard deviations (s.d.) were quoted in Table 1, the standard error of the mean (s.e.) was used to calculate the p values, as stated in the caption “Table 1: t-test of difference in mean of predicted trends to the observed mean of droughted area”. This follows the practice of DCPS07, and interested readers can refer to DD07 and associated links for discussion.

R2

Consistently small values in the r2 columns of Tables 2-8 indicate a lack of variance explained by the models. Here KB09 state:

If the implication is that the GCM-based projections do not reflect year to year changes in the drought affected percentages of the seven Regions, we do not regard this as a serious failure. It is not what the GCMs were constructed to do. They were meant to indicate long term trends.

This statement seems informed by a view prevalent in climate science that the r2 statistic only captures year-to-year variance and hence is invalid at climatic scales. However, this view is misleading. The r2 statistic will robustly quantify variation at a range of scales, including short and long term trends. r2 was used here more in the capacity of a robust detector of possible skill. As we see, only 1% of all variation (both short and long term) is explained by the models.

Frequency

The average time between droughts in each region differed between observations and models. KB09 suggest that a “fallacy of composition” effect makes it more than possible that the two calculations of return period could be widely different for reasons other than lack of model skill. This was confirmed by Andrew Ash:

AA: The observed data have the shortest return period as they have the finest spatial resolution and the model based return regions have increasingly larger mean return periods, inversely related to the spatial resolution at which they are reported.

Nevertheless, the labels on the data sets indicate the supplied data for models and observations represent comparable quantities at the same regional scale: percentage area with exceptionally low rainfall, i.e. below the 5th percentile. To compare on the same grid-cell basis, we would have needed access to both observed and projected rainfall data within 25km grid cells, which we were not supplied with. Even so, this could be regarded as an ‘if-then-maybe’ objection: if the analysis were conducted on the same scale, then maybe skill would be shown in drought frequency. Then again, maybe not.

Severity

Consistently negative values of the Nash-Sutcliffe coefficient in Tables 2-8 imply that “If averaged over time, each of the 13 GCMs’ sets of projections lies further away from the corresponding set of observed values than the simple mean of the observed values do.” KB09 suggest that if the analysis were conducted for another period where the net change in rainfall was not constant, then perhaps the result would not be so bad.

The Nash-Sutcliffe coefficient is widely used in hydrological modelling for assessing the quality of model outputs against observations. As far as I know, it should not be affected by the start and end points of the series, being essentially a sum of squares on the difference between observed and projected values at each point. Otherwise, my comments on the choice of period of analysis also apply.

Conclusion

KB09’s main concerns may be summarized as: (1) some tests do not appear consistent with their assumptions, and (2) DS08 did not eliminate all possible explanations for the poor results, attributing them entirely to lack of model skill. New tests suggested by KB09 show the strong and significant departure of model projections from the observed pattern of historic droughts, with a strong bias in favor of increased and increasing drought in Australia with increasing levels of CO2. These additional analyses agree with the findings of DS08, demonstrating the robustness of those findings. Thus it appears the claim of no credible basis for increasing droughts is not affected, and is actually vindicated, by KB09’s report.

The KB09 recommendations regarding improved statistical models and regionalization were alluded to in the discussion of DS08:

DS: Recasting the drought modelling problem into known statistical methods might salvage some data from the DEC report. Aggregating the percentage area under drought to the whole of Australia might reduce the boundedness of the distribution, and might also improve the efficiency of the models.

While drought-biased climate simulations play well during a severe drought in the political power-bases of the country, the practice of uncritical acceptance of unvalidated or invalid models must be strongly discouraged in evidence-based science policy. Finally, to quote Luboš Motl [LM08]:

And perhaps, most people will prefer to say “I don’t know” about questions that they can’t answer, instead of emitting “courageous” but random and rationally unsubstantiated guesses.

Further Work

One avenue for further work is the development of an ARIMA or other statistical framework for areas of exceptionally low rainfall as suggested by KB09.

The preliminary split-sample analysis described at NM08 could be developed. These results suggest GCMs cannot be ‘selected’ on the basis of their historic fit to drought at regional scales. In most areas the models that do well in one 50-year period do poorly in another, and vice versa, further indicating ‘failure’ in external validation. The low value of GCMs for regional effects forecasting is not fully acknowledged by their promoters.
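A minimal sketch of such a split-sample test, assuming a data frame d with columns year, obs and sim; the names are illustrative, not from NM08:

    early <- d$year <= 1950
    r2_cal <- cor(d$obs[early],  d$sim[early])^2   # calibration-period skill
    r2_val <- cor(d$obs[!early], d$sim[!early])^2  # external validation skill
    # a model 'selected' on r2_cal should retain skill in r2_val;
    # the NM08 results suggest it generally does not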

Another avenue of inquiry is robust statistics for assessing confidence in idiosyncratic models, where the developers performed no detailed statistical modelling. As such studies have no assumptions that conform to more standard approaches, most standard tests are going to be formally invalid. These concerns argue for more agreed-upon metrics of performance, such as those proposed for forecasting [AG09].

References

[AG09] Armstrong, J.S. and Green, K.C., “Analysis of the U.S. Environmental Protection Agency’s Advanced Notice of Proposed Rulemaking for Greenhouse Gases”, a statement prepared for US Senator Inhofe analyzing the US EPA’s proposed policies for greenhouse gases. http://theclimatebet.com

[CL05] Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32(23), L23402, doi:10.1029/2005GL024476.

[DCPS07] Douglass, D.H., J.R. Christy, B.D. Pearson and S.F. Singer (2007), “A comparison of tropical temperature trends with model predictions” (PDF), International Journal of Climatology, doi:10.1002/joc.1651. http://icecap.us/images/uploads/DOUGLASPAPER.pdf. Retrieved 12 May 2008.

[DD07] David Douglass’ Comments: http://www.climateaudit.org/?p=3058

[DECR] Drought Exceptional Circumstances Report (2008), Hennessy K., R. Fawcett, D. Kirono, F. Mpelasoka, D. Jones, J. Batholsa, P. Whetton, M. Stafford Smith, M. Howden, C. Mitchell, and N. Plummer. 2008. An assessment of the impact of climate change on the nature and frequency of exceptional climatic events. Technical report, CSIRO and the Australian Bureau of Meteorology for the Australian Bureau of Rural Sciences, 33pp. http://www.daff.gov.au/__data/assets/pdf_file/0007/721285/csiro-bom-report-future-droughts.pdf

[DS08] David R.B. Stockwell, Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report – landshape.org/stats/wp-content/uploads/2008/08/article.pdf

[KB09] K.R.W. Brewer and A.N. Other, (2009) Some comments on the Drought Exceptional Circumstances Report (DECR) and on Dr David Stockwell’s critique of it.

[KM07] Koutsoyiannis, D., and A. Montanari, Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resources Research, 43 (5), W05429.1–9, 2007.

[LM08] Lubos Motl, The Reference Frame – http://motls.blogspot.com/

[NM08] Niche Modelling – http://landshape.org/enm/temperature-index-drought/

[PD08] T.N. Palmer, F.J. Doblas-Reyes, A. Weisheimer, G.J. Shutts, J. Berner, J.M. Murphy, Towards the Probabilistic Earth-System Model, arXiv:0812.1074v2 [physics.ao-ph]

[R09] R script for analysis

[RBHS06] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591.


Letters

Below are Peter Gallagher’s thoughts on the reviews of the submission to AMM. Contrast this with ac’s impression that “To my reading the reviewer’s criticisms are reasonable and pertinent.” It goes to show that reasonable and unrelated people can see things in different ways. Where is the resolvability of fact in the review process? Consensus?

Hi David,

Thanks for sending me these papers.

Reading the reviews, it seems to me that your submission has been poorly understood by the reviewers (or not properly characterized by the editor). The reviewers have treated it as though it were an academic journal review rather than as a ‘note’ or ‘commentary’.

Also, consciously or not, they have ignored a crucial dimension of the DECR report: its role in the policy dialog (or ‘fit up’) on Exceptional Circumstances drought relief.

This is a categorical mistake: like reading the Christian gospels as a biography.

In the full context of the DECR, the narrowness of your focus is entirely justified and the nature of your references and footnotes irrelevant. What matters is the accuracy of your criticisms of the conclusions that have been picked up and magnified by the Prime Minister.

Your paper seems to me to demonstrate your points:

(a) The models on which the DECR relies have no ‘skill’ in reproducing rainfall records and therefore are completely inadequate as the basis for projections.

(b) The melding of observations and projections in the DECR report has been obscured (a ‘Harry-Gill’ error?)

(c) The reliance only on the 10th percentile scenario (and its re-labeling as the ‘high’ scenario) in the summary seems actually misleading in view of the mean results across all scenarios.

(I’m amused by the self-satisfaction in one review: We don’t need external reviews because our own are top notch in our view).

Kind regards,

Peter Gallagher

Examples of Research Bias

The Financial Times recently reported on the Australian bushfires, linking them to increases in greenhouse gases. We take another look at the data in the DECR and find Australia is getting wetter, not drier:

Scientists say Australia, with its harsh environment, is set to be one of the nations most affected by climate change.

“Continued increases in greenhouse gases will lead to further warming and drier conditions in southern Australia, so the [fire] risks are likely to slightly worsen,” Kevin Hennessy at the Commonwealth Scientific and Industrial Research Centre told Reuters.

Bob Brown, the senator who leads the Greens party, said the bushfires provided stark evidence of what climate change could mean.

“Global warming is predicted to make this sort of event happen 25 per cent, 50 per cent more,” he said. “It’s a sobering reminder of the need for this nation and the whole world to act and put at a priority our need to tackle climate change.”

The Drought Exceptional Circumstances Report, which I have been reviewing in this series, promoted these conclusions. Let’s look at another analysis, this time using simple quantile analysis of the data in Table 3. This table contains the average percentage of area having exceptionally low rainfall years for selected 40-year periods and the most recent decade (1998-2007).


Region     1900-1939  1910-1949  1920-1959  1930-1969  1940-1979  1950-1989  1960-1999  1968-2007  1998-2007
Qld              9.5        6.5        5.5        4.1        3.3        3.1        2.7        2.6        4.7
NSW              5.7        6.9        5.7        6.2        5.8        4.3        4.0        3.8        6.4
Vic&Tas          5.3        6.0        4.2        6.1        5.1        5.0        5.3        5.2        8.5
SW               5.2        7.1        7.2        6.9        7.9        5.9        4.9        4.4        3.4
NW*              6.3        5.3        6.5        7.5        6.5        6.1        4.7        3.5        3.3
MDB              6.1        7.2        5.8        6.4        5.7        4.1        3.5        3.5        6.9
SWWA             2.5        4.7        4.1        6.5        8.3        6.1        6.3        8.5        8.9
Australia        6.4        6.4        6.6        6.4        6.3        5.3        4.6        3.5        3.1

Using the function ‘quantile’ in R, we output the percentage areas at each probability over the 40-year periods. Then we look up the probability for each region using the most recent 40-year period, 1968-2007.
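
For reproducibility, here is a minimal sketch of that computation, re-keying the 40-year columns of Table 3 into R. Pooling all regions’ values into a single distribution is my reading of the method, so treat the sketch as illustrative:

decr <- rbind(
  Qld       = c(9.5, 6.5, 5.5, 4.1, 3.3, 3.1, 2.7, 2.6),
  NSW       = c(5.7, 6.9, 5.7, 6.2, 5.8, 4.3, 4.0, 3.8),
  "Vic&Tas" = c(5.3, 6.0, 4.2, 6.1, 5.1, 5.0, 5.3, 5.2),
  SW        = c(5.2, 7.1, 7.2, 6.9, 7.9, 5.9, 4.9, 4.4),
  "NW*"     = c(6.3, 5.3, 6.5, 7.5, 6.5, 6.1, 4.7, 3.5),
  MDB       = c(6.1, 7.2, 5.8, 6.4, 5.7, 4.1, 3.5, 3.5),
  SWWA      = c(2.5, 4.7, 4.1, 6.5, 8.3, 6.1, 6.3, 8.5),
  Australia = c(6.4, 6.4, 6.6, 6.4, 6.3, 5.3, 4.6, 3.5)
)
# Pooled quantiles of the 40-year drought areas
print(quantile(decr, probs = c(0.05, 0.10, 0.50, 0.90, 0.95)))
recent <- decr[, 8]                                 # the 1968-2007 column
print(sapply(recent, function(x) mean(decr <= x)))  # empirical percentile of each region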


Quantiles
   5%    10%    50%    90%    95%
 3.25   4.05   5.85   7.15   7.60


Region      1968-2007 area (%)   Probability drought has increased
Qld                2.6           <5%
NSW                3.8           <10%
Vic&Tas            5.2           NS
SW                 4.4           NS
NW*                3.5           <10%
MDB                3.5           <10%
SWWA               8.5           >95%
Australia          3.5           <10%

The results show that over the last 40 years the regions Qld, NSW, NW, and MDB (and Australia as a whole) have had significantly less area under drought. Only in SWWA has the drought area increased significantly, while Vic&Tas (the region of the recent bushfires) and SW show no significant change.

The ‘inconvenient’ results were reported in the DECR text as follows:

Observed trends in exceptionally low rainfall years are highly dependent on the period of analysis due to large variability between decades.

Despite these significant results from the DECR’s own data, showing Australia getting wetter, not drier, CSIRO scientists continue to report in the media that Australia will get drier.

It only takes two thoughts to realize that wetter conditions can pose greater fire risks, due to the greater production of fuel in the wet season and more dangerous conditions when it dries out. Drier conditions lead to a more open grassland environment in Australia, much like the African savannah, with cooler grassfires rather than the hot forest fires suffered recently in Victoria. You simply cannot look at environmental factors in isolation.

But don’t tell CSIRO, or the next thing we will hear is that greenhouse gases are causing more fires by making it wetter.

Do increases in greenhouse gases cause droughts in Australia?

Peter Gallagher reports that even while the coals are still warm, some are already blaming the Victorian fires on increases in greenhouse gases.

The following summarizes indications of a decline in droughts in Australia from 1900 to the present, compiled from data provided with the Drought Exceptional Circumstances Report. Some of this information was provided in the submission to the Australian Meteorological Magazine (more about this tomorrow). Drought is defined as the percentage of area with rainfall lower than the 5th percentile. The areas are averaged over seven Australian regions.
Continue reading Do increases in greenhouse gases cause droughts in Australia?

Climate Flagship Response

A number of familiar tests, often used to evaluate the performance of models, were reported here: R2 correlation, Nash-Sutcliffe efficiency, and similarity of trends and return periods. There was not much evidence of skill in the DECR models compared with observations on any of these. I also said what a better treatment might entail, but left that for another time:

The percentage of droughted area appears to be a ’bounded extreme value, peaks over threshold’ or bounded POT statistic. The distribution resembles a Pareto (power) law but, due to the boundedness where the predicted extent of drought approaches 100%, becomes more like a beta distribution (shown for SW-WA on Fig 2). Recasting the drought modeling problem into known statistical methods might salvage some data from the DEC report. Aggregating the percentage area under drought to the whole of Australia might reduce the boundedness of the distribution, and might also improve the efficiency of the models.

Aware that the tests I applied were not the last word, given the idiosyncratic nature of the data, the conclusion in the summary was carefully nuanced: as there was no demonstration of skill at modeling drought (in either the DECR or my tests), and as validation of models is necessary for credibility, there is no credible basis:

Therefore there is no credible basis for the claims of increasing frequency of Exceptional Circumstances declarations made in the report.

What is needed to provide credibility is demonstrated evidence of model skill.
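
For concreteness, the two scalar tests are simple to state. Here is a minimal sketch, on simulated stand-in series rather than the DECR data:

# Skill metrics used in the review: squared correlation (r2) and
# Nash-Sutcliffe efficiency (1 = perfect; <= 0 = no better than the observed mean)
r2  <- function(obs, sim) cor(obs, sim)^2
nse <- function(obs, sim) 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)

obs <- runif(40, 0, 15)  # stand-in observed drought areas (%)
sim <- runif(40, 0, 15)  # stand-in modelled drought areas (%)
print(c(r2 = r2(obs, sim), nse = nse(obs, sim)))  # near zero or negative: no skill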

Andrew Ash of the CSIRO Climate Flagship sent a response on 18 Dec 2008. This fulfilled an undertaking he gave on 16 Sep 2008 to provide “a formal response to your review of the Drought Exceptional Circumstances report (dated 3 Sep 2008)”, after my many requests for details of the validation of skill at modelling droughts.

I must say I was very pleased to see there was no confidentiality text at the end of the email; I feel much more inclined to be friendly without it. I can understand confidentiality inside organizations, but sending stock riders to outsiders is picayune. The sender should be prepared to stand by what they say in a public forum and not hide behind a legal disclaimer. Good on him for that.

The gist of the email is that he felt less compelled to respond because of the review I had submitted to the Australian Meteorological Magazine on 23 Sep 2008. As it was, I was still waiting for the AMM reviews on 18 December when I received this response. Kevin Hennessy relayed some advice from Dr Bill Venables, a prominent CSIRO statistician, but the following didn’t add anything:

However, we have looked at the 3 Sep 2008 version of your review. The four climate model validation tests selected in your analysis are inappropriate and your conclusions are flawed.

* The trend test is invalidly applied because (i) there is a requirement that the trends are linear and (ii) the t-test assumes the residuals are normally distributed. We undertook a more appropriate statistical test. Across 13 models and seven regions, there are no significant differences (at the 5% level) between the observed and simulated trends in exceptionally low rainfall, except for four models in Queensland and one model in NW Australia.

It’s well known that different tests can give different results, and that some tests may be better or more reliable than others. Without more details of their ‘more appropriate’ test it is hard to say anything, except that a lack of significance does not demonstrate skill. The variability of the climate model outputs could be so high that they allow ‘anything to be possible’, as often seems to be the case.
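
The point that different tests can disagree is easy to illustrate. A sketch (their ‘more appropriate statistical test’ was never specified, so this is not it):

# An OLS slope t-test (assumes linearity and normal residuals) versus a
# distribution-free Kendall trend test, on the same heavy-tailed series
x <- 1:50
y <- 0.05 * x + rt(50, df = 2)                     # trend plus heavy-tailed noise
print(summary(lm(y ~ x))$coefficients[2, 4])       # p-value from the t-test
print(cor.test(x, y, method = "kendall")$p.value)  # nonparametric alternative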

* The correlation and efficiency tests are based on an assumption that the climate models are being used to hindcast historical weather. This assumption is incorrect. As a result, the tests selected are inapplicable to the problem being addressed. This in turn leads to false conclusions.

This would be true if these were the only tests, and if correlation and efficiency depended entirely on short-term fluctuations. They do not: they capture skill at modeling both short- AND long-term fluctuations. This is also why I placed more emphasis on skill at modelling trends over climatically relevant time scales. He is also not specific about which conclusions. The conclusion of ‘no credible basis’ is not falsified by a lack of evidence.

It should also be noted that the DECR itself considered return periods (Tables 8 and 10), so any criticism of return periods applies equally to the DECR.

* The return period test is based on your own definition of ‘regional return period’, which is different from the definition used in the DEC report. Nevertheless, your analysis does highlight the importance of data being collected or produced at different resolutions and the effect this has on interpretations of the frequency of drought. The observed data have the shortest return period as they have the finest spatial resolution and the model based return regions have increasingly larger mean return periods, inversely related to the spatial resolution at which they are reported. We were well aware of this issue prior to the commencement of the study and spent a considerable amount of time designing an analysis that would be robust to take this effect into account.

I appreciate the explanation for the lack of skill at modelling return period, which measures drought frequency, as opposed to drought intensity as measured by efficiency. Nevertheless, the lack of demonstrated skill at modelling drought frequency stands.
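
For reference, one simple empirical definition of a return period (not necessarily either my ‘regional return period’ or the DECR’s) is the record length divided by the count of exceptional years:

# With 'exceptional' defined as below the 5th percentile, roughly 1 year in
# 20 qualifies by construction
return_period <- function(x, p = 0.05) length(x) / sum(x < quantile(x, p))
print(return_period(rnorm(108)))  # about 20 years for a 108-year record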

Note that they continue to be unresponsive to requests for evidence that the climate models have skill at modelling droughts. Where we stand at the moment is that, irrespective of the reliability of my tests, there is still no evidence of skill to be seen, at short or long time scales, or at drought intensity or frequency, and so my claim of “no credible basis for the claims of increasing frequency of Exceptional Circumstances declarations” still stands. The Climate Flagship has steadfastly abjured presenting validation evidence. While the concerns expressed have relevance to the quality of the tests (which are widely used, but problematic given the strange data), they were not precise about which conclusions or claims they were trying to rebut.

Further Work

I came across a recent drought study that also finds no statistically significant difference between modelled and observed drought. It was actually Ref 27 of the DECR: Sheffield, J. & Wood, E. F. Projected changes in drought occurrence under future global warming from multi-model, multi-scenario, IPCC AR4 simulations. Climate Dynamics 31, 79-105 (2008).

Although the predicted future changes in drought occurrence are essentially monotonic increasing globally and in many regions, they are generally not statistically different from contemporary climate (as estimated from the 1961-1990 period of the 20C3M simulations) or natural variability (as estimated from the PICNTRL simulations) for multiple decades, in contrast to primary climate variables, such as global mean surface air temperature and precipitation.

Below is a plot of the observations of the drought statistic, the percentage of area experiencing exceptionally low rainfall (below the 5th percentile, the level leading to an Exceptional Circumstances drought declaration). You can see how ‘peaky’ it is, even when the average is taken (black).

[Figure: observed percentage area of exceptionally low rainfall, by region, with the average in black]
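
For reference, here is a sketch of how a statistic of this kind can be computed from gridded annual rainfall. This is my own construction, not the DECR’s gridding procedure:

# Percentage of grid cells below their own 5th-percentile rainfall, per year
pct_exceptional <- function(rain) {                 # rain: cells x years matrix
  thresh <- apply(rain, 1, quantile, probs = 0.05)  # each cell's 5th percentile
  100 * colMeans(rain < thresh)                     # % of cells in drought
}
rain <- matrix(rgamma(50 * 108, shape = 4, scale = 100), nrow = 50)
plot(pct_exceptional(rain), type = "l")             # 'peaky', like the observations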

In some ways it might have been better to just knuckle down and develop a POT model right from the start, as it might have allowed me to give a less nuanced response. I have been doing that, but had to upgrade R first; the recompilation took all night and lost my graphical interface to R. Even then, the package VGAM doesn’t compile for some reason, so I have to look for other packages.
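
In the meantime, here is a sketch of where such a POT model might start, using the evd package as a stand-in for VGAM, on simulated data; the boundedness near 100% would still need handling:

# Generalized Pareto fit to peaks over a high threshold
library(evd)                                          # install.packages("evd")
area <- pmin(100, rgamma(108, shape = 2, scale = 4))  # stand-in drought areas (%)
u <- quantile(area, 0.90)                             # threshold choice matters
fit <- fpot(area, threshold = u)                      # GPD fit to exceedances
print(fit$estimate)                                   # scale and shape parameters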

DECR Review Series

Posts over the next few weeks will give updates on the status of the reviews that I and others have initiated of the Drought Exceptional Circumstances Report (DECR) by CSIRO and the Bureau of Meteorology (BoM).

It is prudent to subject your views to the rigors of peer review; seeking out feedback is the way to knowledge. So I thought, why not share the opportunity with others, so they can avail themselves of the wisdom of leading experts, learn, and formulate their own opinions, not only about the DECR itself, but about the support from climate models for increasing drought due to anthropogenic global warming (AGW), and the standard of scholarship in climate science.

This series is also a case study in the scientific review process, examining such issues as the degree to which peers in climate science provide an objective assessment of submissions. I want to stress that, irrespective of differences of opinion, I am deeply grateful for feedback from experts who have spent many years studying the subject matter. It is for that reason that I always analyze the comments of the expert reviewers very carefully for accuracy, and take on board every worthwhile recommendation.

Below, for easy reference, are some links to information, data and opinion to date:

The BoM website listing of the report and a selection of the intermediate data used in the analyses.

The Drought Exceptional Circumstances report as downloaded.

DECR: Under the high scenario, EC declarations would likely be triggered about twice as often and over twice the area in all regions.

The Press release from the client organization, DAFF.

Australia could experience drought twice as often and the events will be twice as severe within 20 to 30 years, according to a new Bureau of Meteorology and CSIRO report.

The article.pdf, a first examination of the data by D Stockwell.

Therefore there is no credible basis for the claims of increasing frequency of Exceptional Circumstances declarations made in the report.

An online opinion piece by Ian Castles.

The recent CSIRO/BOM ‘Drought Exceptional Circumstances Report’ was accepted by government with no external scrutiny: public policy should be made based on this?

Another online opinion piece by Ian Castles.

On July 6, 2008 Prime Minister Kevin Rudd told viewers of the ABC Insiders TV program of the “very disturbing” findings of a study by CSIRO and the Bureau of Meteorology, including that “when it comes to exceptional or extreme drought, exceptionally high temperatures, the historical assumption that this occurred once every 20 years has now been revised down to between every one and two years.”

Risky Statistical Prediction Methods

A couple of days ago Luke, a frequent commenter, sent in a number of links to a new Australian Government drought initiative, which argues for a major change based on incentives rather than emergency aid. Minister Tony Burke has appointed an Expert Panel to examine the social impacts of drought as part of the national review of exceptional circumstances (EC) funding. In a recent speech, Peter Kenny, chair of that panel, said of the Drought Exceptional Circumstances Report (DECR):

The Bureau of Meteorology and the CSIRO predicted there was an increased risk of hotter and dryer seasons over the next 20 to 30 years, compared to the last hundred years, across many parts of Australia.

The same sort of restrained language is shown in the various news articles linked below. This is a far cry from the lurid claims of imminent drought apocalypse encouraged by the unvalidated (and historically completely inaccurate) climate model simulations in the DECR. After wringing the data out of them with the support of numerous blogs, I wrote a review showing that the frequency and severity of drought had actually declined over the whole of Australia, while the climate models show an increasing trend. This simple observation was not reported in the DECR. Contrary to the message in the report, the lead author later said in an interview that “a long term trend its not very clear in terms of exceptional low rainfall years” [sic]. CSIRO have been ‘statuesque’ in defense of their report: while the director promised a reply to my critique on Sept 16, I have yet to receive anything. In legal circles, no reply within 30 days can be taken as a tacit admission that the accusations are true.

I have no issue at all with the way Peter Kenny seems to be reporting the CSIRO findings, and the latest report validates what I had always suspected about the political motivations for the dodgy statistical prediction methods in the DECR. In my post Scientists Biasing Research I proposed:

Could it be that climate scientists are biasing the detrimental effects of manmade global climate change to suit the review of EC funding by the Rudd government?

You have to wonder why people are listening to ‘code red’ climate models when we know they are inherently flawed and useless at prediction. This speaks to the credibility of the media and the scientists involved. The skeptical bloggers seem to have it right, while the mainstream experts and the media have got this totally wrong.

The similarities with the sub-prime financial crisis are striking. We don’t have to look far to find numerous financial bloggers warning about the dangers of unrestrained credit expansion while mainstream economists and fund managers were completely wrong-footed. They were probably using a lot of risky statistical prediction methods too. A recent paper, Forecasting the Depression: Harvard Versus Yale, tried a range of modern models at predicting the Great Depression and found that both the Harvard and Yale forecasters were systematically too optimistic. It seems that statistical models have not improved forecasting skill.

In the latest fiasco, the Federal Government initiated a bank deposit guarantee plan, and by Friday of last week 13 of the top 20 funds in Australia had frozen redemptions to stop money flowing out of their businesses. I tend to agree with the farewell letter of resignation from hedge fund manager Andrew Lahde: some people are truly not worthy of the education they received.

My opinion, based on many years in modeling, is that mainstream scientists exaggerate the predicted effects of global warming on purpose. They have vested interests: bad news sells science, and governments get to blame their present land management mistakes on something in the future outside their control. This is why validation is so important, and why statistical prediction methods need to be way down the feeding chain of sources of evidence.

Who does this remind you of?

Below are a set of links to the recent policy initiative on drought management.

Linear Regression R Squared

One of the tests of climate models predicting drought in my review of the Drought Exceptional Circumstances Report was the correlation of the predicted area under drought with the observed area under drought. Lazar criticized my inclusion of the R-squared (r2) coefficient, an issue I didn’t follow up at the time.

… correlating model predictions for individual years of exceptional rainfall with observed years of exceptional rainfall! This ignores noise (internal variability in the climate system and GCM climate simulations) and that the CSIRO report predicted frequency. Steve McIntyre and the auditors repeat this mistake here, with the obligatory snark from Steve

The objection is that it is unreasonable to expect climate models to predict year-to-year variation in drought with this test. To set the record straight, I have run a small test demonstrating conclusively that r2 does detect trends in the frequency of intermittent events (as opposed to trends in actual values), and consequently the test does not rely only on year-to-year variation.

Below is a short R script where I represent a trend of increasing drought frequency with two independent binary (0,1) sequences whose event probability increases over the series. These are plotted below. The results of fitting a linear regression to the sequences follow.


# Two independent binary sequences with increasing event frequency.
# (The assignment and comparison operators were evidently stripped from the
# original posting; this reconstruction is consistent with the output below.)
p <- seq(0, 1, length.out = 100)  # event probability rising from 0 to 1
o <- runif(100) < p               # 'observed' drought years (logical)
e <- as.numeric(runif(100) < p)   # 'modelled' drought years (0,1)
l <- lm(e ~ o)
plot(as.numeric(o), col = "blue", type = "l")
lines(e, col = "red")
print(summary(l))


> source("rtest.R")

Call:
lm(formula = e ~ o)

Residuals:
Min 1Q Median 3Q Max
-0.6739 -0.3148 -0.3148 0.3261 0.6852

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.31481 0.06412 4.910 3.64e-06 ***
oTRUE 0.35910 0.09454 3.798 0.000253 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4712 on 98 degrees of freedom
Multiple R-Squared: 0.1283, Adjusted R-squared: 0.1194
F-statistic: 14.43 on 1 and 98 DF, p-value: 0.0002527

The run shown produced a non-zero R-squared of 0.128 and a significant slope (***), demonstrating that a shared trend in the frequency of events does produce a positive r2. In comparison, the r2 between the modeled and observed drought data was essentially zero, indicating no detectable common trend in drought frequency using this method.
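
Since a single run could be a fluke, a quick extension (my addition, not part of the original test) repeats the simulation to estimate the expected r2 under a shared frequency trend:

# Distribution of r2 over many repeats of the same simulation
r2s <- replicate(1000, {
  p <- seq(0, 1, length.out = 100)
  o <- runif(100) < p
  e <- as.numeric(runif(100) < p)
  summary(lm(e ~ o))$r.squared
})
print(mean(r2s))  # roughly 0.11: clearly non-zero under a shared trend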

CSIRO Progress in Science Law

Geoff Sherrington has been drawing attention to some changes in the legal language attached to various emails and reports associated with the CSIRO and the Climate Adaptation Flagship (CAF). Since I have been posting up emails in an attempt to hold people accountable, I have been looking into the legality. In the case of reports, to what degree are the authors accountable for the accuracy of the contents? (Disclaimer: This post makes no representations or warranties regarding merchantability, fitness for purpose or otherwise with respect to the following assessment.)

DISCLAIMER (Email from Director Dr Andrew Ash, Sept 16 2008)

The information contained in the above e-mail message or messages (which includes any attachments) is Confidential / Commercial-in-confidence and may be legally privileged. It is intended only for the use of the person or entity to which it is addressed. If you are not the addressee any form of disclosure, copying, modification, distribution or any action taken or omitted in reliance on the information is unauthorised. If you received this communication in error, please notify the sender immediately and delete it from your computer system network.

This communication is intended for discussion purposes only and does not constitute a commitment by CSIRO to any agreement, memorandum of understanding, obligation, course of action or any other undertaking. Any transaction will be subject to contract and such contract will require approval in accordance with the Science and Industry Research Act, 1949. CSIRO will not be legally bound until these approvals are obtained.

On first reading you would think this prohibited publishing the email on blogs. However, as shown by the conditional emphasized, most of the terms are aimed at a recipient other than the intended addressee. The prohibition on disclosure is therefore irrelevant to the intended recipient and doesn’t limit them from doing anything they like with it, unless the sender were to claim that the email was not intended for the recipient. The advice I have received is that this language would NOT prevent publication on a blog.

To correct this limitation, the CAF appears to have recently enlarged the scope of non-disclosure provisions in the following email, received by Geoff Sherrington.

DISCLAIMER: (14 Oct 2008, from James Davidson, Information Officer.)

This communication is for Discussion Purposes Only. It is not an agreement, memorandum of understanding, proposal, offer or the like, and is solely intended for informal discussion of ideas. For any agreement to be binding on CSIRO, it must be in writing, and executed on behalf of CSIRO by a person with proper authority, and in accordance with the Science and Industry Research Act 1949.

To the extent permitted by law, CSIRO does not represent, warrant and/or guarantee that the integrity of this communication has been maintained or that the communication is free from errors, virus, interception or interference.

The information contained in this email may be confidential or privileged. Any unauthorised use or disclosure is prohibited. If you have received this email in error, please accept my apologies and delete it immediately and please notify me.

Here we are not told that the message is “Confidential / Commercial-in-confidence”, but that it “may be confidential or privileged” and that “Any unauthorised use or disclosure is prohibited.”

First, what could constitute an “unauthorised use”? One would assume that the permitted uses are spelled out in the first paragraph: that is, use is authorized for “Discussion Purposes Only”. Unauthorised use would be use as an “agreement, memorandum of understanding, proposal, offer or the like”.

So it seems that what is primarily prohibited is using the email as a contract. This would be fair enough, as some people do send emails as contracts. Clearly, normal emails are not contractual commitments, so the clause doesn’t apply to posting on a blog. However, the blanket statement that any “disclosure is prohibited” would seem to contradict that, and is worrying.

Open to question is whether such a disclaimer would have any force without explicit agreement to terms and conditions by the addressee. Perhaps the next legal advance we may see from CSIRO CAF is a little button “I Agree” to click before we can open an email from them.

When we look at the issue of accountability for a report, the Drought Exceptional Circumstances report (DECR) shows another expansion of scope. While the disclaimer in the DECR covered CSIRO and BoM for “any liability for any opinion, advice and information” and for liability arising from the use of that information, it didn’t cover the publisher of the report, the client organization DAFF.

Disclaimer (DECR)
CSIRO and the Bureau of Meteorology (BoM) make no representations or warranties regarding merchantability, fitness for purpose or otherwise with respect to this assessment. Any person relying on the assessment does so entirely at his or her own risk. CSIRO and the Bureau of Meteorology and all persons associated with it exclude all liability (including liability for negligence) in relation to any opinion, advice or information contained in this assessment, including, without limitation, any liability which is consequential on the use of such opinion, advice or information to the full extent of the law, including, without limitation, consequences arising as a result of action or inaction taken by that person or any third parties pursuant to reliance on the assessment. Where liability cannot be lawfully excluded, liability is limited, at the election of CSIRO and the Bureau of Meteorology, to the re-supply of the assessment or payment of the cost of re-supply of the assessmentl [sic].

The next report from the CAF, on the effect of global warming on fisheries, covers the whole of the Australian Government. Whereas before there was no liability for “any opinion, advice and information”, this has been expanded to exclude liability for “the accuracy of or inferences from the material contained in this publication”. In other words, where before there was no liability for the information, now there is explicitly no liability for the accuracy of the information.

Important Notice – please read
This document is produced for general information only and does not represent a statement of the policy of the Australian Government. The Australian Government and all persons acting for the Government preparing this report accept no liability for the accuracy of or inferences from the material contained in this publication, or for any action as a result of any person’s or group’s interpretations, deductions, conclusions or actions in relying on this material.

It would seem they have tried to expressly remove liability for disseminating inaccurate information. Why? Various Acts forbid the dissemination of misleading and false information. The Trade Practices Act is the primary consumer protection legislation, and it states:

A person shall not, in trade or commerce, engage in conduct that is liable to mislead the public as to the nature, the manufacturing process, the characteristics, the suitability for their purpose or the quantity of any goods.

Goods could include reports, models, and possibly even the predictions of climate models, and so may be covered by the TPA. But could the new CAF disclaimer, which tries to remove liability for inaccurate statements, also immunize it against the TPA? I don’t think so.

Congratulations to the CAF for their advances in legal language, attempting to protect themselves both from being discussed on blogs and from the standard consumer protections that bind every other corporate body. Following their lead, I will be including the following stupid disclaimer on all my emails:

By sending an email to me, to any of my aliases or to any of my addresses you are agreeing that:
1. I am by definition, “the intended recipient”
2. All information in the email is mine to make such financial profit, political mileage, or a good joke with as I see fit. In particular, I may quote it on the internet.
3. I may take the contents as representing the views of your organization.
4. This overrides any disclaimer or statement of confidentiality that may be in your disclaimer.

Climate Adaptation Flagship Update

Below is the email received a month ago from Dr Andrew Ash, Director of the Climate Adaptation Flagship, promising a formal response to the issues raised about the Drought Exceptional Circumstances Report (DECR): namely, the absence of any apparent attempt to validate the climate models for drought in the report, or of any evidence of skill at modeling drought. No reply has been forthcoming to date.

from xxx@csiro.au
to xxx@gmail.com
date Tue, Sep 16, 2008 at 2:35 PM
subject RE: Drought Exceptional Circumstances Report
mailed-by csiro.au
signed-by csiro.au

Dear David,

Kevin is currently on leave (due back next week). When he returns we will provide a formal response to your review of the Drought Exceptional Circumstances Report that was forwarded to me (dated September 3).

Regards

Andrew

Andrew Ash
Director
CSIRO Climate Adaptation Flagship
306 Carmody Rd
St Lucia
Qld 4067
AUSTRALIA
Ph 61-7-32142346 Fax: 61-7-32142308

I must say that I’m surprised at the recklessness of Ash and Hennessy, both as individuals and as employees representing a presumably responsible organization, in making promises they don’t, won’t, or can’t keep.

In the News

Support was expressed for our efforts with the DECR from a reader of the Sydney Morning Herald below.


[Image: Letters to the Editor, Sydney Morning Herald, 4 Oct 2008]

Meanwhile, the Department of Climate Change is hiring in a big way! Need I say you could get a lot of economically productive research done in almost any hard science area with a unit this size.


[Image: Employment Section, The Australian, 11 Oct 2008]

Alan Sullivan diagnoses the financial turmoil in a memorable alliteration — Laughable Left. The status quo isn’t always right.

Fishery Predictions of Global Warming

“Climate change could devastate fishing industry: CSIRO” shouts the ABC news, as scientists predict the salmon, rock lobster and abalone industries, and the barramundi, prawn and mudcrab fisheries, will be affected by changing rainfall patterns. In a welcome trend, the fishing industry has questioned the climate findings in the CSIRO report.

Industry representatives see the report as contributing nothing new, and as self-serving for the global warming scientists:

“In fact, the report itself is not much more than a collection of observations around what prawns are sensitive to in environmental terms, and that’s salinity and temperature, and none of that’s new. That work’s being accumulated over [a] decade.”

Mr Makepeace says he is concerned scientists appear to be justifying their research instead of providing advice.

“It’s a real problem to be putting so much emphasis into managing climate change in fisheries when there is so little information on which to base those management responses, I’m not sure what responses we can actually make to these changes.”

Another criticized the alarmism:

David Carter from the Northern Prawn Fishing Industry Company says the response has been damaging.

“It just unsettles folks and the use of emotional language can leave the wrong impression,” he said.

Even though the report is another offering from the Climate Adaptation Flagship, it is actually much more up front about uncertainty than the Drought Exceptional Circumstances Report.

The preface by the Australian Government (huh, where is he, I want to talk to him) finds positive and negative impacts:

… little consolidated knowledge of the potential impacts of climate change. Both positive and negative impacts are expected, and impacts will vary according to changes in the regional environment: south-east fisheries are most likely to be affected by changes in water temperature, northern fisheries by changes in precipitation, and western fisheries by changes in the Leeuwin Current.

On the climate modeling, only one model is used, the CSIRO Mk 3.5, with the claim that there are only ‘subtle’ differences between the CSIRO models and other international models. They claim, without justification, that because of this general agreement, general trends can be used rather than the absolute magnitude of the predicted changes (p4). But this is not justified: as I have shown here, the models get even the trends in rainfall completely wrong.

But generally they qualify the uncertainty in the text fairly comprehensively, mentioning both sides of the uncertainty distribution, although they do cite a piece of flotsam called Rahmstorf et al. 2007.

There is considerable uncertainty regarding climate model predictions, in both time and space. Uncertainty results from model dynamics and resolution, and because the future is not completely known: future changes in greenhouse gases cannot be predicted. Over shorter time periods, climate variability dominates and predictions from models are more uncertain than for longer time scales. At regional scales (100s of kilometers), projections are also uncertain and model development to allow regional downscaling is required in the coming years. Despite this uncertainty, there is agreement between climate scientists about large scale climate features, and we can proceed with caution in exploring future impacts on fisheries and aquaculture. Observational data available for the period since 1990 raises concerns for the speed at which greenhouse gases are impacting the climate system. In particular, sea level may be responding more quickly to climate change than global climate models indicate (Rahmstorf et al. 2007). Therefore, future projections used in this review may be considered as conservative estimates of future climate, and both positive and negative impacts may be of greater magnitude.

I can find no match anywhere in the report for the word ‘devastate’ used by the ABC to describe the results. Unlike the Drought Exceptional Circumstances Report, which secreted the qualifications in a distant box and laced the text with juicy hyperbole, this one is more sanguine. I think the press, being well trained to respond to every climate report as a code red alert, is responsible in this case for exaggerating the conclusions out of all proportion.

Submission 1: Australian Meteorological Magazine (AMM)

The venue for more formal debate on controversial topics is the scientific journals. As part of my trek into the desert of drought predictions in Australia, I submitted a review of the Drought Exceptional Circumstances report (abstract below) to the Australian Meteorological Magazine two days ago. To date I have not received an acknowledgement of its receipt.

The reasons I selected the AMM: it publishes all its papers on the web, it has an emphasis on the meteorology of the Australian region and the southern hemisphere, and it would have a readership familiar with the DECR.

I am hoping at some point to engage climate scientists on the issues that have been raised about the interpretation of the drought data in the DECR. For example, Ferenc Miskolczi has very graciously engaged a number of people here who were interested in understanding his theory of the semi-transparent atmosphere in more detail.

I would like to know what validation was used to justify the use of climate models for modelling drought, and how the conclusion that droughts are likely to increase in frequency and severity can be reconciled with data showing drought frequency and severity declining.

So far, no luck. The abstract of the manuscript submitted to the AMM follows.

Review of projections of frequency and severity of exceptionally low rainfall in the Drought Exceptional Circumstances Report
David R.B. Stockwell
September 20, 2008

Abstract

The 2008 Drought Exceptional Circumstances Report (DECR) makes a number of bold claims in its assessment of likely changes in the frequency and severity of severe rainfall deficiencies over the next 20-30 years. This review presents an analysis which brings into question whether these claims can be sustained by the data. Taking into account the poor performance of climate models, as evidenced by simulations of area of exceptionally low rainfall trending in the opposite direction to observations, a more valid interpretation of the results would be for drought frequency and severity in Australia to remain largely unchanged in the future, with no expectation of a change in the climatological basis for EC declarations.

Simple Statistical Model Using Recent Droughts

Changes in exceptionally dry years (droughts) were estimated in the Drought Exceptional Circumstances Report (DECR) in two ways: (a) a statistical modification of the observed rainfall data (Box 3); and (b) analysis of simulations from 13 climate models. Up until now I have been looking at the modeling in approach (b). Today I started to look at approach (a). As mean rainfall declines, the probability of exceptionally low rainfall increases; this is graphed in Box 3 (see also Table 6).
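
The direction of that relationship is easy to check under a rough normality assumption (mine, for illustration; the DECR applies its own statistical modification):

# Probability of falling below the historical 5th percentile after the mean
# declines by 'shift' standard deviations
p_exceptional <- function(shift) pnorm(qnorm(0.05) + shift)
print(p_exceptional(c(0, 0.2, 0.5)))  # 5.0% -> ~7.4% -> ~12.6%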

The parameters used in this simple extrapolation exercise have curious inconsistencies with their source. The DECR report says:

Continue reading Simple Statistical Model Using Recent Droughts

Trade Practices Act

Any claims or representations made by a business must be accurate and truthful. If a business has been dishonest, exaggerated the truth, or created a misleading impression, there is a very broad provision in the Trade Practices Act prohibiting such conduct by a corporation.

For example, the ACCC webpage on misleading and deceptive conduct gives the example of a business predicting the health benefits of a therapeutic device or health product while having no proof that such benefits can be attained. Note that there is no need to show that the product in fact has no benefit; rather, it is misleading to make the claim without proof. In general:

Continue reading Trade Practices Act

Temperature Index Drought

Following up on the post from yesterday, I test the assumption underpinning the regional climate change work in Australia.

The most common approach has been to assess how well each of the available models simulates the present climate of the region (e.g. Dessai et al. 2005), on the assumption that the more accurately a model is able to reproduce key aspects of the regional climate, the more likely it is to provide reliable guidance for future changes in the region.

As far as I can see this is an untested assumption, and may be a case of ‘accident chasing’.

Continue reading Temperature Index Drought