Q: Where Do Climate Models Fail? A: Almost Everywhere

“How much do I fail thee? Let me count the ways.”

Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.

Where do the models fail?

1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.

2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.

3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.

Answer: Everywhere except for polar regions where uncertainty is greater.

East Pacific Region Temperatures: Climate Models Fail Again

Bob Tisdale, author of the awesome book “Who Turned on the Heat?” presented an interesting problem that turns out to be a good application of robust statistical tests called empirical fluctuation processes.

Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to model simulations (CMIP SST) for that region that show strong warming. The region in question is shown below.

The question is, what is the statistical significance of the difference between model simulations and the observations? The graph comparing the models with observations from Bob’s book shows two CMIP model projections strongly increasing at 0.15C per decade for the region (brown and green) and the observations increasing at 0.006C per decade (magenta).

However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 relative to the standard deviation, but this would be a very low-power test, of use only to someone who wanted to obfuscate the obvious failure of climate models in this region.

Empirical fluctuation processes are a natural way to examine such questions in a powerful and generalized way, as we can ask of a strongly autocorrelated series — Has there been a change in level? — without requiring the increase to be a linear trend.

To illustrate the difference, if we assume a linear regression model, as is the usual practice, Y = mt + c, then the statistical test for a trend is whether the trend coefficient m is greater than zero:

H0: m = 0; Ha: m > 0

If we test for a change in level, the EFP statistical test is whether m is constant over all times t:

H0: m_i = m_0 for all times t_i

For answering questions similar to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y=m+c deviates from the data. In R this is represented as the model Y~1. If we were to use a full model Y~t then this would test whether the trend of Y is constant, not whether the level of Y is constant. This is clearer if you have run linear models in R.
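A minimal sketch of such a test in R uses the strucchange package; the series below is simulated for illustration, standing in for the actual SST data (which is not reproduced here):

```r
# Sketch of an EFP level-stability test with the strucchange package.
# The series is simulated for illustration; substitute the actual SST data.
library(strucchange)

set.seed(42)
y <- ts(cumsum(rnorm(360, mean = 0.01)), start = 1980, frequency = 12)

# y ~ 1 tests whether the level is constant; y ~ time(y) would instead
# test whether the trend is constant.
efp.path <- efp(y ~ 1, type = "OLS-CUSUM")

plot(efp.path)    # EFP path with 95% boundary lines
sctest(efp.path)  # formal test for a structural change in level
```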

Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.

The figure below shows the series in question, with the EFP path as the black line and the 95% significance levels for the EFP path in red.

It can be seen clearly that while the EFP path for the SST observations shows a little unusual behavior, with a significant change in level in 1998 and again in 2005, the level is currently not significantly above the level in 1985.

The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significant level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.

Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP path for the differences CMIP3-SST and CMIP5-SST. After some deviation from zero at about 1990, the difference becomes significant at the 5% level around 2000 and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the CMIP temperature simulations and the observational SST series in the Eastern Pacific region after the year 2000.

While the average of multiple model simulations show a significant change in level over the period, in the parlance of climate science, there is not yet a detectable change in level in the observations.

One could say I am comparing apples and oranges, as the models represent average behavior while the SST observations are a single realization. But the fact remains that only the model simulations show warming; there is no support for warming of the region in the observations. This is consistent with the previous post on Santer’s paper showing the failure of models to match the observations over most latitudinal bands.

Not cointegrated, so global warming is not anthropogenic – Beenstock

Cointegration has been mentioned previously and is one of the highest ranking search terms on landshape.

We have also discussed the cointegration manuscript from 2009 by Beenstock and Reingewertz, and I see he has picked up another author and submitted it to an open access journal here.

Here is the abstract.

Polynomial cointegration tests of anthropogenic impact on global warming M. Beenstock, Y. Reingewertz, and N. Paldor

Abstract. We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

The bottom line:

Once the I(2) status of anthropogenic forcings is taken into consideration, there is no significant effect of anthropogenic forcing on global temperature.

They do, however, find a possible effect of the CO2 first difference:

The ADF and PP test statistics suggest that there is a causal effect of the change in CO2 forcing on global temperature.

They suggest “… there is no physical theory for this modified theory of AGW”, although I would think the obvious one would be that the surface temperature adjusts over time to higher CO2 forcing, such as through intensified heat loss by convection, so returning to an equilibrium. However, when revised solar data is used the relationship disappears, so the point is probably moot.

When we use these revised data, Eqs. (11) and (12) remain polynomially uncointegrated. However, Eq. (15) ceases to be cointegrated.

Finally:

For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during 150 yr there is no physical reason why these variables should share the same order of integration. However, the fact that they do not share the same order of integration over this period means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.

Should the ABS take over the BoM?

I read an interesting article about Peter Martin, head of the Australian Bureau of Statistics.

He has a refreshing, mature attitude to his job.

‘I want people to challenge our data – that’s a good thing, it helps us pick things up,’ he says.

Big contrast to the attitude of climate scientists. Examples of their belief that they cannot be challenged are legion, from meetings to peer review. For example, emails expressing disagreement with the science are treated as threatening, as shown by the text of eleven emails released under ‘roo shooter’ FOI by the Climate Institute at Australian National University.

Australia’s Chief statistician is also egalitarian. In response to a complaint by the interviewer about employment figures, he responds:

He says he doesn’t believe there is a problem, but gives every indication he’ll put my concerns to his staff, giving them as much weight as if they came from the Treasurer.

This is a far cry from the stated policy of the CSIRO/BoM (Bureau of Meteorology) to respond only to peer-reviewed publications. Even when one does publish statistical audits identifying problems with datasets, as I have done, one is likely to get a curt review stating that “this paper should be thrown out because its only purpose is criticism”. It takes a certain type of editor to proceed with publication under those circumstances.

When the Federal Government changes this time, as appears inevitable, one initiative they might consider is a greater role for the ABS in overseeing the BoM responsibilities. Although the BoM is tasked with the collection of weather and water data by Acts of Parliament, it would benefit from an audit and ongoing supervision by the ABS, IMHO.

Two New Numerate Blogs

A couple of new entries in the links section:

Sabermetric Research does its own sports research and reviews statistical studies of sports. I added this after reading one of my Chrissy gifts – Moneyball: The Art of Winning an Unfair Game – by Michael Lewis, now a movie starring Brad Pitt, a David vs Goliath story of stats over precedent.

Status Iatrogenicus by Scott K. Aberegg, M.D., an ER physician in Salt Lake City who also has a Medical Evidence Blog I follow. This blog is about how a lack of common sense leads to common nonsense in medical practice, and aims a critical eye at various aspects of medical practice that just plain don’t make sense.

Sea level rise projections bias

Sea levels, recently updated with 10 new data-points, reinforce the hiatus described as a ‘pothole’ by Josh Willis of NASA’s Jet Propulsion Laboratory, Pasadena, Calif., who says you can blame the pothole on the cycle of El Niño and La Niña in the Pacific:

This temporary transfer of large volumes of water from the oceans to the land surfaces also helps explain the large drop in global mean sea level. But they also expect the global mean sea level to begin climbing again.

Attributing the ‘pothole’ to a La Nina and the transfer of water from the ocean to land in Australia and the Amazon seems dubious, given many land areas experienced reduced rainfall at the same times, as shown above.

A quadratic model of sea level indicates that deceleration is now well established and highly significant. If present conditions continue, sea level will peak between 2020 and 2050, at between 10mm and 40mm above present levels, and may have stopped rising already.

Reference to a ‘pothole’ in a long-term trend caused by short-term La Nina, while ignoring statistically significant overall deceleration, is another example of bias in climate science.

Call:
lm(formula = y ~ x + I(x^2))

Residuals:
Min 1Q Median 3Q Max
-8.53309 -2.39304 0.03078 2.45396 9.17058

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.264e+05 3.517e+04 -6.438 7.40e-10 ***
x 2.230e+02 3.513e+01 6.348 1.21e-09 ***
I(x^2) -5.490e-02 8.772e-03 -6.258 1.98e-09 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.448 on 222 degrees of freedom
Multiple R-squared: 0.9617, Adjusted R-squared: 0.9613
F-statistic: 2786 on 2 and 222 DF, p-value: < 2.2e-16

And the code:

figure5 <- function() {
  # Fit a quadratic to the sea-level series SL and project to 2050
  x <- time(SL); y <- SL
  l <- lm(y ~ x + I(x^2))
  new <- data.frame(x = 1993:2050)
  pred.w.clim <- predict(l, new, interval = "confidence")
  matplot(new$x, pred.w.clim, lty = c(1, 2, 2), type = "l",
          ylab = "Sea Level", main = "Quadratic Projection of Sea Level Rise",
          ylim = c(-10, 100), lwd = 3, col = c(2, 2, 2), xlab = "Year")
  lines(SL)
}
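As a quick sanity check on the stated 2020–2050 window, the turning point of the fitted quadratic y = c + bx + ax² falls at x = −b/(2a), which can be computed directly from the coefficient estimates reported above:

```r
# Peak year implied by the quadratic fit: vertex at x = -b / (2 * a)
b <- 2.230e+02     # estimate for x
a <- -5.490e-02    # estimate for I(x^2)
peak.year <- -b / (2 * a)
round(peak.year)   # about 2031, inside the 2020-2050 window
```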

NIPCC Report on Species Extinctions due to Climate Change

The NIPCC Interim Report 2011 updates their 2009 report, with an overview of the research on climate change that the IPCC did not see fit to print. It’s published by the Heartland Institute, with lead authors Craig D. Idso, Australian Robert Carter, and S. Fred Singer, along with a number of other significant contributors.

I am grateful for inclusion of some of my work in Chapter 6 on the uncertainty of the range-shift method for modeling biodiversity under climate change.

The controversy centered on a paper by Thomas et al. (2004) called “Extinction Risk from Climate Change”, which received exceptional worldwide media attention for its claims of potentially massive extinctions from global warming.

Briefly, the idea is to simulate the change in the range of a species under climate change by ‘shifting’ the range using a presumed climate change scenario.

Daniel Botkin said of the Thomas et al. (2004) study:

Yes, unfortunately, I do consider it to be the worst paper I have ever read in a major scientific journal. There are some close rivals, of course. I class this paper as I do for two reasons, which are explained more fully in the recent article in BioScience:

… written by 17 scientists from a range of fields and myself (here).

While there are many problems with this paper, the most amazing, as I see it, is the way they used changes in the size of species ranges to determine extinctions. It’s generally believed that contraction of a species’ range increases its probability of extinction.

Consider the case of species that disperse freely under climate change. While the range sizes of individual species change, the average range size should stay the same, unless there is a major obstruction like an ocean or a mountain range. Species whose range size decreases are balanced by species whose range size increases. Overall, the net rate of extinction should be unchanged.

However, Thomas et al. (2004) simply deleted all species whose range expanded. A massive increase in extinctions was therefore a foregone conclusion, even assuming free dispersal.
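The effect can be illustrated with a toy simulation of my own (a sketch, not the Thomas et al. method): give each species a random proportional range change with zero mean, then drop the expanders.

```r
# Toy sketch (my own assumption, not the Thomas et al. code): symmetric
# random range changes have no net contraction, but censoring the
# expanders guarantees an apparent contraction.
set.seed(1)
change <- rnorm(10000, mean = 0, sd = 0.2)  # proportional range-size change

mean(change)               # near zero: no net range loss under free dispersal
mean(change[change < 0])   # strongly negative once expanders are dropped
```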

There are a number of other ways a bias towards range reduction can be introduced, such as edge effects and over-fitting assumptions, which I show in my book “Niche Modeling”. In a normal science this would have been a cautionary tale about the dangers of ad hoc methodologies.

It’s an example of the intellectual bankruptcy of the IPCC report that the uncertainties of Thomas et al. (2004) and other similar studies were ignored by Working Group II. For example, in Impacts, Adaptation and Vulnerability, 13.4.1 Natural ecosystems:

Modelling studies show that the ranges occupied by many species will become unsuitable for them as the climate changes (IUCN, 2004). Using modelling projections of species distributions for future climate scenarios, Thomas et al. (2004) show, for the year 2050 and for a mid-range climate change scenario, that species extinction in Mexico could sharply increase: mammals 8% or 26% loss of species (with or without dispersal), birds 5% or 8% loss of species (with or without dispersal), and butterflies 7% or 19% loss of species (with or without dispersal).

And in 19.3.4 Ecosystems and biodiversity:

… up to 30% of known species being committed to extinction * (Chapter 4 Section 4.4.11 and Table 4.1; Thomas et al., 2004;

And in other summaries Table 4.1

Clearly the major difficulty with all this work, something that turned me off it but few acknowledge, is that the lack of skill of climate change simulations renders fraudulent any claim to skill at the species-habitat scale. Only now is the broader climate community finally starting to accept this about multi-decadal climate model predictions, such as those contained in the 2007 IPCC WG1 climate assessments. The NIPCC illustrates the broader opinion which should have been integral to the IPCC process from the beginning, IMHO.

Niche Modeling 2010 Roundup and Goodbye

Seth Godin, a blogger I admire greatly, suggests we publish a list of accomplishments for the year (What did you ship in 2010?).

  • Peer-reviewed publication demonstrating that policy-makers are being misled by inadequate climate models.
  • Hosted a venue for the “Watts Up With the Climate” Tour of Australia.
  • Gave four public lectures on climate skepticism, arguing that the forecasts of prominent climate modelers are not reliable.
  • Contributed to two articles on the ABC community blog site Unleashed.
  • Started a school chess club.
  • Reviewed a number of manuscripts.
  • Set up the The Climate Sceptics Party web site, supporting their candidates in the general election.

I would like to express my gratitude to everyone who helped as it could not have been done by me alone. I hope that in some small way a measure of common sense has been brought into the climate change controversy.

My greatest achievement, in my mind, was the preparation of a draft paper on a new climate modeling approach, which has been circulated to selected climate scientists. Based on a control-systems methodology and called “Accumulative model of climate systems”, some highlights are:

  • A mechanism for amplification of weak solar forcing by accumulation in the ocean, explaining solar variation on time scales from a million years down to one year;
  • A model showing that even if global warming since 1970 was caused by increasing greenhouse gases (and not, e.g., UHI), greenhouse gases cannot present a significant long-term warming problem, due to poor ocean–atmosphere coupling;
  • A model showing how PDO/AMO cycles may be driven by random ENSO variations.

As I would like to spend considerable time this year getting this work published, I am going to discontinue posting comments on topical events on this blog, at least until this paper is shipped.

Since there is so much control-systems background material to go through in order to understand the modeling framework, I have a mind to expand it into a book. In that case I will post draft chapters on this blog as they are done.

All the best for 2011! I’ll be back!

Warming in Antarctica – Who was right?

What did they say about warming in Antarctica? Consider a review by Professor Will Steffen, Australian National University, commissioned by the Department of Climate Change, a major input to Labor government policy, and advanced also by Senator Wong in her discussions with Senator Fielding:

Climate Change 2009: Faster Change & More Serious Risks

A recent analysis shows warming of about 0.1°C per decade over the West Antarctica region over the last half century, attributed in part to changes in sea surface temperature (Steig et al. 2009).

In a response by Bob Carter, David Evans, Stewart Franks, and William Kininmonth Minister Wong’s Reply to Senator Fielding’s Three Questions on Climate Change–Due Diligence

11. In addition to acknowledging inadequacies in modelling skill, Steffen also quotes papers that, contrary to many other studies, report empirical data in support of a recently enhanced rate of sea-level rise (Church & White, 2006; Domingues et al., 2006) and a warming of Antarctica (Steig et al., 2009). In reality, these papers are underpinned by complex data manipulations and computer modelling, and the outlier results that they produce contradict other similarly detailed studies that show a steady rate of long term sea level rise (albeit with decadal modulations which include the start of a recent fall; Jevrejeva et al. 2008; Cazenave et al. 2009; Woodworth et al. 2009) and a cooling Antarctic icecap – which, like Greenland, appears to be close to mass-balance (Stenni et al. 2002; Goodwin et al. 2004; Masson-Delmotte et al. 2004; Schneider et al. 2006; Monaghan et al. 2008; Schneider & Steig, 2008; Chapman, 2009).

Who do you believe, flashy commissioned reports or hard-science scientists?

Inquiry into long-term meteorological forecasting in Australia

From Chapter 2 of the inquiry, h/t Warwick Hughes.

2.69 The Committee was astounded to learn that private enterprises are apparently able to forecast particular seasonal conditions and events, which may not necessarily have been forecast by our leading national agencies. The question that came to the mind of Committee members when this issue came to light was “how did you forecast these events and why didn’t anyone else?” When considering the skills, knowledge and expertise in our national agencies, the question that came to mind was “what do they know that CSIRO and the Bureau don’t?”

At issue may be the standards required of the methodology, much like the problems alternative medicines have in finding acceptance in mainstream medicine. A ‘please explain’ is the first recommendation of the committee.

Recommendation 1 
The Committee recommends that CSIRO and the Bureau of Meteorology provide to the Australian Government a report with detailed explanatory information as to why a particular dynamic forecasting model or system was chosen for use in Australia. The report should be completed by the end of 2010. 

Something to watch for. My feeling is that the BoM should largely be privatized.

Happy Birthday Climategate!

As they used to say about the assassination of President Kennedy, “I remember where I was when I heard about it.” My first post, a few days later, was entitled Climategate. More was done by Climategate than by all the rebuttals of climate science alarmism that have been published.

So many things have changed since then, some good and some bad, qualifying Climategate as a truly defining event. Some of the things I have noticed:

  • Scientific society popinjays waving the IPCC consensus in our faces.
  • Burgeoning of sociological articles (Professor Naomi Oreskes) deconstructing the psychology of climate sceptics (mad).
  • ClimateAudit morphs from holding feet to the fire of science/stats, to seeking justice (sad).
  • Closing of the CCX, languishing of Cap-n-trade, death of the ETS.
  • Many more papers demonstrating climate models are worthless, effects other than CO2 causing climate change.
  • More rain!
  • RealClimate readership declines even further.

Some things I have not noticed since Climategate:

  • Improvement in the objectivity of climate science papers.
  • Acknowledgment that sceptics might be right and that CO2 emissions cannot cause the end of civilization as we know it.
  • Apologies for the sheer unprofessionalism exhibited therein.

Altering the weather data

JN reports another study confirming the finding that alterations to Australian raw weather data have increased the official trend by over 30%.

A recent submission to the arXiv archive suggests that altering the data to “inflate and dramatize weather conditions” may have a long tradition.

The Weather and its Role in Captain Robert F. Scott and his Companions’ Deaths by Krzysztof Sienicki

Abstract: A long debate has ensued about the relationship of weather conditions and Antarctic exploration. In no place on Earth is exploration, human existence, and scientific research so weather dependent. By using an artificial neural network simulation, historical (Heroic Age) and modern weather data from manned and automated stations, placed at different locations of the Ross Ice Shelf, and the Ross Island, I have examined minimum near surface air temperatures. All modern meteorological data, as well as historical data of Cherry-Garrard, high correlations between temperatures at different locations, and artificial neural network retrodiction of modern and historical temperature data, point out the oddity of Captain Scott’s temperature recordings from February 27 – March 19, 1912. I was able to show that in this period the actual minimum near surface air temperature was on the average about 13F(7C) above that reported by Captain Scott and his party. On the basis of the mentioned evidence I concluded that the real minimum near surface air temperature data was altered by Lt. Bowers and Captain Scott to inflate and dramatize the weather conditions.

And check out CA’s magnificent series on Phil Jones and the China Network.

Climate model abuse

Roger Pielke Sr. reviews another very important new paper showing the abuse of models.

In the opinion of the editor Kundzewicz (who has served prominently on the IPCC), climate models were only designed to provide a broad assessment of the response of the global climate system to greenhouse gas (GHG) forcings, and to serve as the basis for devising a set of GHG emissions policies. They were not designed for regional adaptation studies.

To expect more from these models is simply unrealistic, at least for direct application to regional water management problems. The Anagnostopoulos et al conclusions negate the value of spending so much money on regional climate predictions decades into the future, for example on the South Eastern Australian Climate Initiative and the Queensland Climate Change Centre of Excellence.

Kundzewicz distances the professionals from such efforts:

They are not climate sceptics, but are sceptical of the claims of some climatologists and hydroclimatologists that these models are well suited for water management applications.

Hydrologists and water management professionals (hydrological and hydraulic engineers) have entered the scientific debate in force, because the GCMs are being advocated for purposes they were not designed for, i.e. watershed vulnerability assessments and infrastructure design.

As I showed in Critique of the DECR, this is not a matter of opinion, but a matter that can be decided by applying basic validation tests in each instance. To the detriment of the field, tests that would justify the use of the models do not seem to be applied, or if they are, the results are not being made available. Such testing is regarded as good and standard practice elsewhere.

The recent surge of these sorts of papers suggests I am not the only one to think it is time for the field to pay the piper.

Drought predictions for this century

In The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill Roger Pielke Sr. links GCM skill at predicting drought with natural variation:

2. “Future efforts to predict drought will depend on models’ ability to predict tropical SSTs.”

In other words, there is NO way to assess the skill of these models at predicting drought, as they have not yet shown any skill in SST predictions on time scales longer than a season, nor natural climate cycles such as El Niño [or the PDO, the NAO, etc.].

This seems a convoluted turn of phrase. There are ways to assess the skill of these models: comparing them with past drought frequency and severity. Such assessments show the models have NO skill at predicting droughts.

The assumption is that IF they were able to predict cycles like the PDO, then they would be able to predict droughts. But even if we average over these cycles, there is still the little problem of overall trends in extreme phenomena, which accuracy on the PDO and similar cycles would not necessarily capture.

His argument that drought efficacy swings on PDO prediction is useful, however, as a basis for excluding applications of models for climate phenomena that rely on them.

Roger is perhaps being polite about misleading policymakers when he continues:

Funding of multi-decadal regional climate predictions by the National Science Foundation which cannot be verified in terms of accuracy is not only a poor use of tax payer funds, but is misleading policymakers and others on the actual skill that exists in predicting changes in the frequency of drought in the future.

The review by Dai favours the PDSI drought index:

The PDSI was created by Palmer [22] with the intent to measure the cumulative departure in surface water balance. It incorporates antecedent and current moisture supply (precipitation) and demand (PE) into a hydrological accounting system. Although the PDSI is a standardized measure, ranging from about −10 (dry) to +10 (wet)…

I always search for the assessment of accuracy first, and as usual the skill of the models gets very little, non-quantitative coverage. Climate scientists are loath to judge the models, preferring to cloak their results in paragraphs of uncertainty, and to present “dire predictions” of GCMs in garish figures (his Figure 11).

They need to start acting like scientists and stop these misleading practices until it is shown, by rigorous empirical testing and for fundamental reasons, that the current GCMs are fit for the purpose of drought modelling.

Just to show I am not always negative, this recent report has a lot to recommend in it. The authors of “Climate variability and change in south-eastern Australia” do quite a good job of describing the climatological features impacting the area, and putting technical issues, climate, hydrology and social impact together in an informative report.

While they say:

The current rainfall decline is apparently linked (at least in part) to climate change, raising the possibility that the current dry conditions may persist, and even possibly intensify (as has been the case in south-west Western Australia).

They also admit they don’t know how to combine the output of multiple models:

Some research (Smith & Chandler, 2009) suggests that uncertainties in climate projections can be reduced by careful selection of the global climate models, with less weight being given to models that do not simulate current climate adequately. Other work suggests that explicit model selection may not be necessary (Watterson, 2008; Chiew et al., 2009c). Further research is being done to determine how to combine the output of global climate models to develop more accurate region-scale projections of climate change.

My complaint is that there is no suggestion that anything other than GCMs might be used, and no evidence that the GCMs perform better than a mean value. If a model does no better than the long-term average, there is good reason to suppose it has no skill, and to throw it out. This is called ‘benchmarking’, but rejecting any of the IPCC GCMs is apparently an alien concept.
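The benchmark itself is trivial to compute. A minimal sketch, using simulated stand-ins for the observed and modelled series (not actual data from any study):

```r
# Benchmark sketch: a model is only worth considering if it beats the
# long-term mean used as a naive forecast. Both series are simulated.
rmse <- function(pred, obs) sqrt(mean((pred - obs)^2))

set.seed(7)
obs   <- rnorm(100, mean = 500, sd = 50)   # e.g. annual rainfall (mm)
model <- obs + rnorm(100, sd = 120)        # a hypothetical noisy model run

benchmark <- rmse(rep(mean(obs), 100), obs)  # skill floor: the mean value
rmse(model, obs) < benchmark                 # FALSE: this model would be rejected
```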

Show us your tests – Australian climate projections

My critique of models used in a major Australian drought study appeared in Energy and Environment last month (read Critique-of-DECR-EE here). It deals with validation of models (the subject of a recent post by Judith Curry), and regional model disagreement with rainfall observations (see post by Willis here).

The main purpose is summed up in the last sentence of the abstract:

The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

It is well known that, despite persistent attempts and claims in the press, general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% CL of the simulations (see Figure).

Rainfall has increased in tropical and subtropical areas of Australia since the 1970s, while some areas of the country, particularly the major population centres in the south-east and south-west, have experienced multi-year rainfall deficits. Overall, Australian rainfall is increasing.

The larger issue is how to acknowledge that there will always be worthless models, and that it is the task of genuinely committed modellers to identify and eliminate them. It is not convincing to argue that validation is too hard for climate models, that they are justified by physical realism, or to rely on the calibrated-eyeball approach. The study shows that obvious testing regimes would have eliminated these drought models from contention, had they been performed.

While scientists are mainly interested in the relative skill of models, where statistical measures such as root mean square (RMS) error are appropriate, decision-makers are (or should be) concerned with whether the models should be used at all, that is, whether they are fit for use. Because of this, model testing regimes for decision-makers must have the potential to completely reject some or all models if they do not rise above a predetermined standard, or benchmark.

There are a number of ways that benchmarking can be set up, which engineers and others in critical disciplines would be familiar with, usually involving a degree of independent inspection, documentation of expected standards, and so on. My study makes the case that climate science needs to start adopting more rigorous validation practices. Until it does, regional climate projections should not be taken seriously by decision-makers.

It is up to the customers of these studies not to rely on the say-so of the IPCC, the CSIRO and the BoM, but to ask “Show me your tests”, as would be expected of any economic, medical or engineering study where the costs of making the wrong decision are high. Their duty of care requires that they be confident all reasonable means have been taken to validate the models that support the key conclusions.

Government Science

I’m seeing a few articles on government-sponsored science lately that seem particularly applicable to climate change research:

A short review of Economic Laws of Scientific Research links to an overview of the area, particularly the work of the Cato Institute:

Scientists may love government money, and politicians may love the power its expenditure confers upon them, but society is impoverished by the transaction.

Another in a similar vein on medical research reminds me of Craig Venter’s decoding of the human genome. I was at the San Diego Supercomputer Center at the time, and his innovative use of supercomputing to assemble pieces of DNA — called shotgun sequencing — made the government-funded competitors look like clods. There was a prize offered, and it was decided to award it to both; how very droll.

A more balanced argument is presented here. Some infrastructural components, like large meteorological data sets, are better handled by government departments than others.

Professor Sinclair Davidson shows that the standard economic analysis supporting public expenditure on research is fundamentally and methodologically flawed.

The notion that throwing an infinite amount of money at public research will somehow, at some time, automatically lead to some benefit is a myth. The government spends a substantial amount on public science and innovation. It is not clear that any substantial benefit is derived from that expenditure.

He identifies the following ‘stepping stones’:

  • R&D is not a public good.
  • The cost of public funds is not lower than the cost of private funds.
  • The returns to public science are low.
  • Governments have a poor track record of picking ‘winners’.
  • Publicly funded R&D has a negative impact on economic growth.
  • Economists are unable to explain how spillovers occur, or how valuable these spillovers are.

The main argument against government science, that “publicly-performed R&D crowds out resources that could be alternatively used by the private sector”, needs to be strengthened in the case of climate science.

The push for taxes like the ETS, and the subsidising of impractical renewable energy schemes, shows that the impact of government climate science is regressive.

Climate science seems to be particularly prone to the worst aspects of government science: from the UN IPCC process, to ClimateGate, and on through the inquiries, it is like a season of ‘Yes, Minister’. If global warming is eventually shown to be non-existent or harmless, no doubt the climate scientists will declare victory and say they were sceptics all along.

Audit the BoM

Kenskingdom demonstrates again the wisdom of ‘trust, but verify’:

I compared the adjusted [Australian Temperature] data with the raw data of these 34 stations.

Here are the results, and they are perplexing.

* I was expecting to find a stronger warming trend in the urban data than the 100 non-urban sites. WRONG.
* I was expecting to find BOM correcting for UHI, that is, reducing the trend. PARTLY RIGHT. But less often than with the non-urban sites.
* I was expecting the urban sites to have much better quality of data, with long records, few gaps, and good overlaps if stations’ data had to be combined. WRONG.

Fed up with c**p government science yet?

See JoNova for more.

Queensland Drought Comparisons

In 2009, the Queensland Climate Change Centre of Excellence prepared a series of reports detailing projected climate changes for 13 regions throughout Queensland. The reports provide a high-level summary of projected changes, and an accessible overview of the potential impacts for a wide audience, including:

# a tendency for less rainfall, particularly in central and southern regions throughout winter and spring;
# more severe droughts, occurring with increasing frequency;

CO2 Science reviews a study of the United States’ Northern Great Plains, which, like Queensland, is a significant source of grain both locally and internationally and, because of its location, is also susceptible to extreme droughts. Because of this, it is probably as good a place as any to look for a manifestation of the climate-alarmist claim (Gore, 2006; Mann and Kump, 2008) that global warming will usher in a period of more frequent and intense drought.

The conclusions:

In light of climate-alarmist predictions of intensified drought conditions in a warming world, many people would assuredly claim that any new period of intensified drought on America’s Northern Great Plains would be a vindication of those prognostications … and probably of other climate-alarmist contentions as well. It is clear from the work of Fritz et al., however, that such need not be the case; for everything bad that happens need not be the result of rising atmospheric CO2 concentrations, as the study here described clearly demonstrates.

Here is the rainfall anomaly for the last 3 years (weather is not climate, yadda yadda). Almost no area has had below average rainfall.

Projected future runoff of the Breede River under climate change

More evidence of worthless model predictions from CO2 Science:

All of the future flow-rates calculated by Steynor et al. exhibited double-digit negative percentage changes that averaged -25% for one global climate model and -50% for another global climate model; and in like manner the mean past trend of four of Lloyd’s five stations was also negative (-13%). But the other station had a positive trend (+14.6%). In addition, by “examination of river flows over the past 43 years in the Breede River basin,” Lloyd was able to demonstrate that “changes in land use, creation of impoundments, and increasing abstraction have primarily been responsible for changes in the observed flows” of all of the negative-trend stations.

Interestingly, Steynor et al. had presumed that warming would lead to decreased flow rates, as their projections suggested; and they thus assumed their projections were correct. However, Lloyd was able to demonstrate that those results were driven primarily by unaccounted for land use changes in the five catchments, and that in his newer study the one site that had “a pristine watershed” was the one that had the “14% increase in flow over the study period,” which was “contrary to the climate change predictions” and indicative of the fact that “climate change models cannot yet account for local climate change effects.” As a result, he concluded that “predictions of possible adverse local impacts from global climate change should therefore be treated with the greatest caution,” and that, “above all, they must not form the basis for any policy decisions until such time as they can reproduce known climatic effects satisfactorily.”

Long-time temperature variations in Portugal over the last 140 years and the effect of the solar activity

A recipe for more reliable climate correlations with solar factors: use long temperature records, such as Portugal’s 140 years of data (1865 to 2005). This is another study showing that around half of the decadal-to-centennial variation in temperature can be attributed to cosmic ray flux.

Monthly averaged temperature series have been analyzed together with monthly North-Atlantic Oscillation (NAO) index data, sunspot numbers (W) and cosmic ray (CR) flux intensity. The absolute values of the correlation coefficients between the temperature and the CR are higher than those between the temperature and the sunspot numbers. Our results are consistent with some of the proposed mechanisms that relate solar activity to Earth climate and could be explained through the effect of the solar UV radiation and stratosphere-troposphere coupling or/and through the effect of the CR particles on clouds and stratospheric and tropospheric conditions.
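The comparison the abstract describes, checking whether |corr(T, CR)| exceeds |corr(T, W)|, is easy to sketch on synthetic monthly series. These are generated numbers, not the Portuguese record; the coefficients are invented purely to illustrate the calculation:

```python
import numpy as np

# Synthetic monthly series (140 years' worth) illustrating the method:
# compare |corr(temperature, cosmic rays)| with |corr(temperature, sunspots)|.
rng = np.random.default_rng(1)
n = 12 * 140
cr = rng.normal(size=n)                            # cosmic ray flux (stand-in)
w = -0.7 * cr + rng.normal(scale=0.7, size=n)      # sunspots anticorrelate with CR
temp = -0.5 * cr + rng.normal(scale=1.0, size=n)   # temperature tied directly to CR here

r_cr = np.corrcoef(temp, cr)[0, 1]
r_w = np.corrcoef(temp, w)[0, 1]
print(abs(r_cr) > abs(r_w))   # CR correlation dominates by construction
```

Because sunspots only track temperature here through their link to CR, the CR correlation comes out stronger, which is the pattern the study reports for the real series.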

CSIRO Mk3 Model Performance

From the comparison of climate models in Douglass et al., here is a table showing how well(?) the CSIRO Mark 3 model performed.

In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs.

The raw data are from http://www.pas.rochester.edu/~douglass/papers/Published%20JOC1651.pdf (2007).

Table II. (a). Temperature trends for 22 CGCM Models with 20CEN forcing. The numbered models are fully identified in Table II(b).

Model Sims.∗ Trends (milli °C/decade) by pressure level (hPa): Surface 1000 925 850 700 600 500 400 300 250 200 150 100

1 9 128 303 121 177 161 172 190 216 247 263 268 243 40
2 5 125 1507 113 112 123 126 138 148 140 105 2 −114 −161
3 5 311 318 336 346 376 422 484 596 672 673 642 594 253
4 5 95 92 99 99 131 179 158 184 212 224 182 169 −3
5 5 210 302 224 215 249 264 293 343 391 408 400 319
6 4 119 118 148 175 189 214 238 283 365 406 425 393 −33
7 4 112 460 107 123 122 130 155 183 213 228 225 211 0
8 3 86 62 57 58 82 95 108 134 160 163 155 137 100
9 3 142 143 148 150 149 162 200 234 273 284 282 258 163
10 3 189 114 200 210 225 238 269 316 345 348 347 308 53
11 3 244 403 270 278 309 331 377 449 503 481 461 405 75
12 3 80 173 114 115 102 98 124 150 161 164 166 142 4
13 2 162 155 170∗∗ 182 225 218 221 282 352 360 340 277 −39
14 2 171 293 190 197 252 245 268 328 376 367 326 278 69
15 2 163 213 174 181 199 204 226 271 307 299 255 166 53
16 2 119 128 124 140 151 176 197 228 271 289 306 260 120
17 2 219 −1268 199 223 259 283 321 373 427 454 479 465 280
18 1 117 117 126 148 163 159 180 207 227 225 203 200 16
19 1 230 220 267 283 313 346 410 506 561 554 526 521 244
20 1 191 151 176 194 212 237 254 304 387 410 400 367 314
21 1 191 328 241 222 193 187 215 255 300 316 327 304 90
22 1 28 24 46 73 27 −26 −26 −1 20 24 32 −1 −136

Total simulations: 67

Average 156 198 166 177 191 203 227 272 314 320 307 268 78
Std. Dev. (σ) 64 443 72 70 82 96 109 131 148 149 154 160 124
∗ ‘Sims.’ refers to the number of simulations over which averages were taken.
CSIRO is number 15; its results are repeated below against the average of the 67 runs:

CSIRO 163 213 174 181 199 204 226 271 307 299 255 166 53
AVERAGE 156 198 166 177 191 203 227 272 314 320 307 268 78
Std. Dev. 64 443 72 70 82 96 109 131 148 149 154 160 124
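One thing the three rows above make easy to check is how typical CSIRO Mk3 is of the ensemble: dividing the CSIRO-minus-average difference by the inter-model standard deviation at each level shows it sits well within one standard deviation of the multi-model mean everywhere (which of course says nothing about agreement with observations):

```python
import numpy as np

# Rows copied from the Douglass et al. table above (milli °C/decade).
levels = ["Surf", "1000", "925", "850", "700", "600", "500",
          "400", "300", "250", "200", "150", "100"]
csiro = np.array([163, 213, 174, 181, 199, 204, 226, 271, 307, 299, 255, 166, 53])
avg   = np.array([156, 198, 166, 177, 191, 203, 227, 272, 314, 320, 307, 268, 78])
sd    = np.array([64, 443, 72, 70, 82, 96, 109, 131, 148, 149, 154, 160, 124])

z = (csiro - avg) / sd            # CSIRO's offset in inter-model standard deviations
for lev, zi in zip(levels, z):
    print(f"{lev:>5}: {zi:+.2f} sigma")
print("max |z| =", round(float(np.max(np.abs(z))), 2))
```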

h/t Geoff Sherrington

Copenhagen a failure? Think again.

Attributed to NEIL BROWN, December 26, 2009

UNLIKE most people, who think Copenhagen was a failure, I think it was a great success. It has preserved the golden rule of international diplomacy.

Years ago, when I was a young fellow and started to go to international conferences, an old hand who was about to retire took me aside. ”I’ll be shoving off into retirement soon,” he said, ”so I thought I might pass on the golden rule of international conferences.”

I was fascinated. I was sure he would say I should always pursue noble objectives, lift up the downtrodden masses of Africa and Asia, stamp out disease and poverty and generally bring peace to the world. Alas, no.

”The most important item on the agenda at any international conference,” he resumed, ”is to fix the date of the next meeting – and of course the location.”

However – and it was a big however – if a conference succeeded in wiping out poverty or pestilence, there would be no prospect of trying to go to another conference the following year on the same subject. Concentrating on the date of the next conference would guarantee poverty and pestilence would still be there the next year and would provide the excuse for another year’s travel, entertainment, spending other people’s money, passing pious resolutions and generally being self-important, all of which are the only reasons for being in politics or diplomacy.

I soon learnt that seasoned players on the international scene knew very well the vital importance of the golden rule. For example, Sir Owen Dixon told me that when he was appointed the first UN troubleshooter on Kashmir, he went to New York to recruit an assistant. Someone recommended a young man in the UN building who, believe it or not, actually had the job description ”to bring peace to the world”.

”Do you like your job?” Sir Owen asked. ”Well, at least it’s permanent,” he replied. This young man, who had a stellar career at the UN thereafter, was right, because the intractable problem of Kashmir has, by definition, still not been solved and in the intervening years has provided immense fodder for studies, working groups, theses, working breakfasts, dinners, lunches and, of course, international conferences.

Statesmen and politicians did not achieve all of this by solving the problem of Kashmir; they did it by failing and by making sure that next year the crisis would be the gift that keeps on giving.

There was also a secret protocol to the golden rule, shielded from the prying eyes of the public as far as possible – to make sure that the location of the conference was somewhere nice, for example, fleshpots such as Casablanca, holiday spots such as Bali or ski resorts such as Davos.

The proof of the pudding is, of course, in the eating. Thus, despite the fact that almost everyone says that Copenhagen was a success because it narrowly avoided being a failure, the cognoscenti know it was a great success because it was such an appalling failure.

First, climate change is as bad as it ever was. None of the weasel words of progress and achievement about keeping temperatures down can conceal the good news that the whole thing was a disaster.

If the alarmists are right, climate change can only get worse. If they are wrong, the issue still is likely to have such life in it that it could last as long as Kashmir before the truth dawns.

Second, since Kyoto and again since Bali, we were told incessantly that Copenhagen was the last chance to prevent the world being plunged into a watery grave. Everyone was going to Copenhagen in the belief that it was a last chance to save the planet.

When I heard this, I mourned for the international political and diplomatic brotherhood of which I was once a part; they clearly were not going to be able to stretch climate change beyond Copenhagen as the excuse for more conferences, new taxes, tougher and more complicated laws and the perpetual extortion of money from poor workers in rich countries to rich kleptomaniacs in poor countries that foreign aid has become. Some other issue would have to be found.

Fortunately, this has turned out not to be the case. Mercifully, climate change will be there for at least another year to take its vengeance on a profligate and decadent world. It will provide the excuse for conferences next year and for years beyond. So also is the secret protocol intact: conferences on climate change will never be held anywhere near Darfur or Bangladesh.

Neil Brown is a barrister and former member of Federal Parliament, h/t to Geoff Sherrington.

How Bad are the Models – UHI

Urban areas differ from rural areas in a number of well-known ways, but the IPCC summaries maintain that these effects have been effectively removed when they report the recent (post-1960) increases in global surface temperature.

Continuing the series on how bad climate models really are, another paper is in the pipeline on the long-standing influence of urban heat island (UHI) effects in the surface temperature data. Ross McKitrick reports that between one-third and one-half of the recent increase in temperature is due to this contamination (Ross’s website here).

The methodology uses the regression coefficients from the socioeconomic variables to estimate the trend distribution after removing the estimated non-climatic biases in the temperature data. On observational data this reduces the mean warming trend by between one-third and one-half, but it does not affect the mean surface trend in the model-generated data. Again this is consistent with the view that the observations contain a spatial contamination pattern not present in, or predicted by, the climate models.

Note that this rather gross bias is not present in, or predicted by, the climate models, meaning the models lack the physical mechanisms to represent it. One consequence is that if the models are to fit the recent increase in temperature, some other (incorrect) mechanism must be supplying the warming (such as H2O feedback, perhaps; I don’t know).
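The adjustment McKitrick describes can be sketched as an ordinary least-squares exercise on synthetic data. The covariate names and coefficients below are hypothetical, chosen only to illustrate fitting and then removing a non-climatic component from gridcell trends:

```python
import numpy as np

# Regress gridcell temperature trends on socioeconomic covariates, then
# subtract the fitted non-climatic component to recover the "clean" trend.
rng = np.random.default_rng(2)
n = 400
gdp_growth = rng.normal(size=n)     # hypothetical socioeconomic covariates
pop_growth = rng.normal(size=n)
climate = 0.10 + rng.normal(scale=0.05, size=n)          # true climatic trend ~0.10
trend = climate + 0.04 * gdp_growth + 0.03 * pop_growth  # contaminated gridcell trends

X = np.column_stack([np.ones(n), gdp_growth, pop_growth])
beta, *_ = np.linalg.lstsq(X, trend, rcond=None)         # OLS fit
adjusted = trend - X[:, 1:] @ beta[1:]                   # remove socioeconomic signal

print(round(float(trend.mean()), 3), round(float(adjusted.mean()), 3))
```

If the covariates genuinely carry no climatic information, the adjusted mean recovers the underlying climatic trend; the paper's point is that on real data this adjustment shrinks the warming trend substantially, while on model output it changes nothing.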

Ross has written up the backstory of the all too common obstacles to publication of articles questioning the IPCC here:

In the aftermath of Climategate a lot of scientists working on global warming-related topics are upset that their field has apparently lost credibility with the public. The public seems to believe that climatology is beset with cliquish gatekeeping, wagon-circling, biased peer-review, faulty data and statistical incompetence. In response to these perceptions, some scientists are casting around, in op-eds and weblogs, for ideas on how to hit back at their critics. I would like to suggest that the climate science community consider instead whether the public might actually have a point.

How Bad are Climate Models? Temperature

Due to building the website for The Climate Sceptics I haven’t been able to post, despite some important events. My site and other files were deleted in some kind of attack, so I have had to rebuild it as well. I now have the WordPress 3.0 multi-user system, which enables easy creation and management of multiple blogs, so it’s an ill wind, eh?

The important event I refer to is the release of “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series” by Ross McKitrick, Stephen McIntyre and Chad Herman (2010). Nobody is talking about it, and I don’t know why, as it has a history almost as long as the hockey stick’s on McIntyre’s blog (summary here), and it is a powerful condemnation of climate models in the peer-reviewed literature.

I feel a series coming on, as these results deliver a stunning blow to the last leg the alarmists have been standing on, i.e. model credibility. Also because I have a paper coming out in a similar vein, dealing with drought models in regional Australia.

Using a rigorous methodology, they tested for a mismatch between modelled and observed trends in the tropical troposphere: 57 runs from 23 model simulations of the lower troposphere (LT) and mid-troposphere (MT), forced with the realistic A1B emission scenario, were compared against four observational temperature series (two satellite-borne microwave sounding unit (MSU) series and two balloon-borne radiosonde series) over two time periods, 1979-99 and 1999-2009. This represents a basic validation test of climate models over a 30-year period, a test which SHOULD be fundamental to any belief in the models and their usefulness for projections of future global warming.

The results are shown in their figure:

… the differences between models and observations now exceed the 99% critical value. As shown in Table 1 and Section 3.3, the model trends are about twice as large as observations in the LT layer, and about four times as large in the MT layer.
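A toy version of a trend-equivalence test conveys the flavour of the result. The run trends below are hypothetical, and MMH10's actual panel method also handles autocorrelation and cross-run covariance, which this sketch ignores:

```python
import numpy as np

# Hypothetical per-run LT trends (°C/decade) vs an observed trend: test
# H0 that the ensemble-mean trend equals the observed trend.
run_trends = np.array([0.26, 0.24, 0.30, 0.22, 0.28, 0.25, 0.27, 0.23, 0.29, 0.26])
obs_trend = 0.13                                  # "about half" the model mean

m = run_trends.mean()
se = run_trends.std(ddof=1) / np.sqrt(len(run_trends))
t = (m - obs_trend) / se
print(f"model mean {m:.3f}, t = {t:.1f}")         # a large t rejects equivalence
```

With the observed trend at half the ensemble mean and little spread among the runs, the t statistic is enormous, which mirrors the paper's finding that the differences exceed the 99% critical value.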

Continue reading How Bad are Climate Models? Temperature

High Quality Climate Data, Not!

Ken Stewart has released his much awaited review of the Australian High Quality Sites. His conclusion:

The High Quality data does NOT give an accurate record of Australian temperatures over the last 100 years.

BOM has produced a climate record that can only be described as a guess.

The best we can say about Australian temperature trends over the last 100 years is “Temperatures have gone down and up where we have good enough records, but we don’t know enough.”

If Anthropogenic Global Warming is so certain, why the need to exaggerate?

It is most urgent and important that we have a full scientific investigation, completely independent of BOM, CSIRO, or the Department of Climate Change, into the official climate record of Australia.

I will ask Dr Jones for his response.

Australian Temperature Records in Question

Ken Stewart is engaged in the first-ever independent study of the complete High Quality Australian Site Network. Ken has a series of posts, the first including a lot of background information and explanation. Subsequent posts are not as long, and part 6, covering the data from the Victorian sites, has just been completed.

Like many people, he thought that the analysis of climate change in Australia, and the information given to the public and the government, was based on the raw temperature data. He was wrong. He averaged the maxima and minima for all stations at each site, then compared the result with the High Quality means. By these calculations (averaging the trend at each site in Victoria), the raw trend is 0.35 degrees C per 100 years, while the High Quality state trend is 0.83 C. That’s a warming bias of 133%!
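For what it’s worth, redoing the bias arithmetic from the rounded trends quoted above gives a slightly different figure (the 133% presumably reflects unrounded station data):

```python
raw_trend = 0.35   # raw Victorian trend, degrees C per 100 years (as quoted)
hq_trend = 0.83    # High Quality adjusted trend, degrees C per 100 years

bias_pct = (hq_trend - raw_trend) / raw_trend * 100
print(round(bias_pct))   # ~137 with these rounded inputs
```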


Continue reading Australian Temperature Records in Question

Page-Proofs of the DECR Paper

Corrected the page-proofs of my drought paper today.

CRITIQUE OF DROUGHT MODELS IN THE AUSTRALIAN DROUGHT EXCEPTIONAL CIRCUMSTANCES REPORT (DECR)

ABSTRACT
This paper evaluates the reliability of modeling in the Drought Exceptional Circumstances Report (DECR) where global circulation (or climate) simulations were used to forecast future extremes of temperatures, rainfall and soil moisture. The DECR provided the Australian government with an assessment of the likely future change in the extent and frequency of drought resulting from anthropogenic global warming. Three specific and different statistical techniques show that the simulation of the occurrence of extreme high temperatures last century was adequate, but the simulation of the occurrence of extreme low rainfall was unacceptably poor. In particular, the simulations indicate that the measure of hydrological drought increased significantly last century, while the observations indicate a significant decrease. The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

Meanwhile, scientists are finding new ways to communicate worthless forecasts to decision makers.

These models have been the basis of climate information issued for national and seasonal forecasting and have been used extensively by Australian industries and governments. The results of global climate models are complex, and constantly being refined. Scientists are trialling different ways of presenting climate information to make it more useful for a range of people.

Conducting professional validation assessment of models would be a start, followed by admitting they are so uncertain they should be ignored.

Continue reading Page-Proofs of the DECR Paper

Watts Tour at Emerald

Anthony’s Tour continues at a breakneck pace this week — with only four venues to go.

The talks at Emerald that I organized went quite well, considering it is a small regional town. About 80-100 people attended a teaser session during the Property Rights Australia meeting during the day, and around 40 attended at night. We got a standing ovation during the day — a first for me! The crowd was a mixture of ages and sexes, and I think the messages of bureaucratic sloth and opportunism resonated with them. Central Queensland turned on one of its trademark sunsets for Anthony:

It was good to spend a bit of time with Anthony and catch up on the goss — well, not really gossip, but on the bloggers and the people behind the curtain. You know how it is: you tend to get a certain view of the people involved, but when you learn more about them, it turns out they are just regular people who put their hand up for something they believe in.

Continue reading Watts Tour at Emerald