BoM copies me, inadequately

Yesterday’s post noted the appearance of station summaries at the BoM adjustment page attempting to defend their adjustments to the temperature record at several stations, some of which I have also examined. Today’s post compares and contrasts their approach with mine.

Deniliquin

The figures below compare the minimum temperatures at Deniliquin with neighbouring stations. On the left, the BoM compares Deniliquin with minimum temperatures at Kerang (95 km west of Deniliquin) in the years around 1971. The figure on the right, from my Deniliquin report, shows the relative trend of daily temperature data from 26 neighbouring stations (ie ACORN-SAT – neighbour). The rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM comparison of Deniliquin and Kerang minima (left); relative trend of Deniliquin ACORN-SAT minus 26 neighbours (right)]

The BoM’s caption:

Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherrypicking of a single site for comparison and the handwaving about “similar results” with other sites.

In my analysis, the ACORN-SAT version warms at 0.13C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Therefore the adjustments have made the trend of the official long term series for Deniliquin significantly warmer than the regional neighbours. I find that the residual trend of the raw data (before adjustment) for Deniliquin is -0.02C/decade which is not significant and so consistent with its neighbours.
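The residual trend itself is straightforward to reproduce. Below is a minimal R sketch of the calculation, using simulated stand-ins for the ACORN-SAT and neighbour daily minima; the station data, the seasonal cycle and the built-in 0.13C/decade offset are illustrative assumptions, not the actual records.

```r
# Sketch only: simulated stand-ins for the ACORN-SAT and neighbour daily minima.
set.seed(1)
dates     <- seq(as.Date("1910-01-01"), as.Date("2010-12-31"), by = "day")
years     <- as.numeric(dates - dates[1]) / 365.25
neighbour <- 10 + 5 * sin(2 * pi * years) + rnorm(length(dates), sd = 2)   # seasonal cycle + noise
acorn     <- neighbour + 0.013 * years + rnorm(length(dates), sd = 0.5)    # 0.13C/decade excess warming built in

# Trend of the difference series (ACORN-SAT minus neighbour), expressed per decade
fit <- lm(I(acorn - neighbour) ~ years)
coef(fit)["years"] * 10        # residual trend in C/decade
confint(fit)["years", ] * 10   # 95% CI, to compare with the ~0.1C/decade station spread
```

In the real analysis the difference series is taken against each of the neighbours in turn; the sketch above shows the calculation for a single pair.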

Rutherglen

Now look at the comparison of minimum temperatures for Rutherglen with neighbouring stations. On the left, the BoM compares Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. The figure on the right, from my Rutherglen report, shows the relative trend of daily temperature for 24 neighbouring stations (ie ACORN-SAT – neighbour). As at Deniliquin, the rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM comparison of Rutherglen raw minima with three adjusted ACORN-SAT neighbours (left); relative trend of Rutherglen ACORN-SAT minus 24 neighbours (right)]

The BoM’s caption is:

While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

Problems: Note the cherrypicking of only three sites, but more seriously, the versions chosen are from the adjusted ACORN-SAT. That is, the already adjusted data is used to justify an adjustment — a classic circularity! This is not stated in the other BoM reports, but probably applies to the other station comparisons. Loss of data due to aggregation to annual data is also clear.

In my analysis, the ACORN-SAT version warms at 0.14C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Once again, the adjustments have made the trend of the official long term series for Rutherglen significantly warmer than the regional neighbours. As with Deniliquin, the residual trend of the raw data (before adjustment) is not significant and so consistent with its neighbours.

Amberley

The raw data is not always more consistent, as Amberley shows. On the left, the BoM compares Amberley with Gatton (38 km west of Amberley) in the years around 1980. On the right, from my Amberley report, is the relative trend of daily temperature against 19 neighbouring stations (ie ACORN-SAT – neighbour). In contrast to Rutherglen and Deniliquin, the mostly flat trends mean that the ACORN-SAT site is not warming faster than the raw neighbours.

[Figures: BoM comparison of Amberley and Gatton (left); relative trend of Amberley ACORN-SAT minus 19 neighbours (right)]

The BoM’s caption:

Amberley is consistently warmer than Gatton prior to 1980 and consistently cooler after 1980. This, combined with similar results when Amberley’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherrypicking and hand-waving.

In my analysis, the ACORN-SAT version warms at 0.09C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, I class the ACORN-SAT version as borderline. The residual trend of the raw data (before adjustment) is -0.32C/decade which is very significant and so there is clearly a problem with the raw station record.

Conclusions

More cherrypicking, circularity, and hand-waving from the BoM — excellent examples of the inadequacy of the adjusted ACORN-SAT reference network and justification for a full audit of the Bureau’s climate change division.

BoM publishing station summaries justifying adjustments

Last night George Christensen MP gave a speech accusing the Bureau of Meteorology of “fudging figures”. He waved a 28-page document of adjustments around and called for a review. These adjustments can be found here. While I don’t agree that adjusting to account for station moves can necessarily be regarded as fudging figures, I am finding issues with the ACORN-SAT data set.

The problem is that most of the adjustments are not supported by known station moves, and many may be wrong or exaggerated. It also means that if the adjustment decreases temperatures in the past, claims of current record temperatures become tenuous. A maximum daily temperature of 50C written in 1890 in black and white is higher than a temperature of 48C in 2014, regardless of any post-hoc statistical manipulation.

But I do take issue with the set of summaries that has been released, which amounts to blatant “cherry-picking”.

Scroll down to the bottom of the BoM adjustment page. Listed are station summaries justifying the adjustments to Amberley, Deniliquin, Mackay, Orbost, Rutherglen and Thargomindah. The overlaps with the ones I have evaluated are Deniliquin, Rutherglen and Amberley (see previous posts). While the BoM finds the adjustments to these stations justified, my quality control check finds problems with the minimum temperature at Deniliquin and Rutherglen. I think the Amberley raw data may have needed adjusting.

WRT Rutherglen, BoM defends the adjustments with Chart 3 (my emphasis):

Chart 3 shows a comparison of the raw minimum temperatures at Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

WRT Deniliquin, BoM defends the adjustments on Chart 3 (my emphasis):

Chart 3 shows a comparison of minimum temperatures at Kerang (95 km west of Deniliquin) and Deniliquin in the years around 1971. Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

My analysis is superior to the BoM’s flawed analysis in three important ways:
1. I compare the trend in Rutherglen and Deniliquin with 23 and 27 stations respectively, not 3 and 1 neighbouring stations respectively (aka cherry-picking).
2. I also use a rigorous statistical panel test to show that the trend of the Rutherglen minimum exceeds the neighbouring group by 0.1C per decade, which is outside the 95% confidence interval for Australian station trends — not a visual assessment of a chart (aka eyeballing); a sketch of this kind of test follows the list below.
3. I use the trends of daily data and not annual aggregates, which are very sensitive to missing data.
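To make point 2 concrete, here is a rough R sketch of one way such a panel comparison could be framed (my simplification, not necessarily the exact test used in my reports): stack the daily minima from a target site and its neighbours, allow each station its own intercept, and test whether the target’s trend differs from the common trend. All data below are simulated.

```r
# Simulated panel: 10 stations, ~100 years of daily minima, with station 1 given an excess trend.
set.seed(2)
n_stn  <- 10
t_yrs  <- seq(0, 100, by = 1 / 365.25)
stn    <- factor(rep(seq_len(n_stn), each = length(t_yrs)))
years  <- rep(t_yrs, n_stn)
target <- as.numeric(stn == "1")                        # station 1 plays the ACORN-SAT role
tmin   <- 10 + 0.005 * years + 0.01 * target * years + rnorm(length(years), sd = 3)

# Station intercepts + common trend + excess trend for the target site
fit <- lm(tmin ~ stn + years + years:target)
tab <- coef(summary(fit))
tab[grep("target", rownames(tab)), ] * c(10, 10, 1, 1)  # excess trend and SE scaled to C/decade; t and p unchanged
```

A serious version would also allow for autocorrelation in the daily residuals (for example via generalized least squares), which the plain OLS fit above ignores.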

The Widening Gap Between Present Global Temperature and IPCC Model Projections

The increase in global temperature required to match the Intergovernmental Panel on Climate Change (IPCC) projections is becoming increasingly unlikely. A shift to a mean projected pathway of 3 degrees of increase by the end of the century would require an immediate, large, and sustained increase in temperature, which seems physically impossible.

Global surface temperatures have not increased at all in the last 18 years. The trend over the last few years is even slightly negative.

Global temperatures continue to track at the low end of the range of global warming scenarios, expanding a significant gap between current trends and the course needed to be consistent with IPCC projections.

On-going international climate negotiations fail to recognise the growing gap between the model projections based on global greenhouse emissions and observed temperatures, and the increasingly unlikely chance of those models being correct.

Research led by Ben Santer compared temperatures under the emission scenarios used by the IPCC to project climate change with satellite temperature observations at all latitudes.

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes,” their paper in PNAS reports. Moreover, it is not known why the models are failing.

“The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive.”

Explaining why they are failing will require a commitment to skeptical inquiry and a renewed reliance on the scientific method.

The unquestioning acceptance of the projections of IPCC climate models by the CSIRO, the Australian Climate Change Science Program, and many other traditional scientific bodies, which has informed policies and decisions on energy use and associated costs, must be called into question. So too must the long-term warming scenarios based on the link between emissions and increases in temperature.

Q: Where Do Climate Models Fail? A: Almost Everywhere

“How much do I fail thee. Let me count the ways”

Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.

Where do the models fail?

1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.

2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.

3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.

Answer: everywhere, except the polar regions where the uncertainty is greater.

East Pacific Region Temperatures: Climate Models Fail Again

Bob Tisdale, author of the awesome book “Who Turned on the Heat?” presented an interesting problem that turns out to be a good application of robust statistical tests called empirical fluctuation processes.

Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to model simulations (CMIP SST) for that region that show strong warming. The region in question is shown below.

The question is, what is the statistical significance of the difference between model simulations and the observations? The graph comparing the models with observations from Bob’s book shows two CMIP model projections strongly increasing at 0.15C per decade for the region (brown and green) and the observations increasing at 0.006C per decade (magenta).

However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 relative to the standard deviation, but this would be a very low power test, one only used by someone who wanted to obfuscate the obvious failure of climate models in this region.

Empirical fluctuation processes are a natural way to examine such questions in a powerful and generalized way, as we can ask of a strongly autocorrelated series — Has there been a change in level? — without requiring the increase to be a linear trend.

To illustrate the difference: if we assume a linear regression model, Y = mt + c, as is the usual practice, the statistical test for a trend is whether the trend coefficient m is greater than zero.

H0: m = 0, Ha: m > 0

If we test for a change in level, the EFP statistical test is whether m is constant for all of time t:

H0: m_i = m_0 for all times t_i.

For answering questions similar to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y=m+c deviates from the data. In R this is represented as the model Y~1. If we were to use a full model Y~t then this would test whether the trend of Y is constant, not whether the level of Y is constant. This is clearer if you have run linear models in R.
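For readers who want to try this, the strucchange package in R provides empirical fluctuation processes directly. Below is a minimal sketch on a made-up monthly series standing in for the SST (or model) data; the series, its length and its noise parameters are my own assumptions.

```r
library(strucchange)

# Hypothetical monthly anomaly series, 1980 onwards
set.seed(3)
y <- ts(cumsum(rnorm(384, sd = 0.02)) + rnorm(384, sd = 0.1), start = 1980, frequency = 12)

fs <- efp(y ~ 1, type = "OLS-CUSUM")  # y ~ 1: test constancy of the level, not of a trend
plot(fs)                              # EFP path with significance boundaries
sctest(fs)                            # formal test of the "constant level" null
```

The same call applied to a difference series (for example CMIP minus SST) gives the comparison discussed further below.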

Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.

The figure below shows, for each of the series in question, the EFP path (black line) and the 95% significance boundaries for the EFP path (red).

It can be seen clearly that, while the EFP path for the SST observation series shows some unusual behavior, with a significant change in level in 1998 and again in 2005, the level is currently not significantly above the level in 1985.

The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significance level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.

Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP path for the differences CMIP3-SST and CMIP5-SST. After some deviation from zero at about 1990, around 2000 the difference becomes very significant at the 5% level, and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the CMIP temperature simulations and the observational SST series in the Eastern Pacific region after the year 2000.

While the average of multiple model simulations shows a significant change in level over the period, in the parlance of climate science there is not yet a detectable change in level in the observations.

One could say I am comparing apples and oranges, as the models are average behavior while the SST observations are a single realization. But the fact remains that only the model simulations show warming; there is no support for warming of the region in the observations. This is consistent with the previous post on Santer’s paper showing failure of models to match the observations over most latitudinal bands.

Santer: Climate Models are Exaggerating Warming – We Don’t Know Why

Ben Santer’s latest model/observation comparison paper in PNAS finally admits what climate realists have been saying for years — climate models are exaggerating warming. From the abstract:

On average, the models analyzed … overestimate the warming of the troposphere. Although the precise causes of such differences are unclear…

Their figure above shows the massive model fail. The blue and magenta lines are the trends of the UAH and RSS satellite temperature observations averaged by latitude, with the Arctic at the left and the Southern Hemisphere to the right. Except for the Arctic, the observations are well outside all of the model simulations. As they say:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.

Anyone who has been following the AGW issue for more than a few years remembers that Ross McKitrick, Stephen McIntyre and Chad Herman already showed climate models exaggerating warming in the tropical troposphere in their 2010 paper. Before that there was Douglass, and in their usual small-minded way the Santer team do not acknowledge them. Prior to that, a few studies had differed on whether models significantly overstate the warming or not. McKitrick found that up to 1999 there was only weak evidence for a difference, but on updated data the models appear to significantly overpredict observed warming.

Santer had a paper where data after 1999 had been deliberately truncated, even though the data was available at the time. As Steve McIntyre wrote in 2009:

Last year, I reported the invalidity using up-to-date data of Santer’s claim that none of the satellite data sets showed a “statistically significant” difference in trend from the model ensemble, after allowing for the effect of AR1 autocorrelation on confidence intervals. Including up-to-date data, the claim was untrue for UAH data sets and was on the verge of being untrue for RSS_T2. Ross and I submitted a comment on this topic to the International Journal of Climatology, which we’ve also posted on arxiv.org. I’m not going to comment right now on the status of this submission.

Santer already had form at truncating inconvenient data, going back to 1995, as related by John Daly. It is claimed that he authored the notorious phrase “… a discernible human influence on global climate” in Chapter 8 of the 1995 IPCC Report, added in Madrid without the consent of the drafting scientists.

As John Daly says:

When the full available time period of radio sonde data is shown (Nature, vol.384, 12 Dec 96, p522) we see that the warming indicated in Santer’s version is just a product of the dates chosen. The full time period shows little change at all to the data over a longer 38-year time period extending both before Santer et al’s start year, and extending after their end year.

It was 5 months before ‘Nature’ published two rebuttals from other climate scientists, exposing the faulty science employed by Santer et al. (Vol.384, 12 Dec 1996). The first was from Prof Patrick Michaels and Dr Paul Knappenberger, both of the University of Virginia. Ben Santer is credited in the ClimateGate emails with threatening Pat Michaels with physical violence:

Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat the crap out of him. Very tempted.

I suppose that, now faced with a disparity between models and observations that can no longer be ignored, he has had to face the inevitable. Remember that Douglass, McKitrick, McIntyre and other climate realists reported the significant model/observation disparity in the peer-reviewed literature first. You won’t see them in Santer’s list of citations, which is hardly a classy act.

Failing to give due credit. Hiding the decline. Truncating the data. Threatening violence to critics. This is the AGW way.

Solar Cycle 24 peaked? The experimentum crucis begins.

The WSO Polar field strengths – early indicators of solar maxima and minima – have dived towards zero recently, indicating that it’s all downhill from here for solar cycle 24.

Polar field reversals can occur within a year of sunspot maximum, but cycle 24 has been so insipid that it would not be surprising if the maximum sunspot number fails to reach the NOAA-predicted peak of 90 spots per month, and gets no higher than the current 60 spots per month.

The peak in solar intensity was predicted for early 2013, so this would be early, and may be another indication that we are in for a long period of subdued solar cycles.

A prolonged decline in solar output will provide the first crucial experiment to distinguish the accumulation theory of solar driven temperature change, and the AGW theory of CO2 driven temperature change. The accumulation theory predicts global temperature will decline as solar activity falls below its long-term average of around 50 sunspots per month. The AGW theory predicts that temperature will continue to increase as CO2 increases, with little effect from the solar cycle.
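A toy illustration of the two competing predictions, as I read them, is sketched below in R; the sensitivities and sunspot numbers are arbitrary assumptions chosen only to show the qualitative divergence, not fitted values.

```r
# Accumulation theory: temperature follows the cumulative sunspot anomaly about ~50 spots/month.
# CO2-driven theory: temperature keeps rising with concentrations regardless of the solar cycle.
yrs <- 1:60
ssn <- c(rep(80, 40), rep(30, 20))        # hypothetical annual-mean sunspot numbers: active, then quiet
co2 <- seq(340, 460, length.out = 60)     # hypothetical rising CO2 (ppm)

T_accum <- 0.0005 * cumsum(ssn - 50)      # integrate the solar anomaly (arbitrary sensitivity)
T_co2   <- 0.01 * (co2 - co2[1])          # monotonic CO2 response (arbitrary sensitivity)

matplot(yrs, cbind(T_accum, T_co2), type = "l", lty = 1,
        xlab = "year", ylab = "temperature anomaly (C)")
legend("topleft", c("accumulation (solar)", "CO2-driven"), col = 1:2, lty = 1)
```

The accumulation curve rises while the sun is active and then turns down once activity drops below the threshold, whereas the CO2-only curve keeps climbing; that divergence is the crucial experiment.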

An experimentum crucis is considered necessary for a particular hypothesis or theory to be considered an established part of the body of scientific knowledge. A theory such as AGW that is in accordance with known data but has not yet passed a critical experiment is typically considered unworthy of full scientific confidence.

Prior to this moment, BOTH solar intensity was generally above its long term average, AND greenhouse gases were increasing. BOTH of these factors could explain generally rising global temperature in the last 50 years. However, now that one factor, solar intensity, is starting to decline and the other, CO2, continues to increase, their effects are in opposition, and the causative factor will become decisive.

For more information see WUWT’s Solar Reference page.

AGW Doesn’t Cointegrate: Beenstock’s Challenging Analysis Published

The Beenstock, Reingewertz, and Paldor paper on lack of cointegration of global temperature with CO2 has been accepted! This is a technical paper that we have been following since 2009 when an unpublished manuscript appeared, rebutting the statistical link between global temperature increase and anthropogenic factors like CO2, and so represents another nail in the coffin of CAGW. The editor praised the work as “challenging” and “needed in our field of work.”

Does the increase in CO2 concentration and global temperature over the past century constitute a “warrant” for the anthropogenic global warming (AGW) theory? Such a relationship is necessary for global warming, but not sufficient, as a range of other effects may make warming due to AGW trivial or less than catastrophic.

While climate models (GCMs) show that enhancement of the greenhouse effect can cause a temperature increase, the observed upward drift in global temperature could have other causes, such as high sensitivity to persistent warming from enhanced solar insolation (accumulation theory). There is also the urban heat island effect and natural cycles in operation.

In short, the CO2/temperature relationship may be spurious, have an independent cause, or temperature may cause CO2 increase, all of which falsify CAGW here and now.

Cointegration attempts to fit the random changes in drift of two or more series together to provide positive evidence of association where those variables are close to a random walk.
The form of time series process appropriate to this model is referred to as I(n), where n is the number of differencing operations needed before the series has a finite mean (that is, becomes stationary and does not drift far from the mean). A range of statistical tests, such as the Dickey-Fuller and Phillips-Perron procedures, identify the order of integration.
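As a concrete illustration (using the tseries package and a made-up doubly integrated series, not the actual forcing data), the order of integration is usually checked by repeating a unit-root test on the series and on its differences:

```r
library(tseries)

set.seed(4)
x <- cumsum(cumsum(rnorm(200)))        # a doubly integrated random walk: an I(2)-like series

adf.test(x)                            # levels: unit root typically not rejected
adf.test(diff(x))                      # first difference: still typically not rejected for an I(2) series
adf.test(diff(x, differences = 2))     # second difference: stationary, hence I(2)
pp.test(diff(x, differences = 2))      # Phillips-Perron check on the same differenced series
```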

Beenstock et al. find that while the temperature and solar irradiance series are I(1), the anthropogenic greenhouse gas (GHG) series are I(2), requiring differencing twice to yield a stationary series.

This fact blocks any evidence for AGW from an analysis of the time series. The variables may still somehow be causally connected, but not in an obvious way. Previous studies using simple linear regression to make attribution claims must be discounted.

The authors also show evidence of a cointegrating relationship between the temperature (corrected for solar irradiance) and changes in the anthropogenic variables. This highlights what I have been saying in the accumulation theory posts: the dynamic relationships between these variables must be given due attention, lest spurious results be obtained.

While this paper does not debunk AGW, it does debunk naïve linear regression methods, and demonstrates the power of applying rigorous statistical methodologies to climate science.

Still no weakening of the Walker Circulation

Once upon a time, a weakening of the East-West Pacific overturning circulation – called the Walker circulation – was regarded in climate science as a robust response to anthropogenic global warming. This belief was based on studies in 2006 and 2007 using climate models.

Together with a number of El Nino events (that are associated with a weakening of the Walker circulation) the alarm was raised in a string of papers (3-6) that global warming was now impacting the Pacific Ocean and that the Walker circulation would further weaken in the 21st century, causing more El Ninos and consequently more severe droughts in Australia.

These alarms, in the context of a severe Australian drought, gave rise to a hysterical reaction: the building of water desalination plants in the major capital cities of Australia, all but one now moth-balled, and costing consumers upwards of $290 per year in additional water costs.

In 2009 I did a study with Anthony Cox to see if there was any significant evidence of a weakening of the Walker circulation when autocorrelation was taken into account. We found no empirical basis for the claim that observed changes differed from natural variation, and so could not be attributed to Anthropogenic Global Warming.

Since 2009, a number of articles have shown that, contrary to the predictions of climate models, the Walker circulation has been strengthening (7-12). A recent article gives models a fail: “Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results” here.

The paper by Sohn argues that increases in the frequency of El Niño cause the apparent weakening of the Walker Circulation, not the other way around, and it is well known that climate models fail to reproduce such trends.

The problems with models may rest in their treatment of mass flows. In “Indian Ocean warming modulates Pacific climate change” here, they find that

“Extratropical ocean processes and the Indonesia Throughflow could play an important role in redistributing the tropical Indo-Pacific interbasin upper-ocean heat content under global warming.”

Finally from an abstract in 2012 “Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record” here:

“Additionally, none of the disparate estimates of post-1900 total eastern equatorial Pacific sea surface temperature trends are larger than can be generated by statistically stationary, stochastically forced empirical models that reproduce ENSO evolution in each reconstruction.”

Roughly translated, this means there is no evidence of any change to the Walker circulation beyond natural variation – weakening or otherwise.

Nice to be proven right again. The “weakening of the Walker Circulation” is another scary bedtime story for global warming alarmists, dismissed by a cursory look at the evidence.

References

1. Held, I. M. and B. J. Soden (2006) Robust responses of the hydrological cycle to global warming. J. Climate 19:5686–5699.

2. Vecchi, G. A. and B. J. Soden (2007) Global warming and the weakening of the tropical circulation. J. Climate 20:4316–4340.

3. Power, S. B. and I. N. Smith (2007) Weakening of the Walker circulation and apparent dominance of El Niño both reach record levels, but has ENSO really changed? Geophys. Res. Lett. 34.

4. Power, S. B. and G. Kociuba (2011) What caused the observed twentieth-century weakening of the Walker circulation? J. Clim. 24:6501–6514.

5. Yeh, S.-W., et al. (2009) El Niño in a changing climate. Nature 461(7263):511–514.

6. Collins, M., et al. (2010) The impact of global warming on the tropical Pacific Ocean and El Niño. Nat. Geosci. 3:391–397.

7. Li, G. and B. Ren (2012) Evidence for strengthening of the tropical Pacific Ocean surface wind speed during 1979–2001. Theor. Appl. Climatol. 107:59–72.

8. Feng, M., et al. (2011) The reversal of the multidecadal trends of the equatorial Pacific easterly winds, and the Indonesian Throughflow and Leeuwin Current transports. Geophys. Res. Lett. 38:L11604.

9. Feng, M., M. J. McPhaden and T. Lee (2010) Decadal variability of the Pacific subtropical cells and their influence on the southeast Indian Ocean. Geophys. Res. Lett. 37:L09606.

10. Qiu, B. and S. Chen (2012) Multidecadal sea level and gyre circulation variability in the northwestern tropical Pacific Ocean. J. Phys. Oceanogr. 42:193–206.

11. Merrifield, M. A. (2011) A shift in western tropical Pacific sea-level trends during the 1990s. J. Clim. 24:4126–4138.

12. Merrifield, M. A. and M. E. Maltrud (2011) Regional sea level trends due to a Pacific trade wind intensification. Geophys. Res. Lett. 38:L21605.

13. Sohn, B. J., S. W. Yeh, J. Schmetz and H. J. Song (2012) Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results. Climate Dynamics.

14. Luo, J.-J., W. Sasaki and Y. Masumoto. Indian Ocean warming modulates Pacific climate change.

15. Solomon, A. and M. Newman (2012) Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record. Nature Clim. Change 2:691–699.

Circularity and the Hockeystick: coming around again

The recent posts at climateaudit and WUWT show that climate scientists Gergis and Karoly were willing to manipulate their study to ensure a hockeystick result in the Southern Hemisphere, and resisted advice from editors of the Journal of Climate to report alternative approaches to establish robustness of their study.

The alternative the editors suggested of detrending the data first, revealed that most of the proxies collected in the Gergis study were uncorrelated with temperature, and so would have to be thrown out.

A false finding of “unprecedented warming” is a false positive. False positives are a characteristic of the circular fallacy. The circular logic arising from the method of screening proxies by correlation was written up by me in a geological magazine (“Reconstruction of past climate using series with red noise”, DRB Stockwell, AIG News 8, 314, 2005), and also occupies a chapter in my 2006 book “Niche Modeling: Predictions from Statistical Distributions” (D Stockwell, Chapman & Hall/CRC).

It is gratifying to see the issue still has legs, though, as McIntyre notes in the discussion of his post, he has been the only one to cite the AIG article in the literature. It has been widely discussed on the blogs, but it remains a nettle not yet grasped by climate scientists.

Because the topic is undiscussed in climate science academic literature, we cited David Stockwell’s article in an Australian newsletter for geologists (smiling as we did so.) The topic has been aired in “critical” climate blogs on many occasions, but, as I observed in an earlier post, the inability to appreciate this point seems to be a litmus test of being a real climate scientist.

It’s now fourteen years since the publication, with great fanfare, of Mann, Bradley, and Hughes’ “Global-scale temperature patterns and climate forcing over the past six centuries”, with the “premature rush to adoption” that followed: the creation of research agendas in multiple countries and institutions devoted to proxy study, and the amassing of warehouses of cores. In any normal science the basics of the methodologies would be well understood before such a rush to judgment.

Considered in the context of almost a decade of related public blog discussion of the issue, that screening proxies on 20th century temperatures gives rise to hockeysticks is a topic apparently only discussed in private by climate scientists:

The Neukom email from 07 June 2012 08:55 “…I also see the point that the selection process forces a hockey stick result but: – We also performed the reconstruction using noise proxies with the same AR1 properties as the real proxies. – And these are of course resulting in a noise-hockey stick.”
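The effect is easy to reproduce. The R sketch below is my own toy version of the argument in the AIG News article and the Neukom email: generate pure AR(1) “proxies”, screen them by correlation with a rising 20th-century “temperature”, and average the survivors. All numbers are arbitrary.

```r
set.seed(5)
n_yrs  <- 1000                                     # e.g. the years 1000-1999
calib  <- 901:1000                                 # the "20th century" calibration window
target <- seq(0, 1, length.out = length(calib))    # rising instrumental-period temperature

proxies <- replicate(1000, as.numeric(arima.sim(list(ar = 0.9), n = n_yrs)))  # red-noise "proxies"
r       <- apply(proxies[calib, ], 2, cor, y = target)
keep    <- abs(r) > 0.5                            # "screening": keep proxies that correlate
recon   <- rowMeans(sweep(proxies[, keep], 2, sign(r[keep]), "*"))  # orient and average the survivors

plot(recon, type = "l", xlab = "year index", ylab = "reconstruction")
# The screened average of pure noise shows a spurious 20th-century uptick: a "noise hockey stick".
```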

Of course, the problem is that rigorous analysis would fail to confirm the original results of many studies, would show that many of the proxies collected and used by their colleagues are useless, and would force the abandonment of the theory that contemporary warming is “unprecedented”.

In light of all the data and studies from the last decade, I am convinced of only one thing – that the fallacy of data and method snooping is simply not understood by most climate scientists, who tend to see picking and choosing between datasets, ad hoc and multiple methods as opportunities to select the ones that produce their desired results.

This highlights the common wisdom of asking “What about all the catastrophe theories we have seen adopted and later abandoned over the years?” And while climate scientists dismiss such questions as denial, after you have witnessed the rise and fall of countless environmental hysterias over the years, you become more circumspect, and adjust your estimates of confidence to account for the low level of diligence in the field.

Is the problem alarmism, or prestige-seeking?

We all make mistakes. Sometimes we exaggerate the risks, and sometimes we foolishly blunder into situations we regret. Climate skeptics often characterize their opponents as ‘alarmist’. But is the real problem a tendency for climate scientists to be ‘nervous ninnies’?

I was intrigued by the recent verdict in the case of the scientists before an Italian court in the aftermath of a fatal earthquake. Roger Pielke Jr. relates that all is not as it seems.

There is a popular misconception in circulation that the guilty verdict was based on the scientists’ failure to accurately forecast the devastating earthquake.

Apparently the scientists were not charged with failing to predict a fatal earthquake, but with failure of due diligence:

Prosecutors didn’t charge commission members with failing to predict the earthquake but with conducting a hasty, superficial risk assessment and presenting incomplete, falsely reassuring findings to the public.

But when the article turns to motivation, it is not laziness but prestige.

Media reports of the Major Risk Committee meeting and the subsequent press conference seem to focus on countering the views offered by Mr. Giuliani, whom they viewed as unscientific and had been battling in preceding months. Thus, one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.

If officials were expressing a view about authority rather than a careful assessment of actual earthquake risks, this would help to explain their sloppy treatment of uncertainties.

So there are examples both of alarmism and of failure to alarm by the responsible authorities. Both, potentially, motivated by maintenance of their prestige. Could the same motivations be behind climate alarmism? After all, what gains are there from asserting that ‘climate changes’?

The Creation of Consensus via Administrative Abuse

The existence of ‘consensus’ around core claims of global warming is often cited as some kind of warrant for action. A recent article by Roger Pielke Jr reported the IPCC response to his attempts to correct biases and errors in AR4 in his field of expertise — extreme events losses. As noted at CA, he made four proposed error corrections to IPCC, all of which were refused.

Since sociological and psychological research is now regarded as worthy of a generous share of science funding, a scholarly mind asks: if failure to admit previous errors could be a strategy for building the climate consensus, what does that say about the logical correctness of the process, and what are the other strategies? Could Lewandowsky’s denigration of people who disagree be worth $1.7m of Australian Research Council approved taxpayer funds to help create climate consensus?

Wikipedia appears to be another experimental platform for consensus building. The recent comment by a disillusioned editor describes many unpleasant strategic moves executed in the name of building a consensus for the cold-fusion entries on Wikipedia.

Foremost is the failure of administrators to follow the stated rules. Could this, along with failure to admit errors and denigration of opponents, also be a consensus-creation strategy? The parallels with the IPCC are uncanny.

Some excerpts below.

Alan, do you know what “arbitration enforcement” is? Hint: it is not arbitration. Essentially, the editor threatened to ask that you be sanctioned for “wasting other editor’s time,” which, pretty much, you were. That was rude, but the cabal is not polite, it’s not their style. A functional community would educate you in what is okay and what is not. The cabal just wants you gone. *You* are the waste of time, for them, really, but they can’t say that.

Discouraging objectors – the main goal.

I remember now why I gave up in December last year. But I thought it was my turn to put in a shift or two at the coalface (or whatever).

Here is what I did on Wikipedia. I had a long-term interest in community consensus process, and when I started to edit Wikipedia in 2007, I became familiar with the policies and guidelines and was tempered in that by the mentorship of a quite outrageous editor who showed me, by demonstration, the difference or gap between policies and guidelines and actual practice. I was quite successful, and that included dealing with POV-pushers and abusive administrators, which is quite hazardous on Wikipedia. If you want to survive, don’t notice and document administrative abuse. Administrators don’t like it, *especially* if you are right. Only administrators, in practice, are long-term allowed to do that, with a few exceptions who are protected by enough administrators to survive.

Shades of the IPCC.

So if you want to affect Wikipedia content in a way that will stick, relatively speaking, you will need to become *intimately* familiar with policies. You can do almost anything in this process, except be uncivil or revert war. That is, you can make lots of mistakes, but *slowly*. What I saw you doing was making lots of edits. Andy asked you to slow down. That was a reasonable request. But I’d add, “… and listen.”

Good advice for dealing with administrators of consensus creation processes.

Instead, it appears you assumed that the position of the other editors was ridiculous. For some, perhaps. But you, yourself, didn’t show a knowledge of Reliable Source and content policies.

Lots of editors have gone down this road. It’s fairly easy to find errors and imbalance in the Wikipedia Cold fusion article. However, fixing them is not necessarily easy, there are constituencies attached to this or that, and averse to this or that. I actually took the issue of the Storms Review to WP:RSN, and obtained a judgment there that this was basically RS. Useless, because *there were no editors willing to work on the article who were not part of the pseudoskeptical faction.* By that time, I certainly couldn’t do it alone, I was WP:COI, voluntarily declared as such.

It seems you need an ally who is part of the in-crowd in order to move the consensus towards an alternative proposition.

When the community banned me, you can be sure that it was not mentioned that I had been following COI guidelines, and only working on the Talk page, except where I believed an edit would not be controversial. The same thing happened with PCarbonn and, for that matter, with Jed Rothwell. All were following COI guidelines.

Following the rules does not provide immunity.

The problem wasn’t the “bad guys,” the problem was an absence of “good guys.” There were various points where editors not with an agenda to portry cold fusion as “pathological science,” assembled, and I found that when the general committee was presented with RfCs, sanity prevailed. But that takes work, and the very work was framed by the cabal as evidence of POV-pushing. When I was finally topic banned, where was the community? There were only a collection of factional editors, plus a few “neutral editors” who took a look at discussions that they didn’t understand and judged them to be “wall of text.” Bad, in other words, and the discussion that was used as the main evidence was actually not on Wikipedia, it was on meta, where it was necessary. And where it was successful.

A better description of the real-world response to scholarship I have yet to see.

Yes, I was topic banned on Wikipedia for successfully creating a consensus on the meta wiki to delist lenr-canr.org from the global blacklist. And then the same editors as before acted, frequently, to remove links, giving the same bankrupt arguments, and nobody cares. So all that work was almost useless.

So consensus is ultimately created via administrative abuse!

Now it’s possible to see why blogs purporting to represent an authoritative consensus, such as RealClimate, SkepticalScience and LewsWorld, must delete objections:

… furiously deleting inconvenient comments that ask questions like “What are you going to do now that the removal of the fake responses shows a conclusion reverse of that of your title”?

But what is the result of administrative abuse?

That is why so many sane people have given up on Wikipedia, and because so many sane people have given up, what’s left?

There would be a way to turn this situation around, but what I’ve seen is that not enough people care. It might take two or three. Seriously.

Opinions on the New Zealand AGW Judgement

Apropos the New Zealand AGW case, comments below by Goon and Ross:

# Goon (8) Says:
September 8th, 2012 at 3:45 pm

Justifying the unjustifiable. Don’t believe me…. then here is where the raw data lives.

http://cliflo.niwa.co.nz/

Register and have a look for yourself. Nothing even remotely approaching a 1 degree/century trend in the raw data from longer term climate sites. The only way NIWA can come up with this is by applying an extremely dodgy ‘adjustment’ to make all pre-1950′s temperatures colder and everything after warmer and hey presto, woe is me, there’s a trend. The arguement being tested in the court wasn’t anything to do with AGW, rather it was just that the methodology applied by NIWA to calculate the ‘sky is falling faster than the rest of the world’ trend is a complete crock. A trend which is then used by the same scientists to justify ever more research and lapped up by politicians keen to get their hands into your wallet.

In terms of climate change, I’m agnostic about the whole thing…..climate changes naturallly all the time and human activities no doubt contribute as well but what pisses me off is the dodgyness put up by NIWA as science. It wouldn’t stand up in any other discipline but spin disguised as science seems to be de riguer for climate science.

# Ross12 (186) Says:
September 8th, 2012 at 4:16 pm

Goon
You are correct in my view. This case was nothing to with AGW as such. It was to do with how the temperature data was collected and how it was analysed. The judge was very wrong not to allow Bob Dedekind’s evidence ( because he was supposed not an expert) — the statistical analysis for the data would be using methods similar to a number of different fields. So Dedekinds stats expertise should have been allowed.

Here is a summary of his position :

“… In fact, NIWA had to do some pretty nifty footwork to avoid some difficult questions.

For instance, where was the evidence that RS93 had ever been used on the 7SS from 1853-2009? Absent. We were asked to believe Dr Wratt’s assertion that it had (in 1992), but ALL evidence had apparently disappeared. Not only that, but the adjustments coincidentally all matched the thesis adjustments, which all ended in 1975. And no new adjustments were made between 1975 and 1992. Hmm.

Another question: Why, when NIWA performed their Review at taxpayers’ expense in 2010, did they NOT use RS93? They kept referring to it whenever the 7SS adjustment method was discussed, and it was a prime opportunity to re-do their missing work, yet instead they used an unpublished, untested method from a student’s thesis written in 1981.

Please understand this: the method used in the NIWA Review in 2010 has no international peer-reviewed scientific standing. None. It is mentioned nowhere, outside of Salinger’s thesis. NIWA have never yet provided a journal or text-book reference to their technique.

Yet a few people were able to do (at zero cost to the taxpayer) what NIWA should have done in the first place – produce a sensible 7SS using the same peer-reviewed technique NIWA kept referencing repeatedly, viz: RS93. In fact, one of NIWA’s complaints during the court case was that we applied the RS93 method “too rigorously”! In other words, when we did the job properly using an internationally-accepted method, we got a different result to NIWA’s, and they didn’t like it. In fact, the actual trend over the last 100 years is only a third of NIWA’s trend.

Their only response to date has been a desperate effort to try to show that the RS93 method as published is “unstable”. Why then did they trumpet it all this time? And why did they never challenge it in the literature between 1993 and 2010?

NIWA got away with it in the end, but only because the judge decided that he shouldn’t intervene in a scientific dispute, and our credentials (not the work we did) were not impressive enough. ”

For the AGW supporters to suggest ( as Prof Renwick from Victoria said) this a vindication of the science is utter nonsense. The judge says he is not going to make decisions about the science.
Some how I don’t think we have heard the end of this.

Lewandowsky article is a truly appalling piece of social science – Aitkin

Don Aitkin just weighed in on the Lewandowsky affair as Queensland University’s John Cook doubles down at the Conversation.

about 1 hour ago
Don Aitkin writer, speaker and teacher (logged in via email @grapevine.com.au)

Oh dear. The Lewandowsky article is a truly appalling piece of social science. How did it ever get past ordinary peer review? It, and the one above, demonstrate the kind of problems that Jim Woodgett in Nature two days ago and John Ioannidis a few years ago have pointed out: the failure of researchers to get their own house in order, and the poor quality of much published research. I have posted on that subject today on my website: www.donaitkin.com. That was before I came to all this! Perhaps someone a little better than Lewandowsky could do some research on why people believe in ‘climate change’, and what their characteristics are…

Thank God there are true scholars in Australia. Unfortunately they are retired.

Carbon abatement from wind power – zero

Zip, nil, nada. Those are the findings of a two-year analysis of Victoria’s wind-farm developments by mechanical engineer Hamish Cumming.

Despite hundreds of millions of dollars of taxpayers’ money from subsidies and green energy schemes driven by the renewable energy target, surprise, surprise, Victoria’s wind-farm developments have saved virtually zero carbon dioxide emissions due to their intermittent, unreliable power output.

Wind power advocate Dr Mark Diesendorf, an Australian academic who teaches Environmental Studies at the University of New South Wales, formerly Professor of Environmental Science at the University of Technology, Sydney, and a principal research scientist with CSIRO, has not been shy about bad-mouthing wind power realists. See the renewable industry lobby group energyscience for his ebook The Base Load Fallacy and other Fallacies disseminated by Renewable Energy Deniers.

The verdict is in – wind power is a green theory that simply generates waste heat.

However, Cumming said the reports on greenhouse gas abatement did not take into account the continuation of burning coal during the time the wind farms were operational.

“The reports you refer to are theoretical abatements, not real facts. Coal was still burnt and therefore little if any GHG was really abated,” he told Clarke.

“Rather than trying to convince me with reports done by or for the wind industry, or the government departments promoting the industry, I challenge you to give me actual coal consumption data in comparison to wind generation times data that supports your argument.

Also see JoNova

Lewandowsky — again

This guy, a UWA employee, was shown by Arlene Composta to be the most naive of leftists.

He now says that climate skeptics are conspiracy theorist wackos.

We have responded to this guy before:

He thinks the cognitive processes of Anthropogenic Global Warming (AGW) sceptics are deficient and on the same level as those of “truthers” and other “conspiracy theorists”. This is serious: for merely questioning the ‘science’ of AGW one now faces the opprobrium of having one’s mental ability questioned.

JoNova raises valid questions about his survey methodology here.

The word “fabrication” has been bandied about.

If so, he gives proof that the term “Psychological Science” is a contradiction in terms.

Not cointegrated, so global warming is not anthropogenic – Beenstock

Cointegration has been mentioned previously and is one of the highest ranking search terms on landshape.

We have also discussed the cointegration manuscript from 2009 by Beenstock and Reingewertz, and I see he has picked up another author and submitted it to an open access journal here.

Here is the abstract.

Polynomial cointegration tests of anthropogenic impact on global warming, by M. Beenstock, Y. Reingewertz, and N. Paldor

Abstract. We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

The bottom line:

Once the I(2) status of anthropogenic forcings is taken into consideration, there is no significant effect of anthropogenic forcing on global temperature.

They do, however, find a possible effect of the CO2 first difference:

The ADF and PP test statistics suggest that there is a causal effect of the change in CO2 forcing on global temperature.

They suggest “… there is no physical theory for this modified theory of AGW”, although I would think the obvious one would be that the surface temperature adjusts over time to higher CO2 forcing, such as through intensified heat loss by convection, so returning to an equilibrium. However, when revised solar data is used the relationship disappears, so the point is probably moot.

When we use these revised data, Eqs. (11) and (12) remain polynomially uncointegrated. However, Eq. (15) ceases to be cointegrated.

Finally:

For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during 150 yr there is no physical reason why these variables should share the same order of integration. However, the fact that they do not share the same order of integration over this period means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.
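For readers who want to experiment, a standard two-variable cointegration check is easy to run in R. The sketch below (tseries package, simulated series) illustrates ordinary cointegration only, not the polynomial cointegration machinery of the paper.

```r
library(tseries)

set.seed(6)
w <- cumsum(rnorm(200))          # a shared stochastic trend
x <- w + rnorm(200, sd = 0.5)    # two I(1) series built on the same trend
y <- 2 * w + rnorm(200, sd = 0.5)

po.test(cbind(x, y))             # Phillips-Ouliaris test: rejects "no cointegration" for these series

# Engle-Granger style check (note: strictly, the residual ADF test needs its own critical values)
adf.test(residuals(lm(y ~ x)))
```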

Abandon Government Sponsored Research on Forecasting Climate – Green

Kesten Green, now of U South Australia, has a manuscript up called Evidence-based Improvements to Climate Forecasting: Progress and Recommendations, arguing that evidence-based research on climate forecasting finds no support for fear of dangerous man-made global warming, because simple, inexpensive extrapolation models are more accurate than the complex and expensive “General Circulation Models” used by the Intergovernmental Panel on Climate Change (IPCC).

Their rigorous evaluations of the poor accuracy of climate models support the view that there is no trend in global mean temperatures that is relevant for policy makers, and that…

[G]overnment initiatives that are predicated on a fear of dangerous man-made global warming should be abandoned. Among such initiatives we include government sponsored research on forecasting climate, which are unavoidably biased towards alarm (Armstrong, Green, and Soon 2011).

This is also what I found in the evaluation of the CSIRO’s use of the IPCC drought models. In fact, the use of the climate model projections is positively misleading, as they show decreasing rainfall over the last century when rainfall actually increased.

This is not welcome news to the growing climate projection science industry that serves the rapidly growing needs of impact and adaptation assessments. A new paper called Use of Representative Climate Futures in impact and adaptation assessment by Penny Whetton, Kevin Hennessy and others proposes another ad-hoc fix for climate model inaccuracy called Representative Climate Futures (or RCFs for short). Apparently the idea is that the wide range of results given by different climate models is classified as “most likely” or “high risk” or whatever, and the researcher is then free to choose whichever set of models he or she wishes to use.

Experiment Resources.Com condemns ad hoc-ery in science:

The scientific method dictates that, if a hypothesis is rejected, then that is final. The research needs to be redesigned or refined before the hypothesis can be tested again. Amongst pseudo-scientists, an ad hoc hypothesis is often appended, in an attempt to justify why the expected results were not obtained.

Read “poor accuracy of climate models” for “hypothesis is rejected” and you get the comparison. Models that are unfit for the purpose need to be thrown out. RCF appears to be a desperate attempt to do something, anything, with grossly inaccurate models.

On freedom of choice, Kesten Green says:

So uncertain and poorly understood is the global climate over the long term that the IPCC modelers have relied heavily on unaided judgment in selecting model variables and setting parameter values. In their section on “Simulation model validation in longer-term forecasting” (p. 969–973), F&K observe of the IPCC modeling procedures: “a major part of the model building is judgmental” (p. 970).

Which is why it’s not scientific.

Summary: NZ Climate Science Coalition vs. NIWA

More thought-provoking thoughts from Richard on the duties and responsibilities of statutory bodies like NIWA. (NIWA is actually an incorporated body that is owned by the Crown, wherever that plays into things.)

Anyway, everyone seems to agree that their handling of the temperature records in New Zealand is biased and deficient. The issue is, does scientific incompetence violate their charter?

With all this evidence, the Coalition case is looking very good on the plain facts. The threat comes from the need to prove that NIWA has a duty to apply good science. They deny this, and effectively say that Parliament has given them a free hand to do what they like. They argue that the obligation to pursue excellence is merely “aspirational”, being un-measurable and unenforceable.

They are prepared to damage their reputations by arguing they are under no obligation to do good science. They are prepared to jettison this in favor of “independence”. What follows from NIWA’s independence from any duty to do good science? Only politics.

But how tied is their present funding to their present obedience to their present political masters? If strongly, it corrupts and debases their “independence” to mere subservience and makes a mockery of their representations to the Court. For it could be that their right to self-determination is no more than a claim to public funding by virtue of their obedience.

In fact the NIWA website states just that:

CRIs are stand-alone companies with a high degree of independence. Each year, the shareholding Ministers lay out their expectations for the Crown Research Institutes in an ‘Operating Framework’. Amongst other things, this defines how CRIs should interpret their obligation to maintain financial viability.

The link to “Operating Framework” is dead, unfortunately.

Waiting anxiously for the judge's determination on this.

Final Day: NZ Climate Science Coalition vs. NIWA

Quote from the defense:

He must have been responding to our charge that NIWA did not perform its statutory duty. He said: “They’re not duties, they’re not called duties, they’re called operating principles.” This seemed to come from the current legislation, or recent decisions.

Where in the operating principles is the principle that government climate scientists should be “in bed” with green groups like the WWF, putting pressure on governments to enact green policies?

The disclosures reveal several instances of government funded scientists working with environmental pressure groups. In one case, Greenpeace activists are seen helping CRU scientists to draft a letter to the Times and in another working closely with the World Wildlife Fund to put pressure on governments regarding climate change.

Industry groups and the general public are slowly realizing that carbon taxes are going to cost a truckload of money for little benefit.

Isn't it time that industry groups stepped up to the plate and started funding research into questions that are still controversial, such as the magnitude of solar effects on global temperature, which could help mitigate green hysteria?

Day 2: NZ Climate Science Coalition vs. NIWA

Quote of the day from New Zealand’s National Institute of Water and Atmospheric Research:

The matters [at issue] arise between the plaintiff’s (the Coalition’s) Statement of Claim (SOC) and the Defendant’s (NIWA’s) Statement of Defence (SOD). NIWA counter-claimed they had no obligation to pursue excellence or to use best-quality scientific practices and also that the national temperature record was not only not official, but they themselves had no obligation to produce or maintain it.

Benefits of Global Warming

A new WSJ article signed by 16 scientists:

A recent study of a wide variety of policy options by Yale economist William Nordhaus showed that nearly the highest benefit-to-cost ratio is achieved for a policy that allows 50 more years of economic growth unimpeded by greenhouse gas controls. This would be especially beneficial to the less-developed parts of the world that would like to share some of the same advantages of material well-being, health and life expectancy that the fully developed parts of the world enjoy now. Many other policy responses would have a negative return on investment. And it is likely that more CO2 and the modest warming that may come with it will be an overall benefit to the planet.

Further comment on Econlog, discussing an article here.

I hope the reader will agree with me that Nordhaus is certainly inviting the reader to infer that now and in the future, the best available studies (as summarized by leading scholar Richard Tol) show that emissions of GHG will cause net damages.

The issue is whether most studies support the view that there will be a net benefit from CO2 emissions for at least 30 to 50 years, while net costs only occur after an increase in global temperatures of 1.2 to 2 degrees C.

Perth 1940 Max Min Daily Temps

Previous posts have introduced the work that Chris Gillham is doing in spot auditing the accuracy of the Bureau of Meteorology’s temperature records. He has now re-recorded the daily max and min temperatures from one Australian weather station for one year, Perth 9034 in 1940, using original sources in The West Australian newspaper.

Below is an initial look at the historic data (in red) compared to the BoM’s “unadjusted” or “raw” records (grey) for the station.

It's fairly clear that there are a lot of errors. The minimum temperatures, however, are shockers. Each of the red lines seen on the lower series above is an error in the daily minimum, mostly down.

Mean of the max differences (days that differ only) = +0.20C
Mean of the min differences (days that differ only) = -1.18C
Mean of the max differences (all days) = +0.04C
Mean of the min differences (all days) = -0.33C

While the average error in the max temperatures on the affected days is only +0.2C, the average error in the min temperatures is a whopping -1.18C! Over the whole year that shifts the annual minimum temperature by -0.33C.

The diurnal range is increased by an average of 0.4C. While these errors are for only one year at one station, it is noteworthy that their magnitude is similar to the change in diurnal range attributed to global warming.

The data file is here – perth-1940-actual-raw. You need to open it in Excel and save it as a CSV file.

The code below should run on the datafile.

# Read the year of daily data as a multivariate time series
P1940=ts(read.csv("perth-1940-actual-raw.csv"),start=1940,freq=365)
l=2  # plotting line width
# Plot newspaper values in red (col=2) and BoM "raw" values in grey
# (columns 3/7 hold the newspaper max/min, columns 4/8 the BoM raw max/min)
plot(P1940[,3],col=2,ylim=c(0,45),main="Perth Regional Office 9034",ylab="Temperature C",lwd=l)
lines(P1940[,4],col="gray",lwd=l)
lines(P1940[,7],col=2,lwd=l)
lines(P1940[,8],col="gray",lwd=l)
# Mean difference (raw minus newspaper) on the days where the max values disagree
maxErrs=P1940[P1940[,3]!=P1940[,4],]
print(mean(maxErrs[,4]-maxErrs[,3]))
# Mean difference (raw minus newspaper) on the days where the min values disagree
minErrs=P1940[P1940[,7]!=P1940[,8],]
print(mean(minErrs[,8]-minErrs[,7]))
# Mean differences (raw minus newspaper) over all days
print(mean(P1940[,4]-P1940[,3]))
print(mean(P1940[,8]-P1940[,7]))
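
The change in diurnal range noted above can be computed from the same series with a couple of extra lines (assuming, as in the plot above, that columns 3/7 are the newspaper max/min and 4/8 the BoM raw max/min):

rangeRaw = P1940[,4] - P1940[,8]      # daily diurnal range in the BoM raw record
rangePaper = P1940[,3] - P1940[,7]    # daily diurnal range in the newspaper record
print(mean(rangeRaw - rangePaper))    # roughly +0.4C on average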

Perth 1940 Jan-Dec – Errors

Chris Gillham has completed re-digitising a year's worth of the daily temperature records for Perth in 1940 (perth-1940-actual-raw). These are digitised for all of 1940 at Perth Regional Office 9034 from temperatures published in The West Australian newspaper.

While the majority of the temperatures agree with contemporary BoM data, up to a third of the temperatures in some months disagree, sometimes by over 1C! This is a very strange pattern of errors, and difficult to explain.

I will be doing more detailed analysis, but Chris reports that overall the annual average of the actual daily Perth max temperatures in 1940, as published in the newspaper, was the same as the BoM raw daily max. The annual average of the newspaper daily min temperatures was 0.3C warmer than the BoM raw daily min. ACORN max interpreted 1940 as 1.3C warmer than both the actual newspaper max and the BoM raw max, with ACORN min 1.5C cooler than the actual newspaper min and 1.2C cooler than the BoM raw min.

Anything above a 0.1C newspaper/raw difference is highlighted.
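
That sort of flagging is easy to reproduce in R. A minimal sketch, assuming the same column layout as in the Perth script above (columns 3/7 newspaper max/min, columns 4/8 BoM raw max/min):

d = read.csv("perth-1940-actual-raw.csv")
flagMax = abs(d[,4] - d[,3]) > 0.1   # days where raw and newspaper max differ by more than 0.1C
flagMin = abs(d[,8] - d[,7]) > 0.1   # days where raw and newspaper min differ by more than 0.1C
print(sum(flagMax, na.rm=TRUE))      # count of flagged max days
print(sum(flagMin, na.rm=TRUE))      # count of flagged min days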

Chris notes:

It took a couple of days wading through about 310 newspapers to find all the weather reports and although it would be great to have all years from all locations (those with decimalised F newspaper sources) to confirm the Perth 1940 results, it’s a huge task. It would certainly be easier if the BoM just provided the temps from the old logbooks.

Rewriting the Temperature Records – Adelaide 1912

Record temperatures always make the news, with climate alarmists trumpeting any record hot day. But what if the historic record temperatures recorded by the BoM were adjusted down, and recent records were not records at all? More detective work using old newspapers by Chris Gillham, in Adelaide this time.

The BoM claims the hottest ever Feb max at West Terrace was 43.4C on 1 February 1912. They got the date sort of right, except the Adelaide Advertiser below shows 1 February at 112.5F (44.7C) and 2 February at 112.8F (44.9C). The BoM cut 2 February to 43.3C in raw.
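
The Fahrenheit-to-Celsius conversions quoted above are easy to check:

# C = (F - 32) * 5/9 for the two newspaper readings
print(round((c(112.5, 112.8) - 32) * 5/9, 1))   # 44.7 44.9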

Perth 1940 Jan-Mar Historic Comparisons

Continuing the comparison of historic sources of temperature and contemporary records, Chris Gillham has compiled a list of maximum and minimum daily temperatures for Perth for the months of January, February and March 1940 and uncovered some strange discrepancies (highlighted – all months at perth-newspapers-mar-qtr-1940).

Chris notes that while BoM’s contemporary temperatures largely agree with temperatures reported in newspapers of the day, a couple of temperatures in each month disagree by up to a degree C!

File attached comparing the March quarter 1940 daily newspaper and BoM raw data for Perth Regional Office 9034 (Perth Observatory atop Mt Eliza at the time), plus an ACORN average for each month.

Combining all days in the March 1940 quarter, average max in The West Australian newspaper was 29.51C and average BoM raw max was 29.56C. Average min in the newspaper was 17.38C and average BoM raw min was 17.15C. That is, compared to what was reported in 1940, the BoM raw max is up 0.05C and the raw min is down 0.23C. There seems to be a tendency for just two or three temps each month to be adjusted in raw, sometimes up but obviously with a downward bias in min.

ACORN-SAT judged the three months to have an average max of 31.32C and an average min of 16.17C. So max has been pushed up about 1.8C and min has been pushed down about 1.2C or 1C, depending on your point of view :-).
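
Those offsets are simple arithmetic on the quarter averages quoted above:

# ACORN-SAT average minus the newspaper and BoM raw averages, in degrees C
print(round(c(acornMaxVsPaper = 31.32 - 29.51,
              acornMinVsPaper = 16.17 - 17.38,
              acornMinVsRaw = 16.17 - 17.15), 2))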

It always pays to go back to the source data.

Should the ABS take over the BoM?

I read an interesting article about Peter Martin, head of the Australian Bureau of Statistics.

He has a refreshing, mature attitude to his job.

‘I want people to challenge our data – that’s a good thing, it helps us pick things up,’ he says.

A big contrast to the attitude of climate scientists. Examples of their belief that they cannot be challenged are legion, from meetings to peer review. For example, emails expressing disagreement with the science are treated as threatening, as shown by the text of eleven emails released under ‘roo shooter’ FOI by the Climate Institute at the Australian National University.

Australia's Chief Statistician is also egalitarian. In response to a complaint by the interviewer about employment figures, he says:

He says he doesn’t believe there is a problem, but gives every indication he’ll put my concerns to his staff, giving them as much weight as if they came from the Treasurer.

This is a far cry from the stated policy of the CSIRO/BoM (Bureau of Meteorology) of responding only to peer-reviewed publications. Even when one does publish statistical audits identifying problems with datasets, as I have done, one is likely to get a curt review stating that “this paper should be thrown out because its only purpose is criticism”. It takes a certain type of editor to proceed with publication under those circumstances.

When the Federal Government changes this time, as appears inevitable, one initiative they might consider is a greater role for the ABS in overseeing the BoM responsibilities. Although the BoM is tasked with the collection of weather and water data by Acts of Parliament, it would benefit from an audit and ongoing supervision by the ABS, IMHO.

Dynamical vs Statistical Models Battle Over ENSO

There is a battle brewing between dynamical and statistical models. The winner will be determined when the currently neutral ENSO conditions resolve into an El Nino, or not, over the coming months.

The International Research Institute for Climate and Society compares the predictions of ensembles of each type of model here.

Although most of the set of dynamical and statistical model predictions issued during late April and early May 2012 predict continuation of neutral ENSO conditions through the middle of northern summer (i.e., June-August), slightly more than half of the models predict development of El Nino conditions around the July-September season, continuing through the remainder of 2012. Still, a sizable 40-45% of the models predict a continuation of ENSO-neutral conditions throughout 2012. Most of the models predicting El Nino development are dynamical, while most of those predicting persistence of neutral conditions are statistical.

The figure above shows forecasts of dynamical (solid) and statistical (hollow) models for sea surface temperature (SST) in the Nino 3.4 region for nine overlapping 3-month periods. While differences among the forecasts of the models reflect both differences in model design and actual uncertainty in the forecast of the possible future SST scenario, the divergence between dynamical and statistical models is clear.

This question fascinates me so much that I studied it for three years in “Machine learning and the problem of prediction and explanation in ecological modelling” (1992). Why is there a distinction between dynamical and statistical models? What does it mean for prediction? What does it mean if one set of models is wrong?

For example, what if ENSO remains in a neutral or even La Nina state, thus ‘disproving’ the dynamical models? These models are based on the current understanding of physics (with a number of necessary approximations). Clearly this would say that something in the understanding of the climate system is wrong.

Alternatively, what if the currently neutral ENSO resolves into an El Nino, ‘disproving’ the statistical models? These models are based on past correlative relationships between variables. It would mean that some important physical feature of the system, missing from the correlative variables, has suddenly come into play.

Why should there be a distinction between dynamical and statistical models at all? I have always argued that good, robust prediction requires no distinction. More precisely, the set of predictive models is at the intersection of statistical and dynamical models.

To achieve this intersection from the starting point of a statistical model, each of the parameters and their relationships should be physically measurable. That is, if you use a simple linear regression model, each of the coefficients needs to be physically measurable and the physical relationships between them additive.

From the starting point of a dynamical model, the gross, robust features of the system should be properly described and, if necessary, statistically parameterized. This usually entails a first- or second-order differential equation as the model.

This dynamical/statistical model is then positioned to incorporate both meaningful physical structure and accurate correlative relationships.
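
As an illustration of a model that sits in this intersection, here is a minimal sketch in R with synthetic data and illustrative parameter values (nothing here is drawn from a real dataset): a first-order response, discretised and fitted by ordinary least squares, so that each fitted coefficient corresponds to a physically measurable quantity, a sensitivity and a relaxation rate.

# Simulate a first-order response: dR/dt = b*F - c*R, with b = 0.05 and c = 0.1
set.seed(1)
n = 200
forc = cumsum(rnorm(n))           # synthetic forcing series
resp = numeric(n)                 # response series
for (i in 2:n) {
  resp[i] = resp[i-1] + 0.05*forc[i-1] - 0.1*resp[i-1] + rnorm(1, sd=0.05)
}
# Regress the discretised derivative on forcing and state: dR = a + b*F - c*R
fit = lm(diff(resp) ~ forc[-n] + resp[-n])
print(round(coef(fit), 3))        # approximately recovers 0.05 on the forcing and -0.1 on the state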

It amazes me that most research models are developed along either dynamical or statistical lines, while ignoring the other.

Screening on the dependent, auto-correlated variable

To screen or not to screen? The question arises in the context of selecting which sets of tree-rings to use for millennial temperature reconstructions. One side, represented by CA, says screening is just plain wrong:

In the last few days, readers have drawn attention to relevant articles discussing closely related statistical errors under terms like “selecting on the dependent variable” or “double dipping – the use of the same data set for selection and selective analysis”.

Another side, represented by Jim Boulden, says screening is just fine:

So, once again, if you are proposing that a random, red noise process with no actual relationship to the environmental variable of interest (seasonal temperature) causes a spurious correlation with that variable over the instrumental period, then I defy you to show how such a process, with ANY level of lag 1 auto-correlation, operating on individual trees, will lead to what you claim. And if it won’t produce spurious correlations at individual sites, then it won’t produce spurious correlations with larger networks of site either.

Furthermore, whatever extremely low probabilities for such a result might occur for a “typical” site having 20 to 30 cores, is rendered impossible in any practical sense of the word, by the much higher numbers of cores collected in each of the 11 tree ring sites they used. So your contention that this study falls four square within this so called “Screening Fallacy” is just plain wrong, until you demonstrate conclusively otherwise. Instead of addressing this issue–which is the crux issue of your argument–you don’t, you just go on to one new post after another.

Yet another side, represented by Gergis et al., says screening is OK provided some preparation, such as linear detrending, is imposed:

For predictor selection, both proxy climate and instrumental data were linearly detrended over the 1921–1990 period to avoid inflating the correlation coefficient due to the presence of the global warming signal present in the observed temperature record. Only records that were significantly (p<0.05) correlated with the detrended instrumental target over the 1921–1990 period were selected for analysis.

I always find guidance in going back to fundamentals, which people never seem to do in statistics. Firstly, what does “records that were significantly (p<0.05) correlated with the detrended instrumental target” mean? It states that they expect that 95% of the records in their sample are responding to temperature as they want, and that 5% are spurious, bogus ring-ins caused by something else. It is implicit that being wrong about 5% of them is good enough for the purposes of their study.

For example, imagine a population of trees where some respond to rainfall and some respond to temperature. Both temperature and rainfall are autocorrelated and, for the sake of simplicity, let's assume they vary independently. If we want to select the temperature-only responders with 95% confidence, we can do that by correlating their growth with temperature. But we do have to make sure that the screen we use is sufficiently powerful to eliminate the other, autocorrelated rainfall responders.

The problem that arises from autocorrelation in the records -- the tendency of the series to follow on and trend even though they are random -- is that the proportion of spurious records passing most tests may be much higher than 5%. That would be unacceptable for the study. The onus is on the author, by Monte Carlo simulation or some other method, to show that the 5% failure rate really is 5%, and not something larger, like 50%, which would invalidate the whole study.

As autocorrelated records tend to fool us into thinking the proportion of spurious records is lower than it really is, the simplest, most straightforward remedy is to increase the critical value so that the actual proportion of spurious records is once again around the desired 5% level. This might mean adopting a 99%, a 99.9% or even a 99.999% critical value, depending on the degree of autocorrelation.

So I would argue that it is not correct to say screening is an error in all cases. Tricky, but not an error. It is also not correct to impose ad-hocery such as correlating on the detrended variable, as this might simply result in selecting a different set of spurious records. Nor is it correct to apply screening blindly.

What you need to do is the modelling stock-standard correctly, as argued in my book Niche Modeling. Work out the plausible error model, or some set of error models if you are uncertain, and establish robust bounds for your critical values using Monte Carlo simulation. In the case of tree-ring studies, you might create a population of two or more sets of highly autocorrelated records to test that the screening method performs to the desired tolerance. In my view, the correlation coefficient is fine, as it is as good as anything else in this situation.
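
As a minimal sketch of such a Monte Carlo check (the series length and the degree of autocorrelation here are illustrative assumptions, not taken from any particular study), the code below generates red-noise “proxies” with no real relationship to an equally autocorrelated “temperature” series, reports how often the usual p<0.05 screen passes them anyway, and reports the raised critical correlation that would restore a true 5% pass rate:

# Null distribution of the correlation between two unrelated AR(1) series
set.seed(42)
n.years = 70        # illustrative calibration window length
n.trials = 5000     # Monte Carlo trials
ar1 = 0.8           # assumed lag-1 autocorrelation in both series
r.null = replicate(n.trials, {
  temp = as.numeric(arima.sim(list(ar=ar1), n=n.years))    # autocorrelated "temperature"
  proxy = as.numeric(arima.sim(list(ar=ar1), n=n.years))   # unrelated red-noise "proxy"
  cor(proxy, temp)
})
# Critical |r| for the usual two-sided p<0.05 test, ignoring autocorrelation
t.crit = qt(0.975, df=n.years-2)
r.crit = t.crit / sqrt(t.crit^2 + n.years - 2)
print(mean(abs(r.null) > r.crit))       # far above the nominal 0.05
# Raised critical |r| that restores an actual 5% false-pass rate
print(quantile(abs(r.null), 0.95))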

You get into a lot less trouble that way.