Lewandowsky article is a truly appalling piece of social science – Aitkin

Don Aitkin has just weighed in on the Lewandowsky affair as the University of Queensland’s John Cook doubles down at The Conversation.

Don Aitkin, writer, speaker and teacher, commented about an hour ago:

Oh dear. The Lewandowsky article is a truly appalling piece of social science. How did it ever get past ordinary peer review? It, and the one above, demonstrate the kind of problems that Jim Woodgett in Nature two days ago and John Ioannidis a few years ago have pointed out: the failure of researchers to get their own house in order, and the poor quality of much published research. I have posted on that subject today on my website: www.donaitkin.com. That was before I came to all this! Perhaps someone a little better than Lewandowsky could do some research on why people believe in ‘climate change’, and what their characteristics are…

Thank God there are true scholars in Australia. Unfortunately they are retired.

Carbon abatement from wind power – zero

Zip, nil, nada. That’s the finding of a two-year analysis of Victoria’s wind-farm developments by mechanical engineer Hamish Cumming.

Despite hundreds of millions of dollars of taxpayers’ money from subsidies and green energy schemes driven by the renewable energy target, Victoria’s wind-farm developments have, surprise, surprise, saved virtually zero carbon dioxide emissions due to their intermittent, unreliable power output.

Wind power advocate Dr Mark Diesendorf, an Australian academic who teaches Environmental Studies at the University of New South Wales, formerly Professor of Environmental Science at the University of Technology, Sydney, and a principal research scientist with CSIRO, has not been shy about bad-mouthing wind power realists. See the renewable industry lobby group energyscience for his ebook The Base Load Fallacy and other Fallacies disseminated by Renewable Energy Deniers.

The verdict is in – wind power is a green theory that simply generates waste heat.

However, Cumming said the reports on greenhouse gas abatement did not take into account the continuation of burning coal during the time the wind farms were operational.

“The reports you refer to are theoretical abatements, not real facts. Coal was still burnt and therefore little if any GHG was really abated,” he told Clarke.

“Rather than trying to convince me with reports done by or for the wind industry, or the government departments promoting the industry, I challenge you to give me actual coal consumption data in comparison to wind generation times data that supports your argument.”

Also see JoNova

Lewandowsky — again

This guy, a UWA employee, was shown by one Arlene Composta to be the most naive of leftists.

He now says that climate skeptics are conspiracy theorist wackos.

We have responded to this guy before:

He thinks the cognitive processes of Anthropogenic Global Warming (AGW) sceptics are deficient and on the same level as those of “truthers” and other “conspiracy theorists”. This is serious: for merely questioning the ‘science’ of AGW one now faces the opprobrium of having one’s mental ability questioned.

JoNova raises valid questions about his survey methodology here.

The word “fabrication” has been bandied about.

If so, he gives proof that the term “Psychological Science” is a contradiction in terms.

Not cointegrated, so global warming is not anthropogenic – Beenstock

Cointegration has been mentioned previously and is one of the highest ranking search terms on landshape.

We have also discussed the cointegration manuscript from 2009 by Beenstock and Reingewertz, and I see he has picked up another author and submitted it to an open access journal here.

Here is the abstract.

Polynomial cointegration tests of anthropogenic impact on global warming, by M. Beenstock, Y. Reingewertz, and N. Paldor.

Abstract. We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

The bottom line:

Once the I(2) status of anthropogenic forcings is taken into consideration, there is no significant effect of anthropogenic forcing on global temperature.

They do, however, find a possible effect of the CO2 first difference:

The ADF and PP test statistics suggest that there is a causal effect of the change in CO2 forcing on global temperature.

They suggest “… there is no physical theory for this modified theory of AGW”, although I would think the obvious one would be that the surface temperature adjusts over time to higher CO2 forcing, such as through intensified heat loss by convection, so returning to an equilibrium. However, when revised solar data is used the relationship disappears, so the point is probably moot.

When we use these revised data, Eqs. (11) and (12) remain polynomially uncointegrated. However, Eq. (15) ceases to be cointegrated.

Finally:

For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during 150 yr there is no physical reason why these variables should share the same order of integration. However, the fact that they do not share the same order of integration over this period means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.
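
For readers who want to check this kind of order-of-integration claim for themselves, here is a minimal R sketch (my own illustration, not the authors’ code) using the ADF test from the tseries package. The helper function name is mine, and the example runs on synthetic series rather than the actual forcing and temperature records.

library(tseries)  # provides adf.test()
# Count how many times a series must be differenced before the ADF test
# rejects a unit root at the given significance level, i.e. estimate d in I(d).
order_of_integration=function(x,max.d=2,alpha=0.05) {
  for (d in 0:max.d) {
    y=if (d==0) x else diff(x,differences=d)
    if (adf.test(y)$p.value < alpha) return(d)  # stationary after d differences
  }
  NA  # still non-stationary after max.d differences
}
# Synthetic example: a random walk behaves like an I(1) series (temperature or
# solar irradiance in the paper); its cumulative sum behaves like an I(2) series (GHG forcing).
set.seed(1)
rw=cumsum(rnorm(128))
order_of_integration(rw)          # expect 1
order_of_integration(cumsum(rw))  # expect 2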

Abandon Government Sponsored Research on Forecasting Climate – Green

Kesten Green, now of the University of South Australia, has a manuscript up called Evidence-based Improvements to Climate Forecasting: Progress and Recommendations, arguing that evidence-based research on climate forecasting finds no support for fear of dangerous man-made global warming, because simple, inexpensive extrapolation models are more accurate than the complex and expensive “General Circulation Models” used by the Intergovernmental Panel on Climate Change (IPCC).

Their rigorous evaluations of the poor accuracy of climate models support the view that there is no trend in global mean temperatures that is relevant for policy makers, and that…

[G]overnment initiatives that are predicated on a fear of dangerous man-made global warming should be abandoned. Among such initiatives we include government sponsored research on forecasting climate, which are unavoidably biased towards alarm (Armstrong, Green, and Soon 2011).

This is also what I found in the evaluation of the CSIRO’s use of the IPCC drought models. In fact, the use of the climate model projections is positively misleading, as they show decreasing rainfall over the last century when rainfall actually increased.

This is not welcome news to the growing climate projection science industry that serves the rapidly growing needs of impact and adaptation assessments. A new paper called Use of Representative Climate Futures in impact and adaptation assessment by Penny Whetton, Kevin Hennessy and others proposes another ad-hoc fix for climate model inaccuracy called Representative Climate Futures (or RCFs for short). Apparently the idea is that the wide range of results given by different climate models is classified as “most likely” or “high risk” or whatever, and the researcher is then free to choose whichever set of models he or she wishes to use.

Experiment Resources.Com condemns ad hoc-ery in science:

The scientific method dictates that, if a hypothesis is rejected, then that is final. The research needs to be redesigned or refined before the hypothesis can be tested again. Amongst pseudo-scientists, an ad hoc hypothesis is often appended, in an attempt to justify why the expected results were not obtained.

Read “poor accuracy of climate models” for “hypothesis is rejected” and you get the comparison. Models that are unfit for the purpose need to be thrown out. RCF appears to be a desperate attempt to do something, anything, with grossly inaccurate models.

On freedom of choice, Kesten Green says:

So uncertain and poorly understood is the global climate over the long term that the IPCC modelers have relied heavily on unaided judgment in selecting model variables and setting parameter values. In their section on “Simulation model validation in longer-term forecasting” (pp. 969–973), F&K observe of the IPCC modeling procedures: “a major part of the model building is judgmental” (p. 970).

Which is why it’s not scientific.

LENR demonstrated at Austin Trade Show

Renewables, man, get something for nothing. Harness the infinite power of wind, water, wave and tide. Why hasn’t it been done before? But of course it has – two thousand years ago – and it has yet to overcome three basic and insurmountable problems: intermittency, dependence on locally weak, dispersed sources of energy, and no viable technology to store significant amounts of power. Bummer….

But Low Energy Nuclear Reactions, or LENR, are showing much more promise as an alternative energy source, with a demonstration of a reliable, replicable, nickel-hydrogen gas low-energy nuclear reaction reactor on Monday at the National Instruments trade show at the Austin Convention Center in Texas.

The technique demonstrated by Celani is described here. He describes the desirable qualities for LENR of a class of nickel/copper alloys called the Constantans: high catalytic power at dissociating molecular hydrogen. Wikipedia indicates the name comes from the alloys’ low or negative temperature coefficient of resistance, i.e. their nearly constant resistance with temperature.

He then describes his experiments with the measured resistance of Constantan wires while generating anomalous heat.

We observed a further increasing of anomalous power that, if there are no mistakes around, was about twice (i.e. absolute value of over 10W) of that detected when the power was applied to inert wire. The R/Ro value, after initial increasing, stabilized to 0.808.

If the consideration at point 11) is correct, we can think that the reaction, apart some temperature threshold, has a positive feedback with increasing temperature.

Summary: NZ Climate Science Coalition vs. NIWA

More thought-provoking commentary from Richard on the duties and responsibilities of statutory bodies like NIWA. (NIWA is actually an incorporated body owned by the Crown, however that plays into things.)

Anyway, everyone seems to agree that their handling of the temperature records in New Zealand is biased and deficient. The issue is, does scientific incompetence violate their charter?

With all this evidence, the Coalition case is looking very good on the plain facts. The threat comes from the need to prove that NIWA has a duty to apply good science. They deny this, and effectively say that Parliament has given them a free hand to do what they like. They argue that the obligation to pursue excellence is merely “aspirational”, being un-measurable and unenforceable.

They are prepared to damage their reputations by arguing they are under no obligation to do good science. They are prepared to jettison this in favor of “independence”. What follows from the independence of NIWA, absent a duty to do good science? Only politics.

But how tied is their present funding to their present obedience to their present political masters? If strongly, it corrupts and debases their “independence” to mere subservience and makes a mockery of their representations to the Court. For it could be that their right to self-determination is no more than a claim to public funding by virtue of their obedience.

In fact the NIWA website states just that:

CRIs are stand-alone companies with a high degree of independence. Each year, the shareholding Ministers lay out their expectations for the Crown Research Institutes in an ‘Operating Framework’. Amongst other things, this defines how CRIs should interpret their obligation to maintain financial viability.

The link to “Operating Framework” is dead, unfortunately.

Waiting anxiously for the judge’s determination on this.

Final Day: NZ Climate Science Coalition vs. NIWA

Quote from the defense:

He must have been responding to our charge that NIWA did not perform its statutory duty. He said: “They’re not duties, they’re not called duties, they’re called operating principles.” This seemed to come from the current legislation, or recent decisions.

Where in the operating principles is the principle that government climate scientists should be “in bed” with green groups like the WWF, putting pressure on government to enact green policies?

The disclosures reveal several instances of government funded scientists working with environmental pressure groups. In one case, Greenpeace activists are seen helping CRU scientists to draft a letter to the Times and in another working closely with the World Wildlife Fund to put pressure on governments regarding climate change.

Industry groups and the general public are slowly realizing that carbon taxes are going to cost a truckload of money for little benefit.

Isn’t it time that industry groups stepped up to the plate and started funding research into questions that are still controversial, such as the magnitude of solar effects on global temperature, and could mitigate green hysteria?

Day 2: NZ Climate Science Coalition vs. NIWA

Quote of the day from New Zealand’s National Institute of Water and Atmospheric Research:

The matters [at issue] arise between the plaintiff’s (the Coalition’s) Statement of Claim (SOC) and the Defendant’s (NIWA’s) Statement of Defence (SOD). NIWA counter-claimed they had no obligation to pursue excellence or to use best-quality scientific practices and also that the national temperature record was not only not official, but they themselves had no obligation to produce or maintain it.

Benefits of Global Warming

A new WSJ article signed by 16 scientists:

A recent study of a wide variety of policy options by Yale economist William Nordhaus showed that nearly the highest benefit-to-cost ratio is achieved for a policy that allows 50 more years of economic growth unimpeded by greenhouse gas controls. This would be especially beneficial to the less-developed parts of the world that would like to share some of the same advantages of material well-being, health and life expectancy that the fully developed parts of the world enjoy now. Many other policy responses would have a negative return on investment. And it is likely that more CO2 and the modest warming that may come with it will be an overall benefit to the planet.

Further comment on Econlog, discussing an article here.

I hope the reader will agree with me that Nordhaus is certainly inviting the reader to infer that now and in the future, the best available studies (as summarized by leading scholar Richard Tol) show that emissions of GHG will cause net damages.

The issue is whether most studies support the view that there will be a net benefit from CO2 emissions for at least 30 to 50 years, while net costs only occur after an increase in global temperatures of 1.2 to 2 degrees C.

Peer review doesn’t stop 172 fake papers

It’s reported that a Japanese anesthesiologist has set a new record for the most retractions by a single author.

An investigation of 212 of Yoshitaka Fujii’s 249 published papers found that he had invented patients, forged evidence that medication was administered, and signed on as co-authors other scientists who had no idea they were affiliated with his research.

There are a couple of simple solutions to this. First, archive all the data and code and expect reviewers to go through it. This almost never happens at present. Second, archive without peer review, using sites such as viXra. Without peer review there would be no fraud. The merit of papers would be determined in the literature and other outlets such as blogs. There, like most scientific papers, these would simply have languished in obscurity.

Fortunately, nothing he published was regarded as very important, which is how his vault of fabricated material went undetected for decades. Popular Science offers some of his compelling titles, including “Antiemetic efficacy of low-dose midazolam in patients undergoing thyroidectomy,” and “Low-dose propofol to prevent nausea and vomiting after laparoscopic surgery.”

Perth 1940 Max Min Daily Temps

Previous posts have introduced the work that Chris Gillham is doing in spot auditing the accuracy of the Bureau of Meteorology’s temperature records. He has now re-recorded the daily max and min temperatures from one Australian weather station for one year, Perth 9034 in 1940, using original sources in The West Australian newspaper.

Below is an initial look at the historic data (in red) compared to the BoM’s “unadjusted” or “raw” records (grey) for the station.

It’s fairly clear that there are a lot of errors. The minimum temperatures, however, are shockers. Each of the red lines seen on the lower series above is an error in the daily minimum — mostly down.

Mean of the max differences = +0.20C
Mean of the min differences = -1.18C
Average max all differences = +0.04C
Average min all differences = -0.33C

While the average error of the max temperatures is up 0.2C, the average magnitude of the errors in the min temperatures is a whopping 1.18C! Over the whole year that changes the annual minimum temperature by -0.33C.

The diurnal range is increased by an average of 0.4C. While these errors are only in one year in one station, it is noteworthy that the magnitude of these errors is similar to the change in the diurnal range attributed to global warming.

The data file is here – perth-1940-actual-raw. You need to open it in Excel and save it as a CSV file.

The code below should run on the datafile.

# Read the year of daily observations as a multivariate time series (365 values starting in 1940).
P1940=ts(read.csv("perth-1940-actual-raw.csv"),start=1940,freq=365)
l=2 # line width for the plots
# Plot the newspaper max (red, column 3) over the BoM raw max (grey, column 4),
# then the newspaper min (red, column 7) over the BoM raw min (grey, column 8).
plot(P1940[,3],col=2,ylim=c(0,45),main="Perth Regional Office 9034",ylab="Temperature C",lwd=l)
lines(P1940[,4],col="gray",lwd=l)
lines(P1940[,7],col=2,lwd=l)
lines(P1940[,8],col="gray",lwd=l)
# Days where the max series disagree, and the mean difference on just those days.
maxErrs=P1940[P1940[,3]!=P1940[,4],]
print(mean(maxErrs[,4]-maxErrs[,3]))
# The same for the daily minima.
minErrs=P1940[P1940[,7]!=P1940[,8],]
print(mean(minErrs[,8]-minErrs[,7]))
# Mean difference over all days of the year, for max and then min.
print(mean(P1940[,4]-P1940[,3]))
print(mean(P1940[,8]-P1940[,7]))

Perth 1940 Jan-Dec – Errors

Chris Gillham has completed re-digitising one year’s worth of daily temperature records for Perth in 1940 (perth-1940-actual-raw). These were digitised for all of 1940 at Perth Regional Office 9034 from temperatures published in The West Australian newspaper.

While the majority of the temperatures agree with contemporary BoM data, up to a third of the temperatures in some months disagree, sometimes by over 1C! This is a very strange pattern of errors, and difficult to explain.

I will be doing more detailed analysis, but Chris reports that overall, the annual average of actual daily Perth max temperatures in 1940, as published in the newspaper, was the same as the BoM raw daily max. The annual average of newspaper daily min temperatures was 0.3C warmer than the BoM raw daily min. ACORN max interpreted 1940 as 1.3C warmer than both actual newspaper max and BoM raw max, with ACORN min 1.5C cooler than actual newspaper min and 1.2C cooler than BoM raw min.

Anything above a 0.1C newspaper/raw difference is highlighted.

Chris notes:

It took a couple of days wading through about 310 newspapers to find all the weather reports and although it would be great to have all years from all locations (those with decimalised F newspaper sources) to confirm the Perth 1940 results, it’s a huge task. It would certainly be easier if the BoM just provided the temps from the old logbooks.

Rewriting the Temperature Records – Adelaide 1912

Record temperatures always make the news, with climate alarmists trumpeting any record hot day. But what if the historic record temperatures recorded by the BoM were adjusted down, and recent records were not records at all? More detective work using old newspapers by Chris Gillham, in Adelaide this time.

The BoM claims the hottest ever Feb max at West Terrace was 43.4C on 1 February 1912. They got the date sort of right except the Adelaide Advertiser below shows Feb 1 at 112.5F (44.7C) and Feb 2 at 112.8F (44.9C). The BoM cut Feb 2 to 43.3C in raw.
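
For anyone checking the newspaper values against the BoM records, the Fahrenheit-to-Celsius conversion is simple arithmetic; here is a minimal R sketch (my own, applied to the two Advertiser readings quoted above).

f_to_c=function(f) (f-32)*5/9          # standard Fahrenheit to Celsius conversion
round(f_to_c(c(112.5,112.8)),1)        # 44.7 and 44.9, versus 43.4C/43.3C in the BoM record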

Perth 1940 Jan-Mar Historic Comparisons

Continuing the comparison of historic sources of temperature and contemporary records, Chris Gillham has compiled a list of maximum and minimum daily temperatures for Perth for the months of January, February and March 1940 and uncovered some strange discrepancies (highlighted – all months at perth-newspapers-mar-qtr-1940).

Chris notes that while BoM’s contemporary temperatures largely agree with temperatures reported in newspapers of the day, a couple of temperatures in each month disagree by up to a degree C!

File attached comparing the March quarter 1940 daily newspaper and BoM raw data for Perth Regional Office 9034 (Perth Observatory atop Mt Eliza at the time), plus an ACORN average for each month.

Combining all days in the March 1940 quarter, average max in The West Australian newspaper was 29.51C and average BoM raw max was 29.56C. Average min in the newspaper was 17.38C and average BoM raw min was 17.15C. Rounded, max up .1C and min down .3C in BoM raw compared to what was reported in 1940. There seems a tendency for just two or three temps each month to be adjusted in raw, sometimes up but obviously with a downward bias in min.

ACORN-SAT judged the three months to have an average max of 31.32C and an average min of 16.17C. So max has been pushed up about 1.8C and min has been pushed down about 1.2C or 1C, depending on your point of view :-).

It always pays to go back to the source data.

Should the ABS take over the BoM?

I read an interesting article about Peter Martin, head of the Australian Bureau of Statistics.

He has a refreshing, mature attitude to his job.

‘I want people to challenge our data – that’s a good thing, it helps us pick things up,’ he says.

A big contrast to the attitude of climate scientists. Examples of their belief that they cannot be challenged are legion, from meetings to peer review. For example, emails expressing disagreement with the science are treated as threatening, as shown by the text of eleven emails released under ‘roo shooter’ FOI by the Climate Institute at Australian National University.

Australia’s Chief Statistician is also egalitarian. In response to a complaint by the interviewer about employment figures, he responds:

He says he doesn’t believe there is a problem, but gives every indication he’ll put my concerns to his staff, giving them as much weight as if they came from the Treasurer.

This is a far cry from the stated policy of the CSIRO/BoM (Bureau of Meteorology) to only respond to peer-reviewed publications. Even when one does publish statistical audits identifying problems with datasets, as I have done, one is likely to get a curt review stating that “this paper should be thrown out because its only purpose is criticism”. It takes a certain type of editor to proceed with publication under those circumstances.

When the Federal Government changes this time, as appears inevitable, one initiative they might consider is a greater role for the ABS in overseeing the BoM’s responsibilities. Although the BoM is tasked with the collection of weather and water data by Acts of Parliament, it would benefit from an audit and ongoing supervision by the ABS, IMHO.

Dynamical vs Statistical Models Battle Over ENSO

There is a battle brewing between dynamical and statistical models. The winner will be determined when the current neutral ENSO conditions resolve into an El Nino, or not, in the coming months.

The International Research Institute for Climate and Society compares the predictions of ensembles of each type of model here.

Although most of the set of dynamical and statistical model predictions issued during late April and early May 2012 predict continuation of neutral ENSO conditions through the middle of northern summer (i.e., June-August), slightly more than half of the models predict development of El Nino conditions around the July-September season, continuing through the remainder of 2012. Still, a sizable 40-45% of the models predict a continuation of ENSO-neutral conditions throughout 2012. Most of the models predicting El Nino development are dynamical, while most of those predicting persistence of neutral conditions are statistical.

The figure above shows forecasts of dynamical (solid) and statistical (hollow) models for sea surface temperature (SST) in the Nino 3.4 region for nine overlapping 3-month periods. While differences among the forecasts of the models reflect both differences in model design, and actual uncertainty in the forecast of the possible future SST scenario, the divergence between dynamical and statistical models is clear.

This question fascinates me so much that I studied it for three years in “Machine learning and the problem of prediction and explanation in ecological modelling” (1992). Why is there a distinction between dynamical and statistical models? What does it mean for prediction? What does it mean if one set of models is wrong?

For example, what if ENSO remains in a neutral or even La Nina state, thus ‘disproving’ the dynamical models? These models are based on the current understanding of physics (with a number of necessary approximations). Clearly this would say that something about the understanding of the climate system is wrong.

Alternatively, what if the currently neutral ENSO resolves into an El Nino, ‘disproving’ the statistical models? These models are based on past correlative relationships between variables. It would mean that some important physical feature of the system, missing from the correlative variables, has suddenly come into play.

Why should there be a distinction between dynamical and statistical models at all? I have always argued that good, robust prediction requires no distinction. More precisely, the set of predictive models is at the intersection of statistical and dynamical models.

To achieve this intersection, from a starting point of a statistical model, each of the parameters and their relationships should be physically measurable. That is, if you use a simple linear regression model, each of the coefficients needs to be physically measurable and the physical relationships between them additive.

From a starting point of a dynamical model, the gross, robust features of the system should be properly described, and if necessary statistically parameterized. This usually entails a first or second order differential equation as the model.

This dynamical/statistical model is then positioned to incorporate both meaningful physical structure, and accurate correlative relationships.

It amazes me that most research models are developed along either dynamical or statistical lines, while ignoring the other.
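
As a toy illustration of the hybrid approach described above (entirely synthetic data and hypothetical names, not an actual ENSO model), the sketch below discretises a first-order relaxation equation and then estimates its parameters by ordinary regression, so the fitted coefficients keep a physical interpretation as a sensitivity and a relaxation time.

# Toy hybrid dynamical/statistical model: temperature relaxes toward a forcing,
# temp[t+1] - temp[t] = (a*forc[t] - temp[t])/tau + noise,
# and the physically meaningful parameters a and tau are recovered by least squares.
set.seed(42)
n=200
forc=0.5*sin(2*pi*(1:n)/60)+0.01*(1:n)  # hypothetical forcing series
a=2     # "true" sensitivity used to generate the synthetic data
tau=10  # "true" relaxation time (in time steps)
temp=numeric(n)
for (t in 1:(n-1)) {
  temp[t+1]=temp[t]+(a*forc[t]-temp[t])/tau+rnorm(1,sd=0.05)
}
# Statistical step: regress the observed change on the forcing and the current state.
dtemp=diff(temp)
fit=lm(dtemp~forc[1:(n-1)]+temp[1:(n-1)])
# Dynamical interpretation: the coefficient on temp is -1/tau, and on forc it is a/tau.
tau.hat=-1/coef(fit)[3]
a.hat=coef(fit)[2]*tau.hat
c(a=unname(a.hat),tau=unname(tau.hat))  # should come out near a = 2, tau = 10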

Screening on the dependent, auto-correlated variable

To screen or not to screen? The question arises in the context of selecting which sets of tree-rings to use for millennial temperature reconstructions. One side, represented by CA, says screening is just plain wrong:

In the last few days, readers have drawn attention to relevant articles discussing closely related statistical errors under terms like “selecting on the dependent variable” or “double dipping – the use of the same data set for selection and selective analysis”.

Another side, represented by Jim Boulden, says screening is just fine.

So, once again, if you are proposing that a random, red noise process with no actual relationship to the environmental variable of interest (seasonal temperature) causes a spurious correlation with that variable over the instrumental period, then I defy you to show how such a process, with ANY level of lag 1 auto-correlation, operating on individual trees, will lead to what you claim. And if it won’t produce spurious correlations at individual sites, then it won’t produce spurious correlations with larger networks of site either.

Furthermore, whatever extremely low probabilities for such a result might occur for a “typical” site having 20 too 30 cores, is rendered impossible in any practical sense of the word, by the much higher numbers of cores collected in each of the 11 tree ring sites they used. So your contention that this study falls four square within this so called “Screening Fallacy” is just plain wrong, until you demonstrate conclusively otherwise. Instead of addressing this issue–which is the crux issue of your argument–you don’t, you just go onto to one new post after another.

Yet another side, represented by Gergis et al., says screening is OK provided some preparation, such as linear detrending, is imposed:

For predictor selection, both proxy climate and instrumental data were linearly detrended over the 1921–1990 period to avoid inflating the correlation coefficient due to the presence of the global warming signal present in the observed temperature record. Only records that were significantly (p<0.05) correlated with the detrended instrumental target over the 1921–1990 period were selected for analysis.

I always find guidance in going back to fundamentals, which people never seem to do in statistics. Firstly, what does “records that were significantly (p<0.05) correlated with the detrended instrumental target” mean? It states that they expect that 95% of the records in their sample are responding to temperature as they want, and that 5% are spurious, bogus, ring-ins, undesirable, caused by something else. It is implicit that being wrong about 5% is good enough for the purposes of their study.

For example, imagine a population of trees where some respond to rainfall and some respond to temperature. Both temperature and rainfall are autocorrelated, and for the sake of simplicity, let’s assume they vary independently. If we want to screen for those that respond to temperature only, with 95% confidence, we can do that by correlating their growth with temperature. But we do have to make sure that the screen we use is sufficiently powerful to eliminate the other, autocorrelated rainfall responders.

The problem that arises from autocorrelation in the records – the tendency for a series to follow on and trend even though it is random – is that the proportion of spurious records passing most tests may be much higher than 5%. That would be unacceptable for the study. The onus is on the author, by Monte Carlo simulation or some other method, to show that the 5% failure rate is really 5%, and not something larger, like 50%, which would invalidate the whole study.

As autocorrelated records tend to fool us into thinking the proportion of spurious records is lower than it really is, the simplest, most straightforward remedy is to raise the critical value so that the actual proportion of spurious records is once again around the desired 5% level. This might mean adopting a 99%, a 99.9% or a 99.999% critical value, depending on the degree of autocorrelation.

So I would argue that it is not correct that screening is an error in all cases. Tricky, but not an error. It is also not correct to impose ad-hocery such as correlating on the detrended variable, as this might simply result in selecting a different set of spurious records. It is also not correct to apply screening blindly.

What you need to do is the modelling, stock-standard, correctly, as argued in my book Niche Modeling. Work out the plausible error model, or some set of error models if you are uncertain, and establish robust bounds for your critical values using Monte Carlo simulation. In the case of tree studies, you might create a population of two or more sets of highly autocorrelated records to test that the screening method performs to the desired tolerance. In my view, the correlation coefficient is fine, as it is as good as anything else in this situation.

You get into a lot less trouble that way.
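
As a concrete version of that Monte Carlo check (my own sketch, not anyone’s published code), the R snippet below generates red-noise ‘proxies’ that have no real relationship to an autocorrelated ‘temperature’ target, screens them at the nominal p<0.05 level, and reports the fraction that pass. With strong lag-1 autocorrelation the pass rate is typically well above the nominal 5%, which is exactly the problem described above.

# Monte Carlo check of a screening rule: how many pure red-noise "proxies"
# pass a nominal p < 0.05 correlation screen against an unrelated, autocorrelated target?
set.seed(123)
n.years=70       # length of the instrumental screening period
n.proxies=1000   # number of unrelated red-noise series to screen
phi=0.9          # lag-1 autocorrelation of both the target and the proxies
target=as.numeric(arima.sim(model=list(ar=phi),n=n.years))
pass=replicate(n.proxies, {
  proxy=as.numeric(arima.sim(model=list(ar=phi),n=n.years))
  cor.test(proxy,target)$p.value < 0.05
})
mean(pass)  # nominal rate is 0.05; with phi = 0.9 expect a substantially larger fraction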

“This is commonly referred to as ‘research’” – Gergis

Just what is the ‘research’ that Gergis et al. claim to have done? And what are the sceptics complaining about?

The ‘research’ claimed by the Gergis et al. team is captured in the following graphical representation of the past temperature of the Australasian region.

The hockey stick shape has also been produced using similar methods and random data, as shown in my AIG news article in 2006, and also in chapter 11 of my 2007 book “Niche Modeling“.

It is obvious that if the same result is achieved with random data and with real-world data, the real-world data are probably random. That is, whatever patterns are seen have not been proven to be significant patterns, by the yardsticks of rigorous statistical methods.

These problems have been widely discussed at ClimateAudit since 2006, and my publications probably grew out of those discussions. Moreover, the circular argument has become commonly known as the “Screening Fallacy” and widely discussed in relation to this area of research ever since.

To claim results when they could equally be achieved by random numbers would get you laughed off the podium in most areas of science. Gergis et al. informed Steve McIntyre superciliously, however, that this is commonly referred to as ‘research’.

One of the co-authors, Ailie Gallant, stars in the cringe-worthy We Are Climate Scientists, a pretentious rap-video proclaiming they are “fucking climate scientists” and “their work is peer reviewed” in dollar-store sunglasses and lab coats. They have no reason to act superior, and this recent effort proves the point.

Of course, Gergis et al. claimed to have detrended the data before performing the correlations, and whether this ad-hocery would mitigate the circularity above is questionable. Whether by oversight or intent, it appears the detrending was not performed anyway. I don’t know whether this is the reason for the paper being pulled. We shall find out in time. The paper appears to be the result of a three-year research program, announced on Gergis’ personal blog.

The project, funded by the Australian Research Council’s Linkage scheme, is worth a total of $950K and will run from mid-2009 to mid-2012.

It gives me a job for three years and money to bring a PhD student, research assistant and part time project manager on board.

More importantly, it will go a long way in strengthening the much needed ties between the sciences and humanities scrambling to understand climate change.

Who is contributing more to research: unpaid bloggers, or a one-million-dollar, three-year, tied-with-the-humanities fiasco?

Gergis’ hockeystick “on hold”

You may by now have heard here or here that “Evidence of unusual late 20th century warming from an Australasian temperature reconstruction spanning the last millennium” by Joelle Gergis, Raphael Neukom, Stephen Phipps, Ailie Gallant and David Karoly, has been put “on-hold” by the Journal, due to “an issue” in the processing of the data used in the study.

It is illuminating to review the crowing commentary by the Australian science intelligentsia and the press reaction to the paper.

ABC’s AM show, “Australia’s most informative (government funded) morning current affairs program. AM sets the agenda for the nation’s daily news and current affairs coverage.”

TONY EASTLEY: For the first time scientists have provided the most complete climate record of the last millennium and they’ve found that the last five decades in Australia have been the warmest.

He then speaks for the IPCC:

The Australian researchers used 27 different natural indicators like tree rings and ice cores to come to their conclusion which will be a part of the next report by the United Nations Intergovernmental Panel on Climate Change.

The Gergis paper was proof enough for the ABC Science Show, which gives “fascinating insights into all manner of things from the physics of cricket”.

Robyn Williams: Did you catch that research published last week analysing the last 1,000 years of climate indicators in Australia? It confirmed much of what climate scientists have been warning us about.

Here is another, via an ABC Statewide Drive tweet.

Dr Joelle Gergis from @unimelb: We are as confident that the warming in the post 1950 period is unprecedent in the past 1000 years.

Such shallow and gullible commentary is no better than blogs such as Gerry’s blogistic digression (“I’ve got a blog and I’m gonna use it”).

It’s official: Australia is warming and it is your fault.

The tone of the Real Scientists from realclimate is no better, jubilant that the “hockey-stick” has now been seen in Australia.

First, a study by Gergis et al., in the Journal of Climate uses a proxy network from the Australasian region to reconstruct temperature over the last millennium, and finds what can only be described as an Australian hockey stick.

As Steve Mosher said, such papers cannot be trusted. Putting aside questions of the methodology (that I will get to later), the reviewers in climate science don’t check the data, don’t check the numbers that produce the published graphs and tables, and don’t check that the numbers actually do what the text describes.

Yet they approve the paper for publication.

He is stunned this has to be explained to anyone. Apparently it does.

Levitus data on ocean forcing confirms skeptics, falsifies IPCC

The IPCC, in the AR4 Working Group One report, stated what could be called the central claim of global warming, the estimate of the net radiative forcing.

“The understanding of anthropogenic warming and cooling influences on climate has improved since the TAR, leading to very high confidence that the effect of human activities since 1750 has been a net positive forcing of +1.6 [+0.6 to +2.4] W m–2.”

Remember a forcing is an imbalance that causes heating, like a hot plate heating a saucepan of water. While the forcing continues, the temperature of the water will continue to rise. Global warming is the theory that increases in anthropogenic CO2 in the atmosphere are producing a radiative imbalance, or forcing, causing the earth to warm dangerously.

The IPCC’s level of forcing equates to its stated estimates of around 1.5 to 6C of warming per doubling of CO2, and its central estimate of about 3C of warming to the end of the century from increasing CO2.

The paper by Levitus et al. uses the array of ARGO floats, and other historic ocean measurements, to determine the change in the heat content of the ocean from 0 to 2000m, and so derive the actual net radiative forcing that has caused it to warm over the last 50 years.

“The heat content of the world ocean for the 0-2000 m layer increased by 24.0×1022 J corresponding to a rate of 0.39 Wm-2 (per unit area of the world ocean) and a volume mean warming of 0.09ºC. This warming rate corresponds to a rate of 0.27 Wm-2 per unit area of earth’s surface.”
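
As a back-of-envelope check on those numbers (my own arithmetic, not from the paper), the heat-content change can be converted to a rate per square metre, assuming the roughly 55-year Levitus analysis period and standard figures for the ocean and Earth surface areas:

# Convert the Levitus 0-2000 m heat gain into W/m^2.
# Assumptions: ~55-year analysis period, ocean area 3.6e14 m^2, Earth surface area 5.1e14 m^2.
dQ=24.0e22                     # J, increase in 0-2000 m ocean heat content
seconds=55*365.25*24*3600      # seconds in ~55 years
dQ/seconds/3.6e14              # ~0.39 W/m^2 per unit ocean area, as quoted
dQ/seconds/5.1e14              # ~0.27 W/m^2 per unit Earth surface area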

To compare these figures, say the continuous top-of-atmosphere forcing is 1 Wm-2, a figure given by Meehl and Hansen and consistent with the IPCC estimates. The forcing of the ocean from a TOA forcing of 1 Wm-2 is a lower 0.6 Wm-2, due to losses, as estimated by Hansen.

The best, recent measurements of the forcing of 0.3 Wm-2 are half these IPCC estimates. The anthropogenic component of the forcing is even less, as a large part of the 0.3 Wm-2 in the last 60 years is due to increased solar insolation during the Grand Solar Maximum.

This mild forcing is right in the ballpark that skeptic scientists such as Lindzen, Spencer, Loehle and Idso (and myself) have been consistently saying is all that is justified by the evidence. It appears that Levitus et al. confirms the skeptics, and the IPCC has been falsified.

What commentary on Levitus do we hear from the alarmists? Skeptical Science ignores that the IPCC has been exaggerating the net forcing, and attempts to save face:

“Levitus et al. Find Global Warming Continues to Heat the Oceans”

Skeptical Science tries to “put this amount of heat into perspective”, in a vain attempt to sound an alarm by quoting a scenario that is almost insane, having an infinitesimally small probability of happening.

“We have estimated an increase of 24×1022 J representing a volume mean warming of 0.09°C of the 0-2000m layer of the World Ocean. If this heat were instantly transferred to the lower 10 km of the global atmosphere it would result in a volume mean warming of this atmospheric layer by approximately 36°C (65°F).”

To do this, heat would have to defy all known physics and move backwards, from the boiling water to the hot plate.

The ocean is a big place. The best evidence is that it’s heating very slowly, much more slowly than the IPCC projected, and just as the skeptics predicted. The ARGO floats are arguably the most important experiment in climate science. It is all about good science: directly measuring the phenomenon of interest with sufficient accuracy to resolve the questions.

UPDATE: data1981 explains.

It’s definitely a confusing issue. What we’re talking about here is basically the amount of unrealized warming, whereas the radiative forcing tells you the total net energy imbalance since your choice of start date (the IPCC uses 1750). So they’re not directly comparable figures.

The unrealized warming has been fairly constant over the past ~50 years whereas the radiative forcing increases the further back in time you choose your initial point. So if you look at the unrealized warming starting at any date from 1950 to 2010, it will be a fairly constant number. But the radiative forcing from 1950 to 2010 is larger than the forcing from 1990 to 2010, for example.

Hopefully I got that right.

No he didn’t.

UPDATE: Roger Pielke Sr has a post on this topic.

How many readers is 40 hits a day?

To follow up on my previous post (“Is Finkelstein totally clueless about the Internet”) with real data, I examine the stats of the log files on my server.

Below is a table generated by the log file analyzer Awstats for the first 2 months of my server http://landshape.org.

Month Unique visitors Number of visits Pages Hits Bandwidth
Jan 2012 7,361 18,526 71,689 204,718 3.02 GB
Feb 2012 7,081 16,422 113,111 233,158 7.67 GB

You can see the number of hits for January and February is 205K and 233K respectively, and the number of visits is 19K and 16K, about 10% of the number of hits.

The number of unique visitors in each month, that is, the number of unique IP addresses from which views of the blog originate, is 7K, or about 40% of the number of visits. This would be the best indication of the number of possible readers of the blog.

But this still exaggerates the number of readers, as many people land on the pages from search engines, recognise it’s not what they were looking for, and click away almost instantly – the ‘blink’ effect.

Below is a table of duration of visits, where it can be seen that 79% of visits last for less than 30 seconds.

Duration of Visit Number of visits Percent
0s-30s 1,322 79.4 %
30s-2mn 48 2.8 %
2mn-5mn 21 1.2 %
5mn-15mn 31 1.8 %
15mn-30mn 36 2.1 %
30mn-1h 46 2.7 %
1h+ 121 7.2 %
Unknown 39 2.3 %

Therefore the fraction of hits on http://landshape.org that corresponds to effective readers is approximately 0.1*0.4*0.2 = 0.008, or close to 1%. So my guesstimate from yesterday was pretty damn close. The ratios on other blogs may be a little different, but not so different as to matter.
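
Putting the same arithmetic in one place, here is a minimal R sketch using the January 2012 figures from the tables above (the final count is only a rough order-of-magnitude estimate):

# Rough readers-per-month estimate from the January 2012 Awstats figures.
hits=204718
visits=18526
unique.visitors=7361
non.bounce=1-0.794                   # ~79% of visits last under 30 seconds ("blink" visits)
visits/hits                          # ~0.09: visits are about 10% of hits
unique.visitors/visits               # ~0.40: unique visitors are about 40% of visits
frac=(visits/hits)*(unique.visitors/visits)*non.bounce
frac                                 # ~0.007: under 1% of hits correspond to effective readers
round(hits*frac)                     # roughly 1,500 effective readers in the month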

Case closed, your honor. Your proposal to regulate blogs with more than 15,000 hits per annum, or 1,250 hits per month, would impact all blogs with more than 12.5 readers a month, or less than one per day.

Is Finkelstein Totally Clueless About the Internet?

The Media Inquiry by Finkelstein Q.C. proposed on page 301 the regulation of blogs with more than a specific number of hits per annum, suggesting an equivalency with print media:

If a publisher distributes more than 3000 copies of print per issue or a news internet site has a minimum of 15 000 hits per annum it should be subject to the jurisdiction of the News Media Council, but not otherwise. These numbers are arbitrary, but a line must be drawn somewhere.

Does he know how many actual readers that 15,000 hits a year represents?

Of the total number of hits a small blog receives, at least 90% are due to search bots (like Google and Bing), spiders, spammers, RSS readers and sundry malicious automata. As hits are usually identified with client requests, each image on a page, logo, thumbnail etc. is counted as a separate hit.

Let’s be generous and say that 10% of hits could be identified with real people; around 75% of these are bounces, people who click away within a few seconds.

The real readers might browse a few pages, contributing 3 or 4 hits each.

Therefore, the ratio of readers to hits is around 0.1*0.25*0.25, or less than 1%.

Conservatively, 15,000 hits per annum translates into about 150 readers a year, or less than one reader per day. Many of these will be returning visitors, reducing the unique number further.

Yet Finkelstein seems to suggest that 15000 hits per annum is equivalent to a publication with a print run of 3000 copies.

Given losses and returns, a small regional paper might reach 1500 people twice a week with that kind of print run, or perhaps 15000 unique people per year.

One can explain the derivation of Finkelstein’s figures of 3000 paper copies and 15,000 hits per annum by assuming that one blog hit is equivalent to a single paper reader.

So one must then ask, is Finkelstein totally clueless about the Internet? One would think that before proposing to regulate blogs they would have done their homework.

Finkelstein the new face of totalitarianism

Members of the Independent Inquiry into Media Regulation at Sydney University: Chair of the Inquiry Ray Finkelstein (centre), with Chris Young (left) and Prof. Matthew Ricketson (right).

When I started in 2005 fighting to defend normal scientific standards over the exaggerations and biases of climate science extremism, I never thought it would end up in a fight for free speech over left-wing totalitarianism. Apparently, based on the Finkelstein Media Inquiry, it has come to this.

Some comments from blogs that the report proposes be regulated by a new Ministry of Truth:

So it can’t happen here, can’t it? by Steve Kates:

But surely they cannot be thinking of looking at my opinion to decide whether I can be prosecuted? But if not that, what, precisely, do they have in mind? This is more than just a thin edge of the wedge. This is how it starts and this is not where it will end.

It’s worth skimming the report to see what social scientists are up to these days. It’s something called “social responsibility theory” and its origins are apparently in totalitarianism, as well described on page 50.

Authoritarian theory, the oldest and through history the most pervasive, reflected societies which held that all persons were not equal, that some were wiser than others and it was those persons whose opinions should therefore be preferred; societies in which fealty to the monarch or ruler or tyrant was demanded of all and where the people were told what their rulers thought they ought to know. Totalitarian theory shared many of these characteristics, but contained one important additional dimension: the education of the people in the ‘correct’ truth.

Note that the Review is proposing regulation of blogs “with a minimum of 15,000 hits per annum” – a minuscule traffic that would include Niche Modeling.

If a publisher distributes more than 3000 copies of print per issue or a news internet site has a minimum of 15 000 hits per annum it should be subject to the jurisdiction of the News Media Council, but not otherwise. These numbers are arbitrary, but a line must be drawn somewhere.

The figure of 40 hits a day is even more ridiculous considering around 90% of hits are from bots, crawlers, spammers and the like. So basically any blog that is read by anybody would be captured in the Green/Labor Government net.

A government regulator installed to monitor and control the free press. What could possibly go wrong? – Gab

If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidise it. – Ronald Reagan

The Finkelstein Report, in over 400 pages, attempts to justify its intervention into free speech, devoting considerable space to the topic of “market failure,” where it is claimed that a free press has led to undesirable results, such as the creation of monopolies, unjustified harm to people, and unjust coverage of issues like global warming. In raising the notion of “market failure,” they never argue for the moral and productive superiority of capitalism, to which we owe a free press, nor do they acknowledge the moral bankruptcy and destructive economic consequences of Green environmentalism.

Sign the Menzies House Free Speech Petition

Dick Smith Offers $1M For Proof of LENR

Dick Smith, an Australian retail millionaire, has offered a $1M prize, first to an Italian inventor, and now to a Greek company, Defkalion, if they can demonstrate a commercial LENR (low energy nuclear reaction, aka cold fusion) to the satisfaction of third-party scientific observers.

As I am convinced this is a scam similar to Firepower International (make sure you look it up on Wikipedia) I am not prepared to waste money on this until the test conditions have been agreed on.

As with the Rossie challenge the test must be one where the result will be accepted by reasonable people in the scientific community.

I hope the Swedish scientists will be involved. If not I feel sure we can get equivalent independent experts.

Thanks for the suggestion. I would like to this live on international television- say the U S 60 minutes.

To get up to speed on this quickly moving story, read here.

Scafetta vs the IPCC

Great new application from WUWT contrasts the predictions of two models of global warming, Scafetta’s empirical resonance model and the IPCC general circulation models.

I was asked to make sense of this from Rahmstorf and Foster:
http://iopscience.iop.org/1748-9326/6/4/044022, referenced here at RC: http://www.realclimate.org/index.php?p=10475.

I haven’t read the paper in detail, and I find I have to do that to really assess it. So I can only comment on the general approach. Although it seems superficially plausible, it’s also somewhat novel, and so I am uncomfortable with it, as I don’t fully appreciate the statistical limitations.

IMHO the only really scientific way to approach a question is to contrast competing hypotheses, e.g. the null versus the alternative, or another combination, such as Scafetta vs the IPCC above. It’s clear, easy to understand, and not so prone to biases.

But it seems like climate scientists are very creative in coming up with novel ways to justify their theory, and almost always fail to clearly compare and contrast the alternatives. That is their weakness: they are so damn convinced of CAGW, and it shows they are generally ill-equipped with the expertise and training for conducting rigorous scientific analysis.

And of course, “creative” is meant not in the good sense.

Newt Kills South Carolina Primary

Newt Gingrich recovered strongly to pip Romney at the post in the South Carolina Primary, a race with a number of reversals, shown clearly in the Gallup Daily Poll above. In the end it wasn’t close, with Newt at 40% to Mitt’s 28%, as shown on this cool Google App.

So the race of 2012 is turning out to be far more exciting than predicted, and with Mittens fading in the polls, the big-heads who named Mitt Romney a shoo-in for the nomination must be feeling the heat.

The results in SC reflect the performance in the national polls for the remaining 4 contenders, with Mitt and Newt in an upper tier, and Ron Paul and Santorum in a lower tier, at least for the moment.

Menzies House posts a good poll-based commentary on the race by Amir Iljazi, who has a Master’s Degree in Political Science from American University in Washington, D.C. and currently resides in Tampa, Florida.