Wish list for science journals

Many of my numerate readers will have read the account by Rick Trebino of Georgia Tech of the trials and tribulations of responding to an error in the public record of the peer-reviewed literature, and will have ideas of their own on what they would like to see.

Record your ideas for what you would like to see below. (I am on vacation on the Great Barrier Reef right now, so excuse the brevity; I am typing this from the resort.) Here is my wish list:

1. Code and data allow replication
2. Reviewers can act as coaches, where appropriate
3. Journals dedicated entirely to review of others’ studies


Example of Scientific Bias

While reading a paper by Richard Lindzen, Climate Science: Is it currently designed to answer questions?, I was struck by some comments towards the end from John P. Holdren, director of the Woods Hole Research Center, about climate skeptics. He says:

First, they have not come up with any plausible alternative culprit for the disruption of global climate that is being observed, for example, a culprit other than the greenhouse-gas buildups in the atmosphere that have been measured and tied beyond doubt to human activities. (The argument that variations in the sun’s output might be responsible fails a number of elementary scientific tests.)


Downloading Monthly Mean Australian Temperatures from the BoM

This is a funny story about getting monthly mean Australian temperatures from the Bureau of Meteorology. I hope for a happy ending, but we have yet to see one.

About 2 weeks ago I tried to download these data from the BoM website set up for the purpose.

I have noticed before that temperature is NOT available as a MONTHLY mean series, but this time I was very perplexed. It is available by season, and by individual month (January, February, and so on), but not all together.

To get the data into a monthly series I would have to download all 12 raw data files (one for each month) and then interleave them to create a continuous list with 12 values for each year. This would be time-consuming and a potential source of error.
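Just to illustrate what that interleaving step involves, here is a minimal sketch in Python. It assumes twelve hypothetical two-column CSV files (year, value), one per month, named mean_temp_01.csv through mean_temp_12.csv; the actual BoM file names and layout will differ.

```python
import csv

# Hypothetical per-month files: mean_temp_01.csv ... mean_temp_12.csv,
# each assumed to hold rows of (year, value) for that month.
MONTH_FILES = [f"mean_temp_{m:02d}.csv" for m in range(1, 13)]

def read_month(path):
    """Return {year: value} for one per-month file (assumed two-column CSV)."""
    values = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or not row[0].strip().isdigit():
                continue  # skip header or blank lines
            values[int(row[0])] = float(row[1])
    return values

# Load all twelve files, then interleave into one chronological series:
# (year, month, value), i.e. 12 values per year.
by_month = [read_month(p) for p in MONTH_FILES]
years = sorted(set().union(*(d.keys() for d in by_month)))

series = []
for year in years:
    for month, data in enumerate(by_month, start=1):
        if year in data:                    # some months may be missing
            series.append((year, month, data[year]))

for year, month, value in series[:12]:      # show the first year as a check
    print(f"{year}-{month:02d}: {value:.1f}")
```

Simple enough, but it is exactly the kind of hand-stitching that introduces silent errors, which is why a single downloadable monthly series would be preferable.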

Note that all of the major sources of global mean temperature data on the web (GISS, HadCRUT, RSS MSU, etc.) are available as monthly series.

It would seem relatively easy to add an option for download of data to the selection menu.

So I contacted the help desk and, after some time, had a delightfully frustrating conversation along the lines of: “I would like the monthly data series.” “But all the months are there.” Etc., etc. Finally, she agreed to pass my enquiry on to what I presume was the technical department, and today I found out why Australian temperature is not available as a monthly series.

Apparently, the reason that stock-standard monthly data series are not available is that they don’t look good as a bar chart.

Well, I would never have guessed.

Massive Extinctions: An Update

The online magazine CO2 Science has published a review and update on the massive extinction theory, featuring a large section on the guest editorial they graciously hosted for me back in 2004. That was where I lost my virginity, so to speak, so the treatment is a little more exuberant than I would use now, but not too much. I remember being disgusted that anyone could seriously propose an analysis that would show a high number of species extinctions from global warming even if the rate of species extinction decreased. I have seen a lot more since, and such bias no longer surprises me.

I developed an error model for quantifying the various systematic biases in ‘shift’ analyses, and incorporated it into a chapter in my book, though I don’t think it has ever been used in earnest. Now I tend to avoid working on issues of bias, as they are too hard to describe and prove, and focus instead on provable errors in global warming science, such as Rahmstorf’s ‘worse than we thought’ idea.

So it is with some sense of vindication for my repudiation of the uncritical use of ecological niche models that I read the conclusion of the CO2 Science report on the Dormann paper:

… that shortcomings associated with climate-alarmist analyses of the present distributions of species “are so numerous and fundamental that common ecological sense should caution us against putting much faith in relying on their findings for further extrapolations,” in contrast to what is routinely done in studies such as that of Thomas et al., the latter of whose methods and findings, according to Dormann, “have been challenged for conceptual and statistical reasons” by many other researchers.

Dormann thus concluded that climate-alarmist “projections of species distributions are not merely generating hypotheses to be tested by later data,” they are being presented as “predictions of tomorrow’s diversity, and policy makers and the public will interpret them as forecasts, similar to forecasts about tomorrow’s weather,” which he clearly feels is both unwarranted and unwise … .

Two points:

1. High uncertainty in the models does not necessarily produce a large bias in estimates of extinction. Just as standard deviations are independent of mean values, model inaccuracy is not by itself a basis for ‘model bashing’. As long as the method is unbiased, some progress can be made, provided the uncertainty is carried through the calculations (a toy sketch of what I mean appears after these points).

The problem with Thomas et al. is like stating that surgery kills a million people a year worldwide, so doing more surgery is going to kill more people. Replace people with species and surgery with global warming and you have the gist of the result in Thomas et al. While literally true, an unbiased account would weigh this against the number of people saved by surgery every year. Not Thomas and colleagues: species that increased their area due to warming were removed from the analysis.

2. The estimates of extinction rely in large part on the quality of the underlying GCMs: the magnitude of the predicted shifts in temperature and precipitation, and the maintenance of the correct relationship between those variables. If the climate models are bad, then the extinction predictions will be too.

So to a large extent it is pointless to work on shift models of this form unless the underlying climate model predictions can be trusted at the required scale and resolution, and with internal coherence between temperature and precipitation.
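To be concrete about what “carrying the uncertainty through” might look like, here is a toy Monte Carlo sketch, and nothing more than that: it pushes an assumed uncertainty in a projected habitat area through the textbook species-area form E = 1 - (A_new/A_old)^z, the kind of relationship used in studies like Thomas et al. All the numbers are hypothetical.

```python
import random

# Toy illustration only: propagate uncertainty in a projected habitat-area
# change through the species-area form E = 1 - (A_new / A_old)**z.
# All numbers below (areas, z, noise level) are hypothetical, not from any study.
Z = 0.25            # species-area exponent often quoted in this literature
A_OLD = 1000.0      # current habitat area (arbitrary units)
A_NEW_MEAN = 700.0  # model-projected future area
A_NEW_SD = 150.0    # assumed uncertainty in that projection

def extinction_fraction(a_new, a_old, z=Z):
    """Committed-extinction fraction under the species-area relationship."""
    return 1.0 - (a_new / a_old) ** z

random.seed(1)
draws = []
for _ in range(100_000):
    a_new = max(random.gauss(A_NEW_MEAN, A_NEW_SD), 1.0)  # keep area positive
    draws.append(extinction_fraction(a_new, A_OLD))

draws.sort()
point = extinction_fraction(A_NEW_MEAN, A_OLD)
lo, med, hi = (draws[int(p * len(draws))] for p in (0.05, 0.5, 0.95))
print(f"point estimate : {point:.3f}")
print(f"median of draws: {med:.3f}")
print(f"5-95% interval : [{lo:.3f}, {hi:.3f}]")
```

The honest statement of the result is the interval, not the headline point estimate; a wide interval is a statement about uncertainty, not about bias.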

I see the widespread, uncritical use of climate models to project the effects of global warming as more of a problem than the effects modelling itself. Most studies of climate effects don’t bother to check the validity of the climate models; they just download them. It’s a house built on sand.

Just out of interest, a few days ago I applied for a registration account on the CMIP database of climate models, stating that I was interested in auditing their simulation data. As of today, no response. I also contacted the Australian Bureau of Meteorology recently for Australian monthly temperature data. As of today, no response.