Drought predictions for this century

In “The National Science Foundation Funds Multi-Decadal Climate Predictions Without An Ability To Verify Their Skill”, Roger Pielke Sr. links GCM skill at predicting drought to natural variation:

2. “Future efforts to predict drought will depend on models’ ability to predict tropical SSTs.”

In other words, there is NO way to assess the skill of these models are predicting drought as they have not yet shown any skill in SST predictions on time scales longer than a season, nor natural climate cycles such as El Niño [or the PDO, the NAO, etc.].

This seems a convoluted turn of phrase. There are ways to assess the skill of these models — by comparing them with past drought frequency and severity. Such assessments show the models have NO skill at predicting droughts.

The assumption is that IF they were able to predict cycles like the PDO, then they would be able to predict droughts. But even if we average over these cycles, there is still the little problem of overall trends in extreme phenomena, which accuracy at predicting the PDO et al. would not necessarily capture.

His argument that skill at drought prediction swings on PDO prediction is useful, however, as a basis for excluding applications of models to climate phenomena that rely on such cycles.

Roger is perhaps being polite about misleading policymakers when he continues:

Funding of multi-decadal regional climate predictions by the National Science Foundation which cannot be verified in terms of accuracy is not only a poor use of tax payer funds, but is misleading policymakers and others on the actual skill that exists in predicting changes in the frequency of drought in the future.

The review by Dai favours the PDSI drought index:

The PDSI was created by Palmer [22] with the intent to measure the cumulative departure in surface water balance. It incorporates antecedent and current moisture supply (precipitation) and demand (PE) into a hydrological accounting system. Although the PDSI is a standardized measure, ranging from about −10 (dry) to +10 (wet)…
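For concreteness, here is a minimal R sketch of the cumulative accounting the quote describes. The recursion PDSI[t] = 0.897*PDSI[t-1] + Z[t]/3 is the standard published Palmer backbone; the placeholder anomaly series, and the omission of Palmer's wet/dry spell bookkeeping, are simplifications of mine.

```r
# Minimal sketch of the PDSI's cumulative water-balance accounting.
# Standard Palmer recursion: PDSI[t] = 0.897*PDSI[t-1] + Z[t]/3, where
# Z is the monthly moisture anomaly (supply minus expected demand).
# 'z' is a placeholder series, not real station data.
pdsi_sketch <- function(z) {
  out <- numeric(length(z))
  for (i in seq_along(z)) {
    prev <- if (i == 1) 0 else out[i - 1]
    out[i] <- 0.897 * prev + z[i] / 3  # persistence term plus current anomaly
  }
  out
}

set.seed(1)
z <- rnorm(120)          # placeholder: ten years of monthly moisture anomalies
range(pdsi_sketch(z))    # sustained anomalies accumulate into dry/wet excursions
```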

I always search for the assessment of accuracy first, and as usual the skill of the models gets very little, non-quantitative coverage. Climate scientists are loath to judge the models, preferring to cloak their results in paragraphs of uncertainty, and to present “dire predictions” of GCMs in garish figures (his Figure 11).

They need to start acting like scientists and stop these misleading practices until it is shown, by rigorous empirical testing and for fundamental reasons, that the current GCMs are fit for the purpose of drought modelling.

Just to show I am not always negative: this recent report has a lot to recommend it. The authors of “Climate variability and change in south-eastern Australia” do quite a good job of describing the climatological features affecting the area, and of putting technical issues, climate, hydrology and social impact together in an informative report.

While they say:

The current rainfall decline is apparently linked (at least in part) to climate change, raising the possibility that the current dry conditions may persist, and even possibly intensify (as has been the case in south-west Western Australia).

They also admit they don’t know how to combine the output of multiple models:

Some research (Smith & Chandler, 2009) suggests that uncertainties in climate projections can be reduced by careful selection of the global climate models, with less weight being given to models that do not simulate current climate adequately. Other work suggests that explicit model selection may not be necessary (Watterson, 2008; Chiew et al., 2009c). Further research is being done to determine how to combine the output of global climate models to develop more accurate region-scale projections of climate change.

I would fault the report for giving no suggestion that anything other than GCMs might be used, and no evidence that the GCMs perform better than a mean value. If a model does no better than the long-term average, there is good reason to suppose it has no skill, and to throw it out. This is called ‘benchmarking’, but rejecting any of the IPCC’s GCMs is apparently an alien concept.

Show us your tests – Australian climate projections

My critique of models used in a major Australian drought study appeared in Energy and Environment last month (read Critique-of-DECR-EE here). It deals with validation of models (the subject of a recent post by Judith Curry), and regional model disagreement with rainfall observations (see post by Willis here).

The main purpose is summed up in the last sentence of the abstract:

The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

It is well known that, despite persistent attempts and claims in the press, general circulation models are virtually worthless at projecting changes in regional rainfall; the IPCC says so, and the Australian Academy of Science agrees. The most basic statistical tests in the paper demonstrate this: the simulated drought trends are statistically inconsistent with the trend of the observations, a simple mean value shows more skill than any of the models, and drought frequency has dropped below the 95% CL of the simulations (see Figure).
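The mean-value benchmark is simple enough to sketch in R. Here ‘obs’ and ‘models’ are simulated stand-ins for the DECR drought series, not the actual data; the point is the structure of the test, which any reader can apply to real series.

```r
# Mean-value benchmark: a model shows skill only if its error against
# observations beats the error of simply using the long-term observed
# mean. 'obs' and 'models' are simulated placeholders.
set.seed(1)
obs    <- pmax(rnorm(100, mean = 6, sd = 3), 0)     # placeholder: % area in drought
models <- replicate(10, pmax(rnorm(100, 6, 5), 0))  # placeholder: 10 model runs

rmse  <- function(pred, obs) sqrt(mean((pred - obs)^2))
bench <- rmse(rep(mean(obs), length(obs)), obs)     # error of the mean-value "model"
skill <- apply(models, 2, rmse, obs = obs)
skill < bench   # a model failing this benchmark has no demonstrated skill
```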

Rainfall has increased in tropical and subtropical areas of Australia since the 1970s, while some areas of the country, particularly the major population centres in the south-east and south-west, have experienced multi-year deficits of rainfall. Overall, Australian rainfall is increasing.

The larger issue is how to acknowledge that there will always be worthless models, and that it is the task of genuinely committed modellers to identify and eliminate them. It is not convincing to argue that validation is too hard for climate models, that they are justified by physical realism, or to rely on the calibrated-eyeball approach. The study shows that the obvious testing regimes would have eliminated these drought models from contention — if they had been performed.

While scientists are mainly interested in the relative skill of models, where statistical measures such as root mean square (RMS) error are appropriate, decision-makers are (or should be) concerned with whether the models should be used at all (whether they are fit-for-use). Because of this, model testing regimes for decision-makers must have the potential to completely reject some or all models if they do not rise above a predetermined standard, or benchmark.

There are a number of ways that benchmarking can be set up, which engineers and others in critical disciplines would be familiar with, usually involving a degree of independent inspection, documentation of expected standards, and so on. My study makes the case that climate science needs to start adopting more rigorous validation practices. Until it does, regional climate projections should not be taken seriously by decision-makers.

It is up to the customers of these studies not to rely on the say-so of the IPCC, the CSIRO and the BoM, and to ask “Show me your tests”, as would be expected of any economic, medical or engineering study where the costs of making the wrong decision are high. Their duty of care requires that they be confident that all reasonable means have been taken to validate all of the models that support the key conclusions.

Projected future runoff of the Breede River under climate change

More evidence of worthless model predictions from CO2 Science:

All of the future flow-rates calculated by Steynor et al. exhibited double-digit negative percentage changes that averaged -25% for one global climate model and -50% for another global climate model; and in like manner the mean past trend of four of Lloyd’s five stations was also negative (-13%). But the other station had a positive trend (+14.6%). In addition, by “examination of river flows over the past 43 years in the Breede River basin,” Lloyd was able to demonstrate that “changes in land use, creation of impoundments, and increasing abstraction have primarily been responsible for changes in the observed flows” of all of the negative-trend stations.

Interestingly, Steynor et al. had presumed that warming would lead to decreased flow rates, as their projections suggested; and they thus assumed their projections were correct. However, Lloyd was able to demonstrate that those results were driven primarily by unaccounted for land use changes in the five catchments, and that in his newer study the one site that had “a pristine watershed” was the one that had the “14% increase in flow over the study period,” which was “contrary to the climate change predictions” and indicative of the fact that “climate change models cannot yet account for local climate change effects.” As a result, he concluded that “predictions of possible adverse local impacts from global climate change should therefore be treated with the greatest caution,” and that, “above all, they must not form the basis for any policy decisions until such time as they can reproduce known climatic effects satisfactorily.”

How Bad are the Models – UHI

Urban areas differ from rural areas in a number of well-known ways, but the IPCC summaries maintain that these effects have been effectively removed when they talk about the recent (post-1960) increases in global surface temperature.

Continuing the series on how bad climate models really are, another paper is in the pipeline on the long-standing influence of urban heat island (UHI) effects in the surface temperature data. Ross McKitrick reports that between one-third and one-half of the recent increase in temperature is due to this contamination (Ross’s website here).

The methodology uses the regression coefficients from the socioeconomic variables to estimate the trend distribution after removing the estimated non-climatic biases in the temperature data. On observational data this reduces the mean warming trend by between one-third and one-half, but it does not affect the mean surface trend in the model-generated data. Again this is consistent with the view that the observations contain a spatial contamination pattern not present in, or predicted by, the climate models.

Note that this rather gross bias is not present in or predicted by the climate models, meaning the climate models do not have the physical mechanisms to model it. One consequence is that if the models are to fit the recent increase in temperature, some other (incorrect) mechanism must be used (such as H2O feedback perhaps – I don’t know).
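To make the quoted methodology concrete, here is a hedged R sketch on simulated data. The variable names are illustrative, not McKitrick’s, and the real analysis uses many more covariates plus spatial autocorrelation corrections; this only shows the remove-the-fitted-contamination step.

```r
# Sketch of the adjustment described in the quote: regress grid-cell
# temperature trends on socioeconomic variables, then subtract the
# fitted non-climatic component. All data simulated, illustrative only.
set.seed(1)
n <- 440                                     # number of grid cells (illustrative)
gdp_growth <- rnorm(n, mean = 1)             # placeholder socioeconomic covariates
pop_growth <- rnorm(n, mean = 1)
climate <- rnorm(n, mean = 0.20, sd = 0.05)  # true climatic trend (C/decade)
trend   <- climate + 0.05 * gdp_growth + 0.05 * pop_growth  # observed, contaminated

fit <- lm(trend ~ gdp_growth + pop_growth)
contamination <- cbind(gdp_growth, pop_growth) %*% coef(fit)[-1]
adjusted <- trend - contamination
c(raw = mean(trend), adjusted = mean(adjusted))  # adjusted mean trend is lower
```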

Ross has written up the backstory of the all too common obstacles to publication of articles questioning the IPCC here:

In the aftermath of Climategate a lot of scientists working on global warming-related topics are upset that their field has apparently lost credibility with the public. The public seems to believe that climatology is beset with cliquish gatekeeping, wagon-circling, biased peer-review, faulty data and statistical incompetence. In response to these perceptions, some scientists are casting around, in op-eds and weblogs, for ideas on how to hit back at their critics. I would like to suggest that the climate science community consider instead whether the public might actually have a point.

How Bad are Climate Models? Temperature

Due to building the website for The Climate Sceptics I haven’t been able to post, despite some important events. My site and other files were deleted in some kind of attack, so I have had to rebuild it as well. I now have the WordPress 3.0 multiuser system, which enables easy creation and management of multiple blogs, so it’s an ill wind, eh?

The important event I refer to is the release of “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series” by Ross McKitrick, Stephen McIntyre and Chad Herman (2010). Nobody is talking about it, and I don’t know why, as it has a history almost as long as the hockey stick on McIntyre’s blog (summary here), and it is a powerful condemnation of climate models in the peer-reviewed literature.

I feel a series coming on, as these results deliver a stunning blow to the last leg the alarmists have been standing on, i.e. model credibility. Also because I have a paper coming out in a similar vein, dealing with drought models in regional Australia.

Using a rigorous methodology, they tested for a mismatch between modelled and observed trends in the tropical troposphere, using 57 runs from 23 models of the lower troposphere (LT) and mid-troposphere (MT) with forcing inputs from the realistic A1B emission scenario, and four observational temperature series (two satellite-borne microwave sounding unit (MSU)-derived series and two balloon-borne radiosonde series), over two time periods, 1979-99 and 1999-2009. This represents a basic validation test of climate models over a 30-year period, a test which SHOULD be fundamental to any belief in the models and their usefulness for projections of future global warming.

The results are shown in their figure:

… the differences between models and observations now exceed the 99% critical value. As shown in Table 1 and Section 3.3, the model trends are about twice as large as observations in the LT layer, and about four times as large in the MT layer.
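A simplified illustration of that comparison in R, with placeholder numbers. This is not MMH10’s panel method, which additionally handles autocorrelation and covariance between runs; it only shows the shape of the test.

```r
# Compare an ensemble of modelled tropical tropospheric trends with an
# observed trend. Values are placeholders, not the paper's estimates.
set.seed(1)
model_trends <- rnorm(57, mean = 0.26, sd = 0.06)  # C/decade, illustrative ensemble
obs_trend    <- 0.12                               # illustrative observed trend
t.test(model_trends, mu = obs_trend)  # rejects equality when the models run hot
```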

Continue reading How Bad are Climate Models? Temperature

Monthly Roundup

  • The Age reports that Climategate was a game changer. Judith Curry said Dr Jones had shown himself to be “genuinely repentant, and has been completely open and honest about what has been done and why … speaking with humility about the uncertainty in the data sets”.

    So far it’s a case of the academic defense: “Oops, I lied.” Sir Muir Russell, the chairman of the Judicial Appointments Board for Scotland, notes that senior climate scientists say their world has been dramatically changed by the affair. We welcome senior climate scientists to the real world of professional transparency. Steve McIntyre has received overwhelming financial support from his readers for his trip to the Guardian’s debate in England.

  • Senator Wong reminded a conference on the Gold Coast that scientists were responsible for this unpopular policy bind: “Remember why this debate started, why we all started talking about climate change and why people called for action?

    “It is because of you that we understand that climate change is real and it is because of you that we understand that climate change is happening now … and that it is caused by carbon dioxide emissions.”

    But she also challenged scientists to get their act together:

    … the science behind the political debate cannot be over-estimated. Unfortunately in the recent past, science has not been able to speak with one voice on climate change, making it impossible for politicians to enact practical measures to address the phenomenon.

    Reading between the lines, could it be that her political windsock no longer points towards the agenda of tenured liberal progressive moonbats and she is butching-up to the union bosses that put Ms Squiggle in command? Hmm…

  • Lubos reviews a sloppy article by Rasmus Benestad on climate feedbacks. He explains the system as I see it: many short-run positive feedbacks in the atmosphere (and oceans) but stronger negative feedbacks in the long run, producing a “half-pipe” response profile.
  • Lubos makes me laugh:

    Well, let me make it clear that there’s nothing controversial about negative feedbacks. In this battle between negative feedbacks and Rasmus Benestad, it is the latter who is an utterly controversial crackpot. The existence of crackpots may make basic concepts of science controversial among crackpots – and the remaining readers of Real Climate, if there are any – but it can’t make it controversial in the real science.

  • CSIRO is making science more accessible to decision-makers by “trialling different ways of presenting climate information”. As if they couldn’t be more non-committal, they are presenting the regional forecasts of models that “are complex, and constantly being refined” in a slick interface. If, as my upcoming publication shows, the model forecasts are worthless, then you have to wonder — what is the point?

    The rainfall simulations in the models are completely opposite to reality over the last 100 years. To make this clear to climate scientists, when rainfall decreases the models increase. When rainfall increases, the models decrease. The best way decision-makers could use CSIRO model forecasts is as contrary indicators, i.e. buy when they say “sell”, and sell when they say “buy”.


Sceptics Tour Update

Having just returned from my leg of the tour, I have been offline for a while, but expect to catch up this week. Here is my PowerPoint presentation “Tweeter and the Monkey M(e)an — Negating Climate Change Policy” (4.3MB).

The title comes from a song by the Traveling Wilburys. The message is that without proper validation, climate models are no more credible than Tweets, and from my (and others’) validation testing, the model forecasts are not fit-for-forecasting, showing no more accuracy than the “Monkey Mean” — the average temperature and rainfall. I critique CSIRO and BoM reports and conclude with an example of how to make rational business decisions under climate forecast uncertainty.

Continue reading Sceptics Tour Update

On the Use of the Virial Theorem by Miskolczi

Virial Paper 6_12_2010 submitted by Adolf J. Giger.

Allow me to make some more comments on the Virial Theorem (VT) as used by Ferenc Miskolczi (FM) for the atmosphere.

As I said on this blog back in February, a very fundamental derivation of the VT was made by H. Goldstein in Section 3-4 of “Classical Mechanics”, 1980, Ref. [1]: PE = 2*KE (potential energy = 2 × kinetic energy). Then he also derives the Ideal Gas Law (IGL), P*V = N*k*T, as a consequence of the VT, and shows that PE = 3*P*V and KE = (3/2)*N*k*T. The two laws, IGL and VT, are therefore two ways to describe the same physical phenomenon. Despite its seemingly restrictive name, we know that the IGL is a good approximation for many gases (monatomic, diatomic, polyatomic and even water vapor) as long as they remain very dilute. Goldstein’s derivations are made for an enclosure of volume V with constant gas pressure P and temperature T in a central force field like the Earth’s gravitational field. They also hold for an open volume V anywhere in the atmosphere. As for FM, he points out that the VT reflects the fact that the atmosphere is gravitationally bounded.

Ferenc Miskolczi in his papers [2,3] relates the total potential energy of the atmosphere, PEtot, to the total upward IR radiation Su at the surface. This relationship has to be considered a proportionality rather than an exact equality: Su = const*PEtot. This linkage makes sense, since Su determines the surface temperature Ts through the Stefan-Boltzmann law, Su = (5.6703/10^8)*Ts^4, and finally the IGL ties together Ts, P(z=0) and PEtot.
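As a quick check of the quoted Stefan-Boltzmann relation, two lines of R for a global mean surface temperature near 288 K:

```r
# Stefan-Boltzmann check: Su = (5.6703e-8) * Ts^4
Ts <- 288
(5.6703 / 10^8) * Ts^4   # about 390 W/m^2, the commonly quoted surface upward IR flux
```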

FM then assigns the kinetic IR energy KE (temperature) in the atmosphere to the upward atmospheric IR emittance Eu, or Eu = const*KE. The flux Eu is made up of two terms, F + K, where F is due to thermalized absorption of short wave solar radiation in atmospheric water vapor, and K is due to heat transfers from the Earth’s surface to air masses and clouds through evaporation and convection. Neither F nor K is directly radiated from the Earth’s surface; they represent radiation from the atmosphere itself. There is an obvious limitation to such an assignment, mainly because for the VT, or the IGL in general, the temperature (the KE) has to be measured with a thermometer, whereas Eu represents the radiative temperature (flux) that has to be measured with a radiometer, and these two measurements can give vastly different results, as we see in the two following extreme cases:

In between these two extremes we have the Earth, where FM’s version of the VT, Su = 2*Eu, applies reasonably well. We will see next, in a discussion of FM’s exact solution, how closely and for what types of atmospheres FM’s VT (Eu/Su = 0.5) holds, but we can say already that no physical principle is violated if it doesn’t. The VT that always holds for gases is not being violated; it is simply not fully recognized by FM’s fluxes, which have to be measured by radiometers. This may be an indication that the VT is less important for FM’s theory than normally assumed.

On the other hand, the IPCC assumes a positive water vapor feedback and arrives at very imprecise predictions for the climate sensitivity, ranging from 1.5 to 5 K (and even more). It is clear that this wide range is caused by the assumed positive feedback system, which apparently is close to instability (or singing, as the electrical engineer would call it in an unstable microphone-loudspeaker system). With such large uncertainties in their outputs, true scientists should be reluctant to publish their results.

Continue reading On the Use of the Virial Theorem by Miskolczi

No evidence of global warming extinctions

My rebuttal of Thomas’ computer models of massive species extinctions has been mentioned in a statement by Sen. Orrin G. Hatch before the United States Senate, on June 10, 2010.

1. Stockwell (2000) observes that the Thomas models, due to lack of any observed extinction data, are not ‘tried and true,’ and their doctrine of ‘massive extinction’ is actually a case of ‘massive extinction bias.’

[Stockwell, D.R.B. 2004. Biased Toward Extinction, Guest Editorial, CO2 Science 7 (19): http://www.co2science.org/articles/V7/N19/EDIT.php]

The one extinct species mentioned in the Thomas article is now thought to have fallen victim to the 1998 El Niño.

Continue reading No evidence of global warming extinctions

New Miskolczi Manuscript

Ferenc sent out reprints of his upcoming manuscript, and graciously acknowledges the contribution of a number of us for support, help and encouragement. I particularly like the perturbation and statistical power analysis, checking that a change in the greenhouse effect due to CO2 would likely have been detected if it had been present in the last 61 years.

The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness
by Ferenc Miskolczi,
Energy & Environment, 21:4 2010.

ABSTRACT
By the line-by-line method, a computer program is used to analyze Earth atmospheric radiosonde data from hundreds of weather balloon observations. In terms of a quasi-all-sky protocol, fundamental infrared atmospheric radiative flux components are calculated: at the top boundary, the outgoing long wave radiation, the surface transmitted radiation, and the upward atmospheric emittance; at the bottom boundary, the downward atmospheric emittance. The partition of the outgoing long wave radiation into upward atmospheric emittance and surface transmitted radiation components is based on the accurate computation of the true greenhouse-gas optical thickness for the radiosonde data. New relationships among the flux components have been found and are used to construct a quasi-all-sky model of the earth’s atmospheric energy transfer process. In the 1948-2008 time period the global average annual mean true greenhouse-gas optical thickness is found to be time-stationary. Simulated radiative no-feedback effects of measured actual CO2 change over the 61 years were calculated and found to be of magnitude easily detectable by the empirical data and analytical methods used. The data negate increase in CO2 in the atmosphere as a hypothetical cause for the apparently observed global warming. A hypothesis of significant positive feedback by water vapor effect on atmospheric infrared absorption is also negated by the observed measurements. Apparently major revision of the physics underlying the greenhouse effect is needed.
Continue reading New Miskolczi Manuscript

CSIRO Affair?

Terry McCrann’s accusation that CSIRO ‘breached trust’, in The Australian this weekend, sounds like an overly possessive lover saying he will never trust them again:

… our two pre-eminent centres of knowledge and public policy analysis across the social and hard sciences spectrum are now literally unbelievable.

In case you hadn’t heard, this is about the unseemly Treasury/mining company cat fight over the RSPT, and Tom Quirk’s fracas at Quadrant with Paul Fraser, Chief Research Scientist at CSIRO, over Quirk’s article CSIRO Abandons Science, which identifies a convenient omission in their State of the Climate position statement.

But the State of the Climate report has a number of very odd and questionable statements other than the one Tom wrote about. I will go through them in order:

Continue reading CSIRO Affair?

Celestial Origins of Climate Oscillations

Now reading…

Empirical evidence for a celestial origin of the climate oscillations and its implications by Nicola Scafetta

Abstract: We investigate whether or not the decadal and multi-decadal climate oscillations have an astronomical origin. Several global surface temperature records since 1850 and records deduced from the orbits of the planets present very similar power spectra. Eleven frequencies with period between 5 and 100 years closely correspond in the two records. Among them, large climate oscillations with peak-to-trough amplitude of about 0.1 °C and 0.25 °C, and periods of about 20 and 60 years, respectively, are synchronized to the orbital periods of Jupiter and Saturn. Schwabe and Hale solar cycles are also visible in the temperature records. A 9.1-year cycle is synchronized to the Moon’s orbital cycles. A phenomenological model based on these astronomical cycles can be used to well reconstruct the temperature oscillations since 1850 and to make partial forecasts for the 21st century. It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations. The partial forecast indicates that climate may stabilize or cool until 2030-2040. Possible physical mechanisms are qualitatively discussed with an emphasis on the phenomenon of collective synchronization of coupled oscillators.

Continue reading Celestial Origins of Climate Oscillations

No Clue on Global Warming and El Nino

Almost a mea culpa in today’s publication of The impact of global warming on the tropical Pacific Ocean and El Niño (CSIRO and PDF) by a who’s who of atmospheric circulation research including Vecchi, and CSIRO/BoM researchers Cai and Power.

Therefore, despite considerable progress in our understanding of the impact of climate change on many of the processes that contribute to El Niño variability, it is not yet possible to say whether ENSO activity will be enhanced or damped, or if the frequency of events will change.

This is illustrated by figure 3, where some CGCMs show an increase in the amplitude of ENSO variability in the future, some show a decrease, some show no statistically significant changes.

[Figure 3: changes in ENSO amplitude across the CGCMs]

Their paper flatly contradicts Power and Smith [2007], which held that increased El Niños are caused by a weakening of the pressure differential, as indicated by the SOI:

Continue reading No Clue on Global Warming and El Nino

Three good posts

Three recent posts challenging global warming science have not been picked up by other observers. While real scientists find Climategate distasteful, it does not necessarily challenge the pillars of AGW logic. These latest developments do, and perhaps give the insightful a heads-up on the direction of challenges to come.

The first is Loehle, Craig. 2010. The estimation of historical CO2 trajectories is indeterminate: Comment on “A new look at atmospheric carbon dioxide.” Atmospheric Environment, in press.

Loehle critiques a paper by Hoffman, claiming the exponential model for characterizing CO2 trajectories from historical data is not estimated properly. He illustrates, with the past 51 years of CO2 data, that three different models capture the historical pattern of CO2 increase with R² > 0.98 but forecast very different future trajectories.
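The point is easy to reproduce with R’s built-in Mauna Loa co2 series (monthly, 1959-1997). This is my own minimal sketch, not Loehle’s code or data:

```r
# Several models fit the CO2 history almost equally well but diverge
# sharply when extrapolated. Uses R's built-in 'co2' dataset.
t <- as.numeric(time(co2)) - 1959   # years since 1959
y <- as.numeric(co2)                # CO2 concentration (ppm)

lin  <- lm(y ~ t)                            # linear growth
quad <- lm(y ~ t + I(t^2))                   # quadratic growth
expo <- nls(y ~ a + b * exp(c * t),          # exponential growth
            start = list(a = 250, b = 60, c = 0.02))

# All three fit the record tightly (R^2 around 0.98)...
rsq <- function(m) 1 - sum(residuals(m)^2) / sum((y - mean(y))^2)
sapply(list(linear = lin, quadratic = quad, exponential = expo), rsq)

# ...yet their extrapolations to 2050 (t = 91) differ substantially.
new <- data.frame(t = 91)
c(linear = predict(lin, new), quadratic = predict(quad, new),
  exponential = predict(expo, new))
```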

To use the blog-speak, the exponential curve (the one that gives the highest forecast levels, surprise, surprise) is cherry-picked. To use Bayesian statistical terminology, it is ‘inductive bias’; in the financial parlance, ‘model risk’.

The breakthrough, as I see it, is that most analysts would view this result as ‘bleeding obvious’, so obvious that such comments almost never get published, because either the writer, the reviewers, or the editors think it is trivial, a negative result, or some such party-killer. However, I think he’s right, and it needs to be addressed. It also really irritates warmies, as the discussion shows.

Clearly such comments do not need to be long or complicated. Like my note on Rahmstorf’s ‘the climate is more sensitive than expected’ meme, Craig’s paper is only 2 pages. I reckon there are countless such comments possible, as climate science is full of such trivial errors, but the challenge is getting them published, and Craig has been very good at it.

The second development is Lubos taking the bat, here and here, to traditional certainty levels in environmental science.

So 5% of the statements claimed to be right because of statistical observations are wrong while 95% of them are right. Is it a good enough success rate to build science? Someone who is not familiar with science or rational thinking in general may think that it is good enough to make 95% of correct statements.

Regardless of the character and interpretation of the hypotheses and theories, it’s clear that a working scientific discipline requires at least the 5-sigma standards if its insights are going to be quasi-reliably reused in realistic, slightly longer chains of reasoning that can be as long as 6 steps or more.
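The arithmetic behind that last point is worth spelling out:

```r
# With each step right 95% of the time, a 6-step chain of reasoning
# holds only about 74% of the time.
0.95^6   # ~0.735
```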

Once again, to most analysts this is obvious, and it seems to piss people off to say it, but the basis for most climate statements is too uncertain to be useful.

The third post I noticed was Rahmstorf (2009): Off the mark again (part 1)

Tom Moriarty is highly critical of Stefan Rahmstorf’s ever more fanciful models for scaring the world about sea level rise. The fact that the only really solid empirical relationship between sea level and temperature is a linear correlation, and that the notion that an increase in the sea level rise rate would take a millennium to damp out has no empirical basis, appears to matter not at all to reviewers or editors.

Tom’s argument against this madness is as follows:

If realistic data is applied to a model that is purported to explain a phenomenon, and the result is obviously unrealistic, then that model must be rejected. In this section I will explain how VR2009 apply a realistic temperature scenario to their model, namely a linear increasing temperature, to explain the effect of the counter-intuitively negative value of the model parameter, b. Their result is satisfying. But in the next section I will apply another realistic temperature scenario to their model, and the result will be outrageously bogus. This will force the rejection of their model and its predictive power for the 21st century.

Three good posts, in good taste, and well worth digesting.

CSIRO and BoM Report

A short post, but it doesn’t take much to show that CSIRO and BoM are telling porkies again in their just released State of Climate report. Just click here to get a graph showing the INCREASING trend in rainfall.

The report states:

2. Rainfall
While total rainfall on the Australian continent has been relatively stable …

[Figure: Australian annual rainfall anomalies with linear trend]

The fine print at the bottom left says: “Linear trend of 6.33mm/decade.”
Continue reading CSIRO and BoM Report

Cointegration Summary

It’s incredible that a global warming theory could agree with both the IPCC (a discernible anthropogenic influence) and the sceptics (low long-term risk from emissions), but there you are. The analysis of Beenstock suggests it is not the amount of greenhouse gases, particularly CO2, in the atmosphere that contributes to global warming, but the change in the amount. That is, when the rate of CO2 produced is increasing — as it was last century — this increases the global temperature. Conversely, if the rate of increase is constant, so is temperature.

[Figure: dCO2 and CRU temperature]
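The distinction is easy to demonstrate with a simulated sketch in R (my illustration, not Beenstock’s analysis):

```r
# If temperature responds to the change in CO2 (dCO2) rather than its
# level, temperature rises only while CO2 growth is accelerating, and
# flattens once the growth rate becomes constant. Simulated series.
set.seed(1)
growth  <- c(seq(0.3, 2.0, length.out = 70), rep(2.0, 30))  # accelerating, then constant
co2_sim <- 300 + cumsum(growth)                # the CO2 level climbs throughout
temp <- 0.3 * growth + rnorm(100, sd = 0.05)   # temperature tracks dCO2
plot(temp, type = "l")   # flattens after year 70, despite ever-rising CO2
```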

Continue reading Cointegration Summary

Vindication

A rash of stunning turnarounds has vindicated years of effort by climate sceptics. The day after ClimateGate broke, I made three predictions:

• Disband the entire Federal Department of Climate Change along with all the individual State Departments of Climate Change.

• Vote down the Emissions Trading Scheme legislation.

• Cancel Copenhagen.

Australia’s Department of Climate Change has been ‘watered down’ to become the Department of Climate Change, Energy Efficiency and Water. The ETS was voted down, and Copenhagen was such a net negative they are probably sorry they didn’t cancel it.

In another successful prediction, the end of drought in Australia came with a massive upswing in rainfall in 2010. This prediction was made using the EMD algorithm and the assumption of stationarity of rainfall, i.e. long-term oscillations with zero trend, in contrast to the non-stationary drying trend assumed by CSIRO climate models.

In another stunning vindication of Steve McIntyre, the Met Office is proposing to take over the global temperature data from the CRU. Steve has of course been railing for years about the sloppy, good-old-boys science in Jones’ department, and clearly the professionals agree with his assessment. Gladly, the proposal includes a transparent verification process.

In efforts that are long overdue, Lucia reports that various people are attempting to verify the absence of bias in the CRU surface dataset in various ways. Whatever the result, this can only be a good thing, and I hope it becomes a habit.

Continue reading Vindication

Polynomial Cointegration Rebuts AGW

Please discuss the new paper by Michael Beenstock and Yaniv Reingewertz here.

Way back in early 2006 I posted on an exchange with R. Kaufmann, whose cointegration modelling is referenced in the paper, entitled Peer censorship and fraud. He was complaining at RealClimate about the suppression of these lines of inquiry by the general circulation modellers. The post gives a number of examples that were topical at the time. ClimateGate bears it out.

Steve McIntyre wrote a long post on the affair here.

[R]ealclimate’s commitment to their stated policy that “serious rebuttals and discussions are welcomed” in the context that they devoted a post to criticize Ross and me and then refused to post serious responses. In this case, they couldn’t get away with censoring Kaufmann, but it’s pretty clear that they didn’t want to have a “serious” discussion online.

Continue reading Polynomial Cointegration Rebuts AGW

Antarctic Snowfall Data Visualisation

An issue in question here is whether the recent snowfall at Law Dome is unusually high relative to the 750-year record (and therefore, so the argument goes, probably due to AGW).

Below is the snowfall at Law Dome from the ice core. The top panel shows the actual snowfall, and the bottom panel shows the accumulation of the series minus its mean (using the R function cumsum), indicating where snowfall is above or below average.

[Figure: Law Dome snowfall (top) and cumulative departure from the mean (bottom)]
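The lower panel takes a couple of lines of R. Here ‘snow’ is a simulated placeholder, since the Law Dome series itself is not attached to this post:

```r
# Cumulative departure from the mean: rising segments mark runs of
# above-average snowfall, falling segments below-average runs.
set.seed(1)
snow <- rnorm(750, mean = 1, sd = 0.2)   # placeholder: 750 years of accumulation
cum_dep <- cumsum(snow - mean(snow))
plot(cum_dep, type = "l", xlab = "Year of record",
     ylab = "Cumulative departure from mean")
```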

Continue reading Antarctic Snowfall Data Visualisation

Disproving Global Warming II

The latest submission to arXiv:physics.ao-ph is entitled Interglacials, Milankovitch Cycles, and Carbon Dioxide by Gerald E. Marsh. Here is a review of the evidence regarding the timing of Termination II, the penultimate interglacial transition 140k years ago, and factors that may have caused it: CO2, Milankovitch induced insolation changes, or changes in solar magnetic flux, altering the Earth’s albedo through cosmic ray flux.

To appreciate the importance of this period, and a clear logical analysis of it, consider the recent lecture tour of Australia by Lord Monckton and Prof. Plimer. Lord Monckton argues strongly that climate sensitivity to CO2 is very low, too low to be of concern, and an increasing number of peer-reviewed papers using independent observational methods — Douglass, Lindzen, Spencer, Schwartz, Pinker, Shaviv — back him up. Prof. Plimer argues that the history of climate has been enormously variable, and not related to CO2 levels in the atmosphere.

This contrast of low sensitivity but high natural variation has prompted criticism on the irony of a tour by sceptics with contradictory viewpoints. As I understand their view, they maintain “the sensitivity of the climate to CO2 cannot be as low as suggested by these results because low sensitivity cannot explain the large glacial-interglacial transitions”. A solar cause for the penultimate transition has been scoffed at because the timing is wrong. It must have been a volcano or something that kicked off the chain of CO2 feedback that resulted in the warm interglacial.

Continue reading Disproving Global Warming II

Briffa McIntyre tree-rings etc

My comments on the topical ‘Yamal’ issue:

My AIG article demonstrating the reconstruction of a hockey stick from red noise neatly illustrated the possibility of circular reasoning in screening trees by their response to temperature. Around 20% of random series (or 40% if you count the inverted ones) correlate significantly with the instrumental temperature record of the last 150 years, and when averaged, their portions before the instrumental period create the straight handle of the stick.
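The effect can be reproduced in a few lines of R. This is a simplified sketch of the screening artifact, not the AIG article’s actual code, and the exact fraction screened in depends on the redness of the noise:

```r
# Generate red-noise "proxies", keep those that correlate significantly
# (and positively) with a rising pseudo-instrumental record over the
# last 150 years, and average the survivors: the screened average bends
# upward in the calibration period and flattens into a straight handle
# before it.
set.seed(42)
n_series <- 1000; n_years <- 1000; calib <- 851:1000
temp <- seq(0, 1, length.out = length(calib)) +
        rnorm(length(calib), sd = 0.2)           # rising pseudo-instrumental record
proxies <- replicate(n_series,
                     as.numeric(arima.sim(list(ar = 0.9), n_years)))  # red noise
keep <- apply(proxies, 2, function(p)
  cor(p[calib], temp) > 0 && cor.test(p[calib], temp)$p.value < 0.05)
mean(keep)                           # fraction screened in, of the order quoted above
recon <- rowMeans(proxies[, keep])
plot(recon, type = "l", xlab = "Year", ylab = "Screened red-noise average")
```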

Continue reading Briffa McIntyre tree-rings etc

Wish list for science journals

Many of my numerate readers will have read the account by Rick Trebino of Georgia Tech of the trials and tribulations of correcting an error in the public record of the peer-reviewed literature, and will have ideas of their own on what they would like to see.

Record your ideas for what you would like to see below. (I am on vacation on the Great Barrier Reef right now, so excuse the brevity; I am typing this from the resort.) My wish list is below.

1. Code and data allow replication
2. Reviewers can act as coaches, where appropriate
3. Journals dedicated entirely to review of others’ studies

Continue reading Wish list for science journals

Comment on McLean et al Submitted

Here is the abstract of our comment, submitted to Geophysical Research Letters today. Bob Tisdale is acknowledged as the source of the idea in the first paragraph. Let’s see how it goes. If you would like a copy, contact me via the form above.

Update: Now available from arXiv

Comment on “Influence of the Southern Oscillation on tropospheric temperature” by J. D. McLean, C. R. de Freitas, and R. M. Carter

David R.B. Stockwell and Anthony Cox

Abstract

We demonstrate an alternative correlation between the El Nino Southern Oscillation (ENSO) and global temperature variation to that shown by McLean et al. [2009]. We show 52% of the variation in RATPAC-A tropospheric temperature (and 59% of HadCRUT3) is explained by a novel cumulative Southern Oscillation Index (cSOI) term in a simple linear regression model and 65% of RATPAC-A variation (67% of HadCRUT3) when volcanic and solar effect terms are included. We review evidence from physical and statistical research in support of the hypothesis that accumulation of the effects of ENSO can produce natural multi-decadal warming trends. Although it is not possible to reliably determine the relative contribution of anthropogenic forcing and SOI accumulation from multiple regression models due to collinearity, these results suggest a residual accumulation of around 5 ± 1% and up to 9 ± 2% of ENSO-events has contributed to the global temperature trend.
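For readers who want to see the shape of the cSOI regression described in the abstract, here is a minimal R sketch on simulated placeholder series; the real analysis uses monthly SOI with RATPAC-A and HadCRUT3 anomalies and adds volcanic and solar terms.

```r
# The novel term is simply the running sum of the SOI entered into a
# linear regression. All series here are simulated placeholders.
set.seed(1)
n    <- 600                                       # placeholder: 50 years, monthly
soi  <- as.numeric(arima.sim(list(ar = 0.5), n))  # placeholder SOI series
csoi <- cumsum(soi)                               # cumulative SOI term
temp <- -0.002 * csoi + rnorm(n, sd = 0.1)        # placeholder temperature anomalies
fit  <- lm(temp ~ csoi)                           # simple linear regression with cSOI
summary(fit)$r.squared                            # cf. the 52-59% reported above
```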

Continue reading Comment on McLean et al Submitted

Comedy Synthesis Report

The revision of the Copenhagen Synthesis Report was advertised at the ANU Climate Change Institute, directed by Prof. Will Steffen. But they just can’t seem to get it right. The ANU web site refers to Stefan Rahmstorf as Stefan Rahmonstorf.


Ian Castles, on July 5th, 2009, compiled the list of amendments of errors. Below is an update of the current situation.

Continue reading Comedy Synthesis Report

Renewable Energy Uneconomic and Ecologically Dangerous

Renewable energy is a nice idea, but Peter Lang crunches the numbers and finds solar and wind power are crushingly expensive, do little for greenhouse gas reduction, and are ecologically dangerous. Cap and trade is actually a giant scheme to tax and redistribute, for the benefit of political insiders.

A letter submitted by Peter Lang argues that the numbers prove nuclear power is the only way.

Solar realities: Solar power is uneconomic. The capital cost of solar power would be 25 times more than nuclear power to provide for demand. The minimum power output, not peak or average, is the main factor governing solar power’s economic viability. The least-cost solar option would emit 20 times more CO2 (over the full life cycle) and use at least 400 times more land area compared with nuclear. Government mandates and subsidies hide the true cost of renewable energy.

Wind realities: Wind power does not avoid significant amounts of greenhouse gas emissions. Wind power is a very high-cost way to avoid greenhouse gas emissions. Wind power, even with high capacity penetration, cannot make a significant contribution to reducing greenhouse gas emissions.

Nuclear power is the least-cost, low-emission electricity generation technology that can provide the large amounts of electricity needed to power modern economies.

Continue reading Renewable Energy Uneconomic and Ecologically Dangerous

A semi-empirical approach to sea level rise

Published in Science, this Rahmstorf 2007 article provides a high-end estimate of sea level rise of over a meter by the end of the century (a rate of 10 mm/yr). Linear extrapolation puts the rate of increase at only 1.4 mm or 1.7 mm per year, depending on start date (1860 or 1950).

The paper was followed by two critical comments, both bashing the statistics, and these are attached to the link above. Rahmstorf replied to those comments. The issues raised (significance, autocorrelation, etc.) are familiar to readers of this, CA, Lucia, and other statistical blogs, and are worth a read.

Worthwhile as the comments are, they do not look into the problem of the end-treatment used by Rahmstorf, and I look at that here.

All of the papers projecting these high-end rates depend on the assumption of recent ‘acceleration’ in sea levels; that is, they depend on the rate of increase getting faster and faster.

The Rahmstorf 2007 paper uses the smoothing method most recently savaged at CA here, where it was shown, despite all the high-falutin’ language, to be equivalent to a simple triangular filter of length 2M, padded with M points of slope equal to the last M points. My main concern is that at this crucial end-section the data has been duplicated by the padding, effectively increasing the number of data points of very high slope.
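Here is my reading of that end-treatment as an R sketch (not Rahmstorf’s own code): pad the series with M points continuing the slope of the last M points, then apply a symmetric triangular filter, so the duplicated end values feed directly into the smoothed endpoint.

```r
# Triangular smoothing with end padding at the slope of the last M points.
smooth_padded <- function(x, M) {
  slope <- coef(lm(tail(x, M) ~ seq_len(M)))[2]  # slope of the last M points
  xp <- c(x, x[length(x)] + slope * seq_len(M))  # M padded points
  w  <- c(1:M, (M - 1):1); w <- w / sum(w)       # triangular weights
  sm <- stats::filter(xp, w, sides = 2)          # centred weighted average
  as.numeric(sm)[seq_along(x)]                   # smoothed values, original span only
}

sea <- cumsum(rnorm(150, mean = 1))              # placeholder sea level series
plot(sea, type = "l")
lines(smooth_padded(sea, 15), col = "red")       # note the padded, high-slope end
```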

The figure below shows a replication of the Rahmstorf smoothing with and without padding (moved down for clarity) (code below). Two sea level data sets are shown: one by Church, “A 20th century acceleration in global sea level rise” (used in Rahmstorf; data available from CSIRO here), and another by Jevrejeva, “Recent global sea level acceleration started over 200 years ago?” (data here).

It should be noted that this data ends in 2001-02, a truncation bound to maximize recent temperature increases.

[Figure: Rahmstorf smoothing with and without end padding]

Continue reading A semi-empirical approach to sea level rise

Preprint on climatic regime shifts

Download: Structural break models of climatic regime-shifts: claims and forecasts

Anthony asked if it would be difficult to statistically justify the breaks between 1976 and 1979 proposed by Quirk (2009) for Australian temperature. He has an interest in oceanographic regime-shifts and climate change. Sure, I said: a simple Chow test.

We ended up rebutting the Easterling & Wehner (2009) claim that describing temperatures since 1998 as declining is ‘cherry picking’, finding that a major regime shift occurred in 1997, which statistically justifies the use of 1997 as a starting point for temperature trends.
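For readers who want to try it, here is a minimal Chow test in R on a simulated series with a step in 1997; the paper itself applies the test to the actual temperature series.

```r
# Chow test: compare one regression over the whole series against
# separate regressions either side of a candidate break year.
chow_p <- function(y, t, brk) {
  full <- lm(y ~ t)
  pre  <- lm(y ~ t, subset = t <= brk)
  post <- lm(y ~ t, subset = t > brk)
  rss1 <- sum(residuals(full)^2)
  rss2 <- sum(residuals(pre)^2) + sum(residuals(post)^2)
  k <- 2; n <- length(y)
  Fstat <- ((rss1 - rss2) / k) / (rss2 / (n - 2 * k))
  pf(Fstat, k, n - 2 * k, lower.tail = FALSE)  # small p supports a break
}

set.seed(1)
t <- 1950:2009
y <- 0.005 * (t - 1950) + 0.3 * (t > 1997) + rnorm(60, sd = 0.1)  # placeholder series
chow_p(y, t, 1997)
```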

A regime-shift based temperature forecast follows logically from identification of significant breaks. Our paper, “Structural break models of climatic regime-shifts: claims and forecasts“, has been submitted to the International Journal of Forecasting, and is downloadable from arXiv.


Continue reading Preprint on climatic regime shifts

Recent Climate Observations: Disagreement With Projections

Appearing in Energy and Environment (ee-20-4_7-stockwell2) is a note by me on a paper by IPCC lead authors Rahmstorf, S., Cazenave, A., Church, J.A., Hansen, J.E., Keeling, R.F., Parker, D.E., and R.C.J. Somerville, “Recent climate observations compared to projections”, published in Science in 2007.

As shown by 102 citations in Google Scholar already, Rahmstorf et al 2007 has been one of the main references for alarmist calls to action because the “climate system is responding more quickly than the climate models indicate”. Taking the first one off Google:

The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are …

Howden, S.M., J.-F. Soussana, F.N. Tubiello, N. Chhetri, et al., “Adapting agriculture to climate change”, Proceedings of the National Academy of Sciences, 2007.

Respected online authors like Peter Gallagher, Mark Lawson and Lucia were concerned with the paper. Lucia attacked the ‘slide and eyeball’ approach. I engaged with Rahmstorf at RealClimate and wrote a number of articles on the uncertainty, until he told me in effect to ‘sod off and publish’. But rather than my trying to diagnose a sloppy methodology and be ignored, time and evidence have done the job instead. Here is my abstract.

Abstract: The non-linear trend in Rahmstorf et al. [2007] is updated with recent global temperature data. The evidence does not support the basis for their claim that the sensitivity of the climate system has been underestimated.

It’s gratifying to read that the authors of the Copenhagen Synthesis Report do not seem to agree with Rahmstorf et al 2007 either, in reference to analysis in a figure that ostensibly used the same method as Rahmstorf et al 2007.

Figure 3 … shows the long-term trend of increasing temperature is clear and the trajectory of atmospheric temperature at the Earth’s surface is proceeding within the range of IPCC projections.

[Figure 3 from the Copenhagen Synthesis Report]

Continue reading Recent Climate Observations: Disagreement With Projections