Copenhagen a failure? Think again.

Attributed to NEIL BROWN, December 26, 2009

UNLIKE most people, who think Copenhagen was a failure, I think it was a great success. It has preserved the golden rule of international diplomacy.

Years ago, when I was a young fellow and started to go to international conferences, an old hand who was about to retire took me aside. “I’ll be shoving off into retirement soon,” he said, “so I thought I might pass on the golden rule of international conferences.”

I was fascinated. I was sure he would say I should always pursue noble objectives, lift up the downtrodden masses of Africa and Asia, stamp out disease and poverty and generally bring peace to the world. Alas, no.

“The most important item on the agenda at any international conference,” he resumed, “is to fix the date of the next meeting – and of course the location.”

However – and it was a big however – if a conference succeeded in wiping out poverty or pestilence, there would be no prospect of trying to go to another conference the following year on the same subject. Concentrating on the date of the next conference would guarantee poverty and pestilence would still be there the next year and would provide the excuse for another year’s travel, entertainment, spending other people’s money, passing pious resolutions and generally being self-important, all of which are the only reasons for being in politics or diplomacy.

I soon learnt that seasoned players on the international scene knew very well the vital importance of the golden rule. For example, Sir Owen Dixon told me that when he was appointed the first UN troubleshooter on Kashmir, he went to New York to recruit an assistant. Someone recommended a young man in the UN building who, believe it or not, actually had the job description “to bring peace to the world”.

“Do you like your job?” Sir Owen asked. “Well, at least it’s permanent,” he replied. This young man, who had a stellar career at the UN thereafter, was right, because the intractable problem of Kashmir has, by definition, still not been solved and in the intervening years has provided immense fodder for studies, working groups, theses, working breakfasts, dinners, lunches and, of course, international conferences.

Statesmen and politicians did not achieve all of this by solving the problem of Kashmir; they did it by failing and by making sure that next year the crisis would be the gift that keeps on giving.

There was also a secret protocol to the golden rule, shielded from the prying eyes of the public as far as possible – to make sure that the location of the conference was somewhere nice, for example, fleshpots such as Casablanca, holiday spots such as Bali or ski resorts such as Davos.

The proof of the pudding is, of course, in the eating. Thus, despite the fact that almost everyone says that Copenhagen was a success because it narrowly avoided being a failure, the cognoscenti know it was a great success because it was such an appalling failure.

First, climate change is as bad as it ever was. None of the weasel words of progress and achievement about keeping temperatures down can conceal the good news that the whole thing was a disaster.

If the alarmists are right, climate change can only get worse. If they are wrong, the issue still is likely to have such life in it that it could last as long as Kashmir before the truth dawns.

Second, since Kyoto and again since Bali, we were told incessantly that Copenhagen was the last chance to prevent the world being plunged into a watery grave. Everyone was going to Copenhagen in the belief that it was a last chance to save the planet.

When I heard this, I mourned for the international political and diplomatic brotherhood of which I was once a part; they clearly were not going to be able to stretch climate change beyond Copenhagen as the excuse for more conferences, new taxes, tougher and more complicated laws and the perpetual extortion of money from poor workers in rich countries to rich kleptomaniacs in poor countries that foreign aid has become. Some other issue would have to be found.

Fortunately, this has turned out not to be the case. Mercifully, climate change will be there for at least another year to take its vengeance on a profligate and decadent world. It will provide the excuse for conferences next year and for years beyond. So also is the secret protocol intact: conferences on climate change will never be held anywhere near Darfur or Bangladesh.

Neil Brown is a barrister and former member of Federal Parliament, h/t to Geoff Sherrington.

Page-Proofs of the DECR Paper

Corrected the page-proofs of my drought paper today.

CRITIQUE OF DROUGHT MODELS IN THE AUSTRALIAN DROUGHT EXCEPTIONAL CIRCUMSTANCES REPORT (DECR)

ABSTRACT
This paper evaluates the reliability of modeling in the Drought Exceptional Circumstances Report (DECR) where global circulation (or climate) simulations were used to forecast future extremes of temperatures, rainfall and soil moisture. The DECR provided the Australian government with an assessment of the likely future change in the extent and frequency of drought resulting from anthropogenic global warming. Three specific and different statistical techniques show that the simulation of the occurrence of extreme high temperatures last century was adequate, but the simulation of the occurrence of extreme low rainfall was unacceptably poor. In particular, the simulations indicate that the measure of hydrological drought increased significantly last century, while the observations indicate a significant decrease. The main conclusion and purpose of the paper is to provide a case study showing the need for more rigorous and explicit validation of climate models if they are to advise government policy.

Meanwhile, scientists are finding new ways to communicate worthless forecasts to decision makers.

These models have been the basis of climate information issued for national and seasonal forecasting and have been used extensively by Australian industries and governments. The results of global climate models are complex, and constantly being refined. Scientists are trialling different ways of presenting climate information to make it more useful for a range of people.

Conducting professional validation assessment of models would be a start, followed by admitting they are so uncertain they should be ignored.

Continue reading Page-Proofs of the DECR Paper

Watts Tour at Emerald

Anthony’s Tour continues at a breakneck pace this week — with only four venues to go.

The talks at Emerald that I organized went quite well, considering this is a small regional town. About 80-100 people attended a teaser session at the Property Rights Australia meeting during the day, and around 40 attended at night. We got a standing ovation during the day — the first time for me! The crowd was a mixture of ages and sexes, and I think the messages about bureaucratic sloth and opportunism resonated with them. Central Queensland turned on one of its trademark sunsets for Anthony:

It was good to spend a bit of time with Anthony and catch up on the goss — well not really gossip, but about bloggers and the people behind the curtain. You know how it is, you tend to get a certain view of the people involved, but when you learn more about them, it turns out they are just regular people who put their hand up for something they believe in.

Continue reading Watts Tour at Emerald

Niche Logic

The ‘strongest male’ is itself a highly variable component.

How do we formalise this as a niche? As a preamble: all we have, really, are observations. To put niches into a statistical framework, we only have the expected distributions of those observations (both singly and jointly). Selection (either natural or through our study design) changes the distribution of features, and we observe those changes.

For example, if the sample of breeding males is generally taller than the male population as a whole, then we could presume there is selective pressure on this feature — an important item of information. This could be detected as statistically significant (e.g. the distributions differ in a Chi-squared test), as in the sketch below.
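
A minimal R sketch of this kind of check, using made-up data: bin a selected sample into height classes defined by the population quartiles and run a Chi-squared goodness-of-fit test.

set.seed(1)
population <- rnorm(10000, mean = 170, sd = 10)   # heights of all males (hypothetical)
selected   <- rnorm(200,   mean = 175, sd = 10)   # breeding males, shifted by selection

breaks <- quantile(population, probs = seq(0, 1, 0.25))  # four height classes
breaks[1] <- -Inf; breaks[5] <- Inf
expected <- rep(0.25, 4)                          # proportions expected under no selection
observed <- table(cut(selected, breaks))          # counts observed in the selected sample

chisq.test(observed, p = expected)                # a small p-value indicates the distributions differ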

Continue reading Niche Logic

Niche Theory

A couple of questions from the last nichey post prompted this post. Geoff said that:

I’m not even sure what is meant by an optimal environment for a species/genus/whatever.

while Andrew said that:

it wouldn’t surprise me if a lot of species tend to live at the margins of their “ideal” habitat.

We need a bit of abstraction to address these questions. In a laboratory, a plant would be expected to show a humped response to the main variables of temperature and water availability. The parameterisation of this function can be termed the ‘fundamental niche’ of the species, and may be equated with a physiochemical optimum unaffected by competition.
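
As a rough sketch of the abstraction, the fundamental niche could be written as a humped (here Gaussian) response surface over the two variables; the optima and tolerances below are made-up parameters, not measured values.

niche <- function(temp, water,
                  t.opt = 20, t.tol = 5,          # temperature optimum and tolerance
                  w.opt = 50, w.tol = 15) {       # water availability optimum and tolerance
  exp(-0.5 * ((temp - t.opt) / t.tol)^2) *
    exp(-0.5 * ((water - w.opt) / w.tol)^2)
}

temp  <- seq(0, 40, length = 50)
water <- seq(0, 100, length = 50)
z <- outer(temp, water, niche)                    # performance surface: the humped response
contour(temp, water, z, xlab = "Temperature", ylab = "Water availability")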

Continue reading Niche Theory

On the Use of the Virial Theorem by Miskolczi

Virial Paper 6_12_2010 submitted by Adolf J. Giger.

Allow me to make some more comments on the Virial Theorem (VT) as used by Ferenc Miskolczi (FM) for the atmosphere.

As I said on this blog back in February, a very fundamental derivation of the VT was made by H. Goldstein in Section 3-4 of “Classical Mechanics”, 1980, Ref.[1] : PE= 2*KE (potential energy=2 x kinetic energy). Then, he also derives the Ideal Gas Law (IGL), P*V = N*k*T as a consequence of the VT, and shows that PE=3*P*V and KE=(3/2)*N*k*T. The two laws, IGL and VT, therefore are two ways to describe the same physical phenomenon. Despite its seemingly restrictive name, we know that the IGL is a good approximation for many gases, monatomic, biatomic, polyatomic and even water vapor, as long as they remain very dilute. Goldstein’s derivations are made for an enclosure of volume V with constant gas pressure P and temperature T in a central force field like the Earth’s gravitational field. They also hold for an open volume V anywhere in the atmosphere. As to FM, he points out that the VT reflects the fact that the atmosphere is gravitationally bounded.

Ferenc Miskolczi in his papers [2,3] relates the total potential energy of the atmosphere, PEtot, to the total IR upward radiation Su at the surface. This relationship has to be considered a proportionality rather than an exact equality, or Su = const*PEtot. We see that this linkage makes sense since Su determines the surface temperature Ts through the Stefan-Boltzmann law, Su = (5.6703/10^8)*Ts^4, and finally the IGL ties together Ts, P(z=0) and PEtot.
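
As a worked example of the quoted relation, taking a representative global mean surface temperature of 288 K (an assumption for illustration, not a number from the paper):

sigma <- 5.6703e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
Ts    <- 288              # a representative global mean surface temperature, K
Su    <- sigma * Ts^4     # upward surface IR flux
Su                        # about 390 W/m^2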

FM then assigns the kinetic IR energy KE (temperature) in the atmosphere to the upward atmospheric IR emittance Eu, or Eu=const*KE. The flux Eu is made up of two terms, F + K, where F is due to thermalized absorption of short wave solar radiation in atmospheric water vapor, and K is due to heat transfers from the Earth's surface to air masses and clouds through evaporation and convection. Neither F nor K is directly radiated from the Earth's surface; they represent radiation from the atmosphere itself. There is an obvious limitation to such an assignment, mainly because for the VT, or the IGL in general, the temperature (the KE) has to be measured with a thermometer, whereas Eu represents a radiative temperature (flux) that has to be measured with a radiometer. These two measurements can give vastly different results, as we see in the two following extreme cases:

In between these two extremes we have the Earth, where FM's version of the VT, Su = 2*Eu, applies reasonably well. We will see next, in a discussion of FM's exact solution, how closely and for what types of atmospheres FM's VT (Eu/Su = 0.5) holds, but we can say already that no physical principle is violated if it doesn't. The VT that always holds for gases is not being violated; it is simply not fully captured by FM's fluxes, which have to be measured by radiometers. This may be an indication that the VT is less important for FM's theory than normally assumed.

On the other hand, the IPCC assumes a positive water vapor feedback and arrives at very imprecise predictions for the Climate Sensitivity ranging from 1.5 to 5K (and even more). It is clear that this wide range of numbers is caused by the assumed positive feedback system, which apparently is close to instability (or singing, as the electrical engineer would call it in an unstable microphone-loudspeaker system). With such large uncertainties in their outputs true scientists should be reluctant to publish their results.

Continue reading On the Use of the Virial Theorem by Miskolczi

No evidence of global warming extinctions

My rebuttal of Thomas’ computer models of massive species extinctions has been mentioned in a statement by Sen. Orrin G. Hatch before the United States Senate, on June 10, 2010.

1. Stockwell (2000) observes that the Thomas models, due to lack of any observed extinction data, are not ‘tried and true,’ and their doctrine of ‘massive extinction’ is actually a case of ‘massive extinction bias.’

[Stockwell, D.R.B. 2004. Biased Toward Extinction, Guest Editorial, CO2 Science 7 (19): http://www.co2science.org/articles/V7/N19/EDIT.php]

The one extinct species mentioned in the Thomas article is now thought to have fallen victim to the 1998 El Nino.

Continue reading No evidence of global warming extinctions

New Miskolczi Manuscript

Ferenc sent out reprints of his upcoming manuscript, and graciously acknowledges a number of us for support, help and encouragement. I particularly like the perturbation and statistical power analysis, checking that a change in the greenhouse effect due to CO2 would likely have been detected if it had been present in the last 61 years.

The Stable Stationary Value of the Earth's Global Average Atmospheric Planck-weighted Greenhouse-Gas Optical Thickness
by Ferenc Miskolczi,
Energy & Environment, 21:4 2010.

ABSTRACT
By the line-by-line method, a computer program is used to analyze Earth atmospheric radiosonde data from hundreds of weather balloon observations. In terms of a quasi-all-sky protocol, fundamental infrared atmospheric radiative flux components are calculated: at the top boundary, the outgoing long wave radiation, the surface transmitted radiation, and the upward atmospheric emittance; at the bottom boundary, the downward atmospheric emittance. The partition of the outgoing long wave radiation into upward atmospheric emittance and surface transmitted radiation components is based on the accurate computation of the true greenhouse-gas optical thickness for the radiosonde data. New relationships among the flux components have been found and are used to construct a quasi-all-sky model of the earth's atmospheric energy transfer process. In the 1948-2008 time period the global average annual mean true greenhouse-gas optical thickness is found to be time-stationary. Simulated radiative no-feedback effects of measured actual CO2 change over the 61 years were calculated and found to be of magnitude easily detectable by the empirical data and analytical methods used. The data negate increase in CO2 in the atmosphere as a hypothetical cause for the apparently observed global warming. A hypothesis of significant positive feedback by water vapor effect on atmospheric infrared absorption is also negated by the observed measurements. Apparently major revision of the physics underlying the greenhouse effect is needed.

Continue reading New Miskolczi Manuscript

Problem 4: Why has certainty not improved

Problem 4. Why has a community of thousands or tens of thousands of climate scientists not managed to improve certainty in core areas in any significant way in more than a decade (eg the climate sensitivity caused by CO2 doubling as evidenced by little change in the IPCC bounds)?

This problem has been the hardest, probably because it takes enormous hubris to claim a solution to a problem that defeats thousands of usually intelligent people. One man who does is Dr Roy Spencer, who claims a huge 'blunder' pervades the whole of climate science regarding the direction and magnitude of ocean-cloud feedback, the subject of his upcoming book and paper.

What I want to demonstrate is one of the issues that is almost totally forgotten in the global warming debate: long-term climate changes can be caused by short-term random cloud variations.

The main reason this counter-intuitive mechanism is possible is that the large heat capacity of the ocean retains a memory of past temperature change, and so it experiences a “random-walk” like behavior. It is not a true random walk because the temperature excursions from the average climate state are somewhat constrained by the temperature-dependent emission of infrared radiation to space.

As I showed previously, an AR coefficient of 0.99 is sufficient to change random walk behavior (AR = 1) into the kind of mean-reverting behavior his model shows. This difference is virtually undetectable using the usual tests on the available 150 years of global temperature data. Global temperature cannot be a true random walk, but it can be 'almost a random walk'. It can also respond to random shocks, such as volcanic eruptions, sudden injections of GHGs and oscillating solar forcings, while still retaining its near-random-walk character.
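
A minimal sketch of that point in R (assuming the tseries package for the ADF test): on roughly 150 points, a true random walk and an AR(1) series with coefficient 0.99 are generally indistinguishable to the test.

library(tseries)                                            # for adf.test; an assumption of this sketch

set.seed(2)
n  <- 150                                                   # roughly the length of the instrumental record
e  <- rnorm(n)
rw <- cumsum(e)                                             # true random walk (AR coefficient = 1)
ar <- stats::filter(e, filter = 0.99, method = "recursive") # near-unit-root AR(1), coefficient 0.99

adf.test(rw)   # a unit root is not rejected
adf.test(ar)   # typically also not rejected at this length, despite the mean reversion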

Continue reading Problem 4: Why has certainty not improved

What is Scepticism?

Sceptic – from the Greek skeptikos, one who reflects upon, from skeptesthai, to consider.

Scepticism is variously described as a doubting or questioning attitude, the creative use of one's own mind, or even a demand for physical evidence in order to be convinced (especially when this demand is out of place).

To consider carefully with regard to evidence is the professional way to approach public affairs, or matters of considerable importance — the opposite of sloppy, credulous, reckless or even foolhardy behavior.

A sceptic is painfully aware that the casual believer is punished when reality is understated and underestimated.

-oil spill greater than first estimated
-Euro zone bailout underestimated
-USD rise underestimated
-oil’s fall underestimated
-TARP underestimated
-unemployment understated
-recession length underestimated
-volcano impact underestimated
-gold’s persistence understated
-housing slump underestimated

Continue reading What is Scepticism?

Lessons from La-La Land

Is the anthropogenic climate change controversy just an episode in an ongoing drama, and if so what is the main theme?

The Catallaxy Files, in RSPT in la la land, reviews a number of biting newspaper editorials on the Resource Super Profits Tax (RSPT), providing a possible answer once a few words are replaced.

Funny thing is, Garimpeiro does not think the government and its Treasury [IPCC] advisers actually know that they have been practising deceptions. It’s more a case of them not having an even basic understanding of the industry [climate system] they are tinkering with in a big way.

Which is scarier? That they do know what they are doing, but are being misleading and deceptive to achieve a given goal (call that the Noble Lie Hypothesis)? Or that they don't know what they are doing and are making it up as they go (the Incompetence Hypothesis)?

Continue reading Lessons from La-La Land

Problem 2 of Climate Science

Problem 2. Cointegration was developed in economics to deal with a problem of spurious correlation between series with stochastic trends. Why should spurious correlation be a concern if the trends in temperature and GHGs are deterministic?

Sometimes I've been accused of over-simplifying, but I do try to make models as simple as possible, because it avoids a lot of speculation. With that in mind, this simple model reproduces the paradoxical features of unit roots. Even if there were a deterministic relation between temperature and CO2, the correlation would still be spurious.

This simple model shows why temperature can look like a random walk but not go off to infinity.

The model simply includes a reaction to disequilibrium, like a half-pipe – that’s it.

There are a few ways you could get to this model. You could assume a restoring 'force' — a second derivative — is a linear function of the anomaly temperature T (the difference from equilibrium). Or you could invoke the Stefan-Boltzmann law, which states that radiant flux increases with the fourth power of temperature.

The model consists of a restoring effect f, some fraction of T, acting against the remembered 'shocks' e (the R code below actually uses a non-linear, cubic restoring effect, b*T^3). With the linear form f = b*T, the simplification follows:

T(t+1) = T(t) + e - b*T(t)
T(t+1) = (1-b)*T(t) + e

Without f (that is, b = 0), T would be a simple random walk RW (black). With f (b = 0.01 in my model, shown in green), the series T (red) has a slight restoring effect.

fig1

RW: Dickey-Fuller = -3.0692, Lag order = 4, p-value = 0.1337
T:  Dickey-Fuller = -2.9767, Lag order = 4, p-value = 0.1721

It doesn't take many runs to find one that looks like the current global temperature record, and the random walk and T are almost identical over 100 points. Moreover, adf.test does not reject the presence of a unit root in T (i.e. T appears non-stationary).

However, their real difference becomes clear with more data. At 1000 points the random walk (RW) is unbridled, while T, although it drifts off, eventually crosses zero again (it is mean-reverting).

fig2

RW: Dickey-Fuller = -2.4645, Lag order = 9, p-value = 0.3817
T:  Dickey-Fuller = -3.7678, Lag order = 9, p-value = 0.02060

The adf test now correctly shows that T is stationary, as shown by the low p-value, while a unit root in the RW is not rejected. The series T contains a 'fractional unit root' of 0.99 instead of one. This difficult-to-detect difference is enough to guarantee the series reverts to the mean.

The probability distribution is also instructive. The histogram of temperatures has a broad, flat distribution (red), as would be expected from 'surfing the half-pipe' created by the restoring effect (shown in green).

fig3

Moreover, if you squint you can see periodicity in the series. This is because the series hangs around the upper or lower limit for a while, before eventually moving back. The periodicity is only an illusion though. Could this be the origin of the PDO?

Also, it’s not hard to see a ‘break’ in the series in the first figure — another illusion.

The distribution of these temperatures is in a broad and flat ‘niche’. Within that ‘niche’ the series is relatively unconstrained, like a random walk, but responding deterministically to small forcings (like CO2?). It is only with more data that the half-pipe is apparent.

It seems to me that the problem for the deterministic paradigm is that, even if CO2 increases temperature deterministically, the relationship breaks down as the temperature hits the sides and encounters resistance from the non-linear response.

At most, as CO2 keeps increasing, temperature would stay pinned to the side of the bowl, surfing up and down at the limit. When locked into the deterministic view, you would be wondering ‘where is the heat going‘ as extrapolation from increasing CO2 fails.

Global temperature can look like a unit root process for all practical purposes over the last 150 years, but even a small negative feedback added to a random walk provides physically realistic behaviour.

R Code

library(tseries)                        # for adf.test

sim1 <- function() {
  l <- 1000
  plot(rep(0, l), ylab = "Anomaly", type = "l", ylim = c(-10, 10))
  a <- 0; b <- 0.01                     # b sets the strength of the restoring effect
  v <- NULL; d <- NULL
  r <- rnorm(l, sd = 0.2)               # the random 'shocks' e
  rw <- cumsum(r)                       # pure random walk, for comparison
  for (i in r) {
    x <- b * a^3                        # non-linear (cubic) restoring effect
    a <- a + i - x                      # accumulate the shock minus the restoring effect
    v <- c(v, x)
    d <- c(d, a)
  }
  lines(rw)                             # random walk (black)
  lines(d, col = 2)                     # restored series T (red)
  lines(v, col = 3)                     # restoring effect (green)
  print(adf.test(rw))
  print(adf.test(d))
}

Problem 5 of Climate Science

Problem 5. Why do most of the forecasts of climate science fail?

If climate science had a history of accurate forecasts, it would have a foundation for greater credibility. That is what is expected in other fields. Instead, it is “denialist” to say that climate science has a lousy record of predictions.

When I started analysing ecological models in my doctoral studies, it wasn’t ideologically unsound to say that the models did a lousy job, and I spent 3 years trying to work out why. Wouldn’t you think that something could be learned by diagnosing why predictions fail, and coming up with solutions?

What do these examples have in common?

Example 1. Arctic Ice
June 5th, 2009: On Climate Progress, NSIDC director Serreze explains the "death spiral" of Arctic ice (and the "breathtaking ignorance" of blogs like WattsUpWithThat).
April 4, 2010: Dr. Serreze said this week in an interview with The Sunday Times: “In retrospect, the reactions to the 2007 melt were overstated.” (not breathtakingly ignorant?)

Example 2. Australian Drought
Sept 8th 2003: Dr James Risbey, Monash University: “That means in southern Australia we’d see more or less permanent drought conditions…”
Jan 2009-10: Less permanent drought conditions …

Continue reading Problem 5 of Climate Science

Problem 3 of Climate Science

Problem 3. Why is the concept of ‘climate’ distinguished from the concept of ‘weather’ by an arbitrary free parameter, usually involved in averaging or smoothing or ’scale’ transformations of 10 to 30 years?

The recent article on Question #9 by Meiers and response by Stephen Goddard used a coin toss analogy to answer this question. Meiers states that while the uncertainty of the probability of heads in the short term is high, over the long term we expect the answer to become more certain as we average over more examples. This, he says, is an analogy to weather and climate, and why climate models can be poor at predicting the immediate future, yet can be reliable in the more distant future.

A series containing a unit root contradicts this intuition.

To illustrate, I have developed an Excel spreadsheet, DStrends, with a deterministic and a stochastic trend (call them I(0) deterministic and I(1) stochastic). It generates a new set of random numbers and trends every time it is reset.

Given a 'coin toss' Y(t) at time t, taking the values -1, 0 and 1 with equal probability, the two series are given by:

D(t) = 0.01t + Y(t)
S(t) = 0.01 + Y(t) + S(t-1)

There is an underlying trend of 0.01t in each series. The two series are quite different, as can be seen in the figure below. While the deterministic series shows a slight upward trend, in the series with the unit root the trend is drowned out by the variability.

DStrends

The uncertainty in each series can be seen by plotting the standard error of the mean of each series for increasing observations. The standard error is given by the standard deviation divided by the square root of the number of observations, and represents the uncertainty in the estimation of the mean value.

stderr

The standard error of the deterministic series starts high and decreases. With a low trend (0.01 per year) the uncertainty will keep decreasing. With higher trends (0.1 per year) there will be a minimum around thirty observations (years?) before increasing again as the trend becomes more important.

The standard error of the stochastic series is constant over the range of observations (run the spreadsheet a number of times for yourself). Constant uncertainty is theoretically predicted for a series with a unit root, no matter what scale we look at the observations on.
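
The same experiment can be sketched in a few lines of R (the series definitions follow the spreadsheet above; the variable names are mine):

set.seed(3)
n <- 100
Y <- sample(c(-1, 0, 1), n, replace = TRUE)     # the three-valued "coin toss"
D <- 0.01 * (1:n) + Y                           # deterministic trend plus noise
S <- cumsum(0.01 + Y)                           # stochastic trend (unit root)

sem <- function(x, t) sd(x[1:t]) / sqrt(t)      # standard error of the mean of the first t points
tt <- 5:n
semD <- sapply(tt, sem, x = D)
semS <- sapply(tt, sem, x = S)
plot(tt, semD, type = "l", ylim = range(semD, semS),
     xlab = "Observations", ylab = "Std error of mean")
lines(tt, semS, col = 2)                        # stays roughly constant for the unit root series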

This illustrates clearly that the notion of uncertain ‘weather’ and more certain ‘climate’ only makes sense for a deterministic series (as shown by the local minimum of uncertainty around 30 observations). There is no sense in a weather/climate distinction when a unit root is present as there is no scale over which estimates of the mean become less uncertain.

So why is there a distinction between weather and climate? The answer is because of the assumption that global temperature is a deterministic trend. However, all the empirical evidence points to the overwhelming effects of a unit (or near unit) root in the global temperature series, which means the deterministic trend is a false assumption.

Dr Meiers proposed a bet which illustrates his erroneous assumption:

How can a more complex situation be modeled more easily and accurately than a simpler situation? Let me answer that with a couple more questions:

1. You are given the opportunity to bet on a coin flip. Heads you win a million dollars. Tails you die. You are assured that it is a completely fair and unbiased coin. Would you take the bet? I certainly wouldn't, as much as it'd be nice to have a million dollars.

2. You are given the opportunity to bet on 10000 coin flips. If heads comes up between 4000 and 6000 times, you win a million dollars. If heads comes up less than 4000 or more than 6000 times, you die. Again, you are assured that the coin is completely fair and unbiased. Would you take this bet? I think I would.

If the question were, "Is the coin biased?" and the observable (like temperature) were the final value of an integrating I(1) variable, then Dr Meiers' bet after 10000 flips would actually be very unwise, as the value of the observable (positive or negative) after 10000 flips is no more certain than after 1 or 100.

Continue reading Problem 3 of Climate Science

Problem 1 of Climate Science

Problem 1. If temperature is adequately represented by a deterministic trend due to increasing GHGs, why be concerned with the presence of a unit root?

Rather than bloviate over the implications of a unit root (integrative behavior) in the global temperature series, a more productive approach is to formulate an hypothesis, and test it.

A deterministic model of global temperature (y) and anthropogenic forcing (g) with random errors e is:

y(t) = a + b*g(t) + e(t)

An autoregressive model of the changes in temperature, Δy(t), uses a difference equation with a deterministic trend b*g(t-1) and the previous value of y, y(t-1):

Δy(t) = b*g(t-1) + c*y(t-1) + e(t)

Written this way, the presence of a unit root in the AR(1) series y is equivalent to the coefficient c equaling zero (see http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test).

I suspect the controversy can be reduced to two simple hypotheses:

H0: The size of the coefficient b is not significantly different from zero.
Ha: The size of the coefficient b is significantly different from zero.

The size of the coefficient should be indicative of the contribution of the deterministic trend (in this case anthropogenic warming) to the global temperature.

We transform the global temperature by differencing (an autoregressive or AR coordinate system), and then fit a model just as we would any other.

In the deterministic coordinate system, b is highly significant, with a strong contribution from AGW. For the AGW forcing I use the sum of the anthropogenic forcings in the RadF.txt file: W-M_GHGs, O3, StratH2O, LandUse, and AIE.

fig1


Call: lm(formula = y ~ g)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.34054 0.01521 -22.39 <2e-16 ***
g 0.31573 0.01802 17.52 <2e-16 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1251 on 121 degrees of freedom
Multiple R-squared: 0.7172, Adjusted R-squared: 0.7149
F-statistic: 306.9 on 1 and 121 DF, p-value: < 2.2e-16

The result is very different in the AR coordinate system. The coefficient of y is not significantly different from zero (at 95%) and neither is b.

fig2


Call: lm(formula = d ~ y + g + 0)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
y -0.06261 0.03234 -1.936 0.0552 .
g 0.01439 0.01088 1.322 0.1887

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.101 on 121 degrees of freedom
Multiple R-squared: 0.0389, Adjusted R-squared: 0.02302
F-statistic: 2.449 on 2 and 121 DF, p-value: 0.09066

Perhaps the main contribution of AGW has come since 1960, so we restrict the data to this period and examine the effect. The coefficient of the deterministic AGW trend is larger, but still not significant.

fig3


Prob1(window(CRU,start=1960),GHG)
Call: lm(formula = d ~ y + g + 0)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
y -0.24378 0.10652 -2.289 0.0273 *
g 0.03050 0.01512 2.017 0.0503 .

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1149 on 41 degrees of freedom
Multiple R-squared: 0.1284, Adjusted R-squared: 0.08591
F-statistic: 3.021 on 2 and 41 DF, p-value: 0.05974

But what happens when we use another data set? Below is the result using GISS. The coefficients are significant, but the effect is still small.

fig4


> Prob1(GISS,GHG)
Call: lm(formula = d ~ y + g + 0)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
y -0.27142 0.06334 -4.285 3.69e-05 ***
g 0.06403 0.01895 3.379 0.00098 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1405 on 121 degrees of freedom
Multiple R-squared: 0.1375, Adjusted R-squared: 0.1232
F-statistic: 9.645 on 2 and 121 DF, p-value: 0.0001298

So why be concerned with the presence of a unit root? It has been argued that while the presence of a unit root indicates that using OLS regression is wrong, this does not contradict AGW, because the effect of greenhouse gas forcings can still be incorporated as deterministic trends.

I am not 100% sure of this, as the differencing removes most of the deterministic trend that could be potentially explained by g.

If the above is true, there is a problem. When the analysis respects the unit root on real data, the deterministic trend due to increasing GHGs is so small that the null hypothesis is not rejected, i.e. the large contribution of anthropogenic global warming suggested by a simple OLS regression model is a spurious result.

Here is my code. orient is a function that matches two time series to the same start and end dates.


Prob1 <- function(y, g) {
  v <- orient(list(y, g))                # align the two series to common start and end dates
  d <- diff(v[, 1])                      # first difference of temperature
  y <- v[1:(dim(v)[1] - 1), 1]           # lagged temperature y(t-1)
  g <- v[1:(dim(v)[1] - 1), 2]           # lagged forcing g(t-1)
  l <- lm(d ~ y + g + 0)                 # the AR coordinate system: no intercept
  print(summary(l))
  plot(y, type = "l")
  lines((g * l$coef[2] + y[1]), col = "blue")  # deterministic component implied by g, offset to the start
}

Central Problems of Climate Science

Prompted by the interest VS has rekindled in fundamental analysis of the temperature series at Bart's and Lucia's blogs, below is a small set of core 'problems' facing statistical climate science (CS) — kind of a challenge.

Remember a deterministic trend is one brought about by a changing value of the mean, due to a change in an equilibrium value for example (ie non-stationary). A stochastic trend is due the accumulation of random variations; all parameters are stationary.

Problem 1. If temperature is adequately represented by a deterministic trend due to increasing GHGs, why be concerned with the presence of a unit root?

Problem 2. Cointegration was developed in economics to deal with a problem of spurious correlation between series with stochastic trends. Why should spurious correlation be a concern if the trends in temperature and GHGs are deterministic?

Problem 3. Why is the concept of ‘climate’ distinguished from the concept of ‘weather’ by an arbitrary free parameter, usually involved in averaging or smoothing or ‘scale’ transformations of 10 to 30 years?

Problem 4. Why has a community of thousands or tens of thousands of climate scientists not managed to improve certainty in core areas in any significant way in more than a decade (eg the climate sensitivity caused by CO2 doubling as evidenced by little change in the IPCC bounds)?

Problem 5. Why do so many of the forecasts of CS fail (see C3 for a list)?

Continue reading Central Problems of Climate Science

Cointegration

So now the fun starts. Having established the integration order of the variables in the RadF file, we impose the rule that only variables of the same order can be combined, and in particular that the I(2) forcings cannot be directly cointegrated with temperature, which is I(1). In this case all the anthropogenic variables in RadF are I(2) — W-M_GHGs, O3, StratH2O, LandUse, SnowAlb, BC, ReflAer, AIE — while Solar and StratAer are I(1) or I(0).

Adding these AGW variables together would be one way, as they are all I(2) and are in the same units of Watts/m2. However, the result is still I(2), as it takes two differencings before the ADF test rejects.

d Root ADF Padf
[1,] 0 0.9662216 -1.088566 0.9208762
[2,] 1 0.5309850 -2.192266 0.4966766
[3,] 2 -0.4976567 -4.875666 0.0100000

What we can do is see if the variables cointegrate. To do this, we fit a linear regression model to the variables and test the residuals for integration order. If the residuals are of a lower order, I(1), then the variables are all related and can be treated as a common trend. It turns out the residuals are I(1) (a sketch of this check follows the output below).

d Root ADF Padf
[1,] 0 0.9988194 -3.087874 0.1245392
[2,] 1 0.4176963 -4.468787 0.0100000
[3,] 2 -0.4492167 -6.912500 0.0100000
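
For reference, a minimal sketch of this residual-based check in R, assuming the forcings have been read into a data frame radf with syntactic column names (e.g. GHG standing in for W-M_GHGs), and tseries for adf.test:

library(tseries)                     # for adf.test; an assumption of this sketch

# radf is a hypothetical data frame holding the I(2) anthropogenic forcings
coint <- lm(GHG ~ O3 + StratH2O + LandUse + AIE, data = radf)
summary(coint)                       # the (restricted) cointegrating regression
adf.test(residuals(coint))           # unit root not rejected: residuals at least I(1)
adf.test(diff(residuals(coint)))     # rejection after one difference is consistent with I(1) residuals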

Some of the variables are not significant, however, so we restrict the cointegrating vector to the following:

W-M_GHGs = -4.51*O3 + 54.56*StratH2O + 11.21*LandUse - 0.35*AIE, R2 = 0.9984

The plot of the residuals of the full (black) and restricted (red) model is below. Interestingly, there is a clear 20-year cycle not apparent in the original data.

fig4

Finally we have the information we need to develop our full model. We have Temperature and Solar as I(1), and we will leave out StratAer (volcanics) as it turns out to be non-significant. The sum of the I(2) variables, AGW, is itself I(2), so we use its first difference (or rate of change), deltaAGW, which will be I(1). We also use the residuals of the cointegration, g, which are I(1).

The resulting linear model turns out to have marginal cointegration on the ADF test (as also found by Beenstock), but the PP test clearly rejects, so the residuals are I(0) and the variables cointegrate.

d Root ADF Padf
[1,] 0 0.7316112 -3.132276 0.1060901
[2,] 1 -0.1910926 -6.674352 0.0100000
[3,] 2 -0.9669920 -8.286269 0.0100000

Phillips-Perron Unit Root Test
data: l$residuals
Dickey-Fuller Z(alpha) = -33.0459, Truncation lag parameter = 4, p-value = 0.01

T = -0.36007 + Solar*1.22403 + deltaAGW*5.31922 + g*1.12901, R2=0.629

Below I plot the temperature (black), and the prediction from the model (red).

The plot also explores the impact of different levels of CO2. The upper (green) is what would have happened to temperature if CO2 had increased at twice the rate. The lower (blue) is the result if CO2 had been zero throughout.

fig5

So far the reproduction of the Beenstock analysis on an alternative data set resembles his results, suggesting some robustness. The message of this analysis is that higher emissions of CO2 make no difference to the eventual temperature level, although temperature changes faster, arriving at the equilibrium level sooner.

Integration Order of RadF.txt

The test of integration order from the previous post is applied to the major atmospheric forcings used in the GISS global climate models in recent years. These are available for 1880 to 2003 in a file called RadF.txt. The codes for the forcings are self-explanatory: W-M_GHGs, O3, StratH2O, Solar, LandUse, SnowAlb, StratAer, BC, ReflAer, AIE.

RadF

Below is the sum of the forcings.

NetF

We want to know the integration order of the global surface temperature series as well. The result:

Temp=1, W-M_GHGs=2, O3=2, StratH2O=2, Solar=0, LandUse=2, SnowAlb=2, StratAer=0, BC=2, ReflAer=2, AIE=2.

Temperature has an integration order of I(1), while most of the forcings have an integration order of I(2). The only exceptions are Solar, which comes in at I(0) in this test, and Stratospheric Aerosols (due to ultra-Plinian eruptions), also I(0). It's pretty clear that the forcings related to anthropogenic factors are I(2), and temperature is I(1).
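
A minimal sketch of this kind of test in R (difference each series until the ADF test rejects; tseries assumed for adf.test):

library(tseries)                                 # for adf.test; an assumption of this sketch

int.order <- function(x, max.d = 3, alpha = 0.05) {
  for (d in 0:max.d) {
    y <- if (d == 0) x else diff(x, differences = d)
    if (adf.test(y)$p.value < alpha) return(d)   # order = number of differences needed to reject
  }
  NA                                             # not resolved within max.d differences
}
# e.g. int.order(Temp) and int.order(GHG) would be expected to return 1 and 2
# respectively, per the results above (the series names here are hypothetical).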

Our results are (mostly) consistent with Beenstock's in this regard. Remembering that the test is not foolproof, the result for Solar is suspicious, as he (and others) find it to be I(1). Solar has a huge 11-year cycle in it, and that might have something to do with it. You need to be very careful with confounding factors in these tests.

And it is (largely) because of this incompatibility between the orders of integration that Beenstock told London’s Cass Business School that the link between rising greenhouse gas emissions and rising temperatures is “spurious”:

“The greenhouse effect is an illusion.”

More choice quotes:

He warned that climatologists have misused statistics, leading them to the mistaken conclusion global warming is evidence of the greenhouse effect.

Professor Beenstock said that just because greenhouse gases and temperatures have risen together does not mean they are linked.

In the next post I’ll look at the basis for his prediction:

“If the sun’s heat continues to remain stable, and if carbon emissions continue to grow with the rate of growth of the world economy, global temperatures will fall by about 0.5C by 2050,” Professor Beenstock said.

editorial

I try not to pen editorials. OK, here goes. I respect the attention given to this blog, as there are plenty of other great blogs on climate change, politics, finance, etc. to read. I try to stay an 'on message' advocate for numeracy. Everyone has something to offer from their experiences though. Right at this moment, there is something important to say about numeracy, but it takes a bit to explain.

I would encourage y'all to read the discussion on New paper on mathematical analysis of GHG in relation to VS, not because I believe in it, or because I believe on the balance of probabilities that it is right, but because I believe it is the way scientific progress is made. It's what I have tried to do here. Check the numbers.

This is not another 'me too' paper inventing its own 'novel' approach to affirming the cause du jour in the name of 'research'. It's about contesting the methodology of other experts in the field. Boring? No. It's what it's all about. Jargon? No. It's no more complex than 'regression'. Just unfamiliar. And I am not taking a dig at anyone, as I respect everyone who posts here. Peer reviewed? Uncertain.

When Steve McIntyre started his blog 5 years ago (I started around the same time), I sent him an email to the effect that he would change the way science is done. He called the FOIs and the journal processes of peer review, comments etc. 'quasi-litigation'. I agree, and acknowledge that scientists should use the available processes more. It is a natural extension of the search for truth.

The main failing of the IPCC, IMHO, is in ignoring peer-reviewed papers and comments in favor of confirmatory dreck. What to do about that? The only answer I know is to contest the logic, models, mathematics and results using data. Don't just check your assumptions, check your calculations. Check that the stated results are justified.

What point is there in talking about the philosophy of science when, by all accounts, most science is wrong? The overwhelming reason is that:

Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

Climbing into the stadium and dueling over the technical details is the only way, despite the personal cost.

Polynomial Cointegration Rebuts AGW

Please discuss the new paper by Michael Beenstock and Yaniv Reingewertz here.

Way back in early 2006 I posted on an exchange with R. Kaufmann, whose cointegration modelling is referenced in the paper, in a post entitled Peer censorship and fraud. He was complaining at RealClimate about the suppression of these lines of inquiry by the general circulation modellers. The post gives a number of examples that were topical at the time. ClimateGate bears it out.

Steve McIntyre wrote a long post on the affair here.

[R]ealclimate’s commitment to their stated policy that “serious rebuttals and discussions are welcomed” in the context that they devoted a post to criticize Ross and me and then refused to post serious responses. In this case, they couldn’t get away with censoring Kaufmann, but it’s pretty clear that they didn’t want to have a “serious” discussion online.

Continue reading Polynomial Cointegration Rebuts AGW

Rahmstorf Reamed

The Australian reports a major new controversy after Britain’s Met Office denounced research from Stefan Rahmstorf suggesting that sea levels may increase by more than 1.8m by 2100.

Jason Lowe, a leading Met Office climate researcher, said: “We think such a big rise by 2100 is actually incredibly unlikely. The mathematical approach used to calculate the rise is completely unsatisfactory.”

Critic Simon Holgate, a sea-level expert at the Proudman Oceanographic Laboratory, Merseyside, has written to Science magazine, attacking Professor Rahmstorf’s work as “simplistic”.

“Rahmstorf’s real skill seems to be in publishing extreme papers just before big conferences like Copenhagen, when they are guaranteed attention,” Dr Holgate said.

It seems the concerns about the work of the Clown of Climate Science that the blogosphere has been voicing for years are finally going mainstream. Interested readers might want to read:

Recent Climate Observations: Disagreement with Projections

Warning to the Garnaut Commission about the excessive reliance on the work of Rahmstorf and friends.

Also search Niche Modelling, and see ClimateAudit, the Blackboard, as well as many other posts from these and other statistical blog sites for examples of R’s statistical incompetence.

I don't have a problem with academics being wrong. The problem is with the Federal Government and Penny Wong, who are suckers for this charlatan offering the scientific reliability of an astrologer.

The report states one objection:

Based on the 17cm increase that occurred from 1881 to 2001, Professor Rahmstorf calculated that a predicted 5C increase in global temperature would raise sea levels by up to 188cm.

It's worse than that. It appears that the extrapolation in R's model is actually based on a non-significant increase in the rate of sea level rise w.r.t. temperature, i.e. a tiny derivative of already problematic data.

It uses simple measurements of historic changes in the real world to show a direct relationship between temperature rise and sea level increase and it works stunningly well – Rahmstorf

But R cheerfully admits here:

How do we know that the relationship between temperature rise and sea level rate is linear, also for the several degrees to be expected, when the 20th century has only given us a foretaste of 0.7 degrees? The short answer is: we don’t.

Why then should anyone take any notice of predictions from a model when, as the author admits, the truth of the fundamental assumption is unknown? How do we know the stars affect human behaviour? The short answer is: we don’t.

Heed the fourth commandment of statistics: When using multivariate models, always get the most for the least.

Recent Climate Illogic

Environment minister Peter Garrett claimed that recent figures on Australian temperature prove Opposition leader Tony Abbott was wrong to claim that the world had stopped warming.

Substitute Australia for the World, and the last 100 years for 10 years, and you might get close to the actual claim, similar to that made by respected climate physicist Roy Spencer that “there has been no net warming in the last 11 years or so”.

It’s easy — but confused — to find a limited region with a different climate, over a different time period, then use it for rebuttal.

David Jones of the BoM claims, referring to their Annual Report, that the upward trend of temperature since 1920(?) should silence the critics once and for all.

Continue reading Recent Climate Illogic

EMD Estimates of Natural Variation

Our approach so far has been to model natural climate variation of global temperature with sinusoidal curves, and potential AGW as increasing trends. A new algorithm called EMD (Empirical Mode Decomposition) promises to more robustly identify cyclical natural variation (NV), showing the contribution of NV and AGW to global temperature, and testing the IPCC claim that most of the recent warming is due to AGW.

Underestimation of natural variation (NV) is a crucial flaw in the IPCC’s logic, according to Dr Roy Spencer:

They ignore the effect of natural cloud variations when trying to diagnose feedback, which then leads to overestimates of climate sensitivity. … By ignoring natural variability, they can end up claiming that natural variability does not exist. Admittedly, their position is internally consistent. But then, so is all circular reasoning.

The relative contribution of AGW to temperature increase in the late 20th century underpins the IPCC global warming claims, according to the Wiki page on Scientific Opinion on Climate Change:

National and international science academies and scientific societies have assessed the current scientific opinion, in particular on recent global warming. These assessments have largely followed or endorsed the Intergovernmental Panel on Climate Change (IPCC) position of January 2001 that states:

An increasing body of observations gives a collective picture of a warming world and other changes in the climate system… There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.[1]

Since 2007, no scientific body of national or international standing has maintained a dissenting opinion.

So estimating the relative proportion of natural variation vs. trend is very important. While widely used in other fields, EMD is relatively little used in climate science.

As an example, Lin and Wang (2004) used EMD for analysis of solar insolation. They claim that the solar eccentricity signal is much larger than previously estimated, more than 1% of solar irradiance, and adequate for controlling the formation and maintenance of quaternary ice sheets. This is a potential resolution of the 100,000-year problem, which has also been used to justify the necessity of CO2 feedback in producing ice ages.

Conventional spectral methods assume strictly periodic components, constant in both frequency and amplitude. EMD relaxes these assumptions, allowing quasi-periodicity, which might explain why it can account for more of the variation. The EMD algorithm proceeds by first extracting the highest-frequency component, called an intrinsic mode function (IMF), leaving a residual. It then does the same for the next highest frequency, and so on, until only a trend is left.

While it is possible the residual is also part of a cycle — it is always possible to model a trend with a sinusoid of long enough period — we treat it as the AGW trend in order to estimate the maximum possible contribution of AGW to global warming.
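
A minimal sketch of the decomposition in R, assuming the CRAN EMD package and its emd() function (the temp and year objects below are placeholders for the CRU anomaly series and its time index):

library(EMD)                                     # CRAN 'EMD' package; an assumption of this sketch

dec <- emd(temp, year)                           # sift out the intrinsic mode functions
dec$nimf                                         # number of IMFs extracted
matplot(year, dec$imf, type = "l", ylab = "IMF") # the quasi-periodic components
plot(year, dec$residue, type = "l", ylab = "Residual (treated here as trend)")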

Here are the results of applying EMD to the CRU global temperature series. Figure 1 below shows each of the 5 IMFs and the residual, the remainder after subtracting out the periodic components.

fig11

Each of the IMFs is shown, with mean periods of 4.0, 6.6, 11.9, 23.4 and 55.1 years respectively. Most readers would be well aware of the similarity of these periods to major solar and oceanic cycles.

Continue reading EMD Estimates of Natural Variation

Magnetism Science Exercise

This numeracy exercise for schools can be adapted for grades from prep up. You need:

1. A number of strong magnets, at least two per group
2. Iron filings
3. Selection of nuts of various metals: iron, copper, brass, some lead sinkers.

The hour-long exercise is presented as an introduction to scientific thinking in the higher grades. I don't worry about this for lower grades.

I usually choose an assistant to help hand out magnets, though this changes throughout.

1. Discovery – I introduce the magnets as my “floater rays”. I let them explore the repulsion of the magnets, showing how the magnet can “float” above another if in the correct orientation. I also do the moving the magnet under the desk trick.

2. Theory formation/induction – I hand out more magnets, suggesting they find out what magnets are attracted to. After exploring for a while I solicit the theory, invariably "metal". I ask them to put up their hand if they think this is a good theory. (All usually agree with the theory.)

3. Falsification/tests – I hand out nuts of various metals, indicating that we are going to test this theory. They soon get the point that their theory is wrong, and I cross it out on the board.

4. Refinement – Together we refine the theory that only certain metals are attracted, etc.

5. Visualization/schematics – I place a bar magnet (or the group of 8 or so magnets) under a piece of paper and sprinkle iron filings on top. "Awesome!" and "Cool!" are the usual responses at this stage. I introduce it by asking "Would you like to see my floater rays?"

I get them to draw the rays on a piece of paper, but then go around and draw the schematic lines of magnetic force (the familiar magnetic field of a bar magnet). I explain the difference between an artist's impression and a schematic diagram.

6. Measurement/Numeracy – For prep and lower classes, I will get them to number the field lines in their schematic. For higher grades I go into the measurement phase, asking them to design an apparatus for measuring the strength of one or two magnets.

The way I do it is to attach one of the iron nuts to three or so rubber bands, and measure the extension as a magnet pulls the nut against the bands. This can be done next to a ruler, so it is very easy to set up. I am sure there are other ways.

As I collect the (3) lengths from the groups, I list them in a table, calculate the differences (to get the extension) then add the numbers up. This is an opportunity to test their mental arithmetic and encourage agility.

The strength of one, two, three magnets etc. can be indicated from the overall result. More elaborate approaches are possible with higher level classes.

Comment on McLean et al Submitted

Here is the abstract for our comment submitted to Geophysical Research Letters today. Bob Tisdale is acknowledged as the source of the idea in the first paragraph. Let's see how it goes. If you would like a copy, contact me via the form above.

Update: Now available from arXiv

Comment on “Influence of the Southern Oscillation on tropospheric temperature” by J. D. McLean, C. R. de Freitas, and R. M. Carter

David R.B. Stockwell and Anthony Cox

Abstract

We demonstrate an alternative correlation between the El Nino Southern Oscillation (ENSO) and global temperature variation to that shown by McLean et al. [2009]. We show 52% of the variation in RATPAC-A tropospheric temperature (and 59% of HadCRUT3) is explained by a novel cumulative Southern Oscillation Index (cSOI) term in a simple linear regression model and 65% of RATPAC-A variation (67% of HadCRUT3) when volcanic and solar effect terms are included. We review evidence from physical and statistical research in support of the hypothesis that accumulation of the effects of ENSO can produce natural multi-decadal warming trends. Although it is not possible to reliably determine the relative contribution of anthropogenic forcing and SOI accumulation from multiple regression models due to collinearity, these results suggest a residual accumulation of around 5 ± 1% and up to 9 ± 2% of ENSO-events has contributed to the global temperature trend.
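
A minimal sketch of the kind of regression described in the abstract (the data frame and column names below are hypothetical; the series must be aligned on the same dates):

# dat: hypothetical monthly data frame with columns temp (RATPAC-A or HadCRUT3
# anomaly), soi, volcanic and solar, aligned on the same dates.
dat$csoi <- cumsum(dat$soi)                             # the cumulative SOI (cSOI) term

fit1 <- lm(temp ~ csoi, data = dat)                     # cSOI alone
fit2 <- lm(temp ~ csoi + volcanic + solar, data = dat)  # adding volcanic and solar terms
summary(fit1)$r.squared                                 # cf. the ~52-59% reported above
summary(fit2)$r.squared                                 # cf. the ~65-67% reported above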

Continue reading Comment on McLean et al Submitted

Influence of the Southern Oscillation on tropospheric temperature

A new paper by Australian John McLean, with New Zealander Chris de Freitas and Australian ex-pat Kiwi Bob Carter, is a potential AGW buster, attributing decadal temperature variation largely to internal oceanic effects: ENSO and, over the longer term, the 1976 Great Pacific Climate Shift, as we did here.

That mean global tropospheric temperature has for the last 50 years fallen and risen in close accord with the SOI of 5–7 months earlier shows the potential of natural forcing mechanisms to account for most of the temperature variation.

While the bottom line of this paper is that the change in SOI accounts for 72% of the variance in global temperature for the 29-year-long MSU record and 68% of the variance in global temperature for the longer 50-year RATPAC record, I think the claim of a longer term temperature effect could have been better supported. They stated:

Lean and Rind [2008] stated that anthropogenic warming is more pronounced between 45°S and 50°N and that no natural process can account for the overall warming trend in global surface temperature. We have shown here that ENSO and the 1976 Great Pacific Climate Shift can account for a large part of the overall warming and the temperature variation in tropical regions.

mclean

However, the assertion comes down to Figure 4, where they identify that the mean of the SOI (and temperature) seems to change in 1976. This model is not identified rigorously by any analysis, but is stated as an observation in the text.

Continue reading Influence of the Southern Oscillation on tropospheric temperature

Smooth Operator

The replication of the highly influential Rahmstorf 2007 A Semi-Empirical Approach to Sea Level Rise, one of the main sources of projected sea level rise, was reported in the previous post.

Steve McIntyre showed that, in a now discredited (and disowned) Rahmstorf et al 2007 publication, Rahmstorf had pulled an elaborate stunt on the community by dressing up a simple triangular filter as "singular spectrum analysis" with "embedding dimensions". I can now report another, perhaps even more spectacular stunt.

His Figure 2 is crucial, as it is where the correlation between the rate of sea level increase, deltaSL, and the global temperature, Temp, is established. If these were not correlated, then there would be no basis for his claims of a significant “acceleration” in the increase in sea level when temperature increases, and his estimates of sea level rise by 2100 would not be nearly so high.

It is well known that smoothing introduces spurious autocorrelations into data that can artificially inflate correlations, and one of the comments on his paper (attached to the first link above) picked up on this. Rahmstorf's procedure introduces no fewer than 5 different types of smoothing to produce his Figure 2:

1. singular spectrum analysis (taking the first EOF)
2. padding the end of the series with a linear extrapolation of 15 points
3. convolution (a 15-point filter)
4. a linear trend calculated from 15 points (on the sea level data only)
5. binning of size 5

I replicated his procedure in the previous post in the series. Here, the entire procedure is substituted with a single binning (averaging each successive block of M data points). The figure below compares the Rahmstorf procedure at parameters m=13:16 (red line) with the result of binning the same data into bins of size m=13:16 (black line). The sea level data is differenced after binning to get deltaSL.

fig441
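
For reference, a minimal sketch of the substitute binning step in R (sealevel and temp are hypothetical annual series on the same years):

bin <- function(x, m) {
  n <- floor(length(x) / m) * m                 # drop any incomplete final block
  colMeans(matrix(x[1:n], nrow = m))            # mean of each successive block of m points
}

m <- 15
deltaSL <- diff(bin(sealevel, m))               # difference the binned sea level
TempBin <- bin(temp, m)[-1]                     # align with the differenced series
plot(TempBin, deltaSL)                          # the analogue of Rahmstorf's Figure 2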

Continue reading Smooth Operator

A semi-empirical approach to sea level rise

Published in Science, this Rahmstorf 2007 article provides a high-end estimate of sea level rise of over a meter by the end of the century (rate of 10mm/yr). Linear extrapolation puts the rate of increase at only 1.4mm and 1.7mm per year depending on start date (1860 or 1950).

The paper was followed by two critical comments, both bashing the statistics, and these are attached to the link above. Rahmstorf replied to those comments. The issues raised are familiar to readers of this, CA, Lucia, and other statistical blogs: significance, autocorrelation, etc. and worth a read.

Worthwhile as the comments are, they do not look into the problem of the end-treatment used by Rahmstorf, and I look at that here.

All of the papers projecting these high-end rates depend on the assumption of a recent 'acceleration' in sea levels. That is, they seem to depend on the rate of increase getting faster and faster.

The Rahmstorf 2007 paper uses the smoothing method most recently savaged at CA here, where it was shown, despite all the high-falutin' language, to be equivalent to a simple triangular filter of length 2M, padded with M points of slope equal to the last M points. My main concern is that at this crucial end-section the data has been duplicated by the padding, effectively increasing the number of data points of very high slope.
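
A minimal sketch of the smoother as described above, i.e. a triangular filter of length 2M applied after padding the end with M points continuing the slope of the last M points (a sketch only, not Rahmstorf's actual code):

tri.smooth <- function(x, M) {
  slope <- coef(lm(tail(x, M) ~ seq_len(M)))[2]   # slope of the last M points
  pad   <- tail(x, 1) + slope * seq_len(M)        # linear extrapolation of M points
  w     <- c(1:M, M:1); w <- w / sum(w)           # triangular weights, length 2M
  sm    <- stats::filter(c(x, pad), w, sides = 2) # centred weighted moving average
  sm[seq_along(x)]                                # original span (NA where the window is incomplete)
}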

The figure below shows a replication of the Rahmstorf smoothing with and without padding (moved down for clarity; code below). Two sea level data sets are shown: one by Church, "A 20th century acceleration in global sea level rise" (used in Rahmstorf; data available from CSIRO here), and another by Jevrejeva, "Recent global sea level acceleration started over 200 years ago?" (data here).

It should be noted this data ends in 2001-2, a truncation bound to maximize recent temperature increases.

fig1

Continue reading A semi-empirical approach to sea level rise