There appears to be an error in the influential paper by Rahmstorf et al. (2007). Rahmstorf et al. (Science Brevia, 4 May 2007, p. 709 [1]) report that the trends of global mean surface temperature and sea level raise concerns that the climate system “may be responding more quickly to climate change than our current generation of models indicates”. At least one major study, the Interim Report of the Garnaut Review, relies on the paper as a major conclusion in advocating prompt and extreme action on carbon emissions (Section 2.4, Consequences of Climate Change, Observed Climate Change). But there seems to be a problem.

As previously reported here,

the conclusions of Rahmstorf’s 7 (Rahmstorf, Cazenave, Church, Hansen, Keeling, Parker, and Somerville) rely on a trend line lying above the IPCC projections in their Figure 1. No statistical tests are performed; the claim rests purely on this visual aid. Their Figure 1 is reproduced below, with the key part of the image containing the IPCC projections enlarged.

**Figures 1 and 2: Rahmstorf et al. (2007) Figure 1. Whole and enlarged.**

Rahmstorf’s 7 state in the figure caption that “All trends are nonlinear trend lines and are computed with an embedding period of 11 years and a minimum roughness criterion at the end (Moore et al. 2006 [2])”. On reading Moore’s paper, it would appear that the nonlinear methodology used was Singular Spectrum Analysis (SSA). The Moore paper suggests the minimum roughness criterion (MRC) would follow the Mann 2004 [3] recipe of padding the end of the series with data reflected about the final value.
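For readers unfamiliar with the method, SSA trend extraction can be sketched in a few lines. This is a minimal Python illustration, not the actual ssatrend.m implementation; the function name is mine, and the window length `L` plays the role of the 11-year embedding period.

```python
import numpy as np

def ssa_trend(y, L=11):
    """Minimal SSA trend sketch: embed the series in an L-lagged
    trajectory matrix, keep the leading singular component, and
    diagonal-average it back into a series."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])  # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X1 = s[0] * np.outer(U[:, 0], Vt[0])                 # rank-1 reconstruction
    # Hankel (anti-diagonal) averaging back to a length-N series
    trend = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        trend[j:j + L] += X1[:, j]
        counts[j:j + L] += 1
    return trend / counts

t = np.arange(50.0)
noisy = 0.02 * t + np.random.default_rng(0).normal(0.0, 0.1, 50)
trend = ssa_trend(noisy, L=11)   # a smooth trend, same length as the input
```

Note that the reconstruction is defined all the way to the final data point without any padding of the series.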

**Figures 3 and 4: GISS temperature with SSA trend, unpadded and padded.**

The peculiar property of MRC of “pinning” the trend line to the final end point of the series has been noted in the post “Mannomatic Smoothing and Pinned End-points” at ClimateAudit. The comparison of MRC-padded and unpadded series is shown in figures captured from CaterpillarSSA. The first figure, “unpadded.png”, shows an SSA trend line that approximates the Rahmstorf figure result. The second figure, “padded.png”, shows the SSA trend line for an MRC-padded GISS series, passing directly through the 2006 value. This is as it should be, as the MRC effectively ‘pins’ the trend line to the final value due to the symmetry about that value.
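The pinning can be verified directly: with data reflected vertically about the final value, any symmetric filter centered on the last point averages pairs that cancel back to that value. A minimal Python sketch (the function names are mine, and a centered moving average stands in for the SSA filter; the symmetry argument is the same for any symmetric smoother):

```python
import numpy as np

def mrc_pad(y, h):
    """Mann-style padding: h values reflected about the time boundary
    and vertically about the final value, y_pad[k] = 2*y[-1] - y[-1-k]."""
    k = np.arange(1, h + 1)
    return np.concatenate([y, 2 * y[-1] - y[-1 - k]])

def centered_ma(y, h):
    """(2h+1)-point centered moving average (valid region only)."""
    w = 2 * h + 1
    return np.convolve(y, np.ones(w) / w, mode="valid")

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(0.02, 0.1, 60))   # a noisy warming-like toy series
h = 5                                      # half-width of an 11-point filter
smoothed = centered_ma(mrc_pad(y, h), h)
# The smoothed line is pinned exactly to the final observation:
assert np.isclose(smoothed[-1], y[-1])
```

At the last point the filter sees pairs y[-1-k] and 2*y[-1] - y[-1-k], which average to y[-1] regardless of the data, so the pinning is exact, not approximate.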

Was a direct application of the SSA trend line used, and not an MRC-padded series as described? If an MRC-padded series had been used in the figure, it would have been end-pinned to the 2006 value, at the center of the IPCC projections. The figure would then not have conveyed the impression that temperatures are in the upper range of the IPCC projections, as claimed. As it was in 2006, it appears that SSA without MRC padding produces a higher trend line than SSA with it, and the higher line is necessary to support their claim.

An additional puzzling factor is the reference to MRC padding at all. Padding the end of the series is not actually necessary to ensure that an SSA trend line is drawn to the end of the series. Padding is only needed for acausal filters, such as centered moving averages, which stop half a window length short of the end of the series.

To date, attempts to contact Prof. Rahmstorf and clarify the actual methodology used have been unsuccessful.

## References

[1] S. Rahmstorf, A. Cazenave, J. A. Church, J. E. Hansen, R. F. Keeling, D. E. Parker, and R. C. J. Somerville. Recent climate observations compared to projections. Science, 316(5825):709, 2007.

[2] J. C. Moore, A. Grinsted, and S. Jevrejeva. New tools for analyzing time series relationships and trends. Eos, 86(24), 2005.

[3] M. E. Mann. On smoothing potentially non-stationary climate time series. Geophys. Res. Lett., 31:L07214, doi:10.1029/2004GL019569, 2004.

David–

Does the method describe how to estimate the uncertainty in the “mean” value around the trend line?

I’m puzzled about this because the Rahmstorf7 method involves: fit, slide (to match the 1990 average to the IPCC zero point), and eyeball.

You are discussing the fit on the endpoint. But uncertainty in the “slide” amount should normally be discussed in a paper like Rahmstorf. If I estimate uncertainty in means calculated using an 11 year centered average, that uncertainty is also enough to significantly affect the later “calibrated eyeball” comparison of the observations to the projections.

Gosh… R won’t call you? And Tamino suggested that I was silly to ask my readers a question about his paper. Evidently, I should have just picked up the phone and called!

http://rankexploits.com/musings


Thanks, David. Could you also explain to the ignorant, please, the meaning of an ‘embedding period of 11 years’ for the non-linear trend analysis?

The Rahmstorf et al. note treats data in the IPCC charts over the 1990-2006 period as ‘projections’ and tests them against observations. Lucia Liljegren argues, however, that only the IPCC chart data over the period 2000-2006 are in fact a projection of the IPCC models. Ian Castles seems to share that view. This implies Rahmstorf et al. have at best a 6-year period on which to base their ‘non-linear trends’; not 16 years and not 11 years. Does the ‘embedding period’ have any relevance to the reliability of the trends asserted in Rahmstorf et al.?

http://www.petergallagher.com.au


#1 Lucia, No, there are no measures of uncertainty or statistical tests in the Rahmstorf7 paper. The claims are based purely on eyeball tests.

#2 Peter, You know, I am not sure either, but it is an option in the SSA analysis. I used the same option. Someone with more knowledge of SSA might be able to answer that.

I don’t think so. The point is that they state they used the ‘minimum roughness criterion’, but if they had used it, then the trend line would have gone through the 2006 value (due to pinning) and fallen in the center of the IPCC projection. I don’t think it matters what embedding period is used. If MRC is used, due to symmetry, it should ‘pin’ the trend on the end point. As the graph doesn’t, some other method must have been used.

At the moment I don’t know what method exactly. But they used a methodology that gives a trend above the IPCC projections. If they had used the methodology they appear to have described it would not.

http://landshape.org/enm


Peter–

I specifically examined the AR4 projections, which project higher values than the TAR. So, that’s part of the difference. But there are puzzling things about Rahmstorf 7. The paper is very brief, and the analysis method is explained in a rather cursory fashion. Ultimately, the method is: “Fit a fancy curve, slide the data ‘up’ so the 1990 predictions match the fancy curve in the year 1990, eyeball, tell people how it looks to you!”

That’s pretty much it.

http://rankexploits.com/musings


Peter, David, Lucia,

The IPCC’s 2007 Report states explicitly that “SIX ADDITIONAL years of observations SINCE the TAR (Chapter 3) SHOW THAT temperatures ARE CONTINUING to warm near the surface of the planet” (Chapter 9, p. 683, EMPHASES added). As the TAR was released in January 2001, the six additional years to which the 2007 Report referred happen to begin with the initial month of Lucia’s analysis.

If I have correctly understood her results, she finds that the observational record since 2001 is CONSISTENT with continued warming (though probably at a lower rate than the 0.2 C/decade projected by the IPCC). But it does not SHOW that temperatures are continuing to rise, as claimed in the IPCC Report.

If my interpretation of Lucia’s conclusion is correct, and Lucia is correct (I believe that she is), the statement in AR4 is at best very sloppy. This is surprising for a Report that governments, scientific academies, etc. recognise as the ultimate authority on the science of climate change.

As a separate matter, it seems that there is a technical error (maybe more than one) in the paper by the Rahmstorf7; again this is surprising for a much-cited paper that was published in ‘Science’. And the paper’s conclusion that warming has been at the upper end of the IPCC projections is incorrect, at least for the period from 2001 onwards. As I’ve pointed out in other posts, there are other problems with the 1990-2000 part of the Rahmstorf7 analysis.


Ian, That’s right. There seems to be a technical error that resulted in the trend being higher than the IPCC projections, OR the method they used is misdescribed. I’ll have more on the sea level analysis later.

http://landshape.org/enm


I commented on the Moore paper at CA ( http://www.climateaudit.org/?p=2984#comment-234031 ); some further thoughts on the Figure 3 CIs. The text says

“The confidence interval of the nonlinear trend is usually much smaller than for a least squares fit, as the data are not forced to fit any specified set of basis functions.”

and indeed, CIs in Fig 3. LS-fit seem larger than in the ‘non-linear trend’ fit. I got a silly idea how this was done:

1) Smooth original data y using method A (let it be OLS, low-pass, SSA+MR, whatever).

2) Compute the standard deviation of the residuals, std(A*y - y)

3) Generate white noise sequence (length of y) with this standard deviation, add to the original series y

4) Filter this ‘noised’ data using method A

5) Repeat 3 and 4 N-times and compute 95 % range.

I don’t have the exact data set, but I used http://www.pol.ac.uk/psmsl/pubi/rlr.annual.data/190091.rlrdata to obtain http://signals.auditblogs.com/files/2008/04/brest_sealevel.png . Still experimenting, and I hope I’m wrong.
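If I have understood the recipe, steps 1-5 amount to the following Python sketch. The function names are mine, and ‘method A’ here is just a centered moving average with simple edge reflection, standing in for whatever smoother was actually used.

```python
import numpy as np

def smooth(y, h=5):
    """Stand-in for 'method A': a (2h+1)-point centered moving average,
    with simple edge reflection so the output spans the whole series."""
    padded = np.concatenate([y[h:0:-1], y, y[-2:-2 - h:-1]])
    return np.convolve(padded, np.ones(2 * h + 1) / (2 * h + 1), mode="valid")

def mc_band(y, n_sim=1000, h=5, seed=0):
    rng = np.random.default_rng(seed)
    fit = smooth(y, h)                           # 1) smooth the original data
    sigma = np.std(y - fit)                      # 2) residual standard deviation
    sims = np.array([smooth(y + rng.normal(0.0, sigma, len(y)), h)
                     for _ in range(n_sim)])     # 3)-4) noise and re-smooth, N times
    lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)  # 5) 95% range
    return fit, lo, hi

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.0, 1.0, 80))          # a toy sea-level-like series
fit, lo, hi = mc_band(y)
```

If this is indeed how it was done, the band measures only the smoother’s response to white noise, not the uncertainty of the underlying trend, which could explain why it comes out narrower than the least-squares interval.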


Looking forward to that. There is a relevant new post above. Cheers

http://landshape.org/enm


Yes. The data I looked at don’t rule out continued warming, but they also don’t show warming. So, at best one could say that warming is not ruled out.

Also, evidently, the IPCC thinks it can make statements like this based on 6 or 7 years of data, but those criticizing the falsification insist that one can’t make statements like this with 7 years of data. But… well… there ya’ go!

http://rankexploits.com/musings


Re #10. The following is a slightly edited version of a post I have made to the ‘Real Climate Tries to Spin Pielke: A curious lesson 2’ thread on ‘The Blackboard’:

‘I share [the] view that a reply by Dr. Rahmstorf to the serious criticisms that have been made of Rahmstorf et al (2007) would be most welcome. This should have been a higher priority for him than that of posting his ‘Model-data-comparison, Lesson 2’ [for Roger Pielke, Jr] on RealClimate.

‘But all seven of the Rahmstorf et al co-authors have some responsibility in this matter, as do the six prestigious research institutions in five countries (Germany, France, Australia, the UK and the US) with which they are affiliated. And so too does ‘Science’, which published their paper.

‘Stefan Rahmstorf was a Lead Author of Chapter 6 of the 2007 WGI report, and other Rahmstorf7 co-authors were Coordinating Lead Authors, Lead Authors or expert reviewers of Chapters 1 (Somerville), 3 (Parker), and 5 (Cazenave and Church).

‘Yet another co-author, James Hansen, has recently written to Australia’s Prime Minister and all State Premiers, calling on them to exercise leadership ‘on a matter concerning coal-fired power plants and carbon dioxide emission rates in your country.’ Hansen explains that, because of the urgency of the matter, he has not collected signatures – but he offers the names of seven Australian scientists whom Mr. Rudd could consult.

‘Hansen also says that he had ‘read and commend[s] the Interim Report of Professor Ross Garnaut’, which has been submitted to the Australian Commonwealth, State and Territory Governments. According to this Report, ‘Comparisons between observed data and model predictions (sic) suggest that the climate system may be responding more quickly than climate models indicate’ (Rahmstorf et al, 2007); and, specifically, that ‘Global mean surface temperature increase since 1990 has been measured at 0.33 C, which is in the upper end of the range predicted (sic) by the IPCC in the Third Assessment Report in 2001, as shown in Figure 5 (Rahmstorf et al 2007).’

‘In making these claims, Professor Garnaut is relying on the paper which has been criticised in forthright terms in the above post and in several of the comments thereon.


#11. I will keep working at getting Rahmstorf to address why the trend line appears to be in the upper part of the IPCC projections rather than the middle, and have given him time to respond to emails. Certainly temperature rise has clearly moderated, and so the Rahmstorf7 should really clarify why they still think temperature is accelerating, or publicly retract their findings.

http://landshape.org/enm


Dear David,

thanks for this discussion of our paper. I have no mails from you in my inbox nor have I received any phone call. So I only learned just now through your Realclimate comment about your questions on our paper. I’d like to make two comments:

1. The smoothing algorithm we used is the SSA algorithm (c) by Aslak Grinsted (2004), distributed in the matlab file ssatrend.m. This has two alternatives for the boundary condition: (1) minimum roughness (which is what we used in our paper) and (2) minimum slope. This is described in the Moore et al. paper. If you have questions about the details of this algorithm please contact its authors. I think the confusion that arises is that you equate “minimum roughness” with “padding with reflected values”. Indeed such padding would mean that the trend line runs into the last point, which it does not in our graph, and hence you (wrongly) conclude that we did not use minimum roughness. The correct conclusion is that we did not use padding. Note that Moore et al. call their minimum roughness “a variation” on the minimum roughness criterion described by Mann (2004). This already makes clear it is not the same.

2. None of the conclusions of our paper depend on the use of this particular boundary condition at the end, which only affects the last five years of the trend line. As you can see, the temperatures 2002-2006 lie in the upper half of the IPCC range.

Best wishes, Stefan


Stefan (posted to realclimate.org)

Thank you for your reply in #124 and for clarifying the source of the methodology used in your paper. It is still not clear how the ‘minimum roughness criterion’ (MRC) was implemented. However, I will contact Grinsted as you suggest.

Re 1: Moore (2004) states in the text “Here, the series is padded so the local trend is preserved (cf. minimum roughness criterion [Mann, 2004])” pointing to Mann’s method. The word ‘variation’ appears only in the caption of Fig. 2 in reference (I thought) to a specific construction. Still, you have clarified that you did not use Mann 2004 data padding and the confusion arises from reading of Moore 2004.

Re 2: You state:

Doesn’t this statement contradict the fact that another boundary condition would not support your conclusions, as you acknowledge in point 1?

Such a trend line passes through and terminates near the center of the IPCC range, not in the upper half of the range.

The general point here is that there is additional uncertainty in the trend at the end points not only due to the shortage of data, but also due to the choice of methodology. These do not appear to have been accounted for in your paper.

3. Thank you for any errors you might care to identify on my website.

4. Emails were sent to the same address you gave in your comment on Thurs 3 Apr 2008 and Tues 8 Apr 2008. Neither has been returned.

Thanks again for the opportunity to clarify this issue.

http://landshape.org/enm


Where do people get the idea that you can compare projections of ensemble average quantities to data points without regard to the uncertainty in determining average properties from the data? Stefan rebaselined the temperatures, which on paper looks like “sliding” the temperatures all up or down.

The “slide” procedure requires someone to estimate the “true” mean value for the 1990 temperature from the data and pin to it. There should be an uncertainty associated with the determination of that mean underlying value. None is stated in the paper. But assuming 11-year centered means give similar-sized uncertainties, the data could all lie higher or lower depending on that uncertainty. So, all data lying high or low means nothing unless the uncertainty is reported and doesn’t explain the shift. The relative positions could just be an artifact of the re-baselining.

There are other uncertainty issues associated with the data.

http://rankexploits.com/musings


Lucia,

In a nutshell, there is additional uncertainty at both ends of the trend line that makes inferences about points lying above or below it unreliable. This question of comparing trends is treated much more appropriately by orthodox statistical tests evaluating the significance of slopes, as you and I have done (and we arrived at different conclusions to Rahmstorf7, BTW).

http://landshape.org/enm


Dear David, a response to your comment #15:

Since we use an 11-year embedding period, the uncertainty in the trend determination and the boundary condition only affect the last 5 years, agreed? (I double-checked this with the two boundary condition options of the ssatrend routine.) That is the years 2002, 2003, 2004, 2005 and 2006 in our paper. For 2001, I have a full 11-year embedding period, namely 1996-2006. Now the temperature values of 2002-2006 are all without exception in the upper half of the IPCC range, no matter whether you use the Hadley or the NASA data. So you have to choose a strange boundary condition to come up with a trend line that is not in the upper half here.

Your update post http://landshape.org/enm/rahmstorf-et-al-2007-update/ does show a graph with such a strange trend line, the one you label “SSA+MRC”. This line runs below all the data 2002-2006. It also deviates from your other lines well before 2002, at times which should not be affected by a boundary condition. You had better double-check what you did there.

I maintain that in our paper, the choice of boundary condition does not affect any of the conclusions. We might also just have left out the last 5 years of the trend line; it would have made no difference to our paper whatsoever.


Note the tactic employed by Rahmstorf. Instead of explaining what he did, he refers readers to a paper most people cannot access (and a Google search for ssatrend.m gives no hits). Please give the formula for how you pad the points at the end.

Then falsely claim it makes no difference. Of course it makes a difference, as pointed out by admin, and as recently seen in the HADCRUT manipulations.

Admin: Thanks for your comment. I just want to remind my rare readers that most are here to learn and understand statistics, so I would like to steer comments towards numerical issues. Your point that it makes a difference is valid.

Where can I find ssatrend.m? How did Moore vary Mann’s minimum roughness?

Mann04:

“Finally, to approximate the ‘minimum roughness’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary, and reflected vertically (i.e., about the ‘‘y’’ axis) relative to the final value.”
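The quoted recipe can be sketched directly: reflect the last `width` values about the time boundary and vertically about the final value, so each padded point is twice the final value minus the corresponding reflected point. This is one reading of the Mann 2004 passage, not a verified reproduction of his code.

```python
import numpy as np

def mrc_pad(y, width):
    """Pad the end of a series per the quoted 'minimum roughness'
    recipe: reflect about the time boundary, then vertically about
    the final value (a reading of Mann 2004, as quoted above)."""
    y = np.asarray(y, float)
    # y[-2], y[-3], ... reflected in time, then flipped about y[-1]
    pad = 2.0 * y[-1] - y[-2:-width - 2:-1]
    return np.concatenate([y, pad])

# A linear series simply continues linearly past the end:
print(mrc_pad(np.array([1., 2., 3., 4., 5.]), 3))
# → [1. 2. 3. 4. 5. 6. 7. 8.]
```

The construction makes the pinning behaviour obvious: the padded values are point reflections through the final observation, so any smoother run over the padded series is dragged toward that endpoint.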

#18 Stefan,

You state:

To test this I have shown three different methods with variation here. http://landshape.org/enm/examples-of-simple-smoothers/

The results are as follows:

1. Singular spectrum analysis (SSA, as used in your paper, Rahmstorf7), 11-year embedding period, comparing ‘minimum roughness criterion’ end padding with no padding. The trend lines flex about the 8th point, deviating at every other point in the trend line, particularly the last 7.

2. Smooth spline, 11df, end-padding from top and bottom of trend channel (95%CL of a single value). The trend lines flex about the 11th point.

3. Moving average with final point varied to top and bottom of trend channel (95%CL of a single value). The last point in the trend changes, but it stops 5 points short of the end, of course.

In general, the causal filters do not localize variations, while the acausal moving average does. Perhaps you were thinking of a moving average when you said it only affects the last five points?

Now I don’t know what happens in the Matlab implementation you used. Unfortunately, estimates of uncertainty are not included in Rahmstorf et al. (2007), which is the main source of my concern.

The green line in the first SSA figure in the post is the simple linear regression of the 34 years of data. This trend line is virtually the same as the middle of the IPCC trends, is not in the upper half, and is not ‘strange’.

The simple regression line is almost the same as the SSA with the MRC, and is an example where the choice of boundary conditions affects the conclusions of your paper. As a point of interest, what was the result of the other choice of boundary condition in Matlab that you mention? Did it shift the end of the trend line above or below the one you used?

You suggest that 5 points above is significant. However, you do not know unless you test it, and in series with high serial correlation, such runs are probably not unusual. That’s a kind of non-parametric runs test and I forget what it is called.
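The test being reached for here is presumably the non-parametric runs test (Wald–Wolfowitz). A quick Monte Carlo makes the serial-correlation point concrete: under an AR(1) null with a hypothetical coefficient of 0.5 (an assumed value, chosen only for illustration), runs of 5 consecutive points on one side of a trend line are not unusual at all.

```python
import numpy as np

def longest_run_above(x):
    """Length of the longest consecutive run of positive values."""
    best = cur = 0
    for v in x:
        cur = cur + 1 if v > 0 else 0
        best = max(best, cur)
    return best

def p_run_at_least(k, n, phi, trials=20000, seed=1):
    """Monte Carlo: chance that an AR(1) residual series of length n
    (coefficient phi, a hypothetical value) shows a run of >= k
    consecutive points above zero."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        if longest_run_above(x) >= k:
            hits += 1
    return hits / trials

# With moderate serial correlation, a 5-point run above the trend
# line in a 17-year window is quite probable under the null:
print(p_run_at_least(5, 17, phi=0.5))
```

Comparing `phi=0.5` against `phi=0.0` shows how much serial correlation inflates the chance of such runs, which is exactly why a visual count of points above a line is not a test.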

Stepping back from this analysis, the whole point of statistical testing is to ensure your conclusions are not overturned by subsequent data. The final figure in the post above shows a smooth spline of monthly temperatures from Hadley and GISS. The more recent data suggest that if an orthodox statistical test had been performed in 2006, Rahmstorf et al. (2007) would have reached a different conclusion, one borne out by the subsequent data: these are random fluctuations about a long-term trend.

Note: Simultaneously posted to RealClimate.org. Yet to appear.

http://landshape.org/enm

It’s clear looking at figure 1 from Rahmstorf that there was a decision to pin the T=0 point in 1990 from the IPCC projections to some numerical estimate for the “real underlying” temperature in 1990. Scare quotes intentional because, even if we all agree such a thing exists, the “real” temperature is not the actual GMST from 1990, but something obtained by averaging. It must be estimated based on other data.

Using any statistical method, there is always an uncertainty in the determination of the “real” 1990 value. David’s various tests using different analytical techniques show what has always been obvious: the uncertainty in the determination of the temperature for the “real” 1990 would result in different positioning of the IPCC projection on that graph. The conclusions could range from all temperatures falling below to all falling above the centerline of the projections.

http://rankexploits.com/musings
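The point about the 1990 baseline can be illustrated numerically: several defensible estimates of the “real” 1990 value (the raw annual value, a centred multi-year mean, a linear fit evaluated at 1990) differ by an amount that shifts where the whole projection envelope is pinned. The data below are synthetic, standing in for the actual anomaly series, and the three estimators are illustrative choices, not the ones used in any particular paper.

```python
import numpy as np

# Illustrative annual anomalies, 1981-1999 (synthetic, not GISS/Hadley)
rng = np.random.default_rng(2)
years = np.arange(1981, 2000)
temps = 0.015 * (years - 1981) + rng.normal(0, 0.1, years.size)

i90 = int(np.where(years == 1990)[0][0])

# Three defensible estimates of the "real" 1990 baseline:
raw = temps[i90]                          # the single 1990 value
window = temps[i90 - 2:i90 + 3].mean()    # 5-year centred mean
coef = np.polyfit(years, temps, 1)        # linear fit evaluated at 1990
fitted = np.polyval(coef, 1990)

estimates = {"raw 1990": raw, "5-yr mean": window, "linear fit": fitted}
spread = max(estimates.values()) - min(estimates.values())
print(estimates, f"spread = {spread:.3f} C")
```

When the spread between such baseline estimates is comparable to the width of the projection range, the choice of baseline alone can move the observations from the lower half to the upper half of the envelope.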

I have posted the following to the ‘Comment on the slide and eyeball method’ thread on Lucia’s blog ‘The Blackboard’:

‘In Rahmstorf et al (2007) it is stated that ‘The global mean surface temperature increase (land and ocean combined) in both the NASA GISS data set and the Hadley Centre/Climatic Research Unit dataset is 0.33 C for the sixteen years since 1990 …’

‘In the light of (a) the above analysis, (b) previous related posts on this blog, (c) David Stockwell’s posts at Niche Modeling and (d) comments by others at both blogs, including Stefan Rahmstorf’s replies to David, does any reader care to defend this unqualified statement by Rahmstorf and his co-authors?’

For information.

Still looking for ssatrend.m; I guess it is not freely available on the web?

http://signals.auditblogs.com/

Sorry, ssatrend.m MIA.

http://landshape.org/enm
