# Problem 1 of Climate Science 123

Problem 1. If temperature is adequately represented by a deterministic trend due to increasing GHGs, why be concerned with the presence of a unit root?

Rather than bloviate over the implications of a unit root (integrative behavior) in the global temperature series, a more productive approach is to formulate a hypothesis and test it.

A deterministic model of global temperature y and anthropogenic forcing g with random errors e is:

y_t = a + b·g_t + e_t

An autoregressive model of the change in temperature Δy_t uses a difference equation with a deterministic trend b·g_{t-1} and the previous value y_{t-1}:

Δy_t = b·g_{t-1} + c·y_{t-1} + e_t

Written this way, the presence of a unit root in the AR(1) series y is equivalent to the coefficient c equaling zero (see http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test).
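To see the c = 0 equivalence concretely, here is a small sketch (standard-library Python, simulated data, not the series analyzed below): for a simulated random walk, the no-intercept regression of Δy_t on y_{t-1} estimates c near zero.

```python
# Sketch: for a pure random walk, the Dickey-Fuller style regression
# Δy_t = c·y_{t-1} + e_t should estimate c close to zero.
import random

random.seed(42)
n = 2000
y = [0.0]
for _ in range(n):
    y.append(y[-1] + random.gauss(0, 1))   # unit-root (random walk) series

dy = [y[t] - y[t - 1] for t in range(1, len(y))]
ylag = y[:-1]

# OLS slope with no intercept: c = sum(x*d) / sum(x*x)
c = sum(x * d for x, d in zip(ylag, dy)) / sum(x * x for x in ylag)
print(round(c, 3))   # small: the unit root corresponds to c = 0
```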

I suspect the controversy can be reduced to two simple hypotheses:

H0: The coefficient b is zero (no deterministic contribution from g).
Ha: The coefficient b is different from zero.

The size of the coefficient should be indicative of the contribution of the deterministic trend (in this case anthropogenic warming) to the global temperature.

We transform the global temperature by differencing (into an autoregressive, or AR, coordinate system) and then fit the model by ordinary least squares, just as we would any other model.
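The whole procedure can be sketched in a few lines. This is a hedged, pure-Python illustration with simulated data (the forcing shape, noise level, and persistence are invented for the demo, not taken from CRU or the RadF.txt forcings): fit the relationship first in levels, then in the AR (differenced) coordinates.

```python
# Sketch with simulated data: a trend plus persistent noise, fitted in both
# coordinate systems described above.
import random

random.seed(1)
n = 120
g = [0.01 * t * t / n for t in range(n)]   # smooth, accelerating forcing proxy
e, y = 0.0, []
for t in range(n):
    e = 0.95 * e + random.gauss(0, 0.04)   # persistent (near-unit-root) noise
    y.append(-0.34 + 0.32 * g[t] + e)

# Deterministic coordinates: y_t = a + b*g_t (simple OLS with intercept)
mg, my = sum(g) / n, sum(y) / n
b_lev = sum((gi - mg) * (yi - my) for gi, yi in zip(g, y)) / \
        sum((gi - mg) ** 2 for gi in g)

# AR coordinates: Δy_t = b*g_{t-1} + c*y_{t-1}, no intercept
# (solved directly from the 2x2 normal equations)
d = [y[t] - y[t - 1] for t in range(1, n)]
yl, gl = y[:-1], g[:-1]
syy = sum(v * v for v in yl)
sgg = sum(v * v for v in gl)
syg = sum(u * v for u, v in zip(yl, gl))
syd = sum(u * v for u, v in zip(yl, d))
sgd = sum(u * v for u, v in zip(gl, d))
det = syy * sgg - syg * syg
c_ar = (sgg * syd - syg * sgd) / det
b_ar = (syy * sgd - syg * syd) / det
print(b_lev, c_ar, b_ar)
```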

In the deterministic coordinate system, b is highly significant, with a strong contribution from AGW. For the AGW forcing I use the sum of the anthropogenic forcings in the RadF.txt file: W-M_GHGs, O3, StratH2O, LandUse, and AIE.

```
Call: lm(formula = y ~ g)
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.34054    0.01521  -22.39   <2e-16 ***
g            0.31573    0.01802   17.52   <2e-16 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1251 on 121 degrees of freedom
Multiple R-squared: 0.7172, Adjusted R-squared: 0.7149
F-statistic: 306.9 on 1 and 121 DF, p-value: < 2.2e-16
```

The result is very different in the AR coordinate system. The coefficient of y is not significantly different from zero (at the 95% level), and neither is b.

```
Call: lm(formula = d ~ y + g + 0)
Coefficients:
   Estimate Std. Error t value Pr(>|t|)
y  -0.06261   0.03234  -1.936   0.0552 .
g   0.01439   0.01088   1.322   0.1887

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.101 on 121 degrees of freedom
Multiple R-squared: 0.0389, Adjusted R-squared: 0.02302
F-statistic: 2.449 on 2 and 121 DF, p-value: 0.09066
```

Perhaps the main contribution of AGW comes after 1960, so we restrict the data to this period and examine the effect. The deterministic trend coefficient is larger, but still not significant at the 95% level.

```
> Prob1(window(CRU, start=1960), GHG)
Call: lm(formula = d ~ y + g + 0)
Coefficients:
   Estimate Std. Error t value Pr(>|t|)
y  -0.24378   0.10652  -2.289   0.0273 *
g   0.03050   0.01512   2.017   0.0503 .

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1149 on 41 degrees of freedom
Multiple R-squared: 0.1284, Adjusted R-squared: 0.08591
F-statistic: 3.021 on 2 and 41 DF, p-value: 0.05974
```

But what happens when we use another data set? Below is the result using GISS. The coefficients are significant, but the effect is still small.

```
> Prob1(GISS, GHG)
Call: lm(formula = d ~ y + g + 0)
Coefficients:
   Estimate Std. Error t value Pr(>|t|)
y  -0.27142   0.06334  -4.285 3.69e-05 ***
g   0.06403   0.01895   3.379  0.00098 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1405 on 121 degrees of freedom
Multiple R-squared: 0.1375, Adjusted R-squared: 0.1232
F-statistic: 9.645 on 2 and 121 DF, p-value: 0.0001298
```

So why be concerned with the presence of a unit root? It has been argued that while the presence of a unit root indicates that using OLS regression is wrong, this does not contradict AGW, because the effect of greenhouse gas forcings can still be incorporated as deterministic trends.

I am not 100% sure of this, as differencing removes most of the deterministic trend that could potentially be explained by g.
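A one-line demonstration of why: the first difference of a linear deterministic trend is just a constant, so after differencing the trend can no longer contribute to the fit beyond a drift term (toy numbers, for illustration only).

```python
# The first difference of a linear trend a + b·t collapses to the constant b.
trend = [0.5 + 0.02 * t for t in range(10)]
diffs = [trend[t] - trend[t - 1] for t in range(1, 10)]
print(diffs)   # every element is (approximately) the slope, 0.02
```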

If the above is true, there is a problem. When the analysis respects the unit root in the real data, the deterministic trend due to increasing GHGs is so small that the null hypothesis is not rejected; i.e., the large contribution of anthropogenic global warming suggested by a simple OLS regression is a spurious result.

Here is my code. `orient` is a function that aligns two time series to the same start and end dates.

```r
Prob1 <- function(y, g) {
  v <- orient(list(y, g))        # align the two series to common dates
  n <- dim(v)[1]
  d <- diff(v[, 1])              # first differences of temperature
  y <- v[1:(n - 1), 1]           # lagged temperature
  g <- v[1:(n - 1), 2]           # lagged forcing
  l <- lm(d ~ y + g + 0)         # Dickey-Fuller style regression, no intercept
  print(summary(l))
  plot(y, type = "l")
  lines(g * l$coef[2] + y[1], col = "blue")
}
```

• Frank_White

I wonder if this post has relevance for reanalysis of the data in Influence of the Southern Oscillation on Tropospheric Temperature by McLean et al., which used the SOI (Southern Oscillation Index) and GTTA (global tropospheric temperature anomaly).

The Comment by Foster et al. reiterated the IPCC position formulated by the authors of the Comment, who quote themselves and Ben Santer in the introduction.

In my opinion, rather than claiming they had been censored, McLean et al should have retracted the paper and attempted to reanalyze the data. We still don’t know what the result would be if the statistical methods did not have the defects described in the Comment and we know from experience that to have confidence, we have to check all the calculations performed by the authors of the Comment.

Both the original paper and the Comment are available on the internet.

http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/InfluenceSoOscillation.pdf

http://www.jamstec.go.jp/frsgc/research/d5/jdannan/comment_on_mclean.pdf

• http://www.ecoengineers.com/ Steve Short

Solar energy as modeled over the last three centuries contains patterns that match the full 160-year instrument record of Earth’s surface temperature. Earth’s surface temperature throughout the modern record is given by Equation (1):

T(t) = m134·S134(t−τ) + m46·S46(t−τ) + b    (1)

where Sn is the increase in Total Solar Irradiance (TSI) measured as the running percentage rise in the trend at every instance in time, t, for the previous n years. The parameters are best fits with the values m134=18.33ºC/%, m46=-3.68ºC/%, b=13.57(-0.43)ºC, and τ=6 years. The value of b in parenthesis gives T(t) as a temperature anomaly. One standard deviation of the error between the equation and the HadCRUT3 data is 0.11ºC (about one ordinate interval). Values for a good approximation (σ=0.13ºC) with a single solar running trend are m134=17.50ºC/%, m46=0, b=13.55(-0.45)ºC, and τ=10 years.
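For what it is worth, the running-trend construction described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: the TSI series is a toy ramp, and S_n is read as the percentage rise over the previous n years, per the description above; the parameter values are the anomaly-form fits quoted in the text.

```python
# Sketch of Equation (1) with a toy TSI series (the real model uses the
# Wang et al. 2005 reconstruction, which is not reproduced here).
def running_rise(tsi, n):
    """Percentage rise over the previous n samples (None until enough history)."""
    return [None if t < n else 100.0 * (tsi[t] - tsi[t - n]) / tsi[t - n]
            for t in range(len(tsi))]

tsi = [1360.0 + 0.002 * t for t in range(200)]    # toy, slowly rising TSI
m134, m46, b, tau = 18.33, -3.68, -0.43, 6        # anomaly form of Eq. (1)
s134 = running_rise(tsi, 134)
s46 = running_rise(tsi, 46)

T = [m134 * s134[t - tau] + m46 * s46[t - tau] + b
     for t in range(len(tsi))
     if t >= tau and s134[t - tau] is not None]
print(len(T), round(T[0], 3))
```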

Global average surface temperature with solar formula overlay.

The figure is IPCC’s AR4 Figure 3.6 from HadCRUT3, with Earth’s surface temperature from Equation (1) added in berry color. The new temperature model is a linear combination of two variables. The variables are causal, running trend lines from the solar model of Wang, et al. (2005). IPCC’s blue curve is the temperature smoothed by a backward and forward symmetric, non-causal filter.

FIGURE 1

All data for this model are primary data preferred by IPCC in its Reports for solar radiation and for Earth’s surface temperature. The solar running trends are elementary, backward-looking (realizable) mathematical trend lines as used by IPCC for the current year temperature, but computed every year for the Sun.

Any variations in the solar radiation model sufficient to affect the short term variability of Earth’s climate must be selected and amplified by Earthly processes. This model hypothesizes that cloud albedo produces broadband amplification, using established physical processes. The hypothesis is that while cloud albedo is a powerful, negative feedback to warming in the longer term, it creates a short term, positive feedback to TSI that enables its variations to imprint solar insolation at the surface. A calculation of the linear fit of surface temperature to suitably filtered solar radiation shows the level of amplification necessary to support the model, and isolates the short term positive feedback from the long term negative cloud albedo feedback.

This model hypothesizes that the natural responses of Earth to solar radiation produce a selecting mechanism. The model exploits evidence that the ocean dominates Earth’s surface temperature, as it does the atmospheric CO2 concentration, through a set of delays in the accumulation and release of heat caused by three dimensional ocean currents. The ocean thus behaves like a tapped delay line, a well-known filtering device found in other fields, such as electronics and acoustics, to amplify or suppress source variations at certain intervals on the scale of decades to centuries. A search with running trend lines, which are first-order, finite-time filters, produced a family of representations of TSI as might be favored by Earth’s natural responses. One of these, the 134-year running trend line, bore a strong resemblance to the complete record of instrumented surface temperature, the signal called S134.
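The tapped-delay-line analogy is easy to make concrete. A generic sketch (illustrative taps and a toy sinusoid, nothing from the paper): taps spaced one full period apart reinforce a variation, while taps half a period apart suppress it.

```python
# Generic tapped delay line: output y[t] = sum of gain * x[t - delay] over taps.
import math

def tapped_delay_line(x, taps):
    """taps: list of (delay, gain) pairs."""
    return [sum(g * x[t - d] for d, g in taps if t >= d) for t in range(len(x))]

x = [math.sin(2 * math.pi * t / 8) for t in range(64)]   # period-8 input
reinforce = tapped_delay_line(x, [(0, 0.5), (8, 0.5)])   # delay = one period
cancel = tapped_delay_line(x, [(0, 0.5), (4, 0.5)])      # delay = half period

# After the start-up transient, in-phase taps reproduce the input and
# anti-phase taps cancel it.
print(round(max(abs(v) for v in reinforce[8:]), 3),
      round(max(abs(v) for v in cancel[4:]), 3))
```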

Because the fingerprint of solar radiation appears on Earth’s surface temperature, that temperature cannot reasonably bear the fingerprint of human activity. IPCC claims that human fingerprint exists by several methods. These include its hockey stick pattern, in which temperature and gas concentrations behave benignly until the onset of the industrial revolution or later, and rise in concert. IPCC claims include that the pattern of atmospheric oxygen depletion corresponds to the burning of fossil fuels in air, and that the pattern of isotopic lightening in atmospheric CO2 corresponds to the increase in CO2 attributed to human activities. This paper shows that each of IPCC’s alleged imprints due to human activities is in error.

The extremely good and simple match of filtered TSI to Earth’s complex temperature record tends to validate the model. The cause of global warming is in hand. Conversely, the fact that Earth’s temperature pattern appears in solar radiation invalidates Anthropogenic Global Warming (AGW).

http://www.rocketscientistsjournal.com/2010/03/sgw.html

• davids99us

A lot of interesting stuff in that paper. Thanks

• davids99us

Has anyone ‘neutral’ gone through it? My impression was that the objection was semantic, in wording that allowed it to be misinterpreted.

But it is relevant, in that the differencing reduces linear trends to a constant, so by doing it you exclude the linear trends as contributors to correlation.

Now, causally related variables should still correlate with the linear trend removed, i.e. the bumps should match. If not, then either the data are too few or the correlation was spurious.
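That check is simple to carry out. A sketch with simulated series (the trends, shared "bumps", and noise levels are invented for the demo): detrend both series and correlate the residuals.

```python
# Remove the linear trend from each series and correlate the residuals;
# a genuinely shared signal should survive detrending.
import math, random

random.seed(7)
n = 100
bumps = [math.sin(t / 5.0) for t in range(n)]              # shared signal
a = [0.03 * t + bumps[t] + random.gauss(0, 0.1) for t in range(n)]
b = [-0.02 * t + 0.8 * bumps[t] + random.gauss(0, 0.1) for t in range(n)]

def detrend(x):
    t = list(range(len(x)))
    mt, mx = sum(t) / len(x), sum(x) / len(x)
    slope = sum((ti - mt) * (xi - mx) for ti, xi in zip(t, x)) / \
            sum((ti - mt) ** 2 for ti in t)
    return [xi - (mx + slope * (ti - mt)) for ti, xi in zip(t, x)]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    syy = sum((v - my) ** 2 for v in y)
    return sxy / math.sqrt(sxx * syy)

print(round(corr(detrend(a), detrend(b)), 2))   # strong residual correlation
```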

• Frank_White

Re Southern Oscillation on tropospheric temperature by McLean et al (Reply to David).

I am getting set up now to reanalyse using both the same and a different approach, for what that is worth.

I agree the main problem may be semantic. Specifically, some of the analytical issues appear to be resolved in the “Discussion” section, but I did not realize that until I went back and read the Discussion after reading McLean’s comment in the Drum, “Our use of derivative data ended when the time lag was established” (http://www.abc.net.au/unleashed/stories/s2861936.htm).

Even so, the methodology was not clear and, since the AGU refused to print the authors’ response, we don’t know if it would have clarified the methodology or if it was just a repetition of the content of the paper, as the AGU’s defenders claim.

McLean has refused to post the response on his web site and has instead complained that the refusal to publish is yet another example of the corruption of the peer-review process.

http://www.quadrant.org.au/blogs/doomed-planet/
http://icecap.us/index.php/go/joes-blog/censors…

The Icecap blog refers to an appendix with the Response, but there is no appendix on the blog.

• geoffsherrington

David, This is marginal to the questions you posed, but given here to illustrate the complexity of extracting signals from noise.

Graph 1 shows a month-by-month difference: year 1997 (normal) subtracted from 1998 (hot). The site is Meekatharra, one of a number I’ve looked at. Notice the 4 lobes.

http://i260.photobucket.com/albums/ii14/sherro_2008/MeekaJ.jpg?t=1270555441

Graph 2 takes UAH lower tropo data in the 2008 version and does much the same. The colours are hard to tell apart among the spaghetti, but the big amplitude peaks are tropical tropo over ocean. Again, there is a suggestion of 4 quarterly patterns or peaks, with the Antarctic being so poorly sampled that it just drifts.

http://i260.photobucket.com/albums/ii14/sherro_2008/uahocean.jpg?t=1270555516

Does this say anything important about differencing? I wonder what would happen on a daily basis. Might be a good way to detect adjustments.

Finally, graph 3 gives global UAH bands over land. The big swinger is the south polar land belt, probably too little data.

http://i260.photobucket.com/albums/ii14/sherro_2008/uahland-1.jpg?t=1270555576

Please be aware of the dangers of using just two adjacent years. But please be entertained by the cyclicity.

While the McLean paper is having a rough time, it at least seems to point to a mechanism for temperature change that goes beyond the usual “it was an El Nino year”. I hope it gets a good discussion.

• cohenite

One of the things I take from the McLean paper is that the uncontroversial 7-month lag for the effect of the principal natural variable means that it is unlikely that there is any natural pipeline effect whereby future heating is delayed. But this is a basic part of AGW, with the notion of Equilibrium Sensitivity of CO2 heating. The Lui paper confirms that there is a substantial ES consistent with AGW, whereby the impact of CO2 increases on temperature is delayed;

http://mpra.ub.uni-muenchen.de/9939/1/MPRA_paper_9939.pdf

This contradicts the cointegration analysis of B&R but is consistent with Kaufmann, as VS’s comment quoted here shows;

http://landshape.org/enm/polynomial-cointegration-rebuts-agw/#more-3771

The debate at Bart’s has now started to consider the difference between statistical analysis of data and the physical reasons for any trends (or lack thereof) in that data, and between data and physical factors such as increasing CO2 and SOI/PDO etc. In respect of increasing CO2 and temperature, it would seem at first instance that any cointegration analysis of CO2/temperature must relate to Beer-Lambert and the exponential decline of CO2 heating from incremental increases; but is the B-L effect sufficient to explain away the ES aspect of AGW heating based on CO2 increases and support the B&R side of the argument?
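The "declining increments" point can be made concrete with the standard simplified logarithmic forcing expression ΔF = 5.35·ln(C/C0) W/m² (Myhre et al. 1998), itself a fit to line-by-line radiative calculations; the CO2 concentrations below are illustrative, not a scenario.

```python
# Each successive 50 ppm of CO2 adds less forcing under the logarithmic
# approximation ΔF = 5.35 * ln(C/C0), with C0 the pre-industrial level.
import math

C0 = 280.0   # pre-industrial CO2, ppm
for C in (330, 380, 430, 480):
    dF = 5.35 * math.log(C / C0)
    print(C, round(dF, 2))   # forcing in W/m², rising by shrinking steps
```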

• sherro

Cohenite,

One of the reasons for the post that I put in just before yours is that I can’t see the 7-month lag (and this caveat is important) from the type of analysis I have done. That is, there does not seem to be a recognisable time taken from whatever alters temperature first somewhere in the world to wherever that shock ends its life. It’s complicated, of course, by different mechanisms over land and water. Have you seen any paper where the progress of a pulse like the 1998 high can be tracked over the globe at a measurable rate?

I’m not doubting what you wrote, I’m more seeking an explanation for the absence of pointers I expected I might see.

Re Beer-Lambert, studied a long time ago: the effect is not a one-off. If you have a container of CO2 and you shine light through it, the intensity of light drops off at various measurable rates at nominated wavelengths. Loss of light equates to generation of heat. If, after you have turned the light off, equilibrium is restored, you can repeat the exercise and get more heating. If you then move from discrete episodes to merged processes, you can reach the stage where the ability of the CO2 to absorb light is limited at a plateau and the apparatus becomes essentially transparent to more light.

It is possible to use exponential equations if some boundary conditions are defined and not exceeded, if there is one process only operating (like the absence of other absorbing gases) and if you do not reach saturation. But then, the exponential curve is not a single curve; you are looking at different smaller regions of a single curve each time you change the input light intensity. Just because you can construct a single exponential Beer-Lambert case in the lab, it does not follow that it is directly applicable to the natural system, where light levels and wavelengths and light directions are altering all the time. (I do not think it is like devolving a complex mathematical curve into a number of sine curves.)

Even then I doubt that it is a true exponential in nature, because that implies it can increase infinitely, whereas in nature another effect would be more likely to make it plateau long before its infinite limit. It follows that I am uncertain that it is correct to say that a doubling of CO2 will produce a heating that is capable of calculation, because of the difficulty in discerning on which part of the B-L curve you sit.

I worked for a while with CO2 lasers so not all of the above is conjecture; but natural systems can be complex enough to defeat workshop experience.
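The plateau described above falls straight out of the Beer-Lambert law I = I0·exp(−k·c·L). A sketch with illustrative values (the absorption coefficient, concentration, and path length are invented numbers, not real gas-cell measurements):

```python
# Beer-Lambert: the absorbed fraction 1 - exp(-k*c*L) saturates as the
# concentration c rises, so successive increments absorb less and less.
import math

I0, k, L = 1.0, 0.8, 1.0   # illustrative intensity, coefficient, path length
for c in (0.5, 1, 2, 4, 8, 16):
    absorbed = I0 * (1 - math.exp(-k * c * L))
    print(c, round(absorbed, 3))   # climbs toward a plateau near 1.0
```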

• cohenite

Sherro, with lags, or lack thereof, do you mean this:

http://www.woodfortrees.org/plot/hadcrut3vgl/fr…

Of course David’s Break paper looks at this idea of sudden responses;

http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.1650v3.pdf

A paper which looks in detail at the idea of lags from major climatic events is this Trenberth paper;

http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf

In respect of Beer-Lambert when you say this:

“Loss of light equates to generation of heat. If, after you have turned the light off and restored equilibrium, you can repeat the exercise and get more heating”

I presume it would be the case that the second dose of heating does not build on the first but takes place from the restored equilibrium of the first light absorption; that is heating from a level of CO2 produces a non-lagged response which only continues {per B&R} as long as the CO2 increases; if CO2 is stabilised then, since the temperature is not related to the final level of CO2, temperature would revert to the pre-CO2 increase level [?]

• sherro

Cohenite, thanks for the refs.

The first one from woodfortrees shows the 1998 peak as nearly coincidental in the 3 regions cited. It does not allow deduction of where the hot spot started. I relate it to the ref I gave from UAH at
http://i260.photobucket.com/albums/ii14/sherro_2008/uahocean.jpg?t=1270555516
where there does not seem to be a sequence of a tropical band 1998 hotspot followed by a sub tropical peak a while later followed by a temperate peak a while later again. There does not seem to be any orderly progression.

The David & Anthony paper gives a math procedure for confirming break points and notes one in 1998 for precipitation. So we can be confident that some anomalies are recognizable.

The Trenberth paper is along the lines I’m thinking but it stops at 1998. It indicated that the movement of the anomaly is most strongly in the air and I take this to mean that it’s valid to link it to the UAH graph above.

These papers are all adding to my understanding but I’m not there yet.

Now to Beer-Lambert. Your key question is: “if CO2 is stabilised then, since the temperature is not related to the final level of CO2, temperature would revert to the pre-CO2 increase level [?]”. My answer is that I have not done the experiment. In the first part of my explanation I carefully clarified that the second try happened after equilibrium had been restored. While I have measured light absorption many times, I have not measured the temperature change. So my qualified answer raises more questions.

The addition of further CO2 above the pre-release level as you describe would produce additional heat only for so long as the heat remains coupled to the system under study. If you lose the heat, you regain the capacity to heat up again with either more light or more CO2. If you do not lose the heat, you rise to a plateau temperature. But, most importantly, you have to assume that the incoming light spectrum and intensity remained constant or could be modelled while you looked at the temperature/CO2 relation. (Day and night upset that, happening faster than the thermal loss.)

I see it as yet another case of some authors having difficulty translating a quasi-stationary lab experiment into a dynamic natural system. But then, I’m often wrong.

• cohenite

Sherro; you say:

“I see it as yet another case of some authors having difficulty translating a quasi-stationary lab experiment into a dynamic natural system.”

And truer words were never spoken; this has been the issue ever since Arrhenius garbled his lab experiment way back when. Steve Short gives an interesting exposition of Beer's law here [his comment is the one 2 weeks ago]: http://landshape.org/enm/orders-of-integration/…

My limited understanding of Beer's law and radiative “heat-trapping” by additional CO2 is that natural processes are always going to confound the lab/theoretical CO2 absorption process; for instance, convectional transfer of heat based on the state change of water is much quicker than heat transfer by diffusion and arguably negates such transfer completely until the rising air reaches what Douglass and Christy call the CEL:

http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf

Notwithstanding this, Douglass and Christy still give a small non-feedback temperature forcing to ^CO2.

The point is that something restricts ^CO2 heating; and Beer's law seems to have some major issues as the agent of such restriction, not the least of which is that it fails to distinguish electromagnetically between absorption and emission. As Steve notes, the IPCC and AGW spruikers generally seem to have taken Beer's law for granted; and a mistaken view of it at that.

• http://devoidofnulls.wordpres.com/ Andrew

The Characteristic Emission Level as a term does not originate with Douglass and Christy, in the paper itself they are referring to a paper by Lindzen. I’m not sure he invented it either.

• Anonymous

Cohenite, It’s tied up with open and closed systems. Easy in lab where you can wait for everything to cool overnight to aircon temp and run the test again. In the atmosphere, you can’t make an assumption like that because the subject of the question that you seek to answer is, what is the heat balance pattern with time?

Philosophically, near my final analysis, the temperature effect of adding more CO2 to the air is intimately related to how quickly any generated heat is dissipated, with various plausible scenarios allowing the temperature to move either not at all, up or down. This is another way to express what you wrote, …. “for instance convectional transfer of heat based on the state change of water is much quicker than heat transfer by diffusion and arguably negates such transfer completely until the rising air reaches what Douglass and Christy call the CEL:”

• http://www.ecoengineers.com/ Steve Short

May be considered a bit OT here (if so my apology) but Jeffrey Glassman has just posted up a highly significant extended analysis of some issues I have been exploring with him off and on for quite some time:

http://www.rocketscientistsjournal.com/2010/03/sgw.html

The AGW cabal’s attempts to understand the global heat balance are shown to be deeply flawed with respect to TSI and Bond albedo simply by their own ‘evidences’. To use one of my favourite Shakespearian phrases – ‘hoist with their own petard’.

As the great Global Warming Crisis bandwagon rumbles on towards – the abyss (?) or more likely a long drawn out saga of weasely squealing, such nit picking will simply be ignored as usual.

• Anonymous

Steve, I liked the part where the oscillator equation is derived, and after I go through it I want to do a post on it.

• http://www.ecoengineers.com/ Steve Short

There is a rather sweet irony in Glassman’s SGW paper for me.

J. Roger Bray was an early mentor (and close friend) of mine 1971 – 1973 after I left uni for the first time (with an MS in physical chemistry). I credit Roger with showing me there were far more fascinating things in life than acid, pot and heavy rock music.

A highly intelligent Canadian ecologist who migrated to my home town in New Zealand, Roger was a true Renaissance man who also published quite a few papers between about 1970 and 1985 on such things as solar cycles in the montane glacial record, e.g.

http://www.sciencemag.org/cgi/content/abstract/171/3977/1242

Formerly with the NZ Department of Scientific and Industrial Research, Roger had traced the variations in the Sun's activity since 527 BC by connecting increased production of radiocarbon (in tree rings) by cosmic rays to other recorded symptoms of feeble solar magnetic activity.

Roger linked low solar activity and high cosmic rays with historically recorded advances of glaciers, pushing their cold snouts down many valleys. The advances were most numerous in the 17th and 18th centuries, which straddled the coldest period of the Little Ice Age.

I can well recall Roger saying to me that he thought there were solar cycles apparent in all sorts of published proxy records, including European grape and olive harvest seasons going back to the pre-Roman era, of the order of ~2500, ~140 and ~45 years.

I did my post-doc at Uni. Bern so I am also somewhat familiar with the fascinating relics record from the Schnidejoch Pass above Thun in Canton Bern (which controlled northerly access to/from the Rhone Valley).

• Anonymous

Irony is sweet. I prick up when I see a 140/45 ~ 3 ratio, which has been found in a couple of papers to do with oceanographic periods, including ENSO/Annual and another ratio recently.

Before I get accused of numerology, there is a good reason for this x3 period ratio, as it is a ‘signature’ of a sub-harmonic response. The reason is obvious – a x2 period would cancel, but in a x3 period the peaks of the wavelengths reinforce. That is, the smaller 45 year period could ‘drive’ the larger 140 year subharmonic. Another way that amplification can take place.
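
A toy check of that ×2-cancels / ×3-reinforces argument with two equal-amplitude cosines (purely illustrative; not a model of the solar series):

```python
import math

def combined(t, n):
    """Equal-amplitude driver wave (period 1) plus subharmonic (period n)."""
    return math.cos(2 * math.pi * t) + math.cos(2 * math.pi * t / n)

# At a subharmonic trough (t = n/2) the driver contributes cos(pi*n) = (-1)^n.
# n = 3: the driver is also at a trough, so the extremes reinforce (sum ~ -2).
# n = 2: the driver is at a peak, so it cancels the trough (sum ~ 0).
assert abs(combined(1.5, 3) + 2.0) < 1e-9
assert abs(combined(1.0, 2)) < 1e-9
```

With an odd period ratio the driver lands in phase at every subharmonic extreme, which is one way to see how the shorter ~45-year period could 'drive' the longer one.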

• http://www.ecoengineers.com/ Steve Short

Given that Jeffrey Glassman appears to have found a TSI signal at 134 and 46 years (ratio = 2.91), one (S134) giving a negative feedback on warming, the other (S46) a positive feedback on warming, how would that work from a reinforcing harmonic?

BTW – have a look at Jeff and my latest exchange on his Rocket Scientists Journal blog. Jeff has identified the essential weirdness in the Trenberth, Fasullo and Kiehl 2009 (and others) view of the CERES studies (not to mention the earlier K&T97 view of the ERBE studies).

No wonder Ann Henderson-Sellers scurried off the ship!

• Anonymous

One of the papers is here: http://arxiv.org/abs/1002.1024, Subharmonic resonance of global climate to solar forcing.

It shouldn’t matter whether the response is + or – but the phase is important. I wouldn’t want to second-guess nonlinear systems without doing the maths anyway.

The Glassman analysis of the hockeystick is interesting too. More support for my AIG paper that cherry-picking random data is sufficient to produce a hockey-stick. SteveMc cited it, but it's probably the only citation I'll get. I haven't seen anyone publish any analysis of the degree of bias in proxy selection.

I will check out the Rocket Scientists!

• Anonymous

I have been on vacation, and on my way back. I hope to work on the problems some more. There is a single answer to all of them, but I leave it as an exercise for readers.

Since we haven't yet engaged the topic a great deal, I post some correspondence with VS below to stimulate the discussion.

I liked VS’s exit on BV’s blog. He really handles bloggers well! My guess is we will be hearing more from him in the future.

I want you to say whether, in light of this entire discussion, you feel that this WG1 figure (together with the confidence intervals listed in the legend) is deceiving/non-science, or not.

The analysis I did above is simply Dickey–Fuller analysis, based on the differenced series, so I don't see why any of the issues of the undifferenced series would be operating.

It's just a little different in that I am looking at the possible magnitude of a deterministic GHG trend in the presence of an integrating series. It suggests that when we assume a unit root (no tests of unit root here) there is very little else for a deterministic trend to explain.

The result of Breusch & Vahid is understandable, as are those of other authors who tested whether the temperature increase was significant in the presence of a unit root or similar integrating series. It's consistent that temperature variation is not ENTIRELY explained by a series with a unit root (at the 95% CL) and still the effect of a deterministic trend such as GHGs is very small. The 95% confidence does not mean that the contribution of GHGs is large; it only means it is significantly 'detectable'. If the effect was large, such as 'most of the temperature increase since 1960' as the IPCC claim, then the significance level would be very high, in the order of >99%, I suspect.
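
The regression being discussed, Δy_t = b·g_{t-1} + c·y_{t-1}, can be sketched on synthetic data. This is a Python illustration (the head post uses R), with a pure random walk and a made-up forcing ramp, not the actual temperature/forcing series fitted above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 123                                       # same sample size as the head regression
g = np.linspace(0.0, 1.5, n)                  # hypothetical forcing ramp (illustrative)
y = np.cumsum(0.1 * rng.standard_normal(n))   # pure random walk: unit root, no real trend

# Regress dy_t on g_{t-1} and y_{t-1}; a unit root corresponds to c = 0,
# and H0 above is that b is not significantly different from zero.
dy = np.diff(y)
X = np.column_stack([g[:-1], y[:-1]])
(b_hat, c_hat), *_ = np.linalg.lstsq(X, dy, rcond=None)
```

Because the synthetic y contains no deterministic component, b_hat here estimates pure noise; running the same two-column regression on the real differenced temperature against the summed anthropogenic forcings is what the hypothesis test above amounts to.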

• cohenite

Happy holiday, David; Bart at the VS thread didn't want to comment too much on this issue of whether Fig 1 is fraudulent, although he did say this at April 8, 15:50:

“No, I don’t think that wg1 figure is deceiving. The trendlines help to see the temperature increase for different time intervals, that are different enough from each other so as to be climatically relevant. One could argue whether they are a necessary addition: The bare data show the increase on temp quite clearly all by itself. If you take issue with the fact that they are linear fits, then you could just connect the begin- and endpoint (which you -surprisingly- seem to find a superior method of estimating the change in temperature). The lines from the longer periods would be a little steeper, but the basic message of the figure would not be different. Idem with omitting them altogether.”

“How are arbitrary 150, 100, 50 and 25 year periods climatically relevant; what physical forcings are they consistent with? Wouldn't climatically relevant periods look like this:

That is, the periods are consistent with PDO phase shift.”

A number of other graphs showing similarity between the start and end of the 20thC temperature record were forthcoming, including this alteration of Bart's temperature graph which started the thread:

http://cryp.dontexist.org/imags/bart2.jpg

http://farm3.static.flickr.com/2702/4503452885_79b5c09c4f_o.jpg

Of course Monckton was ripping Figure 1 to shreds during his Australian tour, and his rebuttal graph is not bad either.

• Anonymous

How to objectively compare WG1 and cycles?

I think you could just ask which makes the best prediction. WG1 predicts an exponentially increasing temperature, which should now be over 0.2C/decade. Cycles predict flat to falling temperatures for the next 30 years. Stochastic drift predicts either. Which is true? Exponentially increasing temperatures — a gently oscillating upward trend — or a stochastic drift?
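
The three candidates can be made concrete with stylized series (all parameter values here are illustrative choices of mine, not fitted to any dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(0.0, 30.0, 1.0 / 12.0)      # 30 years ahead, monthly steps

# Stylized versions of the three candidate processes:
agw    = 0.02 * years * np.exp(0.02 * years)               # accelerating AGW trend
cycles = 0.1 * np.sin(2 * np.pi * (years + 15.0) / 60.0)   # falling phase of a ~60-yr cycle
drift  = np.cumsum(0.02 * rng.standard_normal(years.size)) # stochastic drift (random walk)

def decadal_trend(series):
    """Least-squares linear trend over the window, in degrees per decade."""
    return 10.0 * np.polyfit(years, series, 1)[0]
```

Out of sample the three diverge sharply (the AGW series trends well above 0.2/decade, the cycle trends down, and the drift can do either), which is what makes prediction the discriminating test.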

• Anonymous

David, Did not know you were on hols. Hope it was good.

Re your last Q on “Cycles predicts flat to falling temperatures for the next 30 years. Stochastic drift predicts either. Which is true? Exponentially increasing temperatures — a gently oscillating upward trend — or a stochastic drift?”

The answer has to be in mechanism, rather than pattern. That’s why we need a set of equations linking atmospheric light variation to atmospheric heat variation by proven mechanisms and physical laws. As countless others have noted.

• Anonymous

Geoff, The mechanism is contentious. Alarmists and the IPCC models claim mechanism produces the exponential version. Conservatives (eg Spencer) might go for the cycles or drift.

The purpose of the modelling is to constrain or at least estimate the probability of certain models (or parameterizations).

Pattern and process are inextricably linked, but not 1:1.

• Anonymous

Agreed, David, the choice of numerical analysis to guide better investigation of mechanisms is productive.

Take this lightly, but there are credible reports that lightning can produce effects similar to those found in a CO2 laser (which, BTW, has a lot of nitrogen in the gas mix, otherwise the efficiency is very low); that the ‘lasing’ can produce light of wavelengths of interest to atmospheric balances; that nitrogen cannot be regarded as an inert bystander in heat processes; that appropriate lasing can efficiently change the isotope ratios of carbon in selected substances; that the isotope changes are easily large enough to upset dating.

I'm in no way saying that future models should include these odd effects. The sole point is that the more the subject advances, the more mechanisms need be considered, dismissed or incorporated. As Alice said in 1872, “Why, sometimes I've believed as many as six impossible things before breakfast.” Some might have been stochastic, some not.

The purpose of the physics is to constrain or at least estimate the probability of certain mechanisms.

• Anonymous

Geoff, I illustrated the conclusion of my PhD thesis with an intersecting Venn diagram. The 'good' models are both theoretically and empirically correct. The space of theoretically correct, or empirically correct, models is not sufficient. One can't be either a Platonist or an empiricist if you want successful real-world models.

• http://www.ecoengineers.com/ Steve Short

“The space of theoretically correct, or empirically correct models is not sufficient. One can’t be either a Platonist or empiricist if you want successful real-world models.”

Too true!

For example, regarding Trenberth, Fasullo and Kiehl, 2009, I note that they kindly provide us with all the key data for the:

* ISCCP-FD;

* NRA;

* JRA; and

* Trenberth et al, 2009,

‘global mean heat balances’. Presumably these all reflect the ‘slop’ in the state-of-play of our ‘best’ available understanding of the mean global heat balance from the last decade.

If IPCC can consider an ensemble of empirical models, why can’t I?

So, let us just try plotting the ratio OLR/ASR (OLR/Y) versus Albedo % (A) for these 4 analyses of the March 2000 – May 2004 CERES-period satellite data set.

Oh, I nearly forgot, BTW, just for the fun of it, I also threw into the correlation the old K&T97 numbers for good measure (i.e. OLR/ASR = 1.0; Albedo (A) = 0.313) from the February 1985 – April 1989 ERBE satellite dataset.

You will then get a curve of the type:

OLR/ASR = 0.0052A^2 - 0.3197A + 5.9014

and R^2 = 0.9940 (gadzooks)

Holy cow!

Are they trying to (subtly) tell us that for just about all % Albedo on either side of a minimum in this curve at A ~ 30.7%±1.0% the ratio OLR/ASR is >1.000 and hence the global heat balance system is naturally ‘air conditioned’?

Wouldn’t that be ‘cool’ (empirically speaking)!
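
The minimum of the quoted quadratic can be checked directly (coefficients exactly as given above; the vertex does fall near 30.7% albedo, with OLR/ASR dipping just below 1 there and exceeding 1 outside a narrow band around it):

```python
# Fitted curve quoted above: OLR/ASR = a*A^2 + b*A + c, with A in % albedo
a, b, c = 0.0052, -0.3197, 5.9014

A_min = -b / (2 * a)                  # vertex of the parabola
ratio_min = a * A_min ** 2 + b * A_min + c

assert abs(A_min - 30.74) < 0.01      # minimum near A ~ 30.7 % albedo
assert 0.98 < ratio_min < 1.0         # OLR/ASR just below 1 at the minimum
```

Solving for where the curve crosses OLR/ASR = 1 gives roughly A between 29.2% and 32.3%, slightly wider than the ±1.0% quoted in the comment but the same qualitative picture.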

• Anonymous

Hmm. Negative feedback.

• http://www.ecoengineers.com/ Steve Short

Check out Jeffrey Glassman’s latest reply to a similar flippant post by me on his (ultra-quiet) blog.

http://www.rocketscientistsjournal.com/2010/03/sgw.html

IMHO Jeffrey is incredibly good value!

You can copy any one of his super sharp analyses and post it onto any warmist blog.

Then just sit back and watch them go into a blue funk (or simply side step your post as though it can’t possibly have existed in this particular universe).

• Anonymous

Very long response!

• cohenite

Yes Steve, Glassman is good value; what is his background and scientific pedigree? I have already linked his site with the Lindzen thread at WUWT; I especially like this:

http://www.rocketscientistsjournal.com/2010/03/_res/AR4_FTS_6_25yr.jpg

This dovetails with the VS discussion at Bart’s where the WG1 graph has come into focus.

• http://www.ecoengineers.com/ Steve Short

Hi Anthony

I don’t know much about Jeffrey Glassman other than that he was (for 30 years) the Division Chief Scientist for Missile Development and Microelectronics Systems Divisions at Hughes Aircraft Corporation. I would guess he is retired.

Recently my (enjoyable) exchanges with him have got me thinking about the so-called global heat balances. These are actually budgets, not balances, because I agree with Jeff that the global heat budget is never really in balance except for brief instances. At any one time there is always a net input or output of energy. Here are the most recent examples:

ISCCP-FD

NRA

JRA

Trenberth et al., 2009

Loeb et al., 2009 (both ‘old’ balance and ‘optimal’ balance)?

K&T97

Thus I have even begun to ask myself the radical question of whether these are in effect ALL correct as budgets as TSI, cloud cover and Bond albedo drift around like, say, this example (for a fixed TSI of 1366 W/m^2):

Now that would be … a good response to a lot of alarmist talking points … a cool air-conditioned Earth which warms up EITHER as cloud cover rises above about 74% (excess positive feedback) OR falls below about 66% (removal of negative feedback). BTW, please note:

(1) the minimum in downwelling LW IR (which I have labelled E-D al la Miskolczi for lack of a better term); and

(2) that I eventually solved the Miskolczi dodgy LW tau issue (which arose from FM's incorrect accounting for LW IR emitted to TOA off cloud and by atmosphere-realized sensible heat). The true (transmitted) LW tau is indeed almost exactly 2.30 – consistent with a host of past literature AND the fact that BOA-emitted LW IR cannot pass through cloud.

• cohenite

Steve, the OD has been constant for the last 60 years, as I understand it; you’ve found a higher LW Tau than M but is that higher figure consistent with a constant OD [optical depth/density]?

• http://www.ecoengineers.com/ Steve Short

Yes, the LW tau is approximately constant i.e. there is a constant LW IR OD for any part of the atmosphere which is unaffected by variable cloud cover and/or the release (somewhere in the vertical column) of latent or sensible heat AND is of average humidity. This is of course a rare condition! IMHO, for heat budget purposes, the LW IR tau can best be assumed approximately constant only PROVIDED that observations are corrected for the following major effects:

(1) The size of the optical window i.e. the % amount of clear sky. Naturally the entire surface emits LW IR (Miskolczi’s S_U) but the true LW IR transmitted right through to TOA is not, by and large, transmitted through clouds (of any sort). So Miskolczi’s S_T reaches TOA through that fraction of sky not covered by cloud.

(2) LW IR is also emitted upwards (to pass through TOA) from the tops of clouds which are forming (or have formed) rain or ice i.e. emitting Latent Heat (LH). I call that fraction LH_U. It is on average about 37.4% of the total LH (the other 62.6% is emitted downwards to be part of E_D).

(3) LW IR is also emitted upwards (to pass through TOA) from the top ends of dry thermals when they peter out at altitude i.e. emitting Sensible Heat (SH). I call that fraction SH_U. Again it is on average about 37.4% of the total SH (the other 62.6% is emitted downwards to be part of E_D).

I have seen no evidence that Miskolczi has been able to fully deconvolute the above effects – especially effects (1) and (3) in a global averaging sense.

If you try to make sense of all the published global heat budgets from K&T97 onwards (I shy away from the word balance because they are only balances in an approximate sense) then you will find S_Ts that differ from Miskolczi’s for everything except (approximately) the clear sky case. This is a goodly part of the reason why Miskolczi ended up with the nonsense of a constant g of 0.333.

Look closely at my spreadsheet. It faithfully reproduces (simultaneously) not one but seven global heat budgets (previously listed) to a precision of <1 W/m^2 in each and every case. Check out, in column G, how I normalize S_T (at each value of albedo) with (a) a tau of 2.30 (noting exp(-2.3) = 0.1003) and (b) a value of (1 – cloud cover) of 0.340, because the average cloud cover over the March 2000 – May 2004 CERES period was close to 66.0%. By scaling S_T to the absence of average cloud cover you are also automatically scaling to the average cloud-free humidity. You can't do that with a tau of 1.87 – it just won't work for any published budget (too transparent).
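The two normalizing numbers in that column-G scaling can be checked directly. A minimal sketch of the arithmetic, where only tau = 2.30 and the 66.0% cloud cover come from the comment; the surface emission S_U is an assumed K&T97-style round value used merely to express the result in W/m^2:

```python
import math

tau = 2.30            # clear-sky LW optical depth claimed in the comment
cloud_cover = 0.660   # mean cloud cover, March 2000 - May 2004 (CERES)
S_U = 396.0           # W/m^2 surface upward LW emission (assumed value)

transmittance = math.exp(-tau)       # exp(-2.3), ~0.1003 as stated
clear_fraction = 1.0 - cloud_cover   # the (1 - cloud cover) factor, 0.340
# Transmitted surface LW under the stated assumption that BOA-emitted
# LW IR reaches TOA only through the cloud-free fraction of sky:
S_T = S_U * transmittance * clear_fraction
print(transmittance, clear_fraction, S_T)
```

For contrast, Miskolczi's tau of 1.87 gives a transmittance of exp(-1.87) ≈ 0.154, the "too transparent" case the comment rejects.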

• cohenite

Yes, Miklos Zagoni said that a window of 60 W/m2 would rectify the difference between M and the other radiative 'balance'/heat budgets. He said he had solved the 'problem' but I don't know what happened. In respect of a constant g of 0.333, by way of tying up loose ends what do you make of this:

http://www-ramanathan.ucsd.edu/FCMTheRadiativeF…

Section 5.4 is the relevant part.

• http://www.ecoengineers.com/ Steve Short

Hi Anthony

Thanks for that Ramanathan and Inamdar chapter (5). Do you have references? Where does it appear?

I am snowed under with work at present so must keep this brief. Two key points:

(1) The spreadsheet I posted is (presently) nothing more than an exercise to show just how ridiculous the ensemble of modern so-called global heat budgets can be. As Jeffrey Glassman says, albedo must go up with increasing ASR – not the reverse, nor a bimodal system, as the ensemble of such budgets would imply if we include all 7 budgets from K&T97 onwards. With Jeff's guidance I am currently working on a budget which hopefully makes some sense.

(2) Don't be fooled by the statement on page 134 of Ramanathan and Inamdar, where they say: "The global average Ga is 131 W m−2 or the normalized g_a is 0.33, i.e., the atmosphere reduces the energy escaping to space by 131 W m−2 (or by a factor of 1/3)", into thinking that it somehow endorses FM's contention of a constant g = 0.333. It does not. R&I are talking about Ga, the greenhouse effect due to the atmosphere only. The total greenhouse effect is G = Ga + C1, where C1 is the greenhouse effect due to clouds. The term Ga is defined as Ga = E − Fc, where Fc is the measured OLR for clear skies and E is the emission from the surface. However, M persistently used the nomenclature G and g and did not link his K term (which is the latent and sensible heat effects) into his system, so we can only assume that he was talking about the total greenhouse effect (having a constant normalized g of 0.333). This is strictly not the same thing as g_a = 0.33. No doubt M was totally familiar with the common nomenclature, so it would not be acceptable to substitute g_a for g even if that might subsequently be claimed. We've been down that slippery street many times before.
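The distinction Steve draws can be made concrete with round numbers. In this sketch only Ga = 131 W/m^2 comes from the R&I quote; the surface emission E and the cloud term C1 are assumed literature-style values, not figures from the comment:

```python
Ga = 131.0   # clear-sky (atmosphere-only) greenhouse effect, W/m^2 (R&I)
E = 396.0    # surface LW emission, W/m^2 (assumed round value)
C1 = 27.0    # cloud greenhouse effect, W/m^2 (assumed round value)

g_a = Ga / E              # normalized clear-sky term, ~0.33 as in R&I
g_total = (Ga + C1) / E   # normalized TOTAL greenhouse effect, G/E
print(round(g_a, 3), round(g_total, 3))
```

Whatever the exact inputs, g_total > g_a, which is why a constant total g = 0.333 cannot simply be read off R&I's g_a = 0.33.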

• cohenite

Steve, it’s from “Frontiers of Climate Modeling”, edited by J.T. Liehl and V. Ramanathan, Cambridge Uni Press, 2006

• cohenite

Crikey, that should be J. T. Kiehl of K&T fame.

• http://www.ecoengineers.com/ Steve Short

Hi Anthony

Thanks for your feedback. As always, excellent. I'm somewhat stressed-out by my regular work at the moment but what do you think about this (files attached)?

https://www.yousendit.com/download/bFFNYUp6SEJP…

I've included FYI the (present) clear sky budget, but I think I still have the whole clear sky row a bit wrong (g should be ~0.333 rather than 0.281). Notice I have the clear sky S_T at 104 W/m^2, which is also a bit high. The problem with the clear sky row may relate to the fact that under global clear sky conditions albedo would not be ~0.068 anyway, and hence the OLR algorithm I used (an extrapolation based on ISCCP-FD, NRA, TF&K09, K&T97, Loeb et al. 'old' & Loeb et al. 'optimal') probably doesn't work down that far (albedo-wise). But hey, g = 0.281 is not too far from g = 0.333 (;-)!

Best regards

Steve

• cohenite

Steve; are these the calculations you sent to Pinker? There is currently a discussion at Watts about the missing heat and where it is stored; the increasing biomass is being put forward as one likely storage place by such commentators as Anna v near the end of the thread; it sounds like cyanobacteria territory:

http://wattsupwiththat.com/2010/04/16/ncars-missing-heat-they-could-not-find-it-any-where/#comment-370926

• Paul_K

David,
I am not sure I agree with your statistics experiment as set out.
VS suggested on Bart's blog, rather uncharitably, that B&V had been biased towards accepting a model which accommodated a drift term rather than face the political consequences of "no trend" in the temperature series (my paraphrase). I don't believe that this is fair.
I did a comparison of the B&V ARIMA(0,1,2) model against the VS-proposed ARIMA(3,1,0) model on the GISTEMP annual J-D dataset, and the former seems to be statistically superior to a best-fit of the model proposed by VS. It also appears to be a much better characterisation of the data. If my finding is valid, then your hypothesis above would have to be redefined, since this result changes the assumptive form of the underlying model.
However, to demonstrate this I need to upload some images. Can you point me to the easiest way to do this for this site?
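The model comparison Paul describes is usually settled with an information criterion such as AIC. The sketch below is not his calculation: it fits only pure-AR alternatives of the form ARIMA(p,1,0) by ordinary least squares on the differenced series (an ARIMA(0,1,2) has MA terms and needs a full package such as statsmodels), and it runs on synthetic random-walk data rather than the GISTEMP series:

```python
import numpy as np

def ar_aic(y, p):
    """Fit AR(p) to the first-differenced series by OLS; return its AIC.

    A rough stand-in for a full ARIMA(p,1,0) maximum-likelihood fit.
    """
    dy = np.diff(y)  # the "I(1)" differencing step
    # Lag matrix: column k holds dy lagged by k+1 steps.
    X = np.column_stack([dy[p - k - 1:len(dy) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept (drift term)
    target = dy[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    n = len(target)
    sigma2 = resid @ resid / n
    k = p + 2  # AR coefficients + intercept + residual variance
    return n * np.log(sigma2) + 2 * k

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))   # synthetic random-walk "temperature"
print(ar_aic(y, 1), ar_aic(y, 3))     # lower AIC = preferred model
```

On the real GISTEMP data the same comparison, extended to MA terms, is what would adjudicate between the B&V and VS specifications.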

• davids99us

Hi Paul,
Would you like to register and prepare a post?
Cheers

• Paul_K2

Hi David,
Very much so, but I seem to be getting the runaround from the registration process (hence the change of alias for DISQUS). Is there an idiot’s guide to registering and uploading data?
Paul

• davids99us

Sorry about that. Would you like to email your text and images and I
will sort it out?

• Paul_K2

Hi David,
I sent you a text file with embedded images a couple of days ago via your “contact David Stockwell” link. Can you confirm receipt? I don’t know whether you bumped it as unworthy or never received it.
Paul

• sherro

Steve Short
“I credit Roger with showing me there were far more fascinating things in life than acid, pot and heavy rock music.”

Well? That was a week ago. How long do we have to guess what they were?

• http://www.ecoengineers.com/ Steve Short

That’s a bit rough! I had followed up in the same post with 5 paragraphs and an example web reference!

If interested you could have easily Googled ‘J. Roger Bray’, in which case you would have found a wealth of good papers by him in the 70s and 80s on the historical imprints of solar cycles – especially in historical agricultural writings (start of season, length of season, size of crop, failed crops etc., etc). Roger was one of the very first, if not the first, to identify solar effects in the historical records from the last 2000 or more years.

Too prosaic for you? Get out of bed on the wrong side this morning did we?

• sherro

Steve Short, yes, I read that with great interest, thank you. But you did leave us hanging as to what was better than "acid, pot and heavy rock music". Never tried any, honest. 'Twas but a joke. Like the one David caught me on, hook, line & sinker, with "less permanent drought conditions".

• davids99us

It was an odd phrase.

• davids99us

Steve or somebody else might have more insight than I do on this http://rogerpielkejr.blogspot.com/2010/04/in-released-cru-emails-ncar-climate.html.

To me it looks like the ‘missing heat’ could be explained by increase in outgoing radiation at TOA, but less radiation hitting the surface.

• cohenite

David, this paper has surfaced on the issue of OHC claiming that at 2000m levels the ocean is heating;

http://www.mercator.eu.org/documents/lettre/lettre_33_en.pdf#page=3

The issue of OHC and the pipeline effect or not is discussed at this WUWT thread, where the concept of growth in the biomass is put forward as a beneficiary of extra/missing heat and where that heat is stored;

• Paul_K2

Hi Cohenite,
I think the Schuckmann paper is either very important or very wrong. John Cook at Skeptical Science has been pushing the Schuckmann paper in several articles to support the case for a continued gain in global energy over the last few years.

The paper is – superficially at least – in conflict with Willis 2008 and Levitus 2009, both of which showed flat trends in OHC, but which did not consider data down to the depths considered in the Schuckmann paper. Bob Tisdale on his blog comments that the paper appears to be an outlier and he raises the question of how heat gets into the deep oceans without leaving a heating signature on the shallower (e.g. Willis’s 0-700m) depths. There may however be an answer to this question in a paper by Johnson et al (2007), which provides a mechanism for abyssal heating by non-ubiquitous deep convection currents.
The other thing I did note was that the Schuckmann paper is also in conflict with Cazenave et al 2008 , which is the consensus science view on reconciliation of TG data, altimetry, ocean mass balance (including GRACE data) and ARGO data.
As I posted somewhere on Skeptical Science:

“[The Cazenave paper]… calculates steric sea level rise (thermal plus salinity) from 2003 to 2008 from Altimetry minus mass balance (two different ways) as 0.31mm/year, and independently calculates the value by thermal expansion from ARGO data as 0.37mm/year. This uses 0-900m ARGO data. It concludes:-
QUOTE The steric sea level estimated from the difference between altimetric (total) sea level and ocean mass displays increase over 2003–2006 and decrease since 2006. On average over the 5 year period (2003–2008), the steric contribution has been small (on the order of 0.3+/−0.15 mm/yr), confirming recent Argo results (this study and Willis et al., 2008).ENDQUOTE

You will note that there is no room in this analysis for any additional deep OHC, quite the contrary. Either Schuckmann or Cazenave has some further explaining to do in order to reconcile these two papers.”

If the Schuckmann paper is confirmed, it would go a long way towards explaining Trenberth’s missing heat problem.
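The Cazenave-style closure Paul quotes is simple arithmetic: total rise from altimetry minus the ocean-mass component should equal the steric (thermal plus salinity) term measured independently by ARGO. In this sketch the 0.31 and 0.37 mm/yr steric rates and the ±0.15 uncertainty come from the quote; the altimetry and ocean-mass rates are assumed round numbers chosen only to reproduce their difference:

```python
# Rates in mm/yr over 2003-2008.
altimetry = 2.50     # total sea-level rise (assumed, not from the quote)
ocean_mass = 2.19    # GRACE-style mass component (assumed)
steric_argo = 0.37   # thermal expansion from ARGO (from the quote)
sigma = 0.15         # quoted uncertainty on the steric term

steric_residual = altimetry - ocean_mass   # altimetry minus mass, ~0.31
# The budget closes if the two independent steric estimates agree
# within roughly twice the quoted uncertainty:
consistent = abs(steric_residual - steric_argo) <= 2 * sigma
print(round(steric_residual, 2), consistent)
```

Any extra deep-ocean heat gain of the size Schuckmann reports would have to show up as additional steric rise in this residual, which is the reconciliation problem Paul flags.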

• cohenite

Paul_K2; I couldn't agree more. One explanation I have heard, I think from Nick Stokes, is that the upper cooling, or flat temperature in OHC [Loehle finds cooling, as does DiPuccio], is caused by El Nino transfer of heat from the ocean surface to the atmosphere. I think Bob Tisdale's discussion of the reemergence mechanism counters that though, which leaves the problem, as you say, for Schuckmann of how to explain a surface cooling alongside a warming at depth, which is counter-intuitive.

• sherro

Is this the “Cazenave” paper you mention? If not, it still has some interesting recent data on sea levels.

M. Ablain, A. Cazenave, G. Valladeau, and S. Guinehut (CLS, Ramonville Saint-Agne, France), “A new assessment of global mean sea level from altimeters highlights a reduction of global trend from 2005 to 2008”, Ocean Sci. Discuss., 6, 31–56, 2009.
http://www.ocean-sci-discuss.net/6/31/2009/

• cohenite

Sherro, that is one of them; the other is this: http://sciences.blogs.liberation.fr/home/files/

• Paul_K2

Cohenite,
Thanks for digging out the complete Cazenave reference.
Paul

• sherro

Paul_K2,
The Ablain, Cazenave et al. 2009 paper is also available as a public .pdf if you do not have it. I'd have to dig a bit for it, but am happy to do so.

• sherro

A newer 2010 Cazenave paper is:

Cazenave, A. and Llovel, W. Contemporary sea level rise. Annual Review of Marine Science 2: 145-173, 2010.

Notes: Measuring sea level change and understanding its causes has considerably improved in the recent years, essentially because new in situ and remote sensing observations have become available. Here we report on most recent results on contemporary sea level rise. We first present sea level observations from tide gauges over the twentieth century and from satellite altimetry since the early 1990s. We next discuss the most recent progress made in quantifying the processes causing sea level change on timescales ranging from years to decades, i.e., thermal expansion of the oceans, land ice mass loss, and land water-storage change. We show that for the 1993-2007 time span, the sum of climate-related contributions (2.85 ± 0.35 mm year−1) is only slightly less than altimetry-based sea level rise (3.3 ± 0.4 mm year−1): ~30% of the observed rate of rise is due to ocean thermal expansion and ~55% results from land ice melt. Recent acceleration in glacier melting and ice mass loss from the ice sheets increases the latter contribution up to 80% for the past five years. We also review the main causes of regional variability in sea level trends: the dominant contribution results from nonuniform changes in ocean thermal expansion.
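The budget closure claimed in that abstract is easy to check: the gap between the summed contributions and the altimetry rate is 0.45 mm/yr, smaller than the two quoted 1-sigma uncertainties combined in quadrature (a standard, if assumption-laden, way to compare independent estimates):

```python
import math

# Rates (mm/yr) and 1-sigma uncertainties as quoted in the abstract.
sum_contrib, s_contrib = 2.85, 0.35   # thermal + land ice + land water
observed, s_obs = 3.3, 0.4            # altimetry-based rise, 1993-2007

gap = observed - sum_contrib                    # unexplained residual
combined = math.sqrt(s_contrib**2 + s_obs**2)   # quadrature sum
print(round(gap, 2), round(combined, 2), gap < combined)
```

So on these numbers the sea-level budget closes within the stated uncertainties, which is the paper's "only slightly less" claim made quantitative.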

• cohenite

Sherro, that new Cazenave paper seems to contradict this prior paper in respect of rate of sea level increase:

A new assessment of global mean sea level from altimeters highlights a reduction of global trend from 2005 to 2008

M. Ablain1, A. Cazenave2, G. Valladeau1, and S. Guinehut1
1CLS, Ramonville Saint-Agne, France
2LEGOS, OMP, Toulouse, France

Abstract. A new error budget assessment of the global Mean Sea Level (MSL) determined by TOPEX/Poseidon and Jason-1 altimeter satellites between January 1993 and June 2008 is presented. We discuss all potential errors affecting the calculation of the global MSL rate. We also compare altimetry-based sea level with tide gauge measurements over the altimetric period. This allows us to provide a realistic error budget of the MSL rise measured by satellite altimetry. These new calculations highlight a reduction in the rate of sea level rise since 2005, by ~2 mm/yr. This represents a 60% reduction compared to the 3.3 mm/yr sea level rise (glacial isostatic adjustment correction applied) measured between 1993 and 2005. Since November 2005, MSL is accurately measured by a single satellite, Jason-1. However the error analysis performed here indicates that the recent reduction in MSL rate is real.
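The contradiction cohenite flags comes down to the figures in the two abstracts; a quick check of the Ablain et al. numbers (both rates quoted above):

```python
# From the Ablain et al. abstract: the 1993-2005 rate (GIA-corrected)
# and the ~2 mm/yr reduction observed since 2005.
rate_1993_2005 = 3.3
reduction = 2.0

post_2005_rate = rate_1993_2005 - reduction   # ~1.3 mm/yr since 2005
fraction = reduction / rate_1993_2005         # ~0.61, the "60% reduction"
print(round(post_2005_rate, 1), round(fraction, 2))
```

A post-2005 rate near 1.3 mm/yr does sit awkwardly beside Cazenave & Llovel's 3.3 ± 0.4 mm/yr average, though the latter spans 1993-2007 and so is dominated by the earlier, faster period.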
