The Widening Gap Between Present Global Temperature and IPCC Model Projections

The increase in global temperature required to match Intergovernmental Panel on Climate Change (IPCC) projections is becoming increasingly unlikely. A shift onto the mean projected pathway of a 3°C increase by the end of the century would require an immediate, large, and sustained acceleration in warming, which seems physically impossible.

Global surface temperatures have not increased in the last 18 years, and the trend over the last few years is even slightly negative.

Global temperatures continue to track at the low end of the range of global warming scenarios, widening the gap between current trends and the course needed to be consistent with IPCC projections.

On-going international climate negotiations fail to recognise the growing gap between model projections based on global greenhouse gas emissions and observed temperatures, and hence the increasingly unlikely chance of those models being correct.

Research led by Ben Santer compared temperatures simulated under the emission scenarios used by the IPCC to project climate change with satellite temperature observations at all latitudes.

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes,” reports their paper in PNAS. Moreover, it is not known why the models are failing.

“The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive.”

Explaining why they are failing will require a commitment to skeptical inquiry and a renewed reliance on the scientific method.

The unquestioning acceptance of IPCC climate model projections by the CSIRO, the Australian Climate Change Science Program, and many other traditional scientific bodies, which has informed policies and decisions on energy use and associated costs, must be called into question. So too must the long-term warming scenarios based on the link between emissions and increases in temperature.

Q: Where Do Climate Models Fail? A: Almost Everywhere

“How much do I fail thee? Let me count the ways.”

Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.

Where do the models fail?

1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.

2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.

3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.

Answer: everywhere except the polar regions, where the uncertainty is greater.

East Pacific Region Temperatures: Climate Models Fail Again

Bob Tisdale, author of the awesome book “Who Turned on the Heat?”, presented an interesting problem that turns out to be a good application of a class of robust statistical tests called empirical fluctuation processes (EFPs).

Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to the model simulations (CMIP SST) for that region, which show strong warming. The region in question is shown below.

The question is: what is the statistical significance of the difference between the model simulations and the observations? The graph from Bob’s book comparing the models with observations shows two CMIP model projections increasing strongly at 0.15°C per decade for the region (brown and green), while the observations increase at only 0.006°C per decade (magenta).

However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 relative to the standard deviation, but this would be a very low power test, and only used by someone who wanted to obfuscate the obvious failure of climate models in this region.

Empirical fluctuation processes are a natural, powerful, and general way to examine such questions: we can ask of a strongly autocorrelated series whether there has been a change in level, without requiring the increase to be a linear trend.

To illustrate the difference, if we assume a linear regression model, as is the usual practice, Y = mt + c, the statistical test for a trend is whether the trend coefficient m is greater than zero:

H0: m = 0, Ha: m > 0
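
As a concrete sketch, this trend test is a one-line fit in R. The data here are synthetic stand-ins, not Bob’s actual series:

    # Sketch of the standard trend test on a synthetic series.
    # 'y' and 't' are illustrative stand-ins, not the real data.
    set.seed(1)
    t <- 1:384                        # e.g. monthly time steps
    y <- 0.0005 * t + rnorm(384, sd = 0.2)

    fit <- lm(y ~ t)                  # fits Y = m*t + c
    summary(fit)$coefficients         # the row for t tests H0: m = 0
    # summary() reports a two-sided p-value; for the one-sided
    # Ha: m > 0, halve it when the estimate of m is positive.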

If we test for a change in level, the EFP statistical test is whether the level m is constant over all times t:

H0: m_i = m_0 for all time points i.

For answering questions analogous to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y = m (the level alone) deviates from the data. In R this is represented as the model Y ~ 1. If we were to use the full model Y ~ t, this would test whether the trend of Y is constant, not whether the level of Y is constant. This will be clearer if you have run linear models in R.
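
In R, the strucchange package implements these tests. A minimal sketch of the level-stability test, again on synthetic data (the actual data and code for this analysis are linked below):

    # Sketch of an EFP level-change test with the strucchange package.
    # The series is synthetic, with a deliberate level shift half-way.
    library(strucchange)

    y <- c(rnorm(200), rnorm(184, mean = 0.5))
    ocus <- efp(y ~ 1, type = "OLS-CUSUM")   # constant-level model Y ~ 1

    plot(ocus)     # EFP path with significance boundaries (red)
    sctest(ocus)   # formal test of H0: the level is constant over time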

Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.

The figure below shows the EFP path for each of the series in question (black line), with the 95% significance boundaries for the EFP path in red.

It can be seen clearly that while the EFP path for the SST observation series shows somewhat unusual behavior, with a significant change in level in 1998 and again in 2005, the current level is not significantly above the level in 1985.

The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significance level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.

Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP paths for the differences CMIP3−SST and CMIP5−SST. After some deviation from zero at about 1990, the difference becomes very significant at the 5% level around 2000 and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the CMIP temperature simulations and the observational SST series in the Eastern Pacific region after 2000.
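
The model-minus-observation comparison can be sketched the same way: difference the aligned series and test whether the difference has a constant level. Here cmip and sst are hypothetical names for the two aligned monthly series, with strucchange loaded as above:

    # Test whether the model-minus-observation difference is level.
    # 'cmip' and 'sst' are hypothetical names for the aligned series.
    d <- cmip - sst
    ocus_d <- efp(d ~ 1, type = "OLS-CUSUM")
    plot(ocus_d)    # a path crossing the red boundary signals a level change
    sctest(ocus_d)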

While the average of multiple model simulations shows a significant change in level over the period, in the parlance of climate science there is not yet a detectable change in level in the observations.

One could say I am comparing apples and oranges, as the models represent average behavior while the SST observations are a single realization. But the fact remains that only the model simulations show warming; there is no support for warming of the region in the observations. This is consistent with the previous post on Santer’s paper showing the failure of models to match the observations over most latitudinal bands.

Santer: Climate Models are Exaggerating Warming – We Don’t Know Why

Ben Santer’s latest model/observation comparison paper in PNAS finally admits what climate realists have been saying for years: climate models are exaggerating warming. From the abstract:

On average, the models analyzed … overestimate the warming of the troposphere. Although the precise causes of such differences are unclear…

Their figure above shows the massive model fail. The blue and magenta lines are the trends of the UAH and RSS satellite temperature observations averaged by latitude, with the Arctic at the left and the Southern Hemisphere to the right. Except for the Arctic, the observations are well outside all of the model simulations. As they say:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.

Anyone who has been following the AGW issue for more than a few years remembers that Ross McKitrick, Stephen McIntyre, and Chad Herman already showed climate models exaggerating warming in the tropical troposphere in their 2010 paper. Before that was Douglass, and in their usual small-minded way the Santer team acknowledge none of them. Prior to that, a few studies differed on whether the models significantly overstate the warming or not. McKitrick found that up to 1999 there was only weak evidence for a difference, but on updated data the models appear to significantly overpredict observed warming.

Santer had a paper in which data after 1999 had been deliberately truncated, even though the data were available at the time. As Steve McIntyre wrote in 2009:

Last year, I reported the invalidity using up-to-date data of Santer’s claim that none of the satellite data sets showed a “statistically significant” difference in trend from the model ensemble, after allowing for the effect of AR1 autocorrelation on confidence intervals. Including up-to-date data, the claim was untrue for UAH data sets and was on the verge of being untrue for RSS_T2. Ross and I submitted a comment on this topic to the International Journal of Climatology, which we’ve also posted on arxiv.org. I’m not going to comment right now on the status of this submission.

Santer already had form at truncating inconvenient data, going back to 1995, as related by John Daly. It is claimed that he authored the notorious phrase “… a discernible human influence on global climate” in Chapter 8 of the 1995 IPCC Report, added without the consent of the drafting scientists in Madrid.

As John Daly says:

When the full available time period of radio sonde data is shown (Nature, vol.384, 12 Dec 96, p522) we see that the warming indicated in Santer’s version is just a product of the dates chosen. The full time period shows little change at all to the data over a longer 38-year time period extending both before Santer et al.’s start year, and extending after their end year.

It was 5 months before ‘Nature’ published two rebuttals from other climate scientists, exposing the faulty science employed by Santer et al. (Vol.384, 12 Dec 1996). The first was from Prof Patrick Michaels and Dr Paul Knappenberger, both of the University of Virginia. Ben Santer is credited in the ClimateGate emails with threatening Pat Michaels with physical violence:

Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat the crap out of him. Very tempted.

I suppose that, now faced with a disparity between models and observations that can no longer be ignored, he has had to face the inevitable. It’s hardly a classy act, though: remember that Douglass, McKitrick, McIntyre, and other climate realists reported the significant model/observation disparity in the peer-reviewed literature first. You won’t see them in Santer’s list of citations.

Failing to give due credit. Hiding the decline. Truncating the data. Threatening violence to critics. This is the AGW way.