NIPCC Report on Species Extinctions due to Climate Change

The NIPCC – Interim Report 2011 updates the 2009 Report with an overview of the research on climate change that the IPCC did not see fit to print. It's published by the Heartland Institute, with lead authors Craig D. Idso, Australian Robert Carter, and S. Fred Singer, along with a number of other significant contributors.

I am grateful for the inclusion of some of my work in Chapter 6, on the uncertainty of the range-shift method for modeling biodiversity under climate change.

The controversy centered on a paper by Thomas et al. (2004) called “Extinction Risk from Climate Change”, which received exceptional worldwide media attention for its claims of potentially massive extinctions from global warming.

Briefly, the idea is to simulate the change in the range of a species under climate change by ‘shifting’ the range using a presumed climate change scenario.

Daniel Botkin said of the Thomas 2004 study

Yes, unfortunately, I do consider it to be the worst paper I have ever read in a major scientific journal. There are some close rivals, of course. I class this paper as I do for two reasons, which are explained more fully in the recent article in BioScience:

… written by 17 scientists from a range of fields and myself (here).

While there are many problems with this paper, the most amazing, as I see it, is the way they used changes in the size of species ranges to determine extinctions. It's generally believed that contracting a species' range increases the probability of extinction.

Consider the case of a species that disperses freely under climate change. While the range sizes of individual species change, the average range size should stay the same, unless there is a major obstruction like an ocean or mountain range. Species whose range size decreases are balanced by species whose range size increases. Overall, the net rate of extinction should be unchanged.

However, Thomas 2004 simply deleted all species whose range expanded. A massive increase in extinctions was therefore a foregone conclusion, even assuming free dispersal.
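The bias is easy to demonstrate with a toy simulation (all numbers here are hypothetical, purely to illustrate the argument): draw symmetric range changes, so contractions balance expansions, then drop the expanding species and the mean change turns sharply negative.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical percentage range changes under free dispersal:
# symmetric about zero, so contractions balance expansions
change = rng.normal(0, 30, 10000)

mean_all = change.mean()             # near zero: no net range loss
shrinking_only = change[change < 0]  # drop all species whose range expanded
mean_biased = shrinking_only.mean()  # strongly negative by construction
print(round(mean_all, 1), round(mean_biased, 1))
```

Keeping only the contracting half of a symmetric distribution guarantees an apparent net range loss, whatever the underlying dynamics.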

There are a number of other ways a bias towards range reduction can be introduced, such as edge effects and over-fitting assumptions, which I show in my book “Niche Modeling“. In a normal science, this would have been a cautionary tale about the dangers of ad hoc methodologies.

It’s an example of the intellectual bankruptcy of the IPCC report that the uncertainties of Thomas et al. (2004) and other similar studies were ignored by Working Group II. For example, in Impacts, Adaptation and Vulnerability, section 13.4.1, Natural ecosystems:

Modelling studies show that the ranges occupied by many species will become unsuitable for them as the climate changes (IUCN, 2004). Using modelling projections of species distributions for future climate scenarios, Thomas et al. (2004) show, for the year 2050 and for a mid-range climate change scenario, that species extinction in Mexico could sharply increase: mammals 8% or 26% loss of species (with or without dispersal), birds 5% or 8% loss of species (with or without dispersal), and butterflies 7% or 19% loss of species (with or without dispersal).

And in 19.3.4 Ecosystems and biodiversity:

… up to 30% of known species being committed to extinction (Chapter 4 Section 4.4.11 and Table 4.1; Thomas et al., 2004) …

And in other summaries, such as Table 4.1.

Clearly the major difficulty with all this work, something that turned me off it but few acknowledge, is that the lack of skill in simulations of climate change renders baseless any claim to skill at the species-habitat scale. Only now is the broader climate community starting to accept this about multi-decadal climate model predictions, such as those contained in the 2007 IPCC WG1 assessment. The NIPCC illustrates the broader range of opinion which, IMHO, should have been integral to the IPCC process from the beginning.

NIWA’s Station Temperature Adjustments – CCG Audit

The New Zealand Climate Conversation Group have released their report and reanalysis of the NIWA 7-Station Review. CCG claim NIWA misrepresented the statistical techniques it used, and exaggerated warming over the last hundred years.

The CCG results (Figure 20 above) prove there are real problems in the adjustments to temperature measurements for moves and equipment changes in NZ (also seen in Australia).

As any trained scientist or engineer knows, failure to follow a well-documented and justified method is a sign of pseudoscience. The New Zealand Climate Conversation Group is correct to examine whether Rhoades & Salinger (1993) has been followed as advertised.

In 2010, NIWA published their review of their 7-station temperature series for New Zealand. The review was based upon the statistically-based adjustment method of Rhoades & Salinger (1993) for neighbouring stations. In this report, we examine the adjustments in detail, and show that NIWA did not follow the Rhoades & Salinger method correctly. We also show that had NIWA followed Rhoades & Salinger correctly, the resultant trend for the 7-station temperature series for New Zealand would have been significantly lower than the trend they obtained.

Despite searching, I cannot see a methodology section in NIWA’s report, which is a disjointed analysis of each of the seven sites, although it is clear in a number of places that they imply Rhoades and Salinger (1993) forms the basis. For example, page 145 on Dunedin:

In February 2010, NIWA documented the adjustments in use at that time (see web link above). These adjustments to the multiple sites comprising the ‘seven-station’ series were calculated by Salinger et al. (1992), using the methodology of Rhoades and Salinger (1993), which extended the early work on New Zealand temperatures by Salinger (1981). Subsequent to 1992, the time series have been updated regularly, taking account of further site changes as circumstances required.

The Climate Conversation Group summarize the differences between Rhoades and Salinger (1993) and the method actually used by NIWA. The R&S method for comparing a station with neighbouring stations involves the use of:

– Monthly data
– Symmetric interval centred on the shift
– A 1-2 year period before and after the shift
– Weighted averages based on correlations with neighbouring stations
– Adjustments only performed if results are significant at the 95% confidence level

In contrast, the NIWA method uses:

– Annual data
– Asymmetric intervals
– Varying periods of up to 11 years before and after the shift
– No weighted averages
– No evidence of significance tests – adjustments are always applied.

Any of these methodological deviations could create substantial differences in the results, but neither the Climate Conversation Group nor I could find a rationale or discussion in the NIWA review reports for not implementing the R&S method as stated.
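For concreteness, here is a minimal sketch of an R&S-style comparison, with entirely hypothetical monthly data, two neighbour stations, correlation weighting, and a crude 95% significance check. It illustrates the listed ingredients only; it is not NIWA's or R&S's actual code, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
months = 24     # symmetric window: 24 months before and after the site move
shift = 0.6     # hypothetical step introduced by the move, deg C

# hypothetical monthly anomalies: shared regional signal plus station noise
common = rng.normal(0, 0.5, 2 * months)
target = common + rng.normal(0, 0.2, 2 * months)
target[months:] += shift                              # step at the move
neighbours = [common + rng.normal(0, s, 2 * months) for s in (0.2, 0.4)]

# correlation-weighted average of the step in each difference series
steps, weights = [], []
for nb in neighbours:
    diff = target - nb
    steps.append(diff[months:].mean() - diff[:months].mean())
    weights.append(np.corrcoef(target[:months], nb[:months])[0, 1])
adjustment = np.average(steps, weights=weights)

# only adjust if the step is significant at roughly the 95% level
diffs = np.mean([target - nb for nb in neighbours], axis=0)
d_pre, d_post = diffs[:months], diffs[months:]
t_stat = (d_post.mean() - d_pre.mean()) / np.sqrt(
    d_pre.var(ddof=1) / months + d_post.var(ddof=1) / months)
significant = abs(t_stat) > 1.96
print(round(adjustment, 2), significant)
```

The point of the weighting and the significance test is visible even in this toy: a poorly correlated neighbour contributes less, and an insignificant step would simply not be adjusted at all.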

What are the details of the methods? The CCG report compares a single station at Dunedin, using the NIWA and R&S methods, in their Table 1. There were five site moves — 1913, 1942, 1947, 1960, and 1997 — with five potential adjustments. The NIWA method adjusts at each of the moves, resulting in an increasing trend of 0.62 C/century for Dunedin. The R&S method implements only two adjustments, resulting in a 0.24 C/century increasing trend.

The other stations (Masterton, Wellington, Nelson, Hokitika, and Lincoln among them) are similar, with the NIWA method making generally more frequent and more negative adjustments, resulting in exaggerated trends, as shown in Figure 20 at the top of this post.

It would seem that significance tests and the weighting of neighbouring sites are very important: they ensure the nearby sites used to calibrate the site moves actually provide information on the site in question. A comparison period as long as 11 years would probably confound short-term changes with the long-term warming trend, and may bias the adjustments to exaggerate the trend.

To ignore significance tests and weightings, and to modify the method arbitrarily, whether through sloppiness or intent, is bad practice, and would not be favorable to NIWA in the upcoming court case brought by the CCG.

Debt Wave Grows

Congratulations Julia and Wayne on your new milestone: Australia’s national debt has topped $200 billion, with Labor borrowing $100 million per day.

Australia now has its largest debt in history, after we borrowed $3.2 billion over the last week. On 11 March 2009, Treasurer Wayne Swan invoked “special circumstances” to increase the debt ceiling to a “temporary” level of $200 billion. In the last budget, the government increased the debt ceiling permanently to $250 billion.

See the Total Commonwealth Government Securities on Issue.

There are 12.3 million taxpayers in Australia, so that’s $16,260 of debt on behalf of each of us. Are you any better off?

h/t Senator Barnaby Joyce (LNP)

Global Warming Trends – Gimme Some Truth

Richard Treadgold from the New Zealand Climate Conversation Group reports on the Statistical Audit of the NIWA 7-Station Review, claiming that New Zealand’s National Climate Center, NIWA, misrepresented the statistical techniques it used (Rhoades & Salinger – Adjustment of temperature and rainfall records for site changes) in order to fabricate strong warming over the last hundred years.

NIWA shows 168% more warming than Rhoades & Salinger – the method NIWA betrayed. The blue dashed line shows the warming trend when the method is used correctly. The red line reveals NIWA’s outrageous fraud – it’s much stronger warming, but it’s empty of truth.

The results of this audit corroborate the results of Ken Stewart’s audit of the Australian temperature record.

So far, Ken has received an apology from the Australian BoM for its tardiness, but no explanation of the 140% exaggeration of warming trends in Australia.

I have been begging BOM- or anyone- to check my analysis but to no avail.

Are we getting value from our public-funded science?

Just Gimme Some Truth original and HD version.

No short-haired, yellow-bellied, son of Tricky Dicky; Is gonna mother hubbard soft soap me; With just a pocketful of hope; It’s money for dope; Money for rope

Best Business Presentation Ever

You have probably heard about Steve Jobs’ retirement as CEO of Apple. If, like me, you find him an inspiration, you might enjoy this video from the Apple Music Event in 2001, “The First Ever iPod Introduction”.

What I like is the steel-trap logic, the “quantum leap” vision, the love of speed, the sparse visuals, and the impeccable timing of the delivery.

Phase Lag of Global Temperature

Lag, or phase, relationships are to me among the most convincing pieces of evidence for the accumulative theory.

The solar cycle varies over 11 years on average, like a sine wave. This property can be used to probe the contribution of total solar irradiance (TSI) to global temperature.

Above is a plot of two linear regression models of the HadCRU global temperature series since 1950. The period since 1950 is chosen because it is the period over which the IPCC states that most of the warming has been caused by greenhouse gases (GHGs), like CO2, and because the data is more accurate.

The red model is a linear regression using TSI and a straight line representing the contributions of GHGs. This could be called the conventional IPCC model. The green model is the accumulated TSI only, the model I am exploring. Accumulative TSI is calculated by integrating the deviations from the long-term mean value of TSI.

You can see that the two models are practically indistinguishable by their R2 values (CumTSI is slightly better than GHG+TSI, at R2 = 0.73 versus 0.71 respectively).

You can also see a lag, or shift in phase, of the TSI between the direct solar influence (the red model) and the accumulated TSI (the green model). This shift comes about because integration shifts a periodic signal such as a sine wave by 90 degrees.

While there is nothing to distinguish between the models on fit alone, the shift provides independent confirmation of the accumulative theory. Volcanic eruptions in the latter part of the century obscure the phase relation over this period somewhat, so I look at the phase relationships over the whole period of the data since 1850.

Above is the cross-correlation of HadCRU and TSI (ccf in R), showing the correlation at all shifts between -10 and +10 years. The red dashed line is at 2.75 years, a 90-degree shift of the solar cycle, or 11 years divided by 4. This is the shift expected if the relationship between global temperature and TSI is an accumulative one.

The peak of the cross-correlation lies at exactly 2.75 years!
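The quarter-cycle lag can be reproduced with a toy calculation, using an idealised 11-year sinusoidal TSI rather than the real series (all inputs here are synthetic, for illustration only):

```python
import numpy as np

dt = 0.25                          # time step in years (quarterly)
t = np.arange(0, 110, dt)          # ten 11-year solar cycles
period = 11.0
tsi = np.sin(2 * np.pi * t / period)      # idealised TSI anomaly
cum = np.cumsum(tsi - tsi.mean()) * dt    # accumulated TSI

def lagged_corr(y, x, k):
    # correlation of y(t) with x delayed by k steps
    if k == 0:
        return np.corrcoef(y, x)[0, 1]
    return np.corrcoef(y[k:], x[:-k])[0, 1]

lags = np.arange(0, int(6 / dt))          # search lags from 0 to 6 years
corrs = [lagged_corr(cum, tsi, int(k)) for k in lags]
best = lags[int(np.argmax(corrs))] * dt
print(round(best, 2))   # quarter of the 11-year cycle, i.e. 2.75 years
```

Because integrating a sine delays it by a quarter period, the cross-correlation of the accumulated series against the original peaks at 11/4 = 2.75 years, exactly the lag reported above.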

This is not a result I thought of when I started working on the accumulation theory. The situation reminds me of the famous talk by Richard Feynman on “Cargo Cult Science“.

When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Direct solar irradiance is almost uncorrelated with global temperature partly due to the phase lag, and partly due to the accumulation dynamics. This is why previous studies have found little contribution from the Sun.

Accumulated solar irradiance, without recourse to GHGs, is highly correlated with global temperature, and recovers exactly the right phase lag.

Accumulation of TSI comes about simply from the accumulation of heat in the ocean, and also the land.

I think it is highly likely that previous studies have grossly underestimated the Sun’s contribution to climate change by incorrectly specifying the dynamic relationship between the Sun and global temperature.

Climate Sensitivity Reconsidered

The point of this post is to show a calculation by guest, Pochas, of the decay time that should be expected from the accumulation of heat in the mixed layer of the ocean.

I realized this prediction gives another test of the accumulation theory of climate change, which potentially explains high climate sensitivity to variations in solar forcing without recourse to feedbacks or greenhouse gases; more detail here and here.

The analysis is based on the most important parameter in all dynamic systems, called the time constant, Tau. Tau quantifies two aspects of the dynamics:

1. The time taken for an impulse forcing of the system, such as a sudden spike in solar radiation, to decay by 63% (to 1/e of the original response).

2. The inherent gain, or amplification. That is, if Tau = 10, the amplification of a step increase in forcing will be x10. This is because, at Tau = 10, around one tenth of the excess above the equilibrium level is released per time period, so the new equilibrium level must be 10 times higher than the forcing before the energy output equals the energy input.

I previously estimated Tau from global temperature series, simply from the correlation between successive temperature values, a. The Tau is then given by:

Tau = 1/(1-a)
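As a sketch of that estimate, using synthetic data rather than the actual temperature series: generate an AR(1) series with a known Tau, then recover it from the lag-one autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 5.0
a = 1 - 1 / tau_true        # AR(1) coefficient implied by Tau = 1/(1 - a)

x = np.zeros(50000)
for i in range(1, x.size):  # simple AR(1) recursion with unit noise
    x[i] = a * x[i - 1] + rng.normal()

a_hat = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-one autocorrelation
tau_hat = 1 / (1 - a_hat)
print(round(tau_hat, 1))    # recovers a value close to tau_true = 5
```

The same two lines — lag-one correlation, then Tau = 1/(1-a) — applied to an observed series give the empirical Tau estimates discussed below.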

Pochas posted the theoretical estimate of the time constant, Tau, below, which results from a reasonable assumption of an ocean mixed-zone depth of 100 m.

The input – output = accumulation equation is:

q sin ωt /4 – kT = nCp dT/dt

where q = input flux signal amplitude, W/m^2. The factor 4 corrects for the disk-to-sphere surface geometry.

k = coefficient relating thermal flux to temperature (see below), J/(sec m^2 ºK).

T = ocean temperature, ºK.

n = mass of ocean, grams.

Cp = ocean heat capacity, J/(g ºK).

t = time, sec or years.

Rearranging to standard form (terms with T on the left side):

nCp dT/dt + kT = q sin ωt /4

Divide by k

nCp/k dT/dt + T = q sin ωt /(4k)

The factor nCp/k has units of time and is the time constant Tau in the solution via Laplace Transform of the above.

n = mass of water 100 m deep and 1 m^2 surface area = 1.0E8 grams.

Cp = Joules to heat 1 gram of water by 1 ºK = 4.187 J/(g ºK).

k = thermal flux equivalent to blackbody temperature, J/(m^2 sec ºK).

Solution after inverse transform, after transients die out:

Amplitude Ratio = 1/(1 + ω²Tau²)^½

where ω = frequency, rad/yr

Derivation of k from the Stefan-Boltzmann equation

q = σT^4

Differentiating: k = dq/dT = 4σT^3

Evaluating at T = blackbody temperature of the Earth, -18 ºC = 255 ºK:

k = 4 (5.67E-8) 255^3 ≈ 3.8 J/(sec m^2 ºK)

Calculating Time Constant Tau

Tau = nCp/k = 1.0E8 (4.187) / 3.8 = 1.10E8 sec

Tau = 1.10E8 / 31,557,000 sec/yr = 3.49 yr
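The arithmetic can be checked numerically, including the amplitude ratio from the solution above evaluated at the 11-year solar-cycle frequency (using -18 °C ≈ 255 K for the blackbody temperature):

```python
import numpy as np

sigma = 5.67e-8             # Stefan-Boltzmann constant, W/(m^2 K^4)
T_bb = 255.0                # effective blackbody temperature of the Earth, K
k = 4 * sigma * T_bb**3     # linearised radiative response, ~3.8 W/(m^2 K)

n = 1.0e8                   # grams in a 100 m x 1 m^2 water column
Cp = 4.187                  # specific heat of water, J/(g K)
tau_yr = n * Cp / k / 31_557_000   # time constant in years

omega = 2 * np.pi / 11      # solar-cycle frequency, rad/yr
amp_ratio = 1 / np.sqrt(1 + (omega * tau_yr) ** 2)
print(round(k, 2), round(tau_yr, 1), round(amp_ratio, 2))
```

With Tau near 3.5 years, the mixed layer passes under half the amplitude of an 11-year forcing cycle, which is the damping the derivation implies.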


The figure of Tau = 3.5 years is in reasonable agreement with the empirical estimates of 6 to 10 years derived from the autocorrelation of the actual global surface temperature data. The effective mixed zone may be closer to 150 m, which would account for part of the difference.

This confirms another prediction of the theory that amplification of solar forcing can be explained entirely by the accumulation of heat, without recourse to feedbacks from changing concentrations of greenhouse gases.

Critics of Niche Science

When the MSM reports on the commercialization of Ni-H cold fusion energy generation, it sees parallels to the scientific treatment of AGW sceptics, citing “follow the money”.

If this new technology is real, it should be easy to prove and past failures – and outside agendas – shouldn’t stand in the way. Still, scientific discovery is expensive and money is often the X factor. Fortunes and reputations are made and lost based on results. Orthodoxies develop that discredit ideas posing a threat to the money flow, whether from government sources or from private investment. In the debate over “global warming,” scientists and politicians alike have resorted to repeating the mantra “the science is settled” as a means of freezing out researchers whose climate findings undermine public acceptance of the warming-planet credo and jeopardize billions in research funds.

This could be regarded as an example of basic monopoly theory, where the producers have an advantage in getting together and dividing up the higher profits. However, the more cartel members there are, the more difficult it is to maintain the “consensus”, and the smaller each slice of the pie. This is why mainstream climate science has tried to limit and marginalize skeptical scientists who “undercut” the alarmist claims.

As always, the market will decide, but not without considerable sacrifice and dedication over a long period on the part of the skeptics and the truly innovative. Another solution would be to discourage academic cartels by opening peer review and grant applications to a wider range of participants.

51 Australian Labor/Green Party Achievements

And the list of failures keeps growing.

1. Carbon Tax Lie – ‘There will be no carbon tax under the Government I lead.’
2. NBN – $50 billion Telstra subsidy
3. Building the Education Revolution – The school halls fiasco
4. Home Insulation Plan (Pink Batts) – Dumped after 3 deaths, and x house fires.
5. Citizens Assembly – Dumped
6. Cash for Clunkers – Dumped
7. Hospital Reform – Nothing
8. Digital set-top boxes – almost redundant technology that is cheaper at Harvey Norman
9. Emissions Trading Scheme – Abandoned
10. Mining Tax – Continuing uncertainty for our miners
11. Livestock export ban to Indonesia: – over-reaction, without trouble-shooting, that almost sent an industry broke
12. Detention Centres – Riots and cost blowouts
13. East Timor ‘solution’ – Announced before agreed
14. Malaysia ‘solution’ – In shambles
15. Manus Island ‘solution’ – On the backburner
16. Computers in Schools – $1.4 billion blow out; less than half delivered
17. Cutting Red Tape – 12,835 new regulations, only 58 repealed
18. Asia Pacific Community – Another expensive Rudd frolic. Going nowhere
19. Green Loans Program – Abandoned. Only 3.5% of promised loans delivered
20. Solar Homes & Communities plan – Shut down after $534 million blow out
21. Green Car Innovation Fund – Abandoned
22. Solar Credits Scheme – Scaled back
23. Green Start Program – Scrapped
24. Retooling for Climate Change Program – Abolished
25. Childcare Centres – Abandoned. 260 promised, only 38 delivered
26. Take a “meat axe” to the Public Service – 24,000 more public servants and growing!
27. Murray Darling Basin Plan – back to the drawing board
28. 2020 Summit – Meaningless talkfest
29. Tax Summit – Deferred and downgraded
30. Population Policy – Sets no targets
31. Fuel Watch – Abandoned
32. Grocery Choice – Abandoned
33. $900 Stimulus cheques – Sent to dead people and overseas residents
34. Foreign Policy – In turmoil with Rudd running riot
35. National Schools Solar Program – Closing two years early
36. Solar Hot Water Rebate – Abandoned
37. Oceanic Viking – Caved in
38. GP Super Clinics – 64 promised, only 11 operational
39. Defense Family Healthcare Clinics – 12 promised, none delivered
40. Trade Training Centres – 2650 promised, 70 operational
41. Bid for UN Security Council seat – An expensive Rudd frolic
42. My School Website – Revamped but problems continue
43. National Curriculum – States in uproar
44. Small Business Superannuation Clearing House – 99% of small businesses reject it
45. Indigenous Housing Program – way behind schedule
46. Rudd Bank – Went nowhere
47. Using cheap Chinese fabrics for ADF uniforms – Ditched
48. Innovation Ambassadors Program – Junked
49. Six new Submarines – none operational
50. Copenhagen Climate Summit – Rudd took 112 advisors on a big “carbon footprint”, for nothing.
51. Took a $20 billion surplus and turned it into a $57 billion deficit – a $77 billion turnaround. Took the $60 billion ‘Futures Fund’ and turned it into a $100 billion debt – a $160 billion turnaround.

From Tomy Gomme of the Climate Sceptic Party

Nickel Hydrogen LENR Theories

The main candidate theories for low energy nuclear reactions involving Nickel-Hydrogen:

Widom-Larsen Theory

Polyneutron Theory of Fisher

Piantelli Hydride Capture Theory

Review of Possible Cold Fusion Mechanisms

Solar Supersensitivity – a new theory?

Do the results described here and here constitute a new theory? What is the relationship to the AGW theory? What is a theory anyway?

The models I have been exploring, dubbed solar supersensitivity, account for a wide range of global temperature observations: the dynamics of recent and paleoclimate variations, the range of glacial/interglacial transitions, the recent warming coinciding with the Grand Solar Maximum, and the more recent flattening of warming.

They make sense of the statistical character of the global temperature time series as an ‘almost random walk’, the shift in phase between solar insolation and surface temperature, and the autoregressive structure of temperature series in the atmosphere. These are all dynamic phenomena.

Conventional global warming models, based on atmospheric radiative physics, explain static phenomena such as the magnitude of the greenhouse effect, and are used to estimate the equilibrium climate sensitivity. The climate models, however, have very large error bands around their dynamics, and describe shorter-term dynamics as chaotic. Does this mean they are primarily theories of climate statics, while supersensitivity is concerned with dynamics?

No. I see no reason why the accumulation theory could not be reconciled with coupled ocean/atmosphere general circulation models, once the parameterisation of these models is corrected, particularly the gross exaggeration of ocean mixing. Similarly there is no reason a model based on the accumulation of solar anomaly could not recover equilibrium states.

The difference between AGW theory and solar supersensitivity (SS) might lie more in the mechanisms. SS treats the ocean as a conventional greenhouse: shortwave solar insolation is easily absorbed, but the release of heat by convection at the ocean/atmosphere boundary is suppressed, gradually warming the interior. In contrast, conventional AGW theory focuses more on mechanisms in the atmosphere, the direct radiative effects of gases and water vapor, and combines many theories: CO2 cycling, water relations, meteorology.

If mechanisms differentiate the theories, then the issue is the relative balance of the two mechanisms. Which is more responsible for recent warming? Which is more responsible for paleoclimate variations?

From basic recurrence matrix theory, the subsystem with the largest eigenvalue will dominate the long-term dynamics, suggesting the ocean-related, low-loss accumulative mechanisms would dominate the short time-scale, high-loss, low-sensitivity atmospheric mechanisms.

If this view is correct, then what we have is a completion of an incomplete theory that promises to increase understanding and improve prediction by collapsing the range of uncertainty in the current crop of climate models.

Solar Supersensitivity – a worked example

Below is a worked example of the theory of high solar sensitivity, supersensitivity if you will, explained in detail in manuscripts here and here.

The temperature increase of a body of water is:

T = Joules/(Specific Heat water x Mass)

The accumulation of 1 Watt per sq meter on a 100 metre column of water for one year gives an expected temperature increase of

T = 32 x 10^6/(4.2 x 10^8)

= 0.08 C

Given that only about one third of top-of-atmosphere radiation reaches the surface, and a solar cycle duration of 11 years, the increase in temperature due to the solar cycle will be:

Ta = 0.08 x 11 x 0.3 = 0.26 C

The expected temperature increase for the direct forcing (no accumulation), using the Planck relationship of 0.3 C per W/m^2, would be 0.09 C. So the gain is:

Gain = Accumulated/Direct = 0.26/(0.3×0.3) = 3
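The worked example can be reproduced in a few lines, using the same assumed numbers (100 m water column, roughly 30% of radiation reaching the surface, Planck response of 0.3 C per W/m^2):

```python
sec_per_year = 3.2e7               # approximate seconds in a year
joules = 1.0 * sec_per_year        # 1 W/m^2 accumulated for one year
heat_cap = 4.2 * 1.0e8             # J/K for a 100 m x 1 m^2 water column
dT = joules / heat_cap             # ~0.08 C per year of accumulation

Ta = dT * 11 * 0.3                 # 11-year cycle, ~30% reaching the surface
direct = 0.3 * 0.3                 # Planck response to 0.3 W/m^2 at the surface
gain = Ta / direct                 # roughly 3
print(round(dT, 2), round(Ta, 2), round(gain, 1))
```

Carrying more decimal places than the text gives a gain nearer 2.8 than 3; the rounding in the prose figures accounts for the small discrepancy.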

For a longer accumulation of solar anomaly, from a succession of strong solar cycles such as we saw late last century, the apparent amplification will be greater. From the AR correlation of surface temperature you get an estimate of gain of 10. But this is only apparent amplification: as the system is accumulative, the calculated gain increases with the duration of the forcing. For long time scales, gain (and hence solar sensitivity) approaches infinity, a singularity, and ceases to be useful. Hence the term ‘supersensitivity’. For long periods the non-linearity of the Stefan-Boltzmann law will become dominant.

Sensitivity then cannot be represented in Watts/K (or K/Watt); it must be expressed as a rate, in units like K/Watt/year.

Extend this calculation to 1000 years and a small solar forcing can cause a transition between ice ages with no other input. The role of GHGs, water vapor and albedo in this theory is to maintain the heat state of the system; e.g. solar forcing increases temperature, which causes CO2 concentrations to change. But this does not mean an increase in CO2 ‘necessarily’ increases temperature, because the system is being heated by the accumulation of solar anomaly. The reason a forcing from CO2 apparently has very low sensitivity, but solar very high, would be due to other issues that I haven’t worked through fully yet (coming soon).

Examples of Scientific Beliefs

Some scientific beliefs are wrong on some of the facts all of the time. Beliefs such as the ether filling space, Lamarckism, the four humours making up everything, the Sun circling the Earth, and canals on Mars all had the value of explaining some of the facts. They were pushed aside by new beliefs that explained more of the facts.

Some scientific beliefs are wrong on all of the facts some of the time. These include nine planets in the Solar System, and atoms not being divisible into anything else.

Some scientific beliefs are wrong on all of the facts all of the time. These include a young Earth, Vitalism (or Chi), bioethanol, and the “casualties of global warming”, including the golden toad, massive extinctions, Possums, Polar Bears, Tuvalu, the Maldives, snow, rain, the Great Barrier Reef…

Inspired by A Short History of Climate Science Hysteria by Gavin Atkins.

The Problem with Renewables

Jim Hansen made his opinion of renewable energy clear in his latest newsletter.

[M]uch less than worthless. If you drink the kool-aid represented in the right part of Fig. 7, you are a big part of the problem.

…a humiliating assessment of the poster child of the Australian Intelligentsia coming from the author of the global warming scare.

Steve McIntyre shows more kindness than Jim in reviewing two other articles on renewables by representatives of green factions, concluding that proponents of renewable energy are out of touch with reality.

Hansen appears to conclude that no policy maker could be so stupid as to actually believe such drivel, and therefore is forced to postulate that they have ulterior and unscrupulous motives.

But what is the problem with renewables? Andrew Bolt crunches the numbers:

[Let’s] do some sums to see how such a plant would compare with some old coal-fired power station such as Hazelwood, which produces 100 times more power: … it would take more than 100 plants similar to Gemasolar, costing more than $30 billion, to replace the Greens’ pet hate, the dirty coal-fired plant Hazelwood.

Global Warming Temperature Trends

Roy Spencer posted the following comparison between the 20th Century runs from most (15) of the IPCC AR4 climate models, and Levitus observations of ocean warming during 1955-1999. Here are the best 4 models:

The accuracy of the other models is far worse.

In Roy’s assessment:

Previous investigators (as well as the IPCC AR4 report) have claimed that warming of the oceans is “consistent with” anthropogenic forcing of the climate system.

The actual rate of heat accumulation (the area between the green dots and the vertical line) is much smaller than in any of the models.

As Roy notes, it is generally believed that all of the increase in ocean heat is from increasing GHGs.

It should be mentioned the above analysis assumes that there has been no significant natural source of warming during 1955-1999. If there has, then the diagnosed climate sensitivity would be even lower still.

As I show here, the observations are consistent with the accumulation of heat from an excess of 0.2W/m^2 forcing by the Sun over the period of the Grand Solar Maximum. In other words, the increase caused by GHGs may not even be detectable.

An increase of 0.1 W/m2 for one year would move 3.1×10^6 Joules of heat (31×10^6 sec in a year) into the ocean, heating the mixed zone to 150 m by about 0.005 K (at 4.2 J/gK), producing a rate of global temperature increase of roughly 0.05 K per decade.
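That back-of-envelope figure can be checked directly (a 150 m mixed layer and a hypothetical 0.1 W/m^2 excess forcing, as in the text):

```python
sec_per_year = 3.1e7            # approximate seconds in a year
forcing = 0.1                   # W/m^2 excess solar forcing
joules = forcing * sec_per_year         # ~3.1e6 J per m^2 per year
mass = 150 * 1.0e6              # grams in a 150 m x 1 m^2 water column
cp = 4.2                        # specific heat of water, J/(g K)
dT_per_year = joules / (mass * cp)      # warming of the column per year
print(round(10 * dT_per_year, 2))       # rate in K per decade
```

This works out to roughly 0.05 K per decade, comparable to the observed ocean warming rate discussed above.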

Accumulation Theory of Solar Influence

The physical structure of the oceans and atmosphere entails very long equilibrium dynamics, due to the slow accumulation of heat in the land and ocean. An ARMA analysis evaluates the potential of accumulation of solar anomaly to explain the global temperature changes over glacial/interglacial and recent time-frames.

Click image above for animation of the accumulation model for the 1950-2011 period.

The results of an early version of the accumulation theory are here.

Contrary to the consensus view, the historic temperature data displays high sensitivity (x10 gain) to solar variations when related by slow equilibration dynamics. A variety of results suggest that inappropriate specification of the relationship between forcing and temperature may be responsible for previous studies finding low correlations of solar variation to temperature. The accumulation model is a feasible alternative mechanism for explaining both paleoclimatic temperature variability and contemporary warming without recourse to increases in heat-trapping gases produced by human activities.

There are no valid grounds to dismiss the potential domination of 20th century warming by solar variations.

UPDATE: David Hagen alerted me to a post at WUWT where sun-spots were accumulated from 1500.

The sunspot record needs to be examined in its entirety rather than as individual sunspot cycles. The method to do this is by calculating the accumulated departure from the average of all the sunspot numbers of the entire 500-year index. This reveals the cooling during the Maunder Minimum and the current “global warming”. The current warming of 15 watts per square meter began in 1935, based on the sunspot record.

ARIMA theory of climate change

I have just uploaded a manuscript to the preprint archive viXra. ViXra is an interesting alter-ego to the other preprint archive arXiv. The goals of viXra are:

It is inevitable that viXra will therefore contain e-prints that many scientists will consider clearly wrong and unscientific. However, it will also be a repository for new ideas that the scientific establishment is not currently willing to consider. Other perfectly conventional e-prints will be found here simply because the authors were not able to find a suitable endorser for the arXiv or because they prefer a more open system. It is our belief that anybody who considers themselves to have done scientific work should have the right to place it in an archive in order to communicate the idea to a wide public. They should also be allowed to stake their claim of priority in case the idea is recognised as important in the future.

Natural CO2 isotope changes mimic burning fossil fuels

Global Emission of Carbon Dioxide: The Contribution from Natural Sources by atmospheric physicist Professor Murry Salby, Chair of Climate, Macquarie University, speaking at the Sydney Institute on his conversion from agnostic to skeptic, prompted by his recent results on the natural origin of the declining 13C/12C ratio in atmospheric carbon dioxide.

In his paper due out in early 2012, he claims results that invalidate the main conclusions of the IPCC 2007 report, the AR4: previous research showing humans are responsible for the increase in CO2 is wrong; increasing CO2 is not ‘driving the climate bus’ but is very much in the back seat to temperature; and natural processes alter isotopic ratios in a way similar to that expected from burning fossil fuels.

Here for abstract and cable times.