Cherry Picking — The paper formerly known as “Voodoo Correlations in Social Neuroscience”

Edward Vul, Christine Harris, Piotr Winkielman, and Harold Pashler have published research that provides useful insights into the practice of ‘cherry picking’, the selection of desirable results that leads to exaggerated significance. They also demonstrate the effect in a comprehensive survey of studies in the field of social neuroscience.

To further ‘pin the thumbs of researchers to the table’, and to ensure they are noticed and not ignored, they name all the studies explicitly, listing those that exaggerate significance and those that don’t. This is a great example of how not to win friends while influencing people, and gets 5 stars from me. Here is the back story on the brow-chewing response from their colleagues.

The statistical basis of the paper is this: the strength of the correlation observed between measures A and B (rObsA,ObsB ) reflects not only the strength of the relationship between the traits underlying A and B (rA,B), but also the reliability of the measures of A and B (reliabilityA and reliabilityB). In general,

rObsA,ObsB = rA,B * sqrt(reliabilityA * reliabilityB)

Since the maximum of rA,B is 1 (a perfect correlation), the reliabilities of the two measures place an upper bound on the correlation that can be observed between them (Nunnally, 1970).

The problem is that many reported correlations are implausibly high. For example, one study reported a correlation of r=.96 between brain activity and a measure of subjects’ proneness to anxiety reactions (Carver and White, 1994; the reference was omitted from the paper). Measures of personality and emotion evidently do not often have reliabilities greater than .8, and neuroimaging measures seem typically to be reliable at .7 or less. Even assuming a perfect underlying correlation, the maximum observable correlation would be sqrt(.8 * .7), or about .74. A correlation of .96 is therefore impossible.
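To make the bound concrete, here is a minimal sketch in R; the reliability figures are the illustrative values above, not estimates from any particular study.

# Upper bound on an observable correlation given the reliabilities of the two measures:
# rObs = rTrue * sqrt(relA * relB), so with rTrue = 1 the bound is sqrt(relA * relB).
max_observable_r <- function(rel_a, rel_b, r_true = 1) {
  r_true * sqrt(rel_a * rel_b)
}

max_observable_r(0.8, 0.7)                # ~0.748, the ceiling discussed above
max_observable_r(0.8, 0.7, r_true = 0.6)  # ~0.45, the expected observed r if the true correlation is 0.6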

The result must therefore have been achieved by some process of ‘data peeking’ or ‘cherry picking’: selecting only the results whose correlations exceed some threshold while discarding those that are uncorrelated.

An analogous situation in climate science is the selection of a subset of tree-ring proxies by calibration against global temperatures. This procedure alone can produce a hockey-stick shaped temperature history from random data series. It should be possible to estimate the expected correlation given (1) the reliability of tree-ring signals for given climate changes and (2) the reliability of the climate measurements. The square root of the product of these two numbers should set an upper limit on the calibration correlation. If published calibration statistics exceed this figure, then a sample of poorly correlated trees must have been discarded in order to enhance the correlation.
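As a rough illustration of the selection effect (not a reconstruction of any published method), the R sketch below generates purely random ‘proxy’ series, keeps only those that happen to correlate with a hypothetical calibration record, and averages them; the selected average then tracks the calibration signal even though every input is noise.

# Sketch: selecting randomly correlated series inflates the calibration statistic.
set.seed(1)
n_years   <- 150
n_proxies <- 1000
target <- c(rep(0, 100), seq(0, 1, length.out = 50))   # hypothetical instrumental record with a late rise

proxies <- matrix(rnorm(n_years * n_proxies), nrow = n_years)   # pure noise "tree rings"
cal     <- 101:150                                              # calibration window (last 50 years)
r_cal   <- apply(proxies[cal, ], 2, cor, y = target[cal])       # correlation of each proxy with the target

picked <- proxies[, r_cal > quantile(r_cal, 0.95)]   # keep only the best-correlated 5%
recon  <- rowMeans(picked)                           # the "reconstruction" built from noise
cor(recon[cal], target[cal])                         # an inflated calibration correlation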

Using this process it should be possible to quantify the degree of ‘cherry-picking’ that has taken place, resolving one of the main contentions that skeptics have had with this field.

HT to Geoff Sherrington.


References:

Nunnally, JC. Introduction to Psychological Measurement. New York: McGraw-Hill; 1970.

March 2009 Global Temperatures from RSS

It’s the time of the month for the competition to guess the change in global temperatures for the previous month. Currently Jan Pompe is leading. Luck or insight? Time will tell.

Results from RSS normally are released on the 5th of the month. Voting is now open for March. Place your vote below. All with at least one correct guess will be listed.

Update: RSS is out, and down.


Green Technology Mobsters

Jeremy Clarkson on the reaction to his review of the Tesla Roadster.

I fear that what we are seeing here is much the same thing professors see when they claim there is no such thing as man-made global warming. Immediately, they are drowned out by an unseen mob, and then their funding dries up. It’s actually quite frightening.

The problem is, though, that really and honestly, the US-made Tesla works only at dinner parties. Tell someone you have one and in minutes you will be having sex. But as a device for moving you and your things around, it is about as much use as a bag of muddy spinach.

Redoubt now Ultraplinian

The rumbling Alaskan volcano Redoubt has exploded, producing a stratosphere-reaching plume in excess of 60,000 ft (17 km). An eruption is termed ‘ultraplinian’ if its ejecta reaches the stratosphere, about 10 km in height. Dust and gases in the stratosphere are known to depress the global temperature for up to a few years after an eruption. The extent of cooling depends on the amount and type of material, the size and duration of the eruption, and the latitude (high-latitude eruptions like Redoubt are less effective than lower-latitude ones).

The plume could be seen easily on infrared here (top center of first radar).


Nir Shaviv explains climate sensitivity

Transcript of the introduction of a talk by Nir Shaviv, skeptical astrophysicist, at the 2009 Heartland Conference, entitled New solar climate links and their implications to our understanding of climate change (listen to the audio).

In the introduction, Nir explains the concept of climate sensitivity very clearly. After this, one can understand the origin of such claims as ‘if the hockey stick is wrong, then sensitivity is higher than we thought and global warming will be worse’.

OK I am glad to see the number of people who have evaporated is not very large in this last 5 min break.

To answer a question that was asked before I think one of the problems with the models, first of all with respect to feedback, it is cloud cover. That because you cannot resolve very small scales the cloud cover physics is basically parameterized with a recipe, and because its parameterized with a recipe, whichever recipe you decide to use, whatever you cook depends on the recipe that you use.


Global Temperature Change and Geomagnetic Field Intensity

Alan Cheetham drew my attention to a post on his blog showing the close relationship between geomagnetic field strength and the rate of temperature change (warming in the N Hemisphere and cooling in the S Hemisphere). The idea is that the effect of cosmic rays on the Earth’s temperature, by seeding low clouds, will be most apparent where the magnetic field is weakest. Maps of the geomagnetic field show an uncanny correlation with ‘recent warming’ (UAH 1978-2006):


Weird Equations

This is a brilliant riff on the financial crisis and how it got to where it is today.

Some of the things you learn in an advanced maths degree are that 1+1 is not always 2 (as in addition modulo 2) and that big numbers can be small numbers (like the measure of a Cantor set). Irrelevant, but by way of introduction: small numbers, like the size of executive bonuses at AIG relative to the scale of the company bailout, can be large numbers when it comes to their impact, as shown by the panicky, half-baked legislation passing into US law.

Here are some selected paragraphs from the Rolling Stone article that really put the bat to the numbers.

Nor did anyone mention that when AIG finally got up from its seat at the Wall Street casino, broke and busted in the afterdawn light, it owed money all over town — and that a huge chunk of your taxpayer dollars in this particular bailout scam will be going to pay off the other high rollers at its table. Or that this was a casino unique among all casinos, one where middle-class taxpayers cover the bets of billionaires.

The mistake most people make in looking at the financial crisis is thinking of it in terms of money, a habit that might lead you to look at the unfolding mess as a huge bonus-killing downer for the Wall Street class. But if you look at it in purely Machiavellian terms, what you see is a colossal power grab that threatens to turn the federal government into a kind of giant Enron — a huge, impenetrable black box filled with self-dealing insiders whose scheme is the securing of individual profits at the expense of an ocean of unwitting involuntary shareholders, previously known as taxpayers.

The problem was, none of this was based on reality. “The banks knew they were selling crap,” says a London-based trader from one of the bailed-out companies. To get AAA ratings, the CDOs relied not on their actual underlying assets but on crazy mathematical formulas that the banks cooked up to make the investments look safer than they really were. “They had some back room somewhere where a bunch of Indian guys who’d been doing nothing but math for God knows how many years would come up with some kind of model saying that this or that combination of debtors would only default once every 10,000 years,” says one young trader who sold CDOs for a major investment bank. “It was nuts.”

The following February, when AIG posted $11.5 billion in annual losses, it announced the resignation of Cassano as head of AIGFP, saying an auditor had found a “material weakness” in the CDS portfolio. But amazingly, the company not only allowed Cassano to keep $34 million in bonuses, it kept him on as a consultant for $1 million a month. In fact, Cassano remained on the payroll and kept collecting his monthly million through the end of September 2008, even after taxpayers had been forced to hand AIG $85 billion to patch up his fuck-ups. When asked in October why the company still retained Cassano at his $1 million-a-month rate despite his role in the probable downfall of Western civilization, CEO Martin Sullivan told Congress with a straight face that AIG wanted to “retain the 20-year knowledge that Mr. Cassano had.” (Cassano, who is apparently hiding out in his lavish town house near Harrods in London, could not be reached for comment.)

Then, in January 2009, the company did it again. After all those years letting Cassano run wild, and after already getting caught paying out insane bonuses while on the public till, AIG decided to pay out another $450 million in bonuses. And to whom? To the 400 or so employees in Cassano’s old unit, AIGFP, which is due to go out of business shortly! Yes, that’s right, an average of $1.1 million in taxpayer-backed money apiece, to the very people who spent the past decade or so punching a hole in the fabric of the universe!

The bonuses are a nice comic touch highlighting one of the more outrageous tangents of the bailout age, namely the fact that, even with the planet in flames, some members of the Wall Street class can’t even get used to the tragedy of having to fly coach. “These people need their trips to Baja, their spa treatments, their hand jobs,” says an official involved in the AIG bailout, a serious look on his face, apparently not even half-kidding. “They don’t function well without them.”

While the rest of America, and most of Congress, have been bugging out about the $700 billion bailout program called TARP, all of these newly created organisms in the Federal Reserve zoo have quietly been pumping not billions but trillions of dollars into the hands of private companies (at least $3 trillion so far in loans, with as much as $5.7 trillion more in guarantees of private investments). Although this technically isn’t taxpayer money, it still affects taxpayers directly, because the activities of the Fed impact the economy as a whole. And this new, secretive activity by the Fed completely eclipses the TARP program in terms of its influence on the economy.

None other than disgraced senator Ted Stevens was the poor sap who made the unpleasant discovery that if Congress didn’t like the Fed handing trillions of dollars to banks without any oversight, Congress could apparently go fuck itself — or so said the law. When Stevens asked the GAO about what authority Congress has to monitor the Fed, he got back a letter citing an obscure statute that nobody had ever heard of before: the Accounting and Auditing Act of 1950. The relevant section, 31 USC 714(b), dictated that congressional audits of the Federal Reserve may not include “deliberations, decisions and actions on monetary policy matters.” The exemption, as Foss notes, “basically includes everything.” According to the law, in other words, the Fed simply cannot be audited by Congress. Or by anyone else, for that matter.

For the rest of 2008, the numbers (in the weekly H4 reports) remained similarly in the stratosphere, the Fed pumping as much as $125 billion of these short-term loans into the economy — until suddenly, at the start of this year, the number drops to nothing. Zero.

The reason the number has dropped to nothing is that the Fed had simply stopped using relatively transparent devices like repurchase agreements to pump its money into the hands of private companies. By early 2009, a whole series of new government operations had been invented to inject cash into the economy, most all of them completely secretive and with names you’ve never heard of. There is the Term Auction Facility, the Term Securities Lending Facility, the Primary Dealer Credit Facility, the Commercial Paper Funding Facility and a monster called the Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (boasting the chat-room horror-show acronym ABCPMMMFLF). For good measure, there’s also something called a Money Market Investor Funding Facility, plus three facilities called Maiden Lane I, II and III to aid bailout recipients like Bear Stearns and AIG.

When one considers the comparatively extensive system of congressional checks and balances that goes into the spending of every dollar in the budget via the normal appropriations process, what’s happening in the Fed amounts to something truly revolutionary — a kind of shadow government with a budget many times the size of the normal federal outlay, administered dictatorially by one man, Fed chairman Ben Bernanke. “We spend hours and hours and hours arguing over $10 million amendments on the floor of the Senate, but there has been no discussion about who has been receiving this $3 trillion,” says Sen. Bernie Sanders. “It is beyond comprehension.”

If you are interested in knowing more about Maiden I, II & III, here is a pdf complete with boxes and arrows.

Geomagnetic field variations and CRF climate

In probably the last post in this series on cosmic rays and recent warming, we look at two recent papers relating the position of the geomagnetic field to recent climate, most likely mediated by changes in cosmic ray flux.

Regional cosmic ray induced ionization and geomagnetic field changes by Kovaltsov and Usoskin examines regional effects on atmospheric ionization of the migration of the geomagnetic dipole axis over the last thousand years. The dipole migrated by 20 deg. of latitude and 180 deg. of longitude during the last 1000 years. This trajectory is compared with the cosmic ray flux (CRF) reconstructed from the cosmogenic isotope 14C from tree rings.

They present a picture of climate effects for two regions, Europe and the Far East. The variations for Europe show the familiar profile (inverted) of a Medieval Warm Period, a Little Ice Age and general warming over the last 200 years to the present. The picture for the Far East is for generally increasing warmth from about 1200 to the present.

Cosmic Ray Flux and the IPCC

Here is a roundup on the current IPCC thinking on cosmic rays and recent warming.

IPCC and Solar Correlations from ClimateAudit reviews the dismissal of a solar influence on climate in IPCC 1992, 1994 and 2001.

IPCC relied to some extent on MBH98 in dismissing these supposed relationships, but, given the defects in this specific area of MBH98 (as well as more general problems), alternative grounds for dismissal have to be sought if one repudiates MBH98. I’m not saying that such alternative grounds are not possible – merely that it is not prudent to rely on MBH98 in respect to taking a position on solar correlations. The other large issue is whether there are physical reasons why the efficacy of solar forcing (high-energy low-entropy at surface) might differ from the efficacy of additional CO2 forcing (low-energy high-entropy at altitude).

Evidence of cosmic rays causing decreased cloudiness and increased temperatures has continued to accumulate since 2001. For example, records of levels of the cosmogenic isotope 10Be, a product of particle collisions with atmospheric nitrogen and oxygen, show high correlations, as shown on Anthony Watts’ blog in a guest post by David Archibald, Beryllium 10 and climate.

Instead of wading through hundreds of papers for evidence of the Sun’s influence on terrestrial climate, all you have to do is look at this graph.

Further, Anthony posts that Cosmic Ray Flux and Neutron monitors suggest we may not have hit solar minimum yet, showing that neutron flux has been increasing over at least the last year and should continue to do so, suggesting colder weather to come.

What does the latest IPCC report say about cosmic rays? Working Group 1, the Physical Science Basis of Climate Change, Chapter 2: Changes in Atmospheric Constituents and in Radiative Forcing, contains the discussion. The often-repeated position is:

Whether solar wind fluctuations (Boberg and Lundstedt, 2002) or solar-induced heliospheric modulation of galactic cosmic rays (Marsh and Svensmark, 2000b) also contribute indirect forcings remains ambiguous.

The entire IPCC review is here; it concludes that the level of scientific understanding of cosmic ray influences is very low:

Many empirical associations have been reported between globally averaged low-level cloud cover and cosmic ray fluxes (e.g., Marsh and Svensmark, 2000a,b). Hypothesised to result from changing ionization of the atmosphere from solar-modulated cosmic ray fluxes, an empirical association of cloud cover variations during 1984 to 1990 and the solar cycle remains controversial because of uncertainties about the reality of the decadal signal itself, the phasing or anti-phasing with solar activity, and its separate dependence for low, middle and high clouds. In particular, the cosmic ray time series does not correspond to global total cloud cover after 1991 or to global low-level cloud cover after 1994 (Kristjánsson and Kristiansen, 2000; Sun and Bradley, 2002) without unproven de-trending (Usoskin et al., 2004). Furthermore, the correlation is significant with low-level cloud cover based only on infrared (not visible) detection. Nor do multi-decadal (1952 to 1997) time series of cloud cover from ship synoptic reports exhibit a relationship to cosmic ray flux. However, there appears to be a small but statistically significant positive correlation between cloud over the UK and galactic cosmic ray flux during 1951 to 2000 (Harrison and Stephenson, 2006). Contrarily, cloud cover anomalies from 1900 to 1987 over the USA do have a signal at 11 years that is anti-phased with the galactic cosmic ray flux (Udelhofen and Cess, 2001). Because the mechanisms are uncertain, the apparent relationship between solar variability and cloud cover has been interpreted to result not only from changing cosmic ray fluxes modulated by solar activity in the heliosphere (Usoskin et al., 2004) and solar-induced changes in ozone (Udelhofen and Cess, 2001), but also from sea surface temperatures altered directly by changing total solar irradiance (Kristjánsson et al., 2002) and by internal variability due to the El Niño-Southern Oscillation (Kernthaler et al., 1999). In reality, different direct and indirect physical processes (such as those described in Section 9.2) may operate simultaneously.

The direct RF due to increase in solar irradiance is reduced from the TAR. The best estimate is +0.12 W m–2 (90% confidence interval: +0.06 to +0.30 W m–2). While there have been advances in the direct solar irradiance variation, there remain large uncertainties. The level of scientific understanding is elevated to low relative to TAR for solar forcing due to direct irradiance change, while declared as very low for cosmic ray influences (Section 2.9, Table 2.11).

However, there is a much more abundant literature that is not discussed, as shown in the quote from David Archibald on Anthony’s blog. As of 2007, cosmic rays do not rate a mention in the Summary for Policymakers. The report repeatedly mentions the lack of knowledge of the strength of possible mechanisms, and restricts its references to a few of the most prominent papers. Even in the latest report, despite the burgeoning of results and data, cosmic rays have not been seriously considered as a possible cause of recent warming.

A high-rigor process ensures that all important factors have been seriously considered before strong claims are made. To prematurely settle on one factor, and reject another that is highly promising simply because it is not well understood or abundantly researched, is referred to as ‘looking for lost keys under the streetlight’ (because that’s where the light is).

Given that the largest uncertainty in climate change is the effect of clouds, cosmic rays have always had the potential to overturn the IPCC claims that carbon emissions are a big problem. When the shock of the revelation that CO2 has little or nothing to do with warming turns to disgust, people will justifiably ask who got us into this mess. The answer, in part, is the UN-sponsored IPCC.

How do you describe the dismissal of a solar connection on the basis of a single, now discredited hockey-stick study; dismissal of an emerging picture as ambiguous, and dismissal of a large, accessible, peer-reviewed literature as very low understanding? ‘Asinine‘ sounds like a good word.

Cosmic rays, cloud condensation nuclei and clouds – a reassessment using MODIS data

Steve Short sent me this curious paper: Cosmic rays, cloud condensation nuclei and clouds – a reassessment using MODIS data by J. E. Kristjansson, C. W. Stjern, F. Stordal, A. M. Fjæraa, G. Myhre, and K. Jonasson. They looked at the response of clouds to sudden decreases in the flux of galactic cosmic rays (GCR), known as Forbush decrease events, using cloud products from the space-borne MODIS instrument, which has been in operation since 2000. They focussed on pristine Southern Hemisphere ocean regions, where it is believed that a cosmic ray signal should be easier to detect than elsewhere.

This is an almost schizophrenic paper. The figures and table indicate highly significant results. For example, the response of clouds to GCR, averaged over all regions for the 18-day event, is particularly apparent in the cloud amount (CA) below.

[Figure: cloud amount (CA) response to Forbush decrease events, averaged over all regions]

Nevertheless, the conclusions were negative:

The overall conclusion, built on a series of independent statistical tests, is that no clear cosmic ray signal associated with Forbush decrease events is found in highly susceptible marine low clouds over the southern hemisphere oceans.


Using the oceans as a calorimeter to quantify the solar radiative forcing — the background

‘Global warming real’ say new studies, according to the Financial Times, February 18, 2005. Tim Barnett of the Scripps Institution of Oceanography crowed:

“The debate over whether there is a global warming signal is over now at least for rational people.”

The article records the team’s triumph:

A leading US team of climate researchers on Friday released “the most compelling evidence yet” that human activities are responsible for global warming. They said their analysis should “wipe out” claims by sceptics that recent warming is due to non-human factors such as natural fluctuations in climate or variations in solar or volcanic activity.

A related article reported from the reliable UK Met Office:

The world’s best efforts at combating climate change are likely to offer no more than a 50-50 chance of keeping temperature rises below the threshold of disaster, according to research from the UK Met Office.

The chilling forecast from the supercomputer climate model of the Met Office’s Hadley Centre for Climate Prediction and Research will provide a sobering wake-up call for governments around the world

Fast forward to 2008, and Dr. Nir J. Shaviv, 37, an associate professor at the Racah Institute of Physics of the Hebrew University of Jerusalem, claims the theory that solar and cosmic rays, not human activity, are the driving forces behind climate change is gaining traction. He recently published a paper on ocean heat flux called “Using the oceans as a calorimeter to quantify the solar radiative forcing”.

My main criticism of this paper is that it provides no background on studies of heat transport into oceans, hence this preamble. I wanted to find out what the various numeric values of heat flux into the ocean are, to attempt to reconcile the various views.

The paragraphs from Climate Change 2001: Working Group I: The Scientific Basis are worth reading; they seem to indicate how uncertain the ocean flux is, and flag the problems caused by clouds:

Improved resolution and understanding of the important facets of coupling in both atmosphere and ocean components of global climate models have also been proven to reduce flux imbalance problems arising in the coupling of the oceanic and the atmospheric components. However, it must still be noted that uncertainties associated with clouds still cause problems in the computation of surface fluxes.

Among the prominent web accessible works that record estimates of the fluxes are these:

Anthropogenic Warming of the Oceans: Observations and Model Results by David W. Pierce, Tim P. Barnett et al.

Analysis of PCM’s heat budget indicates the warming is driven by an increase in net surface heat flux that reaches 0.7 W/m2 by the 1990s; the downward longwave flux increases by 3.7 W/m2, which is not fully compensated by an increase in the upward longwave flux of 2.2 W/m2. Latent and net solar heat flux each decrease by about 0.6 W/m2.

Another paper, Southern Ocean warming due to human influence by John C. Fyfe, was equally impressed by a “remarkable agreement”:

I show that the latest series of climate models reproduce the observed mid-depth Southern Ocean warming since the 1950s if they include time-varying changes in anthropogenic greenhouse gases, sulphate aerosols and volcanic aerosols in the Earth’s atmosphere. The remarkable agreement between observations and state-of-the art climate models suggests significant human influence on Southern Ocean temperatures.

Nir Shaviv provides an alternative view:

Another interesting point to note is that the solar cycle induced variations in low-altitude cloud cover [Marsh and Svensmark, 2000b], presumably from CRF modulation over the oceans (where CCNs are most likely to be a bottleneck), give rise to a radiative imbalance which can be estimated [Marsh and Svensmark, 2000a; Shaviv, 2005] to be of order 1.1 ± 0.3 W/m2 over the past two cycles. Together, with the TSI variations, we find that the ratio between the cloud + TSI variations compared with the change in the solar constant is: 1.3 ± 0.4 W/m2. After comparing with equation (21), we can conclude that the heat flux going into the oceans is consistent with the apparent variations in the low-altitude clouds.

Clearly there is rough quantitative agreement between Barnett and Shaviv about the heat entering the oceans, but there is disagreement on the source. Nir’s evidence shows the influx is consistent with a net increase in solar forcing of 1.3 ± 0.4 W/m2. Tim Barnett’s proof of AGW is that without CO2, climate models cannot reproduce the warming seen in the oceans. But this assumes the latent and net solar heat flux have each decreased by about 0.6 W/m2. Presumably this assumption is crucial to his findings, and if the solar term were increased to +1 W/m2, the models would not indicate CO2 as the cause.

Nir Shaviv is already known for his contribution to the field of astrophysics, where he demonstrated that the Eddington luminosity is not a strict limit. He would seem to regard the proof of Prof Barnett, the rationalist, as a case of ‘mistaken identity’, where CO2 stands falsely accused. There is also an element of ‘proof by calling the other guy an idiot‘.

The real numeric disagreement between the camps appears to be over the extent of solar forcing: surely a measurable, resolvable dispute. Shaviv claims to have proved this enhanced forcing, a finding that seems to me to be worth a Nobel Prize nomination:

We find that the total radiative forcing associated with solar cycles variations is about 5 to 7 times larger than just those associated with the TSI (Total Solar Irradiance) variations, thus implying the necessary existence of an amplification mechanism, …
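A back-of-envelope check of that factor, using my own assumed numbers rather than anything taken from the paper: TSI varies by very roughly 1 W/m2 over a solar cycle, which converts to only a few tenths of a W/m2 of globally averaged forcing, so a total solar-related forcing of about 1.3 W/m2 implies an amplification in the quoted range.

# Assumed numbers for illustration only (not taken from Shaviv's paper).
tsi_cycle     <- 1.0                        # W/m2, rough peak-to-trough TSI variation over a cycle
tsi_forcing   <- tsi_cycle * (1 - 0.3) / 4  # ~0.18 W/m2: divide by 4 for geometry, allow ~0.3 albedo
total_forcing <- 1.3                        # W/m2, the cloud + TSI figure quoted earlier
total_forcing / tsi_forcing                 # ~7, roughly consistent with the quoted factor of 5 to 7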

Shaviv said in a recent interview:

“People will see that the apocalyptic forecasts are not coming true. Today there is no fingerprint attesting that carbon dioxide emission causes a rise in temperature.”

The statements from the project leader at the Scripps Institution of Oceanography that the debate is over for ‘rational’ people, and that their results should “wipe out” claims by sceptics, are already beginning to sound like famous last words.

Are Changes in Water Vapor Consistent with the Models

Recent conversations at ClimateAudit about observations of a steady fall in water vapor in the upper atmosphere have been the subject of some controversy because, contrary to the climate models, they appear to show a strong negative feedback from water vapor as greenhouse gases increase. Climate liberals argue the data are so flawed they should not even be discussed. Climate conservatives argue that whatever information is there should not be wasted simply because it does not agree with the models.

Andrew sends in a much more sensible approach, asking, along the lines of Douglass et al. 2007 and their study of upper tropical tropospheric temperatures:

“Are the models and data ‘consistent’?”

This is what one of Hansen’s models showed with regard to humidity changes for doubling CO2.

[Figure: modelled humidity changes for doubled CO2]

See here: http://icecap.us/images/uploads/GrayAppendix_PartA.doc
What do the observations show? I calculated the rates of change per century from 1979 to 2008: specific humidity changes (g/kg per century) and relative humidity changes (% per century). A sketch of the calculation follows the figures below.

[Figure: specific humidity changes, g/kg per century, 1979-2008]

[Figure: relative humidity changes, % per century, 1979-2008]

See Here: http://www.cdc.noaa.gov/cgi-bin/data/timeseries/timeseries1.pl
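Here is a minimal sketch of that rate-of-change calculation in R, using a synthetic monthly series in place of the actual NCEP extract; the series, its values and the resulting trend are placeholders for illustration.

# Sketch: linear trend of a monthly series, scaled to per-century units.
trend_per_century <- function(year, value) {
  fit <- lm(value ~ year)            # ordinary least-squares trend (per year)
  unname(coef(fit)["year"]) * 100    # scale the slope from per-year to per-century
}

# Synthetic data standing in for a 1979-2008 specific humidity series (g/kg):
year <- seq(1979, 2008, by = 1 / 12)
sh   <- 2.5 - 0.002 * (year - 1979) + rnorm(length(year), sd = 0.05)
trend_per_century(year, sh)          # about -0.2 g/kg per century for this synthetic series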
If the trends in the data are to be believed, the models are completely qualitatively wrong. Now, the humidity data have their problems, but that they could be so far off seems unlikely. Indeed, a recent paper, while noting the problems with the data, advised against hand-waving around the issue:

Paltridge, G., Arking, A. and Pook, M. Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data. Theoretical and Applied Climatology, DOI 10.1007/s00704-009-0117-x.

Cosmic Ray Basics

Variation in cosmic ray flux causes changes in cloud formation in the atmosphere by affecting the formation of droplets around charged nuclei, similar to the process of cloud seeding. The seeds are formed when high-energy cosmic ray particles collide with other atmospheric particles, producing a cosmic ray shower.

[Figure: products of a cosmic ray shower in the atmosphere]

There are two main products of cosmic ray showers illustrated in the figure: short term products such as electromagnetic radiation, and the various products of collisions (called spallation) that persist, can form nuclei for droplet condensation, and can be detected later to provide useful information on the intensity of past cosmic ray showers.

The process is probably not the same as the rapid formation of clouds above and around atomic blasts, such as in the clip below. My guess is that these clouds are formed by the blast wave. Still, it seems that a demonstration of high-energy particles affecting cloud formation in the atmosphere should be possible.

Particles crashing into nitrogen and oxygen in the atmosphere create isotopes. Most collisions are with the stable forms of the two most common gases, nitrogen (14N) and oxygen (16O). The most useful products are the unstable isotopes carbon-14 (14C) and beryllium-10 (10Be), because of their half-lives. 14C, with a half-life of 5,730 years, is useful over many millennia but not over longer time scales; 10Be, with a half-life of 1.36 million years, is useful over longer periods but less accurate in the short term.

Typically, materials are dated by comparing the ratio of the unstable isotope to the stable one: over time, the unstable isotope becomes a smaller fraction of the total. However, there are factors that can cause errors. In the case of carbon, because fossil fuels contain essentially no 14C, present-day deposition of 14C is anomalously low, and this needs to be taken into account when dating material less than a few hundred years old. Also, owing to the difference in atomic weight, plants preferentially absorb the lighter stable 12C form, which changes the isotopic ratios.
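A small sketch of the dating arithmetic in R, assuming ideal exponential decay and ignoring the fossil-fuel and fractionation corrections just mentioned:

# Age from an isotopic ratio under simple exponential decay:
# remaining fraction = 0.5^(age / half_life), so age = -half_life * log2(ratio).
age_from_ratio <- function(ratio_now, ratio_initial, half_life) {
  -half_life * log2(ratio_now / ratio_initial)
}

age_from_ratio(0.5,  1, 5730)   # one 14C half-life  -> 5730 years
age_from_ratio(0.25, 1, 5730)   # two 14C half-lives -> 11460 years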

However, isotopic ratios are an incredibly useful tool for studying the past because they can be quantified precisely, which helps to provide greater confidence in the causal role of cosmic ray flux in past climate changes. While the sediments from which isotopes are recovered are still subject to confounding factors, my impression is that biological indicators such as tree-ring width and density have even more uncertainties associated with them.

Probability of the Cosmic Ray Flux Theory of Climate Change

Should we believe the cosmic ray flux theory (CRF)? Here I attempt to answer this question quantitatively, by calculating the strength of evidence so-far presented for CRF as a major forcing factor in climate change. Specifically we need to ask, what is the probability of being wrong about CRF? This can be calculated by combining the significance values of independent lines of evidence.

Below I have started calculating and tabulating the P values. The first 8 rows were worked out from the difference of means from Shaviv’s paper, with and without CRF. I have a sense that independence of evidence can be judged by the manner in which CRF or its response is measured, so I have listed that in the table. At the long time scales I think CRF is estimated using an isotope of iron in meteorites (Fe). Over medium periods 10Be was used, while at shorter time scales the climate sensitivity was calibrated against the solar output (TSI).

I take this roughly as three independent sources of evidence, as follows:

Period        P value   Time scale (yrs)   Indicator
Phanerozoic   0.34      500,000,000        Fe/temp
Cretaceous    0.19      50,000,000         Fe/temp
Eocene        0.56      20,000,000         Fe/temp
LGM           0.25      10,000             10Be/temp
20thCentury   0.08      100                10Be/temp
SolarCycle    0.08      ~11                TSI/temp
CombinedC     0.09
CombinedLM    0.08
UK20thCent    0.01      50                 Neutron/cloudiness
Forbush       0.05      0.1                Neutron/cloudiness
The last two lines are the effect of CRF, as measured by neutron flux, on cloudiness, from Empirical evidence for a nonlinear effect of galactic cosmic rays on clouds (2006) by R. Giles Harrison and David B. Stephenson. This paper finds a variation of 20% in cloudiness between the maxima and minima of CRF. The effect is detectable even at the shortest timescale of a Forbush event, a sudden and transient reduction in cosmic rays lasting a few days.

The combination of independent probabilities is simply their product. The probability of the theory that CRF affects climate is given by four probabilities, multiplied together:

0.33 × 0.15 × 0.08 × 0.01 ≈ 4 × 10^-5, or about 4 sigma

Even a conservative estimate where some results are ignored provides a 4 sigma significance for the CRF theory. This level of significance is typical, nay expected in physics, while climate science is lucky to achieve 2 sigma, or around 95% confidence. The numbers show that the probability the CRF theory is wrong is very low indeed. In other words, the CRF theory has a 0.004 % or 1 in 25,000 chance of being wrong, so far.
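A short R sketch of that arithmetic: multiply the P values, treated as independent, and convert the product to an equivalent one-sided normal ‘sigma’ level (the one-sided convention is my assumption).

# Combine the P values by multiplication (as above) and express the result as a sigma level.
p_values   <- c(0.33, 0.15, 0.08, 0.01)
p_combined <- prod(p_values)                  # ~4e-05
sigma      <- qnorm(p_combined, lower.tail = FALSE)
c(p_combined = p_combined, sigma = sigma)     # roughly 4e-05 and ~3.9 sigma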

The evidence shows CRF forcing climate change at most time scales. In contrast, CO2 is uncorrelated at both the long and short time scales, and at the medium scales the direction of causation is uncertain. Only in the PDO/NAO does there seem to be another major factor. Shaviv estimates that only 20% of the last century’s warming is possibly attributable to greenhouse gases.

A lot of posts here have been negative — highlighting the sloppiness of climate change statistics and the self-serving exaggerations of climate effects scientists. For the first time I am becoming convinced that the evidence is really there to show CO2 is just a bit player in climate change, and there is another factor that can explain a large chunk of the wiggles that we see in global temperature changes.

Here is the data from Table 1: sens.txt.

Here is the turnkey R code.

d<-read.table("http://landshape.org/enm/wp-content/uploads/2009/03/sens.txt")

prob<-function(m1,s1,m2,s2) {
s<-sqrt(s1^2+s2^2)
z<-(m2-m1)/s
pnorm(z)
}

run<-function() {
for (i in 1:8) {
m1<-d[i,4]
s1<-d[i,4]-d[i,3]
d2<-d[i+8,4]
s2<-d[i+8,4]-d[i,3]
print(prob(m1,s1,d2,s2))
}
}

run()

Was the Younger Dryas caused by cosmic ray flux?

The Younger Dryas, also referred to as the Big Freeze, was an abrupt and unexplained relapse into a glacial cold climate when the earth was emerging out of the last ice age. The dip is clearly seen in the traces below at about 11-12,000 years before present.

The shutdown of the North Atlantic thermohaline circulation is usually blamed, but this paper from 2000 suggests solar mediated cosmic ray flux could be responsible.

The concentration of radiocarbon, 14C, in the atmosphere depends on its production rate by cosmic rays, and on the intensity of carbon exchange between the atmosphere and other reservoirs, for example the deep oceans. For the Holocene (the past ~11,500 years), it has been shown that fluctuations in atmospheric radiocarbon concentrations have been caused mostly by variations in the solar magnetic field. Recent progress in extending the radiocarbon record backwards in time has indicated especially high atmospheric radiocarbon concentrations in the Younger Dryas cold period, between 12,700 and 11,500 years before the present. These high concentrations have been interpreted as a result of a reduced exchange with the deep-ocean reservoir, caused by a drastic weakening of the deep-ocean ventilation. Here we present a high-resolution reconstruction of atmospheric radiocarbon concentrations, derived from annually laminated sediments of two Polish lakes, Lake Gościąż and Lake Perespilno. These records indicate that the maximum in atmospheric radiocarbon concentrations in the early Younger Dryas was smaller than previously believed, and might have been caused by variations in solar activity. If so, there is no indication that the deep-ocean ventilation in the Younger Dryas was significantly different from today’s.

The difficulties that the climate science liberals have in explaining the Younger Dryas have been raised at ClimateAudit. Largely the long timescale for accumulation and absorption of CO2 in the atmosphere is not consistent with abrupt climate change. Changes must be on the same time scale as the forcings, and rapid changes must be explained either by high intrinsic variability in the system, or by sudden changes in system states. RealClimate in a recent post at least entertained a comet strike theory, perhaps an indication of the waning support for the shutdown of the North Atlantic thermohaline circulation as an explanation.

We have been looking at the Cosmic Ray Flux (CRF) theory of Nir Shaviv in the last few posts. CRF can vary relatively rapidly, when the sun shuts down its sunspot activity, or from cosmic sources. The scale is consistent with abrupt climate change. I don’t want to make the same mistake as the climate liberals and start blaming everything on the latest convenient explanation. But given that the high concentration of atmospheric 14C during the Younger Dryas cold episode appears widely supported, does this not suggest an increase in CRF as the cause?

REFERENCES

Goslar, T., Arnold, M., Tisnerat-Laborde, N., Czernik, J. & Więckowski, K. (2000). Variations of Younger Dryas atmospheric radiocarbon explicable without ocean circulation changes. Nature 403, 877-880 (24 February 2000). doi:10.1038/35002547.

January 2009 global temperature from RSS

RSS global temperature in the lower atmosphere increased 0.148C from the previous month. The two early leaders in the ‘Guess the monthly global temperatures’ competition are still CoRev and Jan Pompe:

The guesses were very biased, with an extraordinary 95.7% guessing incorrectly. That’s a lot worse than random choices would do.

What is the point of this competition? Well, I think there is a big difference between talking about prediction, and actually doing it. Real tests of skill provide an opportunity to observe your motivations, biases and reactions.