Fact Checking the Climate Council

The Climate Council's mini-statement, Bushfires and Climate Change in Australia – The Facts, states in support of its view that: “1. In Australia, climate change is influencing both the frequency and intensity of extreme hot days, as well as prolonged periods of low rainfall. This increases the risk of bushfires.”

It also claims that “Southeast Australia is experiencing a long-term drying trend.”

A moment of fact-checking against the rainfall recorded by the BoM for Southeastern Australia reveals no trend in rainfall.

Another moment of fact-checking against the BoM's recorded rainfall for Australia as a whole reveals an increasing rainfall trend.

Fail.

A Practical Project for the Hyperloop

When the storied Tesla Motors CEO promoted the Hyperloop, a proposed California high-speed link that would carry passengers between San Francisco and Los Angeles in 30 minutes, instead of the 2 hours and 40 minutes on the proposed VFT, people naturally got excited. But there are three questions. First, will the ticket price be competitive with existing air travel? Second, will the novel technology run into problems in research and development? Third, will consumers like being shot along a tube at almost supersonic speeds?

Given that the price of an LA–SF link would be comparable with air travel, and the technology is conventional, the largest question is the third – consumer acceptance.

One way to test the third question would be to build a smaller mass transit system to augment or replace an existing airport shuttle service from check-in to terminal, or even between gates. Such a system would operate in a mode where the capsules spend half the time accelerating and half decelerating. It would not reach the 1,000 km per hour proposed for the Hyperloop, and so would provide an opportunity to trial consumer reactions and refine the technology.

How fast? A 0.5g force is an acceleration of around 5 m/sec/sec. Consider a 1 km run from the baggage check-in to a remote terminal. Integrating twice, the distance travelled is 5/2 times time squared. Solving for the 500 m halfway distance gives a time of about 14 sec to the halfway point. The top speed is 5t, or about 70 m/sec (around 250 km per hour). The entire trip, including deceleration, would take about 28 sec.

If travelers are prepared to accept a 1g force in both acceleration and deceleration, the entire trip would take 20 sec, with a top speed of 100 m/sec or 360 km per hour.
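As a check on the arithmetic, here is a minimal sketch of the kinematics (assuming constant acceleration over the first half of the run and equal deceleration over the second; the function and figures are illustrative only):

```r
# Trip time and top speed for an accelerate-then-decelerate run of
# length d at constant acceleration a (then -a for the second half).
trip <- function(distance_m, accel_ms2) {
  t_half <- sqrt(distance_m / accel_ms2)  # from d/2 = (1/2) * a * t^2
  v_top  <- accel_ms2 * t_half            # speed at the halfway point
  c(total_time_s  = 2 * t_half,
    top_speed_ms  = v_top,
    top_speed_kmh = v_top * 3.6)
}

trip(1000, 5)   # 0.5 g: ~28 s total, ~71 m/s (~255 km/h)
trip(1000, 10)  # 1 g:    20 s total, 100 m/s (360 km/h)
```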

This would be sufficient to test the system even on these short runs.

But we all know the feeling of being treated like cattle that comes with the existing shuttle systems at Dulles and other major hubs.

Private, individual or dual pods may be the most desirable aspect to consumers, as they allow transport on demand, no waiting, and would take the ‘mass’ out of mass transport. This might be the major selling point.

Error in calculating Hyperloop ticket price

The semi-technical document on the Hyperloop mass transport system, recently produced by Elon Musk, estimated the price of a one-way ticket as $20.

Transporting 7.4 million people each way and amortizing the cost of $6 billion over 20 years gives a ticket price of $20 for a one-way trip for the passenger version of Hyperloop.

Multiply 7.4 million trips by two, then by $20, then by 20 years, and you get $5.92 billion, which is about the $6 billion estimated cost of construction of the Hyperloop. So $20 is the price at which the cost of construction (very simplistically) is returned over 20 years.
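The arithmetic can be checked in a couple of lines (a sketch only, using the figures quoted above):

```r
trips_per_year <- 7.4e6 * 2   # 7.4 million passengers each way
ticket         <- 20          # dollars per one-way trip
years          <- 20
trips_per_year * ticket * years  # 5.92e9, roughly the $6 billion build cost
```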

The amortized cost is not the ticket price, which must necessarily include such costs as management, operations and maintenance, and financial costs such as interest on loans and profits to shareholders. Thus the actual ticket price of a fully private venture would be comparable to an airfare, at least $100 say.

The Musk document is poorly worded at best, or misleading at worst. Major media outlets universally quoted a ticket price of $20.

According to New Scientist:

He also estimates that a ticket for a one-way Hyperloop trip could cost as little as $20, about half what high-speed rail service is likely to charge.

The Telegraph:

Hyperloop would propel passengers paying about $20 (£13) in pods through a 400-mile series of tubes that would be elevated above street…

The Washington Post:

How the Hyperloop could get you from LA to San Francisco in 30 minutes for $20.

USA Today, Huffpost, Fox News, and all of the internet tech blogs simply repeated the same story. While this is one more example of the total absence of research in the media, the blame also surely rests on Musk, who should correct the misrepresentation immediately.

Hyperloop for Sydney – Melbourne – Brisbane link?

Elon Musk unveiled his concept for a new mass transport system consisting of capsules shot along a partially evacuated pipe at very high speed.

The details contain estimates of a capital cost of less than $10 billion and the cost of a one-way ticket of $20 — not bad. Compare that to the estimated capital cost of $100 billion for a very fast train (VFT) system, a reduction in the transit time between Los Angeles and San Francisco from 3 hours to 30 minutes, and the proposal looks very attractive.

The numbers would be similar for an equivalent system in Australia. The VFT has been costed at over $100 billion for a Melbourne to Brisbane link – and even though this estimate is probably optimistic, it comes in at about the same price for a similar distance as the Californian VFT proposal.

The savings on capital cost come largely from the greatly reduced land acquisition of an elevated system. It has been the high capital cost (that would have to be borne by the taxpayer) that has made the VFT uneconomic in the past. (Of course, a colossal waste of public money never stopped the Greens from advocating it.)

The Hyperloop would radically change that part of the equation. As Elon said:

It was born from frustration at his state’s plan to build a bullet train that he called one of the most expensive per mile and one of the slowest in the world.

If tickets on the Hyperloop were priced comparably with air and bus transport at $100 – or more, given the travel time between Brisbane and Sydney would be around 60 minutes – this would provide an adequate margin for an entirely privately-funded venture.

Cold Fusion a Victory for the Free Market

Free marketers and global warming alarmists alike should be heartened by the handful of companies that claim to have developed zero-carbon-emission commercial energy plants based on a safe cold fusion (CF) reaction. An Italian company demonstrated a product called E-Cat in 2011, and a Greek company named Defkalion also provided a professional demonstration of their Hyperion product.

The disdain for CF in the mainstream government-funded research community and the lack of government funding support are well known. Cold fusion results are routinely and categorically rejected by physics and engineering journals, and there has been virtually no support from government funding agencies, except for the military.

Meanwhile, the lack of public benefit from government subsidies of green energy sources is an embarrassment. Subsidies for renewable sources such as wind and solar – $88 billion in 2011 – are dropping due to political backlash over increasing electricity prices. Hot fusion research – some $50 billion over the last 50 years – is no closer to break-even, let alone a working power plant.

One could argue that directing research funding according to government priorities has been deeply harmful to research. If young faculty members in physics find a field promising, but can only secure grants in government-determined priority areas, they are incentivized to focus on politically motivated fields. Keep activists out of research funding!

Nevertheless, the field has progressed through the efforts of professionals working in their spare time and amateurs experimenting in their garages, though it has been marked by contradictory experimental results, outright mistakes, and secrecy and paranoia among would-be entrepreneurs. There are dozens of theories, but none of them has been properly tested. Defkalion's ICCF-18 slides show a real-time mass spectrometry system being designed which they hope will nail down what is happening in the Ni-H fusion process.

Examples of Scientific Method

Note to global warming alarmists:

“Science is our way of describing — as best we can — how the world works. The world works perfectly well without us. Our thinking about it makes no important difference. When our minds make a guess about what’s happening out there, if we put our guess to the test and we don’t get the results we expect, as Feynman says, there can be only one conclusion: we’re wrong.”

Scientific Method Meets Global Warming

In general, there are only two ways to prove something in science.

1. Prove a singular (fact) with an observation such as “black swans exist”.
2. Disprove a universal (theory) such as “all swans are white” with a singular fact (the observation of a black swan).

The inability to disprove a singular, or to prove a universal, is due to the finite limits of our observations. In general, we cannot gather the infinite observations required to disprove a singular fact (1), or to prove a universal (2).

Scientists need to be rigorous and strict, particularly in the initial stages of formulating a study, about whether it is a singular or a universal that is being tested, and how the observations will bear on it.

A case in point: the impact of observations of global temperatures on the climate model projections plotted below. By a strict interpretation of scientific method, the observed “slow rise in global temperature” is a fact that disproves the universal “all possible trajectories of climate models under AGW warming”.

The only appropriate scientific response is to throw away all of those falsified models and all of the work based on them – extinction predictions, extreme events, agricultural trends, and so on – as it is scientifically worthless. You must go back to the drawing board.

The rules of science were illustrated recently in a post on Vortex about the Wright Brothers’ first flight:

To give another dramatic example, suppose at 1:00 pm on the afternoon of December 17, 1903, you were take a poll about whether man can fly. Suppose you asked people to place bets as to whether airplanes exist. Out of the 1.6 billion people in the world alive on that day, at that moment, the only ones who had ANY KNOWLEDGE of that question were Wilbur and Orville Wright and the members of the Kitty Hawk coast guard who had helped them fly that morning. In all the world, there was not another soul who knew the facts or was qualified to address the question. The opinions of other people were worthless. Meaningless. All the money in the world placed in a bet would mean nothing. There was an undeveloped glass plate photograph showing the first flight:

That photograph was proof. It overruled all opinions, all money, all textbooks, and the previous 200,000 years of human technology. A thermocouple reading from a cold fusion experiment in 1989 overrules every member of the human race, including every scientist. Once experiments are replicated at high signal to noise ratios, all bets are off. The issue is settled forever. There is no appeal, and it makes no difference how many people disagree, or how many fail to understand calorimetry or the laws of thermodynamics. The rules of science in such clear-cut cases are objective and the proof is as indisputable as that photograph.

- Jed

Nanoplasmonics – a field is born

Axil Axil suggested in the Vortex discussion list – about the only list I read these days – the name nanoplasmonics for developments in cold fusion (while referencing a very funny mockery of how academics will revise the history of cold fusion in 2015 – “History is written by the losers”).

The field is so new that Wikipedia has yet to have an entry dedicated to “nanoplasmonics”, except as a subheading of the entry on Surface Plasmon Polaritons. Yet an effect seen in bulk nickel powder is not a surface effect. If the reactors of Rossi and Defkalion are based on a plasma phenomenon like polaritons in a nano-sized bulk medium, the headings should by rights be reversed.

How did climate skeptics know the scare was not real?

The climate scare is collapsing, it seems, as climate scientists everywhere are renouncing their previous certainty.

Skeptics, on the other hand, have been consistent. This blog in particular has, since 2005, challenged establishment global warming views on predictions such as mass extinctions, the significance of warming, and decreasing rainfall and droughts.

It is instructive to look into ourselves and ask – how could the skeptics have been right – when the consensus of the learned experts thought differently? As a recent post at WUWT asked – what was my personal path to climate skepticism? Particularly when one has never before been at odds with the scientific mainstream.

The answer for me was elegantly expressed by A.O. Scott in his New York Times review of the Disney film Chicken Little. He said the film is:

“a hectic, uninspired pastiche of catchphrases and clichés, with very little wit, inspiration or originality to bring its frantically moving images to genuine life.”

My theory is that due to their scholarship in other fields – such as engineering, the hard sciences, and economics – skeptics are attuned to genuine scientific insight and not deceived by the “uninspired pastiche of catchphrases and clichés” that constitutes the majority of global warming research.

How does cold fusion work?

A scientific paper by Defkalion Energy sets out their theory behind the desktop reactor.

1. Powdered nickel is loaded with hydrogen and heated to its Debye temperature – the temperature which maximizes the vibration of the individual atoms in the nickel lattice.

2. The hydrogen molecules (H2) are dissociated into a plasma by a spark from a spark plug. In the plasma the H atoms (consisting of a proton and an electron) are excited into elliptical orbits. Due to the elliptical orbit, the electron comes very close to the proton at one end, and so is screened to appear like a neutron (no charge).

3. Driven by the lattice vibration and the pulse of plasma from the spark, the screened H atom is driven into the nucleus of a Ni atom, producing Copper, Zinc, and other transmuted byproducts, and copious heat.

That’s their theory.

UQ Fellow Spews Fear

John Cook, Climate Communication Fellow at the Global Change Institute at the University of Queensland, is on the record saying:

Climate change like atom bomb

and

“animal species are responding to global warming by mating earlier in the year. This isn’t because animals are getting randier, it’s because the seasons themselves are shifting”

IMHO science is in need of a major shakeup.

More Evidence of a Sun-Climate Connection

Bjerknes compensation assumes a constant total poleward energy transport, with an inverse relation between oceanic and atmospheric heat transport fluxes (Bjerknes, 1964). Contrary to this assumption, there is empirical evidence of a simultaneous increase in poleward oceanic and atmospheric heat transport during the most recent warming period since the mid-1970s (aka the Great Pacific Climate Shift). This paper argues that TSI directly modulates ocean–atmospheric meridional heat transport.

Solar irradiance modulation of Equator-to-Pole (Arctic) temperature gradients: Empirical evidence for climate variation on multi-decadal timescales, Willie Soon and David R. Legates. PDF

This paper raises more questions than it addresses. How sensitive is the estimate of global temperature to a change in the equator to pole temperature gradient? Can a change in the gradient produce an apparent ‘amplification’?

Another thought that has occurred to me is that climate models overestimate global warming but they underestimate Arctic melting. Could both failures be due to underestimating the response of meridional heat transfer from the equator to the poles?

The Widening Gap Between Present Global Temperature and IPCC Model Projections

The increase in global temperature required to match the Intergovernmental Panel on Climate Change (IPCC) projections is becoming increasingly unlikely. A return to the mean projected pathway of a 3 degree increase by the end of the century would require an immediate, large, and sustained rise in temperature, which seems physically impossible.

Global surface temperatures have not increased at all in the last 18 years, and the trend over the last few years is even slightly negative.

Global temperatures continue to track at the low end of the range of global warming scenarios, widening a significant gap between current trends and the course needed to be consistent with IPCC projections.

On-going international climate negotiations fail to recognise the growing gap between model projections based on global greenhouse emissions and observed temperatures, and the increasingly unlikely chance of those models being correct.

Research led by Ben Santer compared temperatures simulated under the emission scenarios used by the IPCC to project climate change with satellite temperature observations at all latitudes.

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes,” reports their paper in PNAS. Moreover, it is not known why the models are failing.

“The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive.”

Explaining why they are failing will require a commitment to skeptical inquiry and an increasing need to rely on the scientific method.

The unquestioning acceptance of the projections of IPCC climate models by the CSIRO, the Australian Climate Change Science Program, and many other traditional scientific bodies, which has informed policies and decisions on energy use and associated costs, must be called into question. So too must the long-term warming scenarios based on the link between emissions and increases in temperature.

Q: Where Do Climate Models Fail? A: Almost Everywhere

“How much do I fail thee. Let me count the ways”

Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.

Where do the models fail?

1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.

2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.

3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.

Answer: Everywhere except for polar regions where uncertainty is greater.
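To see the kind of comparison involved, here is a minimal sketch that checks whether an observed trend falls outside the multimodel 5–95 percentile range. The numbers are made up for illustration and are not Santer's data:

```r
set.seed(42)
model_trends <- rnorm(30, mean = 0.25, sd = 0.05)  # hypothetical model trends, K/decade
obs_trend    <- 0.12                               # hypothetical observed trend, K/decade

# 5-95 percentile range of the model ensemble
range_90 <- quantile(model_trends, c(0.05, 0.95))

# TRUE if the observed trend lies outside the model range
obs_trend < range_90[1] || obs_trend > range_90[2]
```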

East Pacific Region Temperatures: Climate Models Fail Again

Bob Tisdale, author of the awesome book “Who Turned on the Heat?”, presented an interesting problem that turns out to be a good application of a class of robust statistical tests called empirical fluctuation processes.

Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to model simulations (CMIP SST) for that region that show strong warming. The region in question is shown below.

The question is, what is the statistical significance of the difference between model simulations and the observations? The graph comparing the models with observations from Bob’s book shows two CMIP model projections strongly increasing at 0.15C per decade for the region (brown and green) and the observations increasing at 0.006C per decade (magenta).

However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 with the standard deviation, but this would be a very low-power test, and one only used by someone who wanted to obfuscate the obvious failure of climate models in this region.

Empirical fluctuation processes are a natural way to examine such questions in a powerful and generalized way, as we can ask of a strongly autocorrelated series — Has there been a change in level? — without requiring the increase to be a linear trend.

To illustrate the difference: if we assume a linear regression model, as is the usual practice, Y = mt + c, the statistical test for a trend is whether the trend coefficient m is greater than zero.

H0: m = 0, Ha: m > 0

If we test for a change in level, the EFP statistical test is whether m is constant for all of time t:

H0: m_i = m_0 for all i over time t.

For answering questions similar to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y = c deviates from the data. In R this is represented as the model Y ~ 1. If we were to use a full model Y ~ t, then this would test whether the trend of Y is constant, not whether the level of Y is constant. This is clearer if you have run linear models in R.
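For readers who want to experiment, here is a minimal sketch using the strucchange package, with simulated data standing in for the SST series; this is not Bob's data nor the exact code behind the figures below:

```r
library(strucchange)

set.seed(1)
# Placeholder series: monthly "SST" anomalies with strong autocorrelation
sst <- ts(arima.sim(list(ar = 0.6), n = 384), start = c(1980, 1), frequency = 12)

# The model Y ~ 1 tests whether the mean level is constant over time
fs <- efp(sst ~ 1, type = "OLS-CUSUM")

plot(fs)     # EFP path with significance boundaries
sctest(fs)   # test for a structural change in level
```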

Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.

The figure below shows the series in question on the x axis, the EFP path is the black line, and 95% significance levels for the EFP path are in red.

It can be seen clearly that while the EFP path for the SST observations series shows a little unusual behavior, with a significant change in level in 1998 and again in 2005, the level is not currently significantly above the level in 1985.

The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significance level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.

Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP path for the differences CMIP3-SST and CMIP5-SST. After some deviation from zero at about 1990, the difference becomes very significant at the 5% level around 2000, and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the CMIP temperature simulations and the observational SST series in the Eastern Pacific region after the year 2000.

While the average of multiple model simulations shows a significant change in level over the period, in the parlance of climate science there is not yet a detectable change in level in the observations.

One could say I am comparing apples and oranges, as the models represent average behavior while the SST observations are a single realization. But the fact remains that only the model simulations show warming; there is no support for warming of the region in the observations. This is consistent with the previous post on Santer’s paper showing the failure of models to match the observations over most latitudinal bands.

Santer: Climate Models are Exaggerating Warming – We Don’t Know Why

Ben Santer’s latest model/observation comparison paper in PNAS finally admits what climate realists have been saying for years — climate models are exaggerating warming. From the abstract:

On average, the models analyzed … overestimate the warming of the troposphere. Although the precise causes of such differences are unclear…

Their figure above shows the massive model fail. The blue and magenta lines are the trends of the UAH and RSS satellite temperature observations averaged by latitude, with the Arctic at the left and the Southern Hemisphere to the right. Except for the Arctic, the observations are well outside all of the model simulations. As they say:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.

Anyone who has been following the AGW issue for more than a few years remembers that Ross McKitrick, Stephen McIntyre and Chad Herman already showed climate models exaggerating warming in the tropical troposphere in their 2010 paper. Before that there was Douglass, and in their usual small-minded way the Santer team do not acknowledge them. Prior to then, a few studies had differed on whether models significantly overstate the warming or not. McKitrick found that up to 1999 there was only weak evidence for a difference, but on updated data the models appear to significantly overpredict observed warming.

Santer had a paper where data after 1999 had been deliberately truncated, even though the data was available at the time. As Steve McIntyre wrote in 2009:

Last year, I reported the invalidity using up-to-date data of Santer’s claim that none of the satellite data sets showed a “statistically significant” difference in trend from the model ensemble, after allowing for the effect of AR1 autocorrelation on confidence intervals. Including up-to-date data, the claim was untrue for UAH data sets and was on the verge of being untrue for RSS_T2. Ross and I submitted a comment on this topic to the International Journal of Climatology, which we’ve also posted on arxiv.org. I’m not going to comment right now on the status of this submission.

Santer already had form at truncating inconvenient data, going back to 1995, as related by John Daly. It is claimed that he authored the notorious phrase “… a discernible human influence on global climate” in Chapter 8 of the 1995 IPCC Report, added without the consent of the drafting scientists in Madrid.

As John Daly says:

When the full available time period of radio sonde data is shown (Nature, vol.384, 12 Dec 96, p522) we see that the warming indicated in Santer’s version is just a product of the dates chosen. The full time period shows little change at all to the data over a longer 38-year time period extending both before Santer et al’s start year, and extending after their end year.

It was 5 months before ‘Nature’ published two rebuttals from other climate scientists, exposing the faulty science employed by Santer et al. (Vol.384, 12 Dec 1996). The first was from Prof Patrick Michaels and Dr Paul Knappenberger, both of the University of Virginia. Ben Santer is credited in the ClimateGate emails with threatening Pat Michaels with physical violence:

Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat the crap out of him. Very tempted.

I suppose that, now faced with a disparity between models and observations that can no longer be ignored, he has had to face the inevitable. Even so, it is hardly a classy act: Douglass, McKitrick, McIntyre and other climate realists reported the significant model/observation disparity in the peer-reviewed literature first, and you won’t see them in Santer’s list of citations.

Failing to give due credit. Hiding the decline. Truncating the data. Threatening violence to critics. This is the AGW way.

Solar Cycle 24 peaked? The experimentum crucis begins.

The WSO polar field strengths – early indicators of solar maximums and minimums – have dived towards zero recently, indicating that it is all downhill from here for solar cycle 24.

Polar field reversals can occur within a year of sunspot maximum, but cycle 24 has been so insipid that it would not be surprising if the maximum sunspot number fails to reach the NOAA-predicted peak of 90 spots per month, and gets no higher than the current 60 spots per month.

The peak in solar intensity was predicted for early 2013, so this would be early, and may be another indication that we are in for a long period of subdued solar cycles.

A prolonged decline in solar output will provide the first crucial experiment to distinguish between the accumulation theory of solar-driven temperature change and the AGW theory of CO2-driven temperature change. The accumulation theory predicts that global temperature will decline as solar activity falls below its long-term average of around 50 sunspots per month. The AGW theory predicts that temperature will continue to increase as CO2 increases, with little effect from the solar cycle.
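A minimal sketch of the accumulation theory as stated above (the function and coefficients are illustrative assumptions, not a fitted model):

```r
# Temperature change proportional to the departure of solar activity from
# its long-term mean; temperature is the running sum (accumulation).
accumulate_temp <- function(sunspots, k = 0.001, mean_ss = 50) {
  cumsum(k * (sunspots - mean_ss))
}

# Illustrative sunspot series: above-average activity, then a decline
ss   <- c(rep(80, 200), rep(30, 100))
temp <- accumulate_temp(ss)
plot(temp, type = "l", xlab = "Month", ylab = "Temperature anomaly (arbitrary units)")
# Temperature rises while activity exceeds ~50 spots/month and falls once
# activity drops below it, in contrast to the AGW prediction above.
```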

An experimentum crucis is considered necessary for a particular hypothesis or theory to become an established part of the body of scientific knowledge. A theory such as AGW that is in accordance with known data but has not yet passed a critical experiment is typically considered unworthy of full scientific confidence.

Prior to this moment, BOTH solar intensity was generally above its long term average, AND greenhouse gases were increasing. BOTH of these factors could explain generally rising global temperature in the last 50 years. However, now that one factor, solar intensity, is starting to decline and the other, CO2, continues to increase, their effects are in opposition, and the causative factor will become decisive.

For more information see WUWT’s Solar Reference page.

AGW Doesn’t Cointegrate: Beenstock’s Challenging Analysis Published

The Beenstock, Reingewertz, and Paldor paper on lack of cointegration of global temperature with CO2 has been accepted! This is a technical paper that we have been following since 2009 when an unpublished manuscript appeared, rebutting the statistical link between global temperature increase and anthropogenic factors like CO2, and so represents another nail in the coffin of CAGW. The editor praised the work as “challenging” and “needed in our field of work.”

Does the increase in CO2 concentration and global temperature over the past century constitute a “warrant” for the anthropogenic global warming (AGW) theory? Such a relationship is necessary for global warming, but not sufficient, as a range of other effects may make warming due to AGW trivial or less than catastrophic.

While climate models, or GCMs, show that enhancement of the greenhouse effect can cause a temperature increase, the observed upward drift in global temperature could have other causes, such as high sensitivity to persistent warming from enhanced solar insolation (the accumulation theory). There are also the urban heat island effect and natural cycles in operation.

In short, the CO2/temperature relationship may be spurious, have an independent cause, or temperature may cause CO2 increase, all of which falsify CAGW here and now.

Cointegration attempts to fit the random changes in drift of two or more series together to provide positive evidence of association where those variables are close to a random walk. The form of time series process appropriate to this model is referred to as I(n), where n is the number of differencing operations needed before the series has a finite mean (that is, it is stationary and does not drift far from the mean). A range of statistical tests, such as the Dickey-Fuller and Phillips-Perron procedures, identify the I(1) property.

Beenstock et al. find that while the temperature and solar irradiance series are I(1), the anthropogenic greenhouse gas (GHG) series are I(2), requiring differencing twice to yield a stationary series.
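As a sketch of how the order of integration is identified in practice (simulated data, not the Beenstock et al. series), the ADF and Phillips-Perron tests from the tseries package can be applied to successive differences:

```r
library(tseries)

set.seed(1)
x <- cumsum(cumsum(rnorm(130)))  # a simulated I(2) series, standing in for a GHG forcing

adf.test(x)                          # unit root not rejected: non-stationary
adf.test(diff(x))                    # still non-stationary: not I(1)
adf.test(diff(x, differences = 2))   # stationary after differencing twice: I(2)
pp.test(diff(x, differences = 2))    # Phillips-Perron check of the same
```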

This difference in the order of integration blocks any evidence for AGW from an analysis of the time series. The variables may still somehow be causally connected, but not in an obvious way. Previous studies that used simple linear regression to make attribution claims must be discounted.

The authors also show evidence of a cointegrating relationship between temperature (corrected for solar irradiance) and changes in the anthropogenic variables. This highlights what I have been saying in the accumulation theory posts: the dynamic relationships between these variables must be given due attention, lest spurious results be obtained.

While this paper does not debunk AGW, it does debunk naïve linear regression methods, and it demonstrates the power of applying rigorous statistical methodologies to climate science.

Still no weakening of the Walker Circulation

Once upon a time, a weakening of the East-West Pacific overturning circulation – called the Walker circulation – was regarded in climate science as a robust response to anthropogenic global warming. This belief was based on studies in 2006 and 2007 using climate models.

Together with a number of El Niño events (which are associated with a weakening of the Walker circulation), the alarm was raised in a string of papers (3–6) that global warming was now impacting the Pacific Ocean and that the Walker circulation would weaken further in the 21st century, causing more El Niños and consequently more severe droughts in Australia.

These alarms, in the context of a severe Australian drought, gave rise to a hysterical reaction: the building of water desalination plants in the major capital cities of Australia, all but one now mothballed, and costing consumers upwards of $290 per year in additional water costs.

In 2009 I did a study with Anthony Cox to see if there was any significant evidence of a weakening of the Walker circulation when autocorrelation was taken into account. We found no empirical basis for the claim that observed changes differed from natural variation, and so could not be attributed to Anthropogenic Global Warming.

Since 2009, a number of articles have shown that, contrary to the predictions of climate models, the Walker circulation has been strengthening (7–12). A recent article gives the models a fail: “Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results” here.

The paper by Sohn argues that increases in the frequency of El Niño cause the apparent weakening of the Walker circulation, not the other way around, and it is well known that climate models fail to reproduce such trends.

The problems with models may rest in their treatment of mass flows. In “Indian Ocean warming modulates Pacific climate change” here, they find that

“Extratropical ocean processes and the Indonesia Throughflow could play an important role in redistributing the tropical Indo-Pacific interbasin upper-ocean heat content under global warming.”

Finally from an abstract in 2012 “Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record” here:

“Additionally, none of the disparate estimates of post-1900 total eastern equatorial Pacific sea surface temperature trends are larger than can be generated by statistically stationary, stochastically forced empirical models that reproduce ENSO evolution in each reconstruction.”

Roughly translated, this means there is no evidence of any change to the Walker circulation beyond natural variation – weakening or otherwise.

Nice to be proven right again. The “weakening of the Walker Circulation” is another scary bedtime story for global warming alarmists, dismissed by a cursory look at the evidence.

References

1. Held, I. M. and B. J. Soden, 2006: Robust responses of the hydrological cycle to global warming. J. Climate, 19, 5686–5699.

2. Vecchi, G. A. and B. J. Soden, 2007: Global warming and the weakening of the tropical circulation. J. Climate, 20, 4316–4340.

3. Scott B. Power and Ian N. Smith (2007) Weakening of the Walker circulation and apparent dominance of El Niño both reach record levels, but has ENSO really changed? Geophys. Res. Lett., 34.

4. Power SB, Kociuba G (2011) What caused the observed twentieth-century weakening of the Walker circulation? J Clim 24:6501–6514.

5. Yeh SW, et al. (2009) El Niño in a changing climate. Nature 461(7263):511–514.

6. Collins M, et al. (2010) The impact of global warming on the tropical Pacific Ocean and El Niño. Nat Geosci 3:391–397.

7. Li G, Ren B (2012) Evidence for strengthening of the tropical Pacific Ocean surface wind speed during 1979–2001. Theor Appl Climatol 107:59–72.

8. Feng M, et al. (2011) The reversal of the multidecadal trends of the equatorial Pacific easterly winds, and the Indonesian Throughflow and Leeuwin Current transports. Geophys Res Lett 38:L11604.

9. Feng M, McPhaden MJ, Lee T (2010) Decadal variability of the Pacific subtropical cells and their influence on the southeast Indian Ocean. Geophys Res Lett 37:L09606.

10. Qiu B, Chen S (2012) Multidecadal sea level and gyre circulation variability in the northwestern tropical Pacific Ocean. J Phys Oceanogr 42:193–206.

11. Merrifield MA (2011) A shift in western tropical Pacific sea-level trends during the 1990s. J Clim 24:4126–4138.

12. Merrifield MA, Maltrud ME (2011) Regional sea level trends due to a Pacific trade wind intensification. Geophys Res Lett 38:L21605.

13. Sohn BJ, Yeh SW, Schmetz J, Song HJ (2012) Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results. Climate Dynamics, Springer.

14. Jing-Jia Luo, Wataru Sasaki, and Yukio Masumoto. Indian Ocean warming modulates Pacific climate change.

15. Solomon, A. & Newman, M. (2012) Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record. Nature Clim. Change 2, 691–699.

Circularity and the Hockeystick: coming around again

The recent posts at climateaudit and WUWT show that climate scientists Gergis and Karoly were willing to manipulate their study to ensure a hockeystick result in the Southern Hemisphere, and resisted advice from editors of the Journal of Climate to report alternative approaches to establish robustness of their study.

The alternative the editors suggested – detrending the data before correlating proxies with temperature – revealed that most of the proxies collected in the Gergis study were uncorrelated with temperature, and so would have to be thrown out.

A false finding of “unprecedented warming” is a false positive, and false positives are a characteristic of the circular fallacy. The circular logic arising from the method of screening proxies by correlation with temperature was written up by me in a geological magazine (“Reconstruction of past climate using series with red noise”, D.R.B. Stockwell, AIG News 8, 314, 2005), and also occupies a chapter in my 2006 book Niche Modeling: Predictions from Statistical Distributions (D. Stockwell, Chapman & Hall/CRC).
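The effect is easy to reproduce. The sketch below (illustrative only, not the code from the AIG article or the book) screens pure red-noise “proxies” by their correlation with a trending calibration series and averages the survivors:

```r
set.seed(1)
n_years <- 600                      # pseudo-years, e.g. 1400-1999
calib   <- (n_years - 99):n_years   # last 100 years used for screening
target  <- seq(0, 1, length.out = 100) + rnorm(100, sd = 0.2)  # rising "instrumental" record

# 1000 AR(1) red-noise proxies containing no climate signal
proxies <- replicate(1000, arima.sim(list(ar = 0.9), n = n_years))

# Keep only proxies correlated with the calibration-period target
r     <- apply(proxies[calib, ], 2, cor, y = target)
keep  <- abs(r) > 0.3
recon <- rowMeans(sweep(proxies[, keep], 2, sign(r[keep]), `*`))

plot(recon, type = "l", xlab = "Year index", ylab = "Reconstruction")
# A flat 'handle' with a 20th-century 'blade' emerges from noise alone.
```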

It is gratifying to see the issue still has legs, though, as McIntyre notes in the discussion of his post, he has been the only one to cite the AIG article in the literature; it has been widely discussed on the blogs, but it is a nettle not yet grasped by climate scientists.

Because the topic is undiscussed in climate science academic literature, we cited David Stockwell’s article in an Australian newsletter for geologists (smiling as we did so.) The topic has been aired in “critical” climate blogs on many occasions, but, as I observed in an earlier post, the inability to appreciate this point seems to be a litmus test of being a real climate scientist.

It is now fourteen years since the publication, with great fanfare, of Mann, Bradley, and Hughes’ “Global-scale temperature patterns and climate forcing over the past six centuries”, with the “premature rush to adoption” that followed: the creation of research agendas in multiple countries and institutions devoted to proxy studies, and the amassing of warehouses of cores. In any normal science the basics of the methodologies would be well understood before such a rush to judgment.

Considered in the context of almost a decade of related public blog discussion of the issue, that screening proxies on 20th century temperatures gives rise to hockeysticks is a topic apparently only discussed in private by climate scientists:

The Neukom email from 07 June 2012 08:55: “…I also see the point that the selection process forces a hockey stick result but: – We also performed the reconstruction using noise proxies with the same AR1 properties as the real proxies. – And these are of course resulting in a noise-hockey stick.”

Of course, the problem is that rigorous analysis of many studies would fail to confirm the original results, would show many of the proxies collected and used by their colleagues to be useless, and would force the abandonment of the theory that contemporary warming is “unprecedented”.

In light of all the data and studies from the last decade, I am convinced of only one thing – that the fallacy of data and method snooping is simply not understood by most climate scientists, who tend to see picking and choosing between datasets, ad hoc and multiple methods as opportunities to select the ones that produce their desired results.

This highlights the common wisdom of asking “What about all the catastrophe theories we have seen adopted and later abandoned over the years?” And while climate scientists dismiss such questions as denial, after you have witnessed the rise and fall of countless environmental hysterias over the years you become more circumspect, and adjust your estimates of confidence to account for the low level of diligence in the field.

Is the problem alarmism, or prestige-seeking?

We all make mistakes. Sometimes we exaggerate the risks, and sometimes we foolishly blunder into situations we regret. Climate skeptics often characterize their opponents as ‘alarmist’. But is the real problem a tendency for climate scientists to be ‘nervous ninnies’?

I was intrigued by the recent verdict in the case of the scientists before an Italian court in the aftermath of a fatal earthquake. Roger Pielke Jr. relates that all is not as it seems.

There is a popular misconception in circulation that the guilty verdict was based on the scientists’ failure to accurately forecast the devastating earthquake.

Apparently the scientists were not charged with failing to predict a fatal earthquake, but with failure of due diligence:

Prosecutors didn’t charge commission members with failing to predict the earthquake but with conducting a hasty, superficial risk assessment and presenting incomplete, falsely reassuring findings to the public.

But when the article then turns to motivation, it is not laziness but prestige.

Media reports of the Major Risk Committee meeting and the subsequent press conference seem to focus on countering the views offered by Mr. Giuliani, whom they viewed as unscientific and had been battling in preceding months. Thus, one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.

If officials were expressing a view about authority rather than a careful assessment of actual earthquake risks, this would help to explain their sloppy treatment of uncertainties.

So there are examples both of alarmism and of failure to alarm by the responsible authorities. Both, potentially, are motivated by maintenance of prestige. Could the same motivations be behind climate alarmism? After all, what gains are there from asserting that ‘climate changes’?

The Creation of Consensus via Administrative Abuse

The existence of ‘consensus’ around core claims of global warming is often cited as some kind of warrant for action. A recent article by Roger Pielke Jr reported the IPCC response to his attempts to correct biases and errors in AR4 in his field of expertise — extreme event losses. As noted at CA, he made four proposed error corrections to the IPCC, all of which were refused.

Since sociological and psychological research is now regarded as worthy of a generous share of science funding, a scholarly mind asks: if failure to admit previous errors could be a strategy for building the climate consensus, what does that say about the logical correctness of the process, and what are the other strategies? Could Lewandowsky's denigration of people who disagree be worth $1.7m of Australian Research Council approved taxpayer funds to help create climate consensus?

Wikipedia appears to be another experimental platform for consensus building. The recent comment by a disillusioned editor describes many unpleasant strategic moves executed in the name of building a consensus for the cold-fusion entries on Wikipedia.

Foremost is the failure of administrators to follow the stated rules. Could this, along with failure to admit errors and denigration of opponents, also be a consensus-creation strategy? The parallels with the IPCC are uncanny.

Some excerpts below.

Alan, do you know what “arbitration enforcement” is? Hint: it is not arbitration. Essentially, the editor threatened to ask that you be sanctioned for “wasting other editor’s time,” which, pretty much, you were. That was rude, but the cabal is not polite, it’s not their style. A functional community would educate you in what is okay and what is not. The cabal just wants you gone. *You* are the waste of time, for them, really, but they can’t say that.

Discouraging objectors – the main goal.

I remember now why I gave up in December last year. But I thought it was my turn to put in a shift or two at the coalface (or whatever).

Here is what I did on Wikipedia. I had a long-term interest in community consensus process, and when I started to edit Wikipedia in 2007, I became familiar with the policies and guidelines and was tempered in that by the mentorship of a quite outrageous editor who showed me, by demonstration, the difference or gap between policies and guidelines and actual practice. I was quite successful, and that included dealing with POV-pushers and abusive administrators, which is quite hazardous on Wikipedia. If you want to survive, don’t notice and document administrative abuse. Administrators don’t like it, *especially* if you are right. Only administrators, in practice, are long-term allowed to do that, with a few exceptions who are protected by enough administrators to survive.

Shades of the IPCC.

So if you want to affect Wikipedia content in a way that will stick, relatively speaking, you will need to become *intimately* familiar with policies. You can do almost anything in this process, except be uncivil or revert war. That is, you can make lots of mistakes, but *slowly*. What I saw you doing was making lots of edits. Andy asked you to slow down. That was a reasonable request. But I’d add, “… and listen.”

Good advice for dealing with administrators of consensus creation processes.

Instead, it appears you assumed that the position of the other editors was ridiculous. For some, perhaps. But you, yourself, didn’t show a knowledge of Reliable Source and content policies.

Lots of editors have gone down this road. It’s fairly easy to find errors and imbalance in the Wikipedia Cold fusion article. However, fixing them is not necessarily easy, there are constituencies attached to this or that, and averse to this or that. I actually took the issue of the Storms Review to WP:RSN, and obtained a judgment there that this was basically RS. Useless, because *there were no editors willing to work on the article who were not part of the pseudoskeptical faction.* By that time, I certainly couldn’t do it alone, I was WP:COI, voluntarily declared as such.

It seems you need an ally who is part of the in-crowd in order to move the consensus towards an alternative proposition.

When the community banned me, you can be sure that it was not mentioned that I had been following COI guidelines, and only working on the Talk page, except where I believed an edit would not be controversial. The same thing happened with PCarbonn and, for that matter, with Jed Rothwell. All were following COI guidelines.

Following the rules does not provide immunity.

The problem wasn’t the “bad guys,” the problem was an absence of “good guys.” There were various points where editors not with an agenda to portry cold fusion as “pathological science,” assembled, and I found that when the general committee was presented with RfCs, sanity prevailed. But that takes work, and the very work was framed by the cabal as evidence of POV-pushing. When I was finally topic banned, where was the community? There were only a collection of factional editors, plus a few “neutral editors” who took a look at discussions that they didn’t understand and judged them to be “wall of text.” Bad, in other words, and the discussion that was used as the main evidence was actually not on Wikipedia, it was on meta, where it was necessary. And where it was successful.

A better description of the real-world response to scholarship I have yet to see.

Yes, I was topic banned on Wikipedia for successfully creating a consensus on the meta wiki to delist lenr-canr.org from the global blacklist. And then the same editors as before acted, frequently, to remove links, giving the same bankrupt arguments, and nobody cares. So all that work was almost useless.

So consensus is ultimately created via administrative abuse!

Now it is possible to see why blogs purporting to represent an authoritative consensus, such as RealClimate, SkepticalScience and LewsWorld, must delete objections:

… furiously deleting inconvenient comments that ask questions like “What are you going to do now that the removal of the fake responses shows a conclusion reverse of that of your title”?

But what is the result of administrative abuse?

That is why so many sane people have given up on Wikipedia, and because so many sane people have given up, what’s left?

There would be a way to turn this situation around, but what I’ve seen is that not enough people care. It might take two or three. Seriously.

Opinions on the New Zealand AGW Judgement

Apropos the New Zealand AGW case, comments below by Goon and Ross:

# Goon (8) Says:
September 8th, 2012 at 3:45 pm

Justifying the unjustifiable. Don’t believe me…. then here is where the raw data lives.

http://cliflo.niwa.co.nz/

Register and have a look for yourself. Nothing even remotely approaching a 1 degree/century trend in the raw data from longer term climate sites. The only way NIWA can come up with this is by applying an extremely dodgy ‘adjustment’ to make all pre-1950′s temperatures colder and everything after warmer and hey presto, woe is me, there’s a trend. The arguement being tested in the court wasn’t anything to do with AGW, rather it was just that the methodology applied by NIWA to calculate the ‘sky is falling faster than the rest of the world’ trend is a complete crock. A trend which is then used by the same scientists to justify ever more research and lapped up by politicians keen to get their hands into your wallet.

In terms of climate change, I’m agnostic about the whole thing…..climate changes naturallly all the time and human activities no doubt contribute as well but what pisses me off is the dodgyness put up by NIWA as science. It wouldn’t stand up in any other discipline but spin disguised as science seems to be de riguer for climate science.

# Ross12 (186) Says:
September 8th, 2012 at 4:16 pm

Goon
You are correct in my view. This case was nothing to with AGW as such. It was to do with how the temperature data was collected and how it was analysed. The judge was very wrong not to allow Bob Dedekind’s evidence ( because he was supposed not an expert) — the statistical analysis for the data would be using methods similar to a number of different fields. So Dedekinds stats expertise should have been allowed.

Here is a summary of his position :

“… In fact, NIWA had to do some pretty nifty footwork to avoid some difficult questions.

For instance, where was the evidence that RS93 had ever been used on the 7SS from 1853-2009? Absent. We were asked to believe Dr Wratt’s assertion that it had (in 1992), but ALL evidence had apparently disappeared. Not only that, but the adjustments coincidentally all matched the thesis adjustments, which all ended in 1975. And no new adjustments were made between 1975 and 1992. Hmm.

Another question: Why, when NIWA performed their Review at taxpayers’ expense in 2010, did they NOT use RS93? They kept referring to it whenever the 7SS adjustment method was discussed, and it was a prime opportunity to re-do their missing work, yet instead they used an unpublished, untested method from a student’s thesis written in 1981.

Please understand this: the method used in the NIWA Review in 2010 has no international peer-reviewed scientific standing. None. It is mentioned nowhere, outside of Salinger’s thesis. NIWA have never yet provided a journal or text-book reference to their technique.

Yet a few people were able to do (at zero cost to the taxpayer) what NIWA should have done in the first place – produce a sensible 7SS using the same peer-reviewed technique NIWA kept referencing repeatedly, viz: RS93. In fact, one of NIWA’s complaints during the court case was that we applied the RS93 method “too rigorously”! In other words, when we did the job properly using an internationally-accepted method, we got a different result to NIWA’s, and they didn’t like it. In fact, the actual trend over the last 100 years is only a third of NIWA’s trend.

Their only response to date has been a desperate effort to try to show that the RS93 method as published is “unstable”. Why then did they trumpet it all this time? And why did they never challenge it in the literature between 1993 and 2010?

NIWA got away with it in the end, but only because the judge decided that he shouldn’t intervene in a scientific dispute, and our credentials (not the work we did) were not impressive enough. ”

For the AGW supporters to suggest ( as Prof Renwick from Victoria said) this a vindication of the science is utter nonsense. The judge says he is not going to make decisions about the science.
Some how I don’t think we have heard the end of this.

Lewandowsky article is a truly appalling piece of social science – Aitkin

Don Aitkin just weighed in on the Lewandowsky affair as the University of Queensland’s John Cook doubles down at the Conversation.

about 1 hour ago
Don Aitkin writer, speaker and teacher (logged in via email @grapevine.com.au)

Oh dear. The Lewandowsky article is a truly appalling piece of social science. How did it ever get past ordinary peer review? It, and the one above, demonstrate the kind of problems that Jim Woodgett in Nature two days ago and John Ioannidis a few years ago have pointed out: the failure of researchers to get their own house in order, and the poor quality of much published research. I have posted on that subject today on my website: www.donaitkin.com. That was before I came to all this! Perhaps someone a little better than Lewandowsky could do some research on why people believe in’ climate change’, and what their characteristics are…

Thank God there are true scholars in Australia. Unfortunately they are retired.

Carbon abatement from wind power – zero

Zip, nil, nada. That is the finding of a two-year analysis of Victoria’s wind-farm developments by mechanical engineer Hamish Cumming.

Despite hundreds of millions of dollars of taxpayers’ money from subsidies and green energy schemes driven by the renewable energy target, Victoria’s wind-farm developments have, surprise surprise, saved virtually zero carbon dioxide emissions due to their intermittent, unreliable power output.

Wind power advocate Dr Mark Diesendorf – an Australian academic who teaches Environmental Studies at the University of New South Wales, formerly Professor of Environmental Science at the University of Technology, Sydney, and a principal research scientist with CSIRO – has not been shy about bad-mouthing wind power realists. See the renewable industry lobby group energyscience for his ebook The Base Load Fallacy and other Fallacies disseminated by Renewable Energy Deniers.

The verdict is in – wind power is a green theory that simply generates waste heat.

However, Cumming said the reports on greenhouse gas abatement did not take into account the continuation of burning coal during the time the wind farms were operational.

“The reports you refer to are theoretical abatements, not real facts. Coal was still burnt and therefore little if any GHG was really abated,” he told Clarke.

“Rather than trying to convince me with reports done by or for the wind industry, or the government departments promoting the industry, I challenge you to give me actual coal consumption data in comparison to wind generation times data that supports your argument.

Also see JoNova

Lewandowsky — again

This guy, a UWA employee, was shown by Arlene Composta to be the most naive of leftists.

He now says that climate skeptics are conspiracy theorist wackos.

We have responded to this guy before:

He thinks the cognitive processes of Anthropogenic Global Warming (AGW) sceptics are deficient and on the same level as those of “truthers” and other “conspiracy theorists”. This is serious: for merely questioning the ‘science’ of AGW, one now faces the opprobrium of having one’s mental ability questioned.

JoNova raises valid questions about his survey methodology here.

The word “fabrication” has been bandied about.

If so, he gives proof that the term “Psychological Science” is a contradiction in terms.

Not cointegrated, so global warming is not anthropogenic – Beenstock

Cointegration has been mentioned previously and is one of the highest ranking search terms on landshape.

We have also discussed the cointegration manuscript from 2009 by Beenstock and Reingewertz, and I see they have picked up another author and submitted it to an open access journal here.

Here is the abstract.

Polynomial cointegration tests of anthropogenic impact on global warming M. Beenstock, Y. Reingewertz, and N. Paldor

Abstract. We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

The bottom line:

Once the I(2) status of anthropogenic forcings is taken into consideration, there is no significant effect of anthropogenic forcing on global temperature.

They do, however, find a possible effect of the CO2 first difference:

The ADF and PP test statistics suggest that there is a causal effect of the change in CO2 forcing on global temperature.

They suggest “… there is no physical theory for this modified theory of AGW”, although I would think the obvious one would be that the surface temperature adjusts over time to higher CO2 forcing, such as through intensified heat loss by convection, so returning to an equilibrium. However, when revised solar data is used the relationship disappears, so the point is probably moot.

When we use these revised data, Eqs. (11) and (12) remain polynomially uncointegrated. However, Eq. (15) ceases to be cointegrated.

Finally:

For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during 150 yr there is no physical reason why these variables should share the same order of integration. However, the fact that they do not share the same order of integration over this period means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.

Abandon Government Sponsored Research on Forecasting Climate – Green

Kesten Green, now of the University of South Australia, has a manuscript up called Evidence-based Improvements to Climate Forecasting: Progress and Recommendations, arguing that evidence-based research on climate forecasting finds no support for fear of dangerous man-made global warming, because simple, inexpensive extrapolation models are more accurate than the complex and expensive “General Circulation Models” used by the Intergovernmental Panel on Climate Change (IPCC).

Their rigorous evaluations of the poor accuracy of climate models support the view that there is no trend in global mean temperatures that is relevant for policy makers, and that…

[G]overnment initiatives that are predicated on a fear of dangerous man-made global warming should be abandoned. Among such initiatives we include government sponsored research on forecasting climate, which are unavoidably biased towards alarm (Armstrong, Green, and Soon 2011).

This is also what I found in the evaluation of the CSIRO’s use of the IPCC drought models. In fact, the use of the climate model projections is positively misleading, as they show decreasing rainfall over the last century when rainfall actually increased.

This is not welcome news to the growing climate projection science industry that serves the rapidly growing needs of impact and adaptation assessments. A new paper called Use of Representative Climate Futures in impact and adaptation assessment, by Penny Whetton, Kevin Hennessy and others, proposes another ad hoc fix to climate model inaccuracy called Representative Climate Futures (RCFs for short). Apparently the idea is that the wide range of results given by different climate models is classified as “most likely” or “high risk” or whatever, and the researcher is then free to choose whichever set of models he or she wishes to use.

Experiment Resources.Com condemns ad hoc-ery in science:

The scientific method dictates that, if a hypothesis is rejected, then that is final. The research needs to be redesigned or refined before the hypothesis can be tested again. Amongst pseudo-scientists, an ad hoc hypothesis is often appended, in an attempt to justify why the expected results were not obtained.

Read “poor accuracy of climate models” for “hypothesis is rejected” and you get the comparison. Models that are unfit for the purpose need to be thrown out. RCF appears to be a desperate attempt to do something, anything, with grossly inaccurate models.

On freedom of choice, Kesten Green says:

So uncertain and poorly understood is the global climate over the long term that the IPCC modelers have relied heavily on unaided judgment in selecting model variables and setting parameter values. In their section on “Simulation model validation in longer-term forecasting” (p. 969–973), F&K observe of the IPCC modeling procedures: “a major part of the model building is judgmental” (p. 970).

Which is why it is not scientific.
