BoM copies me, inadequately

Yesterday’s post noted the appearance of station summaries at the BoM adjustment page attempting to defend their adjustments to the temperature record at several stations. Some I have also examined. Today’s post compares and contrasts their approach with mine.

Deniliquin

The figures below compare the minimum temperatures at Deniliquin with neighbouring stations. On the left, the BoM compares Deniliquin with minimum temperatures at Kerang (95 km west of Deniliquin) in the years around 1971. The figure on the right from my Deniliquin report shows the relative trend of daily temperature data from 26 neighbouring stations (ie ACORN-SAT – neighbour). The rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM comparison of Deniliquin with Kerang (left); relative trend of the Deniliquin ACORN-SAT series against its neighbours (right)]

The BoM's caption:

Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherrypicking of a single site for comparison and the handwaving about “similar results” with other sites.

In my analysis, the ACORN-SAT version warms at 0.13C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Therefore the adjustments have made the trend of the official long term series for Deniliquin significantly warmer than the regional neighbours. I find that the residual trend of the raw data (before adjustment) for Deniliquin is -0.02C/decade which is not significant and so consistent with its neighbours.

Rutherglen

Now look at the comparison of minimum temperatures for Rutherglen with neighbouring stations. On the left, the BoM compares Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. The figure on the right from my Rutherglen report shows the relative trend of daily temperature in 24 neighbouring stations (ie ACORN-SAT – neighbour). As in Deniliquin, the rising trends mean that the ACORN-SAT site is warming faster than the neighbours.

[Figures: BoM comparison of Rutherglen with three ACORN-SAT sites (left); relative trend of the Rutherglen ACORN-SAT series against its neighbours (right)]

The BoM's caption is:

While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

Problems: Note the cherrypicking of only three sites, but more seriously, the versions chosen are from the adjusted ACORN-SAT. That is, the already adjusted data is used to justify an adjustment — a classic circularity! This is not stated in the other BoM reports, but probably applies to the other station comparisons. Loss of data due to aggregation to annual data is also clear.

In my analysis, the ACORN-SAT version warms at 0.14C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, this puts the ACORN-SAT version outside the limit. Once again, the adjustments have made the trend of the official long term series for Rutherglen significantly warmer than the regional neighbours. As with Deniliquin, the residual trend of the raw data (before adjustment) is not significant and so consistent with its neighbours.

Amberley

The raw data is not always more consistent, as Amberley shows. On the left, the BoM compares Amberley with Gatton (38 km west of Amberley) in the years around 1980. On the right from my Amberley report is the relative trend of daily temperature data from 19 neighbouring stations (ie ACORN-SAT – neighbour). In contrast to Rutherglen and Deniliquin, the mostly flat trends mean that the ACORN-SAT site is not warming faster than the raw neighbours.

[Figures: BoM comparison of Amberley with Gatton (left); relative trend of the Amberley ACORN-SAT series against its neighbours (right)]

The BoM's caption:

Amberley is consistently warmer than Gatton prior to 1980 and consistently cooler after 1980. This, combined with similar results when Amberley’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

Problems: Note the cherry-picking and hand-waving.

In my analysis, the ACORN-SAT version warms at 0.09C/decade faster than the neighbours. As the spread of temperature trends at weather stations in Australia is about 0.1C/decade at the 95% confidence level, I class the ACORN-SAT version as borderline. The residual trend of the raw data (before adjustment) is -0.32C/decade which is very significant and so there is clearly a problem with the raw station record.

Conclusions

More cherrypicking, circularity, and hand-waving from the BoM — excellent examples of the inadequacy of the adjusted ACORN-SAT reference network and justification for a full audit of the Bureau’s climate change division.

BoM publishing station summaries justifying adjustments

Last night George Christensen MP gave a speech accusing the Bureau of Meteorology of “fudging figures”. He waved a 28-page list of adjustments around and called for a review. These adjustments can be found here. While I don't agree that adjusting to account for station moves can necessarily be regarded as fudging figures, I am finding issues with the ACORN-SAT data set.

The problem is that most of the adjustments are not supported by known station moves, and many may be wrong or exaggerated. It also means that if the adjustment decreases temperatures in the past, claims of current record temperatures become tenuous. A maximum daily temperature of 50C written in 1890 in black and white is higher than a temperature of 48C in 2014, regardless of any post-hoc statistical manipulation.

But I do take issue with the set of summaries that has been released, which amounts to blatant “cherry-picking”.

Scroll down to the bottom of the BoM adjustment page. Listed are station summaries justifying the adjustments to Amberley, Deniliquin, Mackay, Orbost, Rutherglen and Thargomindah. The overlaps with the ones I have evaluated are Deniliquin, Rutherglen and Amberley (see previous posts). While the BoM finds the adjustments to these stations justified, my quality control check finds problems with the minimum temperature at Deniliquin and Rutherglen. I think the Amberley raw data may have needed adjusting.

WRT Rutherglen, BoM defends the adjustments with Chart 3 (my emphasis):

Chart 3 shows a comparison of the raw minimum temperatures at Rutherglen with the adjusted data from three other ACORN-SAT stations in the region. While the situation is complicated by the large amount of missing data at Rutherglen in the 1960s, it is clear that, relative to the other sites, Rutherglen’s raw minimum temperatures are very much cooler after 1974, whereas they were only slightly cooler before the 1960s.

WRT Deniliquin, BoM defends the adjustments on Chart 3 (my emphasis):

Chart 3 shows a comparison of minimum temperatures at Kerang (95 km west of Deniliquin) and Deniliquin in the years around 1971. Deniliquin is consistently warmer than Kerang prior to 1971, with similar or cooler temperatures after 1971. This, combined with similar results when Deniliquin’s data are compared with other sites in the region, provides a very clear demonstration of the need to adjust the temperature data.

My analysis is superior to the BoM's flawed analysis in three important ways:
1. I compare the trend in Rutherglen and Deniliquin with 23 and 27 stations respectively, not 3 and 1 neighbouring stations respectively (aka cherry-picking).
2. I also use a rigorous statistical panel test to show that the trend of the Rutherglen minimum exceeds the neighbouring group by 0.1C per decade, which is outside the 95% confidence interval for Australian station trends — not a visual assessment of a chart (aka eyeballing).
3. I use the trends of daily data and not annual aggregates, which are very sensitive to missing data.

Kerang: Where is the daily data?

Been looking forward to doing Kerang as I knew it was another dud series from ACORN-SAT. The report is here:

The first thing to notice in plotting up the time series data for the raw CDO and ACORN-SAT is that while the ACORN-SAT data goes back to 1910 the CDO data is truncated at 1962.

[Figure: Kerang raw CDO and ACORN-SAT time series]

The monthly data, however, goes back almost to 1900. This is inexplicable as the monthly data is derived from the daily data! Here is proof that, contrary to some opinion pieces, not all of the data needed to check the record is available at the Bureau of Meteorology website, Climate Data Online.

The residual trends of ACORN-SAT are at the benchmark for the maximum and greatly exceed the benchmark for the minimum.

While on the subject of opinion pieces, consider this statement from “No, the Bureau of Meteorology is not fiddling its weather data”:

Anyone who thinks they have found fault with the Bureau’s methods should document them thoroughly and reproducibly in the peer-reviewed scientific literature. This allows others to test, evaluate, find errors or produce new methods.

So you think skeptics haven’t tried? A couple of peer-reviewed papers of mine on quality control problems in the Bureau of Meteorology's use of models have not had a response from the Bureau in over two years. The sound of crickets chirping is all. Talk is cheap in climate science, I guess. Here they are:

Biases in the Australian High Quality Temperature Network

Critique of drought models in the Australian drought exceptional circumstances report (DECR)

Scorecard for ACORN-SAT Quality Assurance – the score so far.

Three more quality tests of stations in the ACORN-SAT series have been completed:

The test measures the deviation of the trend of the series from its neighbours since 1913 (or residual trend). A deviation of plus or minus 0.05 degrees per decade is within tolerance (green), 0.05 to 0.1 is borderline (amber), and greater than that is regarded as a fail (red) and should not be used.

Residual trends (degrees C per decade):

Station                  Max CDO   Max ACORN-SAT   Min CDO   Min ACORN-SAT
Rutherglen                -0.02       -0.03         -0.05        0.14
Williamtown                0.08       -0.02         -0.13       -0.00
Deniliquin                -0.05        0.03         -0.02        0.13
Amberley                  -0.01       -0.01         -0.32        0.09
Cape Otway Lighthouse     -0.17        0.02         -0.05        0.01
Wilcannia                 -0.16       -0.06         -0.13        0.05
Kerang                     0.02        0.05         -0.01        0.13

There are more inconsistent stations among the raw CDO data — as would be expected as it is “raw”. However, the standout problems in this small sample are in the ACORN-SAT minimums.

Results so far suggest the Bureau of Meteorology has a quality control problem with its minimum temperatures, with almost all borderline residual trends and very large deviations from the neighbours in Rutherglen and Deniliquin.
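
For readers who want to reproduce the colour coding, here is a minimal R sketch. This is not the original analysis code: the residual trends are simply copied from the ACORN-SAT minimum column of the table above, and the classify() helper is mine.

```r
# Classify residual trends (degrees C per decade) using the tolerance bands
# stated above: within +/-0.05 green, 0.05 to 0.1 amber, greater than 0.1 red.
classify <- function(residual_trend) {
  d <- abs(residual_trend)
  if (d <= 0.05) "green (within tolerance)"
  else if (d <= 0.10) "amber (borderline)"
  else "red (fail)"
}

# ACORN-SAT minimum residual trends from the table above
acorn_min <- c(Rutherglen = 0.14, Williamtown = 0.00, Deniliquin = 0.13,
               Amberley = 0.09, "Cape Otway Lighthouse" = 0.01,
               Wilcannia = 0.05, Kerang = 0.13)
sapply(acorn_min, classify)
```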

Williamtown RAAF – an interesting case

Williamtown RAAF is one case where the quality control test indicates that the adjustments were justified, even though the homogenization in ACORN-SAT produced a strong change in trend. My report is here and Ken has posted graphs for Williamtown, along with a number of sites with large changes in trend by ACORN-SAT.

This illustrates neatly that a series is not rejected just because the ACORN-SAT increases the warming trend. An ACORN-SAT series should be rejected, however, if its trend is inconsistent with its raw neighbours.

Panel tests of the ACORN-SAT temperature network – first results

You may have read Ken Stewart’s excellent blog on the official Australian temperature record. With the publication of the “adjustments.xls” file of the official adjustments to the raw data in the ACORN-SAT dataset, as reported on JoNova’s blog, there has been a flurry of work behind the scenes, so to speak.

Jennifer Marohasy has also been leading the charge to audit, or at least review, the practices of the Bureau of Meteorology in adjusting raw temperature to produce the synthetic ACORN-SAT series. Rutherglen in particular has been in the news for the massive warming adjustment to its minimum temperature trend.

The story has gone national in The Australian with articles like Climate records contradict Bureau of Meteorology. Even I have been quoted as saying that the BoM may be “adding mistakes” with their data modelling.

Well, all of this kerfuffle has been enough to get me out of hiding and start working on some stuff. I thought it would be good to have a quality assessment method that could reliably test the ACORN data. The idea I came up with is to test the trend of the ACORN-SAT series against the trends of the raw data neighbours. The idea is that if the trend of the synthetic series exceeds the overall trend of all the neighbours, then something must be wrong.

The difficulty is that the neighbours all start and stop at different times, and so a slightly more complex test is needed than a simple ordinary least squares regression. The answer is a panel test, or POLS. It's all explained in the reports on the first three stations below.
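
To give a flavour of the approach, here is a minimal sketch of a pooled regression in R. The column names and the exact specification are illustrative only — the station reports contain the actual code — but the idea is that the interaction coefficient estimates the residual trend of the candidate series relative to its raw neighbours.

```r
# Sketch of a pooled OLS (panel) comparison. Assumes a long-format data frame
# `panel` with columns: temp (daily minimum or maximum), date, station, and
# is_acorn (TRUE for the ACORN-SAT series, FALSE for its raw neighbours).
panel$years <- as.numeric(panel$date - as.Date("1913-01-01")) / 365.25

fit <- lm(temp ~ years + years:is_acorn + factor(station), data = panel)

# The interaction term estimates how much faster (degrees per year) the
# ACORN-SAT series warms than the pooled neighbours; multiply by 10 to get
# the residual trend in degrees per decade.
coef(summary(fit))["years:is_acornTRUE", ]
```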

The test seems to work remarkably well. The ACORN-SAT minimum temperatures for Rutherglen and Deniliquin fail the benchmark residual trend of 0.05 degrees C per decade – that is they warm at a much greater rate than their neighbours. ACORN-SAT at Williamtown, by contrast, is consistent with its neighbours and the raw CDO series are not, indicating that the single large adjustment applied around 1969 was warranted.

I intend to keep working through the stations one by one and uploading them to the viXra archive site as I go. I will also release the data and code soon for those who are interested. It's almost at the stage where you enter a station number and it spits out the analysis. Wish it would write the reports too, though LaTeX goes a great deal of the way towards that end.

New energy source confirmed in third party test

As previously covered here, Andrea Rossi appears to have delivered the goods…

Cold fusion reactor verified by third-party researchers, seems to have 1 million times the energy density of gasoline

Andrea Rossi’s E-Cat — the device that purports to use cold fusion to generate massive amounts of cheap, green energy – has been verified by third-party researchers, according to a new 54-page report. The researchers observed a small E-Cat over 32 days, where it produced net energy of 1.5 megawatt-hours, or “far more than can be obtained from any known chemical sources in the small reactor volume.”…

Follow the link for the story in full.

The third-party report is here.

Significant transformation of the isotopes of Lithium and Nickel is broadly consistent with the energy produced. This leaves no doubt that the source of the energy is nuclear. But the authors are perplexed, nay, dumbfounded, nay, flabbergasted at the possible physics involved, as such nuclear reactions typically have large Coulomb barriers to overcome.

They found it “very hard to comprehend” how these fusion processes could take place at such low energies – 1200 to 1500 degrees C. While the transmutations are remarkable in themselves, they found no trace of radiation during the test, nor residual radiation after the reactor had stopped – almost inevitable in a reaction with a nuclear source.

What is the possible reaction(s) then? Speculations from the vortex discussion list:

Li7 + Ni58 => Ni59 + Li6 + 1.75 MeV
Li7 + Ni59 => Ni60 + Li6 + 4.14 MeV
Li7 + Ni60 => Ni61 + Li6 + 0.57 MeV
Li7 + Ni61 => Ni62 + Li6 + 3.34 MeV
Li7 + Ni62 => Ni63 + Li6 – 0.41 MeV (Endothermic!)

This series stops at Ni62, hence all isotopes of Ni less than 62 are depleted and Ni62 is strongly enriched.

I have only briefly skimmed the report, but the basic reaction appears to be a neutron transfer reaction where a neutron tunnels from Li7 to a Nickel isotope. The excess energy of the reaction appears as kinetic energy of the two resultant nuclei (i.e. Li6 & the new Ni isotope), rather than as gamma rays. Because there are two daughter nuclei, momentum can be conserved while dumping the energy as kinetic energy in a reaction that is much faster than gamma ray emission. Because both nuclei are “heavy” and slow moving, very little to no bremsstrahlung is produced. There is effectively no secondary gamma from Li6 because the first excited state is too high. (I haven’t checked Li7). There is unlikely to be anything significant from Ni because the high charge on the nucleus combined with the “3” from Lithium tend to keep them apart (minimum distance 31 fm).

It would be nice to know if the total amounts of each of Li & Ni in the sample were conserved (I’ll have to study the report more closely).

Regards,

Robin van Spaandonk

Fascinating new world of materials science opening up.

Fact Checking the Climate Council

The Climate Council mini-statement called Bushfires and Climate Change in Australia – The Facts states in support of their view that “1. In Australia, climate change is influencing both the frequency and intensity of extreme hot days, as well as prolonged periods of low rainfall. This increases the risk of bushfires.”

Southeast Australia is experiencing a long-term drying trend.

A moment of fact-checking the BoM recorded rainfall in Southeastern Australia reveals no trend in rainfall.

Another moment of fact-checking the BoM recorded rainfall in Australia reveals an increasing rainfall trend.

Fail.

A Practical Project for the Hyperloop

When the storied Tesla Motors CEO promoted the Hyperloop, a proposed California high speed link between San Francisco and Los Angeles taking 30 minutes instead of the 2 hours and 40 minutes on the VFT, people naturally got excited. But there are three questions. First, will the ticket price be competitive with existing air travel? Second, will the novel technology meet problems in research and development? Third, will consumers like being shot along a tube at almost supersonic speeds?

Given that the price of an LA-SF link would be comparable with air travel, and the technology is conventional, the largest question is the third – consumer acceptance.

An alternative way to test the third would be to build a smaller mass transit system to augment or replace an existing airport shuttle service from check-in to terminal, or even between gates. Such a system would operate in a mode where the capsules would spend half the time accelerating and half decelerating. It would not reach the high speeds of 1000 km per hour proposed for the Hyperloop, and so would provide an opportunity to trial consumer reactions and refine the technology.

How fast? A 0.5g force is an acceleration of around 5 m/sec/sec. Consider a 1 km run from the baggage check-in to a remote terminal. Double integrating, we get the distance travelled as 5/2 times time squared. Solving for a 500 m distance we get a time of about 14 sec to the half way point. The top speed will be 5t, or about 70 m/sec (roughly 250 km per hour). The entire trip with deceleration would take about 28 sec.

If travelers are prepared to accept a 1g force in both acceleration and deceleration the entire trip would take 20 sec with a top speed of 100m/sec or 360 km per hour.
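
The arithmetic above is easy to check with a throwaway R calculation (the distances and accelerations are the illustrative figures used in the text):

```r
# Trip time and top speed for a run of length d (metres): accelerate at a
# (m/s^2) to the midpoint, then decelerate symmetrically over the second half.
trip <- function(d = 1000, a = 5) {
  t_half <- sqrt(d / a)                       # from d/2 = a * t_half^2 / 2
  c(total_time_s = 2 * t_half, top_speed_kmh = a * t_half * 3.6)
}
trip(a = 5)    # ~0.5g: about 28 s in total, roughly 250 km/h
trip(a = 10)   # ~1g: 20 s in total, 360 km/h
```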

This would be sufficient to test the system even on these short runs.

But we all know the feeling of being treated like cattle that comes with the existing shuttle systems at Dulles and other major hubs.

Private, individual or dual pods may be the most desirable aspect to consumers, as they allow transport on demand, no waiting, and would take the ‘mass’ out of mass transport. This might be the major selling point.

Error in calculating Hyperloop ticket price

The semi-technical document on the Hyperloop mass transport system, recently produced by Elon Musk, estimated the price of a one-way ticket as $20.

Transporting 7.4 million people each way and amortizing the cost of $6 billion over 20 years gives a ticket price of $20 for a one-way trip for the passenger version of Hyperloop.

Multiply 7.4 million trips by two, then by $20, then by 20 years, and you get $5.92 billion, which is about the $6 billion estimated cost of construction of the Hyperloop. So $20 is the price at which the cost of construction (very simplistically) is returned in 20 years.
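
The back-of-envelope arithmetic, for what it is worth:

```r
# Amortize the estimated capital cost over 20 years of two-way passenger
# volume, ignoring operations, maintenance, finance and profit.
trips_per_year <- 7.4e6 * 2             # 7.4 million people each way
capital_cost   <- 6e9                   # estimated construction cost ($)
capital_cost / (trips_per_year * 20)    # roughly $20 per one-way ticket
```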

The amortized cost is not the ticket price, which must necessarily include such costs as management, operations and maintenance, and financial costs such as interest on loans and profits to shareholders. Thus the actual ticket price of a fully private venture would be comparable to an airfare, at least $100 say.

The Musk document is poorly worded at best or misleading at worst. Major media outlets universally quoted a ticket price of $20.

According to New Scientist

He also estimates that a ticket for a one-way Hyperloop trip could cost as little as $20, about half what high-speed rail service is likely to charge.

The Telegraph:

Hyperloop would propel passengers paying about $20 (£13) in pods through a 400-mile series of tubes that would be elevated above street…

The Washington Post

How the Hyperloop could get you from LA to San Francisco in 30 minutes for $20.

USA Today, Huffpost, Fox News, and all of the internet tech blogs simply repeated the same story. While this is one more example of the total absence of research in the media, the blame also surely rests on Musk, who should correct the misrepresentation immediately.

Hyperloop for Sydney – Melbourne – Brisbane link?

Elon Musk unveiled his concept for a new mass transport system consisting of capsules shot along a partially evacuated pipe at very high speed.

The details contain estimates of a capital cost of less than $10 billion and the cost of a one-way ticket of $20 — not bad. Compare that to the estimated capital cost of $100 billion for a very fast train (VFT) system, a reduction in the transit time between Los Angeles and San Francisco from 3 hours to 30 minutes, and the proposal looks very attractive.

The numbers would be similar for an equivalent system in Australia. The VFT has been costed at over $100 billion for a Melbourne to Brisbane link – but given this estimate is probably optimistic, it comes in at the same price for a similar distance as the Californian VFT proposal.

The savings on capital cost come largely from the greatly reduced land acquisition of an elevated system. It has been the high capital cost (that would have to be borne by the taxpayer) that has made the VFT uneconomic in the past. (Of course, a colossal waste of public money never stopped the Greens from advocating it.)

The Hyperloop would radically change that part of the equation. As Elon said:

It was born from frustration at his state’s plan to build a bullet train that he called one of the most expensive per mile and one of the slowest in the world.

If tickets on the Hyperloop were comparable with air and bus transport at around $100 – or more, given the travel time between Brisbane and Sydney would be around 60 minutes – this would provide an adequate margin for an entirely privately-funded venture.

Cold Fusion a Victory for the Free Market

Free marketers and global warming alarmists alike should be heartened by the handful of companies that claim a zero carbon emissions commercial energy plant based on a safe cold fusion (CF) reaction. An Italian company demonstrated a product called E-Cat in 2011, and a Greek company named Defkalion also provided a professional demonstration of their Hyperion product.

The disdain for CF by the mainstream government-funded research community and the lack of government funding support is well known. Cold fusion results are routinely and categorically rejected by physics and engineering journals, and there has been virtually no support from government funding agencies, except for the military.

Meanwhile, the lack of public benefit from government subsidies of green energy sources is an embarrassment. Subsidies for renewable sources such as wind and solar – $88 billion in 2011 – are dropping due to political backlash against increasing electricity prices. Hot fusion research over the last 50 years – $50 billion – is no closer to break-even, let alone a working power plant.

One could argue that funding research on government priorities has been deeply harmful to research. If young faculty members in physics find a field promising, but can only secure grants in government-determined priority areas, they are incentivized to focus on politically motivated fields. Keep activists out of research funding!

Nevertheless, the field has progressed through the efforts of professionals working in their spare time and amateurs experimenting in their garages, though marked by contradictory experimental results and outright mistakes, secrecy and paranoia by wanna-be entrepreneurs. There are dozens of theories, but none of them properly tested. Defkalion's ICCF18 slides show a real-time mass spectrometry system being designed which they hope will nail down what is happening in the NiH fusion processes.

Examples of Scientific Method

Note to global warming alarmists:

“Science is our way of describing — as best we can — how the world works. The world works perfectly well without us. Our thinking about it makes no important difference. When our minds make a guess about what’s happening out there, if we put our guess to the test and we don’t get the results we expect, as Feynman says, there can be only one conclusion: we’re wrong.”

Scientific Method Meets Global Warming

In general, there are only two ways to prove something in science.

1. Prove a singular (fact) with an observation such as “black swans exist”.
2. Disprove a universal (theory) with a singular fact such as “all swans are white”.

The inability to disprove a singular, or to prove a universal, is due to the finite limits of our observations. In general, we cannot gather the infinite observations required to disprove a singular fact (1), or to prove a universal theory (2).

Scientists need to be rigorous and strict, particularly in the initial stages of formulating a study, about whether it is a singular or a universal that is being tested, and how the observations will bear on it.

A case in point: the impact of observations of global temperatures on the climate model projections plotted below. By a strict interpretation of scientific method, the observed “slow rise in global temperature” is a fact that disproves the universal “all possible trajectories of climate models under AGW warming”.

The only appropriate scientific response is to throw away all of those falsified models and all of the work based on them – extinction predictions, extreme events, agricultural trends, and so on – as it is scientifically worthless. You must go back to the drawing board.

The rules of science were illustrated recently in a post on Vortex about the Wright Brothers’ first flight:

To give another dramatic example, suppose at 1:00 pm on the afternoon of December 17, 1903, you were to take a poll about whether man can fly. Suppose you asked people to place bets as to whether airplanes exist. Out of the 1.6 billion people in the world alive on that day, at that moment, the only ones who had ANY KNOWLEDGE of that question were Wilbur and Orville Wright and the members of the Kitty Hawk coast guard who had helped them fly that morning. In all the world, there was not another soul who knew the facts or was qualified to address the question. The opinions of other people were worthless. Meaningless. All the money in the world placed in a bet would mean nothing. There was an undeveloped glass plate photograph showing the first flight:

That photograph was proof. It overruled all opinions, all money, all textbooks, and the previous 200,000 years of human technology. A thermocouple reading from a cold fusion experiment in 1989 overrules every member of the human race, including every scientist. Once experiments are replicated at high signal to noise ratios, all bets are off. The issue is settled forever. There is no appeal, and it makes no difference how many people disagree, or how many fail to understand calorimetry or the laws of thermodynamics. The rules of science in such clear-cut cases are objective and the proof is as indisputable as that photograph.

– Jed

Nanoplasmonics – a field is born

Axil Axil suggested in the Vortex discussion list – about the only list I read these days – the name nanoplasmonics for developments in cold fusion (while referencing a very funny mockery of how academics will revise the history of cold fusion in 2015 – “History is written by the losers”).

The field is so new that Wikipedia has yet to have an entry dedicated to “nanoplasmonics”, except as a subheading to an entry on Surface Plasmon Polaritons. An effect seen in bulk Nickel powder is not a surface effect. If the reactors of Rossi and Defkalion are based on a plasma phenomenon like polaritons in a nano-sized bulk medium, then the headings should by rights be reversed.

How did climate skeptics know the scare was not real?

The climate scare is collapsing, it seems, as climate scientists everywhere are renouncing their previous certainty.

Skeptics, OTOH, have been consistent. This blog in particular has, since 2005, been challenging the establishment global warming views on such predictions as mass extinctions, the significance of warming, and decreasing rainfall and droughts.

It is instructive to look into ourselves and ask – how could the skeptics have been right – when the consensus of the learned experts thought differently? As a recent post at WUWT asked – what was my personal path to climate skepticism? Particularly when one has never before been at odds with the scientific mainstream.

The answer for me was elegantly expressed by A.O. Scott of the New York Times in his review of the Disney film Chicken Little. He said the film is:

“a hectic, uninspired pastiche of catchphrases and clichés, with very little wit, inspiration or originality to bring its frantically moving images to genuine life.”

My theory is that due to their scholarship in other fields – such as engineering, the hard sciences, and economics – skeptics are attuned to genuine scientific insight and not deceived by the “uninspired pastiche of catchphrases and clichés” that constitutes the majority of global warming research.

How does cold fusion work?

A scientific paper by Defkalion Energy sets out their theory behind the desk-top reactor.

1. Powdered nickel is loaded with hydrogen and heated to its Debye temperature – the temperature which maximizes the vibration of the individual atoms in the nickel lattice.

2. The hydrogen molecules (H2) are dissociated into a plasma by a spark from a spark plug. In the plasma the H atoms (consisting of a proton and an electron) are excited into elliptical orbits. Due to the elliptical orbit, the electron comes very close to the proton at one end, and so is screened to appear like a neutron (no charge).

3. Driven by the lattice vibration and the pulse of plasma from the spark, the screened H atom is driven into the nucleus of a Ni atom, producing Copper, Zinc, and other transmuted byproducts, and copious heat.

That’s their theory.

UQ Fellow Spews Fear

John Cook, Climate Communication Fellow from the Global Change Institute at the University of Queensland, is on the record saying:

Climate change like atom bomb

and

“animal species are responding to global warming by mating earlier in the year. This isn’t because animals are getting randier, it’s because the seasons themselves are shifting”

IMHO science is in need of a major shakeup.

More Evidence of a Sun-Climate Connection

Bjerknes compensation assumes a constant total poleward energy transport (and an inverse relation between oceanic and atmospheric heat transport fluxes (Bjerknes, 1964)). Contrary to this assumption, there is empirical evidence of a simultaneous increase in poleward oceanic and atmospheric heat transport during the most recent warming period since the mid-1970s (aka the Great Pacific Climate Shift). This paper argues that TSI directly modulates ocean–atmospheric meridional heat transport.

Solar irradiance modulation of Equator-to-Pole (Arctic) temperature gradients: Empirical evidence for climate variation on multi-decadal timescales, Willie Soon and David R. Legates. PDF

This paper raises more questions than it addresses. How sensitive is the estimate of global temperature to a change in the equator to pole temperature gradient? Can a change in the gradient produce an apparent ‘amplification’?

Another thought that has occurred to me is that climate models overestimate global warming but they underestimate Arctic melting. Could both failures be due to underestimating the response of meridional heat transfer from the equator to the poles?

The Widening Gap Between Present Global Temperature and IPCC Model Projections

The increase in global temperature required to match the Intergovernmental Panel on Climate Change (IPCC) projections is becoming increasingly unlikely. A shift to the mean projected pathway of a 3 degree increase by the end of the century would require an immediate, large, and sustained rise in temperature, which seems physically impossible.

Global surface temperatures have not increased at all in the last 18 years. The trend over the last few years is even falling slightly.

Global temperatures continue to track at the low end of the range of global warming scenarios, expanding a significant gap between current trends and the course needed to be consistent with IPCC projections.

On-going international climate negotiations fail to recognise the growing gap between the model projections based on global greenhouse emissions and the increasingly unlikely chance of those models being correct.

Research led by Ben Santer compared temperatures under the emission scenarios used by the IPCC to project climate change with satellite temperature observations at all latitudes.

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.” reports their paper in PNAS. Moreover, it is not known why they are failing.

“The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive.”

Explaining why they are failing will require a commitment to skeptical inquiry and an increasing need to rely on the scientific method.

The unquestioning acceptance of the projections of IPCC climate models by the CSIRO, the Australian Climate Change Science Program, and many other traditional scientific bodies, which has informed policies and decisions on energy use and associated costs, must be called into question. So too must the long-term warming scenarios based on the link between emissions and increases in temperature.

Q: Where Do Climate Models Fail? A: Almost Everywhere

“How much do I fail thee. Let me count the ways”

Ben Santer’s latest model/observation comparison paper demonstrates that climate realists were right and climate models exaggerate warming:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes.

Where do the models fail?

1. Significantly warmer than reality (95% CI) in the lower troposphere at all latitudes, except for the Arctic.

2. Significantly warmer than reality (95% CI) in the mid-troposphere at all latitudes, except possibly the polar regions.

3. Significantly warmer than reality (95% CI) in the lower stratosphere at all latitudes, except possibly the polar regions.

Answer: Everywhere except for polar regions where uncertainty is greater.

East Pacific Region Temperatures: Climate Models Fail Again

Bob Tisdale, author of the awesome book “Who Turned on the Heat?” presented an interesting problem that turns out to be a good application of robust statistical tests called empirical fluctuation processes.

Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to model simulations (CMIP SST) for that region that show strong warming. The region in question is shown below.

The question is, what is the statistical significance of the difference between model simulations and the observations? The graph comparing the models with observations from Bob’s book shows two CMIP model projections strongly increasing at 0.15C per decade for the region (brown and green) and the observations increasing at 0.006C per decade (magenta).

However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 relative to the standard deviation, but this would be a very low power test, and only used by someone who wanted to obfuscate the obvious failure of climate models in this region.

Empirical fluctuation processes are a natural way to examine such questions in a powerful and generalized way, as we can ask of a strongly autocorrelated series — Has there been a change in level? — without requiring the increase to be a linear trend.

To illustrate the difference, if we assume a linear regression model, as is the usual practice, Y = mt + c, the statistical test for a trend is whether the trend coefficient m is greater than zero:

H0: m = 0 versus Ha: m > 0

If we test for a change in level, the EFP statistical test is whether m is constant over all times t:

H0: m_t = m_0 for all t

For answering questions similar to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y=m+c deviates from the data. In R this is represented as the model Y~1. If we were to use a full model Y~t then this would test whether the trend of Y is constant, not whether the level of Y is constant. This is clearer if you have run linear models in R.
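
For anyone wanting to try this themselves, the EFP tests are available in the strucchange package. A minimal sketch follows; the series name sst is illustrative only — the actual data and code are linked below.

```r
# Empirical fluctuation process test for a change in level (the Y ~ 1 model).
library(strucchange)

# sst: an annual sea surface temperature series as a ts object (illustrative)
efp_path <- efp(sst ~ 1, type = "OLS-CUSUM")
plot(efp_path)     # EFP path with significance boundaries, as in the figures
sctest(efp_path)   # formal significance test for a structural change

# The same call applies to the model-minus-observation differences, e.g.
# efp((cmip3 - sst) ~ 1, type = "OLS-CUSUM")
```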

Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.

The figure below shows the series in question on the x axis, the EFP path is the black line, and 95% significance levels for the EFP path are in red.

It can be seen clearly that while the EFP path for the SST observations series shows a little unusual behavior, with a significant change in level in 1998 and again in 2005, the level currently is not significantly above the level in 1985.

The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significance level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.

Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP path for the differences CMIP3-SST and CMIP5-SST. After some deviation from zero at about 1990, around 2000 the difference becomes very significant at the 5% level, and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the CMIP temperature simulations and the observational SST series in the Eastern Pacific region after the year 2000.

While the average of multiple model simulations show a significant change in level over the period, in the parlance of climate science, there is not yet a detectable change in level in the observations.

One could say I am comparing apples and oranges, as the models are average behavior while the SST observations are a single realization. But the fact remains that only the model simulations show warming; there is no support for warming of the region in the observations. This is consistent with the previous post on Santer's paper showing the failure of models to match the observations over most latitudinal bands.

Santer: Climate Models are Exaggerating Warming – We Don’t Know Why

Ben Santer’s latest model/observation comparison paper in PNAS finally admits what climate realists have been saying for years — climate models are exaggerating warming. From the abstract:

On average, the models analyzed … overestimate the warming of the troposphere. Although the precise causes of such differences are unclear…

Their figure above shows the massive model fail. The blue and magenta lines are the trends of the UAH and RSS satellite temperature observations averaged by latitude, with the Arctic at the left and the Southern Hemisphere to the right. Except for the Arctic, the observations are well outside all of the model simulations. As they say:

The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.

Anyone who has been following the AGW issue for more than a few years remembers that Ross McKitrick, Stephen McIntyre and Chad Herman already showed climate models exaggerating warming in the tropical troposphere in their 2010 paper. Before that was Douglass, and in their usual small-minded way the Santer team do not acknowledge them. Prior to then a few studies had differed on whether models significantly overstate the warming or not. McKitrick found that up to 1999 there was only weak evidence for a difference, but on updated data the models appear to significantly overpredict observed warming.

Santer had a paper where data after 1999 had been deliberately truncated, even though the data was available at the time. As Steve McIntyre wrote in 2009:

Last year, I reported the invalidity using up-to-date data of Santer’s claim that none of the satellite data sets showed a “statistically significant” difference in trend from the model ensemble, after allowing for the effect of AR1 autocorrelation on confidence intervals. Including up-to-date data, the claim was untrue for UAH data sets and was on the verge of being untrue for RSS_T2. Ross and I submitted a comment on this topic to the International Journal of Climatology, which we’ve also posted on arxiv.org. I’m not going to comment right now on the status of this submission.

Santer already had form at truncating inconvenient data, going back to 1995, as related by John Daly. It is claimed that he authored the notorious “… a discernible human influence on global climate”, made in Chapter 8 of the 1995 IPCC Report, added without the consent of the drafting scientists in Madrid.

As John Daly says:

When the full available time period of radio sonde data is shown (Nature, vol.384, 12 Dec 96, p522) we see that the warming indicated in Santer’s version is just a product of the dates chosen. The full time period shows little change at all to the data over a longer 38-year time period extending both before Santer et al’s start year, and extending after their end year.

It was 5 months before `Nature’ published two rebuttals from other climate scientists, exposing the faulty science employed by Santer et al. (Vol.384, 12 Dec 1996). The first was from Prof Patrick Michaels and Dr Paul Knappenberger, both of the University of Virginia. Ben Santer is credited in the ClimateGate emails with threatening Pat Michaels with physical violence:

Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat the crap out of him. Very tempted.

I suppose that now faced with a disparity between models and observations that can no longer be ignored, he has had to face the inevitable. That’s hardly a classy act. Remember Douglass, McKitrick, McIntyre and other climate realists reported the significant model/observation disparity in the peer-reviewed literature first. You won’t see them in Santer’s list of citations.

Failing to give due credit. Hiding the decline. Truncating the data. Threatening violence to critics. This is the AGW way.

Solar Cycle 24 peaked? The experimentum crucis begins.

The WSO Polar field strengths – early indicators of solar maximums and minimums – have dived towards zero recently, indicating that it's all downhill from here for solar cycle 24.

Polar field reversals can occur within a year of sunspot maximum, but cycle 24 has been so insipid, it would not be surprising if the maximum sunspot number fails to reach the NOAA predicted peak of 90 spots per month, and gets no higher than the current 60 spots per month.

The peak in solar intensity was predicted for early 2013, so this would be early, and may be another indication that we are in for a long period of subdued solar cycles.

A prolonged decline in solar output will provide the first crucial experiment to distinguish the accumulation theory of solar driven temperature change, and the AGW theory of CO2 driven temperature change. The accumulation theory predicts global temperature will decline as solar activity falls below its long-term average of around 50 sunspots per month. The AGW theory predicts that temperature will continue to increase as CO2 increases, with little effect from the solar cycle.
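
As a toy illustration of the accumulation idea only — my shorthand here, not the published model — the predicted temperature response tracks the running sum of solar activity above its long-term mean, so a prolonged quiet sun pulls the prediction back down:

```r
# Toy sketch: temperature anomaly as the cumulative sum of sunspot departures
# from a long-term baseline (~50 spots/month), scaled by an arbitrary k.
accumulated_anomaly <- function(sunspots, k = 0.001, baseline = 50) {
  cumsum(k * (sunspots - baseline))
}

# An active decade followed by a quiet one: the predicted anomaly rises,
# peaks, then declines once activity falls below the baseline.
sunspots <- c(rep(90, 120), rep(30, 120))
plot(accumulated_anomaly(sunspots), type = "l",
     xlab = "month", ylab = "predicted anomaly (arbitrary units)")
```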

An experimentum crucis is considered necessary for a particular hypothesis or theory to be considered an established part of the body of scientific knowledge. A theory such as AGW that is in accordance with known data but has not yet passed a critical experiment is typically considered unworthy of full scientific confidence.

Prior to this moment, BOTH solar intensity was generally above its long term average, AND greenhouse gases were increasing. BOTH of these factors could explain generally rising global temperature in the last 50 years. However, now that one factor, solar intensity, is starting to decline and the other, CO2, continues to increase, their effects are in opposition, and the causative factor will become decisive.

For more information see WUWT’s Solar Reference page.

AGW Doesn’t Cointegrate: Beenstock’s Challenging Analysis Published

The Beenstock, Reingewertz, and Paldor paper on lack of cointegration of global temperature with CO2 has been accepted! This is a technical paper that we have been following since 2009 when an unpublished manuscript appeared, rebutting the statistical link between global temperature increase and anthropogenic factors like CO2, and so represents another nail in the coffin of CAGW. The editor praised the work as “challenging” and “needed in our field of work.”

Does the increase in CO2 concentration and global temperature over the past century constitute a “warrant” for the anthropogenic global warming (AGW) theory? Such a relationship is necessary for AGW, but not sufficient, as a range of other effects may make warming due to AGW trivial or less than catastrophic.

While climate models (GCMs) show that enhancement of the greenhouse effect can cause a temperature increase, the observed upward drift in global temperature could have other causes, such as high sensitivity to persistent warming from enhanced solar insolation (accumulation theory). There are also the urban heat island effect and natural cycles in operation.

In short, the CO2/temperature relationship may be spurious, have an independent cause, or temperature may cause CO2 increase, all of which falsify CAGW here and now.

Cointegration attempts to fit the random changes in drift of two or more series together to provide positive evidence of association where those variables are close to a random walk. The form of time series process appropriate to this model is referred to as I(n), where n is the number of differencing operations needed before the series has a finite mean (that is, becomes stationary and does not drift far from the mean). A range of statistical tests, such as the Dickey-Fuller and Phillips-Perron procedures, identify the I(1) property.

Beenstock et al. find that while the temperature and solar irradiance series are I(1), the anthropogenic greenhouse gas (GHG) series are I(2), requiring differencing twice to yield a stationary series.
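
The integration order of a series can be checked with standard unit-root tests. A minimal sketch with the tseries package is below; temp and co2 are illustrative annual series, not the exact data used by Beenstock et al.

```r
# Unit-root (ADF) tests for integration order: a series is I(1) if it becomes
# stationary after one difference, I(2) if it needs two.
library(tseries)

adf.test(temp)                          # level: typically non-stationary
adf.test(diff(temp))                    # stationary after one difference => I(1)

adf.test(diff(co2))                     # first difference still non-stationary
adf.test(diff(co2, differences = 2))    # stationary after two differences => I(2)
```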

This fact blocks any evidence for AGW from an analysis of the time series. The variables may still somehow be causally connected, but not in an obvious way. Previous studies using simple linear regression to make attribution claims must be discounted.

The authors also show evidence of a cointegrating relationship between the temperature (corrected for solar irradiance) and changes in the anthropogenic variables. This highlights what I have been saying in the accumulation theory posts: the dynamic relationships between these variables must be given due attention, lest spurious results be obtained.

While this paper does not debunk AGW, it does debunk naïve linear regression methods, and demonstrates the power of applying rigorous statistical methodologies to climate science.

Still no weakening of the Walker Circulation

Once upon a time, a weakening of the East-West Pacific overturning circulation – called the Walker circulation – was regarded in climate science as a robust response to anthropogenic global warming. This belief was based on studies in 2006 and 2007 using climate models.

Together with a number of El Nino events (that are associated with a weakening of the Walker circulation) the alarm was raised in a string of papers (3-6) that global warming was now impacting the Pacific Ocean and that the Walker circulation would further weaken in the 21st century, causing more El Ninos and consequently more severe droughts in Australia.

These types of alarms in the context of a severe Australian drought gave rise to an hysterical reaction of building water desalination plants in the major capital cities in Australia, all but one now moth-balled, and costing consumers upwards of $290 per year in additional water costs.

In 2009 I did a study with Anthony Cox to see if there was any significant evidence of a weakening of the Walker circulation when autocorrelation was taken into account. We found no empirical basis for the claim that observed changes differed from natural variation, and so could not be attributed to Anthropogenic Global Warming.

Since 2009, a number of articles have shown that, contrary to the predictions of climate models, the Walker circulation has been strengthening (7-12). A recent article gives models a fail: “Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results” here.

The paper by Sohn argues that increases in the frequency of El Nino cause the apparent weakening of the Walker Circulation, not the other way around, and it is well known that climate models do not successfully reproduce such trends.

The problems with models may rest in their treatment of mass flows. In “Indian Ocean warming modulates Pacific climate change” here, they find that

“Extratropical ocean processes and the Indonesia Throughflow could play an important role in redistributing the tropical Indo-Pacific interbasin upper-ocean heat content under global warming.”

Finally from an abstract in 2012 “Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record” here:

“Additionally, none of the disparate estimates of post-1900 total eastern equatorial Pacific sea surface temperature trends are larger than can be generated by statistically stationary, stochastically forced empirical models that reproduce ENSO evolution in each reconstruction.”

Roughly translated, this means there is no evidence of any change to the Walker circulation beyond natural variation – weakening or otherwise.

Nice to be proven right again. The “weakening of the Walker Circulation” is another scary bedtime story for global warming alarmists, dismissed by a cursory look at the evidence.

References

1. Held, I. M. and B. J. Soden, 2006: Robust responses of the hydrological cycle to global warming. J. Climate, 19, 5686–5699.

2. Vecchi, G. A. and B. J. Soden, 2007: Global warming and the weakening of the tropical circulation. J. Climate, 20, 4316–4340.

3. Power SB, Smith IN (2007) Weakening of the Walker circulation and apparent dominance of El Niño both reach record levels, but has ENSO really changed? Geophys Res Lett 34.

4. Power SB, Kociuba G (2011) What caused the observed twentieth-century weakening of the Walker circulation? J Clim 24:6501–6514.

5. Yeh SW, et al. (2009) El Niño in a changing climate. Nature 461(7263):511–514.

6. Collins M, et al. (2010) The impact of global warming on the tropical Pacific Ocean and El Niño. Nat Geosci 3:391–397.

7. Li G, Ren B (2012) Evidence for strengthening of the tropical Pacific Ocean surface wind speed during 1979-2001. Theor Appl Climatol 107:59–72.

8. Feng M, et al. (2011) The reversal of the multidecadal trends of the equatorial Pacific easterly winds, and the Indonesian Throughflow and Leeuwin Current transports. Geophys Res Lett 38:L11604.

9. Feng M, McPhaden MJ, Lee T (2010) Decadal variability of the Pacific subtropical cells and their influence on the southeast Indian Ocean. Geophys Res Lett 37:L09606.

10. Qiu B, Chen S (2012) Multidecadal sea level and gyre circulation variability in the northwestern tropical Pacific Ocean. J Phys Oceanogr 42:193–206.

11. Merrifield MA (2011) A shift in western tropical Pacific sea-level trends during the 1990s. J Clim 24:4126–4138.

12. Merrifield MA, Maltrud ME (2011) Regional sea level trends due to a Pacific trade wind intensification. Geophys Res Lett 38:L21605.

13. Sohn BJ, Yeh SW, Schmetz J, Song HJ (2012) Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results. Climate Dynamics.

14. Luo J-J, Sasaki W, Masumoto Y (2012) Indian Ocean warming modulates Pacific climate change. Proc Natl Acad Sci USA.

15. Solomon A, Newman M (2012) Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record. Nature Clim Change 2:691–699.

Circularity and the Hockeystick: coming around again

The recent posts at climateaudit and WUWT show that climate scientists Gergis and Karoly were willing to manipulate their study to ensure a hockeystick result in the Southern Hemisphere, and resisted advice from editors of the Journal of Climate to report alternative approaches to establish robustness of their study.

The alternative the editors suggested of detrending the data first, revealed that most of the proxies collected in the Gergis study were uncorrelated with temperature, and so would have to be thrown out.

A false finding of “unprecedented warming” is a false positive. False positives are a characteristic of the circular fallacy. The circular logic arising from the method of screening proxies by correlation was written up by myself in a geological magazine “Reconstruction of past climate using series with red noise” DRB Stockwell, AIG News 8, 314 in 2005, and also occupies a chapter in my 2006 book “Niche modeling: predictions from statistical distributions” D Stockwell Chapman & Hall/CRC.

It is gratifying to see the issue still has legs, though, as McIntyre notes in the discussion of his post, he has been the only one to cite the AIG article in the literature. It's been widely discussed on the blogs, but it is a nettle not yet grasped by climate scientists.

Because the topic is undiscussed in climate science academic literature, we cited David Stockwell’s article in an Australian newsletter for geologists (smiling as we did so.) The topic has been aired in “critical” climate blogs on many occasions, but, as I observed in an earlier post, the inability to appreciate this point seems to be a litmus test of being a real climate scientist.

It's now fourteen years since the publication, with great fanfare, of Mann, Bradley, and Hughes' “Global-scale temperature patterns and climate forcing over the past six centuries”, and the “premature rush to adoption” that followed: the creation of research agendas in multiple countries and institutions devoted to proxy studies, and the amassing of warehouses of cores. In any normal science the basics of the methodologies would be well understood before such a rush to judgment.

Considered in the context of almost a decade of related public blog discussion of the issue, that screening proxies on 20th century temperatures gives rise to hockeysticks is a topic apparently only discussed in private by climate scientists:

The Neukom email from 07 June 2012 08:55: “…I also see the point that the selection process forces a hockey stick result but: – We also performed the reconstruction using noise proxies with the same AR1 properties as the real proxies. – And these are of course resulting in a noise-hockey stick.”

Of course, the problem is that rigorous analysis of many studies would fail to confirm the original results, that many of the proxies collected and used by their colleagues would be shown to be useless, and that the theory that contemporary warming is “unprecedented” would have to be abandoned.

In light of all the data and studies from the last decade, I am convinced of only one thing: that the fallacy of data and method snooping is simply not understood by most climate scientists, who tend to see picking and choosing between datasets, and between ad hoc and multiple methods, as opportunities to select the ones that produce their desired results.
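As a toy illustration of why such snooping matters (with assumed sample sizes and a nominal 5% test, not any particular published analysis), the sketch below compares reporting one pre-specified analysis with reporting whichever of several analysis choices “works”:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 50 data points, 10 candidate analysis choices, 2000 simulated trials
n, n_methods, n_trials = 50, 10, 2000
r_crit = 0.28   # approximate |r| needed for p < 0.05 (two-sided) with n = 50

hits_prespecified, hits_best_of = 0, 0
for _ in range(n_trials):
    x = rng.standard_normal(n)                    # "temperature" containing no real signal
    ys = rng.standard_normal((n_methods, n))      # outcomes of 10 independent analysis choices
    rs = np.abs([np.corrcoef(x, y)[0, 1] for y in ys])
    hits_prespecified += rs[0] > r_crit           # honest: report the one pre-specified choice
    hits_best_of += rs.max() > r_crit             # snooping: report whichever choice "worked"

print("false-positive rate, pre-specified method:", hits_prespecified / n_trials)
print("false-positive rate, best of 10 methods:  ", hits_best_of / n_trials)
```

Under these assumptions the first rate stays near the nominal 5% while the second rises to roughly 40%, which is the sense in which picking among methods manufactures “desired results” from noise.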

This highlights the common wisdom of asking, “What about all the catastrophe theories we have seen adopted and later abandoned over the years?” While climate scientists dismiss such questions as denial, after you have witnessed the rise and fall of countless environmental hysterias over the years you become more circumspect, and adjust your estimates of confidence to account for the low level of diligence in the field.

Is the problem alarmism, or prestige-seeking?

We all make mistakes. Sometimes we exaggerate the risks, and sometimes we foolishly blunder into situations we regret. Climate skeptics often characterize their opponents as ‘alarmist’. But is the real problem a tendency for climate scientists to be ‘nervous ninnies’?

I was intrigued by the recent verdict in the case of the scientists before an Italian court in the aftermath of a fatal earthquake. Roger Pielke Jr. relates that all is not as it seems.

There is a popular misconception in circulation that the guilty verdict was based on the scientists’ failure to accurately forecast the devastating earthquake.

Apparently the scientists were not charged with failing to predict a fatal earthquake, but with failure of due diligence:

Prosecutors didn’t charge commission members with failing to predict the earthquake but with conducting a hasty, superficial risk assessment and presenting incomplete, falsely reassuring findings to the public.

But when the article turns to motivation, it is not laziness but prestige.

Media reports of the Major Risk Committee meeting and the subsequent press conference seem to focus on countering the views offered by Mr. Giuliani, whom they viewed as unscientific and had been battling in preceding months. Thus, one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.

If officials were expressing a view about authority rather than a careful assessment of actual earthquake risks, this would help to explain their sloppy treatment of uncertainties.

So there are examples both of alarmism and of failure to alarm by the responsible authorities, both potentially motivated by the maintenance of prestige. Could the same motivation lie behind climate alarmism? After all, what is to be gained from merely asserting that ‘climate changes’?

The Creation of Consensus via Administrative Abuse

The existence of a ‘consensus’ around core claims of global warming is often cited as a warrant for action. A recent article by Roger Pielke Jr reported the IPCC’s response to his attempts to correct biases and errors in AR4 in his field of expertise, extreme event losses. As noted at Climate Audit, he proposed four error corrections to the IPCC, all of which were refused.

Since sociological and psychological research is now regarded as worthy of a generous share of science funding, a scholarly mind asks: if failure to admit previous errors could be a strategy for building the climate consensus, what does that say about the logical correctness of the process, and what are the other strategies? Could Lewandowsky’s denigration of people who disagree be worth $1.7m of Australian Research Council approved taxpayer funds to help create climate consensus?

Wikipedia appears to be another experimental platform for consensus building. The recent comment by a disillusioned editor describes many unpleasant strategic moves executed in the name of building a consensus for the cold-fusion entries on Wikipedia.

Foremost is the failure of administrators to follow the stated rules. Could this, along with failure to admit errors and denigration of opponents, also be a consensus-creation strategy? The parallels with the IPCC are uncanny.

Some excerpts below.

Alan, do you know what “arbitration enforcement” is? Hint: it is not arbitration. Essentially, the editor threatened to ask that you be sanctioned for “wasting other editor’s time,” which, pretty much, you were. That was rude, but the cabal is not polite, it’s not their style. A functional community would educate you in what is okay and what is not. The cabal just wants you gone. *You* are the waste of time, for them, really, but they can’t say that.

Discouraging objectors – the main goal.

I remember now why I gave up in December last year. But I thought it was my turn to put in a shift or two at the coalface (or whatever).

Here is what I did on Wikipedia. I had a long-term interest in community consensus process, and when I started to edit Wikipedia in 2007, I became familiar with the policies and guidelines and was tempered in that by the mentorship of a quite outrageous editor who showed me, by demonstration, the difference or gap between policies and guidelines and actual practice. I was quite successful, and that included dealing with POV-pushers and abusive administrators, which is quite hazardous on Wikipedia. If you want to survive, don’t notice and document administrative abuse. Administrators don’t like it, *especially* if you are right. Only administrators, in practice, are long-term allowed to do that, with a few exceptions who are protected by enough administrators to survive.

Shades of the IPCC.

So if you want to affect Wikipedia content in a way that will stick, relatively speaking, you will need to become *intimately* familiar with policies. You can do almost anything in this process, except be uncivil or revert war. That is, you can make lots of mistakes, but *slowly*. What I saw you doing was making lots of edits. Andy asked you to slow down. That was a reasonable request. But I’d add, “… and listen.”

Good advice for dealing with administrators of consensus creation processes.

Instead, it appears you assumed that the position of the other editors was ridiculous. For some, perhaps. But you, yourself, didn’t show a knowledge of Reliable Source and content policies.

Lots of editors have gone down this road. It’s fairly easy to find errors and imbalance in the Wikipedia Cold fusion article. However, fixing them is not necessarily easy, there are constituencies attached to this or that, and averse to this or that. I actually took the issue of the Storms Review to WP:RSN, and obtained a judgment there that this was basically RS. Useless, because *there were no editors willing to work on the article who were not part of the pseudoskeptical faction.* By that time, I certainly couldn’t do it alone, I was WP:COI, voluntarily declared as such.

It seems you need an ally who is part of the in-crowd in order to move the consensus towards an alternative proposition.

When the community banned me, you can be sure that it was not mentioned that I had been following COI guidelines, and only working on the Talk page, except where I believed an edit would not be controversial. The same thing happened with PCarbonn and, for that matter, with Jed Rothwell. All were following COI guidelines.

Following the rules does not provide immunity.

The problem wasn’t the “bad guys,” the problem was an absence of “good guys.” There were various points where editors not with an agenda to portray cold fusion as “pathological science,” assembled, and I found that when the general committee was presented with RfCs, sanity prevailed. But that takes work, and the very work was framed by the cabal as evidence of POV-pushing. When I was finally topic banned, where was the community? There were only a collection of factional editors, plus a few “neutral editors” who took a look at discussions that they didn’t understand and judged them to be “wall of text.” Bad, in other words, and the discussion that was used as the main evidence was actually not on Wikipedia, it was on meta, where it was necessary. And where it was successful.

A better description of the real-world response to scholarship I have yet to see.

Yes, I was topic banned on Wikipedia for successfully creating a consensus on the meta wiki to delist lenr-canr.org from the global blacklist. And then the same editors as before acted, frequently, to remove links, giving the same bankrupt arguments, and nobody cares. So all that work was almost useless.

So consensus is ultimately created via administrative abuse!

Now it’s possible to see why blogs purporting to represent an authoritative consensus, such as RealClimate, SkepticalScience and LewsWorld, must delete objections:

… furiously deleting inconvenient comments that ask questions like “What are you going to do now that the removal of the fake responses shows a conclusion reverse of that of your title”?

But what is the result of administrative abuse?

That is why so many sane people have given up on Wikipedia, and because so many sane people have given up, what’s left?

There would be a way to turn this situation around, but what I’ve seen is that not enough people care. It might take two or three. Seriously.
