Renewable Energy

Which is better for the environment: renewable energies, oil, gas, coal or nuclear energy? The environmental damage caused by energy sources can be measured by their 'footprint' -- the area required to produce a specific amount of energy.

An article in Forbes lists the energy produced per unit area of major energy sources, from which I have calculated the area required to produce a specific amount of energy.

Source         W/m2    m2/W
Biofuels        0.05   20
Wind power      1.2    0.8
Solar PV        6.7    0.15
Natural Gas    27      0.04
Oil            28      0.04
Nuclear        56      0.02
LENR          100      0.01

Table 1. Relative environmental damage from power sources, in square meters destroyed per watt of power produced (m2/W).

Simple math shows that a gas or oil well has a power density at least 22 times that of a wind turbine, and so uses 22 times less area for the same power output. If damage to the environment were the only concern, oil and gas would be 22 times more friendly.

The big environmental saviors are nuclear power, which has around 1,000 times the power density of biofuels and so is around 1,000 times more environmentally friendly, and potentially LENR (low energy nuclear reactions), which could pack even more power into a smaller area due to low shielding requirements.
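The ratios quoted above can be checked with a few lines of R, using the power densities straight from Table 1:

```r
# Sketch: reproduce the area ratios in the text from the Table 1 power densities.
w_per_m2 <- c(Biofuels = 0.05, Wind = 1.2, SolarPV = 6.7,
              NaturalGas = 27, Oil = 28, Nuclear = 56, LENR = 100)
m2_per_w <- 1 / w_per_m2                           # footprint: area per watt
print(round(m2_per_w, 2))
print(w_per_m2["NaturalGas"] / w_per_m2["Wind"])   # ~22: gas vs wind
print(w_per_m2["Nuclear"] / w_per_m2["Biofuels"])  # ~1120: nuclear vs biofuels
```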

It could be argued that distributed power sources are more efficient because they are located closer to the point of use. In fact, this is not the case: a source with low power density requires more resources for transmission lines and storage, reducing its economic viability and potential to scale.

It is obvious from this basic analysis, and has been shown by experience, that renewable energies such as biofuels, wind and solar are bad for native fauna and flora.

The inevitable conclusion is that advocates of renewable energy do not care about the environment.

The progress of civilization is characterized by the utilization of ever denser energy sources, and the environment has benefited through the reservation of larger areas of land in their natural state. The inclination toward dispersed energy sources is a form of neo-Luddism -- opposition to modern technology.

Shaviv and Pielke on Climate Science in 2011

Nir Shaviv is an astrophysicist who wrote some of the more interesting studies showing the role of Cosmic Ray Flux (CRF) in climate change, now belatedly being acknowledged by the climate establishment.

He gives some advice to students here: Stay away from Climate Science until you are tenured or retired!

My point is that because climate science is so dogmatic students do risk burning themselves because of the politics, if they don’t follow the party line. Since doing bad (“alarmist”) climate science is not an option either, I advise them to do things which are not directly related to global warming. (In fact, all but one of the graduate students I had, work or worked on pure astrophysical projects). I, on the other hand, have the luxury of tenure, so I can shout the truth as loud as I want without really being hurt.

Roger Pielke Jr.'s revelation shows that GRL has given up all pretense of due process in its review of a manuscript on tropical cyclone frequency and intensity, which addressed the misrepresentation of increased damages due to climate change.

Cyclones featured among the misrepresentations made by Chief Scientist Prof. Chubb before the Senate Inquiry.

UPDATE: ACM provided a transcript from Hansard.

The Cat asks How Credible is the Chief Scientist?, and Judith Sloan suggests it is a long time since he worked as one.

I can't resist this quote, directly contradicted by Pielke and other evidence.

Mr HUSIC: What would those weather events be?
Prof. Chubb: The argument at the moment is that there will be, for example, much more intense cyclones and whatever they are called in the Northern Hemisphere, and more intense rain and flooding. There will be a lot more intense and focused events of that type and that character as the climate changes. That is where the current view is.

Sea level rise projections bias

Sea levels, recently updated with 10 new data-points, reinforce the hiatus described as a 'pothole' by Josh Willis of NASA's Jet Propulsion Laboratory, Pasadena, Calif., who says you can blame the pothole on the cycle of El Niño and La Niña in the Pacific:

This temporary transfer of large volumes of water from the oceans to the land surfaces also helps explain the large drop in global mean sea level. But they also expect the global mean sea level to begin climbing again.

Attributing the 'pothole' to a La Niña and the transfer of water from the ocean to land in Australia and the Amazon seems dubious, given that many land areas experienced reduced rainfall at the same time, as shown above.

A quadratic model of sea level indicates that deceleration is now well established and highly significant. If present conditions continue, sea level will peak between 2020 and 2050 at between 10 mm and 40 mm above present levels, and may have stopped rising already.

Reference to a 'pothole' in a long-term trend caused by short-term La Nina, while ignoring statistically significant overall deceleration, is another example of bias in climate science.

Call:
lm(formula = y ~ x + I(x^2))

Residuals:
Min 1Q Median 3Q Max
-8.53309 -2.39304 0.03078 2.45396 9.17058

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.264e+05 3.517e+04 -6.438 7.40e-10 ***
x 2.230e+02 3.513e+01 6.348 1.21e-09 ***
I(x^2) -5.490e-02 8.772e-03 -6.258 1.98e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.448 on 222 degrees of freedom
Multiple R-squared: 0.9617, Adjusted R-squared: 0.9613
F-statistic: 2786 on 2 and 222 DF, p-value: < 2.2e-16
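As a cross-check, the peak year implied by the fitted quadratic can be computed directly from the coefficients in the summary above (a sketch; the vertex of y = a + b·x + c·x² is at x = -b/(2c)):

```r
# Sketch: peak year implied by the quadratic sea-level fit, using the
# coefficients reported in the regression summary above.
b <- 2.230e+02    # coefficient on x
c2 <- -5.490e-02  # coefficient on I(x^2)
peak_year <- -b / (2 * c2)
print(peak_year)  # about 2031, inside the 2020-2050 window quoted in the text
```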

And the code:

figure5 <- function() {
  # Quadratic fit to the sea-level series SL, projected to 2050 with
  # confidence intervals
  x <- time(SL); y <- SL
  l <- lm(y ~ x + I(x^2))
  new <- data.frame(x = 1993:2050)
  pred.w.clim <- predict(l, new, interval = "confidence")
  matplot(new$x, pred.w.clim, lty = c(1, 2, 2), type = "l",
          ylab = "Sea Level", xlab = "Year",
          main = "Quadratic Projection of Sea Level Rise",
          ylim = c(-10, 100), lwd = 3, col = c(2, 2, 2))
  lines(SL)  # overlay the observed series
}

Is Bill Gates Rossi's Customer?

Rossi revealed on his blog site that The Customer is a person and not a corporation.

Andrea Rossi
September 24th, 2011 at 10:46 AM
Dear Simon Knight:
By half October we will explain exactly what follows:
1- where the 1 MW plant will be tested
2- all the (not confidential) characteristics of the 1MW plant (the complementary part is more reactors, of a new type that in the meantime we have developed)
3- possibly, who is the Customer, if the Customer will allow us to communicate his name.
The measurements will be made, as I said, by world top class scientists.
I can confirm that we are well in schedule, therefore all will happen in October.
I also confirm that in November we will start our commercial strategy.
Warm Regards,
A.R.

Which made me wonder, could the person be Bill Gates? He has previously written an article at Huff Post titled Why We Need Innovation, Not Just Insulation stating what every rational person knows about current strategies of renewables and efficiency improvements:

No amount of insulation will get us there, only innovating our way to essentially 0-carbon energy technology will do it. If we focus on just efficiency to the exclusion of innovation, or imagine that we can worry about efficiency first and worry about energy innovation later, we won't get there.

He then drops a hint about projects.

To achieve the kinds of innovations that will be required I think a distributed system of R&D with economic rewards for innovators and strong government encouragement is the key. There just isn't enough work going on today to get us to where we need to go.

Under the Labor-Green Carbon Tax legislation the renewable energy sector will receive over $13 billion in funding, including $10 billion for the establishment of the Clean Energy Finance Corporation (CEFC), which will invest in renewable energy and energy efficiency projects and technologies, and $3.2 billion for the establishment of the Australian Renewable Energy Agency, for research and development into renewable technologies.

But the dim bulbs running the country have determined that nuclear technologies, of which Rossi's is one, will be excluded from the fund.

Not that it will matter if Rossi's demo in October works, and someone like Bill Gates is The Customer. There will be an e-Cat on every desktop within 10 years.

Clean Energy Solution on Target for October 2011 Test

Here is Rossi's 1 MW plant, consisting of 52 individual E-Cats mounted in a shipping container, as reported by NyTeknik.

Above is a video tour of the 1 MW plant. A successful trial in October would prove beyond doubt a clean nuclear energy source at a tenth or less of the cost of fossil fuels, without emissions of greenhouse gases.

It will demonstrate, once again, the folly of governments trying to pick winners, such as the billions of dollars directed at renewable energy that will never deliver on its promise.

Rossi developed the technology entirely using his own money.

Compare this with the billions of dollars in renewable research and development poured down the drain by the Australian Government on behalf of the Australian taxpayer. Most scientists in the field of Low Energy Nuclear Reactions (LENR) long ago shut up shop for lack of a small fraction of that amount of money.

If successful, it will prove once again that, as with anthropogenic global warming, modern science is motivated by the quest for government grants, discourages new ideas and new theoretical models, and is ruled by powerful special interests.

See this recent letter from the DoE on cold fusion research:

Monday, September 19, 2011

Dear Mr. Owens:

This is in response to your e-mail message to Secretary Chu dated September 13, 2011 in which you asked to know where the Department of Energy stands on “cold fusion.”

In 1989, a review panel that had been charged by the Department concluded that reports of the experimental results of excess heat from calorimetric cells did not present convincing evidence that useful sources of energy will result from the phenomena attributed to “cold fusion.” To quote the panel, “Hence, we recommend against the establishment of special programs or research centers to develop cold fusion.”

In 2004, the Department organized a second review of the field and that review reached essentially the same conclusion as the 1989 review. The Department’s Office of Science does not provide any funding support for “cold fusion” research.

Al Opdenaker

Cold fusion is still laughed at by the entire scientific establishment. Of course, if the test in October turns out to be a flop, the joke will be on me. I think I can handle that. If not, then the entire scientific establishment, from the Royal Academies down, needs a close look to see whether it provides anything of value.

How ironic if the free market delivers a viable, cheap, clean energy solution at no cost to humanity, while the social democrats, whose raison d'être is to save us from the market, as the Prime Minister revealed, have with billions of taxpayer dollars delivered nothing, nada, zilch.

See background article here.

Phase Shift in Spencer's Data

It was shown here that the phase shift between total solar irradiance and global temperature is exactly one quarter of the solar cycle: 90 degrees, or about 2.75 years. This is a prediction of the accumulation theory, described here and here, which shows how solar variation can account for paleo and recent temperature change.

Phase shifts on the short-wave (SW) side of the climate system are erroneously attributed to 'thermal inertia' of the ocean and earth mass and called 'lags', or regarded as non-existent. If thermal inertia were responsible, then a larger mass would show a larger lag. In fact, an exact 90-degree shift emerges directly from the basic energy balance model, C dT/dt = F, as I will show later.
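A minimal sketch of why the shift is exactly 90 degrees: for a single sinusoidal forcing component, the energy balance model integrates to a response lagging by a quarter cycle, with the heat capacity C setting only the amplitude, not the phase.

```latex
% Integrate C dT/dt = F for a sinusoidal forcing F(t) = F_0 sin(omega t):
\[
  C\,\frac{dT}{dt} = F_0 \sin(\omega t)
  \quad\Longrightarrow\quad
  T(t) = T_0 - \frac{F_0}{C\omega}\cos(\omega t)
       = T_0 + \frac{F_0}{C\omega}\,\sin\!\left(\omega t - \frac{\pi}{2}\right).
\]
% The response lags the forcing by exactly pi/2 (2.75 years for the 11-year
% solar cycle); C appears only in the amplitude 1/(C*omega), not the phase.
```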

A 90 degree shift is also present on the long-wave (LW) at the annual time-scale using Spencer's dataset. This cannot be a coincidence, and gives an important insight into the dynamics of the climate system.

The first step is understanding how such shifts arise.

The figure above shows an impulse (black) based on a cosine function with a 2*pi period, with its scaled derivative (green) and integral (red). Time is on the x axis.

The impulse in black represents any sudden change in forcing in the atmosphere that 'causes' the derivative and integral responses (as they are derived directly from the impulse).

Note three things: (1) the peak of the derivative leads the peak of the impulse, and the peak of the integral lags it; (2) the lead and lag are exactly one quarter of the period (2*pi/4, or 1.57 radians) of the cosine impulse; and (3) the integral 'amplifies' the impulse, the mechanism responsible for high solar sensitivity in the accumulation theory.

Cross-correlation (ccf in R) of two variables gives precise information about the phase shifts, their size and significance. Above is the cross-correlation of the derivative and integral with the impulse above, with significance as blue lines. You can read off the phase shift from the first peak location.

The data from Spencer consists of satellite measurements of the short-wave and long-wave intensities at the top of the atmosphere, both for clear sky and cloudy skies. Below is the cross-correlation of each of these variables against his global temperature HadCRUT3 column.

The peaks of correlation show a three month phase shift on the LW and SW_clr components. The LW peaks are positive and the SW peaks are negative due to the orientation of flux in the dataset.

The LW peaks (LW_tot and LW_cls) are affected by the sharp peak at zero lag, probably due to fast radiant effects (magenta line, SW_clr), as shown in the similar graphic of these data by P.Solar here, mentioned in this thread at CA.

The LW and SW_clr components lead the global surface temperature. There are three possible explanations:

1. Changes in cloud cover actually do drive changes in global temperature, due to cosmic-ray flux (CRF) or other effects, or

2. The changes in cloud cover are caused by changes in global temperature, via the derivative mechanism described above, or

3. Both 1 and 2.

Spencer argues that it is impossible to distinguish between 1 and 2. Both Spencer and Lindzen consider the lags important because correlation is greatly improved (and determines whether feedback is positive or negative). Neither seems to have mentioned the 3-month phase relationships emerging from integral/derivative system dynamics.

I can't see how it is possible to perform a valid analysis without this insight.

Here is the code.

figure0 <- function() {
  # Impulse based on a cosine, with its scaled derivative and integral
  x <- 2 * pi * seq(-1, 1, by = 0.01)
  x2 <- 2 * pi * seq(-0.5, 0.5, by = 0.01)
  x1 <- c(rep(0, 50), cos(x2), rep(0, 50))

  png("impulse.png")
  dx <- as.numeric(scale(c(0, diff(x1))))  # scaled derivative
  sx <- as.numeric(scale(cumsum(x1)))      # scaled integral
  plot(x, x1, ylab = "Magnitude", ylim = c(-2, 2), lwd = 5, xlab = "Radians",
       main = "Derivative and Integral of an Impulse", type = "l")
  lines(c(-2*pi/4, -2*pi/4), c(-2, 2), col = "gray", lty = 2)  # quarter-period marks
  lines(c(0, 0), c(-2, 2), col = "gray", lty = 2)
  lines(c(2*pi/4, 2*pi/4), c(-2, 2), col = "gray", lty = 2)
  lines(x, sx, col = 2, lwd = 3)
  lines(x, dx, col = 3, lwd = 3)
  text(c(-2*pi/4, 0), c(1.5, 1.5), c("f'(t)", "f(t)=cos(t)"))
  text(2*pi/4, 1.5, expression(paste("\u222B", f(t))))
  dev.off()

  # Cross-correlations of the derivative and integral with the impulse
  png("cross.png")
  cxd <- ccf(dx, x1, lag.max = 100, plot = FALSE)
  cxs <- ccf(sx, x1, lag.max = 100, plot = FALSE)
  w <- 2 * pi * cxd$lag / 100  # convert lags to radians
  plot(w, cxs$acf, col = 2, type = "h", xlab = "Radians", ylab = "Correlation")
  lines(w, cxd$acf, col = 3, type = "h")
  lines(c(-100, 100), c(0.15, 0.15), lty = 2, col = 4)   # significance bounds
  lines(c(-100, 100), c(-0.15, -0.15), lty = 2, col = 4)
  lines(c(-100, 100), c(0, 0))
  dev.off()
}

figure3 <- function() {
  par(mfcol = c(1, 1), mar = c(4, 4, 3, 3))
  figure3.1(spencer[, 7], spencer[, 1:6], xlim = 1)
  # par(mar = c(4, 4, 0, 3))
  # figure3.1(dess[, 5], dess[, 1:4], xlim = 1)
}

figure3.1 <- function(X, data, lag = 10, xlim = 10) {
  # Cross-correlation of each SW/LW component in 'data' with temperature X
  png("impulse.png")
  plot(c(-100, 100), c(0, 0), xlim = c(-xlim, xlim), ylim = c(-0.5, 0.5),
       type = "l", xlab = "Years", ylab = "Correlation",
       main = "Cross-correlation of SW and LW with Global Temperature")
  lines(c(-100, 100), c(0.18, 0.18), lty = 2, col = 4)   # significance bounds
  lines(c(-100, 100), c(-0.18, -0.18), lty = 2, col = 4)
  lines(c(0.25, 0.25), c(-1, 1), lty = 3)                # 3-month marks
  lines(c(-0.25, -0.25), c(-1, 1), lty = 3)
  send <- tsp(data)
  labels <- colnames(data)
  t <- window(X, start = send[1], end = send[2])
  for (i in 1:dim(data)[2]) {
    cxd <- ccf(data[, i], t, lag.max = lag, plot = FALSE)
    w <- cxd$lag
    lines(w, cxd$acf, col = i + 1, lwd = 2)
    text(0.9, cxd$acf[length(w)], labels[i], col = 1, cex = 0.5)
  }
  dev.off()
}

FFT of TSI and Global Temperature

This is an application of the work-in-progress Fast Fourier Transform algorithm by Bart, coded in R, to total solar irradiance (TSI via Lean 2000) and global temperature (HadCRU). The results (PDF) show that the atmosphere is sufficiently sensitive to variations in solar insolation for these to cause recent (post-1950) warming and paleo-warming.

The mechanism, suggested by the basic energy balance model, but confirmed by the plots below, is accumulation. That is, global temperature is not only a function of the magnitude of solar anomaly, but also its duration. Small but persistent insolation above the solar constant can change global temperature over extended periods. Changes in temperature are proportional to the integral of insolation anomaly, not to insolation itself.
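The accumulation mechanism can be sketched in a few lines of R. The anomaly series and the constant k (an assumed inverse heat capacity) are illustrative only, not fitted values:

```r
# Sketch: temperature change proportional to the running integral of the
# insolation anomaly, not to the anomaly itself. Toy numbers throughout.
set.seed(1)
tsi_anom <- 0.2 + rnorm(100, sd = 0.05)  # small but persistent positive anomaly
k <- 0.01                                # assumed inverse heat capacity (1/C)
dT <- k * cumsum(tsi_anom)               # temperature responds to the integral
plot(dT, type = "l", xlab = "Time step", ylab = "Temperature change",
     main = "A small persistent anomaly accumulates into a trend")
```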

The figure below is the smoothed impulse response resulting from the Fourier analysis of TSI and GT. This is the simulated result of a single spike increase in insolation. The result is a constant change, or step, in the GT. This is indicative of a system that 'remembers' shocks, such as a 'random walk'. Because of this memory, changes in TSI are accumulated. (I am not sure why it is negative.)

Below is the Bode plot of the TSI and GT data (still a work in progress). The magnitude response shows a negative, straight trend, indicative of an accumulation amplifier. This is also consistent with the spectral plots of temperature covering paleo timescales in Figure 3 here.

Bart's analysis is going to be very useful for doing this sort of dynamic systems analysis in a very general way. Up to now I have been using spectral plots and ARMA models.

The analysis above is an indication of the robustness of the method, as it gives a different but appropriate result on a different data set. It's going to be a very useful tool in arguing that the climate system is not at all like it's made out to be.

I will post the code when it's further along.

Global Atmospheric Trends: Dessler, Spencer & Braswell

Starting the S&B story at the beginning, as did Steve McIntyre, with Dessler 2010 in Science, I'll put a new spin on the satellite data uploaded by Steve, using the accumulation theory. Although I am not familiar with the data, it turns out to be easily interpretable.

In black is the replication of Steve's Figure 1 and Dessler's 2010 Figure 2A: the scatter plot of monthly average values of ∆R_cloud (eradr) versus ∆T_s (erats), using CERES and ECMWF interim data. There is very little correlation, as noted by Steve. In fact, it is not statistically significant in the conventional sense, Science apparently adopting the new IPCC-speak qualitative standard of 'likely'.

Coefficients:

Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01751 0.04599 0.381 0.704
X 0.54351 0.36184 1.502 0.136

Residual standard error: 0.5036 on 118 degrees of freedom
Multiple R-squared: 0.01876, Adjusted R-squared: 0.01045
F-statistic: 2.256 on 1 and 118 DF, p-value: 0.1358

The points in red are the sequential differences of temperature plotted against the cloud radiance. While these have a lower slope, unlike the former they are conventionally significant, almost at the 99% CL.

Coefficients:

Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01269 0.04524 0.280 0.7796
dX 1.07071 0.42782 2.503 0.0137 *
---

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4954 on 118 degrees of freedom
Multiple R-squared: 0.0504, Adjusted R-squared: 0.04236
F-statistic: 6.263 on 1 and 118 DF, p-value: 0.01369

So why plot the sequential temperature differences and not the temperature directly? Firstly, while the autoregression coefficient (AR) of atmospheric temperature erats (using arima(dess[,5], order=c(1,0,0))) is AR = 0.65, for eradr it is AR = 0.16. This tells you that the two are different types of process: the low-AR series (eradr) behaves like a sequence of random numbers, while the high-AR series (erats) behaves like a sequential accumulation of random numbers. In different terminology, they do not cointegrate, as one can trend strongly (non-stationary) while the other stays around its mean (stationary). Nor do they necessarily correlate, though they can still causally determine each other.
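The distinction between the two process types can be illustrated with simulated series (a sketch with made-up data, not the Dessler series):

```r
# Sketch: the AR(1) coefficient separates the two process types discussed
# above. White noise has AR near 0; its cumulative sum (a random walk, i.e.
# an accumulation of random numbers) has AR near 1.
set.seed(42)
noise <- rnorm(500)
walk <- cumsum(noise)
ar_noise <- coef(arima(noise, order = c(1, 0, 0)))["ar1"]
ar_walk <- coef(arima(walk, order = c(1, 0, 0)))["ar1"]
print(c(ar_noise, ar_walk))  # near 0 and near 1 respectively
```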

Differencing temperature is explained by accumulation theory, which pays close attention to heat accumulating in the ocean. Overlooking this basic physical model of the system causes many problems; interpreting the data in terms of the physical model clears a lot of things up, as shown by the significant result above.

Above is the time-series plot of cloud radiance (black) and differenced global temperature (red) showing the relationship.

What does this say about cloud dynamics? The way to get intuition of dynamic relationships is to imagine the output from three types of input: impulse, step and periodic.

Impulse

On an impulse of radiation, the surface (and lower atmosphere) warms and then reverts. The differenced variable (like a first derivative) surges positive while temperature is rising, then surges negative while temperature is falling.

Electrical engineering buffs will appreciate this as the current-opposing behavior of an inductor. Clouds, in this view, could be compared with the electromagnetic field set up by the changing current. (The ocean heat capacity is comparable to capacitance).

The peak of the differenced pulse will lead the peak of the forcing. This shows that lag/lead relationships are not reliable indicators of the direction of causation in dynamic systems.

Step

On a step increase in radiation, the surface (and lower atmosphere) will ramp up as long as the forcing persists in accumulating heat in the ocean. The differenced variable will step-up and remain constant while temperature is rising at a constant rate.

This is a fundamentally different view of climate sensitivity, with different units. From the results above, the positive feedback from clouds is 1.1 W/m^2/K^2 and not 0.54 W/m^2/K. This means that clouds provide back-radiation (the feedback is positive) while temperature is rising, but when the temperature stops rising, the back-radiation stops too. The coefficient describes the process, not a fixed sensitivity.

I do not see how it is possible to interpret this in terms of a particular climate sensitivity. In the alternative view, cloud feedback is twice as strong as the conventional view while temperature is rising, but drops to zero when temperature is stable.
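The impulse and step cases above can be checked mechanically by differencing a toy pulse and a toy ramp-then-hold series (shapes are illustrative only):

```r
# Sketch: differencing a pulse response and a ramp ("step forcing") response.
pulse <- c(rep(0, 10), 1, 2, 3, 2, 1, rep(0, 10))      # rises, then reverts
step <- c(rep(0, 10), seq(0, 5, by = 0.5), rep(5, 4))  # ramps, then holds
print(diff(pulse))  # positive while rising, negative while falling
print(diff(step))   # constant (0.5) during the ramp, zero once it stops
```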

Periodic

Finally, a periodic forcing is phase shifted 90 degrees (as shown by the impulse example). By simple calculus, the derivative of a sine function is a cosine function.

Could this explain the approximately 4-month lag in terms of the annual cycle, a quarter of which is 12/4 = 3 months? It may explain the negative correlation found by Steve McIntyre at a 4-month lag, as a 90-degree lead at a peak produces a 90-degree lag between peak and trough.

Here is my code (you need to download the data from link above).

dess <- ts(read.csv(file = "dessler_2010.csv")[3:8], start = 2000.167, frequency = 12)

figure1 <- function(X, Y) {
  # Scatter of cloud radiance Y against temperature X (black) and
  # differenced temperature dX (red), each with its regression line
  dX <- ts(c(0, diff(X)), start = start(X)[1], frequency = frequency(X))
  fm <- lm(Y ~ X)
  fm2 <- lm(Y ~ dX)
  plot(0, 0, cex = 1, col = 2, type = "p",
       xlab = "Global Temp (black) and diffTemp (red)", ylab = "Clouds R",
       xlim = c(-0.4, 0.4), ylim = c(-2, 2))
  points(X, Y, col = 1)
  abline(fm, col = 1)
  points(dX, Y, cex = 1, col = 2)
  abline(fm2, col = 2)
  browser()  # pause to inspect the scatter before the time-series plot
  plot(Y, col = 1, ylab = "Cloud R and diff(Temp)")
  lines(dX, col = 2)
  lines(X, col = 3)
}

figure1(dess[, 5], dess[, 3])

Phase Plots of Global Temperature after Eruptions

Here are a few more phase plots of global temperature after the impulse of stratosphere-reaching eruptions: Mt Agung, El Chichón and Mt Pinatubo, in 1963, 1982 and 1991 respectively. The impulses are cooling, of course, due to the shielding of short-wave solar radiation by stratospheric aerosols. The tendency of the global temperature dynamic to oscillate around a mean is clear.

These patterns were then disrupted by large El Ninos.

The axes of the phase space are chosen to represent abstract position and momentum (in this case temperature and temperature changes). Position and momentum in a conserved system correspond to potential and kinetic energy. The appearance of a circle or a spiral is evidence of a system that conserves energy by transferring between potential (radiative imbalance in this case) and kinetic (mass transfer, convection?) so that the sum remains constant.

Sinusoidal Wave in Global Temperature

A dynamic way of looking at global temperature is to plot it in phase space, which is usually with the position on the x axis and velocity on the y axis. Below is the phase space graph of global temperature since 1996 with temperature on the x axis and change in temperature on the y axis.

The graph of position versus velocity displays an inward spiral. In dynamical systems this is described as an "attractor", and it shows that the system is trapped in a potential well from which it cannot escape, as in this animation of a damped harmonic oscillator.

Graphs of temperature versus time show that the temperature since 1996 should be seen as a sinusoidal wave that has decayed exponentially since the sudden impulse of the "El Niño of the Century" in 1997-98.
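For comparison, here is a sketch (parameters invented, not fitted to the temperature data) of the phase portrait of a damped harmonic oscillator, the pattern the temperature trace is being compared to:

```r
# Sketch: phase-space plot of a damped harmonic oscillator. The exponentially
# decaying sinusoid traces an inward spiral when plotted against its derivative.
t <- seq(0, 30, by = 0.1)
x <- exp(-0.1 * t) * cos(t)  # position: decaying oscillation
v <- c(0, diff(x)) / 0.1     # velocity: numerical derivative
plot(x, v, type = "l", xlab = "Position (temperature)",
     ylab = "Velocity (change in temperature)",
     main = "Inward spiral of a damped oscillator in phase space")
```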

The phase diagram provides another way of showing that global temperature has stopped warming. With the average temperature acting as an "attractor", the system has been trapped in a potential well for 15 years; that's as good a description of "stopped" as you will find in dynamics.