The value of tau

Admin: Posted up for Steve, with an initial response by Miklos. The slides Steve referred to are here. My bad for not telling Miklos that.

Link to TF&K08

Miskolczi theory proposes a tau (Ta if you will) significantly different from that found by at least a dozen other studies published in the peer-reviewed literature over more than a decade, as well as a number of other new relations (A_A = E_D, f = 2/3, etc.).

In the scientific processes I have been involved in myself a number of times, to resolve why one or more studies get one or more critical parameter values significantly different from most other studies, a process is entered into whereby those who are getting the significantly different values have to demonstrate why that should be so. If their explanation proves good enough, and is verified/validated, it usually results in a shift in the accepted values. This is just part of the normal process of the advancement of scientific knowledge.

If this is not the case with Miskolczi then this is no better than the IPCC. I propose the following questions:

(1) Explain why neither Slide 68 nor Slide 69 explicitly states that clear sky and all sky global means (respectively) are being dealt with, as Nick has suggested is the case (and I concur this seems the most likely interpretation).

(2) Confirm he is indeed referring to a global clear sky mean when he shows a slide (Slide 68) claiming that S_T = 90.7 W/m^2, and then explain why he simultaneously claims in the very same slide that K&T97 (known clear sky S_T ~100 W/m^2) is in error by 22.5 W/m^2!

(3) Identify the peer-reviewed publication in which the interpretation given in Slide 70 appears, claiming that the Miskolczi HARTCODE interpretation of the NOAA 60-year average gives S_T = 60.9 W/m^2, when the AGW consensus is for a global all sky S_T of ~40 W/m^2, even as recently as TF&K08 (which reviews/summarizes the findings of other radiative codes).

The bottom line is that Miskolczi is saying there is a ‘magic tau’ of magnitude 1.87. He has consistently got this by using a B anywhere from about 396 down to about 380 W/m^2, yet somehow the S_T values he gets at the same time always stay in the range of about 63 down to 58.5 W/m^2, resulting in a tau in the range of about 1.84 – 1.87.
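
Miskolczi's flux optical depth comes from the transmitted fraction of surface emission, i.e. tau = -ln(S_T/S_U). As a minimal sketch (pairing the endpoint values quoted above is my assumption), the quoted B and S_T ranges do reproduce his 1.84 – 1.87 band, while the mainstream all sky S_T of ~40 W/m^2 would give a much larger tau:

```python
import math

def tau(S_U, S_T):
    """Flux optical depth: tau = -ln(S_T / S_U) = ln(S_U / S_T)."""
    return math.log(S_U / S_T)

# Endpoints of the Miskolczi-style ranges quoted above (all W/m^2)
print(round(tau(396.0, 63.0), 2))   # about 1.84
print(round(tau(380.0, 58.5), 2))   # about 1.87

# Mainstream all-sky window value S_T ~ 40 W/m^2 instead
print(round(tau(396.0, 40.0), 2))   # about 2.29
```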

To accept Miskolczi theory as viable we need to be technically very clear on why there is always this discrepancy, whereby the Miskolczi S_T is always greater by about 20 – 25 W/m^2 than the accepted literature range of values from numerous studies, most using good radiative codes and putting great effort into correctly weighting the land and oceanic all sky values in order to derive a global mean.

We also need to understand why the Miskolczi S_T always appears to be much closer to the accepted literature range of values for net LW up, i.e. the sum of S_T and the LW IR emitted upwards by clouds. Sheer coincidence? I would hope so.

Miklos Zagoni responds:

Here are some answers to Steve.

1. I do not know which of my presentations you are talking about, but as you all know, the clear-sky g is 1/3 while the all-sky global average is about 0.4. From the numbers one is always able to figure out whether a clear or all sky calculation is displayed.

2. The slide that gives you the K&T97 22.5 W/m^2 error is calculated (as is written there) at their (reduced) H2O content. With those amounts of GHG in the air, HARTCODE says that their S_T should be 90.7. But they (as they admitted) regarded only the WIN (833-1250 cm^-1) region.

3. The AGW consensus of S_T = 40 rests on the mentioned K&T97. Please point to the locus in K&T97, or TFK08, where they give the details behind it.

Please also give me any indication of how TFK08, or anyone else, measured the atmospheric window radiation.

Thanks,
Miklos

  • http://www.ecoengineers.com Steve Short

    Not true. The AGW consensus of S_T = 40 rests on numerous papers and reviews since K&T97.

    Requested loci in T,F&K08 are as follows:

    (1) Page 6, referring to the findings of Trenberth et al. 2001; Trenberth et al. 2002; Trenberth and Stepaniak 2003a,b, 2004; Zhang et al. 2004, 2006, 2007; Gupta et al. 1999; Smith et al. 2002; Wilber et al. 2006; Wild et al. 2006.

    (2) Pages 10-11, referring to the findings of Rossow and Duenas 2004; Zhang et al. 2004; CERES: Loeb et al. 2000, 2007, 2008; Wielicki et al. 2006; Kim and Ramanathan 2008.

    (3) Table 1b

    (4) Table 2b

    (5) Figure 1.

    “Please give me also any indication how TFK08, or anyone else, measured the atmospheric window radiation.”

    TF&K08, page 6:

    “The radiative aspects have been explored in several studies by Zhang et al. (2004, 2006, 2007) based on International Satellite Cloud Climatology Project (ISCCP) cloud data and other data in an advanced radiative code. In addition, estimates of surface radiation budgets have been given by Gupta et al. (1999) and used by Smith et al. (2002) and Wilber et al. (2006)…. Many new measurements have now been made from space, notably from Clouds and the Earth’s Radiant Energy System (CERES) instruments on several platforms (Wielicki et al. 1996; 2006). Moreover there are a number of new estimates of the atmospheric energy budget possible from new atmospheric reanalyses,….”

    • Alex Harvey

      Hi Steve,

      While I regret to say that I haven’t been following this debate very closely lately, I do note your reference above to Wielicki et al. 1996; 2006 cited in TFK08 apparently on direct measurements of the atmospheric window radiation.

      As most probably know, there has been a debate recently between Richard Lindzen, who had an informal essay published at WUWT, and young Chris Colose, who rejected Lindzen’s argument on the basis of corrections to the raw data given in Wielicki et al. 2006. Lindzen then responded briefly, saying that (a) he was skeptical of the Wielicki et al. 2006 corrections and (b) a negative feedback was implied whether or not you accept the corrections.

      Anyhow, suppose that Lindzen is right and the corrections in W et al 2006 are bogus; would this help Miskolczi’s argument at all?

      • http://www.ecoengineers.com Steve Short

        Hi Alex

        I don’t think this Wielicki et al. 2006 business helps Miskolczi’s Theory at all. Lindzen is a brilliant man and his comments regarding the Wielicki et al 2006 corrections to CERES data may well be technically justified. But I have never heard of Lindzen endorsing even any small part of Miskolczi Theory.

        From the TF&K08 review Table 2b, the Net LW IR up at TOA from the CERES period March 2000 – May 2004 body of work (4 different data analysis groups) ranges from 48.5 – 72.8 W/m^2, i.e. a mean of 61.2±10.0 W/m^2 at the one standard deviation level. Similarly, the best estimate of S_U is a very tight 395.9±1.4 W/m^2.

        From the TF&K08 review Table 1b, the Net LW IR up at TOA from the ERBE period February 1985 – April 1989 body of work (5 different data analysis groups) ranges from 51.1 – 71.3 W/m^2, i.e. a mean of 60.9±8.3 W/m^2 at the one standard deviation level. Similarly, the best estimate of S_U is a very tight 394.6±2.6 W/m^2.

        These two sets of estimates for Net LW IR up at TOA over two time windows (one preceding, one following the K&T97/99 period) are clearly the same within error. The two sets of estimates for S_U over the same two time windows are also clearly the same within error.

        In addition, we know these are best estimates for global all sky Net LW IR up at TOA, NOT for S_T, as Net LW IR at TOA includes a TOA-leaving component of LW IR emitted by clouds (following release of latent heat from evapotranspiration (ET, part of the Miskolczi K term)).

        Net LW IR up at TOA is therefore by definition greater than S_T.

        The K&T97 and TF&K08 reviews both imply that LW IR emitted by clouds is ~30 W/m^2. This means that both the CERES and ERBE period best estimates of all sky S_T should be about 41 W/m^2, and that the clear sky (cloud free) estimates of S_T for both these periods should not exceed about 41±10 W/m^2, i.e. there is less than about one chance in 40 (2.5%) (assuming a binomial distribution) of a clear sky S_T exceeding 61 W/m^2. [It also means that both the CERES and ERBE period best estimates of S_U should be about 395 W/m^2. Thus the best estimate of mean tau should be about 2.27+0.27,-0.22, also indicating a mean global all sky tau is hardly likely to be as low as 1.87.]

        These data are significantly at odds with the values presented in Zagoni’s recent presentation, particularly where he showed slides for Miskolczi HARTCODE estimates of global clear sky S_T of 90.7 (Slide 68), 58.7 (Slide 69) and 60.9 (Slide 70) W/m^2 respectively. Miskolczi HARTCODE estimates of even clear sky S_T are thus at or even above the statistical upper limit of the mainstream science values (typically <5% probability of agreement with them).

        We are therefore entitled to ask: what is so different about Miskolczi's HARTCODE major parameter estimations from the mainstream findings, and why should we accept Miskolczi's values against the weight of the findings of so many other study groups over some two decades?

        • http://www.ecoengineers.com Steve Short

          Correction:

          The K&T97 and TF&K08 reviews both imply that LW IR emitted by clouds is ~30 W/m^2. This means that both the CERES and ERBE period best estimates of all sky S_T should be about 31 W/m^2, and that the clear sky (cloud free) estimates of S_T for both these periods should not exceed about 61±10 W/m^2, i.e. there is less than about one chance in 40 (2.5%) (assuming a binomial distribution) of a clear sky S_T exceeding 81 W/m^2. [It also means that both the CERES and ERBE period best estimates of S_U should be about 395 W/m^2. Thus the best estimate of clear sky mean tau should be about 1.87+0.18,-0.15, while also indicating a mean global all sky tau is hardly likely to be as low as 1.87.]

          These data are consistent with the values presented in Zagoni’s recent presentation, particularly where he showed slides for Miskolczi HARTCODE estimates of global clear sky S_T of 90.7 (Slide 68), 58.7 (Slide 69) and 60.9 (Slide 70) W/m^2 respectively. Miskolczi HARTCODE estimates of clear sky S_T are thus probably consistent with mainstream science values.

          But this still does not address the problem with the Miskolczi value for global all sky mean S_T and tau.
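
For what it's worth, the corrected clear sky error bars above follow directly from tau = ln(S_U/S_T). A sketch (I take S_U = 395 W/m^2 and clear sky S_T = 61±10 W/m^2 from the correction, which is an assumed pairing on my part):

```python
import math

def tau(S_U, S_T):
    """Flux optical depth tau = ln(S_U / S_T)."""
    return math.log(S_U / S_T)

S_U = 395.0
best  = tau(S_U, 61.0)          # central clear-sky estimate
plus  = tau(S_U, 51.0) - best   # S_T one sigma lower -> tau higher
minus = best - tau(S_U, 71.0)   # S_T one sigma higher -> tau lower
print(round(best, 2), round(plus, 2), round(minus, 2))  # ~1.87 +0.18 / -0.15
```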

  • Geoff Sherrington

    I think Miklos is saying that light intensity measurements are made by different parties, between different low and high wavelength boundaries. Such an implication might be that ultraviolet light was included by one party but not by another, or far IR at the other end.

    Steve, it would save a large amount of reference perusal if you were able to confirm that the authors you quote all use the same window, and the same as Ferenc. Would that be a large job? Your resources are far better than mine.

    Another complication is that various instruments have various sensitivities of measurement at a given wavelength. That is, each has to be calibrated to give similar results to the others at each wavelength subset. I would assume that not all measurements were made with the same design of equipment; indeed at times different operational principles would be involved.

    I’m sorry that this is not a positive post providing spectroscopic information to settle the question decisively. It is just too long since I was last a spectroscopist.
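
Geoff's point about integration limits can be quantified for an idealized case: only part of the surface's blackbody emission lies in the WIN (833-1250 cm^-1) band Miklos mentions above. A rough sketch (T = 288 K is my assumption, and this ignores actual atmospheric absorption, so it only shows how much the band limits alone matter):

```python
import math

H, C, K, SIGMA = 6.62607e-34, 2.99792e8, 1.38065e-23, 5.6704e-8

def planck_nu(nu, T):
    """Blackbody spectral radiance per wavenumber nu (nu in m^-1)."""
    return 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (K * T))

def band_flux(T, nu1_cm, nu2_cm, n=20000):
    """Hemispheric blackbody flux (W/m^2) between nu1 and nu2 (cm^-1),
    by trapezoid-rule integration of the Planck function."""
    nu1, nu2 = nu1_cm * 100.0, nu2_cm * 100.0   # cm^-1 -> m^-1
    dnu = (nu2 - nu1) / n
    s = 0.5 * (planck_nu(nu1, T) + planck_nu(nu2, T))
    for i in range(1, n):
        s += planck_nu(nu1 + i * dnu, T)
    return math.pi * s * dnu

T = 288.0
total  = band_flux(T, 1.0, 5000.0)     # essentially the whole LW spectrum
window = band_flux(T, 833.0, 1250.0)   # the WIN region only
print(round(total, 1))           # should come out near sigma*T^4 ~ 390 W/m^2
print(round(window / total, 2))  # the WIN band holds only about a quarter
```

So two parties quoting "window" fluxes over different band limits can legitimately differ by tens of W/m^2 before instrument calibration even enters into it.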

  • cohenite

    Steve, like Alex I too am interested in the Wong, Wielicki revision, and I note RL’s response to Watts which Alex posted at Colose’s blog. It may interest you to know that RL was well aware of the revision before he wrote the Watts thread, which I think doesn’t reflect well on Colose. In respect of Miskolczi, and the fact that despite the revision the OLR figures are still capable of sustaining a -ve feedback conclusion, is it not crucial to have some access to accurate upper water vapor data? I mentioned the NCEP and NOAA data to Nick recently and he was dismissive of it on the basis that it was patchy, preferring the modelled conclusions of Dessler, Soden etc.; but if upper level water is declining then that must be a feather in M’s cap. Speaking of feathers, you and anyone else for that matter may be interested in this:

    “I await the verdict of science. In the meantime – I will pay $100 to the first person who can balance Miskolczi’s Equation No. 7 to any reasonable accuracy using any published Earth radiative budget.

    SU – (F0 + Po) + ED – EU = OLR

    Where: SU is the surface upward radiative flux (= Sg in Miskolczi’s Figure 1)

    F0 is the net incoming short wave radiation (incoming solar radiation less the reflected component)

    PO is the friction heat from wind and waves and heat from the centre of the earth (hint – very minor)

    ED is the downward radiative flux from the atmosphere

    EU is the upward radiative flux from the atmosphere and

    OLR is the outgoing long wave radiation”
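
As a sanity check on the challenge, one can plug round TF&K08-style budget numbers into Eq. 7 (these particular assignments, especially taking E_U = OLR - S_T, are my assumptions, not Ellison's):

```python
# All values in W/m^2, assumed from a TF&K08-style global all-sky budget
S_U = 396.0       # surface upward LW radiative flux
F0  = 239.0       # net incoming (absorbed) solar
P0  = 0.0         # friction/geothermal heat (negligible, per the hint)
E_D = 333.0       # downward LW flux from the atmosphere (back radiation)
OLR = 239.0       # outgoing LW at TOA
S_T = 40.0        # window radiation reaching TOA directly
E_U = OLR - S_T   # upward LW emitted by the atmosphere itself

lhs = S_U - (F0 + P0) + E_D - E_U
print(lhs, OLR, lhs - OLR)  # left side overshoots OLR by ~50 W/m^2
```

On these numbers the left side misses OLR by tens of W/m^2, which at least illustrates why the $100 has gone unclaimed.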

    • Nick Stokes

      I don’t have much enthusiasm for these WW&W type haggles, but I’m puzzled by:
      “RL was well aware of the revision before he wrote the Watts thread, which I think doesn’t reflect well on Colose”
      Surely if anything it reflects badly on RL – the complaint is that it is misleading, and you’re defending him by saying that he knew it was misleading. The defence that he doesn’t like the correction is lame. He quoted the results based on W&W’s authority. If they’ve retracted, then the results don’t have that authority – in fact, they don’t have anyone’s.

      On Eq 7, I’ve always thought it was a complete shambles. Not only has there never been a shred of justification offered, but as an apparent energy balance equation, the directions are wrong. ED and EU are fluxes at the opposite sides of the atmosphere and both outward. As flux summations on the atmosphere itself, they should be added, and I can’t see what other balance they could be part of. I too would be interested to see any attempt to balance it with any real figures.

  • cohenite

    I should have put that this largess is being offered by one Robert Indigo Ellison, not me; quite frankly I would be prepared to put up a lot more, but don’t quote me.

  • cohenite

    Nick, I’m not sure RL has been duplicitous; here is RL’s follow-up from WUWT:

    UPDATE3: I received this email today (4/10) from Dr. Lindzen. My sincere thanks for his response.

    Dear Anthony,

    The paper was sent out for comments, and the comments (even those from “realclimate”) are appreciated. In fact, the reduction of the difference in OLR between the 80’s and 90’s due to orbital decay seems to me to be largely correct. However, the reduction in Wong, Wielicki et al (2006) of the difference in the spikes of OLR between observations and models cannot be attributed to orbital decay, and seem to me to be questionable. Nevertheless, the differences that remain still imply negative feedbacks. We are proceeding to redo the analysis of satellite data in order to better understand what went into these analyses. The matter of net differences between the 80’s and 90’s is an interesting question. Given enough time, the radiative balance is reestablished and the anomalies can be wiped out. The time it takes for this to happen depends on climate sensitivity with adjustments occurring more rapidly when sensitivity is less. However, for the spikes, the time scales are short enough to preclude adjustment except for very low sensitivity.

    That said, it has become standard in climate science that data in contradiction to alarmism is inevitably ‘corrected’ to bring it closer to alarming models. None of us would argue that this data is perfect, and the corrections are often plausible. What is implausible is that the ‘corrections’ should always bring the data closer to models.

    Best wishes,

    Dick

    Now, RL was aware of the amendments in 2007 but uses the originals in the WUWT post. The reason he does this is that he appears to disagree with the reason for the amendments, namely orbital decay; if, as he states, the amendments are not well grounded, then he is justified in continuing to use the originals. I don’t know; the OLR record is a crucial one and would assist M’s 1.87. Now back to work on eqn 7 and that vast prize of $100. What’s the exchange rate again?

  • David L. Hagen

    Published paper:
    “Earth’s Global Energy Budget”
    by Kevin E. Trenberth, John T. Fasullo, and Jeffrey Kiehl
    Bulletin of the American Meteorological Society, March 2009, pp. 311-324

    Miklos
    Would welcome your comments on the changes from 1997 to 2009 relative to Miskolczi’s theory.

    • http://www.ecoengineers.com Steve Short

      David Hagen:

      “Miklos
      Would welcome your comments on the changes from 1997 to 2009 relative to Miskolczi’s theory.”

      Well, given that:

      (1) T,F&K, now 2009, has been out in draft form since early 2008 (repeatedly referred to above in this thread as T,F&K08) and may be taken as providing a reasonably comprehensive review of the body of modern literature on the ‘consensual’ all sky global atmospheric energy balance over the last 20 years; and

      (2) The whole purpose of this thread, which David established, was to obtain a frank response from Miklos Zagoni to discrepancies Nick and I had identified between Miskolczi’s all sky global S_T and all sky global tau (not to be confused with a confusing range of ‘clear sky’ equivalents presented by Miskolczi and Zagoni in recent years) and the ‘consensual’ values; and

      (3) Since 26 April, when David had an initial response from Zagoni which (a) avoided more questions than it answered, (b) made the erroneous claim that all current global energy balance data rest only on K&T97 (!), and (c) asked for specific locations in T,F&K08 where the accepted ‘consensual’ S_T = 40 W/m^2 could be identified (to which I replied giving precise details), we have had no further response from Zagoni, despite him issuing a YouTube video which repeated the same old claims that appeared in his Newcastle presentation,

      I don’t fancy your chances.

      Despite the terrible attractiveness of aspects of Miskolczi theory to me (and evidently some other climate change sceptics), I have to admit that to this day it still appears to contain significant aspects of old-fashioned ‘smoke and mirrors’ obfuscation, and those who cannot or will not recognise that are themselves in denial…..

      The proof of the pudding is in the eating. If this theory had the ability to fly it would have been re-presented at the 2nd Heartland Conference and would to this day be rising and rising, not sinking and sinking, perpetuated only by Zagoni’s promotion (noting Miskolczi himself has retired back into petulant silence, as is his wont).

      • jae

        “noting Miskolczi himself has retired back into petulant silence, as is his wont”

        That is why I gave up on M’s stuff, until such time as the author comes to his OWN rescue.

    • DG

      David,
      In the Trenberth et al. paper, the net radiative imbalance is listed as 0.9 W/m^2, which is 0.05 W/m^2 more than Hansen et al. 2005, based on 1993-2003 data.

      From what I can tell, the bone of contention is OHC, which has much uncertainty even with the ARGO system in place. OHC has more variability than SST!! There is not much agreement in this field. Levitus 2009 confirms this.

      Wouldn’t a time period from 2003–2008 be more informative for understanding the current state of Earth’s energy budget, since, regardless of the uncertainty, there has arguably been a reduction in OHC since 2003? Why 2000–2004? It doesn’t make sense.

  • David L. Hagen

    Published paper: “Earth’s Global Energy Budget” by Kevin E. Trenberth, John T. Fasullo, and Jeffrey Kiehl, Bulletin of the American Meteorological Society, March 2009, 311–324.

    Miklos, would welcome your comments on the changes from 1997 to 2009 relative to Miskolczi’s theory.


  • cohenite

    Steve; given the “terrible attractiveness” of certain aspects of Miskolczi which parts are salvageable? And by that I mean not only those parts which have theoretical coherence with [any] data but those parts which could be verified by further empirical studies?

    • http://www.ecoengineers.com Steve Short

      Hi Anthony

      What I think may be salvageable from Miskolczi is the so-called virial rule S_U = 2E_U. I can find no refutation of that empirical fact in the modern literature. Clearly this fact (?) can be verified by further empirical studies. If so, this may save a small bit of the edifice Miskolczi constructed.

      Other than that I am now pessimistic as I agree with Nick that Eqn 7 is a complete shambles.

      If only Miskolczi had had the brains to follow through with the consequences that the real all sky global S_T is down around 30 – 40 W/m^2 (giving a tau of around 2.5), i.e. not the ~60 W/m^2 he (for some utterly obscure reason) erroneously insists it is, and that his K parameter is not just something there to pay lip service to latent heat transfer to the clouds, and hence by definition MUST lead to some LW IR departing at TOA that is distinctly separate from S_T, then something might have come of all this.

      • Nick Stokes

        I take M’s “empirical” results with a big grain of salt, because he just doesn’t describe properly what he does.

        But even so, his empirical correlation of S_U and E_U is surprisingly poor. The points are quite scattered. And the regression line that he has obtained is not S_U=2E_U. It doesn’t pass through the origin.


  • davids99us

    Sorry, I have been off air finishing a paper that I just sent off. It seems to me that the G = 0.33 value that is linked to tau etc. would be a very strange coincidence if it took that value purely by chance. If the standard theory says it is chance, and M says no, it is not chance, there are constraints, then well and good. Eqn 7 has always been my major concern, as it is a conditional equality where the conditions for equality are not specified. Z says it holds where G is maximally developed, but for that to be true you have to accept the optimality calculations that assume a constant ground temperature.

    I like Steve’s approach as the devil is in the details, but I am afraid I don’t have the time to go into them in detail.


  • cohenite

    Thanks Steve; a couple of queries; firstly, is eqn 7 valid in any way for clear sky values? The reason I ask this is your earlier comment that M’s clear sky mean tau of 1.87 is reasonable. The problem is with the all-sky values; that is, clouds. In this respect I note you say the K value must contribute to the OLR in a way which is separate from the window, S_T. Mike Hammer has a paper which looks at this; he states:

    “The implication is that thermal energy from the surface can escape to space in only two ways. First, by surface emission escaping directly to space at wavelengths which the greenhouse gases do not absorb. Second, by emission from the tropopause at wavelengths corresponding to the water vapour absorption/emission lines”

    If high water/cloud is declining – and this is a controversial point, with Dessler and crew all modeling that it isn’t, while casting aspersions on the NCEP data, especially pre-1973 data [and ignoring the recent Paltridge et al paper] – then that addition to S_T will be a result of the decline in high cloud; if S_T plus the extra OLR at water wavelengths can achieve a figure of ~60 W/m^2, would that then give the mean tau for all-sky a fighting chance of being 1.87 as well?
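    The arithmetic in the question is quick to check. A minimal sketch (using S_U = 396 W/m^2 as elsewhere in the thread; the function name is mine):

    ```python
    import math

    S_U = 396.0  # global mean surface LW emission, W/m^2

    def tau(S_T, S_U=S_U):
        """Effective LW optical depth implied by a transmitted window flux S_T."""
        return -math.log(S_T / S_U)

    print(round(tau(61), 2))  # 1.87 -- the clear-sky value quoted in the thread
    print(round(tau(40), 2))  # 2.29 -- the 'consensual' all-sky window ~40 W/m^2
    print(round(tau(60), 2))  # 1.89 -- the ~60 W/m^2 combined figure asked about
    ```

    So a combined ~60 W/m^2 would indeed put the all-sky tau within a couple of hundredths of 1.87.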

    • http://www.ecoengineers.com Steve Short

      Hi Anthony. I hope my last two posts adequately answer your questions. Do you have the references for (or pdf copies of) the recent Mike Hammer and Paltridge et al. papers? Thanks.


  • pochas

    I’d like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchhoff’s Law” equivalence of A_A and E_D is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.

    To include the effect of clouds, he models them as though they are all at 2 km altitude and effectively transparent to IR. “Clouds at around 2 km altitude have minimal effect on the LW energy balance and seem to regulate the SW absorption of the system by adjusting the effective cloud cover beta.” (M, p.19) That is, more surface OLR -> more low level clouds -> cooling, i.e., negative feedback. This is certainly not unheard of in the annals of climatology.

    • Nick Stokes

      What are “the proper boundary conditions for IR radiative calculations”?

      “it eliminates the surface temperature discontinuity” What discontinuity? How?

    • http://www.ecoengineers.com Steve Short

      “I’d like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The ‘Atmospheric Kirchhoff’s Law’ equivalence of A_A and E_D is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.”

      This is, of course, complete and utter drivel. None of these assertions has been proven anywhere in the total literature, including in M&M04 and M07, because the global all sky mean S_T has never been proven to be ~60 W/m^2. If anything, in excess of about 20 papers show the global all sky mean S_T to be about 30 – 40 W/m^2, as I have comprehensively shown above (e.g. ERBE and CERES averages around 31±10 W/m^2).

      What Miskolczi (and Zagoni) has done is to try to fool people who don’t read the literature into thinking that data from various clear sky studies can somehow be generalized to the all sky situation, by wrapping into his S_T term the fraction of LW IR emission (at water emission lines) from the tops of clouds – deriving originally from surface evapotranspiration (ET) and the resulting release of latent heat – which escapes the tropopause and contributes to OLR. Put simply, this was a mistake.

      In the average case (global all sky; cloud cover ~60%) this emission is ~30 W/m^2, which is about 30/80 = 0.375 of the total latent heat release – a proportion which is governed by relatively simple geometric considerations and hence remains a relatively constant fraction of the (variable) ET.

      In the clear sky situation there are no clouds and negligible ET, and hence the poor neglected Miskolczi K term reduces from ~97 W/m^2 (17 for dry thermals + 80 for ET) to ~17 W/m^2. With no clouds to block IR transmission, S_T rises from ~31±10 W/m^2 (ERBE and CERES studies) to the clear sky ~61±10 W/m^2. The error bars I quote here are my approximations to one standard deviation.

      Obviously it is possible to imagine a continuum of cloud covers from 100% through 60% (global average) to 0% (clear sky) because, hey, that is what happens!

      I fail to see why we should imagine only the cloud cover range from 0% to 60%, which is what Miskolczi would have us do, even fudging values of S_T near the global mean cloud cover of 60% as well!

      In my view, at 100% cloud cover ET rises to about (100/60) x 80 = 133 W/m^2 and hence the emission from clouds which escapes the tropopause and contributes to OLR should be about 0.375 x 133 = 50 W/m^2.

      Under these circumstances S_T should decline to ~31 – 10 – 10 = 11 W/m^2 and hence tau ~ -ln(11/396) = 3.58!

      However, in compensation for the much reduced S_T, the sum of the emission from clouds which contributes to OLR PLUS the S_T which contributes to OLR thus = 50 + 11 = 61 W/m^2 i.e. it remains constant!

      So, I now give you not Miskolczi Theory but basic Short Theory which suggests the following:

      (1) The LW IR tau is NOT constant (and there is no reason in the wide world why it should be) but varies from a 100% cloud cover situation value of ~ 3.58 all the way down to a 0% cloud cover situation value of ~ – ln(61/396) = 1.87;

      (2) As S_T reduces with increasing cloud cover, the fraction of ET which is emitted from the tops of clouds and escapes the tropopause (call the resulting flux ET_U) remains relatively constant @ ~0.375 = 3/8; and hence

      (3) the sum of the LW IR contributing to OLR which is emitted both from the surface (S_T) and from the tops of the clouds (ET_U) remains constant @ ~61 W/m^2, and hence remains a constant ~25% of OLR.

      (4) There is no reason why A_A should = E_D but the remainder of OLR – (ET_U + S_T) = ‘the real E_U’ clearly also remains ~75% of OLR. In the next instalment (we all gotta eat) I’ll dissect this ‘real E_U’.

      • http://www.ecoengineers.com Steve Short

        Something very roughly like the following (assuming constant F, S_U and OLR across all cloud covers – which is not strictly true of course):

        Assumptions:
        S_U = 396 = constant
        ET = Evapotranspiration
        ET_U = LW IR emitted to OLR by clouds (as above) = 0.375ET
        OLR – (ET_U + S_T) = ‘the real E_U’ denoted rE_U
        Old E_U = ET_U + rE_U by definition
        A_A = S_U – S_T (by definition and as per Miskolczi)
        DT = Dry Thermals ~ 17 @ 60% cloud cover
        K = ET + DT (by definition and as per Miskolczi)
        F = absorbed SW IR (as per Miskolczi)
        E_D ~ 0.625(ET + DT) + 0.5F + 0.625A_A on the grounds: (a) SW IR absorbed throughout the entire atmosphere but (b) LW IR absorbed below the clouds.

        Cloud%, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, Old E_U
        100, 3.58, 11, 133, 50, 0, 178, 385, 362, 228
        80, 2.94, 21, 107, 40, 8, 178, 375, 345, 218
        60, 2.55, 31, 80, 30, 17, 178, 365, 328, 208
        40, 2.27, 41, 53, 20, 23, 178, 355, 308, 198
        20, 2.05, 51, 27, 10, 29, 178, 345, 290, 188
        0, 1.87, 61, 0, 0, 34, 178, 335, 270, 178

        Note: S_U does not = 2 x old E_U except around 40% cloud cover.

        A_A does not = E_D (not required).

        LW IR Tau is not constant (not required).

        LW IR homeostasis arises through the constancy of the sum (ET_U + S_T), the constancy of rE_U, and the proportionality of OLR to incoming F_o.

        This approach unifies the roles of the important K and F terms (which Miskolczi essentially ignored) into the all sky global energy balance/framework.

        It shows how and why they are critical to a concept of low CO2 sensitivity ‘homeostasis’ applying across the full range of cloud covers (which of course applies naturally), in line with Lindzen’s concept of negative cloud forcing (iris).

        At least for LW IR and the role of ET and clouds, this is the core issue which IMO Miskolczi singularly failed to address (despite a massive smoke screen).

        Miskolczi’s so-called ‘constant tau’, so-called elimination of the surface/atmosphere temperature discontinuity and the so-called Atmospheric Kirchoff Law were all irrelevant distractions.
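        Roughly, the arithmetic behind the table above runs as follows. A sketch only; note that F (the absorbed SW) was not stated explicitly above, and F = 78 W/m^2 is assumed here because it reproduces the tabled E_D values:

        ```python
        import math

        # Sketch of the cloud-cover bookkeeping in the table above.
        # F (absorbed SW) is an assumption: 78 W/m^2 reproduces the E_D column.
        S_U, OLR, F = 396.0, 239.0, 78.0    # W/m^2; OLR = rE_U + (ET_U + S_T) = 178 + 61
        DT_BY_CLOUD = {100: 0, 80: 8, 60: 17, 40: 23, 20: 29, 0: 34}  # dry thermals, as tabled

        def row(cloud):
            S_T = 31 + (60 - cloud) / 2     # window: 11 @ 100% cloud ... 61 clear sky
            tau = -math.log(S_T / S_U)      # LW optical depth, NOT constant
            ET = cloud / 60 * 80            # evapotranspiration, scaled from 80 @ 60%
            ET_U = 0.375 * ET               # 3/8 of latent heat escapes via cloud tops
            rE_U = OLR - (ET_U + S_T)       # 'the real E_U' (constant 178)
            A_A = S_U - S_T
            E_D = 0.625 * (ET + DT_BY_CLOUD[cloud]) + 0.5 * F + 0.625 * A_A
            return tau, S_T, ET_U, rE_U, A_A, E_D, ET_U + rE_U   # last = old E_U

        for cloud in (100, 80, 60, 40, 20, 0):
            tau, S_T, ET_U, rE_U, A_A, E_D, old_E_U = row(cloud)
            print(f"{cloud:3d}%: tau={tau:.2f} S_T={S_T:.0f} ET_U={ET_U:.0f} "
                  f"rE_U={rE_U:.0f} A_A={A_A:.0f} E_D={E_D:.0f} oldE_U={old_E_U:.0f}")
        ```

        The 60% row reproduces the ‘consensual’ global mean case; the 0% row recovers the clear sky tau of 1.87.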

    • Alex Harvey

      Hi Pochas,

      If you were able to support this claim with reference to the historical literature of radiative transfer it would greatly help Ferenc’s case.

      Unfortunately I have reviewed the literature in the hope that I could bolster this case but found instead that all the evidence goes against the Miskolczi storyline.

      The issue as I see it is that Ferenc has written (M07, p. 13):

      “…There were several attempts to resolve the above deficiencies by developing simple semi-empirical spectral models, see for example Weaver and Ramanathan (1995), but the fundamental theoretical problem was never resolved. The source of this inconsistency can be traced back to several decades ago, when the semi-infinite solution was first used to solve bounded atmosphere problems. About 80 years ago Milne stated: “Assumption of infinite thickness involves little or no loss of generality”, and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…”

      The problem with all this is that the temperature discontinuity originated earlier in the work of Robert Emden in 1913 so it obviously can’t really be the result of any error that either Milne or Eddington made later.

      Emden, R., 1913: Über Strahlungsgleichgewicht und atmosphärische Strahlung. Sitz. d. Bayerische Akad. d. Wiss., Math. Phys. Klasse, 55.

      A partial translation and early commentary for this can be found in Bateman 1916 here: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044-08-0450.pdf

      Now throughout history it seems that temperature discontinuity has been derived by many people in many situations but in all instances that I have been able to find it is always derived from first principles with an acknowledgement to Emden as the first to discover it.

      One example is the astrophysicist Jeremiah Ostriker, who studied the problem in the 1960s.

      Ostriker, J. P. [1963], Radiative Transfer in a Finite Gray Atmosphere, Astrophysical Journal, 138, 281-290, here: http://adsabs.harvard.edu/full/1963ApJ...138..281O

      On p. 284 he concludes:

      “…we see that there will be a discontinuity in the temperature between the lowest layer of the atmosphere (T1) and the ground (Tb). This discontinuity, though somewhat reduced, persists in higher approximations and in the exact solution; in an early paper on a similar problem, Emden (1913) found the same type of discontinuity.”

      Now it seems perfectly clear that when Ostriker discusses the work of Eddington he knows perfectly well that he’s dealing with a semi-infinite approximation — yet he still derives the temperature discontinuity by another route.

      So the question is what has Ostriker done wrong then? He clearly knows the atmosphere is bounded, he’s clearly not following Milne at all (he doesn’t even cite Milne), his work follows from first principles, he seems to know to keep that exponential term, and yet he derives a temperature discontinuity.

      If people still want to defend the Miskolczi theory, someone has got to explain this to us.
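      For reference, the discontinuity in question can be written down in a few lines in the standard gray Eddington (two-stream) model; this is a sketch only, as conventions vary between authors. With net upward flux F (= OLR) and optical depth tau measured down from the top:

      ```latex
      \[
        \sigma T^4(\tau) \;=\; \frac{F}{2}\Bigl(1 + \frac{3\tau}{2}\Bigr)
        \qquad \text{(radiative-equilibrium profile)}
      \]
      \[
        \sigma T_g^4 \;=\; \sigma T^4(\tau_s) + \frac{F}{2}
        \qquad\Longrightarrow\qquad
        \sigma T_g^4 - \sigma T^4(\tau_s) \;=\; \frac{F}{2}
      \]
      ```

      In this approximation the jump F/2 is independent of the total optical depth tau_s, consistent with Ostriker’s remark that the discontinuity persists at any optical depth.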

      • JamesG

        Well, that certainly answered Nick Stokes’ question about “what discontinuity”, at least. I trust he’ll stop repeating it now.

        • Nick Stokes

          No such luck. I jumped on Pochas because he said “the surface temperature discontinuity that could directly result in modeling errors”, and I’ve been saying over and over: this has nothing to do with modelling. No-one uses it. People here have very fuzzy ideas on this. The first-order ODE approximation that leads to this “discontinuity” is elementary. It’s now derived as a reduction from more exact theory, but it’s likely that people thought of it independently in the 19th century. I’m not surprised at Alex’s discovery. It’s just what you get if you assume the transport is a constant energy stream and entirely radiative. There’s nothing semi-infinite about it. It just works from the top down, until it doesn’t.

          As I’ve said over and over, the “discontinuity” arises because you only have one free parameter, eg the constant flux, and you have to match a boundary condition at the top of the atmosphere, where the assumption really is true. Thereafter, the approx works as long as the condition holds. It breaks down when other fluxes become important.

          FM’s variant is nonsense. You can’t force it to satisfy another condition without sacrificing the one that is really needed, at TOA. Which he does.

          • Alex Harvey

            Nick,

            I am not sure what I’ve discovered to be honest.

            I should have added that Milne 1922 contains a discussion of Emden 1913, and Milne claims in there that he only became aware of Emden’s paper after he’d already finished a draft of his own. However, Milne doesn’t seem to discuss Emden’s temperature discontinuity. So it is not impossible that Milne merely carried forward an earlier mistake made by Emden or even Schwarzschild.

            You may be interested to read this post here:

            http://complexclimate.blogspot.com/2008/09/even-more-complex-answer.html

            Quote: “But why did their simple model not work? The reason is that a mistake was made by a giant of astrophysics Robert Emden. In 1913 he applied the equations for internal solar radiation derived by his brother in law, another giant of astrophysics Karl Schwarzschild, to the earth’s atmosphere. The radiation scheme is based on Schwarzschild’s equation. However, we have a planetary atmosphere not a stellar one!”

            Sounds like the familiar Miskolczi storyline with Milne replaced by Emden?

            So it’s not impossible that I’ve discovered nothing other than a minor historical inaccuracy. I suppose it’s not impossible that Ostriker is likewise following Emden in making an error.

            If so, someone who really understands the theory would be able to respond to this and say what Ostriker did wrong, or why it is not relevant.

          • Nick Stokes

            Well, I guess you have two storylines here, and need to decide which is more believable. One is that three giants of astrophysics somehow made a gross error in an elementary approximation, which was then taught to thousands of students over the years, and remained unnoticed until genius FM came along in 2007 and explained it all with his trademark clarity; or:
            The elementary approximation is just what it is, as explained so long ago, and was misunderstood by FM in 2007, as he has misunderstood so much other physics in his paper. A few bloggers in 2009, who don’t understand the alleged error at all, just won’t let it go.

          • Pat Cassen

            Hi Alex – Nice job of tracing back the literature (I didn’t know that Emden had worked this problem – he was famous for many other things).

            Now, having gone through these papers, you don’t really have to decide between Nick’s storylines; you can see for yourself if Miskolczi’s “solution” satisfies the TOA boundary conditions. [To do this, you need the relation between B and the upward and downward fluxes at tau = 0, in the Eddington approximation (upward flux = OLR; downward = 0). I imagine that these relations are given in Milne, Emden or Ostriker, or certainly in standard textbooks.] You will find that Nick is right; M’s solution fails.

            I once posted here something to the effect that nothing is ever settled on blogs. Perhaps, thanks to Nick, Steve Short and yourself, I will be proved wrong. I would be delighted if that were the case. (Thanks, of course, to our host, also.)

          • Jan Pompe

            Pat, what are the boundary conditions for a boundless integral?

            What are the boundary conditions for a (semi-infinite) Laplace Transform?

            Then kindly tell us why climatologists have found three: one for TOA and one for the surface.

          • Jan Pompe

            Correction.

            that’s two for the surface.

            Also tell us how, as you seem to imply, M’s boundary condition at TOA for tau=0 differs from the rest?

  • pochas

    I'd like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff's Law” eqivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance. To include the effect of clouds, he models them as though they are all at 2 km altitude and effectively transparent to IR. “Clouds at around 2 km altitude have minimal effect on the LW energy balance and seem to regulate the SW absorption of the system by adjusting the effective cloud cover beta.” (M, p.19) That is, more surface OLR -> more low level clouds -> cooling, i.e., negative feedback. This is certainly not unheard of in the annals of climatology.

  • Nick Stokes

    I take M's “empirical” results with a big grain of salt, because he just doesn't describe properly what he does.But even so, his empirical correlation of S_U and E_U is surprisingly poor. The points are quite scattered. And the regression line that he has obtained is not S_U=2E_U. It doesn't pass through the origin.

  • Nick Stokes

    What are “the proper boundary conditions for IR radiative calculations”?”it eliminates the surface temperature discontinuity” What discontinuity? How?

  • http://www.ecoengineers.com Steve Short

    “I'd like to comment on what I believe M has accomplished. He has identified the proper boundary conditions for IR radiative calculations. The “Atmospheric Kirchoff's Law” eqivalence of Aa and Ed is new and it eliminates the surface temperature discontinuity that could directly result in modeling errors if not included. This is, or should be considered, an advance.”This, of course, complete and utter drivel. None of these assertions have been proven anywhere in the total literature, including in M&M04 and M07 because the global all sky mean S_T has never been proven to be ~60 W/m^2. If anything, in excess of about 20 papers show the global all sky mean S_T to be about 30 – 40 W/m^2 as I have comprehensively shown above (e.g. ERBE and CERES averages around 31±10 W/m^2).. What Miskolczi (and Zagoni) has done is to try to fool people who don't read the literature that data from various clear sky studies can somehow be generalized to the all sky situation by completely wrapping the fraction of LW IR emission (@ water emission lines) from the tops of clouds (deriving originally from surface evapotranspiration (ET) and resulting from release of latent heat) which escapes the tropopause and contributes to OLR into his S_T term. Put simply this was a mistake. In the average (global all sky; cloud cover ~60%) ) this emission is ~30 W/m^2 and is about 30/80 = 0.375 of the total latent heat release – a proportion which is governed by relatively simple geometric considerations and hence remains a relatively constant fraction of the (variable) ET.In the clear sky situation there is no clouds, negligible ET and hence the poor neglected Miskolczi K term reduces from ~97 W/m^2 (17 for dry thermals + 80 for ET) to 17 W/m^2. There are no clouds to block IR transmission so S_T rises from ~31±10 W/m^2 (ERBE and CERES studies) to the clear sky ~61±10 W/m^2. 
The error bars I quote here are my approximations to the one standard deviations.Obviously it is possible to imagine a continuum of cloud covers from 100% through 60% (global average) to 0% (clear sky) because, hey, that is what happens!I fail to see why we should only imagine the cloud cover range from 0% to 60% which Miskolczi would both have us do and even fudge values of S_T near the global mean cloud cover of 60% as well! In my view, at 100% cloud cover ET rises to about (100/60) x 80 = 133 W/m^2 and hence the emission from clouds which escapes the tropopause and contributes to OLR should be about 0.375 x 133 = 50 W/m^2. Under these circumstances S_T should decline to ~31 – 10 – 10 = 11 W/m^2 and hence tau ~ -ln(11/396) = 3.58! However, in compensation for the much reduced S_T, the sum of the emission from clouds which contributes to OLR PLUS the S_T which contributes to OLR thus = 50 + 11 = 61 W/m^2 i.e. it remains constant!So, I now give you not Miskolczi Theory but basic Short Theory which suggests the following: (1) The LW IR tau is NOT constant (and there is no reason in the wide world why it should be) but varies from a 100% cloud cover situation value of ~ 3.58 all the way down to a 0% cloud cover situation value of ~ – ln(61/396) = 1.87;(2) As S_T reduces with increasing cloud cover, the fraction of ET which is emitted from the tops of clouds and escapes the tropopause (let's call that fraction ET_U) remains relatively constant @ ~0.375 = 3/8; and hence(3) the sum of amount of LW IR contributing to OLR which is emitted both from the surface (S_T) and from the tops of the clouds (ET_U) remains constant @ ~61 W/m^2 and hence remains a constant ~25% of OLR.(4) There is no reason why A_A should = E_D but the remainder of OLR – (ET_U + S_T) = 'the real E_U' clearly also remains ~75% of OLR. In the next instalment (we all gotta eat) I'll dissect this 'real E_U'.

  • http://www.ecoengineers.com Steve Short

    Something very roughly like the following (assuming constant F, S_U and OLR across all cloud covers – which is not strictly true of course):Assumptions:S_U = 396 = constantET = EvapotranspirationET_U = LW IR emitted to OLR by clouds (as above) = 0.375ETOLR – (ET_U + S_T) = 'the real E_U' denoted rE_UOld E_U = ET_U + rE-U by definitionA_A = S_U – S_T (by definition and as per Miskolczi)DT = Dry Thermals ~ 17 @ 60% cloud coverK = ET + DT (by definition and as per Miskolczi)F = absorbed SW IR (as per Miskolczi)E_D ~ 0.625(ET + DT) + 0.5F + 0.625A_A on the grounds: (a) SW IR absorbed throughout entire atmosphere but(b) LW IR absorbed below the clouds.Cloud%, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, Old E_U100, 3.58, 11, 133, 50, 0, 178, 385, 362, 22880, 2.94, 21, 107, 40, 8, 178, 375, 345, 218 60 2.55, 31, 80, 30, 17, 178, 365, 328, 208 40, 2.27, 41, 53, 20, 23, 178, 355, 288, 198 20, 2.05, 51, 27, 10, 29, 178, 345, 302, 1880, 1.87, 61, 0, 0, 34, 178, 335, 270, 178,Note: S_U does not = 2 x old E_U except around 40% cloud cover. A_A does not = E_D (not required).LW IR Tau is not constant (not required).LW IR homeostasis arises through constancy of sum of (ET_U + S_T) and constancy of rE_U and proportionality of OLR to incoming FoThis approach unifies the roles of the important K and F terms (which Miskolczi essentially ignored) into the all sky global energy balance/framework. It shows how and why they are critical to a concept of low CO2 sensitivity 'homeostasis' applying across the full range of cloud covers (which of course applies naturally), in line with Lindzen's concept of negative cloud forcing (iris). At least for LW IR and the role of ET and clouds, this is the core issue which IMO Miskolczi singularly failed to address (despite a massive smoke screen). Miskolczi's so-called 'constant tau', so-called elimination of the surface/atmosphere temperature discontinuity and the so-called Atmospheric Kirchoff Law were all irrelevant distractions.

  • http://www.ecoengineers.com Steve Short

    Hi Anthony. I hope my last two posts adequately respond to your questions to me. Do you have the references for (or pdf copies of) the recent Mike Hammer and Paltridge et al. papers? Thanks.

  • cohenite

    Steve,

    the Mike Hammer paper is here;

    http://jennifermarohasy.com/blog/2009/03/radical-new-hypothesis-on-the-effect-of-greenhouse-gases/#comment-87115

    The Paltridge sorry saga is discussed here with a reference to the publication source;

    http://www.climateaudit.org/?p=5416

  • Alex Harvey

    Hi Pochas,

    If you were able to support this claim with reference to the historical literature of radiative transfer it would greatly help Ferenc's case. Unfortunately I have reviewed the literature in the hope that I could bolster this case but found instead that all the evidence goes against the Miskolczi storyline.

    The issue as I see it is that Ferenc has written (M07, p. 13):

    "…There were several attempts to resolve the above deficiencies by developing simple semi-empirical spectral models, see for example Weaver and Ramanathan (1995), but the fundamental theoretical problem was never resolved. The source of this inconsistency can be traced back to several decades ago, when the semi-infinite solution was first used to solve bounded atmosphere problems. About 80 years ago Milne stated: 'Assumption of infinite thickness involves little or no loss of generality', and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…"

    The problem with all this is that the temperature discontinuity originated earlier, in the work of Robert Emden in 1913, so it obviously can't really be the result of any error that either Milne or Eddington made later.

    Emden, R. 1913: Über Strahlungsgleichgewicht und atmosphärische Strahlung. Sitz. d. Bayerische Akad. d. Wiss., Math. Phys. Klasse, 55. A partial translation and early commentary for this can be found in Bateman 1916 here: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044…

    Now throughout history it seems that the temperature discontinuity has been derived by many people in many situations, but in all instances that I have been able to find it is always derived from first principles with an acknowledgement to Emden as the first to discover it. One example is the astrophysicist Jeremiah Ostriker, who studied the problem in the 1960s.

    Ostriker, J. P. [1963], Radiative Transfer in a Finite Gray Atmosphere, Astrophysical Journal, 138, 281-290, here: http://adsabs.harvard.edu/full/1963ApJ…138..281O

    On p. 284 he concludes:

    "…we see that there will be a discontinuity in the temperature between the lowest layer of the atmosphere (T1) and the ground (Tb). This discontinuity, though somewhat reduced, persists in higher approximations and in the exact solution; in an early paper on a similar problem, Emden (1913) found the same type of discontinuity."

    Now it seems perfectly clear that when Ostriker discusses the work of Eddington he knows perfectly well that he's dealing with a semi-infinite approximation, yet he still derives the temperature discontinuity by another route. So the question is: what has Ostriker done wrong then? He clearly knows the atmosphere is bounded, he's clearly not following Milne at all (he doesn't even cite Milne), his work follows from first principles, he seems to know to keep that exponential term, and yet he derives a temperature discontinuity.

    If people still want to defend the Miskolczi theory, someone has got to explain this to us.

  • JamesG

    Well that certainly answered Nick Stokes question about “what discontinuity” at least. I trust he'll stop repeating it now.

  • Nick Stokes

    No such luck. I jumped on Pochas because he said "the surface temperature discontinuity that could directly result in modeling errors", and I've been saying over and over that this has nothing to do with modelling. No-one uses it. People here have very fuzzy ideas on this.

    The first-order ODE approximation that leads to this "discontinuity" is elementary. It's now derived as a reduction from more exact theory, but it's likely that people thought of it independently in the 19th C. I'm not surprised at Alex's discovery. It's just what you get if you assume the transport is a constant energy stream and entirely radiative. There's nothing semi-infinite about it. It just works from the top down, until it doesn't.

    As I've said over and over, the "discontinuity" arises because you only have one free parameter, e.g. the constant flux, and you have to match a boundary condition at the top of the atmosphere, where the assumption really is true. Thereafter, the approximation works as long as the condition holds. It breaks down when other fluxes become important.

    FM's variant is nonsense. You can't force it to satisfy another condition without sacrificing the one that is really needed, at TOA. Which he does.
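    For reference, the elementary solution described here can be written out in a few lines. This is a sketch of the standard gray radiative-equilibrium result (not M's variant); sign conventions and the diffusivity factor vary between textbooks, and OLR = 239 W/m^2 with a surface LW optical depth of 1.87 are illustrative values only. It satisfies the TOA condition exactly and still produces the Emden-type surface discontinuity.

```python
# Classical gray atmosphere in radiative equilibrium (Emden/Milne lineage):
# net upward LW flux H = OLR is constant, the source J(tau) = (F_up + F_down)/2
# is linear in tau, and sigma*T^4 of the air equals J.
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
OLR, D, tau_s = 239.0, 1.5, 1.87   # illustrative values; D = Eddington diffusivity

def fluxes(tau):
    J = 0.5 * OLR * (1.0 + D * tau)
    return J + 0.5 * OLR, J - 0.5 * OLR, J   # F_up, F_down, J

# TOA: the single free constant is fixed by matching F_down(0) = 0,
# and then F_up(0) = OLR comes out automatically.
F_up0, F_down0, _ = fluxes(0.0)
print(F_up0, F_down0)              # 239.0 0.0

# Surface: the ground radiates F_down(tau_s) plus the absorbed solar (= OLR),
# i.e. J(tau_s) + OLR/2, exceeding sigma*T^4 of the adjacent air by OLR/2.
_, F_down_s, J_s = fluxes(tau_s)
T_air = (J_s / SIGMA) ** 0.25
T_ground = ((J_s + 0.5 * OLR) / SIGMA) ** 0.25
print(round(T_air, 1), round(T_ground, 1))   # discontinuity of ~18 K
```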

  • Alex Harvey

    Nick,

    I am not sure what I've discovered, to be honest. I should have added that Milne 1922 contains a discussion of Emden 1913, and Milne claims in there that he only became aware of Emden's paper after he'd already finished a draft of his own. However, Milne doesn't seem to discuss Emden's temperature discontinuity. So it is not impossible that Milne merely carried forward an earlier mistake made by Emden or even Schwarzschild.

    You may be interested to read this post here: http://complexclimate.blogspot.com/2008/09/even…

    Quote: "But why did their simple model not work? The reason is that a mistake was made by a giant of astrophysics Robert Emden. In 1913 he applied the equations for internal solar radiation derived by his brother in law, another giant of astrophysics Karl Schwarzschild, to the earth's atmosphere. The radiation scheme is based on Schwarzschild's equation. However, we have a planetary atmosphere not a stellar one!"

    Sounds like the familiar Miskolczi storyline with Milne replaced by Emden? So it's not impossible that I've discovered nothing other than a minor historical inaccuracy. I suppose it's not impossible that Ostriker is likewise following Emden in making an error. If so, someone who really understands the theory would be able to respond to this and say what Ostriker did wrong, or why it is not relevant.

  • Nick Stokes

    Well, I guess you have two storylines here, and need to decide which is more believable. One is that three giants of astrophysics somehow made a gross error in an elementary approximation, which was then taught to thousands of students over the years, and remained unnoticed until genius FM came along in 2007 and explained it all with his trademark clarity. Or: the elementary approximation is just what it is, as explained so long ago, and was misunderstood by FM in 2007, as he has misunderstood so much other physics in his paper, and a few bloggers in 2009, who don't understand the alleged error at all, just won't let it go.

  • jae

    "…noting Miskolczi himself has retired back into petulant silence, as is his wont."

    That is why I gave up on M's stuff, until such time as the author comes to his OWN rescue.

  • davids99us

    I don't think you should read anything into people not commenting on a blog, particularly when they have work to do. Just appreciate when they do put in the effort to explain and engage. For me, I look forward to the next installment from M, especially if its about Venus.

  • Pat Cassen

    Hi Alex – Nice job of tracing back the literature (I didn't know that Emden had worked this problem – he was famous for many other things).

    Now, having gone through these papers, you don't really have to decide between Nick's storylines; you can see for yourself if Miskolczi's "solution" satisfies the TOA boundary conditions. [To do this, you need the relation between B and the upward and downward fluxes at tau = 0, in the Eddington approximation (upward flux = OLR; downward = 0). I imagine that these relations are given in Milne, Emden or Ostriker, or certainly in standard textbooks.] You will find that Nick is right; M's solution fails.

    I once posted here something to the effect that nothing is ever settled on blogs. Perhaps, thanks to Nick, Steve Short and yourself, I will be proved wrong. I would be delighted if that were the case. (Thanks, of course, to our host, also.)

  • Jan Pompe

    Pat, what are the boundary conditions for a boundless integral? What are the boundary conditions for a (semi-infinite) Laplace Transform? Then kindly tell us why climatologists have found three: one for TOA and one for the surface.

  • Jan Pompe

    Correction: that's two for the surface. Also tell us how, as you seem to imply, M's boundary condition at TOA for tau = 0 differs from the rest?

  • cohenite

    Steve has posted a more comprehensive theory for maintenance of LW IR homoeostasis than offered by M; in Steve's theory clouds are the deus ex machina; when clouds increase, S_T decreases but the total OLR remains constant because the low level clouds emit more through ET_U, thus bringing the non-radiative thermal energy K and the incoming SW F into the picture; in this way tau may vary but the greenhouse effect doesn't. The key is still high water; if that increases then the low-level cloud effect, ET_U, will be blocked, so we are still at the cutting edge Dessler/IPCC vs NCEP/Paltridge dispute.

    As to the boundary situation; Nick says only the TOA boundary is essential and indeed extant; I still don't see that that changes the necessity for the AGW model to be a semi-infinite one. AGW predicts a THS; a THS is a raising of the tropopause whereby the cooler stratosphere air is replaced by warmer upper troposphere air; as AGW claims, the more CO2 there is, the higher the THS, and the higher the CO2 has to go, and the longer it has to wait before it can strike cold air and emit to space. Without wanting to go into the problematic existence of a THS, the point is that AGW is based on a semi-infinite atmospheric model.

    • http://www.ecoengineers.com Steve Short

      I’ve cross posted this from Jennifer Marohasy’s blog for those who don’t go there.

      Here is another run of the basic spreadsheet (slightly expanded to improve self-explanation) for my little model. This time I once again kept S_U = 396 W/m^2, OLR = 239 W/m^2, F=78 W/m^2 again all as per the T,F&K09 review (see the cartoon therein).

      However, I also forced rE_U (i.e. the real E_U) to be 169 as per T,F&K09 and I set S_T to average 40 at 60% cloud cover again as per T,F&K09 (rather than the 31±10 of the CERES and ERBE averages).

      All other assumptions were as listed previously including that again I assumed E_D~0.625(ET+DT)+0.5F+0.625A_A on the grounds I have previously explained above.

      Once again I set Dry Thermals (DT; convective sensible heat) to be 17 W/m^2 at 60% cloud cover but as before scaled DT to be 0 at 100% cloud cover and 34 at 0% cloud cover. This is a conservative assumption in that it tends to force my estimate of E_D towards A_A i.e. in the direction of Miskolczi’s so-called Atmospheric Kirchoff Law. One could just as easily run this assuming DT = 17 at all cloud covers (probably unlikely physically) and the outcomes would only be trivially different.

      Here are the results:

      %Cloud, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, oE_U, S_U/oE_U, A_A/E_D, S_U, OLR, F
      100 2.99 20 133 50 0 169 376 357 219 1.81 1.05 396 239 78
      80 2.58 30 107 40 8 169 366 340 209 1.89 1.08 396 239 78
      60 2.29 40 80 30 17 169 356 322 199 1.99 1.11 396 239 78
      40 2.07 50 53 20 23 169 346 303 189 2.10 1.14 396 239 78
      20 1.89 60 27 10 29 169 336 284 179 2.21 1.18 396 239 78
      0 1.73 70 0 0 34 169 326 264 169 2.34 1.23 396 239 78

      It can be seen that the real (LW IR) tau ranges from 1.73 at full clear sky to 2.99 at full 100% cloud cover, and is 1.89, i.e. ~1.87, only at 20% cloud cover.

      The Miskolczi 'Kirchoff Law' test ratio A_A/E_D ranges from 1.05 at 100% cloud cover to 1.23 at clear sky.

      The Miskolczi 'Virial Rule' test ratio S_U/oE_U (i.e. S_U/old E_U) ranges from 1.81 at 100% cloud cover to 2.34 at clear sky. It is 1.99, i.e. ~2.00, only at 60% cloud cover, i.e. at the global all sky % cloud cover.

      I would be happy to send a copy of this little Excel spreadsheet to anyone who emails me, so they can play around with it themselves. Your email address would not be recorded.

      You can then make up your own minds what this simple exercise tells you about:

      (1) the likely validity of the major tenets of Miskolczi Theory; and

      (2) the significance of LW IR emitted by release of latent heat in clouds (water emission lines), which typically escapes to contribute to OLR (as a simple function of % cloud cover).
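      The table above can be regenerated from its stated inputs. A minimal sketch (it assumes tau = ln(S_U/S_T) and the E_D weighting given earlier; the %cloud, S_T, ET, ET_U and DT columns are taken directly from the run):

```python
# Recompute the derived columns of the run above from its stated inputs.
import math

S_U, F, rE_U = 396.0, 78.0, 169.0
# (%cloud, S_T, ET, ET_U, DT) as listed in the table
rows = [(100, 20, 133, 50, 0), (80, 30, 107, 40, 8), (60, 40, 80, 30, 17),
        (40, 50, 53, 20, 23), (20, 60, 27, 10, 29), (0, 70, 0, 0, 34)]

for cloud, S_T, ET, ET_U, DT in rows:
    tau = math.log(S_U / S_T)                 # LW optical depth
    A_A = S_U - S_T
    E_D = 0.625 * (ET + DT) + 0.5 * F + 0.625 * A_A
    oE_U = ET_U + rE_U
    print(cloud, round(tau, 2), round(E_D),
          round(S_U / oE_U, 2), round(A_A / E_D, 2))
# e.g. the 60% row gives 2.29, 322, 1.99, 1.11, matching the table
```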

      • http://www.ecoengineers.com Steve Short

        Jan/Pat/Nick/David/Anthony

        I know this is not what Jan/Pat & Nick are currently (+ with déjà vu ;-) scrapping about (BTW 'whacko' is one of my favourite slang words), but I wonder if you guys would feel like having a look at my little Miskolczi-modifying (refuting?) spreadsheet above in the light of Table 1 (page 444) in Ozawa and Ohmura (1996) – which paper I presume you guys are familiar with.

        I'm well on the way to modifying my spreadsheet model to have slightly different S_U values (and surface temperatures) for different (true) LW IR (and hence tau) values as per O&O96 Table 1, and intriguingly seem to be getting close to a situation which does actually maximize entropy production along the lines of the (relatively simple) approach well described in O&O96.

        Unfortunately I am in the middle of a really big work project at the moment (designing a hydromet plant for a magnesium production facility) and am having great difficulty concentrating on this stuff.

        Maybe none of you are interested in MEP – in which case please ignore this message. But I seem to be on the verge of something rather interesting, i.e. an atmospheric box model even simpler than Miskolczi's which gets around the dodgy Eqn 7, doesn't require 'Kirchoff' or 'Virial', doesn't need a constant tau, involves M's K & F terms and provides an MEP-based basis for an inferred homeostasis.

        As it intimately involves a variable tau, convection and a surface S_U = S-B sigma*T^4 perhaps your interest might be piqued?

        Regards

        • Anonymous
          • http://www.ecoengineers.com Steve Short

            David: That URL doesn’t work for me (Firefox 3.0.10), I get a blank page.

        • Anonymous

          Steve, Interesting O&O as you say. I didn't have a chance to follow up on your references before. The assumed proportionality of long-wave and short-wave optical depth (between equations 2 and 3) would imply that an increase in LW optical depth due to increased GHGs would also increase SW optical depth, presumably due to increased water vapor and hence cloudiness.

          What would you suggest for a more recent follow-up to this model?

          • http://www.ecoengineers.com Steve Short

            Pauluis, OM and Held IM (2002a) Entropy budget of an atmosphere in radiative-convective equilibrium. Part I: maximum work and frictional dissipation. J. Atmos. Sci. 59: 125-139

            This is interesting because it concludes that moist convection (ET) behaves more as an atmospheric dehumidifier than as a heat engine.

            Pauluis OM, Held IM (2002b) Entropy budget of an atmosphere in radiative-convective equilibrium. Part II: Latent heat transport and moist processes. J. Atmos. Sci. 59: 140-149

            Conclusion: Frictional dissipation of atmospheric motions accounts for ~30% of total entropy production, frictional dissipation of falling rain ~12%, phase changes and diffusion of water vapor ~40%, with the remaining ~20% within the uncertainties in the above.

        • http://www.ecoengineers.com Steve Short

          Hmmm, interesting, a 6 W/m^2 increase in OLR going to 100% cloud cover, a 9 W/m^2 decrease in OLR going to zero cloud cover.

          %Cloud, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, oE_U, S_U/oE_U, A_A/E_D, S_U, OLR, F, (ET+DT)/S_U
          100, 2.72, 26, 133, 50, 7, 169, 370, 371, 219, 1.81, 1.00, 396, 245, 78, 0.354
          80, 2.48, 33, 107, 40, 12, 169, 363, 353, 209, 1.89, 1.03, 396, 242, 78, 0.300
          60, 2.29, 40, 80, 30, 17, 169, 356, 335, 199, 1.99, 1.06, 396, 239, 78, 0.245
          40, 2.13, 47, 53, 20, 23, 169, 349, 317, 189, 2.10, 1.10, 396, 236, 78, 0.193
          20, 1.99, 54, 27, 10, 29, 169, 342, 300, 179, 2.21, 1.14, 396, 233, 78, 0.141
          0, 1.87, 61, 0, 0, 34, 169, 335, 281, 169, 2.34, 1.19, 396, 230, 78, 0.086

          This suggests a key issue to resolve is whether, with increasing cloud cover, the release of latent heat (in all directions) rises proportionately. Intuitively one would think so. After all, over large areas of cloud there is (presumably) always about the same probability that about the same proportion is condensing into rain.

          After O&O96 and P&H02 this also suggests that entropy production (EP) is a function of cloud cover, due to the fact that, as P&H02 suggest, moist convection accounts for ~40% of EP.


  • Pat Cassen

    Hi Jan –
    M’s solution fails to meet the TOA boundary conditions in the following sense:
    If one imposes the condition that the downward LW flux = 0 at tau = 0, as it must, the upward flux does not equal OLR. If one imposes the condition that the upward flux equals OLR, as it must, then the downward LW flux does not equal zero.

    The classical solution of Emden, Milne, et al. does satisfy these boundary conditions. (The two conditions are used to determine the two coefficients that are introduced by the Eddington approximation, which M is using.)

    I don’t understand your other questions.
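    Pat's bracketed point can be made concrete with a toy calculation. In one common two-stream convention the source is linear in tau, J(tau) = a + b*tau, with F_up = J + H/2 and F_down = J - H/2 for constant net flux H; the two TOA conditions then pin down the constants completely. (Conventions and the slope factor vary between texts, so treat this as illustrative, not as M's notation.)

```python
# The two TOA conditions leave no freedom in the classical linear solution.
OLR, D = 239.0, 1.5            # D = diffusivity factor (Eddington choice)

# Conditions at tau = 0:
#   a + H/2 = OLR   (upward flux equals OLR)       [1]
#   a - H/2 = 0     (no downward LW from space)    [2]
a = 0.5 * OLR                  # [1] + [2]  =>  2a = OLR
H = OLR                        # [1] - [2]  =>  H = OLR
b = 0.5 * D * H                # slope fixed by the transfer eqn dJ/dtau = D*H/2

def F_up(tau):   return (a + b * tau) + 0.5 * H
def F_down(tau): return (a + b * tau) - 0.5 * H

print(F_up(0.0), F_down(0.0))  # 239.0 0.0 -- both conditions met
# A "solution" with any other constant term a != OLR/2 must violate [1] or [2].
```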

    • Jan Pompe

      “If one imposes the condition that the upward flux equals OLR, as it must, then the downward LW flux does not equal zero.”

      I do not see why. If tau at TOA is zero there is no absorption and no re-emission, only transmission; I see no reason to assume that because transmission at TOA is non-zero there must also be a downward flux. I certainly see no such assumption made in M-07.

      “I don’t understand your other questions.”

      I didn’t think so.

      Put simply, semi-infinite means unbounded in one direction.
      If an integral is unbounded in one direction there is NO boundary, so why are there two in the classical theory?

      • Pat Cassen

        Jan says: “I see no reason to assume that because transmission at TOA is non zero that there must also be a downward flux”

        Right. The point is that there really is no downward LW flux at TOA, but M's solution (incorrectly) gives a non-zero downward flux, if you demand that the upward flux is OLR, as it must be.

        The Eddington approximation, which M uses to derive eqn. 11 and the following, introduces coefficients that specify the radiation field in terms of the angular distribution of the specific intensity. Boundary conditions at tau = 0 are used to determine these coefficients. M doesn't talk about any of this, but he cannot legitimately ignore these boundary conditions. A nice feature of eqn 11, etc., is that the spatial variable (height) is effectively replaced by the optical depth tau, which is well-behaved (goes to zero) at "infinite" height (TOA). So, although there is "no boundary" in space, there is an optical depth boundary.

        • Jan Pompe

          Pat “Right. The point is that there really is no downward LW flux at TOA, but M’s solution (incorrectly) gives a non-zero downward flux,”

          Where?

          “if you demand that that the upward flux is OLR, as it must be.”

          Why?

          A non-zero OLR says nothing about downward LW at TOA.

          I asked you why; you haven't answered.

          Your last paragraph seems confused in the extreme. M does not ignore the upper boundary, and the boundary conditions that I'm complaining about are not the one at the optical boundary where tau = 0 but the two where the equations are unbounded at tau = infinity.

  • Pat Cassen

    Jan –
    Where?
    Not in his paper; you have to calculate it yourself.

    Why?
    Why what? Why must the upward LW radiation be OLR? Conservation of energy, and the definition of OLR. Why does OLR at TOA demand (by M’s solution) a non-zero downward flux at TOA? Because that’s what his incorrect “solution” gives. Why do I know this? Because I calculated it. (As Nick implies above, it’s obvious.)

    “a non zero OLR says nothing about downward LW at TOA”
    Right. Downward LW at TOA is zero (in a proper solution) because there’s no LW radiation from space.

    “M does not ignore the upper boundary”
    You're right, he claims to deal with it in Appendix B. He just gets it wrong.

    You have a problem with the lower boundary. I don’t.

    Jan, I’m sorry, there’s so much wrong with this paper I can’t believe we’re still talking about it. As I can’t recall a single instance when you agreed with me, and I expect more of the same, I’m just going to let you work it out for yourself. Google “Eddington approximation” + “plane parallel atmosphere”. Lots of good resources. Understand the derivation of the classical solution. Try putting M’s solution into the same context. See for yourself why it is whacko. Leave your preconceptions in your pocket, learn something new, and have fun.

    • Jan Pompe

      Pat Cassen:

      “Jan –
      Where?
      Not in his paper; you have to calculate it yourself.”

      What?

      You are saying

      “but M’s solution (incorrectly) gives a non-zero downward flux”

      but not in his paper and I have to calculate it myself?

      Do you think this makes any sense?

      If I have to invent a downward flux for myself, then M’s solution does not give a downward flux.

      Makes about as much sense as your earlier complaint about three equations to solve nine variables (better check, I may have miscounted).

      “See for yourself why it is whacko.”

      What is whacko is boundary conditions for a differential equation where the solution is unbounded. What is whacko is using an unbounded solution where the system is bounded. What is whacko is two boundary conditions for a first-order differential equation.

      All this whackiness comes about by using approximations suitable for optically stellar atmospheres for an optically thin one.

      • Jan Pompe

        “for optically stellar” should read “for optically thick stellar”

      • Nick Stokes

        Jan,
        As Pat says, it is obvious. I doubt if he’ll want to deal further with your wearisome barrage of incoherent objections, but verification is simple. Go to M’s Eq 15. As M says, that is the Eddington approximation. It is based on constant radiative flux H – see the start of the paragraph above Eq 12. Substitute tau=0, and you regenerate the boundary condition as a value of B.

        Now go to M’s Eq 21. Substitute tau=0. You get a quite different value of B. M’s solution does not satisfy the bc at TOA.

        Whether you interpret that as implying a non-zero return flux, or an incorrect OLR doesn’t matter. It isn’t right, and something’s gotta give.
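
        [For anyone wanting to reproduce this check without the paper in hand, here is a minimal two-stream sketch of my own (not M’s code). Assuming a diffusivity factor of 1 and a net LW flux H = 240 W/m^2, radiative equilibrium gives pi*B(tau) = (H/2)(1 + tau), and the relations F+ + F- = 2*pi*B, F+ - F- = H let you read off the downward flux at tau = 0. The exp(-tau) profile below is a hypothetical stand-in for a bounded-type solution, not M’s actual Eq 21:]

```python
import math

H = 240.0  # net LW flux (= OLR), W/m^2; illustrative value only

def piB_classical(tau):
    # Classical two-stream grey solution (diffusivity factor 1):
    # pi*B(tau) = (H/2) * (1 + tau)
    return 0.5 * H * (1.0 + tau)

def piB_alt(tau):
    # Hypothetical profile with an exp(-tau) term -- a stand-in for a
    # bounded-type solution, NOT M's actual Eq 21.
    return 0.5 * H * (1.0 + tau + math.exp(-tau))

def F_down_toa(piB):
    # Two-stream relations: F+ + F- = 2*pi*B and F+ - F- = H,
    # so the downward flux at tau = 0 is F-(0) = pi*B(0) - H/2.
    return piB(0.0) - 0.5 * H

print(F_down_toa(piB_classical))  # 0.0   -> TOA condition satisfied
print(F_down_toa(piB_alt))        # 120.0 -> spurious downward flux at TOA
```

        [Any candidate pi*B(tau) whose value at tau = 0 differs from H/2 implies a non-zero downward flux at TOA, which is the inconsistency being pointed out.]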

        • Jan Pompe

          Nick
          First you need to look up the meaning of “barrage”, and then explain how I’m putting up a barrier to understanding by asking questions that neither you nor Cassen will answer.

          Sorry, but you have it back to front: Cassen put up the wearisome barrage, and you are helping.

          Kindly answer the question I asked instead of putting up red herrings.

          Equation 15 is the semi-infinite solution; otherwise there would be transcendental terms of tau in it, so where are they?

          Equations 20 and 21 are the bounded solution of the DE; they have the transcendental term (T_A = exp(-tau)). Of course you are going to get different boundary conditions; we expect that.

          The question that you need to answer is: what makes Eq 15 right, when it requires the two equations (16 & 17) to satisfy the boundary condition at the surface, where according to the solution (Eq 15) tau is infinite, leading to a temperature discontinuity that is NOT OBSERVED, or as Milne put it, a singularity at the surface.

          Please no more red herrings they are getting tiresome.
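
          [For reference, the arithmetic behind the “magic tau” in the head post ties these quantities together via the flux transmittance T_A = exp(-tau). A quick check with the thread’s numbers, taking S_U = 396 W/m^2 as the surface upward flux:]

```python
import math

S_U = 396.0   # surface upward LW flux, W/m^2 (value from the thread)
tau = 1.87    # M's claimed flux optical depth

T_A = math.exp(-tau)   # flux transmittance
S_T = S_U * T_A        # transmitted surface flux reaching TOA
print(round(T_A, 3), round(S_T, 1))  # 0.154 61.0
```

          [That is, tau = 1.87 with S_U = 396 reproduces an S_T of about 61 W/m^2, which is exactly the pairing questioned in the head post.]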

          • Alex Harvey

            Jan,

            Pat seems to be undeniably correct when he suggests we need to understand the classical solution first, rather than Miskolczi’s critique of it.

            I have shown these papers to you before, but again: how is it that Ostriker says that the temperature discontinuity persists not only in the Eddington approximation but in the exact solution? This seems to completely contradict the Miskolczi storyline. The Miskolczi storyline holds that there never even WAS an exact solution before his 2007 paper. Ostriker is referring, I believe, to the 1955 solution of Jean I.F. King, and King acknowledges a debt to Chandrasekhar, who (I believe) derives the whole thing from first principles.

            This is really starting to get silly.

            Can you show us what Jean I.F. King did wrong? Can you show us what Chandrasekhar did wrong? Otherwise we still have Miskolczi’s theory correcting an error in a theory and there has never been a single piece of evidence produced that anyone ever made this error in the first place.

          • Nick Stokes

            Alex,
            You should also note this observation of Ostriker on p. 285:
            Thus, for example, this model could not be applied to the optically thin atmosphere of the earth, since (a) the surface discontinuity would be quite significant and (b) the approximate solution would be rather inaccurate.
            I think his caveat (b) applies to the thinness; it causes a local error near the surface (O says for tau < 1/2), and this is true for all the solutions in FM as well. But he may also be referring to the effect of LH and convection in the lower atmosphere – as I've said many times, I think this is the major issue on earth, and makes worrying about a surface discontinuity irrelevant. The basic assumption of radiative equilibrium has failed well before you get to the surface.

          • Nick Stokes

            Oops, I meant “a local error near TOA”

          • Alex Harvey

            Fair enough, what are your thoughts on this paper then:

            King, J. I. F 1956, Radiative Equilibrium of a Line-Absorbing Atmosphere. Astrophysical Journal, vol. 124, p.272

            http://adsabs.harvard.edu/full/1956ApJ…124..272K

          • Nick Stokes

            Alex, I have a bit to say here, but the page is getting narrow – so it’s below.

          • Jan Pompe

            “Pat seems to be undeniably correct when he suggests we need to understand the classical solution first, rather than Miskolczi’s critique of it.”
            Sure, Alex, and we also need to study flat-earth theories before we can see the world as an oblate spheroid.

            Sorry, but I disagree with you, unless you can show me that it is reasonable to impose a boundary condition in the unbounded direction of integration, and then that it's reasonable to come up with two boundary conditions for a first-order differential equation.

            You may also throw at some people this remark from Nick, which both I and M would agree with:

            But he may also be referring to the effect of LH and convection in the lower atmosphere – as I’ve said many times, I think this is the major issue on earth, and makes worrying about a surface discontinuity irrelevant.

            FM's model, despite the fact that he doesn't quantify convection or latent heat in the paper (because it's beyond the scope of the paper), is in fact a radiation/convection model.

          • http://www.ecoengineers.com Steve Short

            “FMs model despite the fact that he doesn’t quantify convection or latent heat in the paper because it’s beyond the scope of the paper is in fact a radiation/convection model.”

            Hi Jan. You must have extremely strong teeth (or a titanium denture)! To see you masticating that statement after all this time is quite a stunner, especially the little gem therein… “because it's beyond the scope of the paper” indeed!!!!

            What’s that you were saying about flat earths and the like? Choke!

          • Jan Pompe

            Steve “Hi Jan. You must have extremely strong teeth (or a titanium denture)!”

            Yes they are quite strong but I can assure you they are quite natural. On the other hand if yours are falling out you might be ready for a visit by ACAT.
            (Your partner will know what that is)

  • davids99us
  • http://www.ecoengineers.com Steve Short

    Jan/Pat/Nick/David/Anthony

    I know this is not what Jan/Pat & Nick are currently (+ with déjà vu ;-) ) scrapping about (BTW 'whacko' is one of my favourite slang words), but I wonder if you guys would feel like having a look at my little Miskolczi-modifying (refuting?) spreadsheet above in the light of Table 1 (page 444) in Ozawa and Ohmura (1996) – which paper I presume you guys are familiar with.

    I'm well on the way to modifying my spreadsheet model to have slightly different S_U values (and surface temperatures) for different (true) LW IR (and hence tau) values as per O&O96 Table 1, and intriguingly seem to be getting close to a situation which does actually maximize MEP along the lines of the (relatively simple) approach well described in O&O96.

    Unfortunately I am in the middle of a really big work project at the moment (designing a hydromet plant for a magnesium production facility) and am having great difficulty concentrating on this stuff. Maybe none of you are interested in MEP – in which case please ignore this message. But I seem to be on the verge of something rather interesting, i.e. an atmospheric box model even simpler than Miskolczi's which gets around the dodgy Eqn 7, doesn't require 'Kirchoff' or 'Virial', doesn't need a constant tau, involves M's K & F terms and provides an MEP-based basis for an inferred homeostasis.

    As it intimately involves a variable tau, convection and a surface S_U = S-B sigma*T^4, perhaps your interest might be piqued?

    Regards

  • http://www.ecoengineers.com Steve Short

    David: That URL doesn't work for me (Firefox 3.0.10), I get a blank page.

  • davids99us

    Steve, Interesting O&O, as you say. I didn't have a chance to follow up on your references before. The assumed proportionality of long-wave and short-wave optical depth (between equations 2 and 3) would imply that an increase in optical depth due to increased GHGs would also increase SW optical depth, presumably due to increased water vapor and hence cloudiness. What would you suggest for a more recent follow-up to this model?

  • http://www.ecoengineers.com Steve Short

    Pauluis OM and Held IM (2002a) Entropy budget of an atmosphere in radiative-convective equilibrium. Part I: Maximum work and frictional dissipation. J. Atmos. Sci. 59: 125-139.

    This is interesting because it concludes that moist convection (ET) behaves more as an atmospheric dehumidifier than as a heat engine.

    Pauluis OM and Held IM (2002b) Entropy budget of an atmosphere in radiative-convective equilibrium. Part II: Latent heat transport and moist processes. J. Atmos. Sci. 59: 140-149.

    Conclusion: frictional dissipation of atmospheric motions accounts for ~30% of total entropy production, frictional dissipation of falling rain ~12%, phase changes and diffusion of water vapor ~40%, and the remaining ~20% is uncertainty in the above.

  • http://www.ecoengineers.com Steve Short

    Hmmm, interesting: a 6 W/m^2 increase in OLR going to 100% cloud cover, a 9 W/m^2 decrease in OLR going to zero cloud cover.

    %Cloud, Tau, S_T, ET, ET_U, DT, rE_U, A_A, E_D, oE_U, S_U/oE_U, A_A/E_D, S_U, OLR, F, (ET+DT)/S_U
    100, 2.72, 26, 133, 50,  7, 169, 370, 371, 219, 1.81, 1.00, 396, 245, 78, 0.354
     80, 2.48, 33, 107, 40, 12, 169, 363, 353, 209, 1.89, 1.03, 396, 242, 78, 0.300
     60, 2.29, 40,  80, 30, 17, 169, 356, 335, 199, 1.99, 1.06, 396, 239, 78, 0.245
     40, 2.13, 47,  53, 20, 23, 169, 349, 317, 189, 2.10, 1.10, 396, 236, 78, 0.192
     20, 1.99, 54,  27, 10, 29, 169, 342, 300, 179, 2.21, 1.14, 396, 233, 78, 0.141
      0, 1.87, 61,   0,  0, 34, 169, 335, 281, 169, 2.34, 1.19, 396, 230, 78, 0.086

    This suggests a key issue to resolve is whether, with increasing cloud cover, the release of latent heat (in all directions) rises proportionately. Intuitively one would think so. After all, over large areas of cloud there is (presumably) always about the same probability that about the same proportion is condensing into rain.

    After O&O96 and P&H02 this also suggests that entropy production (EP) is a function of cloud cover, due to the fact that, as P&H02 suggest, moist convection accounts for ~40% of EP.
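
    [Incidentally, the Tau and OLR columns in the table above appear to be internally consistent with S_U = 396 via Tau = ln(S_U/S_T) and OLR = S_T + oE_U. A quick consistency check, with the row values copied from the table:]

```python
import math

S_U = 396.0
# (%Cloud, Tau, S_T, oE_U, OLR) -- values copied from the table above
rows = [
    (100, 2.72, 26, 219, 245),
    (80,  2.48, 33, 209, 242),
    (60,  2.29, 40, 199, 239),
    (40,  2.13, 47, 189, 236),
    (20,  1.99, 54, 179, 233),
    (0,   1.87, 61, 169, 230),
]
for cloud, tau, S_T, oE_U, OLR in rows:
    assert round(math.log(S_U / S_T), 2) == tau  # Tau = ln(S_U/S_T)
    assert S_T + oE_U == OLR                     # OLR = S_T + oE_U
print("all rows consistent")
```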

  • Nick Stokes

    Alex,
    I haven’t read much of the King paper – it’s long, and the system of having to download one page at a time doesn’t help. But I’ve had more thoughts on where this all fits in.

    It goes back, in a way, to my first objection, that FM has led us down a blind alley. None of this theory is actually useful for, or used by, climate modelling. In fact, most climate people would be unfamiliar with it. And the Ostriker quote suggests that the planetary people, too, don't think it is much use for Earth.

    The whole math theory in which H is related to B as a function of tau is a bit like the ray theory of light in microscopy. On a macro scale it works, when the geometry is on a scale many times the wavelength of light. You can see sharp edges. But as you get down to the wavelength of light, ray theory gradually breaks down, and microscopy does not work. Everything gets fuzzy. This is not sudden or absolute, but it is a problem.

    With this radiative theory, the corresponding length is unit optical density. On that scale, radiation travels significant distances before undergoing the interactions on which the theory is based.

    We have discussed the boundary condition at TOA. A natural question is, where is TOA, in this grey-body model? We know there is no sharp line. In fact it is a distributed region, over a distance of order of OD 1. That means that even the ground has some TOA connection. The B corresponding to H can’t be attributed to a point in space – it’s an average over a big region.

    In a way, this is the problem M is trying to address, by using an alternative bc. But that has the same problem. In fact, the problem is in the use of the equation itself, on this scale. There is no simple fix for the failure of a light microscope at submicron ranges, and it’s the same here.

    The problem is much exacerbated by the use of the grey-body approx (which no climate modeller would use). This describes a range from the IR window, with no absorption at all, to the CO2 and water peaks, where IR is absorbed within a few metres. This is badly summarised by an average of a few km.

    If you take out the window, the remaining IR has a slightly more meaningful, and larger, average OD. And the “ray theory” – radiative transfer equations – would work better. So the concept may still have some use. But a grey-body approx makes it useless for Earth.

    I might add that this fuzziness also makes the notion of a “discontinuity” meaningless.
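
    [The "distributed TOA" point can be put in numbers. In a grey atmosphere, emission from optical depth tau below the top escapes with probability exp(-tau) (angle-integration factors aside), so for a roughly uniform source the fraction of escaping emission originating above depth tau is 1 - exp(-tau). A minimal sketch under those assumptions:]

```python
import math

def escape_fraction(tau):
    # Fraction of escaping emission originating above optical depth tau,
    # for a uniform grey source: integral of exp(-t) dt from 0 to tau
    # (normalised over the whole column).
    return 1.0 - math.exp(-tau)

for tau in (0.5, 1.0, 2.0, 3.0):
    print(tau, round(escape_fraction(tau), 2))
# 0.5 0.39
# 1.0 0.63
# 2.0 0.86
# 3.0 0.95
```

    [So the "emission to space" region is smeared over a layer of order OD 1, which is why a point-like TOA boundary condition is ill-defined in this model.]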

    • http://www.ecoengineers.com Steve Short

      Nice discussion Nick. I benefited from it. Thanks for that.

      We have an analogous state of ‘fuzziness’ when approaching the (charged) surface of an amphoteric solid in contact with an ionic solution. This has led to numerous theories/viewpoints of the chemical and electrostatic nature of the ‘discontinuity’ between solid and liquid (which is critical to issues of reactivity and (chemo-)thermodynamic equilibrium).

      PS: I slowly read through the King paper last night. Sheesh!

      I might add I always suspected M's approach of hanging an entire 'theory' on a fixed tau (putting aside all the other sub-theories) was nonsense, and the discovery that M's 'magic number' tau is not even a true LW IR absorption tau anyway (except perhaps in a pure clear-sky Earth with no convection), but must surreptitiously include TOA-escaping emission from the tops of clouds as well, pretty much tipped M Theory into the dustbin for me.

      I find it hard to believe that M did that unconsciously i.e. as a genuine error, given his background and the mass of literature. What a strange person!

      • Nick Stokes

        Steve,
        Totally OT, but I was once very interested in the double layer theory you mention. Paper here.

        • http://www.ecoengineers.com Steve Short

          Also totally OT.

          Cripes Nick, 1976, talk about deja vu – you must be really, really old! I didn’t get into that stuff until going to Ansto in ’82 (after a character-building, de-hippiefying decade in industry). I’m more your grubby experimentalist (chemisorption studies, EXAFS of wet surfaces etc). Jim Davis of Triple Layer fame still a close personal friend – I was quaffing Aussie reds with him and Linda only about a month ago.

          • Nick Stokes

            Well, I prefer to think that then I was really really young. But gosh, it’s a small world. I used to work with those people at ANSTO too. Gary Pantelis, Paul Brown. Mostly on the early stages of Sulfidox.

    • Anonymous

      Agreed. With a simple equation like this, whether you apply a boundary condition at TOA or BOA seems to be a matter of assumptions. In M the TOA condition is met 'somehow', just as the BOA discontinuity is dealt with 'somehow'. It may be, in M's case, that the constraint is not actually at the solid surface, but slightly above, where latent and convective transfer has already been effected.

      Interesting you see M as a blind alley. O&O's paper is similar in its gray-body approximation; though interesting and natural, it doesn't seem to have been elaborated (incorporating latitudinal variations, empirical validation, etc.). I also see close similarities between the O&O maximum entropy condition and M's problematic Eqn 7. They might be more similar than they look at first.

      • http://www.ecoengineers.com Steve Short

        Actually, David, O&O96 has been elaborated in spades!

        The body of MEP work in climate theory (and in other fields) is booming!

        I strongly suggest you get the Red Book and then follow up on the work of Kleidon, Dewar, Pauluis et al. and the big groups at e.g. Max Planck Institute for Meteorology in Hamburg and the Max Planck Institute for Biogeochemistry in Jena where Axel Kleidon is now.

        David Catling at University of Washington also has a group looking at biotic EP.

        Graham Farquhar at ANU is into it (sometimes with Paltridge).

        Check out: http://www.bgc-jena.mpg.de/bgc-theory/index.php/Research/2009-Thermodynamics

        • http://www.ecoengineers.com Steve Short

          Here’s a real gem:

          A Kleidon, K Fraedrich, E Kirk and F Lunkeit, 2006. Maximum entropy production and the strength of boundary layer exchange in an atmospheric general circulation model. Geophysical Research Letters, 33, L06706, doi: 10.1029/2005GL025373.

          “The difference in climate sensitivities of tropical and polar regions is at a minimum at a climatic state of MEP.”

          Look carefully and you will see this means, in effect, that any negative forcing at the tropics is a 'bonus' which has to result in an MEP-maximized meridional energy shift, and hence is missing from any averaged-out pseudo-vertical model like M Theory.

          This takes us all the way back to my "missing 20 – 25 W/m^2" from M Theory, which I posited occurred via lateral meridional energy/entropy flow, leading Jan to want to do a "burn him at the stake" job on me.

          That was, of course, before Nick and I realised M's tau simply wasn't a real LW IR tau at all (for a global all-sky model) and we could account for the missing 20 – 25 W/m^2 in that way.

          Ah, what a tangled web we weave, when first we venture to deceive.

      • Nick Stokes

        David,
        By blind alley, I meant specifically the Milne type radiative transfer theory for optically thick atmospheres.

      • Jan Pompe

        David ” That may be in M’s case, that the constraint is not actually at the solid surface, but slightly above, where latent and convection transfer has been effected already. ”

        The fact that the surface temperature is measured 1.5 – 2 m above the surface, and that complete optical occlusion doesn't occur for several metres more, makes that a given.

        • http://www.ecoengineers.com Steve Short

          I have big problems with this view. I think you are ignoring the demonstrable real-world competition between the viscosity and buoyancy of air.

          (1) The volume of warm rising air in a thermal passing through any arbitrary low-altitude datum point, e.g. 1000 feet (~330 m) or more, per unit time is quite substantial (having thermalled my way up in them in a hang glider for about 8 years). Just watch pelicans, hawks and eagles closely and you will get some idea of the typical diameters of these columns.

          To get such a volume, and still have 'latent heat and convective transfer effected already only slightly above the surface', requires the notion be maintained that this volume originated as a relatively thin 'skin' of air spread out over quite a substantial area, which then decided to all rush into a much smaller area before heading off upwards due to buoyancy.

          I doubt it.

          (2) I have 'triggered' off masses of still, near-surface warm air on ground approach, i.e. thermals, from heights as high as about 30 feet, i.e. 10 m. The pilots of light planes on final in training circuits, i.e. having flown through the same space a couple of minutes before, will frequently experience the same effect. This is one of the reasons you are forced to go round and round practising landings. Trigger a thermal while on final (it's not just coincidence, you know) and not be prepared for it, and you could dig a wing in, utterly pretzelling the plane, yourself and your loudly expostulating instructor, all in just 'the twinkling of an eye'.

          What is ‘given’ here?

          • http://www.ecoengineers.com Steve Short

            I guess what I am saying is that just as TOA is a fuzzy interface, as Nick points out, so is BOA. I don't think anything is really 'given' about what happens at the BOA interface.

            Another example: I used to have to pump groundwater from shallow boreholes on an alluvial field in Switzerland in the middle of winter over extended periods (to pass the groundwater through special filtration and chemisorption equipment), such that I had to sleep in a uni van on the field through the night and camp there for about a week. The ground was covered in snow and it was bitterly cold by day and especially by night, usually with strong katabatic winds down the valley. The only way I could warm my hands just enough to work the valves, pumps etc. (both day and night) was to take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground, which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)!

            In other circumstances, as I note above, the actual discontinuity could be 10 m above ground level. This is simply because, even though the ground is so warm it has heated a thicker layer of air, for reasons of lack of wind and viscous drag that airmass stays 'attached' to the ground until some little disturbance, e.g. a dog, a sheep, a car, a hang glider or a landing plane, causes just enough turbulence to lead to a chaotic breakdown of that attachment.

          • Anonymous

            Steve, Gliding does give one a greater appreciation of thermals. When paragliding in SoCal during high-pressure periods, the thermals are so narrow and punchy you can hear them rushing nearby but not feel them, or one can hit one on a wing, lean into it and catch the elevator, beep, beep, beep. So yes, the hot air punches through at a sharp rock or other trigger and drains a larger area. Then in a low pressure, of course, the air rises in huge buoyant masses (and sinks over large areas too).

          • http://www.ecoengineers.com Steve Short

            Sounds like the Owens Valley to me. Got my ticket on the Whites and Inyos. You’re dead right.

          • Anonymous

            Never flew there. Mainly Black Mt and La Jolla in San Diego, La Salinas in Baja, and up at Marshall’s in San Bernadino.

          • http://www.ecoengineers.com Steve Short

            Are you still paragliding? I've done the hills near the Salton Sea (forget their name), Torrey Pines, i.e. most of the wussy coastal Cal sites. But I've also done weird stuff like Popocatepetl (= smoking volcano) and other Mexico freak-out sites. I gave up hang gliding in the late 90s after a decade or so, but every time I see those sweet paragliders an itch to start again with them gets very strong (even though I'm now 60).

          • Anonymous

            No – married with kids. You’re a Hard-Man Steve.

  • Nick Stokes

    Alex, I haven't read much of the King paper – it's long, and the system of having to download one page at a time doesn't help. But I've had more thoughts on where this all fits in.

    It goes back, in a way, to my first objection: that FM has led us down a blind alley. None of this theory is actually useful for, or used by, climate modelling. In fact, most climate people would be unfamiliar with it. And the Orstriker quote suggests that the planetary people too don't think it is much use for Earth.

    The whole math theory in which H is related to B as a function of tau is a bit like the ray theory of light in microscopy. On a macro scale, when the geometry is on a scale many times the wavelength of light, it works: you can see sharp edges. But as you get down to the wavelength of light, ray theory gradually breaks down, and microscopy does not work; everything gets fuzzy. This is not sudden or absolute, but it is a problem.

    With this radiative theory, the corresponding length is unit optical density. On that scale, radiation travels significant distances before the interactions on which the theory is based.

    We have discussed the boundary condition at TOA. A natural question is: where is TOA in this grey-body model? We know there is no sharp line. In fact it is a distributed region, over a distance of order OD 1. That means that even the ground has some TOA connection. The B corresponding to H can't be attributed to a point in space – it's an average over a big region.

    In a way, this is the problem M is trying to address by using an alternative boundary condition. But that has the same problem. In fact, the problem is in the use of the equation itself on this scale. There is no simple fix for the failure of a light microscope at submicron ranges, and it's the same here.

    The problem is much exacerbated by the use of the grey-body approximation (which no climate modeller would use). This approximation spans a range from the IR window, with no absorption at all, to the CO2 and water peaks, where IR is absorbed within a few metres. That range is badly summarised by an average of a few km.

    If you take out the window, the remaining IR has a slightly more meaningful, and larger, average OD, and the “ray theory” – the radiative transfer equations – would work better. So the concept may still have some use. But a grey-body approximation makes it useless for Earth.

    I might add that this fuzziness also makes the notion of a “discontinuity” meaningless.
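    Nick's averaging point can be sketched numerically. The toy numbers below are purely illustrative (a two-band spectrum: a transparent window plus strongly absorbing bands, not real absorption data), but they show why a single grey tau misleads: transmission falls exponentially with optical depth, so the flux-weighted mean transmission is dominated by the window and bears no resemblance to the transmission implied by the mean tau.

```python
import numpy as np

# Toy two-band LW spectrum: an IR "window" (tau ~ 0) and strongly
# absorbing CO2/H2O bands (tau >> 1), weighted by the fraction of
# surface emission in each band. All numbers are illustrative only.
taus    = np.array([0.05, 10.0])   # optical depths: window / bands
weights = np.array([0.20, 0.80])   # fraction of emitted flux in each

# Flux-weighted mean transmission vs. transmission at the mean tau
mean_transmission    = np.sum(weights * np.exp(-taus))   # ~0.19
transmission_of_mean = np.exp(-np.sum(weights * taus))   # ~3e-4

print(mean_transmission, transmission_of_mean)
```

    Because exp(-tau) is convex, the average of the transmissions is far larger than the transmission of the average: the window leaks radiation that no single "average OD of a few km" can represent.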

  • http://www.ecoengineers.com Steve Short

    Nice discussion Nick. I benefited from it. Thanks for that.

    We have an analogous state of 'fuzziness' when approaching the (charged) surface of an amphoteric solid in contact with an ionic solution. This has led to numerous theories/viewpoints of the chemical and electrostatic nature of the 'discontinuity' between solid and liquid (which is critical to issues of reactivity and (chemo-)thermodynamic equilibrium).

    PS: I slowly read through the King paper last night. Sheesh!

    I might add I always suspected M's approach of hanging an entire 'theory' on a fixed tau (putting aside all the other sub-theories) was nonsense, and now the discovery that M's 'magic number' tau is not even a true LW IR absorption tau anyway (except perhaps in a pure clear sky Earth with no convection), but from then on must surreptitiously include TOA-escaping emission from the tops of clouds as well, pretty much tipped M Theory into the dustbin for me. I find it hard to believe that M did that unconsciously, i.e. as a genuine error, given his background and the mass of literature. What a strange person!

  • davids99us

    Agreed. With a simple equation like this, whether you apply a boundary condition at TOA or BOA seems to be a matter of assumptions. In M the TOA condition is met 'somehow', just as the BOA discontinuity is dealt with 'somehow'. It may be, in M's case, that the constraint is not actually at the solid surface, but slightly above, where latent and convective transfer has already been effected. Interesting that you see M as a blind alley. O&O's paper is similar in its grey-body approximation; though interesting and natural, it doesn't seem to have been elaborated (incorporating latitudinal variations, empirical validation, etc.). I also see close similarities between the O&O maximum entropy condition and M's problematic eqn. 7. They might be more similar than they look at first.

  • http://www.ecoengineers.com Steve Short

    “FM's model, despite the fact that he doesn't quantify convection or latent heat in the paper because it's beyond the scope of the paper, is in fact a radiation/convection model.”

    Hi Jan. You must have extremely strong teeth (or a titanium denture)! To see you masticating that statement after all this time is quite a stunner, especially the little gem therein… “because it's beyond the scope of the paper” indeed!!!!

    What's that you were saying about flat earths and the like? Choke!

  • http://www.ecoengineers.com Steve Short

    Actually, David, O&O96 has been elaborated in spades! The body of MEP work in climate theory (and in other fields) is literally booming! I strongly suggest you get the Red Book and then follow up on the work of Kleidon, Dewar, Pauluis et al. and the big groups at e.g. the Max Planck Institute for Meteorology in Hamburg and the Max Planck Institute for Biogeochemistry in Jena, where Axel Kleidon is now. David Catling at the University of Washington also has a group looking at biotic EP. Graham Farquhar at ANU is into it (sometimes with Paltridge).

    Check out: http://www.bgc-jena.mpg.de/bgc-theory/index.php

  • Nick Stokes

    Steve, totally OT, but I was once very interested in the double-layer theory you mention. Paper here.

  • http://www.ecoengineers.com Steve Short

    Here's a real gem:

    A Kleidon, K Fraedrich, E Kirk and F Lunkeit, 2006. Maximum entropy production and the strength of boundary layer exchange in an atmospheric general circulation model. Geophysical Research Letters, 33, L06706, doi: 10.1029/2005GL025373.

    “The difference in climate sensitivities of tropical and polar regions is at a minimum at a climatic state of MEP.”

    Look carefully and you will see this means, in effect, that any negative forcing at the tropics is a 'bonus' which has to result in an MEP-maximized meridional energy shift, and hence is missing from any averaged-out pseudo-vertical model like M Theory.

    This takes us all the way back to my “missing 20 – 25 W/m^2” from M Theory, which I posited occurred via lateral meridional energy/entropy flow – leading Jan to want to do a “burn him at the stake” job on me. That was of course before Nick and I realised M's tau simply wasn't a real LW IR tau at all (for a global all sky model) and we could account for the missing 20 – 25 W/m^2 in that way.

    Ah, what a tangled web we weave, when first we venture to deceive.

  • Nick Stokes

    David, by blind alley I meant specifically the Milne-type radiative transfer theory for optically thick atmospheres.

  • http://www.ecoengineers.com Steve Short

    Also totally OT. Cripes Nick, 1976 – talk about déjà vu – you must be really, really old! I didn't get into that stuff until going to ANSTO in '82 (after a character-building, de-hippiefying decade in industry). I'm more your grubby experimentalist (chemisorption studies, EXAFS of wet surfaces etc.). Jim Davis of Triple Layer fame is still a close personal friend – I was quaffing Aussie reds with him and Linda only about a month ago.

  • Jan Pompe

    David: “That may be in M's case, that the constraint is not actually at the solid surface, but slightly above, where latent and convection transfer has been effected already.”

    The fact that the surface temperature is measured 1.5 – 2 m above the surface, and that complete optical occlusion doesn't occur for several metres more, makes that a given.

  • Jan Pompe

    Steve: “Hi Jan. You must have extremely strong teeth (or a titanium denture)!”

    Yes, they are quite strong, but I can assure you they are quite natural. On the other hand, if yours are falling out you might be ready for a visit by ACAT. (Your partner will know what that is.)

  • Nick Stokes

    Well, I prefer to think that then I was really really young. But gosh, it's a small world. I used to work with those people at ANSTO too. Gary Pantelis, Paul Brown. Mostly on the early stages of Sulfidox.

  • http://www.ecoengineers.com Steve Short

    I have big problems with this view. I think you are ignoring the demonstrable real-world competition between the viscosity and buoyancy of air.

    (1) The volume of warm rising air in a thermal passing per unit time through any arbitrary low-altitude datum point, e.g. 1000 feet (~330 m) or more, is quite substantial (having thermalled my way up in them in a hang glider for about 8 years). Just watch pelicans, hawks and eagles closely and you will get some idea of the typical diameters of these columns. To get such a volume and still have 'latent heat and convective transfer effected already only slightly above the surface' requires the notion be maintained that this volume originated as a relatively thin 'skin' of air spread out over quite a substantial area, which then decided to all rush into a much smaller area before heading off upwards due to buoyancy? I doubt it.

    (2) I have 'triggered' off masses of still, near-surface warm air on ground approach, i.e. thermals, from heights of up to about 30 feet, i.e. 10 m. The pilots of light planes on final in training circuits, i.e. having flown through the same space a couple of minutes before, will frequently experience the same effect. This is one of the reasons you are forced to go round and round practising landings. Trigger a thermal while on final (it's not just coincidence, you know) and not be prepared for it, and you could dig a wing in, utterly pretzelling the plane, yourself and a loudly expostulating instructor, all in just 'the twinkling of an eye'.

    What is 'given' here?

  • http://www.ecoengineers.com Steve Short

    I guess what I am saying is that just as TOA is a fuzzy interface, as Nick points out, so is BOA. I don't think anything is really 'given' about what happens at the BOA interface.

    Another example: I used to have to pump groundwater from shallow boreholes on an alluvial field in Switzerland in the middle of winter over extended periods (to pass the groundwater through special filtration and chemisorption equipment), such that I had to sleep in a uni van on the field through the night and camp there for about a week. The ground was covered in snow and it was bitterly cold by day and especially by night, usually with strong katabatic winds down the valley. The only way I could warm my hands just enough to work the valves, pumps etc. (both day and night) was to take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground, which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)!

    In other circumstances, as I note above, the actual discontinuity could be 10 m above ground level. This is simply because, even though the ground is so warm that it has heated a thicker layer of air, for lack of wind and because of viscous drag that airmass is still 'attached' to the ground until some little disturbance, e.g. a dog, a sheep, a car, a hang glider or a landing plane, causes just enough turbulence to lead to a chaotic breakdown of that attachment.

  • David Stockwell

    Steve, gliding does give one a greater appreciation of thermals. When paragliding in SoCal during high-pressure periods, the thermals are so narrow and punchy you can hear them rushing nearby but not feel them, or one can hit one on a wing, lean into it and catch the elevator – beep, beep, beep. So yes, the hot air punches through at a sharp rock or other trigger and drains a larger area. Then in a low pressure, of course, the air rises in huge buoyant masses (and sinks over large areas too).

  • http://www.ecoengineers.com Steve Short

    Sounds like the Owens Valley to me. Got my ticket on the Whites and Inyos. You're dead right.

  • davids99us

    Never flew there. Mainly Black Mt and La Jolla in San Diego, La Salinas in Baja, and up at Marshall's in San Bernardino.

  • Christopher Game

    The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.

    But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff’s Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?

    The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.

    Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming ‘photon’ triggers the release of an identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.
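    The Einstein-coefficient bookkeeping sketched above is standard textbook material (e.g. Loudon) and can be written compactly. For a two-level species with lower and upper populations $N_1, N_2$, degeneracies $g_1, g_2$, in a radiation field of spectral energy density $\rho(\nu)$:

```latex
\frac{dN_2}{dt} = B_{12}\,\rho(\nu)\,N_1 - B_{21}\,\rho(\nu)\,N_2 - A_{21}\,N_2,
\qquad g_1 B_{12} = g_2 B_{21},
\qquad A_{21} = \frac{8\pi h \nu^3}{c^3}\,B_{21}.
```

    Net absorption is proportional to $B_{12}N_1 - B_{21}N_2$, so it turns negative under population inversion ($N_2/g_2 > N_1/g_1$, the lasing condition), while the spontaneous term $A_{21}N_2$ is always positive – exactly the asymmetry between absorptivity and emissivity described above.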

    The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff’s law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.

    Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 ‘The Quantum Theory of Light’ 3rd edition, Oxford University Press.

    There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. The other is Miskolczi’s rather cavalier citation of the Kirchhoff law.

    I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.

    But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff’s Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)

    Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperatures at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface; there may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. The Fourier heat diffusion law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media, meaning that there is a fully Stefan’s-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero, and the heat is transferred only by diffusion according to the Fourier law that depends on the temperature gradient, a quantity not given by the radiant intensity at a point, but requiring a spatial interval for its definition.
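    The two transport statements in this paragraph can be written out. Inside an opaque medium at local temperature $T$, the specific intensity is isotropic at the Planck level, so the radiative flux vector integrates to zero over directions, and heat moves only by Fourier conduction:

```latex
\mathbf{q}_{\mathrm{rad}} = \int_{4\pi} I(\hat{\mathbf{n}})\,\hat{\mathbf{n}}\,\mathrm{d}\Omega = \mathbf{0}
\quad\text{(isotropic } I\text{)},
\qquad
\mathbf{q}_{\mathrm{cond}} = -k\,\nabla T .
```

    As the comment notes, $\nabla T$ requires a spatial interval for its definition; the pointwise radiant intensity alone says nothing about the direction of heat flow.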

    The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.

    What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometimes invoked. Presumably there may be some small departures occasionally when the conditions of supply and distribution of energy are so extreme that Kirchhoff’s law does not apply locally on those occasions. Even less often perhaps the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied and the practically-thermal condition of energy supply and distribution will apply and, for the relevant wavenumbers, the Kirchhoff law will apply.

    We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.

    For window wavenumbers in clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.

    Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff’s hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi’s HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.
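    The near-cancellation described here can be illustrated with a crude Stefan–Boltzmann estimate. Everything below is an assumed placeholder – the window fraction and the two window-region optical depths especially – not HARTCODE output; the point is only that the lower cloud-top temperature and the thinner, drier overlying air pull in opposite directions and can roughly offset.

```python
from math import exp

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

# Illustrative placeholder values, not Miskolczi's numbers:
T_surface  = 288.0     # K, mean land-sea surface temperature
lapse_rate = 6.5e-3    # K/m, mean tropospheric lapse rate
z_cloud    = 1.8e3     # m, opaque cloud-top height from the comment
T_cloudtop = T_surface - lapse_rate * z_cloud     # ~276 K

f_window    = 0.16     # assumed fraction of emission in the IR window
tau_surface = 0.35     # assumed window tau for the full, humid column
tau_cloud   = 0.10     # assumed window tau for the drier air above cloud

St_from_surface = f_window * SIGMA * T_surface**4  * exp(-tau_surface)
St_from_clouds  = f_window * SIGMA * T_cloudtop**4 * exp(-tau_cloud)

print(St_from_surface, St_from_clouds)
```

    With these assumed inputs the cloud-top value comes out slightly above the surface value, in the direction the comment claims; whether the cancellation really holds is of course a matter for the detailed spectral calculation.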

    This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi’s TIGR data sample.

  • Christopher Game

    The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff's Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming 'photon' triggers the release of identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. 
The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff's law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 'The Quantum Theory of Light' 3rd edition, Oxford University Press.There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. 
The other is Miskolczi's rather cavalier citation of the Kirchhoff law.I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff's Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperatures at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface; there may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. 
The Fourier heat diffusion law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media, meaning that there is a fully Stefan's-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero, and the heat is transferred only by diffusion according to the Fourier law that depends on the temperature gradient, a quantity not given by the radiant intensity at a point, but requiring a spatial interval for its definition.The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometime invoked. Presumably there may be some small departures occasionally when the conditions of supply and distribution of energy are so extreme that Kirchhoff's law does not apply locally on those occasions. 
Even less often perhaps the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied and the practically-thermal condition of energy supply and distribution will apply and, for the relevant wavenumbers, the Kirchhoff law will apply.We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.For window wavenumbers in
    clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff's hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi's HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi's TIGR data sample.

  • Christopher Game

    The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly exactly on it.

    But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly taken without meticulous regard to its precise range of applicability. Originally Kirchhoff’s Law was stated for the condition that the energy supply maintains thermodynamic equilibrium. But it is now known that the law applies more widely. But just how much more widely?

    The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement of the chemical constituents matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Thus the difference between a powdery or spicule-textured surface and a polished one.

    Einstein 1917 originated the idea that the absorptivity is the sum of an obvious kind of absorptivity in which a chemical species is simply lifted to a higher energy level, and a very non-obvious kind of absorptivity, indeed negative absorptivity, in which an incoming ‘photon’ triggers the release of identical copy of itself in the same direction. This is sometimes called stimulated emission and Einstein needed it to make sense of the Planck law. It is governed not by the availability of the low energy form of the relevant molecules, but by the availability of their high energy form. On the other hand, empirically detectable, or spontaneous, emissivity is governed only by the availability of the high energy form of the relevant molecules. The absorptivity and emissivity are therefore governed by different dynamics and can thus be expected to behave differently depending on how the energy is supplied and distributed to the medium.

  • Christopher Game

    The Kirchhoff radiative emissivity-absorptivity law is not strictly exactly to the point here. We are talking near it, but not strictly on it. But let us note the Kirchhoff law anyway, just for interest. It is often cavalierly invoked without meticulous regard to its precise range of applicability. Originally, Kirchhoff's law was stated for the condition that the energy supply maintains thermodynamic equilibrium. It is now known that the law applies more widely. But just how much more widely?

    The absorptivity and emissivity of a medium are determined by its chemical constitution, by the physical geometrical arrangement of the chemical constituents, and by the way that energy is supplied to the medium to support the emission and govern the absorption. The physical geometrical arrangement matters because it affects how the supplied energy is transported and distributed within the medium, and how the chemical constituents are exposed to the contiguous medium. Hence the difference between a powdery or spicule-textured surface and a polished one.

  • Christopher Game

    The full meaning of this was not really widely and fully understood until conditions of energy supply to a medium were varied experimentally, and the negative kind of absorption was demonstrated in the laser. Under the conditions of energy supply that support lasing, the absorptivity is negative. The emissivity is always positive, and so we have a clear example in which Kirchhoff’s law does not apply because of the conditions of supply and distribution of energy to the medium. Kirchhoff (1858, 1860 in English) himself, in stating the conditions for his law to hold, was careful to exclude phosphorescence and fluorescence, but I feel sure he would have excluded lasing as well if he had known about it. But when we use his law we have the duty to specify the conditions of energy supply and distribution, to help justify our use of the law.

    Of course, many conditions of energy supply and distribution are near enough to those of thermodynamic equilibrium. This happens so often that people forget that they need to be specified, and that outside them, the law can be very far from applicability. People often implicitly assume that the thermodynamic equilibrium condition is sufficiently mimicked that they do not need to put on notice that it is being used. This is carefully explained in Mihalas and Mihalas 1984 at section 84 starting on page 386. Also Hottel and Sarofim 1967 gives a slight account of it. The Einstein A and B coefficient theory is set out for example in section 1.5 et seq. of R. Loudon 2000 ‘The Quantum Theory of Light’ 3rd edition, Oxford University Press.
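[Ed.: as a back-of-envelope check on the scale of stimulated emission under near-thermal conditions, the Einstein coefficient relations give the equilibrium ratio of stimulated to spontaneous emission as 1/(exp(hν/kT) − 1). A minimal sketch; the choice of the 15 μm band and a 288 K temperature is illustrative, not taken from the comment above:]

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def stimulated_over_spontaneous(wavelength_m, temp_k):
    """Equilibrium ratio of stimulated to spontaneous emission rates,
    B21*rho / A21 = 1/(exp(h*nu/(k*T)) - 1), from the Einstein relations."""
    x = H * C / (wavelength_m * K * temp_k)  # h*nu / (k*T)
    return 1.0 / math.expm1(x)

# Illustrative: the 15 micron CO2 band at a 288 K surface temperature
print(f"{stimulated_over_spontaneous(15e-6, 288.0):.3f}")  # ~0.037
```

For terrestrial infrared temperatures the ratio is only a few percent, which is one way of seeing why near-thermal energy supply does not push the atmosphere toward laser-like behaviour.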

  • Christopher Game

    There are two things to think about in our present concern. One is the empirical data equality, Aa = Ed, regardless of how we might or might not explain it by some theory. The other is Miskolczi’s rather cavalier citation of the Kirchhoff law.

    I think that the empirical data equality deserves to be examined in its own right, without regard to any putative theoretical explanation. That is not the subject of this post.

    But here I would like to think a bit about the relevant physical theory. I will not directly focus on Kirchhoff’s Law, because I just want to focus on the presently relevant physics. The Aa = Ed formula is in question. It refers to non-window wavenumbers. This means that for the relevant wavenumbers, the atmosphere is entirely opaque. An opaque object has an emissivity, and for the atmosphere for these wavenumbers, the emissivity is very little less than 1, say for argumentative definiteness without prejudice, 0.98 if you like. The land-sea surface also at these wavenumbers has an emissivity not too far from 0.98. (The situation is of course entirely different for the window wavenumbers, for which the atmosphere is transparent, and the emissivity will be far far less than 1, indeed nearly zero. But that is not immediately here relevant to the non-window wavenumbers that presently concern us.)

    Then we have two opaque media in contact, the land-sea surface and the atmosphere. If they are at the same temperature at the contact interface (recalling that they have the same emissivity), then they will exchange thermal radiation with a net radiative flux density vector of zero at the contact interface. There may be a temperature gradient that crosses the interface with no discontinuity, and then there can still be heat transfer by conduction according to the Fourier heat diffusion law. That law admits that the conduction of heat cannot be measured with exclusion of the intrinsic thermal radiation of the conductive media: there is a fully Stefan’s-law thermal level of radiative specific intensity throughout the opaque media, but the radiative heat transport vector, found by integrating that radiative intensity over the sphere, is zero. The heat is then transferred only by diffusion according to the Fourier law, which depends on the temperature gradient, a quantity not given by the radiant intensity at a point but requiring a spatial interval for its definition.
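[Ed.: the zero-net-flux claim can be made concrete with the standard exchange formula for two diffuse gray surfaces. A minimal sketch, assuming an infinite parallel-plate geometry (an idealization introduced here, not part of the argument above) and the 0.98 emissivities used for definiteness:]

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_exchange(t1_k, t2_k, eps1=0.98, eps2=0.98):
    """Net radiative flux (W/m^2) from surface 1 to surface 2 for two
    infinite parallel diffuse gray surfaces."""
    return SIGMA * (t1_k**4 - t2_k**4) / (1.0 / eps1 + 1.0 / eps2 - 1.0)

print(net_exchange(288.0, 288.0))  # equal interface temperatures: zero net flux
print(net_exchange(288.5, 288.0))  # a 0.5 K discontinuity: only a few W/m^2
```

At a common interface temperature the net radiative exchange vanishes identically, leaving conduction (Fourier's law) to carry whatever heat crosses the interface.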

    The argument can then be put that the atmosphere in contact with and near the land-sea surface is in various states of motion. So, for that matter, is the sea, on the condensed medium side of the interface. According to the above physics, this will affect the emissivity and absorptivity: they are governed by the conditions of supply and distribution of energy to the media as noted above. Will these various states of motion be enough to take us away from the operation of the Kirchhoff law? Perhaps the atmosphere is acting like a laser? I think not. We have indeed already accepted that the emissivity is close to 0.98 on each side of the interface.

    • Nick Stokes

      Christopher,
      You are using an inadequate model here. It reduces the spectrum to a transparent window and a fully opaque region. And then it is true that Aa=Ed. But the real spectrum is well known, and it is different in important ways.
      I discussed real spectra here at CA. Scroll down a bit to see Figs 8.2a and 8.2b. You should study them carefully, comparing with the BB curves.
      Fig 8.2a shows the upgoing spectrum at 20km – TOA. The window section, near 11μ, emerges as if at ground temp (268K). But there is radiation at all frequencies, most at an apparent lower temp – down to about 220K. And this is an important part of outgoing power.
      Your simple model cannot explain this. The window is OK, but there would be no power available to radiate at other frequencies, because Aa=Ed implies no transmission.
      Fig 8.2b shows downwelling IR. Yes, the window near 11μ has almost zero (but not quite). And the region near 15μ has radiance as if from air at ground temp. But there is an important intermediate region, as if emitted from colder regions (which it is).
      Its importance relates to the above contradiction. That intermediate region conveys the power that is emitted at all non-window frequencies, including 15μ. It can do so because in that region, Ed < Aa. This discrepancy is a small part of the total but is vital and GHG-sensitive.
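[Ed.: the "as if at temperature T" readings in those figures are brightness temperatures: invert the Planck function at a given wavelength to find the blackbody temperature that would produce the observed radiance. A minimal sketch as a round-trip consistency check; the 220 K value is illustrative rather than read off Fig 8.2a:]

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / lam_m**5) / math.expm1(H * C / (lam_m * K * temp_k))

def brightness_temp(lam_m, radiance):
    """Invert B(lambda, T) for T: the apparent emission temperature."""
    return H * C / (lam_m * K * math.log1p(2.0 * H * C**2 / (lam_m**5 * radiance)))

# Round trip: radiance emitted at 220 K near 15 microns reads back as 220 K
print(round(brightness_temp(15e-6, planck(15e-6, 220.0)), 1))  # 220.0
```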

      • Christopher Game

        Nick,

        Thank you for your kind and thoughtful reply. Sad to say I am not familiar with the technique for this blog and my post was mangled in transmission. I tried posting it in four parts, but only one made the distance. The part that I am now reading on my monitor, and that I suppose you are replying to is as it were page two of my essay. Naturally without pages 1, 3, and 4, my reasoning is not ready to be examined. I will now try to post page 1 in two parts, in case length is the problem. I think my argument is valid. Noting your comments I will before posting tidy up some of the original, but I may not be able to do that till later this afternoon. I understand your comments and I think they are addressed or will be in the tidied-up post. Christopher

  • Christopher Game

    What is really at stake here is the notion that the temperatures on either side of the interface are equal in the geometrically relevant ways. Mostly I think this is so because of the effects of strong convection in the lowest atmosphere and the presence of evaporation and condensation. The idea of coarse-graining is sometimes invoked. Presumably there may be some small departures occasionally, when the conditions of supply and distribution of energy are so extreme that Kirchhoff’s law does not apply locally on those occasions. Even less often, perhaps, the conditions of supply and distribution of heat will be extreme enough to lead to large temperature discontinuities at the interface; but I think such occasions will be rare, and will likely very often be averaged away in the climate space-time scale of description. Very largely, the temperature continuity and adequate conduction-evaporation-condensation-convection condition will be satisfied, the practically-thermal condition of energy supply and distribution will apply and, for the relevant wavenumbers, the Kirchhoff law will apply.

    We do not actually need that Law in full generality. All we need is (a) temperature near-continuity and (b) emissivity near 0.98 in both media for the non-window wavenumbers.

    For window wavenumbers in clear skies, the opacity condition is not fulfilled and the situation is entirely different, but that is not immediately directly relevant to the question of explaining why Aa = Ed in clear skies.

    Downward values of the specific radiant intensity function for window wavenumbers will be very different between cloudy and clear skies, because opaque clouds radiate strongly also in the window wavenumbers, quite in contrast to the clear sky atmosphere. Under opaque 1.8 km clouds, there will be something like Kirchhoff’s hohlraum condition, and Aa = Ed will prevail. The opaque 1.8 km clouds will radiate upwards at window wavenumbers. Their temperature of upwards emission will be much lower than the land-sea surface temperature, but they are radiating upwards into a medium of much less optical density because the water vapour content of the air above them is low. The temperature and opacity reduction effects nearly cancel, but Miskolczi’s HARTCODE calculations indicate that the effective St will in the event actually be slightly greater from the cloud tops than it would be from the land-sea surface. In effect, it turns out that the St has as it were simply been translated nearly unchanged upwards from the land-sea surface to the tops of the clouds.

    This is the explanation of the maintenance of the St at a bit more than 60 W m^-2 in all sky conditions. And the explanation of why Aa = Ed overall for Miskolczi’s TIGR data sample.
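[Ed.: for readers following the tau dispute in the head post, the flux optical depth in question is defined through St = Su·exp(−tau), so tau = ln(Su/St), where Su is the surface upward LW flux (Miskolczi's B) and St the transmitted flux. A minimal sketch using the flux values quoted in this thread as illustrative inputs, not endorsed numbers:]

```python
import math

def tau_from_fluxes(s_u, s_t):
    """Flux optical depth tau = ln(Su/St), i.e. St = Su * exp(-tau)."""
    return math.log(s_u / s_t)

# Su ~396 W/m^2 with St ~61 W/m^2 gives Miskolczi's tau ~1.87 ...
print(round(tau_from_fluxes(396.0, 61.0), 2))  # 1.87
# ... while the consensus all-sky St ~40 W/m^2 (F,T&K08) would give ~2.29
print(round(tau_from_fluxes(396.0, 40.0), 2))  # 2.29
```

This is the arithmetic behind the head post's observation that B in the range 380-396 W/m^2 paired with S_T in the range 58.5-63 W/m^2 pins tau between about 1.84 and 1.87.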

  • Anonymous

    Steve and Miklos,
    I came somewhat late to the discussion but enjoyed it. Reading through M’s paper, it intrigued me that he seemed to be the first one who used finite semi-transparent boundary conditions to solve the Schwarzschild equation within the earth’s atmosphere.
    Is there any reason why nobody has ever done this? I did not find an argument yet that makes the generally applied semi-infinite boundary conditions preferable other than they are easier to solve.
    Reading this in all the books about climate change that deal with radiative transfer, I had the feeling of: Going through the motions without careful consideration. So I liked M’s new approach.
    It seems to me that the semi-transparent boundary conditions indeed provide an avenue to avoid the temperature discontinuity at the ground obtained by applying the semi-infinite boundary conditions.
    Despite all the criticism I read on the internet, isn’t that finding or application, together with an experimental test, a worthy contribution to the scientific community? I guess it would be in physics.
    By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis 2nd edition on page 3:
    “Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.”
    I thought I would cite that, since there are sites on the Internet and contributions that like to discredit Dr. Miskolczi and his paper on a similar quote. But this seems to me typical for the AGW discussion: always looking for small splinters in the other’s eye.
    Best regards
    Guenter

    • Nick Stokes

      Guenter,
      “I did not find an argument yet that makes the generally applied semi-infinite boundary conditions preferable other than they are easier to solve.”
      There is a very simple argument that you have ignored. The solution of Milne and others (which is not semi-infinite) yields a flux which matches OLR at TOA. M’s does not.

    • http://www.ecoengineers.com Steve Short

      “By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis 2nd edition on page 3:
      “Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.”
      I thought I cite that, since there are sites on the Internet and contributions that like to discredit Dr. Miskolczi and his paper on a similar quote. But this seems to me typical for the AGW discussion, always looking for small splinters in the others eye.”

      The great flaw in this argument of Guenter’s, of course, is that the fraction of LW IR leaving at TOA which Miskolczi ‘lumps’ into his S_T that is comprised of LW IR emitted from the tops of clouds (due to release of latent heat during condensation) was never ‘absorbed’ by the clouds in the first place. That fraction came from evapotranspiration (ET) from the surface, i.e. its origin is non-radiative.

      The fact that Miskolczi ‘chooses’ to add that fraction of ET which is radiated upwards to TOA from the tops of the clouds to the true S_T which is transmitted from BOA to TOA (escaping absorption along the way) is purely an idiosyncrasy of Miskolczi. Whether one considers the creation of S_T as a 2-component ‘lumped parameter’ to be justified or not, the fact remains that, depending upon the %cloud present, a significant fraction of Miskolczi’s S_T has a non-radiative, i.e. convective, origin. Thus Kirchhoff (or whatever)-type arguments are irrelevant to it.

      • Nick Stokes

        It isn’t just radiation from clouds. All of the 100 or so W/m2 making up LH and convection goes to heat the atmosphere, much of it at low levels. This heat adds to E_D and E_U, and as you say, a bit to S_T, but did not come from A_A.

        • http://www.ecoengineers.com Steve Short

          Agreed. I like your phrase ‘much of it at low levels’.

          That is why I (initially at least) set the fraction of A_A returning to BOA which contributes to E_D, and the fraction of DT returning to BOA which contributes to E_D, to 0.625 (62.5%), by analogy with the fraction of ET, for a 1st-pass estimate of E_D in my (crude) little spreadsheet model.

          If one fits the (60% cloud cover) case for A_A and E_D as per T,F&K09, then the fraction actually works out to be 0.66 (66%) for A_A if the fraction of DT (a minor component anyway) stays at 0.625 (62.5%). To me this suggests slightly more of A_A returns to contribute to E_D than of ET, but it is close. This is to be expected, because most LW IR from BOA is absorbed below the mean cloud-layer level.

          What I find intriguing about this crude spreadsheet approach is that it is very hard to see how a reduction in OLR (positive forcing) can arise from the situation where %cloud is greater than the global average of ~60%.

          In this sense I see where Coho (Anthony) is coming from and tend to agree with him.

          The only possible conclusion if T&F09 are correct is that as %cloud cover rises above ~60% the fraction of latent heat which is radiated through TOA falls off dramatically in a non-linear way.

          I haven’t got T&F09 yet but if they can’t prove that point then their contention won’t get up with me.

          If we stop and think about where high %cloud cover commonly exists, it is in the equatorial band, over the gyres, and over places like the Amazon and Congo. These are all places where highly energetic cu-nim storms lift cloud right up to the tropopause, and they are characterized by high precipitation rates. I have spent a fair bit of time in the Torres, PNG, New Caledonia etc. and seen these storms for myself, both from the surface and from the air, numerous times.

          To assert that the fraction of ET which departs TOA as ET_U under such circumstances is proportionately lower than for the average cloud cover situation (temperate latitudes) is implausible to me. They are called ‘temperate’ for that very reason.

          So far, I’m with Lindzen and Spencer et al (and Coho) on this.

  • davids99us

    No – married with kids. You're a Hard-Man Steve.

  • Anonymous

    Nick,
    could you educate me: which is the radiative transfer equation that applies in this context, and how is it defined? What is the reason for choosing one boundary condition over the other?
    If you are right it would be easy to refute Miskolczi’s equation.
    I have not seen all the mathematical experts that are in climate science doing that.
    Milne’s equations I see usually used in the context of radiative transfer problems dealing with scattering.
    Especially two cases:
    Firstly the limit of a scattering problem with an embedded source and absorption that goes towards zero. The source term is zero in this case.
    Secondly, diffuse reflection problems with partly reflecting boundaries.
    Of course you are always free to choose your boundary conditions.
    I didn’t want to ignore Milne’s work and apologize.
    I just have not come across a paper or a book that used the semi-transparent boundary conditions to derive the relationship between surface air temperature, ground temperature and optical depth in the infrared region. In hindsight I found it interesting that Dr. Miskolczi did that and compared the results. This is why I was looking for an argument not to use them, since you can always apply different boundary conditions, get the solutions and compare afterwards. Moreover, it is good practice of an open-minded scientist to do so. That is what Miskolczi did. My feeling when I came across the blogs about his work was that he is being discredited for political reasons.
    But what he did is very valuable; he brought in a new perspective that is what keeps the science going.
    Can I ask a stupid question? Maybe I missed something in the discussion; perhaps you can repeat it for me: why is Miskolczi not matching OLR at TOA?
    What do you mean by the solution of Milne and others yielding a flux that matches OLR at TOA? Isn’t that a circular argument, since you prescribe the boundary condition matching OLR at TOA to get the solutions?

    Best regards
    Guenter

    • Nick Stokes

      Guenter,
      M has defined the equations. He’s solving eq (12), which is a reduced version of (11). In fact, it is just an indefinite integral of a constant.

      The bc for TOA matching is stated and applied in the discussion preceding Eq 15. This eq includes the condition, and you can recover it by putting tau=0.

      As I said above, you can also put tau=0 in eq 21, M’s eq, to see that it does not give the same condition. The first task of any model like this is to conserve energy. That means that the outgoing energy must match the energy generated. M’s does not do that. Milne’s does. M does not seem to have noticed; otherwise, at least a comment would be in order. He did not “compare afterwards”.

      Refutation? This paper was firmly rejected by a large number of reputable physics and climate journals. The referees gave their reasons, some of which FM quoted.

    • Jan Pompe

      Guenter “Maybe I missed something in the discussion, perhaps you can repeat it for me: why is Miskolczi not matching OLR at TOA?”

      Actually he is. He is not “choosing” boundary conditions; they fall out of the solution as derived in Appendix B. If you evaluate eqn 20 for tau = 0, i.e. for a transparent (or even non-existent) atmosphere, you get Bo = Bg, as we would expect. BTW, I don’t think anyone expects Bg to be the same for a transparent atmosphere as for an opaque or semi-transparent one, apart from being treated as such in the classical solution. Nick is not being quite honest (perhaps he genuinely does not know), but B(tau) really looks more like those integrals in Eqn B3 than a simple indefinite integral of a constant. From your first post I expect that you are already aware that to get from B3 to something nice and simple like Eqn 15 one needs to evaluate the integrals from 0 -> infinity.

  • cohenite

    Crikey Nick, I’m in seventh heaven reading you boffins go about your business; ignorance is bliss as they say; but a slight query; you say the Milne model incorporated into the K&T efforts has successfully matched OLR at TOA; that, as Lindzen would say, is problematic; as to the semi-infinite moniker which you object to; doesn’t the Milne model state that as extra CO2 is put into the atmosphere the tropopause rises as the stratosphere cools and contracts due to the rise in the final emission level; given this, as CO2 levels continue to rise isn’t it part of the Milne model that there is no upper limit as to how high the tropopause will rise; and given this, is it not appropriate for that to be called semi-infinite?

    On another tack, it is a bit unfair to pillory M theory because it only applies to clear-sky conditions; that is, after all, what his paper set out to do; to establish a non-semi-infinite description or semi-transparent explanation for clear-sky radiative flux. Steve’s solution for all-sky conditions is worthy of further elaboration but I can’t help but feel it extends the M clear-sky explanation into the ‘real-world'; and if Steve is right then the Milne ‘semi-infinite’ [or whatever term you want to use Nick] model is really in a pickle because Steve’s values show that, overall, water/clouds in the atmosphere are a negative feedback [or as Spencer and Braswell note, cause]; this is at loggerheads with the orthodoxy which relies on water/clouds being a positive feedback; see, for example the new paper by Trenberth and Fasullo which asserts this revolutionary effect;

    “While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor [as a feedback], this is offset to a large degree by decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”

    On the face of it that is rather amazing.

    • Nick Stokes

      Coho,
      The Milne model is in no way used for K&T, or any other climate study.

      The Milne model says nothing about CO2 or tropopauses. It’s a general planetary model, which actually has very limited applicability to Earth, for the reasons I’ve given above.

      And again, the Milne model says nothing about clouds or feedbacks. You just don’t see things for what they are.

  • cohenite

    Crikey Nick, I'm in seventh heaven reading you boffins go about your business; ignorance is bliss as they say; but a slight query; you say the Milne model incorporated into the K&T efforts has successfully matched OLR at TOA; that, as Lindzen would say, is problematic; as to the semi-infinite moniker which you object to; doesn't the Milne model state that as extra CO2 is put into the atmosphere the tropopause rises as the stratosphere cools and contracts due to the rise in the final emission level; given this, as CO2 levels continue to rise isn't it part of the Milne model that there is no upper limit as to how high the tropopause will rise; and given this, is it not appropriate for that to be called semi-infinite?On another tack, it is a bit unfair to pillory M theory because it only applies to clear-sky conditions; that is, after all, what his paper set out to do; to establish a non-semi-infinite description or semi-transparent explanation for clear-sky radiative flux. Steve's solution for all-sky conditions is worthy of further elaboration but I can't help but feel it extends the M clear-sky explanation into the 'real-world'; and if Steve is right then the Milne 'semi-infinite' [or whatever term you want to use Nick] model is really in a pickle because Steve's values show that, overall, water/clouds in the atmosphere are a negative feedback [or as Spencer and Braswell note, cause]; this is at loggerheads with the orthodoxy which relies on water/clouds being a positive feedback; see, for example the new paper by Trenberth and Fasullo which asserts this revolutionary effect;”While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor [as a feedback], this is offset to a large degree by decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”On the face of it that is rather amazing.

  • Christopher Game

    Nick,Thank you for your kind and thoughtful reply. Sad to say I am not familiar with the technique for this blog and my post was mangled in transmission. I tried posting it in four parts, but only one made the distance. The part that I am now reading on my monitor, and that I suppose you are replying to is as it were page two of my essay. Naturally without pages 1, 3, and 4, my reasoning is not ready to be examined. I will now try to post page 1 in two parts, in case length is the problem. I think my argument is valid. Noting your comments I will before posting tidy up some of the original, but I may not be able to do that till later this afternoon. I understand your comments and I think they are addressed or will be in the tidied-up post. Christopher

  • Nick Stokes

    Coho,The Milne model is in no way used for K&T, or any other climate study.The Milne model says nothing about CO2 or tropopauses. It's a general planetary model, which actually has very limited applicability to Earth, for the reasons I've given above.And again, the Milne model says nothing about clouds or feedbacks. You just don't see things for what they are.

  • http://www.ecoengineers.com Steve Short

    “By the way: reading through the books, I found the following quote about Kirchhoff’s law in Goody and Yung’s Atmospheric Radiation, Theoretical Basis 2nd edition on page 3: ‘Since clouds, ground, and atmosphere do not differ greatly in temperature, it follows from Kirchhoff’s law that emission and absorption are approximately equal to each other.’ I thought I would cite that, since there are sites on the Internet and contributions that like to discredit Dr. Miskolczi and his paper on a similar quote. But this seems to me typical for the AGW discussion, always looking for small splinters in the other’s eye.”

    The great flaw in this argument of Guenter's is that, of the LW IR leaving at TOA which Miskolczi 'lumps' into his S_T, the fraction emitted from the tops of clouds (due to release of latent heat during condensation) was never 'absorbed' by the clouds in the first place. That fraction came from evapotranspiration (ET) from the surface, i.e. its origin is non-radiative.

    The fact that Miskolczi 'chooses' to add that fraction of ET which is radiated upwards to TOA from the tops of the clouds to the true S_T which is transmitted from BOA to TOA (escaping absorption along the way) is purely an idiosyncrasy of Miskolczi's. Whether one considers the creation of S_T as a 2-component 'lumped parameter' to be justified or not, the fact remains that, depending upon the % cloud present, a significant fraction of Miskolczi's S_T has a non-radiative, i.e. convective, origin. Thus Kirchhoff-type arguments are irrelevant to it.

  • Nick Stokes

    Guenther,
    M has defined the equations. He's solving eq (12), which is a reduced version of (11). In fact, it is just an indefinite integral of a constant. The bc for TOA matching is stated and applied in the discussion preceding Eq 15. This eq includes the condition, and you can recover it by putting tau=0.

    As I said above, you can also put tau=0 in eq 21, M's eq, to see that it does not give the same condition. The first task of any model like this is to conserve energy. That means that the outgoing energy must match the energy generated. M's does not do that. Milne's does. M does not seem to have noticed; otherwise, at least a comment would be in order. He did not “compare afterwards”.

    Refutation? This paper was firmly rejected by a large number of reputable physics and climate journals. The referees gave their reasons, some of which FM quoted.
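
    Nick's conservation point can be sanity-checked numerically for the classical solution. The sketch below is my own construction, not a line from either paper: it takes the standard semi-gray Eddington source function pi*B(tau) = (F/2)(1 + 3*tau/2) with the usual surface discontinuity of F/2, feeds it through the exact formal solution for the upward flux at tau = 0, and checks that the outgoing flux recovers F for any total optical depth. (Miskolczi's eq 21 is not reproduced here.)

```python
# Sanity check: does the classical gray (Eddington/Milne-type) solution
# conserve energy, i.e. does the upward flux at TOA recover the imposed
# net flux F for any total optical depth tau*?
# Sketch only: standard semi-gray textbook results, not Miskolczi's eq 21.
from scipy.special import expn
from scipy.integrate import quad

F = 1.0  # normalised net outgoing flux the atmosphere must carry

def toa_flux(tau_star):
    # Eddington source function: pi*B(t) = (F/2) * (1 + 1.5*t)
    piB = lambda t: 0.5 * F * (1.0 + 1.5 * t)
    # Classical surface discontinuity: ground emits an extra F/2
    piBg = piB(tau_star) + 0.5 * F
    # Exact formal solution for the upward flux at tau = 0:
    #   F_up(0) = 2*piBg*E3(tau*) + 2 * Int_0^tau* piB(t)*E2(t) dt
    integral, _ = quad(lambda t: piB(t) * expn(2, t), 0.0, tau_star)
    return 2.0 * piBg * expn(3, tau_star) + 2.0 * integral

for tau_star in (0.0, 0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"tau* = {tau_star:4.1f}   F_up(0)/F = {toa_flux(tau_star)/F:.4f}")
```

    The ratio stays within a few percent of 1 at every tau* (exactly 1 at tau* = 0); the small residual is the known error of the Eddington approximation, not an energy leak. A source function that failed this check as tau* goes to 0 would show exactly the mismatch Nick describes.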

  • Nick Stokes

    It isn't just radiation from clouds. All of the 100 or so W/m2 making up LH and convection goes to heat the atmosphere, much of it at low levels. This heat adds to E_D and E_U, and as you say, a bit to S_T, but did not come from A_A.

  • http://www.ecoengineers.com Steve Short

    Agreed. I like your phrase 'much of it at low levels'. That is why I (initially at least) set the fraction of A_A returning to BOA which contributes to E_D, and the fraction of DT returning to BOA which contributes to E_D, to 0.625 (62.5%), by analogy with the fraction of ET, for a 1st-pass estimate of E_D in my (crude) little spreadsheet model. If one fits the 60% cloud cover case for A_A and E_D as per T,F&K09, then the fraction actually works out to be 0.66 (66%) for A_A if the fraction of DT (a minor component anyway) stays at 0.625 (62.5%). To me this suggests slightly more of A_A returns to contribute to E_D than of ET, but it is close. This is as to be expected, because most LW IR from BOA is absorbed below the mean cloud layer level.

    What I find intriguing about this crude spreadsheet approach is that it is very hard to see how a reduction in OLR (positive forcing) can arise from the situation where % cloud is greater than the global average of ~60%. In this sense I see where Coho (Anthony) is coming from and tend to agree with him. The only possible conclusion, if T&F09 are correct, is that as % cloud cover rises above ~60% the fraction of latent heat which is radiated through TOA falls off dramatically in a non-linear way. I haven't got T&F09 yet, but if they can't prove that point then their contention won't get up with me.

    If we stop and think about where high % cloud cover commonly exists, it is in the equatorial band, over the gyres, and over places like the Amazon and Congo. These are all places where highly energetic cu-nim storms lift cloud right up to the tropopause, and they are characterized by high precipitation rates. I have spent a fair bit of time in the Torres, PNG, New Caledonia etc. and seen these storms for myself, both from the surface and from the air, numerous times. To assert that the fraction of ET which departs TOA as ET_U under such circumstances is proportionately lower than for the average cloud cover situation (temperate latitudes) is implausible to me. They are called 'temperate' for that very reason. So far, I'm with Lindzen and Spencer et al (and Coho) on this.

  • cohenite

    Alright Nick, what atmospheric model does AGW rely on then? Does it have a discontinuity between the surface and the immediate atmosphere or BOA; and does it rely on a rising tropopause due to an increase in atmospheric CO2 levels? My point here is what atmospheric processes does AGW rely on? The goal-posts seem to be a bit fuzzy.

    Steve; wouldn’t ET_U continually be reducing [the] surface/BOA discontinuity until the feedback from the cloud build-up from increasing temperatures readjusted the equilibrium back to the pre-perturbation [through CO2 increase] state? Or am I confusing heat transfer with radiative fluxes? Even so, wouldn’t the result be the same at TOA and BOA?

    The abstract of that new Trenberth and Fasullo paper is here;

    http://www.agu.org/pubs/crossref/2009/2009GL037527.shtml

    • davids99us

      “There is an increase in net radiation absorbed, but not in ways commonly assumed.”

      Damning really. Love to read the paper. Now what came first, the clouds or the heat?

      • http://www.ecoengineers.com Steve Short

        The Abstract

        Global climate models used in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) are examined for the top‐of‐atmosphere radiation changes as carbon dioxide and other greenhouse gases build up from 1950 to 2100. There is an increase in net radiation absorbed, but not in ways commonly assumed. While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor (as a feedback), this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures. Instead the main warming from an energy budget standpoint comes from increases in absorbed solar radiation that stem directly from the decreasing cloud amounts. These findings underscore the need to ascertain the credibility of the model changes, especially insofar as changes in clouds are concerned.

        Well, this puts T&F09 firmly in the position that increasing Greenhouse should lead to a reduction in %cloud amount i.e. cloud cover should be trending downwards. While there is some evidence as noted previously that that may be the case it is controversial.

        Let me put in my 5 cents worth:

        Increased Greenhouse by MEP should increase the ~50% of EP processes, i.e. the hydrologic cycle (O&O96, many MEP papers since). Thus an increase in lower troposphere temperatures should increase ET, hence cloud cover, and also decrease upper troposphere humidity (i.e. Pauluis’ ‘dehumidifier’). The latter is observed but is also controversial. It should also increase the polewards heat/entropy flux, i.e. ocean winds increase. The latter is observed, but continental winds are also decreasing by about the same amount.

        Even if temperature at BOA initially stays the same, and hence S_U stays the same, increasing cloud above 60% should increase OLR and decreasing cloud below 60% should decrease OLR (see my little spreadsheet results above). This accords with Lindzen/Spencer/Braswell/Christy etc.

        David is correct: the chicken and egg are clouds and heat.

        Cohenite:

        “Steve; wouldn’t ET_U continually be reducing [the] surface/BOA discontinuity until the feedback from the cloud build-up from increasing temperatures readjusted the equilibrium back to the pre-perturbation [through CO2 increase] state? Or am I confusing heat transfer with radiative fluxes? Even so, wouldn’t the result be the same at TOA and BOA?”

        I think I understand what you are saying (it is not really clear). If so the answer (I think) is roughly yes. But as well as the difference between heat content per se (not heat transfer) and LW IR radiative fluxes we need to consider heat diffusion into the ocean and heat diffusion into the (solid) ground.

        Clearly there is a lot of testing yet to be done, as follows:

        (1) Is oceanic cloud cover increasing or decreasing? yes/no

        (2) Is continental cloud cover increasing or decreasing? yes/no

        (3) Is the OHC increasing (yes but controversial?) yes/no

        (4) Is oceanic ET increasing? yes/no

        (5) Is global precipitation increasing? yes/no

        (6) Is continental heat content increasing? yes/no (noting evaporation from pans is decreasing, but supposedly due to decreasing winds!). Remember continental ET is driven by warming of both the air and the ground down to about 5 m, due to its effect on plants.

        Seems to me that we either don’t have the answers to the above questions or, where we do, there are a significant number of contra-indicated trends.

        Ergo, we don’t really know comprehensively what we are talking about and/or we are looking at a system subtly adjusting itself in a number of directions i.e. there is massive homeostasis due to system dynamical complexity. I vote yes to both.

        • http://www.ecoengineers.com Steve Short

          “…..this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover…..”

          How to subtly turn a completely unproven effect into an integral part of your overall ‘paradigm’.

          Where is the (body of literature) consensual PROOF that increasing cloud cover increases the greenhouse effect?

          • davids99us

            Quite so. I thought cloudiness decreased temperatures overall.

    • Nick Stokes

      Coho,
      no, your goalposts are totally imagined. The reality is much simpler. The models are numerical. That is, they divide the atmosphere (and ocean, generally) up into a huge number of little boxes. There is turbulent flow and transport of heat, gas constituents etc. Radiation is modelled with full spectrum, and again with layers. Differential equations are thus solved.

      There is some use of global modelling of things where satisfactory DEs can’t be used. Clouds and rainfall are notable. But mostly the need for this kind of modelling is not for ready-made global DE solutions of the Milne type, but for models of what happens on a small scale, where complexity can’t be represented by just box averages. Turbulence (eddies) is the most prominent, but there are others, especially near the land/ocean surfaces.
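
      The “little boxes” idea can be miniaturised into a toy that is obviously not a GCM, but shows the layered radiative bookkeeping Nick describes: N opaque gray layers in radiative equilibrium above a surface, each emitting up and down, solved as a linear system. All numbers (the 240 W/m^2 solar input, the opaque-layer idealisation) are illustrative assumptions only.

```python
# Toy 1-D illustration of the 'little boxes' idea: N opaque gray layers
# in radiative equilibrium above a surface. Each layer emits e_i both up
# and down; the balance conditions give a linear system in the e_i.
# Not a GCM -- purely illustrative.
import numpy as np

def layer_emissions(N, S=240.0):
    """Solve the N-layer gray radiative-equilibrium budget.

    Unknowns: e_g (surface) and e_1..e_N (layers, bottom to top).
    Surface:  S + e_1 = e_g
    Layer i:  e_{i-1} + e_{i+1} = 2*e_i   (e_0 = e_g, e_{N+1} = 0)
    Returns [e_g, e_1, ..., e_N] in W/m^2.
    """
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0], A[0, 1], b[0] = 1.0, -1.0, S   # surface row: e_g - e_1 = S
    for i in range(1, N + 1):               # layer balance rows
        A[i, i] = 2.0
        A[i, i - 1] = -1.0
        if i + 1 <= N:
            A[i, i + 1] = -1.0
    return np.linalg.solve(A, b)

e = layer_emissions(3)   # 3 opaque layers, 240 W/m^2 absorbed solar
print(e)                 # surface first, then layers bottom-to-top
```

      The surface comes out at (N+1) times the input and the top layer at exactly the input, so the toy conserves energy by construction; real GCMs do the same closure over full spectra, millions of boxes, and fluid motion as well.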

  • cohenite

    Well Nick, I do like it when you talk technical to me, so what is your technical take on Trenberth’s assertion that decreased clouds decrease the greenhouse effect and by inference increased clouds increase the greenhouse effect? It would seem to diverge from the Kump and Pollard paper;

    http://www.jennifermarohasy.com/blog/archives/002914.html

    • Nick Stokes

      Coho,
      Are you referring to the Trenberth and Fasullo paper? Have you got that right? They say: “Instead the main warming from an energy budget standpoint comes from increases in absorbed solar radiation that stem directly from the decreasing cloud amounts.”

  • cohenite

    Eh? Trenberth says;

    “this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover”

    You’re a hard man to nail down.

    • Nick Stokes

      Well, OK, but I don’t have to resolve the divergence. T&F do it. Their sentence I quoted is what Kump and Pollard are saying. T&F say that, yes, in the models that effect dominates the reduction in greenhouse effect (blocking IR) by clouds.

      • http://www.ecoengineers.com Steve Short

        No Nick. You do have to resolve a divergence.

        I posted the entire abstract of T&F09 just above. It seems like you missed that altogether, viz:

        “…While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor (as a feedback), this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”

        So T&F09 don’t say what you said (above)!

        They say: a decreasing greenhouse effect results from decreasing cloud cover.

        Ergo: an increasing greenhouse effect (would) result from increasing cloud cover.

        Let’s not descend into Pompe-speak. That way madness lies (or at least an inability to get to bed ;-).

        • http://www.ecoengineers.com Steve Short

          Thanks Anthony. I love K&P BTW – exactly what I have been on about for several years (even pre K&P) – CCN from biological productivity.

          Kump and Pollard:

          The extreme warmth of particular intervals of geologic history cannot be simulated with climate models, which are constrained by the geologic proxy record to relatively modest increases in atmospheric carbon dioxide levels. Recent recognition that biological productivity controls the abundance of cloud condensation nuclei (CCN) in the unpolluted atmosphere provides a solution to this problem. Our climate simulations show that reduced biological productivity (low CCN abundance) provides a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity. If the stress of elevated temperatures did indeed suppress marine and terrestrial ecosystems during these times, this long-standing climate enigma may be solved.

          To repeat: “…..a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity.”

          Coho gotcha there, Nick.

        • Nick Stokes

          Steve,
          I don’t think I missed anything. T&F have it covered. If clouds reduce, there are two effects:
          LW effect (your quote) – GE drops, more IR emitted – cooler
          SW effect – less SW reflected, more absorbed – warmer.
          The SW effect is the K&P one, and T&F say that it is bigger. Resolved.

          • http://www.ecoengineers.com Steve Short

            Phooey.

            More biomass, more CCN. More CCN, more cloud.

            It should be getting cooler due to increasing oceanic cloud cover (over 60%):

            http://bobtisdale.blogspot.com/2008/12/ocean-cloud-cover-data.html

            SW effect – more SW reflected, less absorbed – cooler.

            More cloud = more precipitation.

            LW effect – more latent heat release, more LW escapes TOA – cooler

            Resolved

            A cool four of Empiricism beats a GCM hot flush any day.

          • Nick Stokes

            Well, Steve, I guess your Empiricism perfectly explains the cooling of the last fifty years.

          • http://www.ecoengineers.com Steve Short

            You said that. I didn’t. IMO it is a relatively facile comment which does your obvious intelligence no credit.

            I would only say that my ‘Empiricism’ certainly helps to explain the evidence for a relatively low CO2 sensitivity, which the observations of the last 50 years (e.g. the missing OHC, increasing oceanic cloud cover, decreasing continental pan evaporation) indicate is more likely to be the case.

            As I said before, even for the view that increasing cloud cover MUST increase positive forcing to stand, it also means BY DEFINITION that an increasingly smaller percentage of clouds MUST actually precipitate at higher cloud covers.

            Tell me: just how do your GCMs factor that required effect (=> reducing precipitation rate per unit cloud cover) in?

            Especially in the context that every single study shows continental plant biomass and oceanic cyanobacterial biomass (and hence lower troposphere CCN density) increases with increasing CO2.

            I know you’d really love to ‘have your cake and eat it too’, but so far, it is just not possible (as I see it).

          • Nick Stokes

            Steve,
            Yes, it’s facile, but draws attention to the fact that you’re just listing a few of the things (on one side) that determine temperature. They all point to cooling, but it has been warming. There’s a lot else (eg CO2).

            T&F rightly emphasise the different pulls. SW and clouds – warming. LW and clouds – cooling. But, as they say, it isn’t just clouds. There’s the gas GE. And then there’s your CCN story. You have to add it all up.

          • http://www.ecoengineers.com Steve Short

            Nick I haven’t got a copy of the whole T&F paper yet so I’ll have to suspend judgment.

            However, I’d make the following comments.

            If ALL clouds were removed, albedo would fall to about 15% and the amount of SW available to warm the atmosphere/surface would increase from 239 W/m^2 to about 288 W/m^2. However, the LW OLR would increase to about 266 W/m^2, compared to 238–239 at present. The net effect of complete cloud removal would therefore be an increase in net radiation of about 22 W/m^2.

            So the 60% cloud cover has a net cooling effect of about 22 W/m^2, even though (I acknowledge) the net effects of high altitude and low altitude clouds are essentially opposite.
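
            The arithmetic in the two paragraphs above reduces to three subtractions; the figures (288, 266, 239 W/m^2) are the round numbers quoted above, not observational values:

```python
# Net radiative effect of removing all clouds, using the round numbers
# from the comment above (illustrative figures, not measured values).
absorbed_sw_allsky = 239.0   # W/m^2, absorbed solar with ~60% cloud
absorbed_sw_clear  = 288.0   # W/m^2, absorbed solar at ~15% clear-sky albedo
olr_allsky         = 239.0   # W/m^2, outgoing longwave with clouds
olr_clear          = 266.0   # W/m^2, outgoing longwave without clouds

sw_gain = absorbed_sw_clear - absorbed_sw_allsky   # more SW absorbed
lw_loss = olr_clear - olr_allsky                   # more LW escapes
net     = sw_gain - lw_loss                        # net warming if clouds removed

print(f"SW +{sw_gain:.0f}, LW +{lw_loss:.0f}, net +{net:.0f} W/m^2")
# i.e. the present ~60% cloud cover exerts a net cooling of ~22 W/m^2
```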

            Again intuitively I would expect increased CO2 to increase the mean elevation of clouds (stronger convection and stronger meridional winds) hence increase the negative forcing effect (from higher clouds).

            As an incorrigible bookworm (to the despair of my good lady) it intrigues me that the annual peak in total moisture content right up through the atmosphere occurs in August/September. This is when NH ET is at its peak. Why NH only, why not two annual peaks?

            NH has more land, more anthropogenic CO2 sources, more agriculture and more N & P pollution.

          • Alex Harvey

            Hi Steve and all,

            I’ve seen a lot of discussion of GCMs here, what assumptions they contain, what assumptions they don’t contain, what they do, what they don’t do.

            For anyone interested the ECHAM5 model (and also its predecessors the ECHAM3 and ECHAM4) is documented freely and you can download a description of it here:

            http://www.mpimet.mpg.de/fileadmin/models/echam/mpi_report_349.pdf

            It makes very interesting reading, just to see all of its assumptions and approximations laid bare, and also to note how much of it is based on theory that is 10, 20, 30 years old (well it’s supposed to be “state of the art” whatever that means).

            By the way, I can’t actually see any evidence of a built-in temperature discontinuity although it’s possible you need to follow back through the references on the radiation scheme to see that.

            After reading through it I am afraid I just can’t bring myself anywhere near a place that I could take seriously the predictions of one of these programs.

          • Nick Stokes

            Alex,
            Any mention of Milne, Schwarzschild or Emden?

          • Alex Harvey

            Nick,
            As I said, I can’t see anything about a temperature discontinuity in the paper, but one would have to follow it all the way back through the references to be sure.

            I suggest you read the introduction and the section on radiation — it’s only short.

            The radiation scheme apparently is “new” (where new means 1997) and leads to this paper here:

            Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough (1997), Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave, J. Geophys. Res., 102(D14), 16,663–16,682.

            A rapid and accurate radiative transfer model (RRTM) for climate applications has been developed and the results extensively evaluated. The current version of RRTM calculates fluxes and cooling rates for the longwave spectral region (10–3000 cm−1) for an arbitrary clear atmosphere. The molecular species treated in the model are water vapor, carbon dioxide, ozone, methane, nitrous oxide, and the common halocarbons. The radiative transfer in RRTM is performed using the correlated-k method: the k distributions are attained directly from the LBLRTM line-by-line model, which connects the absorption coefficients used by RRTM to high-resolution radiance validations done with observations. Refined methods have been developed for treating bands containing gases with overlapping absorption, for the determination of values of the Planck function appropriate for use in the correlated-k approach, and for the inclusion of minor absorbing species in a band. The flux and cooling rate results of RRTM are linked to measurement through the use of LBLRTM, which has been substantially validated with observations. Validations of RRTM using LBLRTM have been performed for the midlatitude summer, tropical, midlatitude winter, subarctic winter, and four atmospheres from the Spectral Radiance Experiment campaign. On the basis of these validations the longwave accuracy of RRTM for any atmosphere is as follows: 0.6 W m−2 (relative to LBLRTM) for net flux in each band at all altitudes, with a total (10–3000 cm−1) error of less than 1.0 W m−2 at any altitude; 0.07 K d−1 for total cooling rate error in the troposphere and lower stratosphere, and 0.75 K d−1 in the upper stratosphere and above. Other comparisons have been performed on RRTM using LBLRTM to gauge its sensitivity to changes in the abundance of specific species, including the halocarbons and carbon dioxide. 
            The radiative forcing due to doubling the concentration of carbon dioxide is attained with an accuracy of 0.24 W m−2, an error of less than 5%. The speed of execution of RRTM compares favorably with that of other rapid radiation models, indicating that the model is suitable for use in general circulation models.

          • http://www.ecoengineers.com Steve Short

            This thin column stuff is getting into Monty Python territory. Talk about Occam’s Razor!!!!

          • Alex Harvey

            Yes I was hoping the software would get cleverer and cleverer as we got to the edge of the screen here — but it’s not!

            Let’s see where this one goes.

          • Alex Harvey

            Okay website, I’m determined to find your right limit!

          • Alex Harvey

            It’s here!

    • jae

      Well now. Silence. Time for a new thought?

      Since the “consensus” hypothesis on the GHE is certainly suspect, given all the relevant information (i.e., no temperature increases for 12 years and absolutely no other empirical or theoretical evidence to support said nonsense), and since the Miskolczi hypothesis has been discredited by the experts here, maybe we should go back to my simpleton hypothesis that the “greenhouse effect” is nothing more than the ability of the Planet to store heat from one day to the next. And the corollary that IR radiation doesn’t have a damn thing to do with it. It now looks like this is as good an hypothesis as any other. LOL.

  • cohenite

    Eh? Trenberth says: “this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover”. You're a hard man to nail down.

  • Nick Stokes

    Well, OK, but I don't have to resolve the divergence. T&F do it. Their sentence I quoted is what Kump and Pollard are saying. T&F say that, yes, in the models that effect dominates the reduction in greenhouse effect (blocking IR) by clouds.

  • http://www.ecoengineers.com Steve Short

    No Nick. You do have to resolve a divergence. I posted the entire abstract of T&F09 just above. It seems like you missed that altogether, viz:

    “…While there is a large increase in the greenhouse effect from increasing greenhouse gases and water vapor (as a feedback), this is offset to a large degree by a decreasing greenhouse effect from reducing cloud cover and increasing radiative emissions from higher temperatures.”

    So T&F09 don't say what you said (above)! They say: a decreasing greenhouse effect results from decreasing cloud cover. Ergo: an increasing greenhouse effect (would) result from increasing cloud cover.

    Let's not descend into Pompe-speak. That way madness lies (or at least an inability to get to bed ;-).

  • cohenite

    As I understand K&P they say the unavailability of condensation nuclei produced a drastic reduction in clouds and as a result of fewer clouds super greenhouse conditions.

    T&F say that decreased cloud both reduces greenhouse conditions and increases temperature as a result of increased absorbed solar radiation [ASR]. T&F also distinguish vapor as a +ve feedback from ASR; however, both increased vapor and increased ASR increase OLR, while increased low-level clouds do not. T&F note that observations disprove Lindzen’s Iris and that the increase in convective clouds has a +ve feedback [which would suggest that Lindzen's Iris may hold true 'if' there were a decrease in high convective cloud], although an increase in low cloud through reduced SW cloud radiative forcing has a negative feedback as temperatures rise, and also a decrease in optical depth. Oddly enough, all the models studied by T&F still show warming even if there is no change in cloud cover.

    T&F don’t refer to Dessler at all.

    • http://www.ecoengineers.com Steve Short

      Yep, you can do practically anything you like with “a fully coupled GCM” e.g.:

      Lunt et al. 2008

      “….Although the popular conception is that geoengineering can re-establish a ‘natural’ pre-industrial climate, such a scheme would itself inevitably lead to climate change, due to the different temporal and spatial forcing of increased CO2 compared to reduced solar radiation. We investigate the magnitude and nature of this climate change for the first time within a fully coupled General Circulation Model. We find significant cooling of the tropics, warming of high latitudes and related sea ice reduction, a reduction in intensity of the hydrological cycle, reduced ENSO variability, and an increase in Atlantic overturning. However, the changes are small relative to those associated with an unmitigated rise in CO2 emissions.”

      In fact I’m so mightily impressed with GCMs I’m thinking of fitting one to my Hilux 4WD to boost the turbo….

      • http://www.ecoengineers.com Steve Short

        Question (3) above – my take:

        Rising OHC:
        Levitus et al 2009

        Falling OHC:
        Willis 2008
        Ishii & Kimoto 2009
        Loehle 2009

        Any others?

    • http://www.ecoengineers.com/ Steve Short

      I have been hitting the literature hard on cloud effects. This is where I have got my little spreadsheet model to (see below).

      I have partly stuck with Miskolczi parameter terminology only because this is very, very familiar to most of us here (possibly not a lurking Christopher Game from over at JM though, haha ;-). The remaining terminology I’ve explained before and/or is self explanatory.

      Each (%cloud cover) row energy balances (I think) and all parameters are interlinked by relatively simple and empirically justifiable algorithms. Most major parameter values can be found somewhere (or a value very close) in the mainstream literature – going all the way back to Hartmann, 1994.

      The ‘Virial Rule’ does reasonably well but Kirchhoff falls over. S_U/OLR ranges from about 1.5 (clear sky) to about 1.75 (100% cloud) but seems to be close to 1.66 around 60% cloud cover. Interestingly, the Miskolczi so-called ‘tau’ decreases with increasing cloud! Miskolczi’s ‘magic tau’ (= ET_U + real Tau = ET_U + rTau) has a value of 1.87 somewhere around 10±10% cloud cover, i.e. it approximates clear sky (as seems obvious in retrospect) but is a useless parameter in every respect.

      I think a little spreadsheet like this has considerable value for getting our heads clear on what affects what, why and roughly by how much – particularly in respect of the all-important clouds.

      %Cloud,Albedo(A),Fo,Fo(1-A),F,rTau,S_T,ET,ET_U,DT,rE_U,A_A,E_D,oE_U,S_U/oE_U,A_A/E_D,S_U,OLR,S_U/OLR,M-Tau
      100,0.40,341,205,67,2.70,26,133,50,1,145,360,353,195,1.98,1.02,386,221,1.75,1.63
      80,0.35,341,222,72,2.47,33,107,40,9,157,358,343,197,1.98,1.04,391,230,1.70,1.68
      60,0.30,341,239,78,2.29,40,80,30,17,169,356,333,199,1.99,1.07,396,239,1.66,1.73
      40,0.25,341,255,83,2.14,47,53,20,24,181,354,322,201,2.00,1.10,401,248,1.62,1.79
      20,0.20,341,272,89,2.02,54,27,10,33,193,352,312,203,2.00,1.13,406,257,1.58,1.85
      0,0.15,341,288,94,1.91,61,0,0,41,205,349,301,205,2.00,1.16,410,266,1.54,1.91

  • http://www.ecoengineers.com Steve Short

    Thanks Anthony. I love K&P BTW – exactly what I have been on about for several years (even pre K&P) – CCN from biological productivity.

    Kump and Pollard: “The extreme warmth of particular intervals of geologic history cannot be simulated with climate models, which are constrained by the geologic proxy record to relatively modest increases in atmospheric carbon dioxide levels. Recent recognition that biological productivity controls the abundance of cloud condensation nuclei (CCN) in the unpolluted atmosphere provides a solution to this problem. Our climate simulations show that reduced biological productivity (low CCN abundance) provides a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity. If the stress of elevated temperatures did indeed suppress marine and terrestrial ecosystems during these times, this long-standing climate enigma may be solved.”

    To repeat: “…a substantial amplification of CO2-induced warming by reducing cloud lifetimes and reflectivity.”

    Coho gotcha there, Nick.

  • Nick Stokes

    Steve, I don't think I missed anything. T&F have it covered. If clouds reduce, there are two effects:

    LW effect (your quote) – GE drops, more IR emitted – cooler.
    SW effect – less SW reflected, more absorbed – warmer.

    The SW effect is the K&P one, and T&F say that it is bigger. Resolved.

  • http://www.ecoengineers.com Steve Short

    Question (1) above. Bob Tisdale has done a nice job (as usual) on Oceanic Cloud Cover: http://bobtisdale.blogspot.com/2008/12/ocean-cl… Rising.

  • http://www.ecoengineers.com Steve Short

    Phooey.

    More biomass, more CCN. More CCN, more cloud. It should be getting cooler due to increasing oceanic cloud cover (over 60%): http://bobtisdale.blogspot.com/2008/12/ocean-cl…

    SW effect – more SW reflected, less absorbed – cooler.
    More cloud = more precipitation.
    LW effect – more latent heat release, more LW escapes TOA – cooler.

    Resolved. A cool four of Empiricism beats a GCM hot flush any day.

  • Nick Stokes

    Well, Steve, I guess your Empiricism perfectly explains the cooling of the last fifty years.

  • http://www.ecoengineers.com Steve Short

    You said that. I didn't. IMO it is a relatively facile comment which does your obvious intelligence no credit.

    I would only say that my 'Empiricism' certainly helps to explain the evidence for a relatively low CO2 sensitivity, which the observations of the last 50 years (e.g. a missing OHC, an increasing oceanic cloud cover, a decreasing continental pan evaporation, etc.) indicate is more likely to be the case.

    As I said before, even for the view that increasing cloud cover MUST increase positive forcing to stand, it also means BY DEFINITION that an increasingly smaller percentage of clouds MUST actually precipitate at higher cloud covers.

    Tell me: just how do your GCMs factor that required effect (=> reducing precipitation rate per unit cloud cover) in? Especially in the context that every single study shows continental plant biomass and oceanic cyanobacterial biomass (and hence lower troposphere CCN density) increase with increasing CO2.

    I know you'd really love to 'have your cake and eat it too', but so far, it is just not possible (as I see it).

  • Nick Stokes

    Steve, yes, it's facile, but it draws attention to the fact that you're just listing a few of the things (on one side) that determine temperature. They all point to cooling, but it has been warming. There's a lot else (e.g. CO2).

    T&F rightly emphasise the different pulls. SW and clouds – warming. LW and clouds – cooling. But, as they say, it isn't just clouds. There's the gas GE. And then there's your CCN story. You have to add it all up.

  • http://www.ecoengineers.com/ Steve Short

    Oops, make that:

    Miskolczi tau (M-tau) = -ln((ET_U + S_T)/S_U)
    Real tau (rTau) = -ln(S_T/S_U)
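    With these definitions, the rTau and M-Tau columns of the spreadsheet table above can be recomputed directly from the S_T, ET_U and S_U columns. A minimal sketch (my reconstruction as a check, not Steve's actual spreadsheet):

```python
import math

# Recompute the two optical-depth columns of the spreadsheet table from
# S_T, ET_U and S_U, using rTau = -ln(S_T/S_U) and
# M-tau = -ln((ET_U + S_T)/S_U).
rows = [  # (%cloud, S_T, ET_U, S_U), values copied from the table
    (100, 26, 50, 386),
    (80,  33, 40, 391),
    (60,  40, 30, 396),
    (40,  47, 20, 401),
    (20,  54, 10, 406),
    (0,   61,  0, 410),
]

for cloud, s_t, et_u, s_u in rows:
    r_tau = -math.log(s_t / s_u)
    m_tau = -math.log((et_u + s_t) / s_u)
    print(f"{cloud:3d}% cloud: rTau = {r_tau:.2f}, M-tau = {m_tau:.2f}")
# Reproduces the table's rTau and M-Tau columns to 2 d.p.: e.g. 1.91/1.91
# at clear sky and 2.70/1.63 at 100% cloud.
```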

  • Geoff Sherrington

    I am going to stick my neck out and hope that it will be chopped off. Because, if it is chopped off, it will demonstrate that what I am about to say is already understood. That will be good.

    The lengthy posts about F.M. and his theory are confusing to a scientist like me, with a chemistry major and a spectroscopy background for part of my work. They are confusing because they seem to leap from one concept to another too often. Some of these concepts include: the use of temperature as a proxy for heat, when heat is the fundamental parameter; the confusion of statics and dynamics, equilibria and rates, and the shapes of equations and powers within them; the mixing of conduction, convection and radiation; the immense complexity of heat flow through a heterogeneous atmosphere where light has a very complex absorption spectrum; the extrapolation of physics from known situations into relatively unknown ones (like spectral absorption of gas mixtures at very low pressures and temperatures), such as assumptions about the extension of optical density from lab measurements into regions that are opaque or transparent or unknown degrees between… and, overshadowing all this, the impression that much remains to be learned about the true action of water vapour, the most effective GHG.

    This is before we even start on the complex maths problems of DEs and bounded or unbounded cases, of using assumed maths to show that physical effects must exist – before their acceptance by measurement. Let alone the problems of grid cell scale for some effects, of unsolved equations like Navier Stokes, of a completely unreliable but much relied upon reconstruction of “average global temperature”.

    For example, what is the accepted figure for the heat generated from the friction of air sliding past a rotating rough earth? What assumptions are made to derive this figure?

    Illustration, not picking on anyone, just taking an example that happens to be from Steve Short above:

    “The only way I could warm my hands just enough just to work the valves, pumps etc (both day and night) was take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)! ”

    There was no pressing need to explain the science of this precisely on this blog, but you need to take into consideration factors such as: the velocity and relative humidity of the wind helping evaporate water from your hands; the thermal conductivity of your hands modelled from the skin to the depth where blood flow controls temperature; the rate of heat flow through this region, which might vary if you are fat or thin; the rate of warming/cooling of hands given a temperature step change through glove removal, by circulating body blood; the long-term ability of body blood to maintain its steadiness under these conditions; actual measurement to show that the ground water was in equilibrium with the rock around it; if not, then the same heat flow problems; and so on. You see, while you survived working with your body in the air, you might get hypothermia and die if you went to lie down in the “warmer” stream of groundwater.

    Then, IIRC, the Miskolczi model discontinuity has the air warmer than the ground, not the reverse as Steve describes.

    The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. Please, as a favour, don’t include CO2 in the first half dozen quibbles, I’m sick to death of its magic, ephemeral trace gas properties. Maybe you could start with one like this: Is there indeed a heat discontinuity between the near earth atmosphere and the ground? (Under what assumptions, and of what magnitude? Whose measurements? If there is, is there a measurable heat flow from one to the other? Does it equilibrate? At what value?)

    So hack me to death. I’m a lousy typist too.

    • http://www.ecoengineers.com/ Steve Short

      (a) I think there usually is a discontinuity (somewhere).
      (b) It may be quite sharp or quite gradual (depending upon circumstances).

      I can suspend a pH/EC/temperature probe into shallow groundwater in winter and get a temperature significantly higher than the probe records in air just outside the borehole stem in zero wind conditions.

      I can suspend a pH/EC/temperature probe into shallow groundwater in summer and get a temperature significantly lower than the probe records in air just outside the borehole stem in zero wind conditions.

      This effect is well known to hydrogeologists.

      I can monitor the pH/EC/dissolved oxygen/temperature (etc.) profile of a lake or reservoir throughout the year, monitor its thermocline, and watch it ‘turn over’ as slight changes in salinity (between top and bottom) overcome or are overcome by slight changes in temperature, as a consequence of the effects of these two parameters on water density.

      This effect is well known to limnologists.

      Put these in yer pipe and puff on ‘em.

      • kuhnkat

        Steve,

        you seem to think you have provided us with empirical evidence of a temperature discontinuity. How about the heat flow and gradient, if any, for this discontinuity??

        It would require a perfect insulator to maintain a discontinuity. Or, am I misunderstanding what is meant by a discontinuity here?? There would HAVE to be a flux if there is a difference in potential without this insulator.

        I believe what you are telling us is that the ground has low conductivity and huge capacity compared to the atmosphere’s higher conductivity and low capacity, therefore maintaining a more even temperature below the immediate surface??

        Let’s not forget that water is a better conductor. It would also be thermally connected to water that is typically deeper in the ground where the temperature is “average” compared to the surface.

        So what is so special about this?? How does it show a discontinuity?????

        Please stop confusing a steep gradient due to known conditions with a discontinuity.

        • http://www.ecoengineers.com/ Steve Short

          “Please stop confusing a steep gradient due to known conditions with a discontinuity.”

          Huh? Are you kidding? How steep do you want it to be? LOL.

          (1) It is commonly accepted that shallow groundwater is in temperature equilibrium with the ground in which it resides. Remember shallow aquifers typically have an effective porosity of 1 – 20% (depending upon whether the lithology is consolidated or unconsolidated).

          (2) If the air temperature immediately above that ground is significantly different even under zero wind conditions then we have the closest thing to a discontinuity I can possibly imagine.

          Actually, the order of heat conductivity in the absence of convective heat transfer (which cannot typically occur in an aquifer) is typically solid rock>water>air.

          You are having yourself on here.

        • Nick Stokes

          I agree with kuhnkat here. The discussion about discontinuity is misconceived in its origin, because it was inspired by a math approximation extended beyond its region of applicability.
          But a true discontinuity in a fluid is impossible. The temp difference across a region is the heat flux times the width, divided by the conductivity, or heat transfer coef, or whatever you want to call it. So a finite difference over zero width means either infinite flux or zero conductivity.
          For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can’t be sustained, because you run out of heat. That leaves a good insulator.
          Now the air is generally moving, and turbulent diffusion is effective. It’s only when you get very still air that it becomes a fairly good insulator. That’s why you can get a near discontinuity on a frosty night. That means a gradient of a deg C or two per metre, total difference maybe 3-4C (frost on grass etc).
          You can’t have a steep gradient the other way (hot at the bottom). It’s convectively unstable.
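          To put rough numbers on the flux/width/conductivity point (a sketch; the 50 W/m^2 flux and the still-air conductivity of ~0.025 W/m/K are illustrative assumptions, not values from this thread):

```python
# Steady-state temperature drop across a still-air layer: dT = q * d / k.
K_STILL_AIR = 0.025  # W/(m*K), approximate conductivity of still air (assumed)
Q = 50.0             # W/m^2, assumed sustained heat flux (illustrative)

for d_mm in (1, 10, 100):            # layer thickness in millimetres
    dT = Q * (d_mm / 1000.0) / K_STILL_AIR
    print(f"{d_mm:4d} mm of still air -> dT = {dT:.0f} K")
# 1 mm supports a 2 K step; 100 mm would need a 200 K difference to carry
# the same flux by conduction alone, which is why any stirring of the air
# (turbulent diffusion) destroys the near-discontinuity.
```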

          • Jan Pompe

            “You can’t have a steep gradient the other way (hot at the bottom). It’s convectively unstable.”

            Non-inverted lapse rate.

            Frosts occur when there is a temperature inversion, cold at the bottom. On frosty nights you can have gradients of 10,000 K/m over a few millimetres.

            Get out there with a thermometer some time Nick.

          • http://www.ecoengineers.com/ Steve Short

            Life is filled with profound ironies. I’m with Jan on his reply. The obvious reply to Nick’s naive comments was that one simple word : inversion.

            “For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can’t be sustained, because you run out of heat. That leaves a good insulator.
            Now the air is generally moving, and turbulent diffusion is effective.”

            Not invariably true.

            FYI, there is a popular phenomenon in hang gliding, particularly in the US, called ‘magic air’. What happens is that in dense pine forests in the bases of valleys the air heats up throughout the day and forms an inversion, sticking close to ground level due to a (yes) viscous attachment to the trees.

            Surprise, surprise, fluids do have viscosity and sometimes that is sufficient to overcome even temperature gradients.

            In other words an inversion forms. It is not until late afternoon, when cold katabatic air flows down tributary gullies into the basal valley-floor forests and gets under the warm air, that the inversion starts ‘lifting off’.

            One can then hang glide for about 2-3 hours, late afternoon to early evening, at 100-300 feet AGL, in air which is constantly and gently rising at about 2-4 foot/min, counteracting the sink rate of the glider. I have done this many times. The sensation is delicious.

            One of the thrilling side effects of this phenomenon was that I could cruise around over small lakes created by beaver dams, almost hands off the A frame, photographing beavers as they scudded backwards and forwards across their dams. Having never seen beavers in my life before this was a big, big thrill.

            Magic air indeed.

            We need to remember that solid ground is not a fluid. Groundwater within solid ground hardly behaves as a fluid. Air above the ground will not necessarily behave as a simple fluid, free to instantly convect.

            Having spent 12 years in an Australian Fed. Govt. research organisation and 3 in a Swiss one, I love the freedom and intellectual delights that come from doing pure research. I also know what it is like to live in academia. But such places are not the fount of absolute truth.

            Next time, if you are out there, just for curiosity try sticking a thermometer onto hot tarmac which a fat goanna has perhaps just vacated.

            Or watch carefully and this time notice that jackrabbit who ‘flicked off’ a dust devil as he scampered across a gibber plain.

            There is an even greater wisdom which comes from just getting out there and actually experiencing the fantastic variety and subtleties of what Mother Nature ‘has to throw at us’ than we will ever find inside a laboratory or an office.

            Call them very steep temperature gradients if you wish, rather than discontinuities, but don’t ever be so very, very foolish as to claim they don’t exist.

          • Nick Stokes

            No, you’re not with Jan. I am, though he doesn’t know it. The frost situation I was talking about is inversion, in a lapse rate sense. The bottom air is cold, and the temp rises as you go up.

            What you are talking about, though, is not inversion. It’s stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.

            So much for the formal argument. I hesitate to cause a distraction, but I don’t believe your viscosity story. And I spent some years in the fluids lab at Highett, where they experiment with natural convection, so it isn’t just math theory. But gas viscosity doesn’t work like that. Air is always free to convect, if the temp gradient is there.

          • http://www.ecoengineers.com/ Steve Short

            “It’s stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.”

            Huh?

            I couldn’t give a tinker’s cuss whether you believe my viscosity story, because there is clearly negligible convective rise from the bases of the valleys during the day. It’s been tried. It is simply not possible to thermal off the bottoms of the valleys during most of the day. The only significant thermals form off slopes higher up.

            So how come, then, this stability in the face of a super lapse rate builds up over the better part of the day? The evening ‘magic air’ even feels warm!

            Be careful you don’t ‘super lapse’ into instability of rationality in your argument.

            I also don’t care how many years you spent in how many labs experimenting with whatever. Been there, done that.

            Those who spend their lives in glass houses shouldn’t stow thrones – they can never successfully sit on ‘em anyway.

          • Nick Stokes

            Well OK, let’s keep to that “simple word” – inversion. Where is it?

          • http://www.ecoengineers.com/ Steve Short

            In the forest (along with Little Red Riding Hood, the wolf, woodpeckers etc – presumably we can forget the beavers – don’t want to blow my argument ‘wide open’).

            Have you perchance considered the properties of pine forests under daily irradiation?

          • http://www.ecoengineers.com/ Steve Short

            The temperature inversion in the lower part of the canopy is a typical feature of daytime temperature profiles in tall crop and forest canopies….

            Introduction to Micrometeorology
            S. Pal Arya 2001

          • http://www.ecoengineers.com/ Steve Short

            Here, we examine whether sub-canopy flow through a small gully in the vicinity of the flux tower was thermotopographically driven, and was linked to the flow divergence found above canopy. While flow in the gully was frequently aligned with the mean wind aloft, indicating dynamic coupling, there were periods when the wind in the gully appeared to be decoupled from the flow aloft and was consistent with thermotopographic flow forcings (including geometry, temperature gradient, and net radiation). During the leaf-off season, these episodes exhibited a classic thermotopographic pattern, with down-gully nighttime flow and up-gully daytime flow. However, during the leaf-on season, the pattern was reversed: during the daytime, flow was down-gully consistent with inversion conditions occurring below the dense leaf canopy; at night, flow was up-gully, consistent with below-canopy lapse conditions. The thermotopographic flow during the leaf-on season suggests horizontal flow convergence at night and divergence during the day, and is shown to be decoupled from the flow aloft. While this research focuses only on flow patterns and not explicitly on CO2 gradients or fluxes, these findings suggest that inferences about drainage flow/advection and corrections to flux measurements based on above-canopy conditions alone may be inappropriate.

            Froelich and Schmid, 2006.

      • jae

        I think there is no discontinuity, but often a VERY steep gradient over a VERY thin layer. In humid warm areas (15-30 C), the air directly over the surface has an AVERAGE absolute humidity that is 70-80 percent of the saturation level. It seems to me that this could occur only if there were a layer of water at the surface that is very close in temperature to the air above. Maybe a very thin layer.

    • http://www.ecoengineers.com/ Steve Short

      “The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. ”

      That’s a bit rich! What do you think we have been doing with Miskolczi’s (magic) LW IR ‘tau’?

      Miskolczi says it is tau = -ln(S_T/S_U)

      I say Miskolczi has been having us on and it is actually tau = -ln((ET_U + S_T)/S_U), where ET_U = that portion of ET (or M’s K if you will) which radiates from the tops of precipitating clouds (release of latent heat) as LW IR and escapes the TOA. Hence Miskolczi’s tau is not the ‘regular’ tau in the commonly accepted meaning of the word.
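To make the disagreement concrete, here is a quick numerical sketch of the two definitions, using round figures quoted in this thread (S_U ~ 396 W/m^2, Miskolczi's S_T ~ 61 W/m^2, consensus all-sky S_T ~ 40 W/m^2); the ET_U value is an assumption chosen purely for illustration:

```python
from math import log

# Compare Miskolczi's tau with the conventional definition.
# Flux values are round numbers from this thread; ET_U is an illustrative
# assumption chosen so the two "transmitted" fluxes match.

S_U = 396.0        # surface upward LW flux, W/m^2
S_T_misk = 61.0    # Miskolczi's claimed transmitted flux
S_T_conv = 40.0    # consensus all-sky transmitted flux
ET_U = S_T_misk - S_T_conv   # ~21 W/m^2 of cloud-top LW escaping the TOA

tau_misk = -log(S_T_misk / S_U)              # ~1.87
tau_conv = -log(S_T_conv / S_U)              # ~2.29
tau_with_ET = -log((ET_U + S_T_conv) / S_U)  # recovers ~1.87 by construction

print(tau_misk, tau_conv, tau_with_ET)
```

This shows only the arithmetic of the claim: folding ~21 W/m^2 of cloud-top emission into S_T is numerically sufficient to turn a conventional tau of ~2.3 into Miskolczi's ~1.87.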

      Both Zagoni and Miskolczi have had more than ample time to respond and have failed to do so.

      It is not enough to accuse the science establishment of bad science (as M&Z have done) while refusing to respond to those who point out a significant flaw in Miskolczi theory – in this case just ONE SINGLE POINT, Geoff!

      I don’t like science which is conducted by globe trotting oratory to the largely unschooled in science, perpetually ignoring empirical, theoretical or mathematical difficulties. Isn’t this what we sceptics accuse the AGW bandwagon of doing?

      I have difficulty feeling solidarity with an Antipodean sceptical movement which persists in de facto endorsing Miskolczi in the face of quite genuine technical difficulties with his theory as evidenced e.g. by neither he nor Zagoni getting invited back to the 2nd Heartland Conference.

  • Geoff Sherrington

    I am going to stick my neck out and hope that it will be chopped off. Because, if it is chopped off, it will demonstrate that what I am about to say is already understood. That will be good.

    The lengthy posts about F.M. and his theory are confusing to a scientist like me, with a chemistry major and a spectroscopy background for part of my work. They are confusing because they seem to leap from one concept to another too often. Some of these concepts include the use of temperature as a proxy for heat, when heat is the fundamental parameter; the confusion of statics and dynamics, equilibria and rates, shapes of equations and the powers within them; the mixing of conduction, convection and radiation; the immense complexity of heat flow through a heterogeneous atmosphere where light has a very complex absorption spectrum; the extrapolation of physics from known situations into relatively unknown ones (like spectral absorption of gas mixtures at very low pressures and temperatures); assumptions about the extension of optical density from lab measurements into regions that are opaque, transparent or unknown degrees in between… and, overshadowing all this, the impression that much remains to be learned about the true action of water vapour, the most effective GHG.

    This is before we even start on the complex maths problems of DEs and bounded or unbounded cases, of using assumed maths to show that physical effects must exist – before their acceptance by measurement. Let alone the problems of grid cell scale for some effects, of unsolved equations like Navier-Stokes, of a completely unreliable but much relied upon reconstruction of “average global temperature”. For example, what is the accepted figure for the heat generated from the friction of air sliding past a rotating rough earth? What assumptions are made to derive this figure?

    Illustration, not picking on anyone, just taking an example that happens to be from Steve Short above: “The only way I could warm my hands just enough to work the valves, pumps etc (both day and night) was take my gloves off and frequently flush them with the actual groundwater. The groundwater was in thermal equilibrium with the ground which was significantly warmer than the air only centimetres above the ground. Strong discontinuity right down at ground level (or even slightly lower)!”

    There was no pressing need to explain the science of this precisely on this blog, but you need to take into consideration factors such as: the velocity and relative humidity of the wind helping evaporate water from your hands; the thermal conductivity of your hands modelled from the skin to the depth where blood flow controls temperature; the rate of heat flow through this region, which might vary if you are fat or thin; the rate of warming/cooling of hands given a temperature step change through glove removal, by circulating body blood; the long-term ability of body blood to maintain its steadiness under these conditions; actual measurement to show that the groundwater was in equilibrium with the rock around it; if not, then the same heat flow problems; and so on. You see, while you survived working with your body in the air, you might get hypothermia and die if you went to lie down in the “warmer” stream of groundwater.

    Then, IIRC, the many-model discontinuity has the air warmer than the ground, not the reverse as Steve describes.

    The discussion is too unstructured. You guys need to pick one quibble point at a time and thrash it to death. Please, as a favour, don’t include CO2 in the first half dozen quibbles; I’m sick to death of its magic, ephemeral trace gas properties. Maybe you could start with one like this: Is there indeed a heat discontinuity between the near-earth atmosphere and the ground? (Under what assumptions, and of what magnitude? Whose measurements? If there is, is there a measurable heat flow from one to the other? Does it equilibrate? At what value?)

    So hack me to death. I’m a lousy typist too.

  • http://www.ecoengineers.com/ Steve Short

    (a) I think there usually is a discontinuity (somewhere).

    (b) It may be quite sharp or quite gradual (depending upon circumstances).

    I can suspend a pH/EC/temperature probe into shallow groundwater in winter and get a temperature significantly higher than the temperature the probe records in air just outside the borehole stem in zero wind conditions.

    I can suspend a pH/EC/temperature probe into shallow groundwater in summer and get a temperature significantly lower than the temperature the probe records in air just outside the borehole stem in zero wind conditions.

    This effect is well known to hydrogeologists.

    I can monitor the pH/EC/dissolved oxygen/temperature (etc.) profile of a lake or reservoir throughout the year, monitor its thermocline and watch it ‘turn over’ as slight changes in salinity (between top and bottom) overcome, or are overcome by, slight changes in temperature, as a consequence of the effects of these two parameters on water density.

    This effect is well known to limnologists.

    Put these in yer pipe and puff on ‘em.
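The lake-turnover competition between temperature and salinity can be sketched with a linearized equation of state; the coefficients below are rough textbook values for water near 20 C, not tuned to any particular water body, and the example temperatures/salinities are purely illustrative:

```python
# Linearized equation of state for water: warmer -> lighter, saltier -> heavier.
# Coefficients are rough textbook values (not from any specific lake).

RHO0 = 998.0     # kg/m^3, reference density near 20 C
ALPHA = 2.1e-4   # 1/K, thermal expansion coefficient of water near 20 C
BETA = 0.78      # kg/m^3 density increase per g/kg of salinity

def density(temp_c, salinity_g_kg, ref_temp_c=20.0):
    """Linearized water density as a function of temperature and salinity."""
    return RHO0 * (1.0 - ALPHA * (temp_c - ref_temp_c)) + BETA * salinity_g_kg

# Surface water 4 C warmer but 0.5 g/kg saltier than bottom water:
surface = density(24.0, 1.0)
bottom = density(20.0, 0.5)

# Here the salinity excess is NOT enough to hold the warm water down,
# so the column stays stably stratified (light water on top):
stable = surface < bottom
print(surface, bottom, stable)
```

As the surface cools towards the bottom temperature, the thermal term shrinks while the salinity term stays put, and the column eventually turns over, as described above.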

  • kuhnkat

    Steve,

    you seem to think you have provided us with empirical evidence of a temperature discontinuity. How about the heat flow and gradient, if any, for this discontinuity?

    It would require a perfect insulator to maintain a discontinuity. Or am I misunderstanding what is meant by a discontinuity here? There would HAVE to be a flux if there is a difference in potential without this insulator.

    I believe what you are telling us is that the ground has low conductivity and huge capacity compared to the atmosphere’s higher conductivity and low capacity, therefore maintaining a more even temperature below the immediate surface?

    Let’s not forget that water is a better conductor. It would also be thermally connected to water that is typically deeper in the ground, where the temperature is “average” compared to the surface.

    So what is so special about this? How does it show a discontinuity?

    Please stop confusing a steep gradient due to known conditions with a discontinuity.

  • Geoff Sherrington

    The much discussed Miskolczi equation (7)

    S_U – (F_0 + P_0) + E_D – E_U = OLR

    seems to be most easily conceptualised by a person standing near the Equator, where the direction of incoming sunlight is roughly perpendicular to the ground surface around noon. Even the terminology suggests this.

    Consider now that you are standing near the north pole in the middle of winter. Some of the incoming sunlight will pass through the atmosphere above you without ever going near the earth’s surface. So, the definition of “incoming” varies with latitude. One can start to correct for this with trig functions, but what about the trig functions that reach zero or infinity at some angles found in Nature? In my brief reading, I have only seen the spherical earth converted to a planar disc for sunlight interception geometry, then halved to cope with night time.
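For what it's worth, the disc-to-sphere geometry Geoff describes can be put in numbers: the Earth intercepts sunlight over a disc of area pi*R^2 but spreads it over a sphere of area 4*pi*R^2, and instantaneous insolation scales with the cosine of the solar zenith angle. The solar-constant value is the standard one; the latitudes are merely illustrative examples:

```python
from math import cos, radians

# Disc-vs-sphere geometry: intercepted power S0 * pi R^2 spread over 4 pi R^2
# gives the familiar factor of 4. S0 is the standard solar-constant value.

S0 = 1361.0  # W/m^2, total solar irradiance at 1 AU

mean_toa = S0 / 4.0   # ~340 W/m^2, global diurnal-and-annual mean at TOA

# Instantaneous insolation scales with cos(solar zenith angle); at equinox
# the noon zenith angle equals the latitude, so:
noon_equator = S0 * cos(radians(0.0))    # full S0
noon_60deg = S0 * cos(radians(60.0))     # half of S0
noon_near_pole = S0 * cos(radians(89.9)) # essentially grazing incidence

print(mean_toa, noon_equator, noon_60deg, noon_near_pole)
```

This is only the geometric bookkeeping, of course; it says nothing about the slant-path absorption question Geoff raises, which is exactly where the trig factors blow up near grazing incidence.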

    The next complication is that most of the earth’s atmospheric CO2 is close to the surface. Even on Mauna Loa the concentration at sea level can be several times that reported at the observatory. So what happens to equation (7) when incoming radiation near the north pole does not even get close to the ground in winter (hence misses the main CO2 there), unless it is scattered? Is there a correction that adjusts the atmospheric temperature when it is in high sunlight, and another for when there is low-altitude darkness?

    I find it hard to dissect an equation like (7) unless I know that such effects are in the solution or in the analysis that follows. Are they? This is the point of my post and my previous one.

    Re Steve Short, thanks for the reply about what limnologists know all about. But please don’t be hostile, I’m asking in an attempt to learn, not to criticise. However, your reply does underscore my point about use of temperature as a proxy for heat.

    I carry no torch, either personal or national, for Miskolczi or Zagoni. But I do find it encouraging that such people are thinking laterally and not obediently. I’m not knocking you, but I think many others would agree that you guys are jumping all over the place and are hard to follow. To the extent that you are concentrating on tau, that is good. But do you have brackets around all the parameters that contribute to it, and good values to cover the globe? Seems to me that if you can drop off half the RHS of the original eqn and say “this is better”, there might be a loose definition of what is “better”.

    Structure it. Set your objectives, make your sub-hypotheses, design your tests, write down standards for pass/fail, then proceed. That’s often better than following many a forking trail.

    • Jan Pompe

      Geoff “To the extent that you are concentrating on tau, that is good.”

      Consider: this tau = -ln(S_T/S_U) is pretty much the 150+ year-old definition of the optical depth (tau).

      This tau = -ln((ET_U + S_T)/S_U) is not.

      As I am sure you are aware.

      Now try to imagine why some of us think it’s a waste of time discussing this with the person who makes the latter claim.

      That person also keeps quoting figures from F,T&K08 & K&T97, who claim to have obtained their numbers (esp. OLR) from the CERES project. The person who calibrated those instruments and briefed NASA on their use does not understand how K&T, and later F,T&K, obtained their numbers from it, and until they give an explanation as to how, as requested, there is little point in continuing with it.

      • http://www.ecoengineers.com/ Steve Short

        When you say “The person who calibrated those instruments and briefed the NASA on their use does not understand how K&T and later FT&K obtained their numbers from it and until they give an explanation as to how as requested there is little point in continuing with it.” you are presumably referring to Miskolczi and a claim made by him and him alone!

        As far as I can find there is not one single independent competent person (including Gupta!) in the field of atmospheric radiation who supports Miskolczi’s claims.

        I have actually bothered to get most of the references quoted in F,T&K09 and it seems to me (as a non-expert) that it is a reasonable review and summary of the findings and data which can be found in a relatively large number of papers. We are not talking about numbers that are hard to understand here. This is not particularly obscure stuff.

        I’ll hazard a guess you have not read a single one of the many papers cited by F,T&K09 and tried to check these data out for yourself.

        To be specific:

        (1) Outside of M&M04 and M07 I cannot find anywhere data which independently supports a mean global all sky S_T of the order of 60 – 65 W/m^2 as claimed by Miskolczi and I’ve expended a lot of effort to find it.

        (2) I also cannot find anywhere data which independently supports a mean global all sky tau of the order of 1.87 as claimed by Miskolczi.

        Where is the independent verification that NONE of the numerous authors who wrote the papers which I cited at the start of this thread (and which appeared as references in F,T&K09) know what they are writing about (i.e. that they are incompetent)?

        Where is the independent evidence for Miskolczi’s (and by proxy your) assertions that K&T97 and F,T&K09 (and hey, let’s not forget NASA too) are all wrong?

        So you hold up to ridicule my suggestion that the only way Miskolczi could conceivably claim a so-called global all-sky S_T of 60 – 65 W/m^2 is to add another LW IR component into S_T, e.g. the emission by clouds which escapes the TOA (there could possibly be something else, but I can’t find it). Doing so on the basis of one lone maverick, who has not found honest support for his claims anywhere, either when they were originally made or since, is simply ridiculous.

        You have no bona fides to do that. It only shows you up as another lone, stubborn and idiosyncratic maverick like Miskolczi.

        But hey, you are not an authority or scientist in this field.

        As I pointed out above, IF Miskolczi did have serious claims AND they could be even partly verified by prominent sceptics such as Lindzen, Spencer, McIntyre etc., then surely Miskolczi would have reappeared at the 2nd Heartland Conference and his ‘Theory’ would today actually be going somewhere – rather than still just blowing out of your Antipodean backside?

        Or is there a conspiracy against Miskolczi in the ‘Sceptical Establishment’ as well…?

        • http://www.ecoengineers.com/ Steve Short

          The latest report from Anthony Watts on the real state of the US Historical Climate Network (USHCN) contains the following text:

          “We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat.”

          “Check out the site survey photographs showing temperature stations next to brick and concrete walls, sited on or next to concrete, …..”

          “Check out the photographs from heat cameras showing concrete more than 10 degrees warmer than the air temperature.”

          What’s all this Kuhnkat and Nick Stokes were telling us about the lack of near-surface temperature discontinuities?

          Clearly not a notion that Anthony Watts ‘warms to’!

          • Nick Stokes

            Well, Steve, evidence please! Where are those photos?

          • http://www.ecoengineers.com/ Steve Short
          • Nick Stokes

            OK, I saw lots of infrared photos, indicating that there are big instantaneous temperature differences of different surfaces, measured by infrared thermometry. Especially in a built or somewhat industrial environment, which relates to the point that AW is trying to make. But that doesn’t imply a discontinuity. All it says is that you have surface heterogeneity, like hot pavement in the sun vs lawn. And that means you must have variation on a smaller spatial scale, over short periods of time.

            None of this relates to the original argument, which tried to stretch an approximate solution of the radiative transfer equations to claim a general temperature discontinuity between surface and atmosphere. Oddly, the original fallacious argument was advanced to try to discredit the Milne solution by saying that such a discontinuity is impossible, but now seems to have morphed into the credulous thinking that the inferred discontinuity is real.

          • kuhnkat

            Steve Short,

            please state your definition of DISCONTINUITY in the physical world.

            Your examples have nothing to do with what I, and probably the others, think it is.

          • http://www.ecoengineers.com/ Steve Short

            If temperature from the ground up actually passes through a significant maximum (or minimum) over height ranges which may be quite small, e.g. within the elevations of crops or trees, or the lower parts of valleys, urban canyons, etc., then that is mathematically equivalent to a discontinuity, since, as must be obvious, the temperature then does NOT change monotonically moving upwards from the surface and cannot be modeled as if it did. If you need me to explain what monotonically means, you shouldn’t be trying to start an argument on this subject. All the Miskolczi math falls over if such low-level inversions occur. Even the conventional treatment ignores all such micrometeorological situations.

            As you can see I already proved Nick has no idea how/why/when such low level inversions occur as evidenced by his deathly silence after I quoted a 2001 micrometeorology textbook and a 2006 paper on the very situation I had raised (and he was getting all high and mighty about).

            If that is what he was trying to say, then Jan was correct that inversions are functionally equivalent to discontinuities for the mathematical purposes we were considering.

            It may be worthwhile you talking to those who fly sailplanes, hang gliders, paragliders or even crop dusting planes. Those communities tend to have real good hands on knowledge, based on often hair-raising experience of the realities of low level atmospheric inversions and other sharp discontinuities e.g. shears.

            As you well know, there’s just no substitute for going out and experiencing reality, no matter how nicely gold-plated or pre-heated the toilet seat from which one normally likes to pontificate (I’m quoting my long-deceased old Dad again – an amateur philosopher out there with the best and worst of ‘em).

          • Nick Stokes

            Steve,
            A discontinuity is a cliff, not a hill. Certainly a deviation from monotonicity does not imply discontinuity.

            I didn’t reply earlier because if a local heat gradient under trees proves something, then I’ve lost track of what it is.

          • http://www.ecoengineers.com/ Steve Short

            This discussion has descended into semantic farce.

            It’s all a question of the scale at which you look at something. Magnify any cliff and you will find a series of steep slopes (not to mention all sorts of roughness). So, if you like, I may label as a ‘discontinuity’ a relatively steep gradient, depending on the scale one chooses to, or can, look at (and maybe deal with mathematically at bulk scale). For many problems that may well be a practical scale. Then we might even be able to make it/call it a boundary condition (right Jan?).

            As an expert in hydrodynamics Nick knows full well that at the end of the day it is all a question of the scale you employ. Look in fine scale and you will find a host of steep gradients or discontinuities even right down to complete fractality.

            Nick knows all this full well – he just wants to play a silly game of ‘lets catch the other guy out’. I don’t buy into it.

            For many practical purposes Nature is ubiquitously littered with steep gradients/discontinuities.

            For example, circle up in a good large thermal in a sailplane and you could easily be rising at (say) 10 foot/sec up. You may even be able to log the lapse rate as you do so. But just slip outside the margins of the thermal by about 10 or 20 foot and you will probably be in air descending at (say) 10 foot/sec. Sharp gradient? Discontinuity? I would say yes (for all practical purposes).

            A forester may have a tower in the middle of a forest logging vertical gradients of temperature, relative humidity, insolation penetration etc. Will he sometimes find some very steep gradients in there? You betcha!

            As for Kuhnkat – well, I doubt he could always find the right end of a monkey wrench, that big ol’ kuhnskin cap regularly slips down over his eyes and ears so much.

            Yep.

          • kuhnkat

            Steve Short,

            No.

  • Jan Pompe

    Geoff: “To the extent that you are concentrating on tau, that is good.”

    Consider this: tau = -ln(S_T/S_U) is pretty much the 150+ year-old definition of the optical depth (tau). This: tau = -ln((E_U + S_T)/S_U) is not. As I am sure you are aware.

    Now try to imagine why some of us think it's a waste of time discussing this with the person who makes the latter claim. That person also keeps quoting figures from FT&K08 and K&T97, who claim to have obtained their numbers (especially OLR) from the CERES project. The person who calibrated those instruments and briefed NASA on their use does not understand how K&T, and later FT&K, obtained their numbers from it, and until they give an explanation as to how, as requested, there is little point in continuing with it.
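For what it's worth, the arithmetic behind the two expressions is trivial to check. A hedged sketch in Python, using round flux values quoted in this thread (not authoritative data):

```python
import math

# The classical definition Jan cites: tau = -ln(S_T / S_U), the optical depth
# implied by the transmitted fraction of surface LW emission.  Fluxes below
# are round numbers taken from the thread (S_U ~396 W/m^2 per K&T/TF&K).

def tau(S_T, S_U=396.0):
    return -math.log(S_T / S_U)

print(round(tau(61.0), 2))   # Miskolczi-style all-sky S_T ~61 -> tau = 1.87
print(round(tau(40.0), 2))   # consensus all-sky S_T ~40  -> tau = 2.29

# The second expression, -ln((E_U + S_T)/S_U), is just -ln(OLR/S_U), which is
# NOT an optical depth; with OLR ~239 W/m^2 it gives a quite different number.
print(round(-math.log(239.0 / 396.0), 2))   # ~0.5
```

This also makes Steve's point above concrete: with S_U near 396 W/m^2, the claimed tau of ~1.87 stands or falls entirely on whether S_T really is ~61 rather than ~40 W/m^2.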

  • http://www.ecoengineers.com/ Steve Short

    When you say “The person who calibrated those instruments and briefed NASA on their use does not understand how K&T and later FT&K obtained their numbers from it and until they give an explanation as to how as requested there is little point in continuing with it” you are presumably referring to Miskolczi, and to a claim made by him and him alone!

    As far as I can find there is not one single independent competent person (including Gupta!) in the field of atmospheric radiation who supports Miskolczi's claims.

    I have actually bothered to get most of the references quoted in F,T&K09 and it seems to me (as a non-expert) that it is a reasonable review and summary of the findings and data which can be found in a relatively large number of papers. We are not talking about numbers that are hard to understand here. This is not particularly obscure stuff. I'll hazard a guess you have not read a single one of the many papers cited by F,T&K09 and tried to check these data out for yourself.

    To be specific:

    (1) Outside of M&M04 and M07 I cannot find anywhere data which independently supports a mean global all sky S_T of the order of 60 – 65 W/m^2 as claimed by Miskolczi, and I've expended a lot of effort to find it.

    (2) I also cannot find anywhere data which independently supports a mean global all sky tau of the order of 1.87 as claimed by Miskolczi.

    Where is the independent verification that NONE of the numerous authors who wrote the papers which I cited at the start of this thread (and which appeared as references in F,T&K09) know what they are writing about (i.e. that they are incompetent)? Where is the independent evidence for Miskolczi's (and by proxy your) assertions that K&T97 and F,T&K09 (and hey, let's not forget NASA too) are all wrong?

    You hold up to ridicule my suggestion that the only way a so-called global all sky S_T of 60 – 65 W/m^2 could conceivably be claimed by Miskolczi is to add in another LW IR component to S_T, e.g. the emission by clouds which escapes TOA (there possibly could be something else but I can't find it). Doing so on the say-so of one lone maverick, who has not found honest support for his claims anywhere, either when they were originally made or indeed since, is simply ridiculous. You have no bona fides to do that. It only shows you up as another lone, stubborn and idiosyncratic maverick like Miskolczi. But hey, you are not an authority or scientist in this field.

    As I pointed out above, IF Miskolczi did have serious claims AND they could be even partly verified by prominent sceptics such as Lindzen, Spencer, McIntyre etc., then surely Miskolczi would have reappeared at the 2nd Heartland Conference and his ‘Theory’ would today actually be going somewhere – rather than still just blowing out of your Antipodean backside.

    Or is there a conspiracy against Miskolczi in the ‘Sceptical Establishment’ as well…..?

  • Nick Stokes

    I agree with kuhnkat here. The discussion about discontinuity is misconceived in its origin, because it was inspired by a math approximation extended beyond its region of applicability. But a true discontinuity in a fluid is impossible. The temp difference across a region is the heat flux times the width, divided by the conductivity, or heat transfer coefficient, or whatever you want to call it. So a finite difference over zero width means either infinite flux or zero conductivity.

    For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can't be sustained, because you run out of heat. That leaves a good insulator. Now the air is generally moving, and turbulent diffusion is effective. It's only when you get very still air that it becomes a fairly good insulator. That's why you can get a near discontinuity on a frosty night. That means a gradient of a deg C or two per metre, total difference maybe 3-4 C (frost on grass etc).

    You can't have a steep gradient the other way (hot at the bottom). It's convectively unstable.
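Nick's flux/conductivity bookkeeping can be put in numbers. A hedged sketch, where the molecular conductivity is the standard textbook value and the turbulent "eddy conductivity" is purely illustrative:

```python
# Nick's relation: dT = q * dx / k, i.e. gradient = flux / conductivity.
# Still air is a good insulator (k ~ 0.025 W/m/K near 0 C), so even a tiny
# sustained conductive flux supports a noticeable gradient; turbulent mixing
# raises the effective "conductivity" by orders of magnitude and flattens it.

def gradient_K_per_m(flux_W_m2, k_W_mK):
    return flux_W_m2 / k_W_mK

k_still_air = 0.025      # molecular conductivity of air, W/(m K)
k_turbulent = 25.0       # illustrative effective eddy value, 1000x larger

q = 0.05                 # W/m^2, a small sustained conductive flux
print(round(gradient_K_per_m(q, k_still_air), 1))  # 2.0 K/m on a still night
print(gradient_K_per_m(q, k_turbulent))            # 0.002 K/m once stirred
```

The still-air case reproduces Nick's "a deg C or two per metre"; the moment the air is stirred, the same flux supports almost no gradient at all.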

  • Jan Pompe

    “You can't have a steep gradient the other way (hot at the bottom). It's convectively unstable.”

    Non-inverted lapse rate. Frosts occur when there is a temperature inversion – cold at the bottom. On frosty nights you can have gradients of 10k K/m over a few millimetres.

    Get out there with a thermometer some time, Nick.

  • http://www.ecoengineers.com/ Steve Short

    Life is filled with profound ironies. I'm with Jan on his reply. The obvious reply to Nick's naive comments was that one simple word: inversion.

    “For a very big temp change over a small distance you need either a big flux or a good insulator. A big heat flux usually can't be sustained, because you run out of heat. That leaves a good insulator. Now the air is generally moving, and turbulent diffusion is effective.”

    Not invariably true. FYI, there is a popular phenomenon in hang gliding, particularly in the US, called ‘magic air’. What happens is that in dense pine forests in the bases of valleys the air heats up throughout the day and forms an inversion, sticking close to ground level due to a (yes) viscous attachment to the trees. Surprise, surprise, fluids do have viscosity and sometimes that is sufficient to overcome even temperature gradients. In other words an inversion forms. It is not until late afternoon, when cold katabatic air flows down tributary gullies into the valley floor basal forests and gets under the warm air, that the inversion starts ‘lifting off’.

    One can then hang glide for about 2-3 hours, late afternoon to early evening, 100-300 feet AGL, in air which is constantly and gently rising at about 2-4 foot/min, thus counteracting the sink rate of the glider. I have done this many times. The sensation is delicious. One of the thrilling side effects of this phenomenon was that I could cruise around over small lakes created by beaver dams, almost hands off the A frame, photographing beavers as they scudded backwards and forwards across their dams. Having never seen beavers in my life before, this was a big, big thrill. Magic air indeed.

    We need to remember that solid ground is not a fluid. Groundwater within solid ground hardly behaves as a fluid. Air above the ground will not necessarily behave as a simple fluid, free to instantly convect. Having spent 12 years in an Australian Fed. Govt. research organisation and 3 in a Swiss one, I love the freedom and intellectual delights that come from doing pure research. I also know what it is like to live in academia. But such places are not the font of absolute truth.

    Next time, if you are out there, just for curiosity try sticking a thermometer onto hot tarmac which a fat goanna has just vacated. Or watch carefully and this time notice that jackrabbit who ‘flicked off’ a dust devil as he scampered across a gibber plain. There is an even greater wisdom which comes from just getting out there and actually experiencing the fantastic variety and subtleties of what Mother Nature ‘has to throw at us’ than we will ever find inside a laboratory or an office.

    Call them very steep temperature gradients if you wish, rather than discontinuities, but don't ever be so very, very foolish as to claim they don't exist.

  • Nick Stokes

    No, you're not with Jan. I am, though he doesn't know it. The frost situation I was talking about is inversion, in a lapse rate sense. The bottom air is cold, and the temp rises as you go up. What you are talking about, though, is not inversion. It's stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.

    So much for the formal argument. I hesitate to cause a distraction, but I don't believe your viscosity story. And I spent some years in the fluids lab at Highett, where they experiment with natural convection, so it isn't just math theory. But gas viscosity doesn't work like that. Air is always free to convect, if the temp gradient is there.

  • http://www.ecoengineers.com/ Steve Short

    “It's stability in the face of a super lapse rate, which would normally be convectively unstable. The undoing of this by the katabatic winds restores the instability, which creates the updrafts that you float on.”

    Huh? I couldn't give a tinker's cuss if you don't believe my viscosity story, because there is clearly negligible convective rise from the bases of the valleys during the day. It's been tried. It is simply not possible to thermal off the bottoms of the valleys during most of the day. The only significant thermals form off slopes higher up. So how come, therefore, this stability in the face of a super lapse rate builds up over the better part of the day? The evening ‘magic air’ even feels warm!

    Be careful you don't ‘super lapse’ into instability of rationality in your argument. I also don't care how many years you spent in how many labs experimenting with whatever. Been there, done that. Those who spend their lives in glass houses shouldn't stow thrones – they can never successfully sit on ‘em anyway.

  • Nick Stokes

    Well OK, let's keep to that “simple word” – inversion. Where is it?

  • http://www.ecoengineers.com/ Steve Short

    In the forest (along with Little Red Riding Hood, the wolf, woodpeckers etc – presumably we can forget the beavers – don't want to blow my argument 'wide open').Have you perchance considered the properties of pine forests under daily irradiation?

  • http://www.ecoengineers.com/ Steve Short

    “The temperature inversion in the lower part of the canopy is a typical feature of daytime temperature profiles in tall crop and forest canopies….”

    Introduction to Micrometeorology, S. Pal Arya, 2001

  • http://www.ecoengineers.com/ Steve Short

    “Here, we examine whether sub-canopy flow through a small gully in the vicinity of the flux tower was thermotopographically driven, and was linked to the flow divergence found above canopy. While flow in the gully was frequently aligned with the mean wind aloft, indicating dynamic coupling, there were periods when the wind in the gully appeared to be decoupled from the flow aloft and was consistent with thermotopographic flow forcings (including geometry, temperature gradient, and net radiation). During the leaf-off season, these episodes exhibited a classic thermotopographic pattern, with down-gully nighttime flow and up-gully daytime flow. However, during the leaf-on season, the pattern was reversed: during the daytime, flow was down-gully consistent with inversion conditions occurring below the dense leaf canopy; at night, flow was up-gully, consistent with below-canopy lapse conditions. The thermotopographic flow during the leaf-on season suggests horizontal flow convergence at night and divergence during the day, and is shown to be decoupled from the flow aloft. While this research focuses only on flow patterns and not explicitly on CO2 gradients or fluxes, these findings suggest that inferences about drainage flow/advection and corrections to flux measurements based on above-canopy conditions alone may be inappropriate.”

    Froelich and Schmid, 2006

  • jae

    Well now. Silence. Time for a new thought? The “consensus” hypothesis on the GHE is certainly suspect, given all the relevant information: i.e., no temperature increases for 12 years and absolutely no other empirical or theoretical evidence to support said nonsense. And since the Miskolczi hypothesis has been discredited by the experts here, maybe we should go back to my simpleton hypothesis that the “greenhouse effect” is nothing more than the ability of the Planet to store heat from one day to the next. And the corollary that IR radiation doesn't have a damn thing to do with it. It now looks like this is as good an hypothesis as any other. LOL.

  • http://www.ecoengineers.com/ Steve Short

    The latest report from Anthony Watts on the real state of the US Historical Climate Network (USHCN) contains the following text:

    “We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat.”

    “Check out the site survey photographs showing temperature stations next to brick and concrete walls, sited on or next to concrete, …..”

    “Check out the photographs from heat cameras showing concrete more than 10 degrees warmer than the air temperature.”

    What's all this Kuhnkat and Nick Stokes were telling us about the lack of near-surface temperature discontinuities? Clearly not a notion that Anthony Watts ‘warms to’!

  • Nick Stokes

    Well, Steve, evidence please! Where are those photos?

  • http://www.ecoengineers.com/ Steve Short
  • Nick Stokes

    OK, I saw lots of infrared photos, indicating that there are big instantaneous temperature differences between different surfaces, measured by infrared thermometry. Especially in a built or somewhat industrial environment, which relates to the point that AW is trying to make. But that doesn't imply a discontinuity. All it says is that you have surface heterogeneity, like hot pavement in the sun vs lawn. And that means you must have variation on a smaller spatial scale, over short periods of time.

    None of this relates to the original argument, which tried to stretch an approximate solution of the radiative transfer equations to claim a general temperature discontinuity between surface and atmosphere. Oddly, the original fallacious argument was advanced to try to discredit the Milne solution by saying that such a discontinuity is impossible, but now seems to have morphed into the credulous thinking that the inferred discontinuity is real.

  • kuhnkat

    Steve Short,

    Please state your definition of DISCONTINUITY in the physical world. Your examples have nothing to do with what I, and probably the others, think it is.

  • Alex Harvey

    This whole discussion about whether a temperature discontinuity is observed at the Earth’s surface or not (and it seems trivial to know that there either is or there isn’t depending on what you actually mean by ‘temperature discontinuity’) seems to me to be entirely beside the point. The temperature discontinuity that Miskolczi is talking about is the ‘radiative equilibrium temperature discontinuity’ and it’s not meant to be observed at the Earth’s surface in either the classical theory OR the Miskolczi theory — because the assumption of radiative equilibrium is supposed to break down at the convective surface of Earth in the classical theory.

    By the way, I believe that the earliest reference to the temperature discontinuity in the English literature in connection with the modern era of GCM modelling is in the seminal paper by Manabe and Möller 1961, “On the radiative equilibrium and heat balance of the atmosphere”, Monthly Weather Review, 89, 12, 503-532.

    http://docs.lib.noaa.gov/rescue/mwr/089/mwr-089-12-0503.pdf

    After explaining how they assumed rather than calculating the Earth’s surface temperature they go on to describe their earlier result (only available in German). I quote from pp. 518-519:

    In the computation by the matrix method, we allowed the possibility of a temperature discontinuity at the earth’s surface as was first obtained by Emden 1913, whereas in the present computation we did not. The magnitude of temperature discontinuity was very small, about 0.06 C. This is much smaller than the discontinuity of about 20 C which Emden obtained in his computation of radiative equilibrium based upon the assumption of gray radiation. Accordingly, no large error would be introduced by the neglect of this temperature jump at the earth’s surface, which we did in the present computation.

    Later in Manabe and Strickler 1964 (“Thermal Equilibrium of the Atmosphere with a Convective Adjustment”, J. Atmos. Sci., 21(4), pp. 361–385, p. 362):

    http://ams.allenpress.com/perlserv/?request=res-loc&uri=urn%3Aap%3Apdf%3Adoi%3A10.1175%2F1520-0469%281964%29021%3C0361%3ATEOTAW%3E2.0.CO%3B2

    Section 2a. Pure Radiative Equilibrium. … In the course of the computation, the temperature jump which theoretically exists between the atmosphere and the earth’s surface is smoothed out by the vertical finite difference representation of the equations of radiative transfer. Fortunately, the magnitude of the theoretical temperature jump is much smaller than would be the case if a gray assumption were made for the absorption and emission of radiation. This is due partly to the very strong absorption near the line centers, and also to the upward radiation from the earth’s surface through the nearly transparent regions in the line wings and through the window region of water vapor, which compensates for most of the net downward solar radiation at the earth’s surface. Accordingly, the condition of no heat storage at the earth’s surface could be satisfied radiatively by a temperature jump which is much smaller than that for a gray absorber. In the previous study [...the above-mentioned German language paper], this temperature jump at the surface turned out to be less than 1 C, depending on the amount of water vapor and other parameters. Thus, the neglect of the temperature jump would not produce a serious error in the results.

    Then in Manabe and Wetherald 1967 and more or less from then on it seems that it was assumed in GCM modelling that this radiative equilibrium temperature jump should be set to 0 and treated as empirically unobservable:

    Manabe and Wetherald 1967 “Thermal equilibrium of the atmosphere with a given distribution of relative humidity”, J. Atmos. Sci, 24(3), pp. 241-259:

    http://ams.allenpress.com/perlserv/?request=res-loc&uri=urn%3Aap%3Apdf%3Adoi%3A10.1175%2F1520-0469%281967%29024%3C0241%3ATEOTAW%3E2.0.CO%3B2

    …the radiative convective equilibrium of the atmosphere…should satisfy the following requirements: … (2) No temperature discontinuity should exist

    I should add that I am yet to find a single reference to Milne 1922 or indeed to anything to do with Milne in all of this literature; it’s widely known in the literature that the temperature discontinuity originated with Emden in 1913. It’s unclear that it can have had any effect in any GCM as all of them have assumed the temperature discontinuity would be negligible, even in the case of radiative equilibrium, which doesn’t seem to be the case of the earth’s surface.
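For the record, the size of the gray-atmosphere jump that Emden obtained can be recovered from the standard two-stream radiative-equilibrium profile. A hedged sketch (hemi-isotropic closure; tau_s = 4 is an arbitrary illustrative choice, and real non-gray gases give the much smaller jump Manabe and Strickler describe):

```python
# Sketch of the gray radiative-equilibrium "temperature jump" discussed by
# Emden (1913) and Manabe & Moller (1961).  Hemi-isotropic two-stream gray
# atmosphere, tau measured down from TOA:
#   sigma*T(tau)^4 = (F/2)*(1 + tau)      air temperature
#   sigma*T_g^4    = (F/2)*(2 + tau_s)    ground temperature
# so the surface jump obeys sigma*(T_g^4 - T_a^4) = F/2, whatever tau_s is.

T_e   = 255.0          # K, effective emission temperature (F = sigma*T_e^4)
tau_s = 4.0            # illustrative total gray optical depth (assumed)

T_air    = T_e * ((1.0 + tau_s) / 2.0) ** 0.25   # air adjacent to the ground
T_ground = T_e * ((2.0 + tau_s) / 2.0) ** 0.25   # the ground itself

print(round(T_air, 1), round(T_ground, 1), round(T_ground - T_air, 1))
```

The resulting jump of roughly 15 K is the same order as Emden's ~20 C; the point of the Manabe and Strickler non-gray calculation quoted above is precisely that realistic line absorption shrinks it to under 1 C.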

  • cohenite

    Doesn’t the surface discontinuity (or not) depend on time rather than on cliffs and hills? For instance, the air adjacent to the surface is heated by the surface but will be compressed by the air above it until it becomes warmer than that higher air. The most the surface can warm the adjacent air is to its own temperature, at which time the adjacent air must be warmer than the higher air and will convectively ascend, to be replaced by the cooler descending air. Thus at ‘lift-off’ there is no discontinuity, but at the other ‘refueling’ stages there is.

    • Jan Pompe

      Hi cohenite

      Nick might say some things that are in my opinion silly, but he’s not quite that silly. Nick was being metaphorical: the cliff refers to a singularity or, if you prefer, an infinite gradient, which really does not lend itself to being divided into “hills” by altering the scale.

      • Nick Stokes

        Er, thank you Jan. Quite so. But the “hill” refers to Steve’s odd belief that an inversion layer is a discontinuity (hill – hilltop – change of slope – inversion).

  • pochas

    Guys:

    The film of air in contact with a solid surface is always at the temperature of the solid surface – no exceptions. The temperature gradient next to the surface may be extreme (glowing electric hotplate) or zero (thermal equilibrium). There are ways of calculating this gradient, but all are empirical and not based on first principles.

    The temperature discontinuity arises in math-model-land when calculations divide the atmosphere up into layers and one calculates from the top down, making no a priori assumptions about conditions at the surface. If you end up with A_A != E_D you have a problem, that is, an unphysical temperature discontinuity. What you should do when you discover this is go and fix your model.

    There is no such thing as a temperature discontinuity in nature – only in mathematical models.

  • Jan Pompe

    Hi cohenite. Nick might say some things that are in my opinion silly, but he's not quite that silly. Nick was being metaphorical: the cliff refers to a singularity, or if you prefer an infinite gradient, which really does not lend itself to being divided into “hills” by altering the scale.

  • Nick Stokes

    Er, thank you, Jan. Quite so. But the “hill” refers to Steve's odd belief that an inversion layer is a discontinuity (hill – hilltop – change of slope – inversion).

  • Alex Harvey

    Pochas,

    Very well, so whose mathematical model actually has a temperature discontinuity?

    Here is Lindzen in 1994:

    As was noted long ago by Emden 1913, radiative equilibrium profiles are intrinsically impossible since they lead to large decreases in temperature with height with respect to buoyant convection.

    http://www-eaps.mit.edu/faculty/lindzen/191_ach.pdf

    Now the thing is, everyone knows and agrees that the convective adjustment was just a hack, and that even the newest parameterisations of convection are also hacks.

    If this is all Miskolczi is telling us, then the theory is not original.

    Or, if he is telling us something else, then what is it? Please tell me what the actual problem is that Miskolczi has solved in his paper? Miklos posted up quotations at his website from Milne and Eddington and we were all led to believe that Milne had misinterpreted Eddington 1916 and Schwarzschild 1906, and that all climatologists since have lacked the brains and initiative to go back and check over Milne’s assumptions.

    This is unquestionably wrong now, so then what is the error and who made it? Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake. Now, that’s absurd. I know for a fact that none of us have read Emden 1913. So what, was it Schwarzschild who made the mistake?

    If no one knows the answer, can’t we all just let this go & start focusing on what the real errors are in AGW theory?

    • http://www.ecoengineers.com/ Steve Short

      “Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake.”

      That’s another real hoot just like the one where I heard someone say we merely had to wait around for all of NASA to bow down, admit they had all been idiots, and that only Miskolczi was the ‘one true prophet’ of LBL radiation codes.

      Pigs WILL fly – and into space too!

    • http://www.ecoengineers.com/ Steve Short

      “If no one knows the answer, can’t we all just let this go & start focusing on what the real errors are in AGW theory?”

      Uhhhh, things like:

      http://theresilientearth.com/?q=content/airborne-bacteria-discredit-climate-modeling-dogma

    • Jan Pompe

      “Jan told me the other day that the theory now is that oh, well, every single astrophysicist and climatologist since Schwarzschild has independently made the same mistake. Now, that’s absurd.”

      Well, Alex, I didn’t realise that you had misunderstood what I said so completely; perhaps I should say nothing at all to you.

      People keep making the same mistake because they keep on kowtowing to those they think are giants instead of standing on their shoulders. And since you are so knowledgeable, perhaps you can explain what both Pat Cassen and Nick Stokes have dodged: how we can get two boundary conditions for a first-order differential equation where the variable in question is unbounded.

      I await your answer with bated breath.

      • Nick Stokes

        Jan, this is tiresome. I’ve spelt it out many times. Your turn. What is the ode with 2 bc? What are they? What variable is unbounded? Give a proper argument instead of muddled allusions.

        • Jan Pompe

          “Jan, this is tiresome.”

          I agree Nick.

          Here, the first definition:
          http://www.answers.com/topic/semi-infinite

          semi-infinite: unbounded in one direction or dimension.

          Second one:
          http://en.wikipedia.org/wiki/Laplace_transform#Formal_definition

          You will no doubt notice that little ‘8’ on its side: that is the symbol for infinity, which means that the integral is unbounded in one direction. In this case the variable is time, and that is unbounded, i.e. it goes on forever.

          You will no doubt notice that the bilateral Laplace transform is unbounded in both directions.

          In the radiation transport equations the variable is not time but tau, the optical depth. In the classical solution it is this variable that is unbounded, i.e. infinite.

          Now, the two boundary conditions are the two values implied for the surface temperature for a finite tau where tau was assumed infinite: in equations 15 & 16 of Miskolczi’s paper, or 1 & 2 of Lorenz and McKay, for the differential equations solved in the semi-infinite case.

          This is pretty elementary stuff and should not be a problem at all for someone who has a PhD in control mathematics.
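
To make the finite vs. semi-infinite distinction concrete, here is the standard Schwarzschild formal solution as found in radiative transfer textbooks (my own gloss, with angle factors suppressed; not a quotation from Miskolczi or Lorenz & McKay):

```latex
% Upward intensity at TOA (\tau = 0) with a ground boundary at \tau = \tau_0:
I(0) \;=\; I_g\, e^{-\tau_0} \;+\; \int_0^{\tau_0} S(t)\, e^{-t}\, \mathrm{d}t .
% Semi-infinite atmosphere: letting \tau_0 \to \infty kills the boundary
% term I_g e^{-\tau_0}, leaving the classical solution:
I(0) \;=\; \int_0^{\infty} S(t)\, e^{-t}\, \mathrm{d}t .
```

The exp(-tau_0) boundary term is what distinguishes a finite-depth solution from the semi-infinite one.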

          • Nick Stokes

            Jan,
            Again, you’re just not giving a connected argument. Yes, I know what semi-infinite means, and what a Laplace transform is. But where are they used in this theory?
            FM says he’s using a “semi-infinite” solution in Eq 15. But he isn’t. tau is there in the equation, and seems perfectly finite. He even, in the leadup to 16, defines a value of tau, tau_C, at the surface.

            L&M say nothing about their model being semi-infinite. Again tau is there in the equation, and they define tau_0 as the value at the surface. And they include it in their equations in the normal way.

            And you still haven’t addressed this nonsense about two boundary conditions.

          • Jan Pompe

            “Yes, I know what semi-infinite means, and what a Laplace transform is.”

            Which is what I would expect.

            “L&M say nothing about their model being semi-infinite.”

            I don’t expect they should have to, for someone who “knows what semi-infinite means, and what a Laplace transform is”, to know that it is.

          • Nick Stokes

            Again, these are useless answers. You’ve said L&M assume tau semi-infinite. Back it up!

          • Jan Pompe

            “Again, these are useless answers. You’ve said L&M assume tau semi-infinite. Back it up!”

            I don’t understand, Nick. I thought you had a PhD in control mathematics. I only did one semester of control mathematics, and I can see that the two L&M equations 1 & 2, as well as the two Goody and Yung ones Miskolczi quotes (16 & 17), are solutions of equation 12 assuming tau varies from ε > 0 to infinity.

            How is it that you can’t? At the very least, L&M eqns 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed.

          • Nick Stokes

            I can’t see it (and don’t believe it) and you’re not helping. Where does this exp(-tau) come from? Both L&M and FM specify an explicit finite tau range – 0 to tau_0 for L&M and 0 to tau_C for FM.

          • Jan Pompe

            Tell me, Nick, did you notice what happened to the equation when Miskolczi specified a finite tau in Appendix B? Do you notice the difference between his transfer function and, say, either of those in eqns 16 & 17 of the same paper? Anyone who “knows what semi-infinite means, and what a Laplace transform is” should be able to tell us straight away. Also, you can use your superior PhD training in control mathematics: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

          • Nick Stokes

            Again, in Eq 16 he has a finite value tau_A at ground. The difference in Appendix B is that he’s applying his one BC at the ground rather than at TOA. There’s no issue of finiteness there.
            Of course, his problem then is that you can’t get it right in both places. Right at ground – wrong at TOA. Big problem.

          • http://www.ecoengineers.com/ Steve Short

            That is how it has seemed to me too.
            BTW for the statement: “At the very least L&M eqn 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed.” –
            I can’t see that bunging in exp(-tau) has any math logic to it. What am I missing?
            Please explain. Just the math.

  • http://www.ecoengineers.com/ Steve Short

    No, I don't think you are silly either, Nick.

    But putting Jan's uncharacteristic little bout of empathy aside (is this not the winter of our discontent….), you can get really silly when you pretend you know what goes on inside the canopies of forests. I strongly suggest you brief yourself on the relationships between aerodynamic resistance, canopy resistance, Vapor Pressure Deficit and Bowen Ratio etc. throughout the day for deciduous forests, viz:

    http://books.google.com.au/books?id=KaJHBv9FbYI

  • Nick Stokes

    Steve, I don't think I've claimed knowledge of what goes on in canopies (although I could – in CSIRO my first job was a four year stint working for John Philip – known for the soil plant atmosphere continuum).

    I fully accept that during the day, when sunlight is intercepted by the canopy, heat must flow downwards from canopy to Earth. I just can't see what it's relevant to.

    My objection is just the meaning of words. An inversion (change of gradient) is not a discontinuity.

  • Alex Harvey

    Jan,

    Sorry, I may have misunderstood you, but if not to suppose that all astrophysicists and climatologists alike have made the same error, then how else could you answer this? That one great astrophysicist (Milne) made a mistake, and that others took his word for it simply because no one ever imagined he’d make such a basic error — that was a believable story. So what is the story now?

    Pochas obviously doesn’t think it matters who really made the temperature discontinuity mistake, and maybe he is right? But for heaven’s sake, doesn’t this deserve some kind of explanation?

    • kuhnkat

      Alex,

      Milne made no error. He was working with Solar type atmospheres and the computational shortcut worked fine there. The fact that early atmospheric types made the mistake of ASSUMING that this shortcut would be close enough for gubmint work here on earth and other planetary atmospheres is another issue entirely.

    • Jan Pompe

      Hi Alex,

      You have the story more or less straight now. What I objected to was “has independently”: I didn’t think there was anything independent about it; it was more a case of people going along with it because it was convenient. How come it could go on so long?

      It’s what people wanted to hear, as Miskolczi quoted from Milne’s paper:
      “Assumption of infinite thickness involves little or no loss of generality”. Milne then goes on to provide the two-stream fudge, leaving no room for the IR window (transmission of IR through the atmosphere), which was in fact discovered some time later as more work was done on absorption coefficients.

      • Alex Harvey

        Jan,

        You can’t have it both ways.

        Either astrophysicists/climatologists have independently made the same mistake (and Milne 1922 does seem to be largely independent of Emden 1913) or one person made the mistake first (Schwarzschild??) and others copied (i.e. there was some kind of dependency).

        So which was it: who went along with whom because it was convenient? My bet is, you don’t know the answer to this question, which proves that you, like everyone else here, have taken this whole thing on faith because it says so in M’s 2007 paper.

        • Jan Pompe

          Alex, excuse me?

          Where does Emden or his brother-in-law say:

          “Assumption of infinite thickness involves little or no loss of generality”

          or anything like it?

          Do you honestly believe that anyone could read through that 1922 paper of Milne’s, find that quote and the fudge that everyone subsequently has used, without having heard of Emden, Schwarzschild and Gold, as you suggested to me earlier?

          Now kindly look at the equations from which Emden deduced his temperature discontinuity here, take a close look at what Milne comes up with, and then compare it with the one quoted from Goody and Yung, and (I have a vain hope still) you will soon see why Milne and not Emden was cited as the source of the error.

  • pochas

    Jan:
    “Pochas obviously doesn’t think it matters who really made the temperature discontinuity mistake, and maybe he is right? But for heaven’s sake, doesn’t this deserve some kind of explanation?”

    pochas:
    Another mistake that appears to have a life of its own is the constant relative humidity assumption that is behind this whole AGW scare. Spencer, Lindzen and Miskolczi have all argued against this, Lindzen and Spencer with data and analysis, Miskolczi with the hypothesis being discussed here. Will this get into the models? Not while they are funded with US government AGW study grants. (I know, RH is now allowed to vary a little.) This mistake puts the temperature discontinuity mistake in the shade.

    These mistakes have wasted billions of dollars, but because of funding considerations they are bullet proof.

    Alex:
    “Now the thing is, everyone knows and agrees that the convective adjustment was just a hack, and that even the newest parameterisations of convection are also hacks.

    If this is all Miskolczi is telling us, then the theory is not original.

    Or, if he is telling us something else, then what is it?”

    pochas:
    It seems as though “this is nothing new” is often heard as the culprit covers his tracks. What M has done is to write a paper that presents a method, with constant tau and surface temperature equilibrium (let’s put Kirchhoff to bed), which, if it stands, is a strong refutation of alarmist AGW theory.

    I don’t think it’s necessary to debate eq (7) or the Virial rule any further. They don’t really matter.

    • Nick Stokes

      pochas:
      Another mistake that appears to have a life of its own is the constant relative humidity assumption that is behind this whole AGW scare. Spencer, Lindzen and Miskolczi have all argued against this, Lindzen and Spencer with data and analysis, Miskolczi with the hypothesis being discussed here. Will this get into the models?

      This silly furphy certainly seems to have a life of its own. There is no constant relative humidity assumption behind AGW. I know of no Lindzen/Spencer argument on this. Miskolczi doesn’t mention relative humidity anywhere.
      Please provide some evidence before propagating this nonsense.

      • jae

        Hey, Nicko: From NASA:

        “In climate modeling, scientists have assumed that the relative humidity of the atmosphere will stay the same regardless of how the climate changes. In other words, they assume that even though air will be able to hold more moisture as the temperature goes up, proportionally more water vapor will be evaporated from the ocean surface and carried through the atmosphere so that the percentage of water in the air remains constant. Climate models that assume that future relative humidity will remain constant predict greater increases in the Earth’s temperature in response to increased carbon dioxide than models that allow relative humidity to change. The constant-relative-humidity assumption places extra water in the equation, which increases the heating.”

        http://earthobservatory.nasa.gov/Features/WaterVapor/water_vapor3.php

        Now, it’s your turn to provide a linky.
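
Whatever one makes of the modelling claim, the physical statement in the quoted passage, that warmer air can hold proportionally more water vapor, is easy to quantify. Here is a sketch using the Magnus approximation for saturation vapor pressure; the coefficients are one common published choice, not anything taken from the NASA page.

```python
import math

# Sketch of the physical claim in the quoted NASA passage: warmer air can
# hold more water vapour.  Saturation vapour pressure over water via the
# Magnus approximation (coefficients are one common published choice).
def e_sat_hpa(t_celsius):
    """Saturation vapour pressure over water, in hPa (Magnus form)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional growth per kelvin near 15 C -- roughly 6-7 %/K.  Holding
# relative humidity constant converts this directly into extra absolute
# water vapour in a warmer atmosphere, hence the amplified warming.
growth_per_K = e_sat_hpa(16.0) / e_sat_hpa(15.0) - 1.0
```

That 6-7 % per kelvin Clausius–Clapeyron scaling is exactly what a constant-relative-humidity assumption turns into a positive water-vapor feedback, which is why the assumption matters so much to the debate below.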

        • Nick Stokes

          OK, Jae
          We’ve seen this one before. It’s a science ed site, and they just got it wrong. Here is my linky. It is GISS Model E output. It’s interactive, so you have to set map type to trend, and quantity to relative humidity. You can play with different time periods. The results are interesting, but RH is definitely not constant.

          You can of course go here and look at the code. I’ve done that. The transport model for water is quite conventional. No sign of any constant RH assumption.

          • kuhnkat

            Nick,

            you “proved” that the GISS Model E does not assume, or output, a constant humidity.

            I DID notice that the only place it showed decreasing humidity was in the troposphere from 1980-2008, excluding the high latitudes and the equator.

            Since I think most people agree that the humidity went UP in the troposphere, and definitely did not go DOWN, the model is still WRONG!!!!

          • Nick Stokes

            Well, the goalposts are moving. But you’d better sort out which kind of humidity you are talking about. Check out specific humidity.

          • kuhnkat

            If I could move the earth, I would!!!

            HAHAHAHAHAHAHAHAHA

            Unfortunately for your model the Specific Humidity is rising in the strat. Again, doesn’t match the earth.

            How about some arm waving for the reduction of water vapor by increase in CO2?? I ran across this guy looking for other things:

            http://www.geocities.com/profadrian/ScienceOfGlobalWarming.html

            Scan down to the Forcing Concept section. Sounds almost too simple to be real!!

            The way I understand what he is saying is that increase in ANY gas to the atmosphere would tend to reduce water vapor!!

            Cheers!!!

      • Alex Harvey

        Nick,
        C’mon…. do some reading…. the assumption of constant relative humidity, unlike temperature discontinuity, is all throughout the modern literature on GCM modelling, starting with the Manabe & Wetherald paper I pasted above. Lindzen’s arguments against it are in the last Lindzen paper I posted. If you then look in the ECHAM5 manual I posted, and followed that to Tiedtke 1989 cited therein, you can see for yourself that there is still a very unphysical hack in models for dealing with convection that goes back at least as far as 1989 (ironically, Lindzen himself seems to have had a significant role in creating the new hack so he’s eminently qualified to comment on its shortcomings).

        • Nick Stokes

          Alex,
          Yes, early models like M&W did make assumptions of that kind. But from the mid 70’s, models used a world grid and solved the transport equations directly. Then assumptions about RH were not only unnecessary, but unfeasible, since they would override conservation of mass.

          I saw nothing in the Lindzen 1999 chapter about RH in modern GCM’s. Convection adjustments are something different. Again his reference here seems to be to 60’s papers.

      • Jan Pompe

        Moved up to avoid too much thinning

        “wrong at TOA. Big problem.”

        For mathematicians, maybe. That the equations don’t hold where there are no absorbers, or air for that matter, is no problem at all: just exclude the point at tau=0, like they do here.

        Now you have evaded the question: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

        I don’t know about anyone else, but I don’t see any problem with a temperature discontinuity between the finite source and an infinite sink at the TOA.

        • Nick Stokes

          It’s not a temperature equation, it’s a flux equation. And getting the outgoing flux wrong is a problem.
          In App B he’s applying conditions at ground. If tau is infinite, there is no ground. But you’re dodging the questions – L&M and FM both specify tau at ground (and for FM, not just in App B). Where’s the semi-infinite assumption?

          • Jan Pompe

            “And getting the outgoing flux wrong is a problem.”

            OLR = f * Sg, i.e. OLR = 2*Sg/(1 + tau + exp(-tau)) = 2*Sg/2 when tau = 0,

            so when tau is 0, OLR = Sg. What a surprise.

            Now again: what would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?
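
The arithmetic in the exchange above is easy to check numerically. A sketch of the transfer function as written in the comment, f(tau) = 2/(1 + tau + exp(-tau)); I am assuming that form as stated in the thread rather than asserting it is an exact transcription of the paper's equation:

```python
import math

# Sketch: the transfer function as written in the comment above,
# f(tau) = 2 / (1 + tau + exp(-tau)).  Assumed form, taken from the
# thread; check against the paper before relying on it.
def f(tau):
    return 2.0 / (1.0 + tau + math.exp(-tau))

f0 = f(0.0)     # 2/(1 + 0 + 1) = 1.0: OLR = Sg for a transparent atmosphere
f187 = f(1.87)  # close to 2/3 at the tau = 1.87 discussed in the head post
```

So at tau = 0 the function does return the full surface flux as the OLR, and at tau = 1.87 it gives roughly the f = 2/3 quoted in the head post.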

          • Jan Pompe

            By the bye, Nick, have you by any chance noticed that when tau = 0 the TOA IS the ground, and roughly the same conditions apply as on the moon?

          • Nick Stokes

            Nonsense. tau is just an altitude measure, like pressure. It starts at tau=0 at TOA – any atmosphere.
            And (prev comment) at OLR, no, the equation you’ve written involves tau_A, the tau at ground. You can’t set that to 0. What you have to do is put tau=0 in eq 21.
            And again, the B eqs would be nonsense if he set tau to infinity. He didn’t, and for this purpose (BC at ground) he can’t.

          • Jan Pompe

            “tau is just an altitude measure, like pressure.”

            You can take it that way if you want, but you will be wrong. tau is optical depth, which does vary with height, but tilde-tau is the average whole-atmosphere tau; tilde-tau_A is the optical depth of the entire column of the atmosphere, not just a layer of it. Regardless of what you think the mathematical meaning of the integrals is, I have just given you the physical meaning. You did notice that he had a different dummy variable in the integral, I hope.

            Equation 20 and figure 3 are an equation and a graph of the effect on OLR and Bg of the average tau = tilde-tau_A

            I had thought you had realised this by now.

          • Nick Stokes

            Quite wrong. Look at what FM says after eq 15:
            “where tilde-τ is the flux optical depth”
            “At the upper boundary tilde-τ = 0”
            Nothing whole atmosphere about that. He subscripts A to show ground values, which then means whole atmosphere.

          • Jan Pompe

            “And again, the B eqs would be nonsense if he set tau to infinity.”

            It didn’t stop Milne. Page 897 of his 1922 paper:

            “Assumption of infinite thickness involves little or no loss of generality; we could if we liked, consider a mass of finite thickness with an inner boundary consisting of a black radiating surface, but since our results will only involve the optical thickness, we need only suppose the absorption coefficient or the density to become very suddenly large at the assigned depth in order to deduce the case of an inner boundary from the solution for an infinitely thick slab of material”

          • Nick Stokes

            There’s no indication this quote is relevant. I don’t think I have Milne’s paper, and Zagoni isn’t pushing it any more, but as I recall, he was treating radiation incident on a planet, not coming from the surface. You’ll have to do better than that.

          • Jan Pompe

            “I don’t think I have Milne’s paper,”

            Then get it; Google is your friend.

            “You’ll have to do better than that.”

            No, Nick, you first: read the paper and then answer what you have been evading.

            What would equations B4, B5, B6, B7, B8, B9, B10 and, last but not least, B11 look like had he specified tau -> 0 to tau -> infinity?

            Answer that and the problem with Milne will become clear to you. It’s really immaterial whether we are talking about inbound or outbound; the atmosphere of the earth is nowhere infinitely thick and cannot be sensibly modelled as an “infinitely thick slab of material”.

    • http://www.ecoengineers.com/ Steve Short

      “I don’t think its necessary to debate eq (7) or the Virial rule any further. They don’t really matter.”

      I agree. I think it is all about the effects of rising CO2 and nutrient pollution on biota, on humidity, on aerosols, on CCN, on clouds, on albedo, Bowen Ratio (B), on Evaporative Fraction (EF), on latent heat (LH), on latent heat escaping TOA (LH_U), and on sensible heat (SH) fluxes etc., etc.

      Have some fun:

      https://download.yousendit.com/U0d4K2VqMGN1YlBIRGc9PQ

  • jae

    I think there is no discontinuity, but often a VERY steep gradient over a VERY thin layer. In humid warm areas (15-30 C), the air directly over the surface has an AVERAGE absolute humidity that is 70-80 percent of the saturation level. It seems to me that this could occur only if there were a layer of water at the surface that is very close in temperature to the air above. Maybe a very thin layer.

  • Nick Stokes

    pochas:Another mistake that appears to have a life of its own is the constant relative humidity assumption that is behind this whole AGW scare. Spencer, Lindzen and Miskolczi have all argued against this, Lindzen and Spencer with data and analysis, Miskolczi with the hypothesis being discussed here. Will this get into the models?This silly furphy certainly seems to have a life of its own. There is no constant relative humidity assumption behind AGW. I know of no Lindzen/Spencer argument on this. Miskolczi doesn't mention relative humidity anywhere. Please provide some evidence before propagating this nonsense.

  • Nick Stokes

    Jan, this is tiresome. I've spelt it out many times. Your turn. What is the ode with 2 bc? What are they? What variable is unbounded? Give a proper argument instead of muddled allusions.

  • jae

    Hey, Nicko: From NASA:”In climate modeling, scientists have assumed that the relative humidity of the atmosphere will stay the same regardless of how the climate changes. In other words, they assume that even though air will be able to hold more moisture as the temperature goes up, proportionally more water vapor will be evaporated from the ocean surface and carried through the atmosphere so that the percentage of water in the air remains constant. Climate models that assume that future relative humidity will remain constant predict greater increases in the Earth’s temperature in response to increased carbon dioxide than models that allow relative humidity to change. The constant-relative-humidity assumption places extra water in the equation, which increases the heating.”http://earthobservatory.nasa.gov/Features/Water…Now, it's your turn to provide a linky.

  • http://www.ecoengineers.com/ Steve Short

    “I don't think its necessary to debate eq (7) or the Virial rule any further. They don't really matter.”

    I agree. I think it is all about the effects of rising CO2 and nutrient pollution on biota, on humidity, on aerosols, on CCN, on clouds, on albedo, on Bowen Ratio (B), on Evaporative Fraction (EF), on latent heat (LH), on latent heat escaping TOA (LH_U), and on sensible heat (SH) fluxes etc., etc.

    Have some fun: https://download.yousendit.com/U0d4K2VqMGN1YlBI

  • kuhnkat

    Alex,

    Milne made no error. He was working with solar-type atmospheres and the computational shortcut worked fine there. The fact that early atmospheric types made the mistake of ASSUMING that this shortcut would be close enough for gubmint work here on earth and other planetary atmospheres is another issue entirely.

  • Nick Stokes

    OK, Jae

    We've seen this one before. It's a science ed site, and they just got it wrong. Here is my linky. It is GISS Model E output. It's interactive, so you have to set map type to trend, and quantity to relative humidity. You can play with different time periods. The results are interesting, but RH is definitely not constant.

    You can of course go here and look at the code. I've done that. The transport model for water is quite conventional. No sign of any constant RH assumption.

  • Jan Pompe

    “Jan, this is tiresome.”

    I agree Nick.

    Here's the first definition: http://www.answers.com/topic/semi-infinite — semi-infinite: unbounded in one direction or dimension.

    Second one: http://en.wikipedia.org/wiki/Laplace_transform#…

    You will no doubt notice the little '8' on its side, the symbol for infinity, which means that the integral is unbounded in one direction; in this case the variable is time, and that is unbounded, i.e. it goes on forever. You will also notice that the bilateral Laplace transform is unbounded in both directions.

    In the radiation transport equations the variable is not time but tau, the optical depth. In the classical solution it is this variable that is unbounded, i.e. infinite. Now the two boundary conditions are the two values implied for the surface temperature for a finite tau, where tau was assumed infinite in equations 15 & 16 in Miskolczi's paper, or 1 & 2 in Lorenz and McKay, for differential equations solved for the semi-infinite case.

    This is pretty elementary stuff and should not be a problem at all for someone who has a PhD in control mathematics.
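
For reference, the two transforms Jan is contrasting can be written out in standard textbook notation (generic forms, nothing specific to either paper under discussion):

```latex
% One-sided (unilateral) Laplace transform: the variable t runs over
% the semi-infinite interval [0, \infty)
F(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt

% Two-sided (bilateral) transform: unbounded in both directions
F_B(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\, dt
```

In the radiative-transfer analogy Jan is drawing, the integration variable is the optical depth tau rather than time.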

  • Jan Pompe

    Hi Alex,

    You have the story more or less straight now. What I objected to was “has independently”: I didn't think there was anything independent about it, but more a case of people going along with it because it was convenient. How come it could go on so long? It's what people wanted to hear, as Miskolczi quoted from Milne's paper: ”Assumption of infinite thickness involves little or no loss of generality”. Milne then goes on to provide the two-stream fudge, leaving no room for the IR window (or transmission through the atmosphere of IR), which was in fact discovered some time later as more work was done on absorption coefficients.

  • kuhnkat

    Nick,

    You “proved” that the GISS Model E does not assume, or output, a constant humidity.

    I DID notice that the only place it showed decreasing humidity was in the troposphere from 1980-2008, excluding the high latitudes and the equator.

    Since I think most people agree that the humidity went UP in the trop, but definitely did not go DOWN, the model is still WRONG!!!!

  • Nick Stokes

    Jan,

    Again, you're just not giving a connected argument. Yes, I know what semi-infinite means, and what a Laplace transform is. But where are they used in this theory?

    FM says he's using a “semi-infinite” solution in Eq 15. But he isn't. tau is there in the equation, and seems perfectly finite. He even, in the leadup to 16, defines a value of tau, tau_C, at the surface.

    L&M say nothing about their model being semi-infinite. Again tau is there in the equation, and they define tau_0 as the value at the surface. And they include it in their equations in the normal way.

    And you still haven't addressed this nonsense about two boundary conditions.

  • Nick Stokes

    Well, the goalposts are moving. But you'd better sort out which kind of humidity you are talking about. Check out specific humidity.

  • Jan Pompe

    “Yes, I know what semi-infinite means, and what a Laplace transform is.”

    Which is what I would expect.

    “L&M say nothing about their model being semi-infinite.”

    I don't expect they should have to, for someone who knows “what semi-infinite means, and what a Laplace transform is”, to know that it is.

  • Nick Stokes

    Again, these are useless answers. You've said L&M assume tau semi-infinite. Back it up!

  • Alex Harvey

    Nick,

    C'mon… do some reading… the assumption of constant relative humidity, unlike the temperature discontinuity, is all throughout the modern literature on GCM modelling, starting with the Manabe & Wetherald paper I pasted above. Lindzen's arguments against it are in the last Lindzen paper I posted. If you then look in the ECHAM5 manual I posted, and follow that to Tiedtke 1989 cited therein, you can see for yourself that there is still a very unphysical hack in models for dealing with convection that goes back at least as far as 1989 (ironically, Lindzen himself seems to have had a significant role in creating the new hack, so he's eminently qualified to comment on its shortcomings).

  • Jan Pompe

    “Again, these are useless answers. You've said L&M assume tau semi-infinite. Back it up!”

    I don't understand, Nick. I thought you had a PhD in control mathematics; I only did one semester of control mathematics and I can see that the two L&M equations 1 & 2, as well as the two Goody and Yung ones Miskolczi quotes (16 & 17), are solutions of equation 12 assuming tau varies from some epsilon > 0 to infinity. How is it that you can't? At the very least L&M eqns 1 & 2 and M's 16 & 17 should have an exp(-tau) term if finite tau was assumed.

  • Alex Harvey

    Jan,

    You can't have it both ways. Either astrophysicists/climatologists have independently made the same mistake (and Milne 1922 does seem to be largely independent of Emden 1913), or one person made the mistake first (Schwarzschild??) and others copied (i.e. there was some kind of dependency).

    So which was it: who went along with whom because it was convenient? My bet is, you don't know the answer to this question, which proves that you, like everyone else here, have taken this whole thing on faith because it says so in M's 2007 paper.

  • Nick Stokes

    Alex,

    Yes, early models like M&W did make assumptions of that kind. But from the mid 70's, models used a world grid and solved the transport equations directly. Then assumptions about RH were not only unnecessary, but unfeasible, since they would override conservation of mass. I saw nothing in the Lindzen 1999 chapter about RH in modern GCMs. Convection adjustments are something different. Again his reference here seems to be to 60's papers.

  • Nick Stokes

    I can't see it (and don't believe it) and you're not helping. Where does this exp(-tau) come from? Both L&M and FM specify an explicit finite tau range – 0 to tau_0 for L&M and 0 to tau_C for FM.

  • Jan Pompe

    Alex,

    Excuse me? Where does Emden or his brother-in-law say “Assumption of infinite thickness involves little or no loss of generality”, or anything like it?

    Do you honestly believe that anyone could read through that 1922 paper of Milne's, find that quote and the fudge that everyone subsequently has used, without having heard of Emden, Schwarzschild and Gold, as you suggested to me earlier?

    Now kindly look at the equations from which Emden deduced his temperature discontinuity here, take a close look at what Milne comes up with, and then compare it with the one quote from Goody and Yung, and you'll soon see (I have a vain hope still) why Milne and not Emden was cited as the source of the error.

  • Jan Pompe

    Tell me Nick, did you notice what happened to the equation when Miskolczi specified a finite tau in appendix B? Do you notice the difference between his transfer function and, say, either of those in eqns 16 & 17 of the same paper? Anyone who knows “what semi-infinite means, and what a Laplace transform is” should be able to tell us straight away. Also you can use your superior PhD-in-control-mathematics training to tell us what equations B4, B5, B6, B7, B8, B9, B10 and last but not least B11 would look like had he specified tau -> 0 to tau -> infinity.

  • Nick Stokes

    Again, in Eq 16 he has a finite value tau_A at ground. The difference in appendix B is that he's applying his one BC at the ground rather than at TOA. There's no issue of finiteness there.

    Of course, his problem then is that you can't get it right in both places. Right at ground, wrong at TOA. Big problem.

  • Alex Harvey

    Jan,

    That is absurd.

    1.

    Once again:

    M07: “…About 80 years ago Milne stated: “Assumption of infinite thickness involves little or no loss of generality”, and later, in the same paper, he created the concept of a secondary (internal) boundary (Milne, 1922). He did not realize that the classic Eddington solution [Alex -- a reference to Eddington 1916?] is not the general solution of the bounded atmosphere problem and he did not re-compute the appropriate integration constant. This is the reason why scientists have problems with a mysterious surface temperature discontinuity and unphysical solutions, as in Lorenz and McKay (2003)…”

    Okay, do we agree that these are M’s actual words? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 can not have influenced words written in 1913? And finally, do you admit that it is well-documented (anyone can see this from the very Bateman link you just posted) that the “mysterious” temperature discontinuity originated in Emden 1913? So, therefore, Miskolczi was WRONG on this point. Will you please concede this so that there is some sanity in the conversation?

    2.

    Can you list all the Schwarzschild papers you have read and where I can find them?
    Can you tell me how you can know what is written in Emden since you can’t read German?

    • Jan Pompe

      Alex “? Do we agree that 1913 came before 1922? Can you accept that words written in 1922 can not have influenced words written in 1913?”

      Yes I agree, but look at the equations. Yes, Emden did get a temperature discontinuity, but the equations derived by Emden are not the same as those derived by Milne. Milne was the one who was followed, so what Emden might have thought or done is quite irrelevant.

      I might even go so far as to say that even though Emden preceded Milne, and Milne did look at his paper, and Emden did find a temperature discontinuity, the influence of Emden on Milne was in fact minimal.

    • Jan Pompe

      Alex you’ve got to stop this hero worship of Emden.

      “Can you tell me how you can know what is written in Emden since you can’t read German?”

      I don’t read German well but get by. It’s not necessary anyway: unless Bateman miscopied the equations, what Emden did is of no interest to us. I have no trouble reading the equations; they are the same in all languages.

  • Jan Pompe

    Moved up to avoid too much thinning.

    “wrong at TOA. Big problem.”

    For mathematicians, maybe. That the equations don't hold where there are no absorbers, or air for that matter, is no problem at all; just exclude the point at tau=0, like they do here.

    Now you have evaded the question: what would equations B4, B5, B6, B7, B8, B9, B10 and last but not least B11 look like had he specified tau -> 0 to tau -> infinity?

    I don't know about anyone else, but I don't see any problem with a temperature discontinuity between the finite source and an infinite sink at the TOA.

  • Nick Stokes

    It's not a temperature equation, it's a flux equation. And getting the outgoing flux wrong is a problem.

    In App B he's applying conditions at ground. If tau is infinite, there is no ground. But you're dodging the questions: L&M and FM both specify tau at ground (and for FM, not just in App B). Where's the semi-infinite assumption?

  • Jan Pompe

    “And getting the outgoing flux wrong is a problem.”

    OLR = f * Sg, i.e. OLR = 2*Sg/(1 + tau + exp(-tau)) = 2*Sg/2 = Sg when tau = 0 (since exp(0) = 1 makes the denominator 2); so when tau is 0, OLR = Su. What a surprise.

    Now again: what would equations B4, B5, B6, B7, B8, B9, B10 and last but not least B11 look like had he specified tau -> 0 to tau -> infinity?
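
For what it's worth, the arithmetic here is easy to check numerically. A minimal sketch, taking the thread's transfer function f = 2/(1 + tau + exp(-tau)) at face value (Su being the surface upward flux):

```python
import math

def transfer(tau):
    # Transfer function as quoted in the thread: OLR = f * Su,
    # with f = 2 / (1 + tau + exp(-tau)).
    return 2.0 / (1.0 + tau + math.exp(-tau))

# At tau = 0 the denominator is 1 + 0 + 1 = 2, so f = 1 and OLR = Su,
# i.e. with no absorbers the surface flux escapes unattenuated.
print(transfer(0.0))             # 1.0
# At the tau ~ 1.87 discussed in the head post, f drops to about 0.66.
print(round(transfer(1.87), 3))  # 0.661
```

The tau = 0 limit is exactly the "moon" case Jan alludes to a couple of comments later.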

  • Jan Pompe

    By the bye Nick have you by any chance noticed that when tau = 0 the TOA IS the ground and roughly the same conditions apply as on the moon?

  • Nick Stokes

    Nonsense. tau is just an altitude measure, like pressure. It starts at tau=0 at TOA, in any atmosphere.

    And (prev comment) at OLR, no: the equation you've written involves tau_A, the tau at ground. You can't set that to 0. What you have to do is put tau=0 in eq 21.

    And again, the B eqs would be nonsense if he set tau to infinity. He didn't, and for this purpose (BC at ground) he can't.

  • Alex Harvey

    Nick,

    early models like M&W did make assumptions of that kind. But from the mid 70’s, models used a world grid and solved the transport equations directly.

    I believe the truth of the matter is that in the mid 1970s they ‘sorta’ started doing what you’re saying, but still haven’t really got it right. You might like to find that Tiedtke 1989 paper and read it because it’s actually quite influential, in as much as this convective scheme seems to be in use even today.

    Quoting Ozawa & Ohmura 1997:

    “…Thermal convection has long been examined since an early investigation of the atmosphere by Hadley (1735), early laboratory experiments by Bénard (1901), and a numerical solution of nonlinear equations by Lorenz (1963). Yet there is no solid physical theory that is capable of expressing the complete process of thermal convection. Understanding of the convection process may be of urgent necessity since all living creatures, including human beings, are distributed at the earth’s surface where convective transport of sensible heat and latent heat is largest. If convection were more active (inactive), surface temperature would decrease (increase). Yet we have no solid understanding how convection would change with a future increase of, for instance, carbon dioxide.

    One may expect general circulation models to represent the convective process. However, GCMs contain an artificial device for convection. The convective adjustment was first introduced by Manabe and Strickler (1964) in order to adjust the vertical temperature profile to observations. The adjustment was necessary for GCMs since it was not possible to treat vertical instability of the atmosphere by a grid-scale dynamic motion; thus the calculation diverged during time integration (Manabe et al. 1965). Even current versions of GCMs contain a sort of convective adjustment whose parameters are tuned to reproduce observations (e.g., Tiedtke 1988).”

    • Nick Stokes

      Alex,
      Again this has nothing to do with assuming relative humidity constant. But yes, GCMs do have trouble with thermal convection. The reason is that much occurs during tropical storms etc., on a scale too small for the grid. It’s a familiar situation: in CFD, eddies (turbulence) invariably get down to a scale below what you can resolve, and the subgrid scale has to be modelled. With turbulence it is fairly random, but thermal convection has structure, which adds to the problem. Fortunately, there is a lot of observational data that can be used.

      • Alex Harvey

        Nick,

        Whether constant relative humidity is assumed or output is beside the point. The modellers certainly believe that relative humidity should remain constant, and we’ve also seen that they regard their data as hopelessly uncertain whenever it doesn’t agree with what they already believe (witness, Santer+16 on the tropospheric temperature trends, or everyone on the NOAA radiosonde data that shows atmospheric humidity declining). We know for a fact that thermal convection affects the relative humidity, as do the circulation patterns of the atmosphere. We also know that the models can’t actually get either of these right. Thus, the models are tuned to all this data you mention, and it seems just too hard to believe that they haven’t been tuned in such a way as to give the result the modellers expected.

        The bottom line is, how can you model something when you know you don’t know the underlying physical theory?

        Incidentally, have a look at this Jeffrey Kiehl (2007) paper:

        https://www.atmos.washington.edu/twiki/pub/Main/ClimateModelingClass/kiehl_2007GL031383.pdf

        It’s only short, and I’ve read it four times now. It’s an incredibly eye-opening piece coming from an IPCC author. Correct me if I’m wrong, but Kiehl is as good as admitting here that the model hindcasts of the simulated 20th century temperature records must have been fudges. He doesn’t say so explicitly, but it is implied. Read his concluding paragraph. He finally just shrugs it off and says, “well, it may be fudging to say we simulated the 20th century record, but who cares? Aerosols won’t matter in the future, and this fudging of the 20th century is kinda like tuning a NWP model which is known to improve its accuracy anyway.”

        • Nick Stokes

          Alex I don’t agree. But I’ll have to leave it there, cos I’m away for a couple of weeks. Hope you’ve sorted it out when I return.

    • Nick Stokes

      ps I did find the Tiedtke paper and yes, it is a subgrid scale model like those used in turbulence, but incorporating knowledge of cumulus cloud behaviour. His conclusion summarises the strengths and weaknesses well.

    • JamesG

      The refrain from Gavin S and others is that specific humidity and hence relative humidity are model output from the calculations – definitely not an input.

      But the calculation seems to be thus:
      Specific humidity, q=e.eps/p, where p is the atmospheric pressure output from the model, eps is the constant 0.622, e comes from e=RH*es, where RH is relative humidity and es comes from the Clausius-Clapeyron relation, which depends on the temperature output. This leaves you to either use a measured, calculated or assumed value for RH. The number of calculated outputs (or degrees of freedom) from a model is very limited and increasing these outputs comes from coupling combined with gross assumptions. Pressure & temperature ok but I don’t immediately see how relative humidity can be another separate output from the numerical analysis of the grid. It surely must be either constrained or the result of a constraint.
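
That chain of formulas can be sketched in a few lines. Note the Magnus-type fit for es below is my own illustrative assumption, since the comment only says es "comes from the Clausius-Clapeyron relation" without giving a particular fit:

```python
import math

def es_hpa(t_c):
    # Saturation vapour pressure over water in hPa (a Magnus/Bolton-style
    # fit; an assumption here, standing in for Clausius-Clapeyron).
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def specific_humidity(rh, t_c, p_hpa, eps=0.622):
    # JamesG's chain: e = RH * es, then q = e * eps / p.
    e = rh * es_hpa(t_c)
    return eps * e / p_hpa

# 70% RH at 20 C and 1000 hPa gives roughly 0.010 kg water per kg air.
print(specific_humidity(0.7, 20.0, 1000.0))
```

The point of contention survives the sketch: computed this way, RH enters as an input, which is exactly why one would suspect it must be either constrained or the result of a constraint.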

      By comparison, the same “it’s not an assumption, it’s an output” argument has been used for the CO2 sensitivity, yet as Steve’s link above showed, researchers can vary the sensitivity to produce different scenarios. Regardless of which parameter is used to actually nudge the sensitivity in the desired direction, it is not a true output. It leads me to believe that modelers are just being tricky with their definitions. Of course documentation on this is somewhat sparse – both implementation and validations.

      Part of the trouble of course is that all models are not the same. While some are clearly rubbish, others – the coupled models – are not too bad. Someone though can say that the models do this or that when they really mean that just one particular model does it. And then there is a blanket assumption to lump together all models, good or bad, and blithely consider without a shred of physicality that the ensemble has some merit. But then surely only the more simplistic models can be used to produce these projections 100 years into the future.

      • Nick Stokes

        James,
        Where do you get this “calculation” from? I believe it is the other way around. Specific humidity is computed, and your formula or similar used to derive RH.
        Alex linked above this good documentation for the GCM Echam5. Eq 2.4 is the equation for water advection. It is a conventional flux-form mass-conserving advection equation for water content. That could only yield SH directly.

        • JamesG

          “Climatology” by Rohli and Vega, chapter 5 “Energy matter and momentum exchanges near the surface.”

          Thanks though, I’ll check that ref out. I’ve been fobbed off in the past by people claiming it all came out of the C-C equation which of course was nonsense. Nice to see where it really does come from. I’m a bit suspicious of assumption led conclusions I admit since I’ve seen so many of them. However I don’t even think it’s an odd assumption that water vapour would increase with temperature, since surely that is what is needed to cause the Amazon to green up and the Sahara to shrink.

  • Jan Pompe

    “tau is just an altitude measure, like pressure.”

    You can take it that way if you want, but you will be wrong. tau is optical depth, which does vary with height, but tilde-tau is the average whole-atmosphere tau. tilde-tau_A is the optical depth of the entire column of the atmosphere, not just a layer of it. Regardless of what you think the mathematical meaning of the integrals is, I have just given you the physical meaning. You did notice that he had a different dummy variable in the integral, I hope. Equation 20 and figure 3 are an equation and a graph of the effect on OLR and Bg of the average tau = tilde-tau_A. I had thought you had realised this by now.
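
For reference, the textbook definition both readings trade on (generic notation, not necessarily Miskolczi's exact symbols):

```latex
% Optical depth measured downward from the top of the atmosphere:
\tau(z) = \int_{z}^{\infty} k_a(z')\, \rho(z')\, dz'
% so \tau = 0 at TOA and \tau = \tau_A, the whole-column value, at the
% surface; k_a is the absorption coefficient, \rho the absorber density.
```

On this reading tau increases monotonically downward like pressure, as Nick says, while tau_A is the whole-column quantity Jan is pointing at; the disagreement is over which of these appears in eqs 15-21.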

  • http://www.ecoengineers.com/ Steve Short

    That is how it has seemed to me too. BTW, for the statement “At the very least L&M eqn 1 & 2 and M 16 & 17 should have an exp(-tau) term if finite tau was assumed”: I can't see that bunging in exp(-tau) has any math logic to it. What am I missing? Please explain. Just the math.

  • Nick Stokes

    Quite wrong. Look at what FM says after eq 15: ”where tilde-tau is the flux optical depth” and ”At the upper boundary tilde-tau = 0”. Nothing whole-atmosphere about that. He subscripts A to show ground values, which then means whole atmosphere.

  • Jan Pompe

    “And again, the B eqs would be nonsense if he set tau to infinity.”

    It didn't stop Milne. Page 897 of his 1922 paper:

    ”Assumption of infinite thickness involves little or no loss of generality; we could if we liked, consider a mass of finite thickness with an inner boundary consisting of a black radiating surface, but since our results will only involve the optical thickness, we need only suppose the absorption coefficient or the density to become very suddenly large at the assigned depth in order to deduce the case of an inner boundary from the solution for an infinitely thick slab of material”

  • Nick Stokes

    There's no indication this quote is relevant. I don't think I have Milne's paper, and Zagoni isn't pushing it any more, but as I recall, he was treating radiation incident on a planet, not coming from the surface. You'll have to do better than that.

  • Jan Pompe

    “I don't think I have Milne's paper,”

    Then get it; Google is your friend.

    “You'll have to do better than that.”

    No Nick, you first: read the paper and then answer what you have been evading. What would equations B4, B5, B6, B7, B8, B9, B10 and last but not least B11 look like had he specified tau -> 0 to tau -> infinity?

    Answer that and the problem with Milne will become clear to you. It's really immaterial whether we are talking about inbound or outbound; the atmosphere of the earth is nowhere infinitely thick and cannot be sensibly modelled as an “infinitely thick slab of material”.

  • kuhnkat

    If I could move the earth, I would!!! HAHAHAHAHAHAHAHAHA

    Unfortunately for your model, the Specific Humidity is rising in the strat. Again, doesn't match the earth.

    How about some arm waving for the reduction of water vapor by increase in CO2?? I ran across this guy looking for other things: http://www.geocities.com/profadrian/ScienceOfGl…

    Scan down to the Forcing Concept section. Sounds almost too simple to be real!! The way I understand what he is saying is that an increase in ANY gas to the atmosphere would tend to reduce water vapor!!

    Cheers!!!

  • Alex Harvey

    Jan,

    “Yes I agree but look at the equations. Yes Emden did get a temperature discontinuity but the equations derived by Emden are not the same as those derived by Milne; Milne was the one everyone follows, so what Emden might have thought or done is quite irrelevant. I might even go so far as to say that even though Emden preceded Milne, and Milne did look at his paper, the influence of Emden on Milne was minimal.”

    Okay, so you admit that Emden’s temperature discontinuity has nothing to do with any “semi-infinite” assumption that Milne made in 1922.

    Does that mean you agree that Emden’s temperature discontinuity is real even if Milne’s (and I assume for the moment that Milne actually had one since I can’t actually see one in the 1922 paper) isn’t?

    Finally, you say that Milne is the guy that everyone follows. How do you explain then that none of Manabe, Strickler, Moller, Wetherald, King, Ostriker or Lindzen actually cite Milne at all yet they all cite Emden?

    • Jan Pompe

      “Does that mean you agree that Emden’s temperature discontinuity is real”

      No and if Bateman’s paraphrase is true to the source neither does Emden.

      “Finally, you say that Milne is the guy that everyone follows. How do you explain then that none of Manabe, Strickler, Moller, Wetherald, King, Ostriker or Lindzen actually cite Milne at all yet they all cite Emden?”

      I don’t think “Wow he found one (TD) too” means they use his methods.

      Look at the equations. You obviously haven’t done that.

      So how do we know by looking at the equations that Emden didn’t use Milne’s approximation (time is not a factor) and for example Manabe and Wetherald did despite their brief mention of Emden?

      • Alex Harvey

        Jan,

        Okay, can you please cite page, line & equation number in Bateman’s (10-page) summary of Emden’s 150-page monograph as evidence that “Emden didn’t believe in his own temperature discontinuity.”

        Here is the paper: http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044-08-0450.pdf

        Next you say “how do we know by looking at the equations that Emden didn’t use Milne’s approximation (time is not a factor)”

        *) I thought you just said you had looked at the equations yourself???
        *) “time is not a factor” — what, you mean because of time machines???

        • Jan Pompe

          Try not to twist things, Alex.

          I said “No and if Bateman’s paraphrase is true to the source neither does Emden.”

          I made no claim about whether Bateman is true to his source or adding his own opinion; it’s in the last line before the summary, in full.

          No, no time machines. I thought you were a little more logical than that. You are disappointing me.

          After your song and dance about Milne coming after Emden I thought you might be worrying about the possibility that I thought Milne had a time machine.

          Now, what is different about the equations that makes it rather obvious that Emden did not use the semi-infinite approximation while the others did?

          Thanks, I don’t need the paper; I seem to have several copies.

      • Alex Harvey

        Jan,

        I realise on second reading that I misread you: you’re saying, since Bateman hasn’t reproduced Emden’s derivations, how do we know Emden didn’t use Milne’s approximation; in other words, how do we know they haven’t made the same mistake independently. And of course this whole discussion began when you denied having ever said that they must have made the same mistake independently. Okay, so you have contradicted yourself already.

        You can’t win this; you have to choose between two possibilities: either their discovery was independent, in which case we have an absurd story of many great astrophysicists (Emden, Eddington, Milne) all independently making a very obvious, silly error, OR their discovery was dependent, in which case Emden must have come into the future and stolen Milne’s 1922 result.

        The other problem is that Eddington’s 1916 approximation didn’t exist for Emden either… and they were in separate countries (England vs Germany)… and they were known to be working independently.

        Can you just admit this has gone way past the point of silliness?

        • Jan Pompe

          “in other words you’re saying, how do we know they haven’t made the same mistake independently.”

          You say it here yourself, Alex: they did not make the *same* mistake independently.

          They made different ones.

          • Alex Harvey

            Okay, so now you’re saying there’s no dependency on Milne (you have agreed that’s absurd as it would involve time travel), they haven’t all made the same mistake independently, so to get out of saying they made the same mistake independently, you’re now saying they have made different mistakes independently, thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?

          • Jan Pompe

            “thus we have TWO temperature discontinuities, both of them wrong, for two different reasons. Is that it?”

            Yep, and the one Emden made is of no interest to us. It’s trivially true that if you only take into account radiation balance, and forget about conduction, convection and latent heat, you will get a temperature discontinuity. So in a sense Emden’s error may have been no error at all but simply an artefact of what he was attempting to do. In any case it’s of no interest to us because, apart from the odd mention, he does not appear to have been a major influence in the intervening years.

            Alex, it wasn’t I who was saying every astrophysicist since Schwarzschild did; this you will have to own up to yourself, I’m afraid.

            You have wanted to bring the others prior to Milne into the narrative, not I. While it might be interesting from a historical perspective, it does not advance the debate one iota.
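Jan’s remark above — that a pure radiative-balance treatment trivially produces a surface temperature discontinuity — can be made quantitative with the standard gray, two-stream (Eddington) equilibrium profile, sigma*T^4(tau) = (F/2)(1 + 3*tau/2), where the ground must additionally radiate the extra F/2 to balance the net flux. A minimal sketch; the numbers (T_e, tau0) are mine, chosen only for illustration, not Emden’s or Milne’s actual values:

```python
# Gray radiative equilibrium in the two-stream (Eddington) approximation:
# sigma*T^4(tau) = (F/2) * (1 + 1.5*tau), with F = sigma*T_e^4 the OLR and
# tau measured downward from the top. Flux balance at the ground forces
# sigma*T_ground^4 = sigma*T_air(tau0)^4 + F/2: a temperature discontinuity.

T_e = 255.0   # effective emission temperature, K (illustrative)
tau0 = 4.0    # total gray optical depth (illustrative)

T_air = T_e * (0.5 * (1.0 + 1.5 * tau0)) ** 0.25     # air just above ground
T_ground = T_e * (0.5 * (2.0 + 1.5 * tau0)) ** 0.25  # ground itself

print(T_ground - T_air)  # a jump of roughly 12 K for these numbers
```

The size of the jump depends on the assumed optical depth and grayness, which is one way Emden’s ~20 C figure and Manabe and Möller’s near-zero one can both arise; the sketch shows only why a jump appears at all once conduction, convection and latent heat are ignored.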

          • Alex Harvey

            Jan,

            So when Manabe and Möller write in 1961: ‘…In the computation by the matrix method, we allowed the possibility of a temperature discontinuity at the earth’s surface as was first obtained by Emden 1913, whereas in the present computation we did not. The magnitude of temperature discontinuity was very small, about 0.06 C. This is much smaller than the discontinuity of about 20 C which Emden obtained in his computation of radiative equilibrium based upon the assumption of gray radiation…’

            are they talking about Emden’s temperature discontinuity, or are they talking about Milne’s temperature discontinuity…?

          • Jan Pompe

            “are they talking about Emden’s temperature discontinuity, or are they talking about Milne’s temperature discontinuity…?”

            They are talking about Emden’s temperature discontinuity. This does not mean they used his method. Look at their equations: the all-important indicator of using finite tau (which Emden did use in his method) is missing. Since that is missing, and they are gloating that they did better than Emden at reducing the discontinuity, you can’t really pin their error on Emden.

          • Alex Harvey

            Jan,

            We can’t look at the equations in this particular instance because it’s another paper that’s never been translated from the German.

            Okay, so they think they’re talking about Emden’s temperature discontinuity but actually they’re talking about Milne’s. This is a strange story, but possible I suppose.

            Very well, but I suppose we can agree that their decision to “stuff the theory” and arbitrarily set the temperature discontinuity to 0 cannot be blamed on either Emden or Milne; it is clearly their own decision. Right?

            So after Milne made his mistake, where did it next appear in the literature? Has it ever appeared anywhere other than in Weaver & Ramanathan / Lorenz & McKay? GCM modelling began with Moller, Manabe et al. If they didn’t actually use Milne, how else can it have ever affected GCM models?

          • Jan Pompe

            “Okay, so they think they’re talking about Emden’s temperature discontinuity but actually they’re talking about Milne’s.”

            No they are talking about Emden’s discontinuity.

            “So after Milne made his mistake,”

            I know we have been talking about “Milne’s mistake”, for my part mainly for language economy, but is it really an error? To use an approximation for a windowless atmosphere, when the window had yet to be discovered?

            Alex has it ever occurred to you that Energy Balance Models and General Circulation Models are different sorts of models?

          • Jan Pompe

            “Has it ever appeared anywhere other than in Weaver & Ramanathan?”

            I remembered another:
            here, found by David, are the lecture notes of Irina Sokolik of Georgia Tech. So it’s in the textbooks, from Milne’s, written in 1930, to Goody and Yung 1989, and in current lecture notes. It is what has been taught to students since 1930 at the very least, and it is still being taught.

            Is it any surprise then that so many who are working in the field get the same results?

  • Jan Pompe

    “I can’t see that bunging in exp(-tau) has any math logic to it. What am I missing?
    Please explain. Just the math.”

    The classical radiative transport equations are semi-infinite Laplace transforms using the standard stellar atmosphere approximations.

    Evaluating them in the standard way eliminates the exp(-tau) terms; evaluating them for finite tau at the surface retains them.

    So the classical solution, e.g. equations 16 and 17 in M2007 and 1 & 2 in L&M2003, having been evaluated with the semi-infinite approximation, has no exp(-tau) term, but B11 does.
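A toy version of Jan’s point, in my own notation (a gray, isothermal slab, not Miskolczi’s full derivation): the upward intensity emerging from a layer of optical thickness $\tilde\tau_A$ with constant source function $B$ is

```latex
I(\tilde\tau_A) \;=\; \int_0^{\tilde\tau_A} B\, e^{-t}\, dt
\;=\; B\left(1 - e^{-\tilde\tau_A}\right).
```

Taking the semi-infinite limit $\tilde\tau_A \to \infty$ collapses this to $I = B$ and the $e^{-\tilde\tau_A}$ term vanishes, as in the classical solutions; keeping $\tilde\tau_A$ finite retains it, as in B11.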

    • http://www.ecoengineers.com/ Steve Short

      Thanks Jan.

      I am going to re-visit this. Doesn’t this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much it seems)?

      BTW, on a completely other matter have those here noticed that the current crop of GCMs have considerable difficulty predicting the temporal behaviour of evaporation in real time?

      http://www.knmi.nl/samenw/eldas/GLASS_GABLS_presentations/Dirmeyer.ppt

      This is not a trivial subject, as the validation of ET and EF on regional scales using remotely sensed data, which is required to validate this component of GCMs, is tricky due to its low data frequency and heavy reliance on the application of Bouchet’s or Granger and Gray’s complementary relationships to Priestley-Taylor and Penman-Monteith.

      This is all about measuring the size of the (cough) steep slope between the near surface air temperature and the true surface temperature as the near surface water vapor pressure lags behind the degree of saturation of the surface. Temperatures have been used as surrogates for vapor pressures in many studies (Monteith and Unsworth 1990, Nishida et al. 2003). Although the relationship between vapor pressure and temperatures is not a linear one, it is commonly linearized for small temperature differences. The unknown surface temperature, Tw cannot be measured in the field, due to the process complexity and the intricate soil-vegetation-atmosphere feedback, but it can be estimated from the slope of the exponential surface vapor pressure curve.
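The linearisation Steve mentions is the familiar one used in Penman-Monteith and Priestley-Taylor: for small temperature differences, e_s(T + dT) ≈ e_s(T) + Δ·dT, where Δ is the slope of the saturation vapour pressure curve. A sketch using the FAO-56 formulas — my choice of parameterisation, not necessarily the one used in the studies Steve cites:

```python
import math

def e_sat(T_c):
    """Saturation vapour pressure (kPa), FAO-56 form of the Tetens equation."""
    return 0.6108 * math.exp(17.27 * T_c / (T_c + 237.3))

def slope(T_c):
    """Slope Delta of the saturation vapour pressure curve (kPa per deg C)."""
    return 4098.0 * e_sat(T_c) / (T_c + 237.3) ** 2

# Linearised estimate of e_s at T + dT versus the exact value:
T, dT = 20.0, 2.0
approx = e_sat(T) + slope(T) * dT
exact = e_sat(T + dT)
print(slope(T))        # about 0.145 kPa/degC at 20 degC
print(exact - approx)  # linearisation error, small for small dT
```

The error grows with dT because e_s is exponential in T, which is exactly why the linearisation is only trusted for small temperature differences.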

      • Jan Pompe

        Steve “Doesn’t this effectively also shift TOA (where by definition tau = zero) upwards relative to a semi-infinite solution TOA (although not by much it seems)?”

        FM actually excludes tilde-tau_A = 0 in Appendix B; the relevant line:

        “From Eq. (B10), assuming tilde-tau_A > 0″

        Like Nick says, it shifts the singularity from the surface to the TOA if tau is treated as an altitude measure, which it isn’t in this case, or to the no-absorber/no-air case if taken as an average tau for the entire atmosphere. Personally I think it’s more reasonable to have a singularity where there is no air, no emissions, no absorption and so on.

        • http://www.ecoengineers.com/ Steve Short

          “Personally I think it’s more reasonable to have a singularity where there is no air, no emissions, no absorption and so on.”

          At first thought one would be intuitively inclined to go with such a simple proposition simply because it takes the math singularity out in an almost utterly matterless milieu.

          But math is only a means of describing the fabric of reality more and more accurately purely in terms of testing it (yet again and again).

          I have to say it is my personal ‘visceral’ experience (from hang gliding, groundwater pumping and so on as explained) and also the physical fact of the non-radiative issues in, and at the surface of the ground with respect to soil water content, vegetation, wind and so on, that an actual real world singularity in favor of BOA is much more plausible physically.

          I’m not trying to pick a fight here, just simply say it how I see it. I am not convinced that Miskolczi has stumbled on any Holy Grail with this particular aspect of his theory.

          There is correspondingly no real proof of a near constant true LW IR tau (the supposed outcome of ‘eliminating’ any mathematical singularity within the ‘LW IR column’). Indeed, one might ask why should there really be any need to do so, if, as you say, most LW IR is absorbed in the lower, denser more H2O vapor-rich part of the atmosphere?

          I’m frustrated that Miskolczi seems to have made only a skimming pass past the underlying cause of (dare I say it) a global climate homeostasis (conditional or absolute). As you can see from my crude little spreadsheet model, I think the answers (in respect of Fo_f, S_U and OLR) actually lie within the complex web of (Gyr-timescale evolved) inter-relationships between biota, ET, biogenic aerosols, CCN, clouds, LH fluxes and albedo.

          I’m sorry but, in the grand scheme of things, what is the big deal about this new tau formalism? It’s a bit like concentrating on trying to have the most perfect potato on a plate with lots of other really good food on it.

          • Jan Pompe

            Steve I am quite happy not to pick a fight either.

            Just briefly: the absence of the exp(-tau_A) term in the transfer function effectively closes the atmospheric IR window, and there is no room in it for S_T at all, whether 40 or 90 W/m^2; any value for S_T, and its effect on S_G or surface temperature, cannot be arrived at without some fudge or other.

            As for the other I will, if I can find time before the ACAT comes after me, work on empirical testing of the TD. I don’t trust visceral feelings my own or anyone else’s for that matter.
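For orientation on the 40 vs 90 W/m^2 dispute: the flux optical depth under discussion relates the surface-transmitted flux to the surface upward flux through S_T = S_U·exp(-tau), i.e. tau = ln(S_U/S_T). A small sketch using the figures from the head post (the numbers come from the discussion above, not from any independent calculation of mine):

```python
import math

def tau_from_fluxes(S_U, S_T):
    """Flux optical depth implied by surface upward flux S_U and the
    surface-transmitted (window) flux S_T, both W/m^2: S_T = S_U*exp(-tau)."""
    return math.log(S_U / S_T)

# Miskolczi-style numbers: S_U ~ 396, S_T ~ 61 W/m^2 gives tau ~ 1.87...
print(tau_from_fluxes(396.0, 61.0))
# ...whereas the consensus all-sky S_T ~ 40 W/m^2 would imply a larger tau:
print(tau_from_fluxes(396.0, 40.0))
```

This is why the value of S_T is the crux: the “magic tau” of 1.87 stands or falls with the ~61 W/m^2 transmitted flux.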

  • Alex Harvey

    Nick,

    Whether constant relative humidity is assumed or output is beside the point. The modellers certainly believe that relative humidity should remain constant, and we've also seen that they regard their data as hopelessly uncertain whenever it doesn't agree with what they already believe (witness Santer+16 on the tropospheric temperature trends, or everyone on the NOAA radiosonde data that shows atmospheric humidity declining). We know for a fact that thermal convection affects the relative humidity, as do the circulation patterns of the atmosphere. We also know that the models can't actually get either of these right. Thus, the models are tuned to all this data you mention, and it seems just too hard to believe that they haven't been tuned in such a way as to give the result the modellers expected.

    The bottom line is, how can you model something when you know you don't know the underlying physical theory?

    Incidentally, have a look at this Jeffrey Kiehl (2007) paper:

    https://www.atmos.washington.edu/twiki/pub/Main…

    It's only short, and I've read it four times now. It's an incredibly eye-opening piece coming from an IPCC author. Correct me if I'm wrong, but Kiehl is as good as admitting here that the model hindcasts of the simulated 20th century temperature records must have been fudges. He doesn't say so explicitly, but it is implied. Read his concluding paragraph. He finally just shrugs it off and says, “well, it may be fudging to say we simulated the 20th century record, but who cares? Aerosols won't matter in the future, and this fudging of the 20th century is kinda like tuning a NWP model, which is known to improve its accuracy anyway.”

  • Nick Stokes

    Alex I don't agree. But I'll have to leave it there, cos I'm away for a couple of weeks. Hope you've sorted it out when I return.

  • cohenite

    Fascinating conversation; if Steve and/or Jan reach some final conclusion about M let the rest of us know.

    I guess the interest in a simple self-equilibrising tau is just that: because it is simple [sic]. If equilibrium is reached without a constant tau, as Steve’s spreadsheet shows, then that runs counter to the attractiveness of nice simple packaging, which is what most people want; this is why AGW has such traction; it has nice simple packages to present to the public and the media.

    One of the messier aspects of AGW is its need for an ECS, an Equilibrium Climate Sensitivity; this invention provides considerable ducking room for AGW advocates when, as always, there is a dearth of evidence in the form of Transient Climate Responses. The ECS, as far as I can gather, can be reached some 8 years after the forcing ceases, as in Schwartz’s paper, or centuries later if one accepts the ocean pipeline effect in all its glory. M doesn’t seem to look at the idea of an ECS/delay/lag factor; the response to GHG variation seems to be instantaneous, so that climate changes and adjustments happen pretty much straight away. This being the case, the question is: is there any ACO2 effect at all, or are the slight perturbations we have seen over the last century entirely caused by water, in the form of PDO variation and its equivalents?
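The lag cohenite describes can be illustrated with the simplest one-box energy balance model, of the kind Schwartz (2007) fits: C dT/dt = F − λT, with time constant τ_c = C/λ and equilibrium response F/λ. A sketch with purely illustrative parameter values (F, λ and τ_c below are my choices, not Schwartz’s fitted numbers):

```python
import math

def response(t, F=3.7, lam=1.0, tau_c=8.0):
    """Temperature response (K) at time t (years) to a step forcing F
    (W/m^2), with feedback parameter lam (W/m^2/K) and time constant
    tau_c (years): the solution of C dT/dt = F - lam*T with C = lam*tau_c."""
    return (F / lam) * (1.0 - math.exp(-t / tau_c))

# With an 8-year time constant (Schwartz-like), ~63% of the equilibrium
# response is realised after 8 years; a deep-ocean tau_c of centuries
# would leave most of it "in the pipeline" instead.
print(response(8.0))    # about 2.34 K of the 3.7 K equilibrium
print(response(800.0))  # essentially the full 3.7 K
```

The same F and λ thus give radically different transient behaviour depending on τ_c, which is the “ducking room” the comment refers to.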

  • kuhnkat

    Nick Stokes,

    my first reply about GISS Model E humidity was based on Relative Humidity. You objected. I looked at Specific Humidity and found they show increasing Specific Humidity in the stratosphere, which is WRONG again.

    These errors are based on their assumptions of what the atmosphere SHOULD be doing and isn’t!!

    I ran across an interesting explanation (not proof) of why increasing CO2, or other gases, would cause a decrease in Humidity:

    http://www.geocities.com/profadrian/ScienceOfGlobalWarming.html
    (scan down to the Forcing Concept paragraph to save time.)

    Sounds straightforward enough to be real.

    Cheers!!

  • Jan Pompe

    “Okay, so they think they're talking about Emden's temperature discontinuity but actually they're talking about Milne's.”

    No, they are talking about Emden's discontinuity.

    “So after Milne made his mistake,”

    I know we have been talking about “Milne's mistake”, for my part mainly for language economy, but is it really an error? To use an approximation for a windowless atmosphere when the window has yet to be discovered?

    Alex, has it ever occurred to you that Energy Balance Models and General Circulation Models are different sorts of models?

  • Alex Harvey

    Jan,

    You’ve dodged the question: where did Milne’s method, erroneous or otherwise, enter the historical literature of climate modelling and lead to false predictions (if you want to include energy balance models, that’s fine with me)? Or are you saying that it entered in the early energy balance models of Budyko & Sellers? Is ECHAM5 affected, and if so why? Or is the answer in fact that it hasn’t actually affected the history of climate modelling at all, since none of the climate models actually employ the faulty theory?

    • Jan Pompe

      “You’ve dodged the question:”

      Nope, I just thought untwisting your remark was more important.

  • Alex Harvey

    Jan,

    If that was untwisting something I’d hate to see you tie something up in a knot! :)

    So are you going to admit that you don’t know the answer to my question?

    • Jan Pompe

      “So are you going to admit that you don’t know the answer to my question?”

      I’ll admit your question makes no sense. FM’s model is an Energy Balance Model and you are asking about General Circulation Models.

      Stuffing up EBMs started with this:
      “The assumption of infinite thickness involves little or no loss of generality;”

      but you know that already.

      • Alex Harvey

        Oh c’mon Jan… the question makes no sense… who are you trying to kid? You are proclaiming to know that Milne’s ‘assumption of infinite thickness’ led to a ‘temperature discontinuity’. I am asking you, where? Where is what I suppose we should call ‘Milne’s temperature discontinuity’ to distinguish it from Emden’s? You won’t give a straight answer. Now, you say it’s in the energy balance models. Great, I understand that one of the earliest of these was Budyko 1969 ( http://www.math.umn.edu/~mcgehee/Seminars/ClimateChange/references/Budyko1969Tellus21p611-Albedo.pdf ). So how did Milne’s ‘assumption of infinite thickness’ affect Budyko’s result?

        • Jan Pompe

          “Oh c’mon Jan… the question makes no sense… who are you trying to kid?”

          I am no politician and you are not Tony Jones. How many times do I have to repeat myself before it finally sinks in that you have been answered many times? I have just explained it again to Steve, and Nick’s response was: “And again, the B eqs would be nonsense if he set tau to infinity.”

          Why on earth are you looking for the effect of infinite thickness in Budyko? Just looking at the heading is enough to tell you not to even bother. Budyko is looking at insolation variation; FM at the effect of tilde-tau_A, or if you like absorber concentration. So how do you think your question makes sense here?

  • Alex Harvey

    Okay, I give up.

    • http://www.ecoengineers.com/ Steve Short

        Note:

        1. In columns O and U, OLR can be calculated in two conceptually different ways.

        2. The average partitioning of upwelling (to TOA) and downwelling (to BOA) heat in the atmosphere is clearly about 0.375 : 0.625 i.e. 1 : 1.667 (3 : 5).

        3. With respect to S_U/(Miskolczi E_U) = 2 (Virial Rule) FM was apparently more-or-less correct. Kirchhoff is a dud though.

        4. FM must have mistaken S_T for the sum of S_T + LH_U where LH_U = upwelling latent heat from tops of (icing/precipitating) clouds exiting TOA.

        5. The source of conditional global homeostasis is the near-constancy of the sum of S_T + LH_U. This derives from the way the Evaporative Fraction (EF) scales with Latent Heat flux.

        6. The Miskolczi so-called ‘tau’ (= -ln((S_T+LH_U)/S_U)) is nearly constant for all sky conditions (1.818±0.078) but passes through a minimum near 60% cloud cover, i.e. current conditions.

        7. S_U/OLR is not close to 3/2 but seems to vary tightly about 5/3 (1.643±0.058 i.e. ~1.667). It is inferred there must be something slightly wrong with FM’s transfer function.

        Regards
        Steve

        PS: It’s not perfect but hey ya gotta have a go.
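        The ‘tau’ in point 6 above can be checked numerically. A minimal sketch in Python (the flux values below are illustrative placeholders of my own, not figures from the spreadsheet):

```python
import math

def miskolczi_tau(s_t, lh_u, s_u):
    """Point 6 above: tau = -ln((S_T + LH_U) / S_U)."""
    return -math.log((s_t + lh_u) / s_u)

# Illustrative fluxes in W/m^2 (assumed, not from the spreadsheet):
# S_U ~ 390 (surface upwelling LW), S_T ~ 40 (window), LH_U ~ 23.
print(round(miskolczi_tau(s_t=40.0, lh_u=23.0, s_u=390.0), 3))  # 1.823
```

        These placeholder numbers happen to land inside the quoted 1.818±0.078 band, which is only meant to show the shape of the calculation, not to reproduce the spreadsheet.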

    • Jan Pompe

      “Okay, I give up”

      Don’t do that; your pursuit of the history is an interesting one, but try to keep the apples, oranges and pears in their own baskets.

      You know the tell-tale of the non-semi-infinite approximation, exp(-tau*), in the equations; work with that.

      Emden’s discontinuity is just the consequence of the geometry and reasonable if only radiation is taken into account.

      Milne’s approximation is one that is not reasonable for a thin atmosphere and leads to a singularity at the surface (e.g. division by zero or log(0)).

      The climate system is a complex system, and there are various studies that look at different aspects of it; they don’t all even overlap.

      If I can help I will. I can’t help, though, if you want to know why an apple doesn’t taste like a grape even though they are both fruit. That’s outside my field of expertise, and so are GCMs; they’re more like Nick’s babies.

      • Alex Harvey

        Milne’s approximation is one that is not reasonable for a thin atmosphere…

        Do you mean Milne’s approximation, or do you mean Eddington’s approximation?

        • Jan Pompe

          “Do you mean Milne’s approximation, or do you mean Eddington’s approximation?”

          Before I answer can you list the approximations used?

          • Alex Harvey

            List the approximations used where?

          • Jan Pompe

            In the text book chapters on radiative balance and transport.

          • Alex Harvey

            This seems to be a diversion, but, okay, I understand that the Eddington approximation is a special case of the two-stream approximation which is a special case of the plane-parallel approximation.

            Here is indeed a paper on lots and lots of methods for two-stream approximation, including the Eddington approximation, quadrature approximations, hemispheric-constant methods, and you-name-it approximations, even discussing our very interesting case of thin atmospheres.

            http://ams.allenpress.com/archive/1520-0469/37/3/pdf/i1520-0469-37-3-630.pdf

            So it remains with you still to provide even one paper showing that a surface temperature discontinuity has arisen in the literature that wasn’t discovered by Emden.

            Is there one in this paper?

          • Jan Pompe

            “So it remains with you still to provide even one paper showing that a surface temperature discontinuity has arisen in the literature that wasn’t discovered by Emden.”

            L&M2003 equations 1 & 2. Not only that, you’ll also find it in Goody and Yung.

            The Eddington approximation is not the semi-infinite approximation; it is the two-stream. Milne only derives a single stream, with the semi-infinite. L&M’s two equations are the two-stream, i.e. the two equations (Eddington), plus semi-infinite, i.e. a lack of exponential terms (Milne); as FM puts it, Eddington-Milne. Neither Goody and Yung nor L&M 2003 have anything to do with Emden’s temperature discontinuity, which might be a species of two-stream and prior to Eddington, but that we don’t know; Emden does not use the semi-infinite approximation.

            I just wanted to be sure you were clear on what Eddington’s approximation was, and it is clear to me that you were; thus you were on a fishing expedition, which I find offensive.

          • Alex Harvey

            I was “thus”(?) on a “fishing expedition”? I can’t understand a word of this. This is clearly hopeless; beyond scant references cited in M’s paper itself, you are aware of any single reference to the problem you are allegedly solving in the entire history of meteorology, end of story.

            BPL said the problem M is solving was firstly invented by M; you are failing hopelessly to show otherwise.

          • Jan Pompe

            “you are aware of any single reference to the problem you are allegedly solving in the entire history of meteorology, end of story.”

            Your slip here says it all

            You have been shown the problem quite clearly; you have asked questions to which you knew the answer. You have been shown two references where the problem has occurred, yesterday and more prior to that, and you can’t even say properly that you still think there aren’t any.

            BPL had it right when he left: he said he was drowning. People drown because they can’t swim and are out of their depth.

          • http://www.ecoengineers.com/ Steve Short

            Jan

            Could you please provide us with a mathematical proof of this statement of yours (i.e. post it somewhere for easy download):

            “the absence of the exp(-tau_A) in the transfer function effectively closes the atmospheric IR window and there is no room for St in it at all whether 40 or 90 watt/m^2 and any value for St and it’s effect on Sg or surface temperature that cannot be arrived at without doing some or other fudge.”

            Thanks.

            Regards
            Steve

            PS: BTW I do note that there is no reliable evidence within the last decade of mainstream literature of a global all sky LW IR S_T remotely near 90 W/m^2.

  • Jan Pompe

    “Could you please provide us with a mathematical proof of this statement of yours (i.e. post it somewhere for easy download):”

    Steve, I don’t understand. It’s appendix B, basically high school calculus; at least it was when I went to high school.

    lim t -> infinity exp(-t) = 0.

    That’s it.

    The RHS of B4 becomes -H/2pi and the RHS of B5 becomes Bo.

    The simplifying effect is obvious, but as Nick says it doesn’t make sense to do it, so FM doesn’t.

    The transfer function becomes f = 2/(1 + tilde-tau_A), which gives the same result as in Milne’s 1922 paper on page 884. Knowing you, you will probably nitpick that t/2 ≠ t, but when t is infinite there is no difference.

    That is the math; the rest is physical.

    As a chemist you ought to be familiar with Beer-Lambert,

    where transmittance is given by I/Io = exp(-tau), with tau = alpha*l*c; so without an exp(-tau) term in the transfer function there is zero transmittance, therefore no allowance for the IR window or any other transmittance outside the atmospheric window range. The 1/2 + tau (or tau/2) only covers emissions, so here is where the fudge comes in.

    After having eliminated exp(-tau) by calling tau infinite, they now reinvent it as something closer to reality (like 1 < tau < 3 for argument's sake), and because upward radiation is measurably different from downward radiation they generate a second equation for radiation in the opposite direction.

    There is no way with this scheme to derive St, so ad hoc means are resorted to. For example, in the IR window you have (to 1 significant digit) ~10% of the surface IR spectrum, which radiates ~400 W/m^2 (again to 1 significant digit), so to 1 significant digit you have 40 W/m^2 transmission through the IR window.
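    The Beer-Lambert point and the semi-infinite limit can be sketched numerically. A minimal sketch (the function names and sample tau values are my own, for illustration only):

```python
import math

def transmittance(tau):
    """Beer-Lambert: I/I0 = exp(-tau), with tau = alpha*l*c."""
    return math.exp(-tau)

def milne_transfer(tau_a):
    """Semi-infinite (Milne) transfer function f = 2/(1 + tau_a):
    it contains no exp(-tau) term, hence no room for window transmission."""
    return 2.0 / (1.0 + tau_a)

# A finite tau still transmits some IR directly:
print(round(transmittance(1.87), 3))   # 0.154, i.e. ~15% direct transmission
# In the semi-infinite limit the exponential term vanishes:
print(round(transmittance(50.0), 6))   # 0.0
# The ad hoc window estimate from the comment: ~10% of ~400 W/m^2
print(0.10 * 400)                      # 40.0 W/m^2
```

    The contrast is the point of the comment above: exp(-tau) carries the direct transmission, so a transfer function built without it has to reintroduce St by other means.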

  • Jan Pompe

    “Has it ever appeared anywhere other than in Weaver & Ramanathan?”

    I remembered another, here, found by David: the lecture notes of Irina Sokolik of Georgia Tech. So it’s in the textbooks of Milne written in 1930, Goody and Yung 1989, and current lecture notes. It is what has been taught to students since 1930 at the very least and is still being taught. Is it any surprise then that so many who are working in the field get the same results?

  • Alex Harvey

    Jan,

    “I remembered another, here, found by David: the lecture notes of Irina Sokolik of Georgia Tech. So it’s in the textbooks of Milne written in 1930, Goody and Yung 1989, and current lecture notes. It is what has been taught to students since 1930 at the very least and is still being taught. Is it any surprise then that so many who are working in the field get the same results?”

    No, this doesn’t seem to get us anywhere at all. Look on p. 4 under “solving for radiative equilibrium” and we find “Results: the radiative equilibrium surface temperature is too high and the temperature profile is unrealistic.” In other words, it seems that what is being taught in these lecture notes is a very simple approximation, presumably of interest because (a) it has historical significance (i.e. Moller & Manabe, 1961; Manabe & Strickler 1964; Manabe & Wetherald 1967, etc) and (b) it is easy enough to understand. It is acknowledged that the result is unphysical. That is to say, the convective adjustment is a hack and everyone seems to know it. Moreover, once again, it seems to have its origins pre-Eddington, and pre-Milne, i.e. in Emden!

    Here is another Lindzen paper (Zurita-Gotor and Lindzen 2006, “Theories of Baroclinic Adjustment and Eddy Equilibration”):

    “…The primary example of adjustment in the geophysical context is provided by convective adjustment, a concept that dates back to the work by Gold (1913) and Emden (1913) (both of which are well summarized in Brunt (1941) and Goody and Yung (1989))…”

    See, all these works lead back via Moller and Manabe 1961 to Emden 1913. Fritz Moller, like Richard Lindzen, was German; do you appreciate that? The idea that when he says he studied Emden’s result he was kidding, and that he was really using the English method of Milne/Eddington, just doesn’t make sense. And given that you’ve read neither Emden nor Moller yourself, it’s a little outrageous. Anyway, why would he do that? ’Cause he was too lazy to do his homework? He doesn’t even cite Milne or Eddington. There is no evidence in any of these papers that they have looked at Milne at all. Remember, Moller can’t have got this out of Goody & Yung, because this was 1961. Then Lindzen: he misunderstands this as well, he’s also pretending to have read Emden? And Manabe, Strickler, Wetherald, and on and on.

    The same goes for the Lorenz & McKay 2003 you keep going on about. They are also just picking up the thread of Manabe et al.’s work of the 1960s, largely for convenience by the looks of it, and with no illusion that they are describing physical reality. They cite Lindzen et al. 1982, who also play around with the convective adjustment and find a temperature discontinuity, but again not because they are confused about whether it is real or not; rather because they want to show that there is a more realistic way of treating convection.

    Jan, you said that there is another kind of “temperature discontinuity” out there that cannot be traced back to Emden. THAT is the one I am talking about; THAT is the one that requires evidence. I don’t believe you’ll ever support this with evidence; I don’t believe the evidence is there.

    • Jan Pompe

      Alex you are talking rot.

      • http://ecoengineers.com/ Steve Short

        No Jan

        It is you who is talking the rot. What I suggest you do is take advantage of the fact that copies of “The Atmospheric Boundary Layer” by J. R. Garratt (1994), Cambridge University Press, are now going really cheap, and pick yourself up a copy from e.g. Amazon.

        When you have waded through that you can then start sampling a few of the veritable multitude of papers in existence where hard-working scientists are beavering away attempting to precisely describe the ABL in cities (UHI etc.), in suburbs, down canyons, on crops, in forests, etc., and so join the real world.

        You are obsessing over a mathematical pie-in-the-sky ‘chimaera’ or fantasy which basically means, in the context where we have a liquid (= fluid) sitting on top of another liquid (= fluid) or solid which may or may not contain another liquid (=fluid), STUFF ALL.

        • Jan Pompe

          “It is you who is talking the rot. ”

          Please explain how someone who did not know the importance of exp(-tau) would know that.

          Sorry Steve if you have even begun to understand what the issue there is I would be very surprised.

          • http://ecoengineers.com/ Steve Short

            That’s amusing coming from someone who has never actually made a living doing hard quantitative science day in and day out, year after year….

          • Jan Pompe

            Steve, if your remarks in your last two posts are in any way relevant, that relevance escapes me completely.

          • http://ecoengineers.com/ Steve Short

            Getting right back to my original post of the little Excel spreadsheet model, to pick up on some of Ken’s points in our exchange of private emails:

            I’ve very slightly modified (‘forced’) the model to ensure that at a present average Albedo (A) of 0.30 (%cloud cover actually close to 64% – refer Column AC) there is agreement between estimated Fo(1-A) = 239.3 W/m^2 and estimated OLR = 239.1 W/m^2.

            I did this by correcting Fo to the real value of 341.75 W/m^2 and making some minor adjustments to SH in accord with the literature estimates of Sensible Heat, especially forcing SH = 17 W/m^2 for the ‘default’ present global average situation.

            https://download.yousendit.com/UmNMV28zQzNiR0ozZUE9PQ

            Thus this downloadable minor Revision 6b attempts to:

            (1) include Ken’s most valid points as above; and

            (2) cut out some columns to highlight the strange issue of the 62.5% reporting to BOA and 37.5% reporting to TOA, an aspect which must represent some sort of overall 3D geometric effect (?????).

            Just for the record, I’d like to categorically reject the rigidity of thinking which Jan always injects into the situation, by noting firmly that this model is not, and never was, intended to be a reflection of a fixed situation where the entire globe is perpetually covered in 100% cloud, or 82% cloud, or indeed the average 64% cloud cover at present, simply because that doesn’t, and never would, occur. This is an issue which Jan, in his haste to run away (read: afraid to mess with a simple spreadsheet), completely ignores.

            The real world never has a fixed albedo or cloud cover! These are variables which are driven by a whole host of subtle causes and effects which we don’t yet completely know or understand (as I have pointed out in Niche Modeling ad nauseam).

            What I believe my model is saying is that as albedo and cloud cover falls the Earth will warm and vice versa. However, the model is also showing how the system can then realize, through:

            (a) increased Sensible Heat SH (i.e. dry thermals); and

            (b) the conversion of Evapotranspiration (ET) into clouds: increases in Albedo (A); and

            (c) the realization of Latent Heat (LH), i.e. more radiation to TOA via the fractions of LH and SH which escape TOA.

            I find it remarkable that the present-day average condition (which does not mean static, please note) of the global heat balance is fully consistent with a view that almost exactly 37.5% of EACH OF absorbed SW (F), Latent Heat released by clouds (LH), Sensible Heat (SH), and even local heating of the atmosphere via radiation from BOA (A_A), is radiated via TOA to contribute to OLR, together with the LW IR which is transmitted through the entire atmospheric column from BOA to TOA (i.e. the true S_T).

            Consequently, both the atmosphere and the ground (via E_D) are heated by the 62.5% which remains!

            All this model needs to show is that on average over all possible albedos (read range of cloud covers) the Earth is, again on average, in close heat balance.

            The fact is that it does.

            Furthermore, one could say this average condition occurs, at (say) the ± one standard deviation level, at an average Albedo (A) of 0.275±0.094, an average cloud cover of 55±34%, an average incoming SW of Fo(1-A) = 247.8±32.0 W/m^2 and an average outgoing LW IR OLR of 247.1±18.3 W/m^2. That these are all ranges in which the Earth exists at present simply shows that we inhabit a global climate system which acts to effectively damp out the effect of variations in surface solar insolation, constraining average internal temperatures to about 288.5±2.6 K.
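            The 0.375 : 0.625 partitioning and the near-balance described above can be sketched as a toy check, using only the fractions and average fluxes quoted in this comment (everything else in the sketch is my own assumption):

```python
UP_FRACTION = 0.375    # share of atmospheric heating radiated to TOA
DOWN_FRACTION = 0.625  # share returned to BOA (heating atmosphere and ground)

def partition(flux_w_m2):
    """Split an absorbed flux (W/m^2) into TOA and BOA shares."""
    return UP_FRACTION * flux_w_m2, DOWN_FRACTION * flux_w_m2

up, down = partition(100.0)
print(up, down)  # 37.5 62.5, the 3 : 5 split

# Quoted averages: incoming Fo(1-A) = 247.8 W/m^2, outgoing OLR = 247.1 W/m^2
imbalance = 247.8 - 247.1
print(round(imbalance, 1))  # 0.7 W/m^2, small against ~248 W/m^2 fluxes
```

            The point of the sketch is only that the quoted in/out averages close to within well under 1% of the flux magnitudes, which is the near-balance being claimed.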

            If anyone, I’d have expected Jan Pompe to perceive the elegance of such a signal of a damping circuit!

            In other words the system exhibits a dynamic, but conditional homeostasis.

            I don’t really see any significant conceptual problem with the viewpoint this simple little spreadsheet is clarifying for us.

            I particularly don’t see why one should need to ‘force’ a global heat balance for any particular fixed value of Albedo (A) or %cloud cover as Jan Pompe dogmatically insists (before fleeing the field).

            I repeat that this Excel spreadsheet model is only a very crude attempt at trying to identify albedo/cloud-dependent heat balance from the best available values which one can ‘mine’ from the mainstream (not minority) literature.

            It relies utterly on the idea that while ET occurs mostly under low albedo/low cloud conditions the actual transfer of LH into the atmosphere (and LH_U to TOA) is ‘realized’ some time later when clouds have formed i.e. albedo/cloud cover is much higher.

            I am disappointed that Jan couldn’t even come to the party by allowing himself to just ‘play around’ with this little spreadsheet.

            After all this is what I invited everyone to do from the outset. As I point out above, the spreadsheet averages and standard deviations even indicate, if one looks closely, how the global climate system appears to work as a ‘damping circuit’.

            It is a pity that Jan didn’t even get so far as to pick up on that. He also didn’t notice that Miskolczi’s ‘Tau’ apparently passes through a minimum at around the ‘centre point’ of the homeostasis!

            IMHO, this simple model adequately serves two purposes:

            (1) it shows where Miskolczi had some elements of at least empirical validity and where he badly fudged, e.g. the inter-related A_A = E_D, S_T and LW IR Tau issue; and

            (2) it raises the question of why the global climate system shouldn’t be able to respond in a homeostatic manner, given the overall constraints and ‘tools’ it has to work with, even if atmospheric CO2 rises. That is, it identifies, in a very simple and transparent way, the means whereby CO2 sensitivity is most likely limited.
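As a side note on the LW IR Tau in (1): if one takes the transmitted fraction to be S_T/S_U = exp(-tau), then the 'magic tau' range quoted at the top of this thread follows directly from the B (= S_U) and S_T end-member values. A quick sketch, assuming only that relation:

```python
import math

def tau_from_fluxes(s_u, s_t):
    """Flux optical depth, taking the transmitted fraction as exp(-tau) = S_T / S_U."""
    return math.log(s_u / s_t)

# End-member values quoted at the top of this thread (W/m^2):
# B (= S_U) from ~396 down to ~380, S_T from ~63 down to ~58.5.
print(f"tau(396, 63.0) = {tau_from_fluxes(396.0, 63.0):.2f}")  # ~1.84
print(f"tau(380, 58.5) = {tau_from_fluxes(380.0, 58.5):.2f}")  # ~1.87
```

This reproduces the quoted 1.84–1.87 range, which is why varying B and S_T in lock-step always returns essentially the same tau.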

          • Jan Pompe

            “Just for the record, I’d like to categorically reject the rigidity of thinking which Jan always injects into the situation, by noting firmly that this model is not, and never was, intended to be a reflection of a fixed situation where the entire globe is perpetually covered in 100% cloud cover or 82% cloud or indeed the average 64% cloud cover at present simply because that doesn’t, and never would occur. ”

            I agree I have some rigidity in my thinking; I have zero tolerance for strawman arguments and shall not bother wasting time on them.

          • http://www.ecoengineers.com/ Steve Short

            Someone once said here on this blog that Miskolczi Theory was really just an exercise in linear programming of global heat fluxes.

            In the event, that was a prescient but not quite realized dream.

            Miskolczi Theory was really just an exercise in what in truth SHOULD have been linear programming.

            This plot shows clearly how, even allowing for an error in estimation of the order of ±5% at (say) the one standard deviation level, the mutually contrasting rates of variation with albedo of the minor global climate heat fluxes (the fraction of SW IR absorbed in the atmosphere which is re-radiated through TOA, F_U; the fraction of Latent Heat off the tops of clouds which is re-radiated through TOA, LH_U; and the fraction of Sensible Heat (dry thermals) absorbed in the atmosphere which is re-radiated through TOA, SH_U) are easily enough to balance the major global heat fluxes (SW insolation against OLR) over a mean albedo range of at least 0.35 – 0.25.

            https://download.yousendit.com/UmNKOGNhUENwaFNGa1E9PQ

            We live in a world of ample water with an ample supply of Cloud Condensation Nuclei (CCN). The source of CCN being in large part biogenic and increasing in proportion to atmospheric CO2 implies there is no reason whatsoever why the balance of SW insolation and OLR cannot be maintained by the mutual adjustment of F_U, LH_U and SH_U over at least the above average albedo range.
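A minimal sketch of the swing in absorbed SW that F_U, LH_U and SH_U must jointly offset over that albedo range, assuming the Fo = 341.75 W/m^2 quoted for the model elsewhere in this thread:

```python
FO = 341.75  # global-mean TOA insolation, W/m^2 (value quoted for the model)

def absorbed_sw(albedo):
    """Globally averaged absorbed SW: Fo * (1 - A)."""
    return FO * (1 - albedo)

# Swing in absorbed SW over the albedo range 0.25-0.35 that the minor
# upward fluxes (F_U, LH_U, SH_U) plus S_T must jointly track:
low, high = absorbed_sw(0.35), absorbed_sw(0.25)
print(f"A = 0.35: {low:.1f} W/m^2")
print(f"A = 0.25: {high:.1f} W/m^2")
print(f"Swing:    {high - low:.1f} W/m^2")
```

The roughly 34 W/m^2 swing is the quantity the mutual adjustment of the minor fluxes is being asked to absorb.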

  • Alex Harvey

    Jan, I remembered another one here, found by David: the lecture notes of Irina Sokolik of Georgia Tech. So it is in the textbooks: Milne, written in 1930; Goody and Yung 1989; and current lecture notes. It is what has been taught to students since 1930 at the very least, and it is still being taught. Is it any surprise then that so many who are working in the field get the same results?

    No, this doesn't seem to get us anywhere at all. Look on p. 4 under “solving for radiative equilibrium” and we find “Results: the radiative equilibrium surface temperature is too high and the temperature profile is unrealistic.” In other words, it seems that what is being taught in these lecture notes is a very simple approximation, presumably of interest because (a) it has historical significance (i.e. Moller & Manabe, 1961; Manabe & Strickler 1964; Manabe & Wetherald 1967, etc.) and (b) it is easy enough to understand. It is acknowledged that the result is unphysical. That is to say, the convective adjustment is a hack and everyone seems to know it. Moreover, once again, it seems to have its origins pre-Eddington and pre-Milne, i.e. in Emden!

    Here is another Lindzen paper (Zurita-Gotor and Lindzen 2006, “Theories of Baroclinic Adjustment and Eddy Equilibration”): “…The primary example of adjustment in the geophysical context is provided by convective adjustment, a concept that dates back to the work by Gold (1913) and Emden (1913) (both of which are well summarized in Brunt (1941) and Goody and Yung (1989))…”

    See, all these works lead back via Moller and Manabe 1961 to Emden 1913. Fritz Moller, like Richard Lindzen, was German — do you appreciate that? The idea that when he says he studied Emden's result he was kidding, and that he was really using the English method of Milne/Eddington — it just doesn't make sense. And given that you've read neither Emden nor Moller yourself, it's a little outrageous. Anyway, why would he do that? 'Cause he was too lazy to do his homework?

    He doesn't even cite Milne or Eddington. There is no evidence in any of these papers that they have looked at Milne at all. Remember, Moller can't have got this out of Goody & Yung, because this was 1961. Then, Lindzen, he misunderstands this as well; is he also pretending to have read Emden? And Manabe, Strickler, Wetherald, and on and on.

    The same goes for the Lorenz & McKay 2003 you keep going on about. They are also just picking up the thread of Manabe et al.'s work of the 1960s, largely for convenience by the looks of it, and with no illusion that they are describing physical reality. They cite Lindzen et al. 1982, who also play around with the convective adjustment and find a temperature discontinuity, but again not because they are confused about whether it is real or not, but because they want to show that there is a more realistic way of treating convection.

    Jan, you said that there is another kind of “temperature discontinuity” out there that cannot be traced back to Emden. THAT is the one I am talking about, THAT is the one that requires evidence. I don't believe you'll ever support this with evidence; I don't believe the evidence is there.

  • Jan Pompe

    Alex you are talking rot.

  • Alex Harvey

    Jan,

    Here is an earlier paper by Jean I.F. King:

    King, J.I.F 1952: Line absorption and radiative equilibrium, Journal of the Atmospheric Sciences, Volume 9, Issue 5, 311–321:

    http://ams.allenpress.com/archive/1520-0469/9/5/pdf/i1520-0469-9-5-311.pdf

    This paper contains lengthy discussions of the radiative equilibrium temperature discontinuity.

    I will reproduce the following excerpt:

    “Introduction

    …The last calculations of radiative-equilibrium temperatures are those of Humphreys and Emden, who used crude model atmospheres of varying degrees of “greyness.” Thus, it seems worthwhile to calculate the radiative-equilibrium temperatures of a line-absorbing model atmosphere, whose optical properties are now well understood.

    Radiative heat-transfer is one of the determining factors of the temperature distribution of the earth’s atmosphere. An arbitrary motionless atmosphere will exchange heat by radiation with its surroundings, adjusting its lapse rate to conform to the temperature distribution of radiative equilibrium. This tendency to assume pure radiative-equilibrium configuration proceeds independently of any other heat-transfer processes at work. If, however, the radiative-equilibrium lapse rate is too steep, or contains discontinuities of temperature, non-radiative transfer mechanisms will be activated. A temperature discontinuity, for example, gives rise to heat transfer by conduction and convection, while steep lapse rates, with their thermal instability, lead to vertical overturning and large-scale turbulence.

    At the atmosphere-earth interface, a temperature discontinuity exists which decreases to 0 as n becomes infinite. The slope of the curve, and hence the temperature gradient, becomes infinite at this lower boundary as n approaches infinity.

    The results of this analysis lead to the following radiative theorem:

    In a partially absorbing, gravitational atmosphere, irradiated solely from below, radiative heat-transfer acting alone tends [Alex -- note that he says it only theoretically tends to discontinuity] to establish a temperature discontinuity at the base of the atmosphere. [footnote 4 inserted here reads: "Emden, using a "grey" model atmosphere in which the absorption coefficient was constant over a band, found a similar temperature-discontinuity to exist under radiative equilibrium (see Pekeris, 1932)"]
    Corollary I: As a result of this steep temperature-gradient at the base, non-radiative heat-transfer processes, such as conduction, convection and turbulence, will be activated. This results in a constant feeding of heat energy into the atmosphere from below.
    Corollary II: It is impossible to have a purely radiatively controlled atmosphere [Alex -- i.e. because he knows, as all meteorologists knew, that temperature discontinuities can't "really" exist in nature!].

    At any rate, it is again clear that regardless of any mathematical errors/misunderstandings that Milne hypothetically made, the basic theory / idea that the radiative equilibrium temperature discontinuity is the underlying cause of convection and conduction in our atmosphere, predates Milne.

    • Jan Pompe

      Alex, irrespective of what Emden found, his calculations did not carry through to be generally used. Can’t you get this straight?

      I HAVE NOT EVER AND NEVER WILL MAKE THE CLAIM THAT MILNE WAS THE FIRST TO OBTAIN A TEMPERATURE DISCONTINUITY AT THE SURFACE.

    • Christopher Game

      Dear Alex Harvey,

      Thank you for your reference to King 1952. This was the most helpful thing I have had from any blog item that I can recall reading.

      Yours sincerely,

      Christopher Game

  • http://ecoengineers.com/ Steve Short

    No Jan. It is you who is talking the rot.

    What I suggest you do is take advantage of the fact that copies of “The Atmospheric Boundary Layer” by J. R. Garratt (1994), Cambridge University Press, are now going really cheap, and pick yourself up a copy from e.g. Amazon. When you have waded through that you can then start sampling a few of the veritable multitude of papers in existence where hard-working scientists are beavering away attempting to precisely describe the ABL in cities (UHI etc.), in suburbs, down canyons, on crops, in forests, etc., and so join the real world.

    You are obsessing over a mathematical pie-in-the-sky ‘chimaera’ or fantasy which basically means, in the context where we have a liquid (= fluid) sitting on top of another liquid (= fluid) or solid which may or may not contain another liquid (= fluid), STUFF ALL.

  • Jan Pompe

    “It is you who is talking the rot.”

    Please explain how someone who did not know what the importance of exp(-tau) was would know that. Sorry Steve, if you have even begun to understand what the issue there is, I would be very surprised.

  • http://ecoengineers.com/ Steve Short

    That's amusing coming from someone who has never actually made a living doing hard quantitative science day in and day out, year after year….

  • Jan Pompe

    Steve, if your remarks in your last two posts are in any way relevant, that relevance escapes me completely.

  • Alex Harvey

    Jan,

    I am not interested in taking any of this personally, just in getting at the truth, and some truth seems to have emerged here. Without a revised version of the theory, or at least some sort of clarification from M himself, the case now seems clearly hopeless; and if the case is hopeless, continued discussion of it will merely serve as a distraction and give ammunition to those who would write us all off as “deniers.”

    I am not of course a climate scientist, or indeed any kind of scientist, but this has got to a level where a layperson dragged in off the street could understand the problem:

    If the radiative equilibrium temperature discontinuity emerged at least as early as 1913 in a German monograph by Robert Emden, and if its existence was likewise hinted at around the same time in the work of Gold in England and Humphreys in America, as appears to be the case, then the theory today of its formal / theoretical existence (since it is clear that no one has ever actually believed it to have a physical existence, as seen most clearly in J.I.F. King’s Corollary II above) simply cannot-end-of-story-no-way-absolutely-not-no-sirree be the result of any mathematical error, any statement about an assumption of infinite thickness, or indeed anything at all to do with Milne’s work of either 1922 or 1930, or with Eddington or Eddington’s famous approximation — unless there is something basic about the history that I have missed.

    Somewhere along the way we seem to have lost sight of the forest for the trees. M appears to have been arguing that the radiative equilibrium temperature discontinuity itself is the result of a mathematical error/misapplication/misunderstanding/whatever-else-you-might-wish-to-call-it, that Milne made in 1922. If that is not the case, if it has been independently argued for by others, regardless of Milne/Eddington, then this whole story is not true. Maybe there really is an error somewhere in the formalism of radiative transfer theory? We can’t know what M himself believes now since he’s gone completely silent. Miklos has indeed dropped all references to Milne in his presentation as Nick pointed out. I have been led to believe in private conversations with various others that the “actual error” is somehow implied in the work of K. Schwarzschild now, which would imply that all of these physicists, Emden, Milne, Eddington, King, Ostriker, Lindzen et al. have sort of independently made the same error in a way that hasn’t been explained to me. This is just too fantastic for my little brain to consider.

    My point is, the burden of proof is on M himself. I have politely hinted that this Emden business is problematic and it has sort of been shrugged off as far as I can see. There has been no response to a recent email asking the same question. Maybe this is a waste of his time, I don’t know. But continued discussion without some sort of response from M is surely a waste of everyone’s time.

  • Alex Harvey

    Dear Christopher Game,

    Thank you, and I only wish I had found the King 1952 paper 18 months ago! Still, the history of meteorology is quite an interesting subject in its own right.

    Best regards,
    Alex

  • Jan Pompe

    Steve,

    “Someone once said here on this blog that Miskolczi Theory was really just an exercise in linear programming of global heat fluxes.”

    Yes, it was and still is an idea I have: the flux Equations 1–10 in M7 are all simultaneous linear equations. I might yet explore that one day; however, your graph does not look like any linear programming (simplex method) charts that I’ve ever seen. That is not the issue that I would like to discuss here.

    Unless the Global Average Temperature is changing reasonably rapidly (not the 0.13 K/decade, or ~4E-10 K/s, shown in the UAH decadal trend), Fo(1-A) (as you have defined the absorbed insolation) = OLR regardless of albedo. It holds for all the planets with atmospheres (which slow down the temperature changes), as this spreadsheet using data from NASA will show.

    Planet    SolConst   Albedo   S/4(1-A)   Teff    OLR      Teq
    Venus     2613.9     0.75     163.37     231.7   163.42   327.65
    Earth     1367.6     0.31     237.28     254.3   237.14   278.66
    Mars      589.2      0.25     110.48     210.1   110.49   225.76
    Jupiter   50.5       0.34     8.29       110     8.30     122.15
    Saturn    14.9       0.34     2.45       81.1    2.45     90.03
    Uranus    3.71       0.30     0.65       58.2    0.65     63.60
    Neptune   1.51       0.29     0.27       46.6    0.27     50.80
    Pluto     0.89       0.50     0.11       37.5    0.11     44.51

    (SolConst, S/4(1-A) and OLR in W/m^2; Teff and Teq in K.)

    OLR = sigma * Teff^4 and Teq = Teff / (1-A)^0.25 = (S / (4 * sigma))^0.25
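The OLR and Teq columns of the table above can be checked directly from these two formulas; a minimal sketch for the Earth row (using sigma = 5.6704e-8 W m^-2 K^-4):

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr_from_teff(teff):
    """OLR = sigma * Teff^4."""
    return SIGMA * teff ** 4

def teq(s):
    """Teq = (S / (4 * sigma))^0.25, the equilibrium temperature ignoring albedo."""
    return (s / (4 * SIGMA)) ** 0.25

# Earth row of the table: SolConst = 1367.6 W/m^2, Teff = 254.3 K
print(f"OLR = {olr_from_teff(254.3):.2f} W/m^2")  # table gives 237.14
print(f"Teq = {teq(1367.6):.2f} K")               # table gives 278.66
```

The other rows check out the same way, to within rounding of the tabulated values.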

    • http://www.ecoengineers.com/ Steve Short

      A constructive response at long last! Hooray!

      Yes, I am totally familiar with this data. And I fully agree with you that on long timescales Fo(1-A) (as I have defined the absorbed insolation) = OLR regardless of albedo.

      However, I am considering the situation where changes such as the all-important albedo, largely driven by (high and low) cloud cover – even global cloud cover – are occurring on much shorter timescales: minutes to days.

      What this means for the system is that at any point in time incoming SW will get out of balance with the synchronously occurring OLR.

      By definition the purpose of that system is to drive back to the Fo(1-A)=OLR equivalence for the obvious reasons you and I both know well, i.e. that equilibrium thermodynamics will eventually apply. The data you reproduce above is simply a reflection of that fact.

      Bottom line, Jan. Thermodynamic equilibrium is not instantaneous.

      I am sure that you would agree that some of the drivers of that return to balance, i.e. ET (which becomes LH) and SH in particular, have their own built-in characteristic times of action. In probabilistic thermodynamics this becomes the e-folding time of the (re)action.

      This means that if we have some idea of how these slower-acting fluxes move in response to a temporary system-wide lack of equilibrium, we can get some idea of how the system responds when driving back to thermodynamic equilibrium.

      I deal with chemothermodynamics almost every day of the week. If you want to figure out how a system gets back to chemical equilibrium, you study the way in which the finer, key drivers (must) respond. I have even built real-world hydromet plant on the basis of theoretical and bench studies using this type of approach.

      If you look closely at my little spreadsheet model you will see there is only an induced temporary heat imbalance between Fo(1-A) and OLR at the max albedo (0.40) end of about 19.6 W/m^2 and at the minimum albedo end of 19.5 W/m^2 i.e. <10% of the high albedo end values and <8% at the low albedo (0.15) end.

      Given that the average albedo for Earth at least is only likely to vary over a smaller zone – say 0.25 – 0.35 mostly – the offsets between Fo(1-A) and OLR generated by this approach are fairly trivial IMHO.

      By further mining of the literature you would probably be able to eliminate them altogether but the important information which this spreadsheet provides on the key role of the minor drivers to equilibrium would still be apparent.

      When you said ages ago that the Miskolczi approach was essentially an exercise in linear programming of the equations governing the behavior of all the component heat fluxes in the system, I actually agreed with that insight – if this really was an insight in your or Ferenc's mind?

      But that approach was bound to fail with Miskolczi because he had:

      (1) not effectively drawn all of the minor fluxes – notably those within the K term, i.e. LH + SH – out into his ‘system’; nor

      (2) produced any way of varying the major fluxes in and out (SW insolation and OLR) with (say) the all important albedo (or cloud cover); nor

      (3) produced any way of doing the same with those minor but more important fluxes which ‘adjust’ the system i.e. LH, SH, S_T and F, in response to perturbations such as variation in albedo (the major one), incoming SW, absorption and scattering (of S_T and/or F) etc.

      But now I’ve shown one straightforward, relatively simple way in which we can achieve that exercise in linear programming.

      It is by no means perfect but it is a very useful start. My response to Dan L. in the Plimer Review thread explains exactly how I see this working.
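      The argument above – slower-acting fluxes pulling the system back toward Fo(1-A) = OLR with a characteristic e-folding time – can be illustrated with a toy relaxation model (an illustrative sketch only, not the spreadsheet; the heat capacity C is an assumed round number, not a fitted value):

```python
# Toy single-reservoir relaxation toward radiative balance Fo(1-A) = OLR.
# C (effective heat capacity per unit area, J/m^2/K) is an assumed value.
SIGMA = 5.67e-8
C = 1.0e8

def step(T, S=1367.6, A=0.31, dt=86400.0):
    """One daily step of dT/dt = (Fo(1-A) - OLR)/C."""
    absorbed = S / 4 * (1 - A)
    olr = SIGMA * T ** 4
    return T + dt * (absorbed - olr) / C

T = 250.0                      # start out of balance
for _ in range(20000):         # ~55 years of daily steps
    T = step(T)
# T relaxes toward Teff = (S(1-A)/(4*sigma))**0.25, roughly 254 K;
# the linearized e-folding time C/(4*sigma*T^3) is ~310 days here.
```

      The qualitative point is the one at issue in the thread: equilibrium is not instantaneous, but the system is always being driven back toward it at a rate set by the relaxation times.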

      • http://www.ecoengineers.com/ Steve Short

        PS: I have been doing Simplex Optimization and many other methods of chemical engineered system optimization for a little over 30 years…..

      • Jan Pompe

        Steve you still don’t quite get it.

        You have to put physically realistic figures into your spreadsheet for it to be meaningful. You can’t mix up instantaneous effects with averages the way you do and expect to get reasonable results.

        You cannot perturb albedo without also perturbing absorptivity (1-albedo) and emissivity at the same time at the same rate. The effect of this is that as you decrease albedo you increase absorptivity and emissivity, and Fo(1 – A) = OLR no matter what you do. Your linear program should at the very least reflect this, and it doesn’t.

        Fo, as you define it (solar constant/4), is an average over at the very least a 24-hour period, and more precisely over a year, because the instantaneous solar constant varies by ~90 W/m^2 over the course of a year. OLR is also an average, as are Su, St, Tg, Ta and K (LH + SH).

        What you are doing does not make sense physically or mathematically. There are no conservation laws for radiation intensity; the equations depend on the conservation of energy, so we must count joules, and working with averages (integrals) over the same period is a way of achieving this.

        You might instantly (almost) feel cooler when a cloud passes over, but at the same instant Ed rises (that’s how we detect clouds with radiometers); we still feel cooler, and with that increased Ed we also get decreased St, OLR and Fo(1-A), and on average it makes no difference to Fo(1-A) = OLR.

        • http://www.ecoengineers.com/ Steve Short

          “Steve you still don’t quite get it.”

          “What you are doing does not make sense physically or mathematically. ”

          “The effect of this is that as you decrease albedo you increase absorptivity and emissivity, and Fo(1 – A) = OLR no matter what you do.”

          As if I didn’t take that as a given. This shows you haven’t even bothered to look closely at the spreadsheet!
          Sounds to me like you have never even met finite element or finite difference calculus. Just for once try making small incremental changes to something and see what happens.

          The fact is, much of the interior workings of the climate system operates at the level of non-equilibrium thermodynamics. That is why we have the whole school of MEP. Go away and read the Red Book please.

          It is ironic that you should be trying to lecture me on what does or does not make sense mathematically when you remain utterly brainwashed by the numerological drivel that is Miskolczi Theory.

          But I am not going to waste time with you – been there done that just too many times.

          Here is a version knocked up just for you where the Fo(1-A) = OLR balance is forced for each and every value of albedo. I suggest you examine it closely – especially in terms of what that implies – but doubt you will.

          https://download.yousendit.com/UmNJblRxa0R3NUtGa1E9PQ

          Mind you – most likely you won’t get it this time around either! You are even too lazy to mine the literature systematically to get a good appreciation of actual parameter ranges and limits. In a humorous aside I note that this forces Miskolczi’s phony pseudo-tau to average about 1.947±0.221, with a minimum still passing through about 0.3 albedo.

          Ironic isn’t it given Miskolczi’s sleight-of-hand with clear sky and all sky – something that zipped through right under your pebble-lensed ‘radar’.

          One of these days you just might like to get around to asking yourself why many people who have had a long and successful career in science find your thinking so rigid and simplistic – quite apart from the endlessly tiresome semantic wrigglings.

          No, Jan, it is you that doesn’t get it. Patronising waffle does not an argument make!
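          For readers keeping score, the ‘pseudo-tau’ figures traded throughout this exchange follow from tau = ln(S_U/S_T) (a sketch using the flux values quoted in this thread; Miskolczi’s own HARTCODE calculation is more involved):

```python
# Sketch of the disputed tau: the optical depth implied by the surface
# upward LW flux S_U and the transmitted flux S_T via tau = ln(S_U/S_T).
# Flux values are the ranges quoted in this thread, not HARTCODE output.
from math import log

def tau(S_U, S_T):
    return log(S_U / S_T)

# Miskolczi-style values (S_U ~380-396, S_T ~58.5-63 W/m^2) give tau near 1.87:
tau_miskolczi = tau(396.0, 63.0)
# Consensus all-sky values (S_U ~396, S_T ~40 W/m^2) give a noticeably larger tau:
tau_consensus = tau(396.0, 40.0)
```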

          • Jan Pompe

            Steve,

            “This shows you haven’t even bothered to look closely at the spreadsheet!”

            I have and it is plain you are computing things that you don’t know how to.

            You have Aa decreasing with decreasing cloud cover – that’s fair enough – but you have Ed decreasing with increasing cloud cover, and this does not happen. Go out and measure it; don’t take my word for it.

          • http://www.ecoengineers.com/ Steve Short

            “You have Aa decreasing with decreasing cloud cover – that’s fair enough – but you have Ed decreasing with increasing cloud cover, and this does not happen. Go out and measure it; don’t take my word for it.”

            In a word, NO.

            If you look closely at the spreadsheet I just sent you, E_D does indeed increase weakly for increasing cloud cover ~ 64% (albedo ~0.3).

            But the fact is the greater part of E_D is made up of heat re-radiated from the atmosphere back to BOA, i.e. it is a large fraction of A_A. In my little model it is set at 62.5% of A_A, i.e. to conform with the consensus.

            Now you may well reject the consensus view that A_A ~356 and E_D ~333 W/m^2 at the average albedo of ~0.3 (clouds ~64%), but that is just you, and your views on the Kirchhoff crap don’t count, right?

            Now, the beauty of this spreadsheet is that IF you are correct then something else must increase E_D markedly with increasing albedo. This is because for small increments away from/above albedo (0.3 –> 0.4) we can assume the 62.5% fraction of A_A holds (after all, it is essentially geometrically controlled, right?).

            The next most logical candidate is of course Latent Heat (LH). But there again one runs into a problem. Too much LH_D radiated back to BOA necessarily implies too much LH_U radiated through TOA. But this blows out your OLR so that then it doesn’t match the SW insolation [Fo(1-A)].

            Whatya gonna do? You can either control LH magnitude overall or change the fraction leaving TOA. None of the other minor terms like F, S_T, SH_U, SH_D are gonna do it for you.

            Either way you better be careful you don’t step outside the boundaries of what is in the literature.

            Bingo.

            I already did that to give you a spreadsheet which matches SW insolation to OLR at albedo = 0.4

            Ergo, where is all that heat gonna come from to give you a much bigger E_D?

            Sorry, don’t accept visceral impressions (or Miskolczi numerology).

            PS: Plus to make things harder for you, you are not even allowed to mess around with LH – after all it’s just not Miskolczian!

          • Jan Pompe

            “If you look closely at the spreadsheet I just sent you, E_D does indeed increase weakly for increasing cloud cover ~ 64% (albedo ~0.3).”

            Sure Steve anything you say

            %cloud cover E_D S_T
            100 334 5
            82 328.4 25
            64 331.9 40
            46 348.5 47
            28 363.8 55
            10 380.6 62

            MODTRAN will show zero transmittance (St = 0) for 100% cloud cover, so what did you do – pick numbers out of a hat to suit you?

          • http://www.ecoengineers.com/ Steve Short

            “MODTRAN will show zero transmittance (St = 0) for 100% cloud cover, so what did you do – pick numbers out of a hat to suit you?”

            For S_T I picked numbers that still fit with the literature ranges, to force Fo(1-A) = OLR at 0.4 albedo (as you wished).

            But please, go ahead and change 5 to zero at 0.4 albedo (and to ~20 at ~0.35 albedo too if you wish) and see where that gets you!

            Nowhere, Zilch. Nada.

            Why? Because E_D is not linked to S_T.

            However, you could raise LH to compensate (just about the only thing you can do and be within the mainstream literature milieu) and an LH of ~91 would allow E_D to get to about 342 W/m^2 at 0.4 albedo.

            Still a bit up from the ~333 at ambient, but still a long way down from the ~389 of A_A.

            I know exactly why you hate this spreadsheet.

            It only needs a tiny tweak to show how riddled with contradictions your blinkered Miskolczian world view really is – even with the Fo(1-A)=OLR equality forced at every albedo.

            So glad I sucked you in!

            You’re munching on the “phyto-plankton” already and too dumb to know it.

          • Jan Pompe

            “But please, go ahead and change 5 to zero ”

            Steve, that’s only one of the things wrong with the amateurish dreck that you have been producing lately.

          • http://www.ecoengineers.com/ Steve Short

            Yes, you really have to watch out for that nasty DRECK stuff. I mean, even Dark Matter and Dark Energy is probably comprised of DRECK! Truth be told Saddam Hussein was probably stockpiling – you guessed it – DRECK!

            But one could conceivably wake up on some obscure planet in the galaxy with a mere 6.8 billion inhabitants only to find oneself to be the immaculately conceived offspring of the Prophet Ferenc who had just proclaimed the new religion of the Great Global Golden Flying Pig (GGGFP) . One would then be obliged to be, at all times, the utterly coherent, flawless High Priest of the new Messiah, an immaculate authority on the Truth! But then one might look around and suddenly find that the entire congregation of this new religion numbered only, uh, uh,…. three?

            To make matters worse, millions or more of the remaining billions might be rushing off to join yet another new religion – the Deep Green Revelation of the Big Hot Water Bottle (DGRBHWB). How depressing! What a bummer!

            But, whew, at least that would only be a bad dream – nothing at all in comparison with ‘DRECK’!

      • Jan Pompe

        “my buddies and I run a multi-million dollar consultancy.”

        Yeah I noticed out of a converted garage in your basement.

        • http://www.ecoengineers.com/ Steve Short

          Yeah, some drive to work.

          Others simply walk downstairs in plaid slippers, fire up the Bose, sit down at the nearest workstation and sip latte while the rest of the grumpy old bastards come online.

          Oh it’s hard.

          • Jan Pompe

            Oh yes, I remember the life well, only for me there were no stairs involved. I certainly never had to wait for anyone to come online. I did have to chase payment at times, chase work occasionally, pay someone out when there was not enough work about, and fire the incompetent who kept getting his feedback mixed up.

            I for one was glad to leave it behind in 2002 and roll over the super; then I got bored and dusted off my nursing papers in 2003. I don’t miss the old life a bit and I’m having fun; I’ll probably do it until I’m 70.

            I don’t know why you are even interested in trying to convince me of your point of view; in fact I don’t really care. I’m really quite happy for you to go along believing that the microbes bootstrapped the world that made them.

            I really prefer the work I do now, that is, helping young folks that never really had a chance to a better life. Only a totally self-centred moron would belittle that work.

          • http://www.ecoengineers.com/ Steve Short

            So what? I mentor students frequently, support little kids on 2 continents and put $400/month into Medecins Sans Frontieres. We have both had to cope with the awful reality of losing a mature age child of our own. But all these things have nothing whatsoever to do with our interest in climate truth.

            All that sniping and snivelling doesn’t cover the fact you’ve simply never bothered to check your overall global heat balances and want to pour shit on anyone who has a go.

            Just because you’re a natural born rubbisher and you’ve somehow convinced yourself the sun shines out of FM’s ass.

            Crikey.

          • Jan Pompe

            “All that sniping ”

            Steve the only one sniping and pouring shit here is you.

            I was perfectly happy to leave you in your ignorance until you started to annoy me with your emails.

          • http://www.ecoengineers.com/ Steve Short
          • Jan Pompe

            Yawn!!

          • http://www.ecoengineers.com/ Steve Short

            Toss off elsewhere then.

          • Jan Pompe

            “Toss off elsewhere then.”

            Again Steve you only succeed in showing us what an unpleasant fellow you are.

            I’m not going anywhere.

          • http://www.ecoengineers.com/ Steve Short

            Here is just one example of comprehensive work to make regional annual energy budgets – in this case for the Mackenzie Basin in Western Canada.

            All atmospheric energy flux terms (W/m^2) have been multiplied by (8.64 × 10^4 s/day)/(Cp Ps/g), where Ps = monthly surface pressure, to provide normalized values in units of K/day. The surface energy terms are also multiplied by a constant atmospheric normalization (i.e., Cv H = Cp Ps/g, where H = depth of soil layer) in order to provide values in units of K/day so that they can easily be compared to their atmospheric counterparts. This is a common approach when comparing regional-scale energy budgets.

            For mean annual cloud covers ranging from 48.4 to 67.3% i.e. a range of 18.9%, annual basin averages for:

            (1) the normalized LW IR down at BOA i.e. in Miskolczi terminology E_D, ranged from 2.11 to 2.36 K/day and was only WEAKLY correlated to rising cloud cover; and

            (2) the LW IR up at BOA i.e. in Miskolczi terminology A_A, ranged from 2.62 to 2.83 K/day and was uncorrelated to cloud cover.

            There was strictly no evidence that A_A = E_D, and A_A was invariably > E_D.

            http://www.usask.ca/geography/MAGS/WEBS/html/data/annualTable.html

            How many nails need to be hammered into the A_A = E_D coffin?
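            The K/day normalization described above is easy to reproduce (a sketch assuming standard values Cp = 1004 J/(kg K), g = 9.81 m/s^2 and Ps = 101325 Pa; the study uses monthly surface pressure, so its factors will differ slightly):

```python
# Sketch of the W/m^2 -> K/day normalization: flux * (86400 s/day)/(Cp*Ps/g).
# Cp, g and Ps are assumed standard-atmosphere values, not the study's
# monthly surface pressures.
CP, G, PS = 1004.0, 9.81, 101325.0
SECONDS_PER_DAY = 8.64e4

def wm2_to_K_per_day(flux_wm2, Ps=PS):
    """Convert a flux in W/m^2 to an atmospheric heating rate in K/day."""
    return flux_wm2 * SECONDS_PER_DAY / (CP * Ps / G)

# Under these assumptions the basin's E_D range of ~2.11-2.36 K/day
# corresponds to roughly 250-285 W/m^2 of downward LW at BOA:
ed_low = wm2_to_K_per_day(253.0)
ed_high = wm2_to_K_per_day(283.0)
```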

          • Jan Pompe

            “(1) the normalized LW IR down at BOA i.e. in Miskolczi terminology E_D, ranged from 2.11 to 2.36 K/day and was only WEAKLY correlated to rising cloud cover; and

            (2) the LW IR up at BOA i.e. in Miskolczi terminology A_A, ranged from 2.62 to 2.83 K/day and was uncorrelated to cloud cover.”

            No measure of Aa in that lot so I think you just nailed your thumb to the coffin.

            This morning LWU = 394.91 and LWD = 284.52 with partial cloud; no Aa in that either, apart from Ed. Why no Aa? I can’t measure it with a radiometer, and if I had an interferometer I still could not measure it. Neither could they with the methods they used.

          • http://www.ecoengineers.com/ Steve Short

            “No measure of Aa in that lot so I think you just nailed your thumb to the coffin.”

            Now you appear to be trying to say that A_A does not = S_U – S_T and hence there is ” no A_A there”.

            The lies never end with you, do they.

          • http://www.ecoengineers.com/ Steve Short

            “No measure of Aa in that lot so I think you just nailed your thumb to the coffin.”

            Once again untrue.

            There is absolutely no mystery to determining A_A here. We know from the mainstream literature that at average conditions, i.e. at albedo = 0.30 and cloud cover ~62%, S_U = 396 W/m^2 and S_T = 40 W/m^2, i.e. A_A is almost exactly 90% of S_U.

            For the 4 separate Mackenzie River Basin studies (covering different periods and multi-year durations), in which mean annual S_U was found to be 2.79, 2.78, 2.62 and 2.83 K/d for mean annual cloud covers of 48.4, 55.7, 60.3 and 67.3% respectively, this simply means that A_A was very close to 2.51, 2.50, 2.36 and 2.55 K/d respectively. Very minor and relatively trivial adjustments can also be made to these numbers to account for the transparency window provided by the slightly different (from ~62%) mean cloud covers.

            But, as usual, the bottom line is that these numbers are still well in excess of the equivalent E_D numbers of 2.21, 2.26, 2.11 and 2.36 K/d.

            As I said, A_A does not = E_D, and E_D (which was measured directly and quoted explicitly) correlates only weakly with percent cloud cover.

            I haven’t nailed my thumb to anything here! That statement is just the primary school level debating flim flam we can expect from Jan.

            As for the comment about not being able to measure A_A at ground level, the simple answer is: so what?

            Just more noise from the flim flam man.

            The four Mackenzie River Basin studies used a mix of surface stations, radiosondes and satellite data.
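            The A_A inference used above can be checked in a few lines (a sketch; the ~90% fraction comes from the consensus S_U = 396 and S_T = 40 W/m^2 quoted earlier, and is assumed to carry over unchanged to the basin):

```python
# Sketch of the A_A = S_U - S_T inference: globally A_A/S_U = 356/396 ~ 0.9,
# and that fraction is applied to the basin's mean annual S_U values (K/day).
def aa_from_su(su, fraction=(396.0 - 40.0) / 396.0):
    return su * fraction

su_kd = [2.79, 2.78, 2.62, 2.83]                    # mean annual S_U, K/day
aa_kd = [round(aa_from_su(s), 2) for s in su_kd]
# aa_kd comes out near [2.51, 2.50, 2.36, 2.54], versus the measured
# E_D range of 2.11-2.36 K/day -- the A_A > E_D point being argued.
```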

          • Jan Pompe

            Steve, there is no evidence that Aa has actually been measured by the team creating the data. In fact they can’t with the methods they use. You are making the 40 W/m^2 assumption based on an admittedly ad hoc calculation, which is in turn based on the assumption that the only transmittance is in the IR window.

            Your argument is circular

          • http://www.ecoengineers.com/ Steve Short

            I don’t think so. Having read the 20 – 25 papers on the global heat flux balances from just the last 10 years or so, as I have done, in my view any reasonable person with an adequate education would not conclude that the estimations of the BOA -> TOA transmission of LW IR were ad hoc.

            Ergo I simply don’t believe you, and I don’t think the vast majority of scientists who could follow the text of those papers would think so either.

            You are out on a limb. Your accusation of ‘ad hocery’ is totally unproven outside of your own head.

            As you have pointed out yourself – it’s either transparent or it’s opaque (e.g. clouds). The variations in transparency with specific humidity are insufficient to outweigh the bulk variation in S_U with changing SW insolation or (at high or low albedos) the overall window size.

            IMHO this permanent confusion on your part arose because Miskolczi has not actually been measuring just the true BOA->TOA transmitted S_T but has also been mixing something else in with it. The most likely candidate seems to be LW IR emitted off the tops of clouds during the release (as water/ice) of latent heat.

            I gave Zagoni/Miskolczi a very fair and square chance to clarify that tau/S_T issue and they squibbed it.

            End of story.

            Incidentally, this is all without also considering the truism that by ignoring the internal workings of the K term altogether, your and FM’s world view is irreparably narrowed by the failure to consider what happens to latent and sensible heat.

          • Anonymous

            Steve, comment removed. Much as I enjoy your robust debating style, please try not to step over the line.

          • Jan Pompe

            Just been to the shops; I thought mine was over the top too and was going to edit it, but you’ve beaten me to it :)

            I’ve been looking at this issue with HARTCODE since someone said he thought the regression in Figure 2 was too tight to be real. I’ve only done a few runs, and they really are that tight so far. I’m currently trying to find a way to detect outliers in the data without running 2000+ trials, so I can just look at them.

  • Jan Pompe

    Steve,”Someone once said here on this blog that Miskolczi Theory was really just an exercise in linear programming of global heat fluxes.”Yes it was and still is an idea I have the flux Equations 1 -10 in M7 are all simultaneous linear equations. I might yet explore that one day however your graph does not look like any linear programming (simplex method) charts that I've ever seen. That is not the issue that i would like to discuss here.Unless the Global Average Temperature is changing reasonably rapidly (not the .13 K/decade or ~4E-10/s shown in the UAH decadic trend) Fo(1-A) (as you have defined the absorbed insolation) = OLR regardless of albedo. It holds for all the planets with atmospheres to slow down the temperature changes as this spread sheet using data from NASA will show.”Planet” “SolConst” “Albedo” “S/4(1-A)” “Teff” “OLR” “Teq””Venus” 2613.9 0.75 163.37 231.7 163.42 327.65″Earth” 1367.6 0.31 237.28 254.3 237.14 278.66″Mars” 589.2 0.25 110.48 210.1 110.49 225.76″Jupiter” 50.5 0.34 8.29 110 8.3 122.15″Saturn” 14.9 0.34 2.45 81.1 2.45 90.03″Uranus” 3.71 0.3 0.65 58.2 0.65 63.6″Neptune” 1.51 0.29 0.27 46.6 0.27 50.8″Pluto” 0.89 0.5 0.11 37.5 0.11 44.51OLR = sigma * Teff ^4 and Teq = Teff/(1-A)^.25 = (S/(4 * sigma))^.25

  • http://www.ecoengineers.com/ Steve Short

    A constructive response at long last! Hooray!Yes, I am totally familiar with this data. And I fully agree with you that on long timescales Fo(1-A) (as I have defined the absorbed insolation) = OLR regardless of albedo.However, I am considering the situation where changes such as the all important albedo, largely driven by (high and low) cloud cover – even global cloud cover is changing on much shorter timescales – minutes to days.What this means for the system is that at any point in time incoming SW will get out of balance with the synchronously occurring OLR.By definition the purpose of that system is to drive back to the Fo(1-A)=OLR equivalence for the obvious reasons you and I both know well i.e. that equilibrium thermodynamics will eventually apply. The data you reproduce above is simply a refection of that fact.Bottom line, Jan. Thermodynamic equilibrium is not instantaneous.I am sure that you would agree that some of the drivers of that return to balance i.e. ET (which becomes LH), and SH in particular have there own built in characteristic times of action. In probabilistic thermodynamics this becomes the e-folding time of the (re)action.This means that if we have some idea how these slower acting fluxes move in response to an overall system temporary lack of equilibrium we can get some idea of how it responds when driving back to thermodynamic equilibrium.I deal with chemothermodynamics almost every day of the week. If you want to figure out how a system gets back to chemical equilibrium you study the way in which the finer, key drivers (must) respond. I have even built real world hydromet plant and on the basis of theoretical and bench studies using this type of approach.If you look closely at my little spreadsheet model you will see there is only an induced temporary heat imbalance between Fo(1-A) and OLR at the max albedo (0.40) end of about 19.6 W/m^2 and at the minimum albedo end of 19.5 W/m^2 i.e. 
<10% of the high albedo end values and <8% at the low albedo (0.15) end. Given that average albedo for Earth at least is only likely to vary about a smaller zone – say 0.35 – 0.25 mostly, the offsets between F0(1-A) and OLR generated by this approach are fairly trivial IMHO. By further mining of the literature you would probably be able to eliminate them altogether but the important information which this spreadsheet provides on the key role of the minor drivers to equilibrium would still be apparent. When you said ages ago that the Miskolczi approach was essentially an exercise in linear programming of the equations governing the behavior of all the component heat fluxes in the system, I actually agreed with that insight – if this really was an insight in your or Ferenc's mind?But that approach was bound to fail with Miskolczi because he had:(1) not effectively drawn out all of the minor fluxes notably those within the K term i.e. LH + SH into his ‘system’; nor (2) produced any way of varying the major fluxes in and out (SW insolation and OLR) with (say) the all important albedo (or cloud cover); nor (3) produced any way of doing the same with those minor but more important fluxes which ‘adjust’ the system i.e. LH, SH, S_T and F, in response to perturbations such as variation in albedo (the major one), incoming SW, absorption and scattering (of S_T and/or F) etc. But now that I’ve shown one straightforward, relatively simple way in which we can achieve that exercise in linear programming. It is by no means perfect but it is a very useful start. My response to Dan L. in the Plimer Review thread explains exactly how I see this working.

  • http://www.ecoengineers.com/ Steve Short

    PS: I have been doing Simplex Optimization and many other methods of chemical engineered system optimization for a little over 30 years…..

  • Jan Pompe

    Steve you still don't quite get it.You have to to physically realistic figures into your spreadsheet for it to be meaningful. You can't mix up instantaneous effects with averages the way you do and expect to get reasonable results. You cannot perturb albedo with also perturbing absorptivity (1-albedo) and and emissivity at the same time at the same rate. The effect of this is that as you decrease albedo you increase absorptivity and emissivity and Fo(1 – A) = OLR no matter what you do. Your linear program should at the very least reflect this and it doesn't. Fo whether as you define it Solar constant/4 is an average at the very least over a 24 hour period and more precisely over a year because in instantaneous solar constant varies by ~90 Wm/^2 over the course of a year. OLR is also an average, so are Su, St, Tg, Ta. K (LH + SH). What you are doing does not make sense physically or mathematically. There are no conservation laws for radiation intensity and the equations depend on the conservation of energy so we must count joules and working with averages (integrals) over the same period is way of achieving this.You might instantly (almost) feel cooler when a cloud passes over but at the same instant Ed rises (that's how we detect clouds with radiometers) but we still feel cooler and with that increased Ed we also get decreased St and OLR and Fo(1-A) and on the average it makes no difference to Fo(1-A) = OLR.

  • http://www.ecoengineers.com/ Steve Short

    “Steve you still don't quite get it.””What you are doing does not make sense physically or mathematically. “”The effect of this is that as you decrease albedo you increase absorptivity and emissivity and Fo(1 – A) = OLR no matter what you do.” As if I didn't take that as a given. This shows you haven't even bothered to look closely at the spreadsheet!Sounds to me like you have never even met finite element or finite difference calculus. Just for once try making small incremental changes to something and see what happens. The fact is, much of the interior workings of the climate system is operating at the level of non-equilibrium thermodynamics. That is why we have the whole school of MEP. Go away and read the Red Book please.It is ironic that you should be trying to lecture me on what does or does not make sense mathematically when you remain utterly brainwashed by the numerological drivel that is Miskolczi Theory.But I am not going to waste time with you – been there done that just too many times.Here is a version knocked up just for you where the Fo(1-A) = OLR balance is forced for each and every value of albedo. I suggest you examine it closely – especially in terms of what that implies – but doubt you will. https://download.yousendit.com/UmNJblRxa0R3NUtG…Mind you – most likely you won't get it this time around either! You are even too lazy to mine the literature systematically to get a good appreciation of actual parameter ranges and limits. 
In a humorous aside I note that forces Misckolczi's phony pseudo-tau to probably average about 1.947±0.221 with a minimum still passing through about 0.3 albedo.Ironic isn't it given Miskolczi's sleight-of-hand with clear sky and all sky – something that zipped through right under your pebble-lensed 'radar'.One of these days you just might like to get around to asking yourself why many people who have had a long and successful career in science find your thinking so rigid and simplistic – quite apart from the endlessly tiresome semantic wrigglings.No, Jan, it is you that doesn't get it. Patronising waffle does not an argument make!

  • Jan Pompe

    Steve,”This shows you haven't even bothered to look closely at the spreadsheet!”I have and it is plain you are computing things that you don't know how to.You have Aa decreasing with decreasing cloud cover :- that's fair enough but you have Ed decreasing with increasing cloud cover this does not happen. Go out and measure it don't take my word for it.

  • http://www.ecoengineers.com/ Steve Short

    “You have Aa decreasing with decreasing cloud cover – that's fair enough – but you have Ed decreasing with increasing cloud cover, and this does not happen. Go out and measure it; don't take my word for it.”

    In a word, NO. If you look closely at the spreadsheet I just sent you, E_D does indeed increase weakly for increasing cloud cover ~64% (albedo ~0.3).

    But the fact is the greater part of E_D is made up of heat re-radiated from the atmosphere back to BOA, i.e. it is a large fraction of A_A. In my little model it is set at 62.5% of A_A, i.e. to conform with the consensus. Now you may well reject the consensus view that A_A ~356 and E_D ~333 W/m^2 at the average albedo of ~0.3 (clouds ~64%), but that is just you, and your views on the Kirchhoff crap don't count, right?

    Now, the beauty of this spreadsheet is that IF you are correct then something else must increase E_D markedly with increasing albedo. This is because for small increments away from/above the average albedo (0.3 –> 0.4) we can assume the 62.5% fraction of A_A holds (after all it is essentially geometrically controlled, right?).

    The next most logical candidate is of course Latent Heat (LH). But there again one runs into a problem. Too much LH_D returned to BOA necessarily implies too much LH_U carried through TOA. But this blows out your OLR so that it no longer matches the SW insolation [Fo(1-A)].

    Whatya gonna do? You can either control LH magnitude overall or change the fraction leaving TOA. None of the other minor terms like F, S_T, SH_U, SH_D are gonna do it for you. Either way you better be careful you don't step outside the boundaries of what is in the literature. Bingo. I already did that to give you a spreadsheet which matches SW insolation to OLR at albedo = 0.4.

    Ergo, where is all that heat gonna come from to give you a much bigger E_D? Sorry, I don't accept visceral impressions (or Miskolczi numerology).

    PS: Plus, to make things harder for you, you are not even allowed to mess around with LH – after all it's just not Miskolczian!

  • Jan Pompe

    “If you look closely at the spreadsheet I just sent you, E_D does indeed increase weakly for increasing cloud cover ~64% (albedo ~0.3).”

    Sure Steve, anything you say:

    %cloud cover   E_D     S_T
    100            334      5
    82             328.4   25
    64             331.9   40
    46             348.5   47
    28             363.8   55
    10             380.6   62

    MODTRAN will show zero transmittance (S_T = 0) for 100% cloud cover, so what did you do – pick numbers out of a hat to suit you?

  • http://www.ecoengineers.com/ Steve Short

    “MODTRAN will show zero transmittance (S_T = 0) for 100% cloud cover, so what did you do – pick numbers out of a hat to suit you?”

    For S_T I picked numbers that still fit within the literature ranges to force Fo(1-A) = OLR at 0.4 albedo (as you wished). But please, go ahead and change 5 to zero at 0.4 albedo (and to ~20 at ~0.35 albedo too if you wish) and see where that gets you!

    Nowhere. Zilch. Nada. Why? Because E_D is not linked to S_T.

    However, you could raise LH to compensate (just about the only thing you can do and be within the mainstream literature milieu), and an LH of ~91 would allow E_D to get to about 342 W/m^2 at 0.4 albedo. Still a bit up from the ~333 at ambient, but still a long way down from the ~389 of A_A.

    I know exactly why you hate this spreadsheet. It only needs a tiny tweak to show how riddled with contradictions your blinkered Miskolczian world view really is – even with the Fo(1-A) = OLR equality forced at every albedo.

    So glad I sucked you in! You're munching on the “phyto-plankton” already and too dumb to know it.

  • Jan Pompe

    “But please, go ahead and change 5 to zero…”

    Steve, that's only one of the things wrong with the amateurish dreck that you have been producing lately.

  • http://www.ecoengineers.com/ Steve Short

    Yes, you really have to watch out for that nasty DRECK stuff. I mean, even Dark Matter and Dark Energy is probably comprised of DRECK! Truth be told, Saddam Hussein was probably stockpiling – you guessed it – DRECK!

    But one could conceivably wake up on some obscure planet in the galaxy with a mere 6.8 billion inhabitants, only to find oneself to be the immaculately conceived offspring of the Prophet Ferenc, who had just proclaimed the new religion of the Great Global Golden Flying Pig (GGGFP). One would then be obliged to be, at all times, the utterly coherent, flawless High Priest of the new Messiah, an immaculate authority on the Truth! But then one might look around and suddenly find that the entire congregation of this new religion numbered only, uh, uh,…. three? To make matters worse, millions or more of the remaining billions might be rushing off to join yet another new religion – the Deep Green Revelation of the Big Hot Water Bottle (DGRBHWB). How depressing! What a bummer!

    But, whew, at least that would only be a bad dream – nothing at all in comparison with 'DRECK'!

  • Jan Pompe

    “”A Diagnosis Huh?”

    Yes Of Your Behaviour Which Has Been Anything But Professional.”

    “Yes, you really have to watch out for that nasty DRECK stuff. I mean, even Dark Matter and Dark Energy is probably comprised of DRECK! Truth be told Saddam Hussein was probably stockpiling – you guessed it – DRECK!”

    Ladies and gentlemen of the jury, I rest my case.

    • http://www.ecoengineers.com/ Steve Short

      Yeah, it’s really tough when you are being forced to pay attention to a simple Excel spreadsheet.

      Even that silly Roy Spencer wants us to look at Excel spreadsheets:

      http://www.drroyspencer.com/2009/05/global-warming-causing-carbon-dioxide-increases-a-simple-model/

      How tedious is that!

      “I decided to run a simple model in which the change in atmospheric CO2 with time is a function of sea surface temperature anomaly. The model equation looks like this:

      delta[CO2]/delta[t] = a*SST + b*Anthro

      Which simply says that the change in atmospheric CO2 with time is proportional to some combination of the SST anomaly and the anthropogenic (manmade) CO2 source. I then ran the model in an Excel spreadsheet and adjusted an “a” and “b” coefficients until the model response looked like the observed record of yearly CO2 accumulation rate at Mauna Loa.

      It didn’t take long to find a model that did a pretty good job (a = 4.6 ppm/yr per deg. C; b=0.1), as the following graph shows….

      …..What could be causing long-term warming of the oceans? My first choice for a mechanism would be a slight decrease in oceanic cloud cover. There is no way to rule this out observationally because ”

      Get that: “…our measurements of global cloud cover over the last 50 to 100 years are nowhere near good enough.”

      This also means our measurements of albedo variation over the last 50 to 100 years are nowhere near good enough.

      What is the difference between Roy Spencer investigating the underlying mechanisms of the rise in CO2 using a simple Excel spreadsheet – and coming to a conclusion about the anthropogenic and non-anthropogenic components and related issues of cloud cover – and my examination, using a simple Excel spreadsheet, of the effects of the published literature ranges of Latent Heat, Sensible Heat, Absorbed SW IR and Absorbed LW IR versus albedo and cloud cover, while also trying to detect if there is any merit to the so-called Miskolczi Theory?

      Nothing, essentially nothing.
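      The quoted model is simple enough to sketch in a few lines. Here the a and b coefficients are the ones Spencer quotes; the SST anomaly and anthropogenic source series below are invented placeholders purely to show the mechanics, not real data:

```python
# Sketch of the model Spencer describes: dCO2/dt = a*SST + b*Anthro.
# a and b are his quoted fitted values; the input series are made up
# for illustration only (real inputs would be observed SST anomalies
# and fossil-fuel emission estimates).
a = 4.6   # ppm/yr per deg C (Spencer's quoted fit)
b = 0.1   # fraction of the anthropogenic source (his quoted fit)

sst_anomaly = [0.1, 0.2, 0.15, 0.3, 0.25]     # deg C, placeholder series
anthro      = [10.0, 10.5, 11.0, 11.5, 12.0]  # ppm/yr equivalent, placeholder

# Yearly CO2 accumulation rate implied by the model
co2_rate = [a * s + b * e for s, e in zip(sst_anomaly, anthro)]
for year, rate in enumerate(co2_rate):
    print(f"year {year}: dCO2/dt = {rate:.2f} ppm/yr")
```

      The fitting step Spencer describes is then just adjusting a and b until this series tracks the Mauna Loa accumulation record.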

      The only difference is that Jan Pompe doesn’t have the guts or the context to take on a heavyweight like Roy Spencer and tell him his Excel spreadsheet is amateurish dreck and that only he (Pompe) has the revealed truth on greenhouse (i.e. the Miskolczi truth) and can ‘prove’ it….

      In his late night eccentric dreams maybe.

      Interestingly, it only took a simple Excel spreadsheet to show, at the end of the day that:

      A_A does NOT equal E_D (and both rises and falls with increasing and decreasing cloud cover away from the average 64%);

      Miskolczi’s ‘tau’ WASN’T EVER a real LW IR absorption tau at all but a sleight-of-hand combination of absorbed LW IR and that emitted to TOA off the tops of clouds; and

      There is no climate homeostasis WHICH WORKS to create a constant LW IR tau (regardless of how it is defined),

      hence Miskolczi Theory is a crock and its proponents amateurs…

      It all boils down to an inability to use even a simple spreadsheet.
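      The tau arithmetic itself fits in a few spreadsheet cells. A minimal sketch, assuming the usual Beer–Lambert-style definition tau = ln(S_U/S_T) and taking S_U ≈ 396 W/m^2 with the two S_T figures this dispute turns on (the consensus all-sky ~40 W/m^2 versus a Miskolczi-style ~61 W/m^2; these exact inputs are illustrative round numbers, not from either side’s code):

```python
from math import log

# Flux-ratio "tau": the optical depth implied by surface upward LW (S_U)
# and the transmitted LW reaching TOA (S_T), via S_T = S_U * exp(-tau).
def lw_tau(s_u, s_t):
    return log(s_u / s_t)

s_u = 396.0  # W/m^2, consensus-style surface LW emission (assumed)
print(f"consensus all-sky S_T ~40: tau = {lw_tau(s_u, 40.0):.2f}")  # ~2.29
print(f"Miskolczi-style S_T ~61:   tau = {lw_tau(s_u, 61.0):.2f}")  # ~1.87
```

      Whichever side one takes, the point is mechanical: moving S_T between the all-sky and Miskolczi-style values shifts tau by ~0.4, which is the entire gap between ~1.87 and the consensus figure.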

      • Jan Pompe

        “The only difference is that Jan Pompe doesn’t have the guts or the context to take on a heavyweight like Roy Spencer and tell him his Excel spreadsheet is amateurish dreck”

        Is there any reason I should? He seems to know what he is doing.

        I had a look at some of that stuff about 1.5 years ago and posted a bit of it up at Jen’s blog. I was more interested in the divergence between the growth rate of CO2 and annual fossil fuel usage.
        http://www.jennifermarohasy.com/blog/archives/002789.html

        I didn’t use Excel because I haven’t had it since I evicted Windows from my network in 1994, and I generally find spreadsheets a pain, but that is purely a matter of personal preference. If other people want to use them that’s fine; they are just as good at garbage in, garbage out as any other program.

        Still on another matter this was a long time coming:
        http://www.drroyspencer.com/2009/04/when-is-positive-feedback-really-negative-feedback/

        IIRC I started on him when Pielke Sr was still blogging; I gave up when neither really answered my questions, but other engineers and one neuro-physiologist (whom you have also gone out of your way to insult) kept at him.

        It was however rather pleasing to finally read that when Ken Gregory pointed me to that article.

        Steve, it’s the stuff you put into your spreadsheet that is amateurish, and your behaviour that is amateurish and boorish.

        • http://www.ecoengineers.com/ Steve Short

          “Steve it’s the stuff you put into your spreadsheet that is amateurish…”

          Untrue.

          (1) The values/range of S_T are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature.

          (2) The values/ranges of S_U are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature.

          (3) The values/ranges of SH are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature. The values/ranges of SH_U are inferable from the mainstream literature.

          (4) The values/ranges of LH are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature. The values/ranges of LH_U are inferable from the mainstream literature.

          (5) The values/ranges of F are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature.

          The spreadsheet has practical validity for simply identifying the constraints that must apply to the global heat balance over a range of albedos/cloud covers IF the above conditions apply.

          Conversely, you keep wanting to base your highly exclusive dogmas on a collection of what are essentially persistent and unproven assertions (A_A = E_D, S_U = 2E_U, Miskolczi’s ‘surface continuity’, K and its component parts LH and SH don’t really matter, etc., etc.) that bear only a weak relation to the data which appears in the mainstream literature on climate.

          It just doesn’t work. The evidence shows virtually no one believes you, Ferenc or Zagoni etc., no matter how many supremely silly ‘radiative homilies’ you may like to spout.

          When you say, in reference to Roy Spencer: “IIRC I started on him when Pielke Sr was still blogging gave up when neither really answered my questions but other engineers …..” all you are really doing is revealing your silly arrogance.

          You say: “I started on him…”!!! Like…..Roy Spencer?

          And you accuse others of being insulting. You have to be kidding yourself surely? What nuttiness is this!

          To suggest that somehow you had in the past needed to point out to Roy Spencer that the AGW consensus had redefined what positive feedback should be, and that somehow he had then subsequently ‘mended his ways’, is literally insane.

          Furthermore, I have never, ever met a fellow engineer before you that could or would say: “….since I evicted windows from my network (ha, ha network?!!!) in 1994, and I generally find spreadsheets a pain but that is purely a matter of personal preference. If other people want to use them that’s fine the are just as good at garbage in garbage out as any other program.”

          Spreadsheets have been the very stuff of systems engineering, chemical engineering, process control engineering, etc. – i.e. all kinds of technology – from the very happy day Lotus 1-2-3 appeared.

          This is NOT what any professional engineer would say and your persistent crude inferences that you are somehow operating like one are laughable in the extreme.

          You persistently confuse real science with your personal beliefs/dogma (whatever the hell it is), you resort to tit-for-tat threads of never-ending wriggling sophistry whenever push comes to shove, and you indulge yourself in all sorts of nutty fantasies like forming transient little clubs with real scientists or ‘correcting the experts’ e.g. Roy Spencer.

          Stick to the job you are (just) lucky enough to hold down.

          • Jan Pompe

            Steve,

            You simply fail to impress. Aa = Ed has strong empirical support, and the only way to calculate it from the empirical data is with LBL code. You won’t do it with your silly spreadsheet (it’s not the tool, it’s the way you use it). If you get a decreasing E_D with increasing cloud cover it’s just plain wrong, and you can test this for yourself with a pyrgeometer.

            “Stick to the job you are (just) lucky enough to hold down.”

            Just waiting for someone to bring you in Steve.

          • http://www.ecoengineers.com/ Steve Short

            “If you get a decreasing E_D with increasing cloud cover it’s just plain wrong and you can test this for yourself with a pyrgeometer.”

            No, once again it is your logic which is silly because you cannot (or will not) operate a simple heat balance spreadsheet.

            For an imposed balance between SW insolation [Fo(1-A)] and OLR at each and every increment in albedo, the spreadsheet suggests:

            (1) E_D rises from the current literature value of around 333 W/m^2 at average conditions (Albedo = 0.30, % all cloud ~64%) to ~340 W/m^2 at Albedo = 0.35; and again

            (2) E_D rises to ~342 W/m^2 at Albedo = 0.40.

            These estimates are based on the progressive reduction of S_T to zero at Albedo = 0.40.

            This is probably a conservative over-estimate (in the rate of reduction of S_T with increasing albedo), as the %cloud at Albedo = 0.40 is not 100% but about 92% (based on latest papers in Nature Geoscience etc., etc.), hence S_T is still non-zero at Albedo ~0.40.

            NB: these are NOT decreases in E_D with increasing albedo/% cloud but ARE actually increases.

            They may be more modest than you would like but for the increase in E_D with increasing albedo/cloud to be LARGER than this (i.e. to start matching A_A) would require that the amount of realized Latent Heat (LH) at high albedo/high % cloud did not rise as much as the mainstream literature would suggest, hence there would be less loss of LH at TOA than the literature would suggest (noting that SH_U has already hit ~zero at even lower albedos).
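            The arithmetic of the forced balance is what caps E_D here, and it is worth laying out. A minimal sketch, assuming Fo ≈ 341 W/m^2 (the usual quarter-solar-constant mean insolation, which this thread never states explicitly); it simply shows how much absorbed SW, and hence OLR, must fall across the albedo range being argued over:

```python
# Forced balance Fo*(1 - A) = OLR at each albedo, as in the spreadsheet
# argument. F0 ~341 W/m^2 (quarter solar constant) is an assumed input;
# the albedos are the ones quoted in the thread.
F0 = 341.0  # W/m^2, assumed mean TOA insolation

for albedo in (0.30, 0.35, 0.40):
    olr = F0 * (1.0 - albedo)  # OLR forced equal to absorbed SW
    print(f"albedo {albedo:.2f}: absorbed SW = OLR = {olr:.1f} W/m^2")
```

            Roughly 34 W/m^2 of OLR has to disappear between albedo 0.30 and 0.40, and that shrinking budget is the constraint within which any changes in E_D, S_T and LH must fit.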

            I note here that high albedo generally implies a very cloudy tropical band 5 deg. N – 5 deg. S.

            Start reducing the LH and LH_U terms under those circumstances (in a bid to increase E_D) and you will eliminate the possibility of Lindzen and Spencer’s Iris Effect etc. altogether.

            But go ahead – make my day – I see you are well experienced at taking a cane to poor Roy.

            The bottom line is that quantitatively you can’t have your cake and eat it too.

            The overall heat balance imposes significant limits on the rate of rise of E_D, i.e. heating of the ground/ocean, with increasing albedo/cloud.

            My spreadsheet shows that the average A_A over all albedos between about 0.15 (~22% cloud) and 0.40 (~92% cloud) is about 365±12 W/m^2 at the one standard deviation level.

            Similarly the average E_D is estimated to be about 351±18 W/m^2.

            These values are admittedly close and hence we can say that empirically it is likely that A_A ~ E_D over the full available albedo range.

            I don’t have a problem with that – never did, never will. But it does NOT reflect any general rule or law. If it did the remainder of our mainstream literature understanding of F, LH, SH and S_T would be significantly in error.

            As you have clearly stated, you are only interested in situations where the full heat balance applies at each and every instance of albedo. If you want that then you have to live with the constraints which the heat balances impose on you in the light of the mainstream literature data.

            Otherwise you have to have a good technical reason to reject the mainstream literature data. The record shows you actually don’t (other than the usual unbalanced statements that all of NASA is wrong, only FM knows the science, blah, blah….)

            Get any more snarkey and I’ll state very clearly here just what you are forced to do for a full time living, and it certainly ain’t an engineering career – right!

          • Jan Pompe

            “No, once again it is your logic which is silly because you cannot (or will not) operate a simple heat balance spreadsheet.”

            I think you are losing it Steve.

            I thought you were a scientist, so why don’t you, like the good scientist you think you are, check your results against real-world behaviour? It’s not difficult. You can see for yourself that E_D increases with cloud cover. In fact the thicker the clouds, the closer to S_U it becomes.

          • http://www.ecoengineers.com/ Steve Short

            Wrong logic, Jan (in fact no logic at all).

            The thicker the clouds the closer A_A to S_U

            After all: A_A = S_U – S_T

            Hence as S_T -> zero

            A_A -> S_U

            Even Ken re-stated that obvious truism some time back in an email – albeit missing that S_T is not quite zero at Albedo = 0.40

            This is all noting of course we are talking about the TRUE LW IR S_T not the dodgy FM S_T.

            I think you are losing it Jan.

            No one here has to indulge you in your circular logic that E_D must exactly = A_A.

          • Jan Pompe

            “Wrong logic, Jan (in fact no logic at all).”

            I agree it isn’t logic; it’s just measurement. I suggest you try it sometime.

          • http://www.ecoengineers.com/ Steve Short

            What did I say?

            Catch Jan out and the wriggling sophistry commences.

            Classic Pompe:

            No mainstream literature data, no logic – in fact in this case just a confessed bull-necked unwillingness to consider the heat balance of the whole system – followed by the usual deflection into his own eccentric little world of circular logic….

            PS: Stand by for the wriggling (yawn) – I’m off to lunch.

          • Jan Pompe

            Why bother with what the mainstream literature says (which you probably don’t understand) when you can go out and measure what is going on? You’ll soon see that what you measure and what your spreadsheet says are quite different.

            You really should try it sometime.

            Now, come to think of it, why are you having such difficulty understanding that opaque objects do not transmit light, even infra-red?

          • http://www.ecoengineers.com/ Steve Short

            “The thicker the clouds the closer A_A to S_U

            After all: A_A = S_U – S_T

            Hence as S_T -> zero

            A_A -> S_U”

            Pompe:

            “Now come to think of it why are you having such difficulty understanding that opaque objects do not transmit light even infra-red.”

            Pathetic. I suppose you were drooling as you typed?

            Too many years doing the night shift at the psych ward, you too can fly over the cuckoo’s nest.

          • Jan Pompe

            “Too many years doing the night shift at the psych ward, you too can fly over the cuckoo’s nest.”

            You’d think after all that time I’d recognise a potential client when I see one.

            I’m waiting for you.

            See you soon!!

          • http://www.ecoengineers.com/ Steve Short

            You’ll need a lot more pills to survive the wait. This is why you get up with an alarm clock and my buddies and I run a multi-million dollar consultancy.

  • Jan Pompe

    “”A Diagnosis Huh?”Yes Of Your Behaviour Which Has Been Anything But Professional.””Yes, you really have to watch out for that nasty DRECK stuff. I mean, even Dark Matter and Dark Energy is probably comprised of DRECK! Truth be told Saddam Hussein was probably stockpiling – you guessed it – DRECK!”Ladies and Gentlemen of the jury I rest my case”

  • http://www.ecoengineers.com/ Steve Short

    Yeah, it's really tough when you are being forced to pay attention to a simply Excel spreadsheet. Even that silly Roy Spencer wants us to look at Excel spreadsheets:http://www.drroyspencer.com/2009/05/global-warm…How tedious is that!”I decided to run a simple model in which the change in atmospheric CO2 with time is a function of sea surface temperature anomaly. The model equation looks like this:delta[CO2]/delta[t] = a*SST + b*AnthroWhich simply says that the change in atmospheric CO2 with time is proportional to some combination of the SST anomaly and the anthropogenic (manmade) CO2 source. I then ran the model in an Excel spreadsheet and adjusted an “a” and “b” coefficients until the model response looked like the observed record of yearly CO2 accumulation rate at Mauna Loa.It didn’t take long to find a model that did a pretty good job (a = 4.6 ppm/yr per deg. C; b=0.1), as the following graph shows………What could be causing long-term warming of the oceans? My first choice for a mechanism would be a slight decrease in oceanic cloud cover. There is no way to rule this out observationally because “Get that: “…our measurements of global cloud cover over the last 50 to 100 years are nowhere near good enough.”This also means our measurements of albedo variation over the last 50 to 100 years are nowhere near good enough.What is the difference between a Roy Spencer investigating the underlying mechanisms of the rise in CO2 using a simple Excel spreadsheet and coming to a conclusion about the anthropogenic and non-anthropogenic components and related issues of cloud cover and my examination, using a simple Excel spreadsheet, to examine the effects of the published literature base on ranges of Latent Heat, Sensible Heat, Absorbed SW IR and Absorbed LW IR versus albedo and cloud cover, while also trying to detect if there is any merit to the so-called Miskolczi Theory.Nothing, essentially nothing. 
The only difference is that Jan Pompe doesn't have the guts or the context to take on a heavy weight Roy Spencer and tell him his Excel spreadsheet is amateurish dreck and only he (Pompe) has the revealed truth on greenhouse (i.e. the Miskolczi truth) and can 'prove' it….In his late night eccentric dreams maybe.Interestingly, it only took a simple Excel spreadsheet to show, at the end of the day that:A_A does NOT equal E_D (and both rises and falls with increasing and decreasing cloud cover away from the average 64%);Miskolczi's 'tau' WASN'T EVER a real LW IR absorption tau at all but a slight-of-hand combination of absorbed LW IR and that emitted to TOA off the tops of clouds; andThere is no climate homeostasis WHICH WORKS to create a constant LW IR constant tau (regardless of how it is defined),hence Miskolczi Theory is a crock and its proponents amateurs…It all boils down to an inability to use even a simple spreadsheet.

  • Jan Pompe

    “he only difference is that Jan Pompe doesn't have the guts or the context to take on a heavy weight Roy Spencer and tell him his Excel spreadsheet is amateurish dreck “Is there any reason I should? He seems to know what he is doing.I had a look at some of that stuff about 1.5 years ago posted a bit of up at Jen's blog. I was more interested in the divergence between the growth rate of CO2 and annual fossil fuel usage. http://www.jennifermarohasy.com/blog/archives/0…I didn't use Excell because I didn't have it since I evicted windows from my network in 1994, and I generally find spreadsheets a pain but that is purely a matter of personal preference. If other people want to use them that's fine the are just as good at garbage in garbage out as any other program.Still on another matter this was a long time coming:http://www.drroyspencer.com/2009/04/when-is-pos…IIRC I started on him when Pielke Sr was still blogging gave up when neither really answered my questions but other engineers and one neuro-physiologist (whom you have also gone out of your way to insult) kept at him.It was however rather pleasing to finally read that when Ken Gregory pointed me to that article. Steve it's the stuff you put into your spreadsheet that is amateurish and your behaviour that is amateurish and boorish.

  • http://www.ecoengineers.com/ Steve Short

    “Steve it's the stuff you put into your spreadsheet that is amateurish…”Untrue.(1) The values/range of S_T are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature.(2) The values/ranges of S_U are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature.(3) The values/ranges of SH are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature. The values/ranges of SH_U are inferable from the mainstream literature.(4) The values/ranges of LH are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature. The values/ranges of LH_U are inferrable from the mainstream literature.(5) The values/ranges of F are accessible from the mainstream literature. They are used within the typical degrees of precision quoted in the literature. The values/ranges of F are inferrable from the mainstream literature.The spreadsheet has practical validity for simply identifying the constraints that must apply to the global heat balance over a range of albedos/cloud covers IF the above conditions apply.Conversely, you keep wanting to base your highly exclusive dogmas on a collection of what are essentially persistent and unproven assertions (A_A = E_D, S_U – 2E_U, Miskolczi's 'surface continuity', K and its component parts LH and SH don't really matter etc., etc) that bear only a weak relation to the data which appears in the mainstream literature on climate. It just doesn't work. 
The evidence shows virtually no one believes you, Ferenc or Zagoni etc., no matter how many supremely silly 'radiative homilies' you may like to spout.When you say, in reference to Roy Spencer: “IIRC I started on him when Pielke Sr was still blogging gave up when neither really answered my questions but other engineers …..” all you are really doing is revealing your silly arrogance. You say: “I started on him…”!!! Like…..Roy Spencer? And you accuse others of being insulting. You have to be kidding yourself surely? What nuttiness is this!To suggest that somehow you had in the past needed to point out to Roy Spencer that the AGW had redefined what positive feedback should be and that somehow he had then subsequently 'mended his ways' is literally insane.Furthermore, I have never, ever met a fellow engineer before you that could or would say: “….since I evicted windows from my network (ha, ha network?!!!) in 1994, and I generally find spreadsheets a pain but that is purely a matter of personal preference. If other people want to use them that's fine the are just as good at garbage in garbage out as any other program.”Spreadsheets have been the very stuff of systems engineering, chemical engineering, process control engineering etc. etc i.e. all kinds of technology from the very happy day Lotus 123 appeared.This is NOT what any professional engineer would say and your persistent crude inferences that you are somehow operating like one are laughable in the extreme. You persistently confuse real science with your personal beliefs/dogma (whatever the hell it is), you resort to tit-for-tat threads of never-ending wriggling sophistry whenever push comes to shove, and you indulge yourself in all sorts of nutty fantasies like forming transient little clubs with real scientists or 'correcting the experts' e.g. Roy Spencer.Stick to the job you are (just) lucky enough to hold down.

  • Jan Pompe

    Steve, Steve you simply fail to impress Aa=Ed has strong empirical support the only way to calculate it from the empirical data is with LBL code. You won't do it your silly spread sheet (it's not the tool it's the way you use it). If you get a decreasing E_D with increasing cloud cover it's just plain wrong and you can test this for yourself with a pyrgeometer. “Stick to the job you are (just) lucky enough to hold down.”Just waiting for someone to bring you in Steve.

  • http://www.ecoengineers.com/ Steve Short

    “If you get a decreasing E_D with increasing cloud cover it's just plain wrong and you can test this for yourself with a pyrgeometer.”No, once again it is your logic which is silly because you cannot (or will not) operate a simple heat balance spreadsheet.For an imposed balance between SW insolation [Fo(1-A)] and OLR at each and every increment in albedo, the spreadsheet suggests:(1) E_D rises from the current literature value of around 333 W/m^2 at average conditions (Albedo = 0.30, % all cloud ~64%) to ~340 W/m^ at Albedo = 0.35; and again(2) E_D rises to ~342 W/m^2 at Albedo = 0.40.These estimates are based on the progressive reduction of S_T to zero at Albedo = 0.40. This is probably a conservative over-estimate (in the rate of reduction of S_T with increasing albedo) as the %cloud at Albedo = 0.40 is not 100% but about 92% (based on latest papers in Nature Geoscience etc., etc.) hence S_T is still non-zero at Albedo ~0.40NB: these are NOT decreases in E_D with increasing albedo/% cloud but ARE actually increases.They may be more modest than you would like but for the increase in E_D with increasing albedo/cloud to be LARGER than this (i.e. to start matching A_A) would require that the amount of realized Latent Heat (LH) at high albedo/high % cloud did not rise as much as the mainstream literature would suggest, hence there would be less loss of LH at TOA than the literature would suggest (noting that SH_U has already hit ~zero at even lower albedos).I note here that high albedo generally implies a very cloud full extra-tropical band 5 deg. N – 5 deg. S. Start reducing the LH and LH-U terms under those circumstances (in a bid to increase E_D) and you will eliminate the possibility of Lindzen and Spencers's Iris Effect etc altogether. But go ahead – make my day – I see you are well experienced at taking a cane to poor Roy.The bottom line is that quantitatively you can't have your cake and eat it too. 
The overall heat balance imposes significant limits to the rate of rise of E_D i.re. heating of the ground/ocean with increasing albedo/cloud.My spreadsheet shows that the average A_A over all albedos between about 0.15 (~22% cloud) and 0.40 (~92% cloud) is about 365±12 W/m^ at the one standard deviation level. Similarly the average E_D is estimated to be about 351±18 W/m^2. These values are admittedly close and hence we can say that empirically it is likely that A_A ~ E_D over the full available albedo range.I don't have a problem with that – never did, never will. But it does NOT reflect any general rule or law. If it did the remainder of our mainstream literature understanding of F, LH, SH and S_T would be significantly in error.As you have clearly stated, you are only interested in situations where the full heat balance applies at each and every instance of albedo. If you want that then you have to live with the constraints which the heat balances impose on you in the light of the mainstream literature data.Otherwise you have to have a good technical reason to reject the mainstream literature data. The record shows you actually don't (other than the usual unbalanced statements that all of NASA is wrong, only FM knows the science, blah, bah….)Get any more snarkey and I'll state very clearly here just what you are forced do for a full time living and it certainly ain't an engineering career- right!

  • Jan Pompe

    “No, once again it is your logic which is silly because you cannot (or will not) operate a simple heat balance spreadsheet.”

    I think you are losing it Steve. I thought you were a scientist, so why don't you, like the good scientist you think you are, check your results against real world behaviour? It's not difficult. You can see for yourself that E_D increases with cloud cover. In fact the thicker the clouds, the closer to S_U it becomes.

  • http://www.ecoengineers.com/ Steve Short

    Wrong logic, Jan (in fact no logic at all). The thicker the clouds, the closer A_A to S_U. After all:

    A_A = S_U – S_T

    Hence as S_T -> zero, A_A -> S_U.

    Even Ken re-stated that obvious truism some time back in an email – albeit missing that S_T is not quite zero at Albedo = 0.40. This is all noting, of course, that we are talking about the TRUE LW IR S_T, not the dodgy FM S_T. I think you are losing it, Jan. No one here has to indulge you in your circular logic that E_D must exactly = A_A.

  • Jan Pompe

    “Wrong logic, Jan (in fact no logic at all).”

    I agree it isn't logic, it's just measurement. I suggest you try it sometime.

  • http://www.ecoengineers.com/ Steve Short

    What did I say? Catch Jan out and the wriggling sophistry commences. Classic Pompe: no mainstream literature data, no logic – in fact in this case just a confessed bull-necked unwillingness to consider the heat balance of the whole system – followed by the usual deflection into his own eccentric little world of circular logic….

    PS: Stand by for the wriggling (yawn) – I'm off to lunch.

  • Jan Pompe

    Why bother with what the mainstream literature says (which you probably don't understand) when you can go out and measure what is going on? You'll soon see that what you measure and what your spreadsheet says are quite different. You really should try it sometime.

    Now come to think of it, why are you having such difficulty understanding that opaque objects do not transmit light, even infra-red?

  • http://www.ecoengineers.com/ Steve Short

    “The thicker the clouds, the closer A_A to S_U. After all: A_A = S_U – S_T. Hence as S_T -> zero, A_A -> S_U.”

    Pompe: “Now come to think of it, why are you having such difficulty understanding that opaque objects do not transmit light, even infra-red?”

    Pathetic. I suppose you were drooling as you typed? Too many years doing the night shift at the psych ward, you too can fly over the cuckoo's nest.

  • Jan Pompe

    “Too many years doing the night shift at the psych ward, you too can fly over the cuckoo's nest.”

    You'd think after all that time I'd recognise a potential client when I see one. I'm waiting for you. See you soon!!

  • http://www.ecoengineers.com/ Steve Short

    You'll need a lot more pills to survive the wait. This is why you get up with an alarm clock while my buddies and I run a multi-million dollar consultancy.

  • Jan Pompe

    “my buddies and I run a multi-million dollar consultancy.”

    Yeah, I noticed – out of a converted garage in your basement.

  • http://www.ecoengineers.com/ Steve Short

    Yeah, some drive to work. Others simply walk downstairs in plaid slippers, fire up the Bose, sit down at the nearest workstation and sip latte while the rest of the grumpy old bastards come online.Oh it's hard.

  • Jan Pompe

    Oh yes, I remember the life well, only for me there were no stairs involved. I certainly never had to wait for anyone to come online. I did have to chase payment at times, chase work occasionally, pay someone out when there was not enough work about, and fire the incompetent who kept getting his feedback mixed up.

    I for one was glad to leave it behind in 2002 and roll over the super, then got bored and dusted off my nursing papers in 2003. I don't miss the old life a bit and I'm having fun; I'll probably do it until I'm 70.

    I don't know why you are even interested in trying to convince me of your point of view; in fact I don't really care. I'm really quite happy for you to go along believing that the microbes bootstrapped the world that made them. I really prefer the work I do now, that is, helping young folks that never really had a chance to a better life. Only a totally self-centred moron would belittle that work.

  • http://www.ecoengineers.com/ Steve Short

    So what? I mentor students frequently, support little kids on 2 continents and put $400/month into Medecins Sans Frontieres. We have both had to cope with the awful reality of losing a mature age child of our own. But all these things have nothing whatsoever to do with our interest in climate truth.

    All that sniping and snivelling doesn't cover the fact you've simply never bothered to check your overall global heat balances and want to pour shit on anyone who has a go. Just because you're a natural born rubbisher and you've somehow convinced yourself the sun shines out of FM's ass. Crikey.

  • Jan Pompe

    “All that sniping…”

    Steve, the only one sniping and pouring shit here is you. I was perfectly happy to leave you in your ignorance until you started to annoy me with your emails.

  • http://www.ecoengineers.com/ Steve Short
  • Jan Pompe

    Yawn!!

  • http://www.ecoengineers.com/ Steve Short

    Toss off elsewhere then.

  • Jan Pompe

    “Toss off elsewhere then.”

    Again Steve you only succeed in showing us what an unpleasant fellow you are. I'm not going anywhere.

  • http://www.ecoengineers.com/ Steve Short

    Here is just one example of comprehensive work to make regional annual energy budgets – in this case for the Mackenzie Basin in Western Canada.

    All atmospheric energy flux terms (W/m^2) have been multiplied by (8.64 X 10^4 s/day)/(Cp Ps/g), where Ps = monthly surface pressure, to provide normalized values in units of K/day. The surface energy terms are also multiplied by a constant atmospheric normalization (i.e., CvH = Cp(Ps/g), where H = depth of soil layer) in order to provide values in units of K/day so that they can easily be compared to their atmospheric counterparts. This is a common approach when comparing regional scale energy budgets.

    For mean annual cloud covers ranging from 48.4 to 67.3%, i.e. a range of 18.9%, annual basin averages for:

    (1) the normalized LW IR down at BOA, i.e. in Miskolczi terminology E_D, ranged from 2.11 to 2.36 K/day and was only WEAKLY correlated to rising cloud cover; and

    (2) the LW IR up at BOA, i.e. in Miskolczi terminology A_A, ranged from 2.62 to 2.83 K/day and was uncorrelated to cloud cover.

    There was strictly no evidence that A_A = E_D, and A_A was invariably > E_D.

    http://www.usask.ca/geography/MAGS/WEBS/html/da…

    How many nails need to be hammered into the A_A = E_D coffin?
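[Ed.: the normalization quoted above can be sketched numerically. The Cp, Ps and g values below are generic standard-atmosphere assumptions, not taken from the Mackenzie Basin papers themselves, which used monthly surface pressures.]

```python
# Sketch of the stated normalization: multiply a flux in W/m^2 by
# (8.64e4 s/day) / (Cp * Ps / g) to get a column heating rate in K/day.
# Cp, Ps and g here are standard-atmosphere assumptions for illustration.

CP = 1004.0       # specific heat of air at constant pressure, J/(kg K)
PS = 101325.0     # surface pressure, Pa
G = 9.81          # gravitational acceleration, m/s^2
SECONDS_PER_DAY = 8.64e4

def flux_to_k_per_day(flux_w_m2):
    """Convert a flux (W/m^2) to a whole-column heating rate (K/day)."""
    column_heat_capacity = CP * PS / G   # J/(m^2 K) for the full air column
    return flux_w_m2 * SECONDS_PER_DAY / column_heat_capacity

# The ~333 W/m^2 literature-mean E_D corresponds to roughly 2.8 K/day,
# the same order as the 2.11-2.36 K/day basin values quoted above.
print(flux_to_k_per_day(333.0))  # ~2.77 K/day
```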

  • Jan Pompe

    “(1) the normalized LW IR down at BOA, i.e. in Miskolczi terminology E_D, ranged from 2.11 to 2.36 K/day and was only WEAKLY correlated to rising cloud cover; and (2) the LW IR up at BOA, i.e. in Miskolczi terminology A_A, ranged from 2.62 to 2.83 K/day and was uncorrelated to cloud cover.”

    No measure of A_A in that lot, so I think you just nailed your thumb to the coffin.

    This morning LWU = 394.91 and LWD = 284.52 with partial cloud – no A_A in that either, apart from E_D. Why no A_A? I can't measure it with a radiometer, and if I had an interferometer I still could not measure it. Neither could they with the methods they used.

  • http://www.ecoengineers.com/ Steve Short

    “No measure of Aa in that lot so I think you just nailed your thumb to the coffin.”

    Now you appear to be trying to say that A_A does not = S_U – S_T and hence there is “no A_A there”.

    The lies never end with you, do they.

  • davids99us

    Steve, comment removed. Much as I enjoy your robust debating style, please try not to step over the line.

  • Jan Pompe

    Just been to the shops – thought mine was over the top too and was going to edit it, but you've beaten me to it :)

    I've been looking at this issue with HARTCODE since someone said he thought the regression in figure 2 was too tight to be real. I've only done a few and they really are that tight so far. I'm currently trying to find a way to detect outliers in the data without running 2000+ trials, so I can just look at them.

  • http://www.ecoengineers.com/ Steve Short

    “No measure of Aa in that lot so I think you just nailed your thumb to the coffin.”

    Once again untrue. There is absolutely no mystery to determining A_A here. We know from the mainstream literature that at average conditions, i.e. at Albedo = 0.30, cloud cover ~62%, S_U = 396 W/m^2 and S_T = 40 W/m^2, i.e. A_A is almost exactly 90% of S_U.

    For the 4 separate Mackenzie River Basin studies (covering different periods and multi-year durations) in which mean annual S_U was found to be 2.79, 2.78, 2.62 and 2.83 K/d for mean annual cloud covers of 48.4, 55.7, 60.3 and 67.3% respectively, this simply means that A_A was very close to 2.51, 2.50, 2.36 and 2.55 K/d respectively. Very minor and relatively trivial adjustments can also be made to these numbers to account for the transparency window provided by the slightly different (from ~62%) mean cloud covers.

    But, as usual, the bottom line is that these numbers are still well in excess of the equivalent E_D numbers of 2.21, 2.26, 2.11 and 2.36 K/d.

    As I said, A_A does not = E_D, and E_D (which was measured directly and quoted explicitly) correlates only weakly with percent cloud cover. I haven't nailed my thumb to anything here! That statement is just the primary school level debating flim flam we can expect from Jan.

    As for the comment about not being able to measure A_A at ground level, the simple answer is: so what? Just more noise from the flim flam man. The four Mackenzie River Basin studies used a mix of surface stations, radiosondes and satellite data.
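[Ed.: the A_A estimate described above amounts to applying a fixed window fraction. A sketch of that arithmetic follows; the 40/396 fraction comes from the comment itself, and applying it unchanged to every basin value is the stated approximation.]

```python
# Sketch of the estimate above: at mean conditions S_U = 396 and
# S_T = 40 W/m^2, so A_A = S_U - S_T is ~90% of S_U. The same fixed
# fraction is then applied to the basin S_U values (in K/day), which is
# the approximation the comment describes.

WINDOW_FRACTION = 40.0 / 396.0   # transmitted share S_T / S_U at mean conditions

def a_a_from_s_u(s_u):
    """Estimate absorbed upward LW: A_A = S_U * (1 - S_T/S_U)."""
    return s_u * (1.0 - WINDOW_FRACTION)

for s_u in (2.79, 2.78, 2.62, 2.83):   # mean annual S_U, K/d
    print(a_a_from_s_u(s_u))
# ~2.51, 2.50, 2.36 and 2.54 K/d (the comment rounds the last to 2.55)
```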

  • Jan Pompe

    Steve, there is no evidence that A_A has actually been measured by the team creating the data. In fact they can't, with the methods they use. You are making the 40 W/m^2 assumption based on an admitted ad hoc calculation, which is in turn based on the assumption that the only transmittance is in the IR window. Your argument is circular.

  • http://www.ecoengineers.com/ Steve Short

    I don't think so. If you read the 20 – 25 papers on the global heat flux balances over the last 10 years or so, as I have done, in my view any reasonable person with an adequate education would not conclude that the estimations of the BOA -> TOA transmission of LW IR were ad hoc. Ergo I simply don't believe you, and I don't think the vast majority of scientists who could follow the text of those papers would think so either. You are out on a limb. Your accusation of 'ad hocery' is totally unproven outside of your own head.

    As you have pointed out yourself – it's either transparent or it's opaque (e.g. clouds). The variations in transparency with specific humidity are insufficient to outweigh the bulk variation in S_U with changing SW insolation or (at high or low albedos) the overall window size.

    IMHO this permanent confusion on your part arose because Miskolczi has not actually been measuring just the true BOA -> TOA transmitted S_T but has also been mixing something else in with it. The most likely candidate seems to be LW IR emitted off the tops of clouds during the realization (water/ice) of latent heat. I gave Zagoni/Miskolczi a very fair and square chance to clarify that tau/S_T issue and they squibbed it. End of story.

    Incidentally, this is all without also considering the truism that by ignoring the internal workings of the K term altogether, your and FM's world view is irreparably narrowed by the failure to consider what happens to latent and sensible heat.

  • Jan Pompe

    I had a look at what MODTRAN calculates for transmittance for no cloud mid-latitude summer with CO2 concentration 375 ppmv.
    a cut and paste from the text output follows

    0INTEGRATED ABSORPTION FROM 100 TO 1500 CM-1 = 1140.11 CM-1
    AVERAGE TRANSMITTANCE =0.1856

    0INTEGRATED RADIANCE = 8.906E-03 WATTS CM-2 STER-1
    MINIMUM RADIANCE = 4.058E-07 WATTS CM-2 STER-1 (CM-1)-1 AT 1458.0 CM-1
    MAXIMUM RADIANCE = 1.256E-05 WATTS CM-2 STER-1 (CM-1)-1 AT 560.0 CM-1
    BOUNDARY TEMPERATURE = 294.20 K
    BOUNDARY EMISSIVITY = 0.980
    0 CARD 5 ***** 0

    Note transmittance Ta = .1856 so transmitted flux is Su*Ta = 416.3 * .1856 = 77.3 W/m^2 so Aa = 416.3 – 77.3 = 339 W/m^2.

    now putting some cumulus cloud into the mix:

    0INTEGRATED ABSORPTION FROM 100 TO 1500 CM-1 = 1400.00 CM-1
    AVERAGE TRANSMITTANCE =0.0000

    0INTEGRATED RADIANCE = 8.029E-03 WATTS CM-2 STER-1
    MINIMUM RADIANCE = 3.997E-07 WATTS CM-2 STER-1 (CM-1)-1 AT 1458.0 CM-1
    MAXIMUM RADIANCE = 1.173E-05 WATTS CM-2 STER-1 (CM-1)-1 AT 532.0 CM-1
    BOUNDARY TEMPERATURE = 294.20 K
    BOUNDARY EMISSIVITY = 0.980
    0 CARD 5 ***** 0

    and average transmittance is 0, zilch, nada, zippidydo.
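[Ed.: the arithmetic in the two MODTRAN runs above is just S_T = S_U * Ta and A_A = S_U - S_T. A sketch, taking S_U = 416.3 W/m^2 as in the comment:]

```python
# Sketch of the split used above: surface emission S_U divides into a
# transmitted part S_T = S_U * Ta and an absorbed part A_A = S_U - S_T.
# S_U = 416.3 W/m^2 and the two Ta values come from the MODTRAN output
# quoted in the comment.

def absorbed(s_u, ta):
    """Atmospheric absorption A_A of surface emission, given transmittance Ta."""
    s_t = s_u * ta       # flux escaping directly through the window
    return s_u - s_t     # the remainder is absorbed by the atmosphere

print(416.3 * 0.1856)          # S_T ~77.3 W/m^2, clear sky
print(absorbed(416.3, 0.1856)) # A_A ~339 W/m^2, clear sky
print(absorbed(416.3, 0.0))    # with cumulus Ta = 0, so A_A = S_U = 416.3
```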

    • http://www.ecoengineers.com/ Steve Short

      As they say, maybe it’s all a case of garbage in garbage out.

      Given that global albedo for a cloud-free mid-summer scenario is about 0.07, kindly explain how you get from an incoming SW insolation over 310 W/m^2 to an S_U of only 416 W/m^2?

      • Jan Pompe

        “As they say, maybe it’s all a case of garbage in garbage out.”

        Well then Steve, I’d suggest you take up the cudgel with David and Jeremy Archer and their friends at RealClimate.

        • http://www.ecoengineers.com/ Steve Short

          Gotcha!

          • Jan Pompe

            Delusional

          • http://www.ecoengineers.com/ Steve Short

            “Delusional”

            David, I object to that as an offensive remark which has nothing to do with the question posed.

            All Pompe has to do is fulfill the simple request:

            “Given that global albedo for a cloud-free mid-summer scenario is about 0.07, kindly explain how you get from an incoming SW insolation over 310 W/m^2 to an S_U of only 416 W/m^2?”

          • Jan Pompe

            Steve your “Gotcha” is clearly delusional.

            If you have a problem with the results of David & Jeremy Archer’s tool and the data they put into it you need to take it up with them. I have no control over that.

            If Steve has not worked out yet that
            a) there are no known conservation laws for radiation intensity, and
            b) Su is not the net flux leaving the surface.
            by now then I can’t help him.

          • http://www.ecoengineers.com/ Steve Short

            “If Steve has not worked out yet that
            a) there are no known conservation laws for radiation intensity, and
            b) Su is not the net flux leaving the surface.
            by now then I can’t help him.”

            It’s all in your head alone.

          • Jan Pompe

            “It’s all in your head alone.”

            Then I’m sure you can quote the law of conservation of thermal radiation.

            Let’s have it.

            you can also show that Su is the net flux leaving the surface.

            Do it.

          • Jan Pompe

            Furthermore Steve, if you can’t take it, don’t dish it out.

            Would you like a list of every objectionable infantile personal attack that you’ve made, not only to me but to others as well? Then I’ll be only too happy to do it.

            It’s high time that YOU grew up, stop snivelling and start apologising.

            I for one find your circular arguments ad nauseam tiresome.

          • http://www.ecoengineers.com/ Steve Short

            Insane.

          • Jan Pompe

            “Insane.”

            Do you honestly believe that you are qualified to make such a diagnosis?

            I don’t.

            David might call your debating technique “robust” I would call it just plain rude.

          • http://www.ecoengineers.com/ Steve Short

            “Delusional”???

            A little homily about pots and black kettles comes to mind.

            Rather than continually asking me to jump through hoops which are based on silly fantasies within your own mind (about what I do or do not know) – another crude ‘technique’ which never worked with Neal King, BPL, Nick Stokes etc. and won’t work with me – you need to ask yourself the following:

            (1) Just why do you have to resort to RealClimate as your ‘ultimate authority’ in the matter of cloud-free summer day S_U, and hence surface temperature, in the context where you believe there is no surface discontinuity and A_A supposedly = E_D – an utterly bizarre (rational?) convergence of views if ever there was one; and

            (2) how would you explain the SSTs identified from paleothermometers during the (super-greenhouse) conditions which occurred sporadically in the Cretaceous and Eocene, when CO2 levels never exceeded 1500 ppmv (and were often lower), on the basis of the David Archer tool while still maintaining consistency with your other views?

            There’s a couple of nice ‘hoops’ for you. Actually they are more like Mobius strips as far as you are concerned.

          • Jan Pompe

            “Rather than continually asking me to jump through hoops which are based on silly fantasies within your own mind”

            I’m not asking you to do anything Steve, I’m just suggesting that if you have a problem with MODTRAN results that you take it up with the authors, one of whom has been known to hang out at RealClimate.

            Whether you do or not is entirely up to you.

    • http://www.ecoengineers.com/ Steve Short

      OK, let’s keep it really, really simple.

      For a boundary emissivity of 0.9980 and an S_U of 416.3 W/m^2, how, in the absence of any discontinuity, do you get a temperature of 294.2o K (or vice versa)?

      • Jan Pompe

        OK very simple:

        “For a boundary emissivity of 0.9980 and an S_U of 416.3 W/m^2, how, in the absence of any discontinuity, do you get a temperature of 294.2o K (or vice versa)?”

        I don’t. I get 292.8K, but with a surface emissivity of .98 I get 294.2K (We don’t usually put the degree symbol with Kelvin)
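[Ed.: the temperatures traded in this exchange follow from inverting the Stefan-Boltzmann law, T = (S_U / (eps * sigma))^(1/4). A sketch, using the 5.6704E-8 value for sigma that Jan cites further down; the last digit depends on the sigma and rounding convention used.]

```python
# Sketch: invert S_U = eps * sigma * T^4 for the surface temperature T.
# Sigma is the CODATA value quoted elsewhere in the thread; the flux and
# emissivity values come from the exchange above.

SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def temperature(s_u, emissivity):
    """Surface temperature (K) implied by upward flux S_U (W/m^2)."""
    return (s_u / (emissivity * SIGMA)) ** 0.25

print(temperature(416.3, 0.998))  # ~292.9 K (Jan quotes 292.8)
print(temperature(416.3, 0.98))   # ~294.2 K, the MODTRAN boundary temperature
print(temperature(396.0, 0.98))   # ~290.5-290.6 K, i.e. ~17.4 C
```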

        • http://www.ecoengineers.com/ Steve Short

          Fair enough.

          Now give me the surface temperature at the (average) S_U = 396 W/m^2 for a surface emissivity of 0.98?

          Oh, in deg. C please.

          • Jan Pompe

            I’m not your trained seal Steve, I don’t even need to break out the calculator for that, so I’ll do it:

            290.6 Kelvin or 17.4 Celsius.

            What is this deg C business? I’m the deniersaurus rex around here.

          • http://www.ecoengineers.com/ Steve Short

            That’s a relief, I get 17.45 C.

            But isn’t that a little hot for the global mean surface temperature – like about 3.45 C too hot?

            In other words, why adopt an emissivity of 0.98?

          • Jan Pompe

            You’re possibly using an oldish value for sigma. I’m using 5.6704E-8, which is the latest standard I can find.

            I’ll email the pdf with it to you. Ferenc sent it to me after I grumbled about different numbers coming out; prior to that I was using 5.67051E-8.

            I honestly don’t know why they chose .98. Fact of the matter is that we have little data on it; Ferenc uses .96.

            I’m not sure what you mean by too hot. The 15C comes from what they “measured” in 1990; there might have been a bit of warming since, but not that much. But if you are getting this from F,T&K08, they assume a surface emissivity of 1, which implies a temperature of 289.1K – only 1K hotter than the 1990 number – and this is how I think they arrived at their figure of 396 W/m^2.

            Oops, I just realised I hadn’t posted this – you must be curious about the email.

          • http://www.ecoengineers.com/ Steve Short

            Thanks for the codata-nist file. I’ll look at it as soon as I can get the time.

            I have checked out Archer’s simple MODTRAN-based utility at:

            http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

            However, as pleasant as it may be for some students, quite obviously there is no way one can a priori use the tool for realistically determining global mean S_U for a range of albedos.

            This is very easily verified by changing the sensor altitude, for example. Simply choose, say, all-over cumulus at 0.66 km and set the sensor at (say) 0.5 km. Or pump heavy rain into it. It will produce no difference in the surface temperature.

            In other words it is purely 1D, it seems. All it tells you is what to expect from the surface up for a single column with ‘stuff’ in that column.

            Incidentally, Archer’s tool for looking at ocean CO2 saturation is similarly crude, and there I am firmly on home territory.

          • Jan Pompe

            You are welcome for the data list.

            I agree the MODTRAN is a bit crude; there is a “full” version available for arms and legs. I don’t think this one is even for students, but it’s available on the internet and handy. It sacrifices a lot for speed. I can’t say much more about it than that. A tool like HARTCODE takes about 10 minutes to run through a single profile.

            I haven’t had time to check them out but there is a list here:
            http://en.wikipedia.org/wiki/List_of_atmospheric_radiative_transfer_codes

            You might like to take a look. Some of those listed say they handle clouds. I can’t give you HARTCODE without permission. I think a comparison with someone else’s model would be desirable in any case and is something I’d like to get around to doing, but I don’t see it happening soon. If you want clouds, I think you’d be better off with something else anyway.

            I think the U Chicago version is too limited to look at anything but full cloud.

            I plead no contest on the CO2 saturation model.

    • http://www.ecoengineers.com/ Steve Short

      There seems to be a possible problem here with the calculation of S_U.

      If the inference is that the global mean value of S_U for zero cloud cover and atmospheric CO2 of 375 ppmv is only around 416 W/m^2, then it can be easily shown that the total range of S_U from zero cloud to 100% cloud would then only be about 416 – 366 W/m^2, or in other words a total range of mean global surface temperatures, for an assumed emissivity of 1.0, of only ~293 – ~283 K (under present ~375 ppmv CO2 conditions).

      This does not jibe well with other literature sources I have on file going back over many years, which suggest the total range of S_U (for zero to 100% cloud cover) may be something more like 441 – 322 W/m^2, or in other words a total range of mean global surface temperatures, for an assumed emissivity of ~1.0, of ~297 – 274 K, i.e. a maximum low-cloud deviation from the mean of 297 – 289 (say), i.e. ~8 K.

      If e.g. Kump and Pollard can estimate that the Cretaceous and Eocene ‘super-greenhouse’ episodes, when CO2 rose to ~1500 ppmv, i.e. two doublings over present conditions, only required reduction of mean cloud cover from the present ~64% to only ~55%, then there surely has to be something wrong with estimates of modern day global mean S_U which only rise to ~416 W/m^2 for zero cloud cover and 375 ppmv CO2.

      I am not expressing a hard and fast opinion either way here, just noting what appears to be an anomaly in the findings detailed above.

      Therefore I’d be interested in any references to independent estimates of the total possible range of modern day S_U other than that implied by Jan’s MODTRAN exercise above.


  • Jan Pompe

    “If you read the 20 – 25 papers on the global heat flux balances over the last 10 years or so only, as I have done, ”

    Steve, I haven’t built a valve amplifier or anything from valves for around 40 years now.

    Think about it.


  • http://www.ecoengineers.com/ Steve Short

    Gotcha!

  • Jan Pompe

    Delusional

  • http://www.ecoengineers.com/ Steve Short

    “Delusional”David, I object to that as an offensive remark which has nothing to do with the question posed.All Pompe has to do is fulfill the simple request:”Given that global albedo for a cloud-free mid-summer scenario is about 0.07, kindly explain how you get from an incoming SW insolation over 310 W/m^2 to an S_U of only 416 W/m^2?”

  • Jan Pompe

    Steve your “Gotcha” is clearly delusional. If you have a problem with the results of David & Jeremy Archer's tool and the data they put into it you need to take it up with them. I have no control over that.If Steve has not worked out yet that a) there are no known conservation laws for radiation intensity, andb) Su is not the net flux leaving the surface.by now then I can't help him.

  • Jan Pompe

    Further more Steve if you can't take it don't dish it out. Would you like a list of every objectionable infantile personal attack that you've mad not only to me but others as well then I'll be only to happy to do it.It's high time that YOU grew up, stop snivelling and start apologising.I for one find your circular arguments ad nauseam tiresome.

  • http://www.ecoengineers.com/ Steve Short

    “If Steve has not worked out yet thata) there are no known conservation laws for radiation intensity, andb) Su is not the net flux leaving the surface.by now then I can't help him.”It's all in your head alone.

  • http://www.ecoengineers.com/ Steve Short

    Insane.

  • Jan Pompe

    Insane.Do you honestly believe that you are qualified to make such a diagnosis.I don't.David might call your debating technique “robust” I would call it just plain rude.

  • Jan Pompe

    “It's all in your head alone.”Then I'm sure you can quote the law of conservation of thermal radiation.Let's have it.you can also show that Su is the net flux leaving the surface.Do it.

  • http://www.ecoengineers.com/ Steve Short

    “Delusional”???A little homily about pots and black kettles comes to mind.Rather than continually asking me to jump through hoops which are based on silly fantasies within your own mind (about what I do or do not know), another crude 'technique' which never worked with Neal King, BPL, Nick Stokes etc and won't work with me, you need to ask yourself the following:(1) Just why do you have to resort to Real Climate as your 'ultimate authority' in the matter of cloud free summer day S_U and hence surface temperature in the context where you believe there is no surface discontinuity and A_A supposedly =E_D – an utterly bizarre (rational?) convergence if views if ever there was one; and(2) how would you explain the SSTs identified from paleothermometers during the (super-greenhouse) conditions which occurred sporadically in the Cretaceous and Eocene, when CO2 levels never exceeded 1500 ppmv (and were often lower), on the basis of the David and Archer tool while still maintaining consistency with you other views?There's a couple of nice 'hoops' for you. Actually they are more like Mobius strips as far as you are concerned.

  • Jan Pompe

    “Rather than continually asking me to jump through hoops which are based on silly fantasies within your own mind “I'm not asking you to do anything Steve I'm just suggesting that if you have problem with MODTRAN results tha you take it up with the authors, one of whom has been known to hang out at RealClimate.Whether you do or not is entirely up to you.

  • http://www.ecoengineers.com/ Steve Short

    OK, let's keep it really, really simple. For a boundary emissivity of 0.9980 and an S_U of 416.3 W/m^2, how, in the absence of any discontinuity, do you get a temperature of 294.2o K (or vice versa)?

  • Jan Pompe

    OK very simple:”For a boundary emissivity of 0.9980 and an S_U of 416.3 W/m^2, how, in the absence of any discontinuity, do you get a temperature of 294.2o K (or vice versa)?”I don't. I get 292.8K, but with a surface emissivity of .98 I get 294.2K (We don't usually put the degree symbol with Kelvin)

  • http://www.ecoengineers.com/ Steve Short

    Fair enough. Now give me the surface temperature at the (average) S_U = 396 W/m^2 for a surface emissivity of 0.98?Oh, in deg. C please.

  • Jan Pompe

    I'm not your trained seal Steve I don't even need to break out the calculator for that so I'll do it290.6 Kelvin or 17.4 Celcius.What is this deg C business? I'm the deniersaurus rex around here.

  • http://www.ecoengineers.com/ Steve Short

    That's a relief, I get 17.45 C.But isn't that a little hot for the global mean surface temperature – like about 3.45 C too hot?In other words, why adopt a an emissivity of 0.98?

  • Jan Pompe

    You areYou're possibly using an oldish value for sigma. I'm using 5.6704E-8. which is the latest standard I can find.I'll email the pdf with it to you. Ferenc sent it to me after I grumbled about different numbers coming out prior is was using 5.67051E-8.I honestly don't know why they chose .98. Fact of the matter is that we have little data on it Ferenc uses .96. I'm not sure what you mean by too hot. What they “measured” in 1990 which is where the 15C comes from there might have been a bit of warming since but not that much. But if you are getting this form FK&T they assume a surface emissivity of 1 which implies a temperature of 289.1K which is only 1K hotter than the 1990 number this is how I think they arrived at their figure of 396 W/m^2.Oops I just realised I hadn't posted this you must be curious about the email.

  • http://www.ecoengineers.com/ Steve Short

    Thanks for the codata-nist file. I'll look at as soon as I can get the time.I have checked out Jeremy Archers' simple Modtrans-based utility at:http://geosci.uchicago.edu/~archer/cgimodels/ra…However, as pleasant as it may be for some students quite obvioulsy there is no way one can a priori use the tool for realistically determining global mean S_U for a range of albedos.This is very easily verified by changing the sensor altitude for example. Simple choose say all over cumulus at 0.66 Km and set the sensor at (say) 0.5 km. Or pump heavy rain into it. It will produce no difference in the surface temperature.In other words it is purely 1D it seems. All it tells you is what to expect from the surface up for a single column with 'stuff' in that column.Incidentally, Archers tool for looking at ocean CO2 saturation is similarly crude and there I am firmly on home territory.

  • Jan Pompe

You are welcome for the data list. I agree that MODTRAN is a bit crude; there is a “full” version available for an arm and a leg. I don't think this one is even for students, but it's available on the internet and handy. It sacrifices a lot for speed. I can't say much more about it than that. A tool like HARTCODE takes about 10 minutes to run through a single profile. I haven't had time to check them out, but there is a list here: http://en.wikipedia.org/wiki/List_of_atmospheri… You might like to take a look. Some of those listed say they handle clouds. I can't give you HARTCODE without permission. I think a comparison with someone else's model would be desirable in any case, and is something I'd like to get around to doing, but I don't see it happening soon. If you want clouds, I think you'd be better off with something else anyway. I think the U Chicago version is too limited to look at anything but full cloud. I plead no contest on the CO2 saturation model.

  • http://www.ecoengineers.com/ Steve Short

There seems to be a possible problem here with the calculation of S_U. If the inference is that the global mean value of S_U for zero cloud cover and atmospheric CO2 of 375 ppmv is only around 416 W/m^2, then it can easily be shown that the total range of S_U from zero cloud to 100% cloud would only be about 416 – 366 W/m^2, or in other words a total range of mean global surface temperatures, for an assumed emissivity of 1.0, of only ~293 – ~283 K (under present ~375 ppmv CO2 conditions). This does not jibe well with other literature sources I have on file going back over many years, which suggest the total range of S_U (for zero to 100% cloud cover) may be something more like 441 – 322 W/m^2, or in other words a total range of mean global surface temperatures, for an assumed emissivity of ~1.0, of ~297 – 274 K, i.e. a maximum low-cloud deviation from the mean of 297 – 289 (say), i.e. ~8 K. If, e.g., Kump and Pollard can estimate that the Cretaceous and Eocene 'super-greenhouse' episodes, when CO2 rose to ~1500 ppmv, i.e. two doublings over present conditions, only required a reduction of mean cloud cover from the present ~64% to ~55%, then there surely has to be something wrong with estimates of modern-day global mean S_U which only rise to ~416 W/m^2 for zero cloud cover and 375 ppmv CO2. I am not expressing a hard and fast opinion either way here, just noting what appears to be an anomaly in the findings detailed above. Therefore I'd be interested in any references to other independent estimates of the total possible range of modern-day S_U than appears to be implied by Jan's MODTRAN exercise above.
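The temperature ranges quoted above follow directly from inverting Stefan–Boltzmann with ε = 1.0; only the flux values given in the comment go in. A quick sketch of the conversion:

```python
sigma = 5.6704e-8  # W m^-2 K^-4

def t_surface(flux_wm2, eps=1.0):
    """Invert S_U = eps * sigma * T**4 for T in kelvin."""
    return (flux_wm2 / (eps * sigma)) ** 0.25

# MODTRAN-implied range (zero cloud down to 100% cloud):
print(t_surface(416), t_surface(366))  # roughly 293 K and 283 K
# Literature-based range quoted above:
print(t_surface(441), t_surface(322))  # roughly 297 K and 274 K
```

So the MODTRAN-implied numbers span only ~9 K from clear sky to full overcast, against the ~23 K spread implied by the literature fluxes – which is the anomaly being flagged.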
