170 Thoughts on “Open thread: 27 Nov 2012”

  1. Richard C,
    I did not choose the 1950-1980 period; you did. Now you appear to prefer the 1940-1980 period.

    Neither of these periods shows the cooling you claim, although the warming from 1950-1980 is much stronger.

    Just to clarify, what exactly is your definition of cherry picking?

  2. Richard C (NZ) on December 3, 2012 at 8:31 pm said:

    >”…there is no “break down” in MEI/PDO/SST linkage, it is completely synchronous 1900 – present.”

    Hence: Temp/PDO+AMO+Sunspot Integral correlation R² = 0.96 vs Temp/CO2 R² = 0.44.


  3. Sounds great Bruce, I look forward to reading your letter!

  4. Just as a general observation, without drilling down into details, I find this line of argument somewhat unconvincing. The IPCC come up with a projection, reality does not match it, and somehow they manage to find, out of the hat so to speak, an adjustment that makes their models valid.

    Wasn’t it the same guys, the Potsdam ones, that is, who claimed that the cold winters that Europe had recently experienced were caused by global warming?

    It’s all very plausible if you also buy into astrology and palm reading, but personally I’d like to see these guys put their money on some future projections and stick with it.

  5. Richard C (NZ) on December 3, 2012 at 9:51 pm said:

    >”I did not choose the 1950-1980 period you did. Now you appear to prefer 1940-1980 period.”

    I explained why, Nick. MEI only starts at 1950. If MEI started at 1940 I could have included that period but I CANNOT, hence my subsequent recourse to PDO, which is a longer index. This is the second time I’ve explained this; I hope there doesn’t have to be a third.

    >”Neither of these periods show the cooling you claim,

    The 1930s/40s were clearly warmer in absolute terms than the 1950s – 1970s i.e. cooling. And there has been a cooling trend in SST since 2004.

    >”..although the warming from 1950-1980 is much stronger.”

    Stronger than what? 1910 – 1940 say?


    Stronger? I don’t think so.

    1950-1980 doesn’t rise above 30s/40s levels, as I’ve already pointed out. Again, I hope I don’t have to explain this a third time.

    >”Just to clarify, what exactly is your definition of cherry picking?”

    The most desirable selection to suit an ulterior motive, e.g. Rahmstorf, Foster and Cazenave picking 1980 as their start point because including the rest of the century ruins their warmist narrative. The last century of SST looks like this, with the 30 yr trends at each end compared:-


    If I were a warmist, I would not be comfortable attempting to explain why – in terms of CO2 forcing – the early trend is steeper than the most recent trend, so I would simply ignore the earlier data as RF&C did. Nor could I explain the intervening period in terms of CO2 forcing when it is so obviously out of kilter with the warming on either side. That is the essence of cherry picking, Nick: to EXCLUDE the inconvenient.

    The IPCC models vs Obs would look so much better if they didn’t have to include that pesky 30s/40s warm period, the negative polynomial trend that results in HadCRUT3 over 1945 – 1965, or the 1915 – 1935 steepness; fortunately they cannot be so blatant:-


  6. @ Nick: Both the satellites and the radiosondes confirm the lack of a tropospheric hot spot.

    Page 13 – http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/mmh_asl2010.pdf


  7. Richard C (NZ) on December 3, 2012 at 11:50 pm said:

    >”If MEI started at 1940 i could have included that period but I CANNOT”

    As luck would have it, there’s an extended MEI index (MEI.ext) that goes back to 1871:-


    From this ESRL page:-


    The major positive departure around 1940 is evident, as is the negative departure around 1910, both of which coincide with respective high and low SSTs.

    From that page, 1939 was a strong El Niño event:-


    And 1908 was a strong La Niña:-


    The late 1970s to present produces the most positive departure and the late 1800s – early 1900s produces the most negative departure, again generally coinciding with the respective warm and cool SST periods, e.g. the 1877ish positive amid the generally negative period coincides with an SST spike at that time:-


  8. Alexander K on December 4, 2012 at 9:01 am said:

    I am puzzled, Andy, as to why the world in general thinks the awful Mugabe is even crazier when he has money printed in large quantities, but when Greens and other political groupings support the idea, the practice is suddenly renamed ‘quantitative easing’ and becomes theoretically different in every way. ‘Quantitative easing’ is actually stealing from my pension fund and that annoys me more than somewhat!
    I too look forward to reading Bruce’s rebuttal letter in ‘North and South’, but I don’t expect any MSM journalist to think differently from the Establishment. It takes guts and some intellect to think for oneself. The best we can hope for is that Global Warming (or whatever its name of the day is) will just fade away as a topic, as journalists do not like admitting mistakes.

  9. The false science of eugenics also just faded away. It was all the fashion in the early 20th century. All the major science establishments, politicians, etc. were on board with this. There was a “consensus”. Then came WW2, the Germans took eugenics to the extreme, and after the war it was quietly forgotten about.

  10. Richard C (NZ) on December 4, 2012 at 9:39 am said:

    Rahmstorf, Foster, and Cazenave (2012) detrend for solar using PMOD (their detrending is a “novel” and “clever” approach according to SkS).

    The following series of papers (there are others) demonstrates the deficiency of relying on PMOD to the exclusion of other TSI datasets, and shows that detrending for solar has been carried out previously with other TSI datasets since 1980, providing explanations for much of the planetary warming over that 30-year period and also over a longer 50-year period.

    Willson and Mordvinov 2003 found that PMOD’s straight line in TSI over the last 30 years can be explained by not correcting the ERBE/ERBS data during the ACRIM Gap.

    The absence of a minima-to-minima trend in the PMOD composite is an artifact of uncorrected ERBS degradation. ERBS degradation during the gap equals the trend difference and the PMOD offsets (within computational uncertainty).


    Scafetta 2009 documents the ERBS discrepancy. Most (65-70%) of the warming over the last 30 years can be explained by TSI alone if ACRIM and the other TSI measurements are right. If PMOD’s dataset were right, little would be attributed to TSI.

    The solar contribution to global mean air surface temperature change is analyzed by using an empirical bi-scale climate model characterized by both fast and slow characteristic time responses to solar forcing: τ1 = 0.4 ± 0.1 yr, and τ2 = 8 ± 2 yr or τ2 = 12 ± 3 yr. Since 1980 the solar contribution to climate change is uncertain because of the severe uncertainty of the total solar irradiance satellite composites. The sun may have caused from a slight cooling, if PMOD TSI composite is used, to a significant warming (up to 65% of the total observed warming) if ACRIM, or other TSI composites are used. The model is calibrated only on the empirical 11-year solar cycle signature on the instrumental global surface temperature since 1980. The model reconstructs the major temperature patterns covering 400 years of solar induced temperature changes, as shown in recent paleoclimate global temperature records.


    Scafetta and Willson 2009 also documented PMOD’s flaws (supporting Willson 1997) and that the TSI contribution since 1980 could be underestimated.

    This finding has evident repercussions for climate change and solar physics. Increasing TSI between 1980 and 2000 could have contributed significantly to global warming during the last three decades [Scafetta and West, 2007, 2008]. Current climate models [Intergovernmental Panel on Climate Change, 2007] have assumed that the TSI did not vary significantly during the last 30 years and have therefore underestimated the solar contribution and overestimated the anthropogenic contribution to global warming.


    Scafetta and West 2008 found that up to 69% of the warming over the last 50 years can be explained by TSI alone using the ACRIM dataset. This does not include other natural causes like cloud cover decrease, the PDO, the AMO, volcanism etc.

    Solar cycles
    We estimate that the Sun could account for as much as 69% of the increase in Earth’s average temperature, depending on the TSI reconstruction used


    The ACRIM website describes the ACRIM – PMOD difference

    There are a number of differences between the ACRIM and PMOD composites but the most important is the trend during solar cycles 21 – 23. The absence of a trend in the PMOD composite and any composite based on the ERBS/ERBE ACRIM gap ratio has been shown to be an artifact of uncorrected degradation of ERBE results during the gap (See Fig. 4 linked).


    Another TSI dataset, IRMB, agrees with ACRIM that TSI increased between the minima of SC 21 and 22, though not to the extent of ACRIM.


    A is TSI measured by ACRIM, B is TSI measured by IRMB, and C is TSI measured by PMOD.

  11. Richard C (NZ) on December 4, 2012 at 10:36 am said:

    >”I am interested to know how we reconcile the apparent loss in land ice with the increase in sea ice in Antarctica”

    Two VERY illuminating commentaries on Shepherd et al 2012:-

    New Ice Survey Finds Slower Ice-sheet Melting


    Why ice loss and sea level measurements via satellite and the new Shepherd et al paper are highly uncertain at the moment


  12. Richard C (NZ) on December 4, 2012 at 12:00 pm said:

    Re RFC12 Figure 1:-



    Why are the 1998 El Niño and 2007 La Niña events heavily reduced but the 2010 El Niño actually enhanced (added to)? There is no specific comment on the 2010 event in the paper or on why and how they arrived at a 2010 adjustment that is significantly different to the other ENSO adjustments (apart from their overall linear regression approach).

    An adjustment to the 2010 El Niño proportionally similar to RFC12’s adjustment to the 1998 El Niño would place the RFC12 2010 level at the lower bound of the simulations and mean no warming from 2006 – 2012. Their entire series would then concur with (be consistent with) Christy’s 7 year smoothing of RSS and UAH here:-


    There’s something very dodgy about RFC12’s adjustment to the 2010 El Niño, and their treatment of that critical event changes the game completely, i.e. their series is inconsistent with the 7 year smoothed series because of that single adjustment.

    The 2009/10 event was one of the 7 strongest since 1950:-


    Why have RF&C enhanced the 2012 event (added to it) rather than reducing it consistent with other ENSO events?

  13. Hi Richard T,
    Have a look at the earlier paper

    Foster G and Rahmstorf S 2011, Global temperature evolution 1979–2010


    It is clarified there but let me know if it still does not make sense. I’m happy to run through it with you.

  14. Richard C (NZ) on December 4, 2012 at 5:08 pm said:

    >”It is clarified there”

    Nothing in F&R as to why the 2010 event should be treated any differently than all the other ENSO events.

    >”but let me know if it still does not make sense.”

    It actually makes sense to a degree. F&R and RF&C are essentially just short-term data smoothing exercises despite the hype, but smoothing must retain the characteristics of the data, not modify them. Christy’s 7 year smoothing of RSS and UAH is far more radical by comparison, so he ends up with two series without the fluctuations of RF&C, but the signal is retained in the 7 year smoothing.

    The only difference between the respective series (Christy vs RF&C) is the 2010 event. The 7 year smoothing retains the intrinsic trajectory of the data, i.e. the 2010 ENSO event is merely smoothed just the same as all the other ENSO events. RF&C, for some reason, instead of smoothing as for 1998, 2007 and all the other ENSO activity, actually ADD data either side of the 2010 event where there was none previously, and the smoothing to the peak is minimal compared to the reductions to the other event peaks – why?
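
    For what it’s worth, the 7 year smoothing referred to here is just a centred running mean (84 points on a monthly series). A minimal Python/NumPy sketch follows; the anomaly series in it is entirely made up to stand in for RSS or UAH, so none of the names or numbers come from Christy’s actual analysis:

        import numpy as np

        def running_mean(x, window=84):
            """Centred running mean; 84 months = 7 years on a monthly series.
            Output is shorter than x by window - 1 points (edges dropped)."""
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="valid")

        # Illustrative only: a fake monthly anomaly series, 1979-2011
        t = np.arange(1979, 2012, 1.0 / 12)
        anoms = 0.015 * (t - 1979) + 0.1 * np.random.randn(t.size)
        smoothed = running_mean(anoms)  # keeps the trajectory, smears out ENSO spikes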

    >”I’m happy to run through it with you.”

    Please do. Specific to the 2010 event in comparison to other significant events e.g. 1998 and 2007.

  15. If they have managed to detrend the short term noise from the dataset, wouldn’t it be more useful to provide future IPCC projections with the short term noise added in?

    Presumably there is some certainty to ENSO etc that they could add back in.

  16. Richard C (NZ) on December 4, 2012 at 6:12 pm said:

    >”Presumably there is some certainty to ENSO etc that they could add back in.”

    No certainty or cycle that I know of. There are general regimes that can be seen in MEI.ext, e.g. positive from the late 1970s to the mid 2000s or negative from the late 1800s to the early 1900s, but in amongst the general regimes are the opposite departures:-


    ENSO can only be added back in retroactively, as for volcanism, as far as I can tell. Tisdale states in the conclusion of a very long post (as usual):-

    “And as shown in Figures 17 and 18, when the linear effects of ENSO are removed from Global Temperature anomalies, the remainder logically shows variations that reflect the opposing effects of ENSO.”


    I have no idea what that means but it seems vaguely relevant to your comment Andy. Perhaps you can make something of it.

    I’m more inclined to go with the Temp/PDO+AMO+Sunspot integral correlation 0.96. That’s a model that can be projected from historic values.
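
    For what it’s worth, the R² comparison being cited (temperature regressed on PDO+AMO+sunspot integral versus on CO2 alone) is a plain least-squares exercise. A minimal Python/NumPy sketch, assuming yearly series already aligned as arrays; the variable names are placeholders, not D’Aleo’s actual data:

        import numpy as np

        def r_squared(y, predictors):
            """OLS fit of y on the given predictor series (plus an intercept);
            returns the coefficient of determination R^2."""
            X = np.column_stack([np.ones(len(y))] + list(predictors))
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ coef
            return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

        # temp, pdo, amo, ssn_integral and co2 would be aligned yearly arrays, e.g.:
        # r2_natural = r_squared(temp, [pdo, amo, ssn_integral])
        # r2_co2     = r_squared(temp, [co2])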

  17. Richard C (NZ) on December 4, 2012 at 6:37 pm said:

    >”I’m more inclined to go with the Temp/PDO+AMO+Sunspot integral”

    The problem there is that the SC 24 sunspot prediction is not going well. This is the latest observed vs prediction chart:-


    The addition of the sunspot integral to PDO+AMO moves the temp correlation from 0.85 to 0.96. Nice to add in if the number can be predicted, but it’s not necessary if it can’t.

  18. I have to admit I am very cynical about this; however, I don’t want to interrupt a good-faith discussion between Richard C and Nick.

    To me it all feels like – if nature doesn’t fit the models, adjust nature.

    Anyway, I’ll leave you chaps to have your in-depth discussion

    You can ignore me (*ducks*)

  19. Richard C (NZ) on December 4, 2012 at 7:39 pm said:

    >”I’m more inclined to go with the Temp/PDO+AMO+Sunspot integral”

    Changed my mind. Make that PDO+AMO+TSI (TSI Hoyt) as in:-

    ‘US Temperatures and Climate Factors since 1895′
    By Joseph D’Aleo,


    TSI provides the long-term warming (or cooling) trend. There is information there (page 5) about the relationship between MEI and the PDO warm (+) and cold (−) modes and the effect on temperatures.

    Adding CO2 to PDO+AMO+TSI only moves the correlation from 0.85 to 0.89 (page 8).

    D’Aleo paper featured at WUWT here:-


  20. That isn’t a paper. It’s a naive regression that any first year statistics student could perform.
    It features pre-smoothing of data, inconsistent time periods, limited geographic exposure and conclusions unwarranted from the results. Why are you willing to believe this rubbish over and above more robust analyses such as Rahmstorf et al?

  21. Richard C (NZ) on December 4, 2012 at 8:58 pm said:

    Just realized that “Hoyt” is a TSI proxy, not a satellite dataset of the PMOD or ACRIM type. D’Aleo page 4:-

    “The Hoyt-Schatten TSI series uses five historical proxies of solar irradiance, including sunspot cycle amplitude, sunspot cycle length, solar equatorial rotation rate, fraction of penumbral spots, and decay rate of the 11-year sunspot cycle. I found a correlation strength (r-squared) of 0.57″

    Given the aforementioned difficulty with sunspot prediction, Hoyt-Schatten is not an improvement. I searched around and found a recent paper that, to my knowledge, didn’t make the news:-

    Reconstructed Total Solar Irradiance as a precursor for long-term solar activity predictions: a nonlinear dynamics approach

    Stefano Sello. May 2012

    Mathematical and Physical Models, Enel Research, Pisa – Italy


    Page 13:-

    Fig. 9. HadCRUT3N yearly averaged temperature anomalies behavior (red line); the cycle (10-12 years) averaged (blue line) and predicted values for the next four cycles (green line) using both the TSI predictions and the suggested linear correlation between solar cycle length and the average temperature in the next cycle.

    Looks like Dan Pangburn’s model out as far as 2020.

    Figure 9 shows the related behavior for predicted HadCRUT3N cycle averaged temperature anomalies. The above result appears coherent with some previous suggestions on the future trend of average temperature anomalies based on solar activity. de Jager and Duhau (2011) conclude that the solar activity is presently going through a short transition period (2000- 2014), which will be followed by a Grand Minimum of the Maunder type, most probably starting in the twenties of the current century. Another prediction, based on reduced solar irradiance due to reduced solar radius, is a sequence of quite lower solar activity cycles leading to a Maunder like minimum starting around 2040 (Abdussamatov, 2007). It is well known that the Maunder Minimum in sunspot numbers in the second half of the seventeenth century coincided with what has become known as the Little Ice Age during which western Europe experienced significantly cooler temperatures. Here we found that, after a significant reduction of the temperature anomalies during the current cycle 24, we could have a pause during the following cycles 25 and 26 with a new average temperature rise (large fluctuations), followed by a significant strong downward of temperature anomalies around 2039-2040 and during the cycle 27.

    The conclusions are well worth a read. Sello reckons his technique is good for extrapolation of the next 3 solar cycles.

  22. Richard C (NZ) on December 4, 2012 at 9:12 pm said:

    >”To me it all feels like – if nature doesn’t fit the models, adjust nature.”

    Most of the RF&C adjustment is merely smoothing Andy (leaving aside use of PMOD). It’s only the RF&C treatment of the 2010 ENSO event the makes their series any different from Christy’s 7 year running average smoothing.

    RF&C have definitely “adjusted nature” around the 2010 event and I’ll be very interested in Nick’s justification because that makes ALL the difference to the data trajectory and renders their series inconsistent with the smoothed RSS and UAH series by Christy.

  23. Richard C (NZ) on December 4, 2012 at 9:32 pm said:

    >”Why are you willing to believe this” ?

    I don’t subscribe to the use of Hoyt-Schatten for TSI (see up-thread) but the main reasons for PDO+AMO+TSI are:-

    1) Correlation with observations (note addition of CO2 only improves correlation by 0.04).

    2) Tested predictive capability of ocean oscillations (the SOI has been used similarly and successfully by Dr Theodor Landscheidt) and progress on TSI as a predictor as data quality improves.

    >”……over and above more robust analyses such as Rahmstorf et al?”

    Robust? I think not, e.g. PMOD uses uncorrected data and their treatment of the 2010 ENSO event is looking more bogus by the minute. The predictive capability of the RF&C series is not tested yet either. But I understand your need to denigrate a threat to your poster boys and warmist icon at every opportunity, Simon.

  24. ClimateCyclist on December 4, 2012 at 11:24 pm said:



    Let me try to explain better why carbon dioxide has no effect …

    The process of diffusion in the vertical direction in a gravitational field effectively turns a “level base” into a “sloping base” like a concrete driveway running down a hillside.

    This diffusion process ensures that the sum of the PE and KE of individual molecules has a propensity towards equality in all molecules at all altitudes. Those lower down (with less PE) thus have higher KE, leading to higher temperature in the lower regions.

    There will be some absorption of Solar insolation at all levels in the Venus atmosphere, because we know at least some gets through to the surface. Think of this absorption as being like lots of different size loads of sand dumped on that sloping driveway. In general, the piles will be smaller as you go towards the top. So there’s no real propensity for convection rising in the atmosphere (sand from higher piles flowing down through the bigger piles further down the slope) so what happens is simply that the amount of radiation varies at different levels to get rid of the sand. But it stops when it gets down to the concrete driveway. The mean amount of radiation has to equate with the incident radiation, so this requirement (long ago) set the level of the driveway, but not its gradient – gravity and the specific heat of the gas set the gradient.

    Now I know that some radiation (roughly half) is directed towards the hotter surface, but those who understand what Prof Johnson proved, will realise that the electro-magnetic energy in such radiation is never converted to thermal energy in a hotter region than that from whence it came. Instead it is immediately re-emitted, just as if “pseudo scattered.” Hence the energy in all radiation from the atmosphere always ends up eventually getting to space, even if it strikes the surface, or gets partly absorbed by cooler gas and subsequently re-emitted.

    So the diffusion process in a gravitational field sets the gradient of the temperature plot in the atmosphere, with some small variation depending on the specific heat of the gases. The incident Solar radiative flux sets the overall level. These combine to produce a sloping, near linear temperature plot which of course intercepts the surface at a temperature which is determined by the input factors just mentioned, and nothing else.

    Any additional absorption of either incident or upwelling radiation merely adds temporary energy which will be quickly radiated away and, even though such radiation is in all directions, it will eventually transfer energy out of the planetary system and back to space.

    Venus is a good example, because it is so much more obvious that the surface is not heated to the temperature it reaches by the direct Solar radiation it absorbs. Instead, an interplay of conduction (diffusion) and radiation at the surface/atmosphere interface keeps the surface at a temperature close to that of the base of the atmosphere.

    Which came first – the chicken or the egg? The temperature of the base of the atmosphere must have come first because otherwise it would be just too much of a coincidence that the same formula “works” on all planets with sufficient atmospheres.

    So, if you don’t accept the above, then please explain in a similar level of detail, exactly what you think explains the surface temperature, being sure to keep within the confines of the laws of thermodynamics and atmospheric physics, as I have.

    Doug Cotton

  25. Richard C (NZ) on December 5, 2012 at 9:56 am said:

    >”PMOD uses uncorrected data”

    This isn’t correct. There are corrections, but it is the nature of the data used to fill the gap in each composite (a result of the Challenger disaster) that is in question; the gap is about 90% interpolated in PMOD.

    The point is that there is more than just one TSI composite to apply and consider. PMOD is the one with a negative trend, so the likes of Rahmstorf and Foster are prone to exclude any other considerations. It won’t be as much of an issue as time goes on, though, now that there is continuity of measurement and corroborating overlap of observations from different platforms.

  26. Richard C (NZ) on December 5, 2012 at 11:07 am said:

    I decided to keep an eye out at SkS for Ari Jokimäki’s ‘New research from last week’ to see if ‘Global and diffuse solar radiation in Spain: Building a homogeneous dataset and assessing their trends’ – Sanchez-Lorenzo et al. (2012) was listed. It was (third down):-


    Ari’s updates are very useful, I find. Cook didn’t feature the paper, of course, preferring Rahmstorf et al.

  27. Hi Richard C,
    As requested, the explanation for the adjustments to the periods you do not understand follows. The numbers are a bit rough, as they are taken from the (fairly low-res) graphs rather than the raw data sets, but it should be sufficient for your understanding.

    The summary is that 2010 is not treated any differently from the other years. Rather you have not understood how the adjusted series is generated. If you still think it is similar to 7 year smoothing then I suggest you read the paper rather than just looking at the pictures.

    From fig 7 of Foster and Rahmstorf 2011, Global temperature evolution 1979–2010, the following can be determined:

    In 1998 MEI (0.22C) is dominant but AOD (-0.01C) and TSI (-0.01C) pull it back somewhat so total exogenous forcing is 0.2C.

    In 1999 the negative influence of MEI (-0.04C) is slightly weaker than TSI (0.05C) so total exogenous forcing is 0.01C.

    In 2007 MEI (0.06C) is mostly cancelled by TSI (-0.05C) so total exogenous forcing is 0.01C.

    In 2008 the negative influence of MEI (-0.2C) is further reduced by TSI (-0.04C) so total exogenous forcing is -0.24C.

    In 2010 MEI (0.07) is dominant but is partly cancelled by TSI (-0.04) so total exogenous forcing is 0.03C.

    In 2011 MEI (-0.26) is further reduced by TSI (-0.02) so total exogenous forcing is -0.28C.

    This matches what is shown in fig 1 of Rahmstorf et al. 2012, who subtract the exogenous forcing from the raw temperature record. This demonstrates that the IPCC predictions (which do not include the exogenous forcing) are accurate once these influences are removed from the raw data.
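
    To make the arithmetic concrete, here is the subtraction step as a minimal Python sketch using the rough figure-derived numbers above (they are read-offs from the graphs, not the paper’s regression output, and the zero AOD entries are placeholders where no value was quoted):

        # Rough per-year contributions (deg C) read off F&R 2011 fig 7, as listed above.
        exogenous = {
            1998: {"MEI":  0.22, "AOD": -0.01, "TSI": -0.01},
            1999: {"MEI": -0.04, "AOD":  0.00, "TSI":  0.05},
            2007: {"MEI":  0.06, "AOD":  0.00, "TSI": -0.05},
            2008: {"MEI": -0.20, "AOD":  0.00, "TSI": -0.04},
            2010: {"MEI":  0.07, "AOD":  0.00, "TSI": -0.04},
            2011: {"MEI": -0.26, "AOD":  0.00, "TSI": -0.02},
        }

        def adjusted(raw_anomaly, year):
            """Subtract the summed exogenous contributions from the raw anomaly."""
            return raw_anomaly - sum(exogenous[year].values())

        # e.g. 1998: total exogenous forcing = 0.22 - 0.01 - 0.01 = 0.20 C, so a raw
        # anomaly of 0.55 C (illustrative) becomes 0.35 C in the adjusted series.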


  28. Richard C (NZ) on December 5, 2012 at 7:27 pm said:

    >”Rather you have not understood how the adjusted series is generated. If you still think it is similar to 7 year smoothing then I suggest you read the paper rather than just looking at the pictures.”

    You then follow that with the smoothing technique RF&C use. RF&C smooth the data, Nick; there is no escaping that, but I’ve got no problem with smoothing (they also make some data up, though, and that I do have a problem with).

    Four points:

    1) RF&C introduce data either side (mostly in the 2010 year) of the measured series peak at 2010 (light red line) where no datapoints existed in the measured data. The datapoints they introduce are derived from MEI (offset by TSI) but the MEI is an index, not measured temperature in degrees Celsius. They cannot simply introduce derived datapoints into a measured temperature series where no datapoints in degrees Celcius existed in the first instance. To do so is simply making up extraneous values, even to the extent of introducing a peak “temperature” in the 2010 year of about +0.37 anomaly when the actual measured temperature is down around +0.2 C.

    There’s a similar instance at about 1993/4 on RF&C Figure 1:-


    2) The actual RF&C reduction to the 2010 peak is fine by me mathematically but not physically (a worked check of these numbers follows after point 4). RF&C reduce the 1998 peak by 0.2 C. The 2010 MEI peak is 50% of the 1998 peak, so an MEI-only proportional reduction to 2010 is 0.1 C. RF&C’s reduction after TSI is about 0.03 C; therefore TSI must equate to a temperature of 0.07 C. SOHO/VIRGO (PMOD) shows the 1998/2010 difference in W/m2, which is shown as degrees C in the F&R TSI chart:-


    The difference is, say, 0.3 W/m2. The SB law tells us that the Earth’s surface should warm by 1 degree Celsius for every 3.3 Watts per square meter of radiative forcing; therefore 0.3 W/m2 equates to 0.09 C, so the actual reduction to the peak is OK mathematically.

    But does that TSI radiative forcing translate to measured temperature instantaneously (within a month, say), so that the instantaneous offsetting rationale against MEI is valid? I don’t think so, and I think this is why the net MEI/TSI/AOD temperature adjustment produces “temperature” where there wasn’t any to start with. The TSI will be absorbed at the earth’s surface, especially by the ocean. That energy may take days, weeks, decades or even centuries to be released as heat measurable by temperature at surface. The 0.07 C TSI offset is not necessarily valid in this case. The proportional 0.1 C MEI-only reduction could even be a more valid adjustment.

    3) The CMIP5 ensemble average is about 0.6 C higher by 2010 than the 1980 level that coincides with observations:-


    That puts the CMIP5 2010 level at 0.5 C anomaly on RF&C Figure 1 and renders Figure 1 either out of date, or plotted incorrectly, or irrelevant, or something.

    4) The absolute rise in RF&C Figure 1 1980 – 2010 is about 0.43 C (CMIP5 0.6 C). The alleged CO2 forcing over that period is:

    dF = 5.35 ln(C/Co)
    dF = 5.35 ln(389.78/338.68)
    dF = 0.75 C

    The alleged CO2 forcing overshoots RF&C’s 2010 level by 0.32 C (CMIP5 0.15 C, indicating CMIP5 is a better comparison). We could use an at-surface figure of 0.2 C (0.75/3.7) as per Hansen (I think), but then the CO2 forcing falls short by 0.23 C (CMIP5 0.4 C).

    As always, it is difficult fitting CO2 into the picture.
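
    As flagged under point 2), here is a worked check of the numbers in points 2) and 4), using only values already quoted above; the 3.3 W/m2 per degree C and the 3.7 divisor are the figures asserted in this comment, not independently verified:

        import math

        # Point 2: TSI offset check
        tsi_diff = 0.3            # W/m2, the quoted 1998-vs-2010 VIRGO difference
        print(tsi_diff / 3.3)     # ~0.09 C, using the 3.3 W/m2-per-degree figure above

        # Point 4: CO2 forcing 1980 - 2010 (the 5.35 ln(C/Co) simplified expression)
        dF = 5.35 * math.log(389.78 / 338.68)
        print(dF)                 # ~0.75
        print(dF / 3.7)           # ~0.20, the "at-surface" figure used above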

    # # #

    In summary, the F&R/RF&C rationale is flawed by the application of a mix of offsetting phenomena (MEI/TSI/AOD) that are not mutually interacting in the instantaneous timeframe in which the net adjustment is derived.

    It is not necessary to deconstruct the data by extracting selected phenomena in order to identify the intrinsic signal in the data over time. EMD, for example, extracts intrinsic mode functions (IMFs), which are the decadal and multidecadal oscillations of climate data, and a residual, which is the overall intrinsic data signal (a minimal sketch of this follows at the end of this comment). The HadSST2 EMD residual looks exactly like RF&C Figure 1 from 1980 – 2011:-


    I could have saved Rahmstorf, Foster and Cazenave a great deal of time and effort.
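
    As flagged above, a minimal sketch of that EMD residual extraction. It assumes the third-party PyEMD package (PyPI name EMD-signal) and my recollection of its interface, so treat the call names as an assumption and check its documentation; the input file name is a placeholder:

        import numpy as np
        from PyEMD import EMD   # third-party "EMD-signal" package; interface assumed

        sst = np.loadtxt("hadsst2_monthly.txt")     # placeholder: 1-D monthly anomaly series

        emd = EMD()
        emd.emd(sst)
        imfs, residue = emd.get_imfs_and_residue()  # assumed helper: oscillatory IMFs + trend

        # The IMFs carry the interannual and decadal oscillations; the residue is the
        # slowly varying intrinsic signal referred to above.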

  29. Richard C (NZ) on December 5, 2012 at 7:37 pm said:

    It remains for the RF&C approach to be tested over time (just like all the other models).

    The key is identifying inflexions in the trajectory of the data. It will only take a few years (maybe 3 or 5) to confirm whether or not the RF&C approach is valid. If the trajectory of a moving average or polynomial, say, continues unabated, it will be proved invalid and they’ve missed an inflexion back around 2004 (but Scafetta and INM-CM4 are still in-the-money). If the RF&C Figure 1 trajectory continues, their approach is valid along with the lower CMIP5 STDev limit and everyone else is wrong. If the trajectory turns down, they, the GCMs and Scafetta are wrong but Sello, Abdussamatov, Pangburn and others are right.

    Given that the latest El Niño is a fizzer, the PDO is in cold mode, and SST is trending down, there’s not much to support a rise above the 2010 level for a while.

    NINO3.4 index back just above neutral:-


    RSS to August 2012:-


    An updated RF&C Figure 1 will necessarily drop back down at least 0.2 C from its 2010 level in the absence of an El Niño. And it wont go back up there (let alone above it) unless there is a strong El Niño in the near future.

  30. Hi Richard C,
    In response to your points above.

    1) Foster and Rahmstorf 2011 explains how the MEI, TSI and AOD indices are converted into degrees C

    2) Foster and Rahmstorf 2011 incorporates process lag for each of the exogenous forcings into the temperature correction. Maybe you could do a little more reading and a little less typing.

    3) “Or something” is the correct answer; you are mistaken in your calculation

    4) That is what the models are for; they are slightly more sophisticated than your estimate.

    As for your EMD calculation, why don’t you present it properly, as a pdf say. As it is it can’t actually be read and frankly I can’t be bothered fixing it for you. I’m surprised no one else here has mentioned it.

  31. Richard C (NZ) on December 5, 2012 at 8:53 pm said:

    >”That energy may take days, weeks, decades or even centuries to be released as heat measurable by temperature at surface”

    SKS article touching on the delayed TSI => temperature aspect here:-



    TSI Figure 1 caption “Warming already committed, but not yet manifest in surface temperature” and “The circled area is (roughly) the solar energy already absorbed by the ocean and yet to manifest itself in global temperatures i.e – warming already committed.”

    The Solar Cycle

    Because of the ocean’s thermal inertia (it takes a long time to warm up), global temperature change caused by the sun’s variabilty lags solar irradiance by about 18 months. The ‘trough’ in the solar cycle (figure 1) was therefore still exerting a cooling influence on surface temperatures in 2011. However this is expected to quickly change to a warming effect over the next 3-5 years because the sun is on its ascent to the peak of the next cycle. As circled in figure 1 – extra sunlight has gone into the oceans in the last 18 months. This warming is a ‘train that has already left the station’ so-to-speak, and will soon manifest itself in global temperature.

    “…the sun is on its ascent to the peak of the next cycle”

    Possibly at peak levels late September (same as previous peak level 2003/04):-


    “Due to a combination of the warm phase of the solar cycle and an overdue switch to El Niño – when the ocean gives up a lot of heat to the atmosphere, near-future warming is expected.”

    The El Niño was wishful thinking back in February, and the current solar warm phase has pushed levels back up to 2003/04 levels in UAH but not RSS. The chances of going higher on TSI power alone are slim now.

  32. Richard C (NZ) on December 5, 2012 at 9:56 pm said:

    1) Foster and Rahmstorf 2011 explains how the MEI, TSI and AOD indices are converted into degrees C

    That would be Figure 3, “Coefficients of temperature response to MEI, AOD and TSI”, i.e. a theoretical response, not a conversion.

    TSI power (that’s not an index BTW) can be converted to theoretical temperature on an instantaneous basis, as I’ve done in my check – no problem, but that is not what happens in the real world. An offset of one phenomenon against another is another matter entirely in a physical sense when one is in units of power and the other is an index, e.g. MEI/TSI. F&R doesn’t justify how the offset process can create data where there was none previously (and they never will be able to do so). Application of a coefficient and then a net calculation doesn’t turn an index into measured degrees Celsius that can supersede actual measured degrees C.

    2) Foster and Rahmstorf 2011 incorporates process lag for each of the exogenous forcings into the temperature correction.

    What is the process lag for TSI to temperature? If you mean Table 1, there is 0 lag for TSI from RSS or UAH and 1 month for the other 3. That is hardly process lag. They say:-

    “The influence of exogenous factors can have a delayed effect on global temperature. Therefore for each of the three factors we tested all lag values from 0 to 24 months, then selected the lag values which gave the best fit to the data.”

    A 0 (RSS, UAH) or 1 month (others) TSI => temperature lag is fanciful over the ocean. It is not called a heat sink for nothing. (A minimal sketch of the lag-selection step they describe follows at the end of this comment.) Also see this comment re the SkS article on thermal lag:-


    3) “Or something” is the correct answer; you are mistaken in your calculation

    Explain how exactly.

    4) That is what the models are for; they are slightly more sophisticated than your estimate.

    But the FAR/TAR levels in RF&C don’t reconcile with the CMIP5 levels, do they? The “estimate” is the IPCC methodology, much closer to CMIP5 than FAR/TAR/RF&C. INM-CM4 is right between RSS and UAH. That would make it the most “sophisticated”, wouldn’t it?

    >”As for your EMD calculation, why don’t you present it properly, as a pdf say. As it is it can’t actually be read and frankly I can’t be bothered fixing it for you”

    The Dropbox link to the Excel workbook works fine for me; what’s the problem? The EMD software I used is here:-


    Or you could just view the chart on the EMD sheet of the workbook. That can’t be hard, surely.
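
    As flagged under point 2) above, the “test all lags from 0 to 24 months and keep the best fit” step that F&R describe is easy to sketch. A minimal single-factor version in Python/NumPy follows; F&R fit MEI, AOD and TSI together, so this is only my reading of the procedure, with placeholder arrays:

        import numpy as np

        def best_lag_fit(temp, factor, max_lag=24):
            """Regress temp on a single lagged factor for each lag 0..max_lag (months)
            and return the lag giving the smallest residual sum of squares.
            temp and factor are equal-length 1-D monthly arrays."""
            best = None
            for lag in range(max_lag + 1):
                y = temp[lag:]
                x = factor[:len(factor) - lag]
                X = np.column_stack([np.ones(len(y)), x])
                coef, *_ = np.linalg.lstsq(X, y, rcond=None)
                rss = np.sum((y - X @ coef) ** 2)
                if best is None or rss < best[1]:
                    best = (lag, rss, coef)
            return best  # (lag, residual sum of squares, [intercept, slope])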

  33. Richard C (NZ) on December 6, 2012 at 8:17 am said:

    An alternative approach to RF&C 2012 is to accept without question that the technique is all kosher and that we have a model from which to project after the 2010+ levels where the series leaves off. Fine, we’ll work from there.

    If CO2 is the phenomenon that drives temperature up on the trajectory of RF&C Figure 1 (observations AND simulations), then the level of temperature will continue to rise above the level where the Figure 1 series ends, because all other exogenous phenomena are accounted for.

    But since 2010, by whatever measurement method – HadAT2 (radiosondes), RSS/UAH (satellites), GISTEMP/HadCRUT (thermometers) – temperatures have fallen down and away from RF&C 2010 levels now that the one major exogenous phenomenon (ENSO) is all but neutral (no adjustment required) and another (TSI) looks to be at peak levels (no adjustment either, because TSI at peak is close to their 0 TSI anomaly).

    RF&C cannot hope for the biggest El Niño the world has ever seen to raise current levels above the end of their series because that effect will have to be removed given their rationale. CO2 hasn’t pushed the level higher. All indications (e.g. PDO) are for a cool period from now out to 2020 or so.

    Basically, RF&C are already out-of-the-money.

  34. And now for something completely different:

    U.N. Agency Says 2012 Celebrities Hottest On Record

    HELSINKI—In a report released Thursday at the United Nations pop culture summit in Finland, a consortium of leading entertainment scientists confirmed that the year 2012 has witnessed the hottest celebrities in recorded history.

    Citing evidence such as dangerously hot celebrity beach bodies, steadily rising chemistry between sizzling-hot megastars, and a myriad of extreme, record-breaking blockbuster movie events, the U.N. group claimed the overall hotness of celebrities worldwide is increasing at a staggering rate, with many specific celebrities reported to be “so on fire right now it’s scary.”

    “We are seeing an unmistakable pattern: Celebrities across the world are extremely hot, and they are only getting hotter,” said U.N. entertainment agency director Michael Carver, who confirmed that 2012’s celebrities were more gorgeous and charming than those in any year since data on hotness was first collected in 1955. “The chance that natural variability produced such an unprecedented slew of good-looking superstars is vanishingly small.”


  35. Richard C (NZ) on December 6, 2012 at 2:18 pm said:

    >”one major exogenous phenomenon (ENSO)”

    On reflection, El Niño is not an exogenous phenomenon. It is an accumulation of solar energy from earlier timeframes that is released in one big dollop. Therefore it should not be removed from the series. Smearing it across the previous 7 years by running average for smoothing purposes is a better representation of the event, because a large part of the El Niño energy is allocated (roughly) to the period in which it originated.

  36. Richard C (NZ) on December 6, 2012 at 3:02 pm said:

    >”…but that is not what happens in the real world”

    RF&C’s 0 thermal lag over the ocean (RSS/UAH) implies that the TSI energy laid down over the tracklength of a photon plunging several metres vertically into the tropical ocean is instantly manifest as heat about 1km above the surface that can be immediately detected by a satellite-borne sensor passing overhead. I don’t think so. If that were the case there would be no OHC buildup, no El Niños, no oceanic heat sink, no oceanic heat transport mechanisms bringing warm tropical water down around New Zealand or transporting warm water poleward, and no SST rise.

    The only TSI energy instantly available to the atmosphere is radiation reflection and scattering from the surface. Re-emission occurs diurnally. Conduction/convection and evaporation from near-surface follows in hours, days and maybe weeks. The bulk of the energy sequestered deeper down (most of that happens in the tropical zone) stays resident in the mixed layer above the planetary boundary for some indeterminate time (weeks/months/years/decades?) before dissipating maybe by El Niño. Some heat is taken down and moved vast distances over long timeframes (decades or more) by currents before dissipating, again maybe at the surface by El Niño, maybe to extratropical, subpolar or polar regions.

    The entire TSI departure from a norm cannot be used as an instantaneous factor (0 or even 1 month lag) because the ocean response is by no means instant no matter what they’ve arrived at statistically. Foster and Rahmstorf have not thought this through.

  37. “But since 2010…” Short term noise. The last refuge of the terminally deluded.

  38. Taking a trip down memory lane, from the glory days of climate alarmism, this one from Geoffrey Lean.

    Truly outstanding weapons-grade tosh; the MSM back then reads like the Daily Mash today.


  39. Richard C (NZ) on December 7, 2012 at 10:15 am said:

    Noise? Since 2010 (i.e. at this juncture in 2012), no ENSO adjustment required, no TSI adjustment required, no AOD adjustment required – in short – no noise (in RF&C’s erroneous terms). What we have before us now in late 2012 is a noiseless level of temperature as defined by RF&C and the anomalies of the “exogenous” factors they’ve used.

    But RF&C don’t understand oceanic thermal lag (Rob Painting at SKS does on the other hand). They don’t understand the atmospheric response to TSI via the ocean, neither do they understand El Niño. Consequently, they’ve erroneously defined exogenous factors and “noise” thereby missing an intermediate inflexion that a running average uncovers.

    RF&C could have achieved a similar result simply by applying a linear regressed trend to the entire dataset sans exogenous adjustments. Everyone already knows the rising linear trend of temperature series 1980 – present (think NZ7SS and NZCSET v NIWA squabble over rising linear trend – they’re not squabbling over linear rise, but just over the slope). RF&C have done nothing “novel” or “clever” (as Cook or whoever puts it at SKS) by establishing a linear trend (EMD achieves that too).

    But by erroneously eliminating the shoulder of data around the turn of the century and being fixated on the overall linear aspect, they’ve missed a vital intermediate inflexion in the data. That inflexion is glaringly evident in SST data by use of trends which better represent the data, e.g. a polynomial (a minimal linear-vs-polynomial fit sketch follows at the end of this comment). Over time, that inflexion will pull down the long-term linear trend, as it is doing with SSL and SST. Tisdale has made an SST data-model comparison here:-

    ‘Model-Data Comparison: Pacific Ocean Satellite-Era Sea Surface Temperature Anomalies’


    From the WUWT post ‘Once again, reality trumps models – Pacific SST’s are flat':-


    Just as with RF&C Figure 1 there is a discrepancy – data vs models (RF&C “data” same as models) – late 2012. The absolute level of CMIP5 and RF&C (assuming their series keeps rising as a result of supposed CO2 forcing) at end of 2012 is imaginary – the data does NOT exist in reality.

    If you wish to characterize model output and erroneously invented values as valid data, and actual data unaffected by any significant anomalous events as “noise”, then I suggest that it is you who is afflicted by delusion, Nick – not I.
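
    As flagged above, a minimal linear-vs-polynomial trend comparison in Python/NumPy; the series here is entirely made up to stand in for the Pacific SST data, so none of the numbers are Tisdale’s:

        import numpy as np

        t = np.arange(1980, 2013, 1.0 / 12)               # decimal years, monthly
        sst = (0.01 * (t - 1980)                          # fake anomalies with a
               - 0.002 * np.clip(t - 2004, 0, None) ** 2  # flattening after ~2004
               + 0.05 * np.random.randn(t.size))

        linear = np.polyfit(t, sst, 1)   # the "rising linear trend"
        poly   = np.polyfit(t, sst, 4)   # higher-order fit that can show an inflexion

        lin_fit  = np.polyval(linear, t)
        poly_fit = np.polyval(poly, t)
        # Comparing the two fitted curves near the end of the record is where any
        # recent flattening/inflexion shows up in the polynomial but not the straight line.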

  40. Richard C (NZ) on December 7, 2012 at 11:23 am said:

    >”….they’ve missed a vital intermediate inflexion in the data”

    Even the CMIP5 ensemble looks to have mimicked the 2003/04 inflexion in the Pacific SST data.

  41. Richard C (NZ) on December 7, 2012 at 1:20 pm said:

    >”Even the CMIP5 ensemble looks to have mimiced the 2003/04 inflexion in the Pacific SST data”

    And in the atmosphere, there are at least 2 model runs in amongst the spaghetti of Christy’s EPW plot that have mimicked the inflexion and are on the same flat (even fractionally negative) trajectory as the observations out to 2010, but at a higher absolute level:-


    Add to that Christy’s EPS Figure 2.1 update plot, in which INM-CM4 mimics the inflexion, trajectory and absolute level of the observations. There may even be another updated run in the EPS plot (hard to see) that turns at the inflexion and is on the trajectory but at a higher absolute level, making 3 or 4 models that mimic the inflexion and trajectory of observations 2004 – 2010 (Figure 2.1, page 19):-


    These model runs do not support the monotonic linear rise contention put forward by Rahmstorf, Foster and Cazenave.

  42. Exxon Hates your Children


    Yet the same people who made this video are probably quite happy for Exxon and others to pay them for climate research, and to fly to Doha business class.

    Such utter, total, hypocrites.

    These activists make my flesh crawl

  43. Site domain owned by this guy:


  44. How naive of Richard C to think that the RF&C analysis has removed all the noise. In fact, if he had bothered to read the abstract of Foster and Rahmstorf 2011 he might have noted that they said “…the global warming signal becomes even more evident as noise is reduced.” Not eliminated as he seems to believe.

    Additionally, in the body of Foster and Rahmstorf 2011 they state “The full model explains 70%–80% of the variance for all five data sets”, not all of it, which further demonstrates his folly.

    He does make one valid point when he says that the authors could treat the lag more realistically. However, what they have done is demonstrably better than using unadjusted temperature data, and it is not clear that a more realistic treatment would be worth the additional complexity. Perhaps a subsequent paper will address this and incrementally improve the result. That would be how science works.

    The fact remains that the IPCC projections from 2001 and 2007 for global temperatures have been proven to be remarkably accurate once exogenous factors are removed from the temperature data set.

  45. Richard C (NZ) on December 8, 2012 at 10:24 am said:

    >”The fact remains that the IPCC projections from 2001 and 2007 for global temperatures have been proven to be remarkable accurate once exogenous factors are removed from the temperature data set.”

    Yes, FAR, TAR, out-of-date data sets, made-up “data” and fallacious rationales are great for gullible folks who live in the past, as you appear to, Nick. Unfortunately, subsequent CMIP5 runs and late 2012 observed temperature updates disprove what RF&C have previously “proven” (according to you). Some of the models (at least 3, maybe 4) have improved considerably since CMIP3 (2007), to the extent of mimicking the observed temperature trajectory in GAT and SST (a flat trajectory since 2003/04), as pointed out here:-


    Models, rather than “removing” RF&C’s “exogenous” factors, actually account for TSI and El Niño. They don’t mimic the El Niño fluctuation that releases energy in one short time span, but they do account for the solar energy input that contributes to it, and they release it over a broader span. That is why a 7 year running mean of observed temperature provides an (almost) apples-to-apples comparison of models to observations, and why El Niño energy should be left in the series, not taken out as F&R/RF&C do, i.e. El Niño is not an “exogenous” factor.

    The same goes for TSI, via the models’ use of a solar “constant” that normally would represent the peaks and troughs of TSI, because the ocean modulates the fluctuating solar input anyway and there is no instantaneous atmospheric response to TSI over the ocean as RF&C would have us believe (and as you, in your naivety, appear to believe, Nick).

    Unfortunately (again), the sun is not playing by the modelers’ rules. The latest peak in TSI looks unlikely to reach the 2000 levels of the VIRGO series here (fractionally below RF&C’s 0 anomaly baseline at present):-


    SSNs are similarly not reaching the predicted peak. Therefore the solar “constant” is no longer valid and will have to be adjusted, as Lean and others do periodically. This means that there will not be as much energy in the oceanic heat sink (lagged heat) to sustain future temperatures (“warming”, as Rob Painting puts it at SkS), and we are seeing the early stages of the solar recession predicted by astrophysics. It also means that all IPCC GCM simulations to date will have used an invalid solar constant for at least the next 30 yrs and probably out to 2100.

    And rather than being an instantaneous manifestation in atmospheric temperature, that present lower level of TSI peak will not translate to atmospheric temperature for some time, maybe the end of 2013 (no El Niño predicted for 2013) or sometime in 2014. Because TSI is “real” power (in electrical terms), i.e. higher intensity, frequency and energy-per-photon, as opposed to the “apparent” power of CO2 LWIR (lower intensity, frequency and energy-per-photon), there won’t be an offset from rising CO2, and there will be less re-emitted solar energy available to be intercepted by CO2 anyway.

    So rather than a global “warming” signal being evident as you contend, Nick, it is a global cooling signal that is emerging. Foster and Rahmstorf miss that signal completely, of course.

  46. Richard C (NZ) on December 8, 2012 at 12:45 pm said:

    >”Not eliminated as he seems to believe”

    If by this statement you are alluding to significant factors that F&R have neglected to consider, e.g. cloud forcing, then perhaps you could present a synopsis of the comparative effect of that forcing (W/m2 comparisons) over the last 30 yrs, with particular attention to each decade, Nick?

    With recourse to relevant and up-to-date papers and reports, of course.


  47. As mentioned before, Nick, if Rahmstorf is correct then he undermines the AGW hypothesis further, because the discrepancy between what was predicted and what is observed is even worse. A surface temperature that is rising faster than the upper troposphere proves there is no positive feedback from water vapour, and if Rahmstorf says the surface is rising even faster then it only reinforces the point.

    Over 30,000,000 radiosondes and 2 satellites reinforce the empirical truth, and all you have to do to believe in AGW is ignore all three corroborating lines of evidence in favour of unfounded and empirically disproved speculation.

    I don’t know why you pro-AGW guys keep going on about AGW when the lack of a hot spot stops your theory in its tracks; perhaps it’s just easier to ignore the inconvenient. Everything else fails without it, as there is no proof that the warming has anything to do with mankind’s emissions except for the 1.2 C max. per doubling of total atmospheric CO2 levels. Especially when the temperature rise is no different from previous rises that are supposed to be natural.

  48. Richard C (NZ) on December 8, 2012 at 3:41 pm said:

    >”Unfortunately……the sun is not playing by modelers rules”

    >”…..means that all IPCC GCM simulations to date will have used an invalid solar constant for at least the next 30 yrs and probably out to 2100″

    Constant and forcing. Solar forcing 1850 to 2050 in GISS Model E:-


    From Forcings in GISS Climate Model – Solar Irradiance


    From 2000 onwards, the max, min and 11 year cycle length are assumed to continue unabated from the late-1980s level. Already by late 2012 that is not playing out.

    When Will it Start Cooling?


    SC 24 could extend to 17 years, ending in 2024 and leaving 12.5 years of cooling from mid-2013. Some of the more radical solar predictions are for grand minimum conditions. Vieira et al 2011 reconstructed Holocene TSI, finding about 1.5 W/m2 from min to max. The 2000 peak was near max levels, so the extent of the possible TSI reduction should not exclude 1 W/m2.

    Sun Headed Into Hibernation, Solar Studies Predict

