Climate Models

This thread is for discussion of computer climate models, or General Circulation Models (GCMs).

37 Thoughts on “Climate Models”

  1. Well, well, it looks like someone got the models wrong again.

    How often have we heard that droughts will increase due to global warming? It’s the single most-quoted effect that alarmists use when discussing Africa, for example.

    Seems they were wrong.

    We find no evidence in our analysis of a positive feedback—that is, a preference for rain over wetter soils—at the spatial scale (50–100 kilometres) studied. In contrast, we find that a positive feedback of soil moisture on simulated precipitation does dominate in six state-of-the-art global weather and climate models—a difference that may contribute to excessive simulated droughts in large-scale models.

    This is why these blokes should have checked their models before shouting about the end of the world.

  2. Richard C (NZ) on September 18, 2012 at 8:51 pm said:

    New paper shows negative feedback from clouds ‘may damp global warming’

    A paper published today in The Journal of Climate uses a combination of two modelling techniques to find that negative feedback from clouds could result in “a 2.3-4.5% increase in [model projected] cloudiness” over the next century, and that “subtropical stratocumulus [clouds] may damp global warming in a way not captured by the [Global Climate Models] studied.” This strong negative feedback from clouds could alone negate the 3C alleged anthropogenic warming projected by the IPCC.

    As Dr. Roy Spencer points out in his book,

    “The most obvious way for warming to be caused naturally is for small, natural fluctuations in the circulation patterns of the atmosphere and ocean to result in a 1% or 2% decrease in global cloud cover. Clouds are the Earth’s sunshade, and if cloud cover changes for any reason, you have global warming — or global cooling.”

    According to the authors of this new paper, current global climate models “predict a robust increase of 0.5-1 K in EIS over the next century, resulting in a 2.3-4.5% increase in [mixed layer model] cloudiness.”

    EIS or estimated inversion strength has been shown by observations to be correlated with cloudiness, as demonstrated by the 2nd graph below from the University of Washington, indicating that a 1 K increase in EIS results in an approximate 4-5% increase in low cloud cover [CF or cloud fraction]. Thus, a combination of observational data and modelling indicates that clouds have a strong net negative feedback upon global warming that is “not captured” by current climate models.
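    The arithmetic behind the quoted range can be checked with a one-line linear estimate (my own sketch; the 4-5% per K sensitivity is the observational figure quoted above, and a purely linear response is an assumption):

```python
# Linear back-of-envelope: projected EIS change times observed sensitivity
# of low cloud fraction to EIS. Numbers are those quoted in the comment above.

def cloud_fraction_change(delta_eis_k, sensitivity_pct_per_k):
    """Estimated change in low cloud fraction (%) for a given EIS change (K)."""
    return delta_eis_k * sensitivity_pct_per_k

low = cloud_fraction_change(0.5, 4.5)   # lower end of the projected EIS rise
high = cloud_fraction_change(1.0, 4.5)  # upper end
print(low, high)  # -> 2.25 4.5, roughly bracketing the quoted 2.3-4.5%
```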

    CMIP3 Subtropical Stratocumulus Cloud Feedback Interpreted Through a Mixed-Layer Model



  3. Richard C (NZ) on October 18, 2012 at 7:54 am said:

    Climate change research gets petascale supercomputer

    1.5-petaflop IBM Yellowstone system runs 72,288 Intel Xeon cores

    Computerworld – Scientists studying Earth system processes, including climate change, are now working with one of the largest supercomputers on the planet.

    The National Center for Atmospheric Research (NCAR) has begun using a 1.5 petaflop IBM system, called Yellowstone, that is among the top 20 supercomputers in the world, at least until the global rankings are updated next month.

    For NCAR researchers it is an enormous leap in compute capability — a roughly 30 times improvement over its existing 77 teraflop supercomputer. Yellowstone is a 1,500 teraflops system capable of 1.5 quadrillion calculations per second.

    The NCAR-Wyoming Supercomputing Center in Cheyenne, where this system is housed, says that with Yellowstone, it now has “the world’s most powerful supercomputer dedicated to geosciences.”

    Along with climate change, this supercomputer will be used on a number of geoscience research issues, including the study of severe weather, oceanography, air quality, geomagnetic storms, earthquakes and tsunamis, wildfires, subsurface water and energy resources.


    Scientists will be able to use the supercomputer to model the regional impacts of climate change. A model with a grid spacing of 100 km (62 miles) is considered coarse because each cell covers a large area. This new system may be able to refine the grid to as fine as 10 km (6.2 miles), giving scientists the ability to examine climate impacts in greater detail.
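    The compute implication of that finer grid is easy to sketch (my own arithmetic, not from the article; the tenfold time-step reduction is the usual CFL-type assumption):

```python
# Rough cost scaling when refining a global model grid from 100 km to 10 km.
# Assumptions (mine, for illustration): Earth's surface area ~5.1e8 km^2, and
# the time step must shrink in proportion to the grid spacing (stability limit).

EARTH_SURFACE_KM2 = 5.1e8

def horizontal_cells(spacing_km):
    """Approximate number of grid cells covering the globe at a given spacing."""
    return EARTH_SURFACE_KM2 / spacing_km**2

coarse = horizontal_cells(100)  # ~51,000 cells
fine = horizontal_cells(10)     # ~5,100,000 cells

# Refining 100 km -> 10 km multiplies cell count by 100; with the time step
# also shrinking 10x, total compute grows roughly 1000x per simulated year.
print(round(fine / coarse))       # -> 100
print(round(fine / coarse) * 10)  # -> 1000
```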


    Yellowstone is running in a new $70 million data center. The value of the supercomputer contract was put at $25 million to $35 million. It has 100 racks, with 72,288 compute cores from Intel Sandy Bridge processors.


    Rather large energy requirement too – “The facility was designed with a total capacity of 4 to 5 megawatts of electricity, but with Yellowstone now in production, usage is considerably lower. Total power for computing, cooling, office, and support functions has averaged 1.8 to 2.1 MW”

    NCAR-Wyoming Supercomputing Center
    Fact Sheet

  4. Richard C (NZ) on October 18, 2012 at 8:39 am said:

    I queried John Christy as to which modelling group it was that has mimicked absolute temperature and trajectory this century so far in his EPS statement Figure 2.1. This was his reply:-


    This model labeled 27 should be inmcm4 (Russia)

    John C.

    John R. Christy
    Director, Earth System Science Center
    Distinguished Professor, Atmospheric Science
    University of Alabama in Huntsville
    Alabama State Climatologist

  5. Richard C (NZ) on October 18, 2012 at 8:55 am said:

    What did the Russians do that everyone else didn’t in CMIP5 for AR5? Did they ramp GHG forcing down to zero, I wonder? They do say there were “some changes in the formulation”.


    The INMCM3.0 climate model has formed the basis for the development of a new climate-model version: the INMCM4.0. It differs from the previous version in that there is an increase in its spatial resolution and some changes in the formulation of coupled atmosphere-ocean general circulation models. A numerical experiment was conducted on the basis of this new version to simulate the present-day climate. The model data were compared with observational data and the INMCM3.0 model data. It is shown that the new model adequately reproduces the most significant features of the observed atmospheric and oceanic climate. This new model is ready to participate in the Coupled Model Intercomparison Project Phase 5 (CMIP5), the results of which are to be used in preparing the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC).

    # # #

    Good to see a modeling group validating their model against observations (GCM group that is, RTM groups do this religiously) – this is a major breakthrough.

  6. Richard C (NZ) on October 18, 2012 at 1:29 pm said:

    Simulating Present-Day Climate with the INMCM4.0 Coupled Model of the Atmospheric and Oceanic General Circulations

    E. M. Volodin, N. A. Dianskii, and A. V. Gusev, 2010

    Institute of Numerical Mathematics, Russian Academy of Sciences, ul. Gubkina 8, Moscow, 119991 Russia

    Page 2:-

    This makes it possible to analyze systematic errors in simulating the present-day climate and to assess the range of its possible changes caused, for example, by anthropogenic forcing.

    Page 3:-

    On the basis of this model, a numerical experiment was carried out to simulate the modern climate. To this end, the concentrations specified for all radiatively active gases and aerosols corresponded to those in 1960.

    Page 4:-

    Name: Air temperature at the surface °C
    Observations: 14.0 ± 0.2 [34]
    INMCM3.0: 13.0 ± 0.1
    INMCM4.0: 13.7 ± 0.1

    Page 4:-

    The 1951–2000 NCEP reanalysis data [31] were used to compare the model atmospheric dynamics with observational data, and data from [32–41] were used to compare the integral atmospheric characteristics.

    Page 3:-

    The parameterizations of the basic physical processes in the model have changed only slightly; namely, some of the tuning parameters have changed. Among these are the parameterizations of radiation [18],

    18. V. Ya. Galin, “Parametrization of Radiative Processes
    in the DNM Atmospheric Model,”

    ‘Parametrization of radiative processes in the DNM atmospheric model’

    Galin, V.Y. [Russian Academy of Sciences, Moscow (Russian Federation)]

    The radiative code of the atmospheric model (DNM model) of the Institute of Numerical Mathematics (IVM), Russian Academy of Sciences is described. The code uses spectral transmission functions and the delta-Eddington approximation to take into account the absorption and scattering of radiation in the atmosphere due to atmospheric gases, aerosols, and clouds. The simplest regularization procedure in combination with the nonmonotonic factorization method is used to find a stable solution to the ill-conditioned system of delta-Eddington equations. Computation algorithms are presented, and the results obtained are compared to both the data of benchmark line-by-line calculations and the model data of ICRCCM international radiative programs. It was found that the DNM model yields a high accuracy of computing the thermal and solar radiation.

    # # #

    Unfortunately I can’t access the body of the Galin paper. Unfortunate because the “absorption and scattering” characteristics of CO2 used (and any changes made in INMCM4.0) would make VERY interesting reading.

  7. Richard C (NZ) on October 18, 2012 at 3:43 pm said:

    5.2 Heat emission on page 43 of Volodin, Dianskii, and Gusev gives the formulae, share of emissions across the spectrum, and references tables of coefficients.

  8. Richard C (NZ) on October 18, 2012 at 4:02 pm said:

    Description of the CCM INM RAS and model experiments

    Description of the atmospheric climate model inmcm4.0.(new) [hotlink]

    Short description of the coupled climate model inmcm3.0 and model experiments. [hotlink]

    Timetable of the model experiments.

    Selected publications [hotlinked]

    Volodin E.M., Diansky N.A.. “Prediction of the climate change in 19-22th centuries using coupled climate model”.

    Volodin E.M., Diansky N.A. “ENSO reconstruction in the Coupled Climate Model”.

    Volodin E.M.”Simulation of the modern climate. Comparison with observations and data of other climate models”.

    Volodin E.M. “Reliability of the future climate change forecasts”.

    Volodin E.M., Diansky N.A., Gusev A.V. “Simulating Present Day Climate with the INMCM4.0 Coupled Model of the Atmospheric and Oceanic General Circulations”.(new)

    Volodin E.M. “Atmosphere-Ocean General Circulation Model with the Carbon Cycle”.(new)

  9. “Pinatubo Climate Sensitivity and Two Dogs that didn’t bark in the night”

    Interesting article on climate sensitivity over at Lucia’s

  10. Richard C (NZ) on October 24, 2012 at 12:47 pm said:

    Lucia’s blog analysis makes Nuccitelli et al’s DK12 Comment look somewhat ordinary.

    For about 2 yrs data and “a single ocean heat capacity model” (one-heat-sink), Lucia’s model “is ‘seeing’ an ocean capacity of 53 watt-months/deg C/m2 – equivalent to about 30 to 40m water depth”. Further down the page, the model is “(still) ‘seeing’ a total ocean heat capacity corresponding to about the top 30-40m of ocean”. This is for 60S to 60N only.

    According to Nuccitelli et al, that’s all “noise” and 5 yr smoothed data should be used down to 2000m.

    Can’t say I’m convinced by globally averaged approximations for these calculations. I think the 0-GCM approach, using observed ocean heat climatology (which one?) corresponding to TOA satellite observations cell-by-cell, is about the only way to arrive at anything anywhere near meaningful. Not that I know what it is about at Lucia-level.
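    Lucia’s “53 watt-months/deg C/m2 – equivalent to about 30 to 40m water depth” figure is easy to verify (my own sketch; the seawater density and specific heat values are my assumptions):

```python
# Sanity check of the quoted figure: 53 watt-months/degC/m^2 of effective
# ocean heat capacity should correspond to roughly 30-40 m of water depth.
# Assumed seawater properties: density ~1025 kg/m^3, specific heat ~3990 J/(kg K).

SECONDS_PER_MONTH = 365.25 * 24 * 3600 / 12  # ~2.63e6 s
RHO_SEAWATER = 1025.0   # kg/m^3
CP_SEAWATER = 3990.0    # J/(kg K)

def equivalent_depth_m(watt_months_per_k_m2):
    """Water depth whose heat capacity per m^2 matches the given value."""
    joules_per_k_m2 = watt_months_per_k_m2 * SECONDS_PER_MONTH
    return joules_per_k_m2 / (RHO_SEAWATER * CP_SEAWATER)

print(round(equivalent_depth_m(53)))  # -> 34, inside the quoted 30-40 m range
```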

  11. Richard C (NZ) on December 31, 2012 at 5:21 pm said:

    AR5 Chapter 11; Hiding the Decline (Part II)

    Figure 11.33: Synthesis of near-term projections of global mean surface air temperature. a), b) and c):-

    They hid the decline! In the first graph, observational data ends about 2011 or 12. In the second graph though, it ends about 2007 or 8. There are four or five years of observational data missing from the second graph. Fortunately the two graphs are scaled identically which makes it very easy to use a highly sophisticated tool called “cut and paste” to move the observational data from the first graph to the second graph and see what it should have looked like:

    Well oops. Once one brings the observational data up to date, it turns out that we are currently below the entire range of models in the 5% to 95% confidence range across all emission scenarios. The light gray shading is for RCP 4.5, the most likely emission scenario. But we’re also below the dark gray which is all emission scenarios for all models, including the ones where we strangle the global economy.

    + + +

    Also John Christy’s preliminary plot (incomplete) of CMIP5 RCP4.5 vs observations (UAH/RSS):-

  12. Richard C (NZ) on February 3, 2013 at 11:34 am said:

    The controversy

    by Anastassia Makarieva, Victor Gorshkov, Douglas Sheil, Antonio Nobre, Larry Li

    Thanks to help from blog readers, those who visited the ACPD site and many others who we have communicated with, our paper has received considerable feedback. Some were supportive and many were critical. Some have accepted that the physical mechanism is valid, though some (such as JC) question its magnitude and some are certain it is incorrect (but cannot find the error). Setting aside these specific issues, most of the more general critical comments can be classified as variations on, and combinations of, three basic statements:

    1. Current weather and climate models (a) are already based on physical laws and (b) satisfactorily reproduce observed patterns and behaviour. By inference, it is unlikely that they miss any major processes.

    2. You should produce a working model more effective than current models.

    3. Current models are comprehensive: your effect is already there.

    Let’s consider these claims one by one.

    Models and physical laws


    Thus, while there are physical laws in existing models, their outputs (including apparent circulation power) reflect an empirical process of calibration and fitting. In this sense models are not based on physical laws. This is the reason why no theoretical estimate of the power of the global atmospheric circulation system has been available until now.

    The models reproduce the observations satisfactorily

    As we have discussed in our paper (p. 1046), current models fail when it comes to describing many water-related phenomena. But perhaps a more important point to make here is that even where behaviours are satisfactorily reproduced, it would not mean that the physical basis of the model is correct. Indeed, any phenomenon that repeats itself can be formally described or “predicted” completely without understanding its physical nature.


    For example, a climate model empirically fitted for a forest-covered continent cannot inform us about the climatic consequences of deforestation if we do not correctly understand the underlying physical mechanisms.

    You should produce a better model than the existing ones


    To expect that a few theorists, however keen, can achieve that is neither reasonable nor realistic. We have invested our efforts to show, using suitable physical estimates, that the effect we describe is sufficient to justify a wider and deeper scrutiny. (At the same time we are also developing a number of texts to show how current models in fact contain erroneous physical relationships; see, e.g., here.)

    Your effect is already present in existing models

    Many commentators believe that the physics we are talking about is already included in models. There is no omission. This argument assumes that if the processes of condensation and precipitation are reproduced in models, then the models account for all the related phenomena, including pressure gradients and dynamics. This is, however, not so. Indeed this is not merely an oversight but an impossibility. The explanation is interesting and deserves recognition – so we shall use this opportunity to explain.


    In current models in the absence of a theoretical stipulation on the circulation power, a reverse logic is followed. The horizontal pressure gradients are determined from the continuity equation, with the condensation rate calculated from the Clausius-Clapeyron law using temperature derived from the first law of thermodynamics with empirically fitted turbulence. However, as we have seen, to correctly reproduce condensation-induced dynamics, condensation rate requires an accuracy much greater than γ << 1. Meanwhile the imprecision of the first law of thermodynamics as applied to describe the non-equilibrium atmospheric dynamics is precisely of the same order of γ. The kinetic energy of the gas is not accounted for in equilibrium thermodynamics.


    Summary and outlook

    The Editor’s comment on our paper ends with a call to further evaluate our proposals. We second this call. The reason we wrote this paper was to ensure it entered the mainstream and gained recognition. For us the key implication of our theory is the major importance of vegetation cover in sustaining regional climates. If condensation drives atmospheric circulation as we claim, then forests determine much of the Earth’s hydrological cycle (see here for details). Forest cover is crucial for the terrestrial biosphere and the well-being of many millions of people. If you acknowledge, as the editors of ACP have, any chance – however large or small – that our proposals are correct, then we hope you concede that there is some urgency that these ideas gain a clear, objective assessment from those best placed to assess them.

  13. Richard C (NZ) on April 30, 2013 at 4:33 pm said:

    New paper finds IPCC climate models unable to reproduce solar radiation at Earth’s surface

    A new paper published in the Journal of Geophysical Research – Atmospheres finds the latest generation of IPCC climate models were unable to reproduce the global dimming of sunshine from the ~ 1950s-1980s, followed by global brightening of sunshine during the 1990s. These global dimming and brightening periods explain the observed changes in global temperature over the past 50-60 years far better than the slow steady rise in CO2 levels. The authors find the models underestimated dimming by 80-85% in comparison to observations, underestimated brightening in China and Japan as well, and that “no individual model performs particularly well for all four regions” studied. Dimming was underestimated in some regions by up to 7 Wm-2 per decade, which by way of comparison is 25 times greater than the alleged CO2 forcing of about 0.28 Wm-2 per decade. The paper demonstrates that climate models are unable to reproduce the known climate change of the past, much less the future, and that the forcing from changes in solar radiation at the Earth’s surface is still far from being understood and dwarfs any alleged effect of increased CO2.

    ‘Evaluation of multidecadal variability in CMIP5 surface solar radiation and inferred underestimation of aerosol direct effects over Europe, China, Japan and India’

    R. J. Allen 1, J. R. Norris 2, M. Wild 3

    DOI: 10.1002/jgrd.50426

  14. Richard C (NZ) on May 19, 2013 at 12:39 pm said:

    ‘Global warming slowdown retrospectively “predicted” ‘

    By Ashutosh Jogalekar

    When I was in graduate school I once came across a computer program that’s used to predict the activities of as yet unsynthesized drug molecules. The program is “trained” on a set of existing drug molecules with known activities (the “training set”) and is then used to predict those of an unknown set (the “test set”). In order to make learning the ropes of the program more interesting, my graduate advisor set up a friendly contest between me and a friend in the lab. We were each given a week to train the program on an existing set and find out how well we could do on the unknowns.

    After a week we turned in our results. I actually did better than my friend on the existing set, but my friend did better on the test set. From a practical perspective his model had predictive value, a key property of any successful model. On the other hand my model was one that still needed some work. Being able to “predict” already existing data is not prediction, it’s explanation. Explanation is important, but a model such as mine that merely explained what was already known is an incomplete model since the value and purpose of a truly robust model is prediction. In addition, a model that merely explains can be made to fit the data by tweaking its parameters with the known experimental numbers.
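    The training-set/test-set distinction described above can be sketched in a few lines (my own toy example, not from the article): a “model” that memorizes the training data explains it perfectly but predicts nothing, while a simple fitted model generalizes.

```python
# Toy data: four training points and two held-out test points, roughly y = 2x.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
test = [(5, 10.1), (6, 11.9)]

def memorizer(x, data=dict(train)):
    # "Explains" known points exactly, predicts nothing for new ones.
    return data.get(x, 0.0)

def linear_fit(points):
    # Ordinary least-squares slope and intercept, computed by hand.
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

model = linear_fit(train)

def mse(predict, data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train) < mse(model, train))  # True: memorizer "explains" better
print(mse(model, test) < mse(memorizer, test))    # True: only the fitted model predicts
```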

    These are the thoughts that went through my mind as I read a recent paper from Nature Climate Change in which climate change modelers “predicted” the last ten years of global temperature stagnation.


    This kind of retrospective calculation is a standard part of model building. But let’s not call it a “prediction”, it’s actually a “postdiction”. The present study indicates that models used for predicting temperature changes need some more work, especially when dealing with tightly coupled complex systems such as ocean sinks. In addition you cannot simply make these models work by tweaking the parameters; the problem with this approach is that it risks condemning the models to a narrow window of applicability beyond which they will lack the flexibility to take sudden changes into account. A robust model is one with a minimal number of parameters which does not need to be constantly tweaked to explain what has already happened and which is as general as possible. Current climate models are not useless, but in my opinion the fact that they could not prospectively predict the temperature stagnation implies that they lack robustness. They should really be seen as “work in progress”.

    I can also see how such a study will negatively affect the public image of global warming. People are usually not happy with prediction after the fact……..


  15. Richard C (NZ) on June 19, 2013 at 2:46 pm said:

    ‘The “ensemble” of models is completely meaningless, statistically’

    Posted on June 18, 2013 by Anthony Watts

    This comment from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the “No significant warming for 17 years 4 months” thread has gained quite a bit of attention [e.g. reproduced by Dr Judith Curry at Climate Etc] because it speaks clearly to truth. So that all readers can benefit, I’m elevating it to a full post.

    rgbatduke says:

    June 13, 2013 at 7:20 am

    Last two paragraphs:

    “It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

    Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.”
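    The effect of culling described in that last paragraph is just arithmetic (toy numbers of my own invention, mirroring the argument, not actual model trends):

```python
# If an "ensemble" mixes a few skilful models with many poor ones, the ensemble
# mean is dominated by the poor ones, and culling them changes the answer a lot.

model_trends = [1.0, 1.1, 0.9] + [2.5, 2.8, 3.0, 2.6, 2.9, 2.7, 3.1]  # C/century

mean_all = sum(model_trends) / len(model_trends)
mean_best = sum(model_trends[:3]) / 3  # keep only the models nearest observations

print(round(mean_all, 2))   # -> 2.26
print(round(mean_best, 2))  # -> 1.0
```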

  16. Richard C (NZ) on June 24, 2013 at 7:21 pm said:

    One of the first jobs for NIWA’s High Performance Computing Facility (HPCF) was snow modeling partly funded by the Ski Areas Association of New Zealand:

    ‘New Zealand snow areas confident they will adapt to any risks from climate change’

    16 December 2010

    New climate modelling shows seasonal snow levels at New Zealand ski areas will be reduced by the effects of climate change in the coming years, but the good news is that the loss may actually be less than originally anticipated, and we should be able to continue to make snow, even under a more extreme climate scenario.

    A lot less. I’ve just seen a newsclip from Mt Hutt (I think it was) where they were saying the 3m base was the most they had ever seen.

  17. Richard C (NZ) on June 26, 2013 at 7:06 pm said:

    ‘New Weather Service supercomputer faces chaos’

    By Steve Tracton

    The National Weather Service is currently in the process of transitioning its primary computer model, the Global Forecast System (GFS), from an old supercomputer to a brand new one [Weather and Climate Operational Supercomputer System (WCOSS)]. However, before the switch can be approved, the GFS model on the new computer must generate forecasts indistinguishable from the forecasts on the old one.

    One expects that ought not to be a problem, and to the best of my 30+ years of personal experience at the NWS, it has not been. But now, chaos has unexpectedly become a factor and differences have emerged in forecasts produced by the identical computer model but run on different computers.

    This experience closely parallels Ed Lorenz’s experiments in the 1960s, which led serendipitously to the development of chaos theory (aka the “butterfly effect”). What Lorenz found – to his complete surprise – was that forecasts run with identically the same (simplistic) weather forecast model diverged from one another as forecast length increased, solely due to even minute differences inadvertently introduced into the starting analyses (“initial conditions”).


    So what lay behind the chaotic-like divergence of solutions between the identically same GFS run on different computer systems? Simply speaking, the error in the model’s sequence of short-range (3 hour) forecasts, which provide the “first guess” in assimilation of the latest observations, does not result in precisely the same initial conditions for the next pair of GFS extended-range forecasts (see schematic illustration below).

    The differences in the simulations arise solely from exceedingly small, but apparently consequential differences in numerical calculations. These are associated with differences in the computer systems’ structure and logical organization (architecture) and compilers which translate programming codes (e.g., versions of Fortran) to machine language – and probably other factors way over my head to understand.
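    Lorenz-style sensitivity of this kind is easy to reproduce (my own sketch, not the GFS itself): run the Lorenz equations twice with initial conditions differing by one part in a billion, mimicking rounding-level differences between architectures and compilers.

```python
# Two runs of the Lorenz system from near-identical starting points.
# A perturbation at machine-rounding scale grows to order-one size.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_separation(perturbation, steps=3000):
    """Largest coordinate-wise gap between the two runs over the integration."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + perturbation, 1.0, 1.0)
    worst = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        worst = max(worst, max(abs(p - q) for p, q in zip(a, b)))
    return worst

# A 1e-9 difference in the starting x grows to order-one size within
# 30 model time units -- the two "forecasts" no longer agree.
print(max_separation(1e-9) > 1.0)  # True
```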


  18. Richard C (NZ) on June 28, 2013 at 10:08 pm said:

    ‘Policy Implications of Climate Models on the Verge of Failure’

    By Paul C. Knappenberger and Patrick J. Michaels
    Center for the Study of Science, Cato Institute, Washington DC

    [converted from a poster displayed at the AGU Science Policy Conference, Washington, June 24-26]


    Assessing the consistency between real-world observations and climate model projections is a challenging problem, but one that is essential prior to making policy decisions which depend largely on such projections. National and international assessments often mischaracterize the level of consistency between observations and projections. Unfortunately, policymakers are often unaware of this situation, which leaves them vulnerable to developing policies that are ineffective at best and dangerous at worst.

    Here, we find that at the global scale, climate models are on the verge of failing to adequately capture observed changes in the average temperature over the past 10 to 30 years—the period of the greatest human influence on the atmosphere. At the regional scale, specifically across the United States, climate models largely fail to replicate known precipitation changes both in sign as well as magnitude.



    It is impossible to present reliable future projections from a collection of climate models which generally cannot simulate observed change. As a consequence, we recommend that unless/until the collection of climate models can be demonstrated to accurately capture observed characteristics of known climate changes, policymakers should avoid basing any decisions upon projections made from them. Further, those policies which have already been established using projections from these climate models should be revisited.

    Assessments which suffer from the inclusion of unreliable climate model projections include those produced by the Intergovernmental Panel on Climate Change and the U.S. Global Climate Change Research Program (including the draft of their most recent National Climate Assessment). Policies which are based upon such assessments include those established by the U.S. Environmental Protection Agency pertaining to the regulation of greenhouse gas emissions under the Clean Air Act.

    Re the EPA assessments, see ‘Amicus brief to the Supreme Court’ (filed May 23, 2013):

  19. Richard C (NZ) on July 12, 2013 at 10:35 am said:

    ‘Climate change: The forecast for 2018 is cloudy with record heat’

    Efforts to predict the near-term climate are taking off, but their record so far has been patchy.

    * Jeff Tollefson

    In August 2007, Doug Smith took the biggest gamble of his career. After more than ten years of work with fellow modellers at the Met Office’s Hadley Centre in Exeter, UK, Smith published a detailed prediction of how the climate would change over the better part of a decade. His team forecast that global warming would stall briefly and then pick up speed, sending the planet into record-breaking territory within a few years.

    The Hadley prediction has not fared particularly well. Six years on, global temperatures have yet to shoot up as it projected. Despite this underwhelming result, such near-term forecasts have caught on among many climate modellers, who are now trying to predict how global conditions will evolve over the next several years and beyond. Eventually, they hope to offer forecasts that will enable humanity to prepare for the decade ahead just as meteorologists help people to choose their clothes each morning.

    These near-term forecasts stand in sharp contrast to the generic projections that climate modellers typically produce, which look many decades ahead and don’t represent the actual climate at any given time. “This is very new to climate science,” says Francisco Doblas-Reyes, a modeller at the Catalan Institute of Climate Sciences in Barcelona, Spain, and a lead author of a chapter that covers climate prediction for a forthcoming report by the Intergovernmental Panel on Climate Change (IPCC). “We’re developing an additional tool that can tell us a lot more about the near-term future.”

    In preparation for the IPCC report, the first part of which is due out in September, some 16 teams ran an intensive series of decadal forecasting experiments with climate models. Over the past two years, a number of papers based on these exercises have been published, and they generally predict less warming than standard models over the near term. For these researchers, decadal forecasting has come of age. But many prominent scientists question both the results and the utility of what is, by all accounts, an expensive and time-consuming exercise.


    By starting in the present with actual conditions, Smith’s group hoped to improve the model’s accuracy at forecasting the near-term climate. The results looked promising at first. The model initially predicted temperatures that were cooler than those seen in conventional climate projections — a forecast that basically held true into 2008. But then the prediction’s accuracy faded sharply: the dramatic warming expected after 2008 has yet to arrive (see ‘Hazy view’). “It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”


    Smith says that his group at the Hadley Centre has doubled the resolution of its model, which now breaks the planet into a grid with cells 150 kilometres on each side. Within a few years, he hopes to move to a 60-kilometre grid, which will make it easier to capture the connections between ocean activities and the weather that society is interested in. With improved models, more data and better statistics, he foresees a day when their models will offer up a probabilistic assessment of temperatures and perhaps even precipitation for the coming decade.

    In preparation for that day, he has set up a ‘decadal exchange’ to collect, analyse and publish annual forecasts. Nine groups used the latest climate models to produce ten-year forecasts beginning in 2011. An analysis of the ensemble shows much the same pattern as Smith’s 2007 prediction: temperatures start out cool and then rise sharply, and within the next few years, barring something like a volcanic eruption, record temperatures seem all but inevitable.

    “I wouldn’t be keen to bet on that at the moment,” Smith says, “but I do think we’re going to make some good progress within a few years.”

    # # #

    No mention of UKMO’s Dec 2012 five-year forecast to 2017, but basically all these near-term model predictions “start out cool and then rise sharply” no matter what year they start them.

    I think they have a collective problem.

  20. Richard C (NZ) on August 29, 2013 at 2:42 pm said:

    Two GCM papers appear to be creating a “buzz” at present.

    First paper:

    ‘Recent global warming hiatus tied to equatorial Pacific surface cooling’

    Yu Kosaka and Shang-Ping Xie

    [Judith Curry] “….the same natural internal variability (primarily PDO) that is responsible for the pause is a major and likely dominant cause (at least at the 50% level) of the warming in the last quarter of the 20th century”

    [John Michael Wallace of the University of Washington] “It argues that not only could the current hiatus in the warming be due to natural causes: so also could the rapidity of the warming from the 1970s until the late 1990s”

    Second paper:

    ‘Overestimated global warming over the past 20 years’

    Opinion & Comment by Fyfe, Gillett and Zwiers

    [Judith Curry] “Their conclusion This difference might be explained by some combination of errors in external forcing, model response and internal climate variability is right on the money IMO”

    [The Hockey Schtick] “The authors falsify the models at a confidence level of 90%, and also find that there has been no statistically significant global warming for the past 20 years”

    # # #

    “Pause”, “hiatus”, and “divergence” now standard climatological terms in the literature apparently.

  21. Richard C (NZ) on August 29, 2013 at 4:36 pm said:

    Twitter / BigJoeBastardi: Now “climate researchers” will …

    Now “climate researchers” will want huge grants to tell us that when pdo warms in 20 years, warming will resume,after drop to late 70s temps

    Twitter / RyanMaue: Cold-phase of PDO means …

    Cold-phase of PDO means “hiatus/less/pause/plateau” of warming. We need a Nature article w/climate models to prove this?

    Twitter / RyanMaue: I already blamed lack of global …

    I already blamed lack of global TC activity from 2007-2012 on colder Pacific conditions. I thought it was so apparent to be non-publishable

    Twitter / BigJoeBastardi: The arrogance and ignorance …

    The arrogance and ignorance of these guys, now “discovering” what many have forecasted to happen due to cold PDO is stunning

  22. Richard C (NZ) on August 29, 2013 at 4:43 pm said:

    Tisdale re Kosaka and Xie:

    “Anyone with a little common sense who’s reading the abstract and the hype around the blogosphere and the Meehl et al papers will logically now be asking: if La Niña events can stop global warming, then how much do El Niño events contribute? 50%? The climate science community is actually hurting itself when they fail to answer the obvious questions.”

    ‘Global warming pause caused by La Nina’

    The researchers said similar decade-long pauses could occur in future, but the longer-term warming trend was “very likely to continue with greenhouse gas increases”.


    # # #

    Or “…the longer-term warming trend was “very likely to [turn to cooling] with [solar decreases]”

    It all depends on the (correct) attribution.

  23. Richard C (NZ) on August 29, 2013 at 5:10 pm said:

    Settled science: the heat is hiding in the ocean while the Pacific Ocean cools, it’s both “pretty straightforward” and “complicated”, and “a chicken vs. egg problem” dogs the finding: ‘Pacific Ocean cools, flattening global warming’

    “Really, this seems pretty straightforward. The climate is complicated, and natural variability can mask trends seen over century-long timescales,” says climate scientist David Easterling of the National Oceanic and Atmospheric Administration’s National Climatic Data Center in Asheville, N.C.

    MIT’s Susan Solomon is more skeptical of the Pacific Ocean cooling as an explanation for the flattening, saying “a chicken vs. egg problem” dogs the finding. “Did the sea surface temperatures cool on their own, or were they forced to do so by, for example, changes in volcanic or pollution aerosols, or something else? This paper can’t answer that question.”

  24. Richard C (NZ) on September 12, 2013 at 3:16 pm said:

    New paper finds ‘up to 30% discrepancy between modeled and observed solar energy absorbed by the atmosphere’

    More problems for the climate models: A paper published today in Geophysical Research Letters finds that there is “up to 30% discrepancy between the modeled and the observed solar energy absorbed by the atmosphere.” The authors attribute part of this large discrepancy, which would alone have a greater radiative forcing effect than all of the man-made CO2 in the atmosphere, to water vapor absorption in the near UV region [see hotlink], “But the magnitude of water vapor absorption in the near UV region at wavelengths shorter than 384 nm is not known.” The authors note, “Water vapor is [the most] important greenhouse gas in the earth’s atmosphere” and set out to discover [apparently for the first time] “The effect of the water vapor absorption in the 290-350 nm region on the modeled radiation flux at the ground level.”

    ‘The influence of water vapor absorption in the 290-350 nm region on solar radiance: Laboratory studies and model simulation’

    Juan Du, Li Huang, Qilong Min, Lei Zhu


    [1] Water vapor is an important greenhouse gas in the earth’s atmosphere. Absorption of the solar radiation by water vapor in the near UV region may partially account for the up to 30% discrepancy between the modeled and the observed solar energy absorbed by the atmosphere. But the magnitude of water vapor absorption in the near UV region at wavelengths shorter than 384 nm is not known. We have determined absorption cross sections of water vapor at 5 nm intervals in the 290-350 nm region, by using cavity ring-down spectroscopy. Water vapor cross section values range from 2.94 × 10^-24 to 2.13 × 10^-25 cm^2/molecule in the wavelength region studied. The effect of the water vapor absorption in the 290-350 nm region on the modeled radiation flux at the ground level has been evaluated using a radiative transfer model.
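    For context, an absorption cross section translates into atmospheric transmittance via the Beer-Lambert law. A minimal sketch, using the cross-section range quoted in the abstract; the water-vapour column amount is an illustrative round number (roughly 25 mm of precipitable water), not a figure from the paper:

    ```python
    import math

    def transmittance(sigma_cm2, column_molec_cm2):
        """Beer-Lambert law: T = exp(-sigma * N), with sigma the absorption
        cross section in cm^2/molecule and N the vertical column in
        molecules/cm^2."""
        tau = sigma_cm2 * column_molec_cm2   # optical depth
        return math.exp(-tau)

    # Cross-section range reported for the 290-350 nm region
    sigma_hi = 2.94e-24   # cm^2/molecule
    sigma_lo = 2.13e-25

    # Illustrative column: ~25 mm precipitable water is roughly
    # 8.3e22 molecules/cm^2 (assumed round figure)
    N = 8.3e22

    print(transmittance(sigma_hi, N))  # weak but non-negligible absorption
    print(transmittance(sigma_lo, N))  # nearly transparent at this wavelength
    ```

    Even the upper cross section gives only modest absorption per column, which is consistent with the effect having gone unmeasured for so long.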


  25. Richard C (NZ) on September 23, 2013 at 7:10 pm said:

    ‘Leaked SPM AR5: Multi-decadal trends’

    Data Comparisons Written by: lucia


    A way into the section, the draft states:

    “Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years……………….”


    Earlier in the draft we find:

    “There is very high confidence that climate models reproduce the observed large-scale patterns and multi-decadal trends in surface temperature, especially since the mid-20th century”

    So evidently the AR5 will admit that they have not reproduced observed warming in the past 10-12 years, speculate that it might be unpredictable climate variability, solar, volcanic or aerosol forcings, or possibly ‘too strong a response to increasing greenhouse-gas forcings’, which mostly amounts to excess climate sensitivity. That said, reading the leaked draft, I can’t help but wonder about their definition of “multi-decadal”. Generally, I assume that means “two or more decades”. So, I ran my script to get roughly 15, 20 and 25 year trends, comparing the observed earth trend to the spread in trends in the ‘AR5′ models forced using the rcp45 scenario.


    As you can see, while the 15 year trend (discussed in the leaked draft SPM) is just on the edge of the model spread, the longer-term trends fall outside it. So I would think if they don’t have great confidence in predicting 15 year trends, they would have even less confidence in predicting “multi-decadal” trends. But what do I know?

    Anyway, possibly this leaked draft is a hoax. We’ll see.
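    The comparison lucia describes boils down to fitting least-squares trends over trailing windows and checking whether the observed trend falls inside the model spread. A minimal sketch with synthetic series (the numbers below are made up for illustration, not real HadCRUT4 or CMIP5 output):

    ```python
    import numpy as np

    def trailing_trend(series, years):
        """Least-squares trend (units per year) over the last `years`
        points of an annual series."""
        y = np.asarray(series[-years:], dtype=float)
        x = np.arange(years, dtype=float)
        return np.polyfit(x, y, 1)[0]

    rng = np.random.default_rng(0)
    # Synthetic 'observed' annual anomalies: modest trend plus noise
    obs = 0.005 * np.arange(30) + rng.normal(0, 0.1, 30)
    # Synthetic model ensemble: steeper trends plus noise
    models = [0.02 * np.arange(30) + rng.normal(0, 0.1, 30) for _ in range(20)]

    for window in (15, 20, 25):
        obs_trend = trailing_trend(obs, window)
        model_trends = [trailing_trend(m, window) for m in models]
        lo, hi = np.percentile(model_trends, [5, 95])
        inside = lo <= obs_trend <= hi
        print(f"{window}-yr: obs {obs_trend:+.4f}, "
              f"models [{lo:+.4f}, {hi:+.4f}], inside={inside}")
    ```

    Longer windows average out the noise, so a genuine gap between observed and modelled trends becomes harder to hide as the window grows.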

  26. Richard C (NZ) on September 29, 2013 at 4:51 pm said:

    ‘Viewpoints: Reactions to the UN climate report’


    Professor John Shepherd, ocean & earth science, University of Southampton

    “….no-one ever claimed that climate models could predict all these decadal wiggles”

    # # #

    Successive decadal wiggles are what make multidecadal projections. And Kosaka and Xie (2013) modeled (in retrospect) the present decadal wiggle when constrained by natural oceanic variation.

    Therefore, natural variation (e.g. PDO/AMO) must be integrated in the models before realistic projections can be made – the sceptics’ argument for yonks.

  27. Richard C (NZ) on November 15, 2013 at 10:02 am said:

    ‘New paper finds simple laptop computer program reproduces the flawed climate projections of supercomputer climate models’

    The Hockey Schtick

    A new paper finds a simple climate model based on just three variables, “taking mere seconds to run on an ordinary laptop computer, comes very close to reproducing the results of the hugely complex climate models”, and “The [laptop computer] model was based on three key processes: how much energy carbon dioxide prevents from escaping to space (radiative forcing), the relationship between rate of warming and temperature, and how rapidly the ocean takes up heat (ocean thermal diffusivity).”

    Actually, you only need one independent variable [CO2 levels] to replicate what the highly complex supercomputer climate models output. This has been well demonstrated by Dr. Murry Salby in his lecture, which shows 1:1 agreement between the supercomputer-simulated global temperature and CO2 levels over the 21st century: [see graph]
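    The three-variable model described above is essentially a two-box energy-balance model: a mixed layer heated by forcing, damped by a feedback term, and leaking heat into a deep ocean. A minimal sketch (the parameter values are illustrative round numbers, not those of the paper):

    ```python
    def ebm(forcing, lam=1.3, gamma=0.7, c_mix=8.0, c_deep=100.0, dt=1.0):
        """Two-box energy-balance model.
        forcing       : radiative forcing per year, W/m^2
        lam           : climate feedback parameter, W/m^2/K
        gamma         : ocean heat uptake coefficient, W/m^2/K
        c_mix, c_deep : heat capacities of mixed layer / deep ocean, W yr/m^2/K
        Returns the surface temperature anomaly (K) for each year."""
        t_s = t_d = 0.0
        out = []
        for f in forcing:
            flux_down = gamma * (t_s - t_d)            # heat taken up by deep ocean
            t_s += dt * (f - lam * t_s - flux_down) / c_mix
            t_d += dt * flux_down / c_deep
            out.append(t_s)
        return out

    # Example: a step forcing of 3.7 W/m^2 (roughly 2xCO2) held for 200 years
    temps = ebm([3.7] * 200)
    print(temps[9], temps[199])   # fast initial response, slow creep toward ~F/lam
    ```

    The equilibrium response is just forcing divided by the feedback parameter, so with these toy numbers the model crawls toward about 2.8 K; everything interesting is in how slowly the ocean terms let it get there.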


  28. I remember, even more clearly now 20 years later, when I was studying Electrical Engineering at university and took Philosophy 101. We were the first year that had philosophy added to the course, with the intention of opening the eyes of potential engineers to a frame of reference for the decisions we might one day make as engineers.

    The topics I remember were:
    • Value judgments, how our personal values influence what’s right
    • The energy crisis – over-reliance on fossil fuels
    • Global warming, The Greenhouse effect, man-made CO2 emissions
    I very much enjoyed the Value Judgments topic. Why do we build bridges only 2.5 times stronger than their maximum loading? What makes your decisions and values more important than anyone else’s? Does it take into account the potential of natural disasters? Excellent stuff!
    The Energy Crisis topic didn’t make as much sense to me. So many loaded lessons. We weren’t philosophizing, we were being brainwashed. I could understand that fossil fuel is a finite resource, but I also knew, even as I was being brainwashed, that technology was constantly helping us find more and larger deposits. We were being told to use nuclear energy, solar, wind, tidal, etc. This only made me think: what is the environmental cost of those resources? Why aren’t we discussing those in philosophy rather than being brainwashed? I was sure, even without evidence, that the environmental cost of making solar panels was likely to be high. Not only were fossil fuels required to make them, but how much processing and environmental damage? I knew we weren’t being encouraged to think, but to agree. Anyway, so what if I don’t agree?
    Then the topic of Global Warming. I can’t tell you why, it must have been instinct, but the whole topic did not sit right with me. Maybe because it was so accusatory? It was our fault! And therefore it was our responsibility to fix it. Nope! At 20 years of age I hadn’t had anything to do with our current position. I knew it wasn’t me, and I was uncomfortable about the whole delivery of this brainwashing. I immediately agreed we probably should stop polluting the planet and reduce our use of fossil fuels, but the rest was rubbish.
    I was not happy, and there was a lack of scientific evidence. And the evidence that was produced? Well, it was a chart of the earth’s temperature related to the sun’s radiation. I don’t have a copy any longer, all these years later, and I can’t find it online. What I saw, at least in my mind, was a direct correlation between the sun’s output and the earth’s temperature. It was as clear as day to me. I wanted evidence to support my gut feeling, and that chart was the only piece that seemed to matter, but there was nothing else. Keep in mind the internet wasn’t what it is today. The best thing about the internet at that moment in time was the release of Netscape, so I got to see boobs on the computer. Yes indeed, remember the very first steps into the World Wide Web? I do.
    Needless to say, I failed philosophy. I would not spew their lies. I still say, I have never learned more than I did when I failed philosophy.
    All these years later I have found a growing movement of educated and intelligent people who share my suspicion towards the global warming lie. Ok, I’d better clarify that comment: the lie that global warming is due to man-made CO2 emissions.
    I encourage everyone to research this for themselves. It shouldn’t be a surprise that I refer you to a community I am involved with SuspiciousObservers.
    Here is Ben’s latest conference which is a great start and overview. Watch this if nothing else. Ben Davidson: The Variable Sun and Its Effects on Earth | EU2014
    Their website contains a wide variety of brilliant information including:
    • Starwater – water comes from stars and every planet has water
    • C(lie)mate – the global warming lie
    • Agenda 21
    Check out the daily SO news on YouTube.
    See weather presented from a space perspective.
    It’s bigger than you think.

  29. Richard C (NZ) on January 17, 2015 at 7:53 am said:

    ‘Does the Uptick in Global Surface Temperatures in 2014 Help the Growing Difference between Climate Models and Reality?’

    Bob Tisdale / January 16, 2015


    As illustrated and discussed, while global surface temperatures rose slightly in 2014, the minor uptick did little to overcome the growing difference between observed global surface temperature and the projections of global surface warming by the climate models used by the IPCC.

    In comments:

    January 16, 2015 at 9:06 am

    Quote from NOAA’s annual summary…
    “This is the first time since 1990 the high temperature record was broken in the absence of El Niño conditions at any time during the year in the central and eastern equatorial Pacific Ocean, as indicated by NOAA’s CPC Oceanic Niño Index. This phenomenon generally tends to increase global temperatures around the globe, yet conditions remained neutral in this region during the entire year and the globe reached record warmth despite this.”

    As much as this article has tried to imply this record year is not significant, the paragraph above would say otherwise.


    Bob Tisdale
    January 16, 2015 at 9:19 am

    NOAA is playing games, Simon. They well know that this year’s El Nino was not focused on the NINO3.4 region. The JMA uses the NINO3 region and they’ve stated that El Nino conditions have existed since June 2014:


  30. Richard C (NZ) on January 17, 2015 at 8:02 am said:

    ‘Peer-reviewed pocket-calculator climate model exposes serious errors in complex computer models and reveals that Man’s influence on the climate is negligible’

    Anthony Watts / January 16, 2015

    What went wrong?

    A major peer-reviewed climate physics paper in the first issue (January 2015: vol. 60 no. 1) of the prestigious Science Bulletin (formerly Chinese Science Bulletin), the journal of the Chinese Academy of Sciences and, as the Orient’s equivalent of Science or Nature, one of the world’s top six learned journals of science, exposes elementary but serious errors in the general-circulation models relied on by the UN’s climate panel, the IPCC. The errors were the reason for concern about Man’s effect on climate. Without them, there is no climate crisis.

    Thanks to the generosity of the Heartland Institute, the paper is open-access. It may be downloaded free from the journal’s website: click on “PDF” just above the abstract.


  31. Richard C (NZ) on January 17, 2015 at 9:40 am said:

    ‘Warmest year’, ‘pause’, and all that

    by Judith Curry, January 16, 2015


    El Nino?

    One of the key aspects of the hype about the ‘warmest year in 2014′ was that 2014 was not even an El Nino year. Well, there has been a great deal of discussion about this issue on the Tropical ListServ. Here is what I have taken away from that discussion:

    A global circulation response pattern to Pacific convection with many similarities to El Niño has in fact been present since at least June. Convection to the east of New Guinea is influencing zonal winds in the upper troposphere across the Pacific and Atlantic, looking similar to an El Nino circulation response.

    So, is it El Niño? Not quite, according to some conventional indices, but a broader physical definition might be needed to capture the different flavors of El Nino. A number of scientists are calling for modernizing the ENSO identification system. So I’m not sure how this event might eventually be identified, but for many practical purposes (i.e. weather forecasting), this event is behaving in many ways like an El Nino.

    What does this mean for interpreting the ‘almost warmest year’? Not much; I think ‘it must be AGW since 2014 wasn’t even an El Nino year’ is erroneous reasoning here.

    That said, there are definitely some unusual events in the North Pacific, including extreme warm anomalies in the mid-to-high latitudes and a positive value of the PDO.

    Bottom line

    Berkeley Earth sums it up well with this statement:

    “That is, of course, an indication that the Earth’s average temperature for the last decade has changed very little.”

    The key issue remains the growing discrepancy between the climate model projections and the observations: 2014 just made the discrepancy larger.

    Speculation about ‘warmest year’ and end of ‘pause’ implies a near term prediction of surface temperatures – that they will be warmer. I’ve made my projection – global surface temperatures will remain mostly flat for at least another decade. However, I’m not willing to place much $$ on that bet, since I suspect that Mother Nature will manage to surprise us. (I will be particularly surprised if the rate of warming in the next decade is at the levels expected by the IPCC.)

  32. Richard C (NZ) on February 3, 2015 at 9:26 am said:

    ‘Questioning the robustness of the climate modeling paradigm’

    by Judith Curry, February 2, 2015

    Are climate models the best tools? A recent Ph.D. thesis from The Netherlands provides strong arguments for ‘no’.

  33. Richard C (NZ) on February 9, 2015 at 8:16 pm said:

    Remote Sensing Systems (RSS) – Climate Analysis

    Atmospheric Temperature

    […] The troposphere has not warmed as fast as almost all climate models predict.

    To illustrate this last problem, we show several plots below. Each of these plots has a time series of TLT temperature anomalies using a reference period of 1979-2008. In each plot, the thick black line is the measured data from RSS V3.3 MSU/AMSU Temperatures. The yellow band shows the 5% to 95% envelope for the results of 33 CMIP-5 model simulations (19 different models, many with multiple realizations) that are intended to simulate Earth’s Climate over the 20th Century. For the time period before 2005, the models were forced with historical values of greenhouse gases, volcanic aerosols, and solar output. After 2005, estimated projections of these forcings were used. If the models, as a whole, were doing an acceptable job of simulating the past, then the observations would mostly lie within the yellow band. For the first two plots (Fig. 1 and Fig 2), showing global averages and tropical averages, this is not the case. Only for the far northern latitudes, as shown in Fig. 3, are the observations within the range of model predictions.

    Ouch! From RSS no less.
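    The RSS test quoted above — does the observed series stay inside the 5% to 95% ensemble envelope? — is easy to sketch. All the numbers below are synthetic stand-ins for illustration, not the actual RSS TLT or CMIP-5 series:

    ```python
    import numpy as np

    def fraction_inside(obs, ensemble, lo_pct=5, hi_pct=95):
        """Fraction of time steps at which obs lies within the
        [lo_pct, hi_pct] percentile envelope of a model ensemble.
        obs: shape (T,); ensemble: shape (n_models, T)."""
        lo = np.percentile(ensemble, lo_pct, axis=0)
        hi = np.percentile(ensemble, hi_pct, axis=0)
        return np.mean((obs >= lo) & (obs <= hi))

    rng = np.random.default_rng(1)
    T = 40
    # Synthetic warming ensemble (33 runs) vs a flatter 'observed' series
    ensemble = 0.02 * np.arange(T) + rng.normal(0, 0.1, (33, T))
    obs_flat = 0.005 * np.arange(T) + rng.normal(0, 0.05, T)

    print(fraction_inside(obs_flat, ensemble))  # the flatter series falls below
                                                # the envelope in later years
    ```

    If the models as a whole were doing an acceptable job, the statistic would sit near 1; a trend mismatch shows up as the observations sliding out of the band as time goes on.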

  34. Richard C (NZ) on February 15, 2015 at 7:19 pm said:

    ‘Winters in Boston Becoming Drier’

    Written by Dr. Roy Spencer on 13 February 2015.

    Much has been said in recent weeks about how bigger snowstorms in Boston are (supposedly) just what climate models have predicted. “Global warming” is putting more water vapor into the air, leading to more “fuel” for winter storms and more winter precipitation.

    While this general trend is seen in climate models for global average conditions (warming leads to more precipitation), what do the models really predict for Boston?

    And what has actually been observed in Boston?

    The following plot shows that the observed total January precipitation in Boston has actually decreased since the 1930s, contrary to the average “projections” (in reality, hindcasts) from a total of 42 climate models, at the closest model gridpoint to Boston:

    [See graph]

    Note that even the forecast increase in January precipitation is so small that it probably would never be noticed if it actually occurred.

    During the same period, January temperatures in Boston have seen a statistically insignificant +0.1 deg. F per decade warming, in contrast to 2.5 times faster average warming produced by the 42 climate models:

    [See graph]

    What is very evident is the huge amount of natural variability from year to year, as Bostonians are well aware.

    It’s just weather, folks. Blaming everything on “climate change” is just plain lazy.

  35. Richard C (NZ) on February 25, 2015 at 9:18 pm said:

    ‘Are Climate Modelers Scientists?’

    by Pat Frank February 24, 2015

    For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice and rejected twice from two leading climate journals, for a total of four rejections. All on the advice of nine of ten reviewers. More on that below.

    The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.
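    The general idea of propagating a per-step calibration error through an iterated projection can be sketched generically. This assumes a constant, uncorrelated per-step error accumulating in quadrature — illustrative assumptions and numbers, not Frank’s actual PWM analysis:

    ```python
    import math

    def propagate_uncertainty(n_steps, step_error):
        """If each annual step of an iterated projection carries an
        independent uncertainty of +/- step_error (K), the accumulated
        uncertainty after k steps grows in quadrature: sqrt(k) * step_error."""
        return [math.sqrt(k) * step_error for k in range(1, n_steps + 1)]

    # Illustrative: +/-0.1 K per-step error over a 100-year projection
    envelope = propagate_uncertainty(100, 0.1)
    print(envelope[0], envelope[-1])   # 0.1 K after year 1, 1.0 K after year 100
    ```

    The point of such an exercise is that even a modest per-step error compounds: the uncertainty envelope keeps widening with projection length regardless of how smooth the central projection looks.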


    I will give examples of all of the following concerning climate modelers:

    They neither respect nor understand the distinction between accuracy and precision.
    They understand nothing of the meaning or method of propagated error.
    They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
    They don’t understand the meaning of physical error.
    They don’t understand the importance of a unique result.

    Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.

    The incredibleness that follows is verbatim reviewer transcript; quoted in italics. Every idea below is presented as the reviewer meant it. No quotes are contextually deprived, and none has been truncated into something different than the reviewer meant.

    And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.


    In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.

    Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.

    The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.

    They should be nowhere near important discussions or decisions concerning science-based social or civil policies.

  36. Richard C (NZ) on February 27, 2015 at 7:05 pm said:

    ‘On Steinman et al. (2015) – Michael Mann and Company Redefine Multidecadal Variability And Wind Up Illustrating Climate Model Failings’

    Bob Tisdale / 4 hours ago February 26, 2015

    Some good comments too e.g. Dr Norman Page:

    “That the Steinman et al paper got through peer review for Science Magazine says much about the current state of establishment science. However in a short comment on the paper in the same Science issue Ben Booth of the Hadley center does sound a refreshingly cautionary ( for Science Mag and Hadley ) note saying that the paper is only useful if the current models accurately represent both the external drivers of past climate and the climate responses to them and that there is reason to be cautious in both of these areas. This comment is an encouraging sign that empirical reality may be finally making an impression on the establishment consciousness.”

  37. Richard C (NZ) on March 24, 2015 at 9:24 am said:

    INMCM4 (Russian Academy of Sciences) in Judith Curry’s post:

    ‘Climate sensitivity: lopping off the fat tail’

    There is one climate model that falls within the range of the observational estimates: INMCM4 (Russian). I have not looked at this model, but on a previous thread RonC makes the following comments.

    “On a previous thread, I showed how one CMIP5 model produced historical temperature trends closely comparable to HADCRUT4. That same model, INMCM4, was also closest to Berkeley Earth and RSS series.

    Curious about what makes this model different from the others, I consulted several comparative surveys of CMIP5 models. There appear to be 3 features of INMCM4 that differentiate it from the others.”

    1. INMCM4 has the lowest CO2 forcing response, at 4.1 K for 4×CO2. That is 37% lower than the multi-model mean.

    2. INMCM4 has by far the highest climate system inertia: deep ocean heat capacity in INMCM4 is 317 W yr m^-2 K^-1, 200% of the mean (which excluded INMCM4 because it was such an outlier).

    3. INMCM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa), and is biased low above that. Most others are biased high.

    So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback.

    Definitely worth taking a closer look at this model; it seems genuinely different from the others.

    And, I suggest, throw out all the others.
