Ask the research team: NASA’s climate data

I occasionally have my research team answer emails from people seeking clarity on different energy issues. From time to time, I’m going to feature some of the best questions and answers on this blog.

Question: How would you respond to the claim that NASA’s data on climate demonstrates that warming is clearly accelerating?

Answer (from CIP’s head of research, Steffen Henne): The short answer: to the unbiased observer there is no acceleration in the data, with the exception of the endpoint of the trend, which is the peak of a natural El Niño event. The models that predict significant warming have also overpredicted the warming to this point, which does not support the notion of particularly rapid or even accelerating warming.

Let me elaborate.

Is the data reliable?

Regarding the instrumental record, it is noteworthy that the NASA page uses the GISTEMP temperature record (produced by NASA’s Goddard Institute for Space Studies), which skeptics regard as one of the most biased records. The reason is that NASA seems to show the most “alarming” trends in areas where the data is subject to interpretation, for example in the polar regions, where no direct observations are available.

There are two other data sets we can look at that provide some additional information. One is the UK Met Office’s HadCRUT data, regarded by many as the most comprehensive surface temperature data; the other is satellite data of the lower troposphere (the lowest layer of the atmosphere) as interpreted by the team at the University of Alabama in Huntsville. (Spencer and Christy, who maintain the data set, are “skeptics,” but their data is in good agreement with the competing Remote Sensing Systems data product, which is maintained by “warmists”; both groups depend largely on the same satellite systems for the raw data.)

What’s the trend?

There is no acceleration apparent in the temperature anomaly curves, although cherry-picking the endpoint of the data can create a distorted impression. Note that the NASA data and the HadCRUT data end near the peak of the latest strong El Niño, a natural climate phenomenon that creates a temporary peak in global surface temperatures.

If we put that aside, all long-term data seem to agree on a mild warming trend starting in the early 20th century (before CO2 emissions really took off), and the trend from about 1915 to the 1940s appears similar in magnitude to the recent trend since the 1970s. Interpreting recent warming as acceleration driven by human emissions seems unfounded, especially considering that the natural climate cycles driving up- and down-swings over decadal periods can mask long-term trends.

Remarkably, the lower-troposphere temperature anomaly at this point (late 2018) is below the 1998 El Niño peak. The same might be true for the surface data, but we will only get the full picture for this year in early 2019. The steep upswing toward the end of the data appears to reflect the temporary El Niño and is not a good indicator of the longer-term trend.
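To make the endpoint issue concrete, here is a minimal sketch in Python with made-up anomaly numbers (not real GISTEMP, HadCRUT, or UAH data). The same series yields a steeper fitted trend when the fit ends at an El Niño-like spike than when it includes the subsequent decline:

```python
# Minimal sketch with synthetic data: how the choice of endpoint changes
# an estimated warming trend. The anomaly values below are invented for
# illustration; they are not real temperature data.
import numpy as np

years = np.arange(2000, 2019)
# Synthetic anomalies (deg C): a mild underlying trend plus an
# El Nino-like spike in 2015-2016 and a decline afterwards.
anoms = 0.01 * (years - 2000) + np.array([
    0.00, 0.02, -0.01, 0.03, 0.01, 0.02, 0.00, 0.03, -0.02, 0.04,
    0.05, 0.01, 0.03, 0.04, 0.06, 0.25, 0.30, 0.05, 0.02])

def trend_per_decade(yrs, vals):
    """Ordinary least-squares slope, scaled to deg C per decade."""
    slope = np.polyfit(yrs, vals, 1)[0]
    return 10 * slope

# Fitting only up to the spike inflates the trend relative to the
# full series that includes the post-spike decline:
print(trend_per_decade(years[:17], anoms[:17]))  # ends at the 2016 peak
print(trend_per_decade(years, anoms))            # includes 2017-2018
```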

What’s the future? The track record of climate models

More relevant than the trend so far, which has not produced alarming climate impacts up to this point, is what the future holds for the climate. For that we need to look at what is known about the climate system and what we can predict, and with what level of certainty.

Since model projections of future warming are the foundation of all climate policy debate and represent the current understanding of the complex climate system, we should try to understand what models predict and how good they are at predicting it.

In my view, climate models are so divergent in their predictions and so off the mark even for a single climate variable, average surface temperature, that we cannot rely on them as a starting point for predictions of future climate livability or for making investment decisions (decisions that might impact future generations much more than the actual climate of the future).

The best discussion I have seen on this so far can be found here. The source website (Watts Up With That) is often dismissed by catastrophists, but the author links to all the relevant arguments of the other side and explains everything in enough detail for readers to understand the opposing positions. Everyone can judge for themselves.

What I found noteworthy is that the opposing expert cherry-picked the endpoint of the climate prediction at the peak of the recent El Niño and retroactively changed the original prediction to better fit the data. Even then, he ignores the multi-year deviation of the model average from the actual temperature. It is also likely that, once the recent El Niño peak subsides, even this representation will have the models overshooting the data again.

Problems with existing climate models and how we use them

Unfortunately, both sides in this case argue within a framework that uses an “ensemble mean” (the average of multiple models and model runs based on different assumptions) of models that have significant discrepancies. That seems highly questionable, since the models represent stand-alone predictions that have to stand or fall on their own premises and their own explanations of how the climate works.

As an example, here is a plot of 39 model runs (from CMIP5, the Coupled Model Intercomparison Project, which compares climate model projections) in the most extreme emission scenario considered by the IPCC:

All of these model runs are equally valid stand-alone projections but come to vastly different conclusions about the future state of the climate (as well as the recent past) under the same CO2 emission scenario. Why is that problematic?

To use a simplified analogy, suppose the CDC wanted to predict the deaths from a virus spreading in a population and used two models that differ in their assumptions and in how they model the mechanisms by which the disease spreads. One model predicts 10 deaths, the other predicts 50. Now the CDC “averages” the predictions to 30 deaths, which happens to be close to the actual death toll of 28. Everyone observing this would conclude that both models are wrong and probably useless for future predictions, especially since we don’t know WHY both models were wrong. How can we be sure that the average of two incorrect models is a good prediction for the next outbreak? It probably isn’t.
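To put the analogy in concrete terms, here is a toy sketch using the hypothetical numbers above:

```python
# A toy version of the analogy above, using the hypothetical numbers
# from the text: two individually wrong models whose average happens
# to land near the observed value.
model_a = 10   # predicted deaths, model A
model_b = 50   # predicted deaths, model B
actual = 28    # observed deaths

ensemble_mean = (model_a + model_b) / 2                      # 30.0
print("model A error:", abs(model_a - actual))               # 18
print("model B error:", abs(model_b - actual))               # 22
print("ensemble-mean error:", abs(ensemble_mean - actual))   # 2.0

# The mean looks skillful, yet neither model captured the mechanism,
# so there is no reason to expect the average to work next time.
```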

In addition to the discrepancy between models and observations, we should note that the model predictions did not forecast the El Niño spike, so even if they went right through the data points, the predicted mechanism would still be wrong. The models show low skill even at curve-fitting past climates, where the actual temperature is already known.

The spread of predictions from various models is another indicator of unsolved problems with climate complexity. How can different models predict vastly different temperature changes, from 3°C to 6°C of warming, under the same emission scenario if human CO2 is the only thing that counts for warming and everything else is allegedly well understood?
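As an illustration of that spread, here is a minimal sketch with synthetic numbers (not actual CMIP5 output) showing how widely divergent but equally valid runs collapse into a single ensemble mean that no individual model may actually predict:

```python
# Minimal sketch with synthetic numbers (not CMIP5 output): divergent
# model runs collapse into one ensemble mean that hides their spread.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 39
# Hypothetical end-of-century warming (deg C) per run, one scenario:
warming_2100 = rng.uniform(3.0, 6.0, size=n_runs)

print(f"min:  {warming_2100.min():.1f} C")
print(f"max:  {warming_2100.max():.1f} C")
print(f"mean: {warming_2100.mean():.1f} C")
# The mean is a single number; the 3 C spread between runs reflects
# real disagreement about how the climate system works.
```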

What should we do?

In my view there should definitely be a vigorous debate about all this.

For the debate about climate impacts and policies, I would recommend the testimony before Congress by the Manhattan Institute’s Oren Cass, which lays bare the flaws in the economic projections used to justify drastic policy measures today. He also wrote an article about this in the Wall Street Journal.