Roy Spencer on Models and Observations

A few days ago, Dr. Roy Spencer wrote a piece for the Heritage Foundation titled "Global Warming: Observations vs. Climate Models" (PDF), essentially arguing that models show too much warming compared to observations, and that if we stick to observations, "global warming offers no justification for carbon-based regulation." He frames his argument around three questions:
  1. Is recent warming of the climate system materially attributable to anthropogenic greenhouse gas emissions, as is usually claimed?
  2. Is the rate of observed warming close to what computer climate models—used to guide public policy—show?
  3. Has the observed rate of warming been sufficient to justify alarm and extensive regulation of CO2 emissions?
We should keep in mind that this is a political document, and Spencer has carefully selected what he says and doesn't say to fit the Heritage Foundation's political agenda. In the interest of keeping my response to this "report" brief, though, I'm only going to cover the most significant claims he uses to support his conclusions. I'll organize them under the headings of Spencer's three questions.

1. Attribution of Global Warming to Human Activity

Here Spencer correctly argues that changes in GMST are caused by changes in the Earth's energy imbalance (EEI), and he observes that satellites can only measure the absolute value of the Earth's energy flows to within a few W/m^2, while current rates of ocean warming imply that EEI is only about 0.6 W/m^2. He argues that scientists do not understand the natural causes of climate change, and so they cannot be certain that current warming is caused by the burning of fossil fuels. He then claims that scientists simply assume that the Earth's climate, absent human activity, would be in equilibrium - "that the rate of energy input into the climate system from the sun is, on average, exactly equal to the rate of energy loss to outer space from IR radiation when averaged globally and over many years." Scientists, he says, just assume that the "small roughly 0.6 W/m^2 imbalance" is "entirely blamed on fossil fuels."

There's a lot within this portion of his argument that is true, but some parts are bafflingly strange. Scientists do not simply assume that the climate would be in equilibrium apart from human activity and then assume that humans are responsible for all perturbations of the Earth's energy balance. There are many paleoclimate papers that spend a great deal of effort quantifying the forcings that upset the Earth's energy balance, as well as the feedbacks that amplify or dampen the effects of those forcings. IPCC reports have also consistently shown the natural carbon cycle to be a net sink - natural processes remove slightly more CO2 from the atmosphere than they contribute to it. So without human emissions flipping the carbon cycle into a net source of CO2, we would likely be cooling. Scientists have also quantified solar forcings. Not every 11-year solar cycle is equal, so there are slight changes in solar output on multidecadal time scales due to differences between cycles; solar activity reached a high point in the 1960s and has been steady to declining ever since. Scientists even evaluate the impact of volcanic eruptions, which almost always cause short-term cooling for a couple of years. In fact, scientists have shown not that we would be in "energy equilibrium" without human carbon emissions, but that slightly more energy would be escaping the climate system than entering it - we should be slowly cooling.

Anything that changes CO2 concentrations will change the Earth's energy imbalance. Doubling atmospheric CO2 reduces the outgoing energy flow by ~3.7 W/m^2 assessed at the tropopause. Human CO2 emissions since the industrial revolution have increased CO2 concentrations by 50%, so we can estimate that these emissions have caused a 5.35*ln(1.5) = 2.2 W/m^2 decrease in outgoing LW radiation, equivalent to about 1% of the ~240 W/m^2 of absorbed solar flux. The Earth's surface must warm in response until the outgoing energy flux returns to 240 W/m^2. We have witnessed about a 1.2 C increase in GMST since the 1850-1900 mean, but EEI has not returned to 0; in fact it's increasing, and Spencer's estimate of 0.6 W/m^2 appears to be a bit low. One recent study placed it at 0.77 ± 0.06 W/m^2.[1] The small confidence interval is telling. Even though satellite data has a large margin of error in assessing the absolute value of energy flows in and out, there is good agreement between satellite observations and in situ measurements, so we have pretty good data on what EEI currently is and how quickly it's changing. Loeb's estimate is an average over 2005 to 2019, but the trend shows that EEI is still increasing and is currently ~1 W/m^2. This necessarily means that there is more warming built into the current state of our climate. The planet must warm until EEI returns to 0. We'll talk about how much in a little bit.
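To make the arithmetic explicit, here's a minimal sketch using the simplified logarithmic forcing expression implied above (ΔF ≈ 5.35·ln(C/C0)); the coefficient and the ~240 W/m^2 absorbed solar flux are the values used in the text, and the 280→420 ppm ratio is just the 50% increase mentioned above.

```python
import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0)  [W/m^2]
def co2_forcing(c_ratio):
    return 5.35 * math.log(c_ratio)

print(round(co2_forcing(2.0), 2))   # doubling CO2: ~3.7 W/m^2
dF = co2_forcing(420 / 280)         # the ~50% increase since pre-industrial
print(round(dF, 2))                 # ~2.2 W/m^2
print(round(dF / 240 * 100, 1))     # ~0.9% of the ~240 W/m^2 absorbed solar flux
```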
Here's an update of the EEI data that continues through mid-2023. The average EEI for the last decade probably exceeds 1 W/m^2.


It's safe to say that scientists do not simply assume that human activity is responsible for this warming. Scientists go through the trouble of quantifying natural and human forcings, and natural forcings have been negligible (some are pushing us towards cooling), while anthropogenic forcings have been large. To claim that a large fraction of current warming is caused by natural forcings, you'd have to find a natural forcing (or group of natural forcings) that can be shown to be large. To be responsible for 50% of current warming, you'd need natural forcings totaling about 2.2 W/m^2 (roughly equal to human forcings), but Spencer can't identify any natural forcings that could reasonably be quantified anywhere near that large. This is the actual reason why consensus documents like the IPCC assessment reports have concluded (not assumed) that human activity is responsible for virtually all the warming above the 1850-1900 mean.[2]

2. CMIP6 Climate Models Produce Too Much Warming

Spencer's second point is that climate models produce too much warming compared to observations. This is a somewhat trivial point, even if superficially true. It's been well-publicized that the CMIP6 suite of models contained a subset that produced too much warming,[3] mostly due to early attempts at more sophisticated simulation of clouds.[4] This tendency to run hot was recognized early and has received considerable attention. The IPCC limited the influence of these high-sensitivity models in the AR6 report, such that projections in AR6 and recommendations to policy makers were built on a subset of the CMIP6 suite. But because there was so much interest in these high-sensitivity models, runs from them also appeared in the KNMI Climate Explorer, which Spencer used to generate his model-observation comparisons.

Because it is well known that 1) some CMIP6 models produced sensitivities that were too high and 2) the influence of these models is limited in documents written for policy recommendations (like AR6 and NCA5), any comparison between models and observations should be absolutely clear about what is being compared to what and how. Spencer should have provided more details about the 36 model runs he used, including the 95% confidence envelope, the SSP scenario used, and the average ECS or TCR of the runs. He should also have named the 5 observational datasets he used. Spencer did none of this.

There are well over 100 models in the CMIP6 suite, and there are 40 in the KNMI Climate Explorer. We don't know how Spencer chose these 36 model runs or to what extent they were biased by the high-sensitivity models that were not used for policy recommendations. If the average ECS for these runs was significantly higher than 3°C, or if they used SSP5-8.5, for instance, we'd expect the mean to show more warming than observations. And what were the 5 observational datasets? Why didn't Spencer include 2023? Spencer says he accessed the KNMI Climate Explorer on January 10, and all the major surface datasets published their December 2023 results on January 12. I see no reason why this couldn't have been updated through 2023.

And it's very puzzling to me that Spencer says both that these data were set to a 1991-2020 average and that they were adjusted so that the trendlines converge in 1979. I don't think both claims can be true of both datasets. From the looks of it, at least the models were not set to a 1991-2020 baseline. Perhaps the observations were, and the models were then adjusted to meet the trendline in 1979. But this is not a good way to align models with observations, and both Spencer and Christy have been criticized for this practice since 2016 or so. So what can we say from this? Only that it's possible to choose 36 model runs that superficially show more warming than 5 selected datasets (whatever they are) if you start in 1979 and force the trend lines to converge there.
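The choice of alignment method matters more than it might seem. Below is a minimal sketch with synthetic, noise-free trend lines (purely illustrative; this is not Spencer's code or data) contrasting the two approaches: anchoring each series to its own 1991-2020 mean versus shifting the series so their linear trendlines intersect in 1979. Pinning the trendlines together at the start of the record pushes all of the divergence to the end of the record and inflates the apparent model-observation gap.

```python
import numpy as np

years = np.arange(1979, 2024)

# Synthetic anomaly series (deg C) -- illustrative trends only,
# not real model output or observations.
model = 0.025 * (years - 1979)   # "model" warming at 0.25 C/decade
obs = 0.018 * (years - 1979)     # "observations" warming at 0.18 C/decade

def baseline_align(series, yrs, start=1991, end=2020):
    """Subtract the series' own mean over a reference period (standard practice)."""
    mask = (yrs >= start) & (yrs <= end)
    return series - series[mask].mean()

def trendline_align(series, yrs, anchor=1979):
    """Shift the series so its OLS trendline passes through zero at `anchor`."""
    slope, intercept = np.polyfit(yrs, series, 1)
    return series - (slope * anchor + intercept)

# Re-baselining to 1991-2020 centers both series over the same recent period;
# pinning the trendlines together in 1979 pushes all divergence to the end,
# so the apparent end-of-record gap is much larger.
gap_baseline = baseline_align(model, years)[-1] - baseline_align(obs, years)[-1]
gap_trendline = trendline_align(model, years)[-1] - trendline_align(obs, years)[-1]
print(round(gap_baseline, 2), round(gap_trendline, 2))   # ~0.12 vs ~0.31
```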

Spencer then compares 36 models to 5 observational datasets for trends in summer temperatures in the U.S. corn belt. If you weren't thinking cherry picking was going on before, I suspect you are now. Spencer is absolutely right that models produce too much warming compared to observations in this particular <1% of the globe during 25% of the year, and this is something that has been discussed in multiple peer-reviewed studies.[5][6][7][8] The adage is true: "all models are wrong; some are useful." Here we can certainly point to an area in which models are wrong. But we can also point to the literature describing this, so that policy recommendations are not overly influenced by these model results. I could see a legitimate point here that we need to make sure the literature gets into the hands of policy makers with corrections for areas where models aren't performing well. But Spencer's concern seems to be different. He's arguing that models have sensitivities that are too high and that observational data shows warming to be "benign." His choice of comparisons does not make that case.

Spencer's third example compares modeled lower-troposphere temperatures to observations. Here he uses a paper from McKitrick and Christy that used 38 model runs and several observational datasets. The spreadsheet in the supplementary material for that paper showed these model runs had an average ECS of 3.83°C (among the runs where ECS was noted). The paper also helpfully included time series for each of the model runs. Spencer didn't show the individual runs or note the average ECS. He also didn't name the three sets of 3 observational datasets used for the comparison. And still the trendlines are aligned to converge in 1979. This is not a helpful graph. While we can say that the mean of the selected 38 models shows more warming than the mean of 3 groups of 3 datasets, we still don't have evidence that the observational data is inconsistent with an ECS near 3°C. He didn't make that case.


Zeke Hausfather published a model-observation comparison that I think avoids the problems I have with Spencer's graphs. Notice that Hausfather acknowledges the warm bias in the CMIP6 suite and uses the models in a way that is more consistent with how they are used in policy recommendations: he limited the comparison to models with TCR values within the likely range from the IPCC's AR6 report. Hausfather also includes the uncertainty envelope around the model average, and he plots the individual datasets so you know exactly what you're looking at. Both the models and observations are plotted on a 1981-2010 baseline rather than having their trends forced to meet at an arbitrarily chosen year. This shows that models with a TCR near 2°C (ECS near 3°C) are doing a pretty good job.


Hausfather also included a comparison of all CMIP6 models with observations. You can see that in this graph the models are not doing quite as well (though not terribly), and you can see the influence of the high-sensitivity models on the full CMIP6 suite.

3. What Do Current Warming Rates Show?

In his third section, Spencer argues that ECS is likely lower than the vast majority of the estimates in the scientific literature. The IPCC assesses the consensus range for ECS to be 2.5°C - 4.0°C of warming for 2xCO2. Spencer highlights two studies: one from Lewis and Curry from 2018 (LC18)[9] that estimated ECS to be between 1.5°C and 1.8°C, and another from Spencer and Christy (SC23) that arrived at a range of 1.5°C to 2.2°C. Both of these studies are outliers in the scientific literature, and I personally don't think it's wise to base policy recommendations on highly optimistic outliers. This post is already too long for me to evaluate these papers on their merits, but I did do a post on an update to the LC18 paper that found ECS to be within the IPCC range, near 3°C. The most comprehensive study to date puts the 5% - 95% range for ECS between 2.3°C and 4.7°C,[10] and this range is consistent with most ECS estimates and with the IPCC. But for the sake of argument, let's assume an ECS of 1.8°C (the middle of the SC23 range) and consider whether Spencer's conclusions at the end of his report are plausible.

Spencer's main point here is that nature will become increasingly efficient at removing our fossil fuel emissions in the future, such that we may never double atmospheric CO2 concentrations, and that ECS is low enough that we may never cross the IPCC's targets for when global warming becomes dangerous: "1.5°C of future warming above pre-industrial times is often cited as a goal for a safe limit to future warming. As a result, special energy policies may not be needed to limit future warming to relatively benign levels." Perhaps the most implausible part of Spencer's report is his reasoning for why we may never double atmospheric CO2 concentrations. He writes, "might the Earth’s atmosphere surpass 2xCO2 in the future? This depends on highly uncertain projections of future usage of fossil fuels. The good news is that nature is quite efficient at removing 'excess' CO2 from the atmosphere, and, depending on future rates of fossil fuel burning, it turns out that the atmosphere might not even reach 2xCO2." In other words, Spencer is arguing that nature will remove our future emissions rapidly enough to keep us from reaching 2xCO2 (560 ppm), and because sensitivity is low (~1.8°C), we may never cross the 1.5°C threshold even if we never enact special energy policies.

But is it plausible that we will never reach 2xCO2 even if we continue to burn fossil fuels indefinitely? Absolutely not. This claim comes from a simple model that Spencer set up and shared on his blog and recently had published in a predatory Opast journal. The "paper" builds a model on the assumption that "the yearly CO2 sink rate is found to be 2.02% of the atmospheric excess above 293.6 ppm." Spencer's model is really just a curve-fitting exercise that produces results wildly at odds with observations. I reviewed a similar paper making a similar argument (in fact, I believe that paper was inspired by Spencer's blog posts), so I won't go into detail about Spencer's version here. Suffice it to say that the observational data from the annually published carbon budgets[11] show that while natural sinks are taking up more carbon, the airborne fraction of our emissions is increasing, not decreasing. And Spencer's model doesn't pass even the simplest of sniff tests. We know from paleoclimate history that CO2 concentrations have risen much higher than today's at emission rates much slower than today's. If Spencer's model were accurate, this would be impossible, and we wouldn't be able to detect any large increases in atmospheric CO2 concentrations driven by emission rates slower than what we are experiencing.[12][13][14]
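To see why a model with that structure produces Spencer's conclusion, here's a minimal sketch (my reconstruction from the quoted sentence, not Spencer's actual code; constant future emissions at roughly today's rate are my illustrative assumption). A sink that always removes a fixed 2.02% of the excess above 293.6 ppm forces concentrations toward a plateau where uptake balances emissions, which lands just under 560 ppm at current emission rates - and the rising airborne fraction in the observed carbon budgets is exactly the behavior this structure cannot reproduce.

```python
# Sketch of the fixed-percentage sink model Spencer describes (not his code).
# Assumption for illustration: emissions held constant at ~36 GtCO2/yr.

PPM_PER_GTCO2 = 1 / 7.81   # ~7.81 GtCO2 of emissions per ppm of atmospheric CO2
SINK_RATE = 0.0202         # fraction of the "excess" removed each year (Spencer's fit)
BASELINE = 293.6           # ppm, the equilibrium level in Spencer's fit

def project(c0=420.0, emissions_gtco2=36.0, years=300):
    c = c0
    for _ in range(years):
        c -= SINK_RATE * (c - BASELINE)          # sinks remove a fixed % of the excess
        c += emissions_gtco2 * PPM_PER_GTCO2     # this year's emissions are added
    return c

# The structure guarantees a plateau where uptake balances emissions:
#   C_eq = 293.6 + (36 / 7.81) / 0.0202  ~  522 ppm, i.e. below 2xCO2 (560 ppm)
print(round(project(), 1))
```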

Natural Sinks take up a smaller fraction of emissions with continued emissions

The more CO2 we emit, the larger the fraction of our emissions that will remain in the atmosphere, and without drastic cuts in emissions, we will continue to drive CO2 concentrations upward. If we assume a continued rate of 36 GtCO2 annually and a 50% airborne fraction (which is optimistic), we'll add 18 GtCO2 to the atmosphere every year, or about 18/7.81 ≈ 2.3 ppm/yr. We're currently at 420 ppm, so at current rates we'd reach 560 ppm in 140 ppm ÷ 2.3 ppm/yr ≈ 61 years, or by ~2085. That's conservative, because ocean and land sinks will become less efficient as CO2 increases, and there's evidence that the airborne fraction of our emissions is already increasing.
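That back-of-the-envelope calculation, spelled out (the constant emission rate and fixed 50% airborne fraction are the stated assumptions, so this is an optimistic sketch, not a projection):

```python
EMISSIONS_GTCO2_PER_YR = 36.0   # assumed constant annual emissions
AIRBORNE_FRACTION = 0.5         # optimistic: fraction of emissions staying airborne
GTCO2_PER_PPM = 7.81            # ~7.81 GtCO2 per ppm of atmospheric CO2

ppm_per_year = EMISSIONS_GTCO2_PER_YR * AIRBORNE_FRACTION / GTCO2_PER_PPM
years_to_560 = (560 - 420) / ppm_per_year

print(round(ppm_per_year, 1))   # ~2.3 ppm/yr
print(round(years_to_560))      # ~61 years, i.e. roughly 2085
```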

And is it plausible that we can avoid crossing the 1.5°C target? Not at all. As of the end of 2023, the three major GMST datasets that go back to 1850 show that the long-term trend in global warming is approaching 1.3°C while the warming rate is about 0.22 C/decade, meaning we're on pace to cross the 1.5°C target in about a decade and the 2.0°C target in the 2050s.
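That pace is simple linear extrapolation from the two numbers just quoted (a sketch, not a projection; the trend value and warming rate are the ones in the text):

```python
current_warming = 1.3   # deg C above the 1850-1900 mean (long-term trend, end of 2023)
rate_per_decade = 0.22  # deg C per decade

years_to_1p5 = (1.5 - current_warming) / rate_per_decade * 10
years_to_2p0 = (2.0 - current_warming) / rate_per_decade * 10

print(round(years_to_1p5))   # ~9 years, i.e. in about a decade
print(round(years_to_2p0))   # ~32 years, i.e. the mid-2050s
```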


Earlier, Spencer argued that the current energy imbalance is 0.6 W/m^2 and that ECS is 1.8°C. The evidence we have shows that both of these estimates are low, but if we take his numbers, we can estimate the amount of warming to expect even if we froze CO2 concentrations at current levels. If ECS = 1.8°C, then the Earth warms at 1.8/3.7 ≈ 0.49°C per W/m^2, so to reduce a 0.6 W/m^2 EEI to 0, the surface would have to warm by 0.6 × 0.49 ≈ 0.3°C. Since we're already approaching 1.3°C, that means about 1.6°C of warming is baked into the current climate state even if we accept Spencer's values for EEI and ECS. If we take more conventional estimates, ECS = 3°C and EEI = 1 W/m^2, then warming continues at 3/3.7 ≈ 0.81°C per W/m^2, the 1 W/m^2 energy imbalance means about 0.8°C more warming is baked in, and we'll hit the 2.0°C target.
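Here's that committed-warming arithmetic as a sketch under both sets of values from the paragraph above (3.7 W/m^2 is the 2xCO2 forcing used throughout, and the ~1.3°C of warming to date is from the previous section):

```python
F_2XCO2 = 3.7   # W/m^2 of forcing for doubled CO2

def eventual_warming(ecs, eei, warming_to_date=1.3):
    """Warming to date plus the extra warming needed to close the energy imbalance."""
    per_wm2 = ecs / F_2XCO2            # deg C of warming per W/m^2 of forcing
    return warming_to_date + per_wm2 * eei

print(round(eventual_warming(1.8, 0.6), 1))   # Spencer's numbers: ~1.6 C
print(round(eventual_warming(3.0, 1.0), 1))   # conventional values: ~2.1 C
```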

Conclusion

For me, Spencer's report for the Heritage Foundation leaves much to be desired. The overwhelming evidence is that our carbon emissions are heating the planet and producing dangerous outcomes. Warming continues at rates consistent with conventional estimates for ECS, even if subsets of CMIP6 models show too much warming. We can have no confidence that nature will become more efficient at removing our emissions or that we can avoid crossing the 1.5°C target at all. Our only chance of avoiding the 2.0°C target is through drastic reductions in carbon emissions, and I highly doubt that we'll accomplish that. However, AGW is a very fixable problem, and I have increasing hope that we will fix it, even if we do it too slowly to avoid the effects of a 2.0°C warmer world. I don't think it serves anyone well to choose the most optimistic results from a tiny minority of scientists to the exclusion of the centrist results of the vast majority, and in this regard I think Spencer did us a disservice with this report.

That said, I can agree with Spencer that there's nothing in the scientific evidence that requires us to enact mitigation policies to achieve net zero emissions. That would be to commit the naturalistic fallacy. What I'd prefer, though, is that we take a good hard look at all the evidence and then clearly articulate our values, and then we can build policy recommendations that are consistent with the evidence while being explicit about the values that inform those recommendations. 

Update: Gavin Schmidt has weighed in on this as well, and his expertise with climate models is far greater than Spencer's (and obviously mine!). I'd encourage you to read his response too; his take is very similar to mine.


References:

[1] Loeb, N. G., Johnson, G. C., Thorsen, T. J., Lyman, J. M., Rose, F. G., & Kato, S. (2021). Satellite and ocean data reveal marked increase in Earth’s heating rate. Geophysical Research Letters, 48, e2021GL093047. https://doi.org/10.1029/2021GL093047

[2] Gillett, N.P., Kirchmeier-Young, M., Ribes, A. et al. Constraining human contributions to observed warming since the pre-industrial period. Nat. Clim. Chang. 11, 207–212 (2021). https://doi.org/10.1038/s41558-020-00965-9

[3] Hausfather, Z. et al. Climate simulations: recognize the ‘hot model’ problem. Nature 605, 26-29 (2022). doi: https://doi.org/10.1038/d41586-022-01192-2

[4] Li, J.-L. F., Xu, K.-M., Lee, W.-L., Jiang, J. H., Tsai, Y.-C., Yu, J.-Y., et al. (2023). Warm clouds biases in CMIP6 models linked to indirect effects of falling ice-radiation interactions over the tropical and subtropical Pacific. Geophysical Research Letters, 50, e2023GL104990. https://doi.org/10.1029/2023GL104990

[5] Mueller et al. (2016). Cooling of US Midwest summer temperature extremes from cropland intensification. Nature Climate Change. https://doi.org/10.1038/nclimate2825

[6] Lin et al. (2017). Causes of model dry and warm bias over central U.S. and impact on climate projections. Nature Communications. https://doi.org/10.1038/s41467-017-01040-2

[7] Alter et al. (2018). Twentieth Century Regional Climate Change During the Summer in the Central United States Attributed to Agricultural Intensification. Geophysical Research Letters. https://doi.org/10.1002/2017GL075604

[8] Zhang et al. (2018). Diagnosis of the Summertime Warm Bias in CMIP5 Climate Models at the ARM Southern Great Plains Site. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2017JD027200

[9] Lewis, N., & Curry, J. (2018, April 23). The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity. Journal of Climate. Retrieved from https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0667.1

[10] Sherwood, S. C., Webb, M. J., Annan, J. D., Armour, K. C., Forster, P. M., Hargreaves, J. C., et al. (2020). An assessment of Earth's climate sensitivity using multiple lines of evidence. Reviews of Geophysics, 58, e2019RG000678. https://doi.org/10.1029/2019RG000678

[11] 2021 Global Carbon Budget. https://www.icos-cp.eu/science-and-impact/global-carbon-budget/2021

[12] Tardif, R., Hakim, G. J., Perkins, W. A., Horlick, K. A., Erb, M. P., Emile-Geay, J., Anderson, D. M., Steig, E. J., and Noone, D.: Last Millennium Reanalysis with an expanded proxy database and seasonal proxy modeling, Clim. Past, 15, 1251–1273, https://doi.org/10.5194/cp-15-1251-2019, 2019.

[13] The Cenozoic CO2 Proxy Integration Project (CenCO2PIP) Consortium, Toward a Cenozoic history of atmospheric CO2. Science 382,eadi5177(2023). DOI:10.1126/science.adi5177. Accepted version online at: https://oro.open.ac.uk/94676/1/Accepted_manuscript_combinepdf.pdf

[14] Judd, E.J., Tierney, J.E., Huber, B.T. et al. The PhanSST global database of Phanerozoic sea surface temperature proxy data. Sci Data 9, 753 (2022). https://doi.org/10.1038/s41597-022-01826-0.
