
16 Apr 2014

When estimating voters’ intentions, pollsters know that a statement like “40% of the voters support party A” will nearly always be wrong. When qualified with “19 times out of 20, this percentage is correct to within 5%”, however, the statement may be exactly right. The qualification “19 times out of 20” is a standard statement of the confidence that the true support lies in the range 35 to 45% of the voters; it follows from elementary statistical theory.
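The “19 times out of 20, within 5%” qualification is a 95% confidence interval for a polled proportion. A minimal sketch of the calculation, using the standard normal approximation (the sample size n = 384 is an illustrative assumption, not a figure from the text; it is roughly the size that yields a 5% margin):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion
    (normal approximation; z = 1.96 corresponds to 95%, i.e. 19/20)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# 40% support from an assumed sample of 384 respondents
margin = margin_of_error(0.40, 384)
print(f"40% +/- {margin:.1%}")  # about +/- 4.9%, i.e. roughly 35-45%
```

This is the same logic that lets the poll be “exactly correct” in the qualified sense: the interval, not the point estimate, carries the stated reliability.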

So what about global warming? Shouldn’t we apply the same statistical methodology and determine the probability that it is natural in origin? If the Intergovernmental Panel on Climate Change (IPCC) is right that it is “extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century” (IPCC, Fifth Assessment Report, AR5), then surely we should be able to statistically reject the hypothesis that the change is due to natural variability? Now, for the first time, [*Lovejoy*, 2014] claims to have done exactly this, rejecting the natural-warming hypothesis with confidence levels greater than 99%, and most likely greater than 99.9%.

In IPCC usage, “extremely likely” refers to a probability in the range 95–100%, so the new result is quite compatible with AR5; yet the two conclusions are really complementary rather than equivalent. Whereas the IPCC focuses on determining how much confidence we have in the truth of anthropogenic warming, the new approach determines our confidence in the falsity of natural variability. As any scientist knows, there is a fundamental asymmetry between the two approaches: no theory can ever be *proven* true beyond a somewhat subjective “reasonable doubt”, but a theory can effectively be *disproven* by a single decisive experiment. In the case of anthropogenic warming, our confidence rests on a complex synthesis of data analysis, numerical model outputs and expert judgement. But no numerical model is perfect, no two experts agree on everything, and the IPCC’s confidence quantification itself depends on subjectively chosen methodologies. By contrast, the new approach uses neither numerical models nor expert judgement; instead, it attempts to directly evaluate the probability that the warming is simply a giant, century-long natural fluctuation. Students of statistics know that the statistical rejection of a hypothesis cannot be used to conclude the truth of any specific alternative; nevertheless, in many cases – including this one – the rejection of one greatly enhances the credibility of the other.

The new study will be a blow to any remaining climate change deniers, since their two most convincing arguments – that the warming is natural in origin, and that the models are wrong – are either directly contradicted by the new study or simply do not apply to it. Indeed, by bypassing General Circulation Models (GCMs, the huge computer models of climate), the new study was able to estimate the effective sensitivity of the climate to a doubling of CO₂ as 2.5–4.2 °C (with 95% confidence). This is significantly more precise than the IPCC’s GCM-based climate sensitivity of 1.5–4.5 °C (“high confidence”), an estimate that – in spite of vast improvements in computers, algorithms and models – has not changed since 1979. Whereas the main uncertainty in the GCM-based approach comes from uncertain radiative feedbacks involving clouds and aerosols, in the new approach the uncertainty is due to the poorly discerned time lag between radiative forcing and atmospheric heating (much of any new heating goes into warming the ocean, and only somewhat later does this warm the atmosphere). Figure 1 shows the unlagged forcing–temperature relationship; one can see that it is quite linear. Even the recent “pause” in the warming (since 1998) lies pretty much on the line.

The new approach is based on two innovations. The first is the use of globally averaged CO₂ radiative forcing as a proxy for all the anthropogenic forcings. This is justified by the tight relation between global economic activity and the emission of both aerosols (particulate pollution) and greenhouse gases. Most notably, it allows the new approach to implicitly include the cooling effects of aerosols, which are still poorly quantified in GCMs. The second innovation is to use nonlinear geophysics ideas about scaling, combined with paleotemperature data, to estimate the probability distribution of centennial-scale temperature fluctuations in the pre-industrial period. These probabilities are currently beyond the reach of GCMs. In future developments, the new technique could be used to estimate return periods for natural warming events of different strengths and durations, including the post-war cooling as well as the slow-down (“pause”) in the warming since 1998.
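The hypothesis test at the heart of the study asks: given the pre-industrial distribution of centennial-scale temperature fluctuations, how probable is a natural fluctuation as large as the observed warming? A minimal sketch of that logic, with two loudly flagged assumptions: the Gaussian shape and the standard deviation (0.2 °C) are purely illustrative here, whereas the actual study estimates the distribution, including its fatter-than-Gaussian tails, from scaling analysis of paleotemperature data:

```python
import math

def tail_probability(delta_t, sigma):
    """P(natural centennial fluctuation >= delta_t) under an assumed
    zero-mean Gaussian with standard deviation sigma (illustrative only;
    the study's scaling analysis gives fatter tails, hence larger p)."""
    return 0.5 * math.erfc(delta_t / (sigma * math.sqrt(2)))

# Observed ~0.9 C of centennial warming vs. an assumed sigma of 0.2 C
p = tail_probability(0.9, 0.2)
print(f"probability under natural-variability hypothesis: p = {p:.2e}")
```

A tiny p lets one reject the natural-variability hypothesis at a correspondingly high confidence level; getting the tail shape right (the paper's scaling contribution) is what makes the quoted >99% and >99.9% figures defensible rather than wildly overstated.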

Figure 1: The global temperature anomaly since 1880 as a function of the anthropogenic forcing (using CO₂ heating as a linear surrogate for all anthropogenic effects). The regression indicates the anthropogenic contribution; the residual is the natural variability. The slope, 2.33 K per CO₂ doubling, is the climate sensitivity for the annually averaged global temperature as a function of the annually averaged global radiative forcing for the same year (unlagged).
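The regression described in the caption can be sketched as follows. Since CO₂ forcing is logarithmic in concentration, expressing forcing in doublings, log₂(C/C₀), makes the regression slope the sensitivity in K per doubling directly. The data below are synthetic and noise-free, constructed to recover the quoted 2.33 K/doubling (the real analysis uses observed annual global temperatures since 1880; C₀ = 277 ppm is an assumed pre-industrial baseline):

```python
import math

C0 = 277.0  # assumed pre-industrial CO2 concentration, ppm (illustrative)
co2_ppm = [291.0, 300.0, 311.0, 325.0, 354.0, 390.0]  # synthetic series
true_sensitivity = 2.33  # K per CO2 doubling, from the figure caption

# Forcing proxy in units of CO2 doublings, and synthetic temperatures
forcing = [math.log2(c / C0) for c in co2_ppm]
temp = [true_sensitivity * x for x in forcing]

# Ordinary least-squares slope: cov(x, y) / var(x)
n = len(forcing)
mx = sum(forcing) / n
my = sum(temp) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(forcing, temp))
         / sum((x - mx) ** 2 for x in forcing))
print(f"estimated sensitivity: {slope:.2f} K/doubling")
```

With real data the points scatter around the line, and the residuals from this fit are precisely the natural-variability signal the study subjects to hypothesis testing.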

**Reference:**

Lovejoy, S. (2014), Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming, *Climate Dynamics* (in press).
