Matt Ridley wrapped most AGW denial assertions into one easily swallowed package
Posted 2011-11-25 08:54:53
Matt Ridley gave a talk October 31, 2011 titled ‘Science Heresy.’
It has received a lot of attention from people and news agencies in the UK and from the English-speaking denial blogosphere. I decided to write a detailed analysis of this talk because of the excitement generated, the sophistry employed and the many factual errors contained in it. I’ve italicized the portions of Ridley’s speech that I have quoted. You can download a complete copy of this speech here:
The structure of his speech is as follows:
It’s a fascinating speech because of the craftsmanship and rhetorical techniques employed. He’s not a professional scientist but he was a professional journalist. His expertise is apparent when reading his talk.
The critical transition in his speech, the big lie that must be swallowed, is contained in steps b and c. After ‘establishing’ this alternative reality he’s off and running free. It’s important to note that this speech was given just a couple days ago. Some of his ‘factual’ claims would have been less egregious if they had been made 10 or 20 years ago. There is no excuse for being so profoundly ignorant about the state of the science when you claim to be an expert on the subject.
He starts out brilliantly with a series of statements and homilies that would appeal to most scientists, as we make our living by questioning authority, exposing weaknesses in our collective understanding, and then performing the studies necessary to illuminate reality. In particular I liked:
“So I learnt lesson number 1: the stunning gullibility of the media. Put an ‘ology’ after your pseudoscience and you can get journalists to be your propagandists.”
It’s at this point in the talk that he lays the groundwork for his attack on the many fields of science involved in studying the Earth’s climate.
“Experts are worse at forecasting the future than non-experts… The experts were no better than ‘a dart-throwing Chimpanzee.’”
“Lesson 6. Never rely on the consensus of experts about the future. Experts are worth listening to about the past, but not the future. Futurology is pseudoscience.”
“Using these six lessons, I am now going to plunge into an issue on which almost all the experts are not only confident they can predict the future, but absolutely certain their opponents are pseudoscientists. It is an issue on which I am now a heretic. I think the establishment view is infested with pseudoscience. The issue is climate change.”
In this sequence of statements he made a fundamental logic error. Predicting the future based on best guesses is not the same thing as:
- Predicting when and where a projectile or rocket ship will land
- Predicting whether a structure will hold up three tons
- Predicting whether a radiator will cool an engine or if brakes will stop a car
One is a guess. The other is based on well-understood, independently replicated, quantitative relationships.
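The distinction can be made concrete. A projectile's landing point follows from a well-understood, replicable quantitative relationship, not from expert opinion. A minimal sketch, assuming level ground and no air resistance (both simplifying assumptions of this illustration, not claims from the talk):

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Horizontal range of a projectile launched from level ground,
    ignoring air resistance: R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A 20 m/s launch at 45 degrees lands ~40.8 m away -- every time,
# because the relationship is quantitative and independently replicable.
print(round(projectile_range(20.0, 45.0), 1))
```

This is the kind of prediction engineers make routinely, and it is categorically different from futurological guesswork about flying cars.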
Matt Ridley conflated the two in asserting that climate change prediction is the same thing as predicting whether we’ll have personal flyers and live underground in 100 years. This conflation of science with futurology is the big lie upon which his thesis hangs.
Matt Ridley, trained as a zoologist and employed as a journalist, doesn’t understand the math or the physics behind climate science. His lack of knowledge leaves him imagining that magic happened. It’s magic to him because he isn’t capable of solving coupled differential equations. It’s just another physics problem for those of us who do it for a living.
He continues the talk with smaller points, some correct and others incorrect to differing degrees. Ticking through them one by one:
“I fully accept that carbon dioxide is a greenhouse gas, the climate has been warming and that man is very likely to be at least partly responsible.
The problem is that you can accept all the basic tenets of greenhouse physics and still conclude that the threat of a dangerously large warming is so improbable as to be negligible, while the threat of real harm from climate-mitigation policies is already so high as to be worrying, that the cure is proving far worse than the disease is ever likely to be.”
What constitutes “dangerously?”
There has been no ‘cure’ proposed by any government that comes close to being sufficient to stabilize the changes in the composition of our atmosphere. It’s rational to be concerned about the efficacy of future decisions. It’s fantasy to make the claims he did in these quotes.
“Yet it has been utterly debunked by the work of Steve McIntyre and Ross McKitrick. I urge you to read Andrew Montford’s careful and highly readable book The Hockey Stick Illusion. Here is not the place to go into detail, but briefly the problem is both mathematical and empirical.”
McIntyre and McKitrick questioned the statistical methods and the data sources. Their 2005 statistical complaints were specious. As an example, one test Mann’s methods allegedly failed was the generation of a hockey-stick-like response from an input of red noise. Red noise is distinguished from white noise by its spectral energy distribution: white noise has uniform energy across the spectrum, while red noise concentrates its energy at low frequencies, producing long, persistent excursions that are inherently hockey-stick-like. McIntyre’s 2003 complaints had some merit in logic, if not in effect, and were addressed by Mann’s team (Mann et al., Nature, 430, 105 (2004) and Mann et al., PNAS, 105, 13252-13257 (2008)). The 2005 complaints were not as robust, as demonstrated by Wahl and Ammann, Climatic Change, 83, 33-69 (2007).
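The red/white distinction can be illustrated numerically. A minimal sketch, assuming an AR(1) process as the red-noise model (the coefficient 0.9 is illustrative and is not the value used in any of the studies cited here):

```python
import random

def white_noise(n, seed=1):
    """Independent Gaussian draws: uniform spectral energy."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def red_noise(n, phi=0.9, seed=1):
    """AR(1) 'red' noise: each value carries over a fraction phi of
    the previous one, concentrating energy at low frequencies and
    producing long, persistent excursions."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: near phi for AR(1), near 0 for white."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

print(round(lag1_autocorr(red_noise(5000)), 2))    # strongly persistent
print(round(lag1_autocorr(white_noise(5000)), 2))  # essentially none
```

The persistence is what lets a red-noise series drift away from its mean for long stretches, which is why its segments can superficially resemble a hockey stick.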
Perhaps more importantly, the hockey stick was independently investigated in great depth in the BEST publications. Two of their papers deal specifically with the import of those concerns, and do so using a statistical approach significantly different from any previously applied to surface temperature data. They found that the data (in excess of 1.6 billion data points) had been handled properly and that the effects in question were insignificant. In the end, rather than being debunked, the hockey stick has been validated, revalidated and then independently reproduced using entirely independent analytic techniques and much larger datasets.
‘In this paper, this framework is applied to the Global Historical Climatology Network land temperature dataset to present a new global land temperature reconstruction from 1800 to present with error uncertainties that include many key effects. In so doing, we find that the global land mean temperature has increased by 0.911 ± 0.042 C since the 1950s (95% confidence for statistical and spatial uncertainties). This change is consistent with global land-surface warming results previously reported, but with reduced uncertainty.’ By Rohde, et al., 2011, BEST Website, submitted for publication.
While not explicitly cited, the fact that Ridley’s talk is available for download from Anthony Watts’s blog implies concern about the quality of the temperature data available for analysis: some of the land temperature stations are now located in the middle of parking lots, etc., and could therefore be artificially biasing the result.
These concerns were addressed in detail and found to have no significant impact on previously reported results (Hansen et al., Rev. Geophys., 48, RG4004 (2010)). More recently, BEST reached the same conclusion: “From this analysis we conclude that the difference in temperature rate of rise between poor stations and OK stations is –0.014 ± 0.028 C per century.” Earth Atmospheric Land Surface Temperature and Station Quality in the United States, by Muller et al., 2011, BEST website, submitted for publication.
"Greenland is losing ice at the rate of about 150 gigatonnes a year, which is 0.6% per century.”
Close, but not correct. ‘Since 2006, high summer melt rates have increased Greenland ice sheet mass loss to 273 gigatons per year’ (van den Broeke et al., Science, 326, 984 (2009)), which is about 1.1% per century. What can I say? That’s a lot of ice! Further, Greenland’s rate of ice loss is accelerating, having doubled in only 10 years.
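The percentage conversions are easy to check. A back-of-envelope sketch, taking the Greenland ice sheet’s total mass as roughly 2.6 million gigatonnes (an assumed round figure for this illustration, not a value from either source):

```python
# Approximate total mass of the Greenland ice sheet, in gigatonnes
# (an assumed round figure for this back-of-envelope check).
TOTAL_GT = 2.6e6

def loss_percent_per_century(gt_per_year):
    """Percentage of the ice sheet lost per century at a constant rate."""
    return gt_per_year * 100 / TOTAL_GT * 100

print(f"{loss_percent_per_century(150):.2f}% per century")  # Ridley's rate
print(f"{loss_percent_per_century(273):.2f}% per century")  # van den Broeke's rate
```

The two rates come out near 0.6% and 1.1% per century respectively, matching the figures quoted above.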
“There has been no significant warming in Antarctica, with the exception of the peninsula.“
Wrong. The data show a 0.6 °C average temperature rise over the entire continent between 1957 and 2006. This is about the same as the rest of the world. Steig et al., Nature, 457, 459-463 (2009).
“Methane has largely stopped increasing. “
Methane has risen from 1550 ppb to just over 1800 ppb since 1978, and since 2006 the rise has been very sharp. However, it is true that methane rose very little between 1999 and 2004. http://www.esrl.noaa.gov/gmd/aggi/
“Tropical storm intensity and frequency have gone down, not up, in the last 20 years.”
Here is a small sampling of recent results in which causal relationships are established between human contributions to the composition of our atmosphere and the frequency of extreme weather events:
‘Here we show that human-induced increases in greenhouse gases have contributed to the observed intensification of heavy precipitation events found over approximately two-thirds of data-covered parts of Northern Hemisphere land areas. These results are based on a comparison of observed and multi-model simulated changes in extreme precipitation over the latter half of the twentieth century analyzed with an optimal fingerprinting technique. Changes in extreme precipitation projected by models, and thus the impacts of future changes in extreme precipitation, may be underestimated because models seem to underestimate the observed increase in heavy precipitation with warming.’ Min et al., Nature, 470, 378–381 (2011).
From the Nature editorial ‘Extreme Measures’ by Q. Schiermeier, Nature, 477, 148-149 (2011): ‘The studies that appeared in Nature last February offer pioneering examples of how to do this. In one, Pardeep Pall, an atmosphere researcher at the University of Oxford, UK, and his team generated several thousand simulations of the weather in England and Wales during the autumn of 2000. Some of the simulations included observed levels of human-generated greenhouse gases, whereas others did not. The researchers then fed the results of each simulation into a model of precipitation and river run-off to see what kind of flooding would result. In 10% of the cases, twentieth-century greenhouse gases did not affect the local flood risk. But in two-thirds of the cases, emissions increased the risk of a catastrophic flood — like the one that occurred in 2000 — by more than 90%.’
‘Another group, led by climate scientist Seung-Ki Min of the Climate Research Division of Environment Canada in Toronto, used a similar approach. Inspired by the observation that intense rainfall in the Northern Hemisphere has worsened over the second half of the twentieth century, the group compared actual precipitation data with simulations from six different climate models, both with and without greenhouse warming. They found that the extreme precipitation patterns observed did not match anything expected from natural climate cycles, but closely matched those expected from greenhouse warming.’
‘Such attribution studies can sometimes exonerate climate change. In one published in March [3], Randall Dole and his colleagues at the National Oceanic and Atmospheric Administration in Boulder, Colorado, concluded that the intense 2010 Russian heat wave was probably a result of natural cycles.’
1. Pall, P. et al. Nature 470, 382–385 (2011).
2. Min, S.-K., Zhang, X., Zwiers, F. W. & Hegerl, G. C. Nature 470, 378–381 (2011).
3. Dole, R. et al. Geophys. Res. Lett. 38, L06702 (2011).
The most recent, linking Arabian Sea tropical cyclones to aerosols, was published since his talk (Evan et al., Nature, 479, 94-97 (2011)): ‘Here we report an increase in the intensity of pre-monsoon Arabian Sea tropical cyclones during the period 1979–2010, and show that this change in storm strength is a consequence of a simultaneous upward trend in anthropogenic black carbon and sulphate emissions. We use a combination of observational, reanalysis and model data to demonstrate that the anomalous circulation, which is radiatively forced by these anthropogenic aerosols, reduces the basin-wide vertical wind shear, creating an environment more favourable for tropical cyclone intensification. Because most Arabian Sea tropical cyclones make landfall, our results suggest an additional impact on human health from regional air pollution.’
Matt Ridley continues:
“Your probability of dying as a result of a drought, a flood or a storm is 98% lower globally than it was in the 1920s.”
Specious. That’s because we have modern sanitation, large public water utilities in most of the world and ways to rapidly transport food. This has nothing to do with the insignificance of such events.
“Malaria has retreated not expanded as the world has warmed.”
This statement is also specious since we developed DDT, modern medical treatment, and effective anti-malarial pills over this same time period.
“I’ve looked and looked but I cannot find one piece of data – as opposed to a model – that shows either unprecedented change or change that is anywhere close to causing real harm.”
Two different points juxtaposed for shock value. The data shows change that is without precedent over the past million years. Real harm: some so far, a lot more to come.
“Well, if you have an x that persuades you that rapid and dangerous climate change is on the way, tell me about it.”
‘Rapid’ is an interesting word choice. Does this mean a snap change in the next year (then he is likely correct), or in the next several decades?
Just because a major change occurs over the space of years or decades, doesn’t mean that there is sufficient time to prepare. I posit that it will be difficult for many countries to adapt their infrastructures, population centers and agriculture in time to respond to many of the changes on the way (increased or decreased rainfall, melting permafrost, storm surges, saltwater intrusion into ground water, continued growth of the Sahara, etc.).
Given the lack of response to date, and the scope of the infrastructure changes that will be required for many countries to adapt during the next several decades to the changes now developing or underway, rapid is almost certainly the appropriate adjective.
The populations of most nations are unlikely to take action until convinced of the likelihood of specific, adverse impacts on their lives. It is at the local level that many bad things will occur from a human perspective. If by ‘dangerous’ he means more F5 hurricanes hitting the SE US, SE Asia or millions of people being displaced from their desertified farmland then there is little doubt of the reality of the ‘danger.’
We know that economies and populations will be profoundly impacted: some adversely and some favorably. We expect more violent storms in some regions; other regions will get drier and others wetter. While some areas become too warm (e.g. California vineyards are going to get hammered, as they need 50 years to fully recoup their investment), others (vintners in British Columbia and Washington) are going to be loving life.
At the same time, he has a point. Today, we can’t reliably predict local/regional impacts. We can’t say a whole lot today that is specific in terms of impact or timing. As a result, developing that capability is one of the hottest areas of climate research.
“Water vapour forms clouds and whether clouds in practice amplify or dampen any greenhouse warming remains in doubt.”
Boy is he wrong on that one. Read ‘Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature’ Science 330, 356 (2010) by Andrew A. Lacis, et al. They demonstrated that water (vapor and clouds) triple the effect of non-condensable greenhouse gases. In fact, this phenomenon has been fairly well understood for quite a long time: our precision regarding its global effects is what’s gotten markedly better in the past 20 years.
“So to say there is a consensus about some global warming is true; to say there is a consensus about dangerous global warming is false.”
Once again, the key word is dangerous. Dangerous to whom and on what scale? If by that he means total annihilation of the human race, I would have to agree with him. Yet millions of people displaced from their homes, and in the case of North Africa from their countries, by drought (nearly 3 million forced to find refuge in Nigeria alone): that has already happened. Is displacement of more millions of people sufficient to meet the threshold of being ‘dangerous’? What does ‘dangerous’ mean? It’s a soft word. Danger, like beauty, is in the eye of the beholder.
“Well here’s why it matters. The alarmists have been handed power over our lives; the heretics have not.”
I can’t speak for the UK but in the US and everywhere else in the world that I’m aware of heretics are in full control of the helm.
Interesting, if not surprising, from the Matt Ridley that took Northern Rock to destruction. Why does anyone listen to him?
If you're writing this up for a post, some thoughts:
On the Hockey Stick and the M&M's, it's worth noting that McIntyre's analysis has been destroyed by Deep Climate, who found that McIntyre's algorithm quietly selected samples that suited his agenda via a cheeky 'sort'. That's quite apart from the many independent replications of the Hockey Stick with and without tree rings, and the other palaeo evidence (e.g. Arctic ice caps smaller and sea ice less than in thousands of years).
Montford's book is reviewed by Tamino at RC:
I think there have been posts on the fact that the rate of change now is faster than even the PETM, and of course we're already on our way to a significant fraction of a glacial-interglacial temperature change which is not trivial.
Good critique. I suppose like Monckton, you have to ask again, why on earth would anyone listen to Matt Ridley? But people do...
skywatcher, Deep Climate did not destroy McIntyre's analysis. If you read closely you will notice that Deep Climate acknowledges the effect that McIntyre points to as being real. However, McIntyre greatly exaggerates the significance of the effect (as I understand it) and cherry-picked the examples with the highest Hockey Stick Index while suggesting that they were a random sample of the data.
Personally, I would be interested in seeing the mean Hockey Stick Index of McIntyre's and Deep Climate's runs, along with the standard deviation, to compare with the HSI of MBH98. That is essential information for assessing the validity of McIntyre's claims, and the fact that he does not present it is very damning to his case. On the other hand, the fact that Deep Climate does not present it either suggests that it does not clearly validate MBH's method.
One additional point is that McIntyre purports to show that MBH's method will produce hockey sticks from random data having particular characteristics. However, it has been shown that MBH's method will extract any genuine signal from non-random data, even with significant noise. Further, the proxies chosen by MBH were chosen because they were known to contain a temperature signal. Therefore, even if the mean of the HSIs generated from red noise were larger than MBH's hockey stick index, it is uncertain whether there would be a valid argument that the MBH proxies represented just noise. Certainly, given the multiple replications, the argument is not justified. However, McIntyre's argument could still well justify a claim that MBH98's uncertainty was underestimated.
Tom, I get your points, but I'd still argue that the deliberate sorting to cherry-pick the examples was the key to many people shouting that Mann's algorithm produced hockey sticks from any old data. Without those cherry-selected figures, there would have been much less of a brouhaha about the HS, and McIntyre's weak case would have been obscure and unintelligible to most, and far less compelling even to the willingly led. The methodological details were discussed by Wahl and Ammann, showing that although Mann's methodology was not ideal, it didn't materially alter the result. As we know, different methodologies and different proxies replicate the HS regardless.
More Deep Climate info about the red noise issues, and about Dave Ritson's part in uncovering it here:
If I understand it, technically the mean HSI for all 10,000 runs would be 0, as about half the HSI values are negative*! By using red noise with much too high an autocorrelation, which increases the persistence of 'wiggles' in the timeseries, the chances were that the end of the series (the 'instrumental' section) would likely be located on a deviation away from the mean state. The HSI was calculated to be the difference in standard deviations between the 'instrumental'/'calibration' section and the whole of the record. For the whole 600 year record, there would likely be some upward wiggles and some downward wiggles, even with high autocorrelation, hence a mean value close to 0. With high autocorrelation, the last wiggle would likely cover the whole instrumental period, and stand a high chance of being more than one standard deviation away from the overall mean of the series. Hence most HSI values had an absolute value of >1 (some were <-1). But on understanding the way McIntyre set up the test, this is not in the least bit surprising, as he used a much higher autocorrelation than Mann did, I think, and the mean of a small subset has a good chance of being far from the mean of a large set.
Then cherry-pick the strongest hockey sticks with an HSI value >~2 (not the negative ones though), plot them and provide skeptics with a big stick to beat Mann. I may be wrong with how I see it, feel free to correct me, although don't spend much time as this is such an old, dead horse it's not really worth flogging much more!
*If I were to plot a histogram of the distribution of HSI values, a virtual beer says it would be bimodal, with two peaks. McIntyre has given us enough information to estimate this from the Replication and Due Diligence Deep Climate post, if we assume symmetry of positive and negative values (which we must, given the structure of the test and the fact we know there were 12 HSI values >2). The mean absolute HSI is ~1.5, but given that this mean is critically dependent on the level of autocorrelation, it is meaningless. The test is flawed anyway, and the selection of data for the figures presented was egregious:
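The reasoning above can be checked with a toy simulation. A minimal sketch, assuming an AR(1) red-noise model and the HSI definition given earlier (the offset of the final ‘calibration’ segment’s mean from the whole-series mean, in whole-series standard deviations); the parameters here are illustrative, not those used in any of the studies discussed:

```python
import random

def ar1_series(n, phi, rng):
    """AR(1) red noise; higher phi means more persistent wiggles."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def hsi(series, calib_len):
    """Toy Hockey Stick Index: offset of the calibration-period mean from
    the whole-series mean, in units of the whole-series std deviation."""
    m = sum(series) / len(series)
    sd = (sum((v - m) ** 2 for v in series) / len(series)) ** 0.5
    calib = series[-calib_len:]
    return (sum(calib) / calib_len - m) / sd

def mean_abs_hsi(phi, runs=500, n=600, calib_len=80, seed=42):
    """Mean |HSI| and mean HSI over many independent noise series."""
    rng = random.Random(seed)
    vals = [hsi(ar1_series(n, phi, rng), calib_len) for _ in range(runs)]
    return sum(abs(v) for v in vals) / runs, sum(vals) / runs

# Higher autocorrelation inflates |HSI|, even though the mean HSI stays
# near zero: positive and negative 'sticks' are equally likely.
abs_hi, mean_hi = mean_abs_hsi(phi=0.98)
abs_lo, mean_lo = mean_abs_hsi(phi=0.2)
print(round(abs_hi, 2), round(abs_lo, 2))
```

This reproduces the qualitative point: with exaggerated persistence, most runs end on an excursion well away from the series mean, while weakly autocorrelated noise almost never does.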
skywatcher, from memory (which is very vague) McIntyre chose his pseudo-proxies based on the absolute magnitude of the HSI, and then inverted pseudo-proxies with negative HSIs in his presentation. Looking at his original paper, he does show a histogram of his pseudo-proxies, but nowhere mentions the HSI of MBH98, a statistic I consider critical to his argument. By eye, it is at least 2 and probably greater.
I believe your summation of McIntyre's argument (and intentions) is correct. However, because the MBH98 method will introduce a hockey stick into red noise, there does remain a relevant basis of criticism. I believe we would be much more convincing when discussing the issue if we acknowledged that, but pointed out its true significance. The reason is that I believe most people presented with McIntyre's argument believe, correctly, that his demonstration must have some relevance to the validity of MBH98. However, because the real (and minor) issue is not discussed, they are left having to accept McIntyre's spin for want of a proper substitute.
The genuine relevance can be seen by supposing MBH98's reconstruction had an HSI of 1.2 (for example). In that case, I would have no confidence that it was a genuine reconstruction, and with good reason. In contrast, if the HSI of MBH98 was 6, that would clearly demonstrate that the reconstruction was very unlikely to be an artifact of the method and red noise. As it stands, properly examined, I believe McIntyre's analysis actually demonstrates that the chance that the MBH reconstruction represents noise rather than a genuine signal is << 5%, and probably less than 0.2%. Unfortunately, neither knowing the HSI of MBH98, nor being a statistician of any skill, I cannot formulate that suspicion into a proper argument. That is a small loss as MBH98 is out of date, and multiply replicated. That does not, however, remove my disappointment when people more talented than I do not take this issue on directly and show how little significance McIntyre's argument has, even if we give it its full due.
Thanks for that Tom, it's always interesting to understand as much as possible what is going on in these cases. I think, if I'm right about the significance of the autocorrelation being too high, that would utterly change McIntyre's results. It would show that although some element of a hockey stick is introduced into the data by Mann's method, it is tiny compared to the real, data-driven, HS (as replicated, observed and verified elsewhere). Essentially the distribution of pseudoproxies shown above would have two peaks much closer to 0 if the autocorrelation was lower, while the real data would be isolated out at ~2 as you say. Even with the exaggerated autocorrelation the real HS may well lie within the top 1% of pseudoproxies as you say!
Anyway... time for the weekend :)
I'd appreciate a link to my previous SkS articles on Ridley, especially this one, where I do a Gish Gallop analysis like Larry's, but briefer, since I chose to focus on Ridley's astonishing misreading of the literature.
I see that Rob P has something in the works, as well, more Gish Gallop debunking.
Is it not worth reviving Andy's Ridley's Riddles series and turning it into a permanent section? Ridley is rather prolific in the UK press and a prominent GWPF stooge - precisely who SkS should be seen to be debunking at every opportunity.
Yeah, I started drafting something up, but got sidetracked when it was obvious that Ridley's rant wasn't gaining any traction. I'll leave it to Larry.
I gave up after 3 examples. It's tiresome to combat a Gish Gallop, which is what makes them so effective!
Bump. The OP (or someone with editing powers) needs to reformat this.