There's a new paper out claiming to find a "consensus on [the] consensus" on global warming. It concludes:
We have shown that the scientific consensus on AGW is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology. This is supported by multiple independent studies despite variations in the study timing, definition of consensus, or differences in methodology including surveys of scientists, analyses of literature or of citation networks.
Its one and only figure is used to demonstrate the claim:
Figure 1 demonstrates that consensus estimates are highly sensitive to the expertise of the sampled group. An accurate estimate of scientific consensus reflects the level of agreement among experts in climate science; that is, scientists publishing peer-reviewed research on climate change. As shown in table 1, low estimates of consensus arise from samples that include non-experts such as scientists (or non-scientists) who are not actively publishing climate research, while samples of experts are consistent in showing overwhelming consensus.
If you've followed the discussion about this paper so far, you may have seen my recent post discussing this chart:
In which I explained:
Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?
You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.
Seriously. This is what the authors of the paper had to say about the chart:
Figure 1 uses Bayesian credible intervals to visualise the degree of confidence of each consensus estimate (largely a function of the sample size). The coloring refers to the density of the Bayesian posterior, with anything that isn’t gray representing the 99% credible interval around the estimated proportions (using a Jeffreys prior). Expertise for each consensus estimate was assigned qualitatively, using ordinal values from 1 to 5. Only consensus estimates obtained over the last 10 years are included.
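For readers unfamiliar with the method the authors describe: a Jeffreys prior is just Beta(0.5, 0.5), so for x endorsements out of n sampled papers the posterior for the consensus proportion is Beta(x + 0.5, n − x + 0.5), and the 99% credible interval is an equal-tailed slice of that distribution. A minimal stdlib-only sketch (using Monte Carlo draws rather than exact quantiles; the 97-of-100 figure is a hypothetical input, not a number from the paper):

```python
import random

def jeffreys_interval(successes, n, level=0.99, draws=200_000, seed=0):
    """Equal-tailed credible interval for a proportion under a
    Jeffreys prior, Beta(0.5, 0.5), approximated by sampling
    from the Beta posterior."""
    rng = random.Random(seed)
    post = sorted(
        rng.betavariate(successes + 0.5, n - successes + 0.5)
        for _ in range(draws)
    )
    lo = post[int(len(post) * (1 - level) / 2)]
    hi = post[int(len(post) * (1 + level) / 2) - 1]
    return lo, hi

# Hypothetical example: 97 of 100 sampled papers endorsing AGW.
low, high = jeffreys_interval(97, 100)
```

Note that the width of such an interval is driven almost entirely by the sample size, which is what the authors mean by the coloring being "largely a function of the sample size."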
For today, let's ignore the part about the "coloring" and "credible intervals." Let's just focus on the part where it says the expertise values were "assigned qualitatively." What that means is there was no rigorous method for assigning these values. They just went with whatever felt right. That's why no rubric or guideline for the expertise rankings was published.
Kind of weird, right? Well, that's not too important. What is important is... there are five categories. Look at the chart. Where are they?
I then showed what the chart would look like if you labeled the various categories in it:
One category (5) covers more than half the chart's range while another category (4) doesn't even appear on the chart. Any claim that "consensus estimates are highly sensitive to the expertise of the sampled group" based on this chart is heavily biased by the authors' decision to present their data in a misleading way. Had they simply shown their data by category, they would have gotten a chart like this:
Which doesn't make for anywhere near as compelling an image, and it wouldn't allow the authors to create graphics like this one which they use to promote their conclusions:
By choosing not to label the values on their x-axis, and by choosing to place every point next to another rather than grouping the data by category, the authors of this paper were able to create the visual impression of a relationship between expertise level and size of the consensus estimate.
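Grouping by the ordinal categories, as the paper should have done, is a trivial aggregation. A sketch with purely hypothetical numbers (none of these values come from the paper):

```python
from statistics import mean

# Hypothetical (expertise category, consensus %) pairs --
# illustrative only, not the paper's actual data.
estimates = [
    (1, 47), (2, 78), (3, 82), (3, 89),
    (5, 91), (5, 97), (5, 100),
]

# Group the estimates by their ordinal expertise category.
by_category = {}
for category, pct in estimates:
    by_category.setdefault(category, []).append(pct)

# Summarize each category instead of scattering points side by side.
for category in sorted(by_category):
    vals = by_category[category]
    print(f"category {category}: n={len(vals)}, mean={mean(vals):.1f}%")
```

Presented this way, gaps in the data (here, the empty category 4) are visible rather than papered over by packing every point against its neighbor.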
That alone should be damning, but it turns out there are many other problems with this chart as well. To highlight them, I am going to run a little mini-series of posts under the title of this one. The series will demonstrate how data used in this chart has been cherry-picked, adjusted, and in one case seemingly pulled out of thin air.
Because this post is already running long, I'll close it out with one of the more peculiar aspects of this chart. It's a mystery I cannot unravel.