I think I may have found the strangest chart I have ever seen. You can see it below, taken from the newly published paper on the supposed "consensus on the consensus" on global warming:
Now, I discussed this paper a bit yesterday, and there are probably a lot of things more important to discuss than this chart. Those other things aren't as funny though. You see, this chart is complete nonsense. Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?
You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.
Seriously. This is what the authors of the paper had to say about the chart:
Figure 1 uses Bayesian credible intervals to visualise the degree of confidence of each consensus estimate (largely a function of the sample size). The coloring refers to the density of the Bayesian posterior, with anything that isn’t gray representing the 99% credible interval around the estimated proportions (using a Jeffreys prior). Expertise for each consensus estimate was assigned qualitatively, using ordinal values from 1 to 5. Only consensus estimates obtained over the last 10 years are included.
For today, let's ignore the part about the "coloring" and "credible intervals." Let's just focus on the part where it says the expertise values were "assigned qualitatively." What that means is there was no rigorous method to how they assigned these values. They just went with whatever felt right. That's why there is no rubric or guideline published for the expertise rankings.
Kind of weird, right? Well, that's not too important. What is important is this: there are five categories. Look at the chart. Where are they?
To answer this question, I did a quick tabulation of the table presented in the paper. I found the number of papers in each category is:
1 – 2
2 – 2
3 – 3
4 – 0
5 – 9
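That tabulation is nothing more than a frequency count. As a sketch, here's how it could be reproduced in Python (the list of levels below is reconstructed from the counts, not copied row-by-row from the paper's table):

```python
from collections import Counter

# Expertise levels for the 16 "consensus estimates", reconstructed
# from the counts above (not the paper's actual row-by-row table).
expertise = [1, 1, 2, 2, 3, 3, 3] + [5] * 9

counts = Counter(expertise)
for level in range(1, 6):
    print(f"{level} – {counts.get(level, 0)}")
```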
I matched the coded entries in the chart to those in the table to confirm this. Based on that, I added lines to the chart showing where the categories fall. Take a look:
I have no idea what to call that kind of scale. The 1 and 2 values on the x-axis each cover two items, the 3 value covers three items, the 4 value doesn't appear at all, and the 5 value covers more than half the chart. If you divided the chart in half, splitting "Higher" and "Lower" evenly, one category would fall in both halves. How does one even begin to interpret that?
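To make that concrete, here's a small Python sketch (assuming the 16 estimates are laid out left to right, one slot each, as the chart does) computing which slots each category occupies:

```python
# Number of "consensus estimates" in each expertise category,
# from the tabulation earlier in the post.
counts = {1: 2, 2: 2, 3: 3, 4: 0, 5: 9}

boundary = 0
for level in range(1, 6):
    start = boundary + 1
    boundary += counts[level]
    if counts[level]:
        print(f"level {level}: slots {start}-{boundary}")
    else:
        print(f"level {level}: no slots at all")

# The chart's midpoint falls between slots 8 and 9, inside category 5.
print("midpoint of the axis:", boundary / 2)
```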
Here's what you would get if you just plotted each point by its category:
It's not pretty, but it at least presents the results in a meaningful and honest manner. For instance, you can't just rearrange the points in the proper chart however you'd like. You largely can with the one these authors presented. We could present the "consensus estimates" in this order:
Or we could present them in this order:
Both make just as much sense. All the "consensus estimates" for expertise level 1 are together, all of the "consensus estimates" for expertise level 2 are together, all of the "consensus estimates" for expertise level 3 are together, all of the "consensus estimates" for expertise level 4 are non-existent, and all of the "consensus estimates" for expertise level 5 are together.
Everything's the same, except the arbitrary order of the "consensus estimates" within each category. But hey, there's no right way to choose that order. The authors of the paper just picked the one they liked the look of best. We can do the same thing if we want.
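For what it's worth, the by-category version of the chart shown above takes only a few lines of matplotlib to produce. The consensus percentages below are made-up placeholders, not the paper's values; the real numbers are in its table:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Placeholder consensus percentages grouped by expertise level
# (invented for illustration; substitute the paper's actual values).
data = {1: [47, 66], 2: [78, 82], 3: [85, 89, 91], 4: [],
        5: [90, 91, 93, 94, 95, 96, 97, 97, 100]}

fig, ax = plt.subplots()
for level, values in data.items():
    # every point sits at its category's x-position
    ax.scatter([level] * len(values), values)
ax.set_xticks([1, 2, 3, 4, 5])  # category 4 keeps its slot even though it's empty
ax.set_xlabel("Expertise (ordinal, 1-5)")
ax.set_ylabel("Consensus estimate (%)")
fig.savefig("by_category.png")
```

The key design choice is that empty category 4 still gets its tick on the axis, so the gap in the data stays visible instead of being squeezed out.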
But it gets worse. The authors of the paper don't just rely on this image to convey the idea that greater expertise leads to more belief in a "consensus" on global warming. They made it explicit by creating this image:
Which is even shown in animated form in this video created by John Cook:
Try imagining that same image on a realistic scale with each category spaced evenly. Or rather, don't, because you'll get a killer headache. Any coherent scaling of the x-axis in that chart would completely ruin the visual appeal it holds. The only way the authors could create a nice, neat image showing the "consensus" gets stronger as expertise levels rise is to do the equivalent of counting:
1. Beat. 2. Beat. 3. Beat. Beat. 5. Beat. Beat. Beat. Beat. Beat. Beat. Beat. Beat.
And don't ask me how the authors created the line for the chart Cook uses in his video. It's clearly not based on any sort of mathematics. It's difficult to do any sort of (logarithmic?) regression when you have so little data, and it would be practically impossible to come up with such a pretty regression after compressing the points to account for the spacing differences between the categories. Odds are the authors just hand-drew the line.
Sort of like how they just hand-picked whichever "Expertise" values they felt like picking for the various "consensus estimates." And like how they just hand-picked whichever order for the estimates within a category made their results look best.
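As a rough illustration of the regression point, here's a sketch comparing a simple linear fit on the two x-scales, again using made-up consensus values standing in for the paper's:

```python
import numpy as np

# Placeholder consensus estimates in the chart's left-to-right order
# (invented values; the real ones come from the paper's table).
y = np.array([47, 66, 78, 82, 85, 89, 91, 90, 91, 93, 94, 95, 96, 97, 97, 100])

# x as the chart draws it: one evenly spaced slot per estimate
x_slots = np.arange(1, 17)

# x as a real ordinal scale would have it: each point at its category value
x_cats = np.array([1, 1, 2, 2, 3, 3, 3] + [5] * 9)

slope_slots = np.polyfit(x_slots, y, 1)[0]
slope_cats = np.polyfit(x_cats, y, 1)[0]
print("slope per slot:", slope_slots)
print("slope per category:", slope_cats)
```

The two fits disagree, which is the point: a line drawn through the evenly spaced slots is not the same line you'd get from the actual ordinal positions, so the drawn curve can't honestly represent both scales at once.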
I'm sure there's plenty more to be said about this chart, but for now, I need to stop. I'm going to go see if I can figure out what this chart would look like if the five categories had been given equal spacing. It's probably pointless, but it's fun to imagine what an honest depiction of these results would show.