Our last post continued examining a very strange chart from a recent paper by John Cook of Skeptical Science and many other people. The chart ostensibly shows that the consensus on global warming increases with a person's expertise. To "prove" this claim, Cook et al assigned "expertise" levels to a variety of "consensus estimates" they took from various papers. You can see the results in the chart below, to which I've added lines to show each category:
As you can see, the "consensus estimates" are all plotted one next to the other, without concern for how the categories are spaced. The result is that Category 4 doesn't exist in the chart, while Category 5 covers more than half of it. This creates a vastly distorted impression of the results, which, if displayed properly, would look something like:
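To make the contrast concrete, here is a minimal matplotlib sketch of the two presentations. The data points below are hypothetical stand-ins, not the paper's actual values; the point is only how the spacing choice changes the picture.

```python
import matplotlib.pyplot as plt

# Hypothetical (category, consensus %) pairs standing in for the paper's
# data points -- note there is nothing in Category 4.
points = [(1, 47), (1, 52), (2, 62), (3, 78), (3, 81), (5, 90), (5, 97)]
cats = [c for c, _ in points]
vals = [v for _, v in points]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Left panel: the distorted presentation -- points plotted one next to
# the other, with the category spacing thrown away.
ax1.scatter(range(len(points)), sorted(vals))
ax1.set_xlabel("plot position (categories hidden)")
ax1.set_ylabel("consensus estimate (%)")
ax1.set_title("Evenly spaced points")

# Right panel: the same points plotted at their actual category values,
# so the gap at Category 4 is visible and Category 5 isn't stretched.
ax2.scatter(cats, vals)
ax2.set_xticks(range(1, 6))
ax2.set_xlabel("expertise category")
ax2.set_title("Plotted at category values")

plt.tight_layout()
plt.show()
```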
The last post in this little series showed there is even more wrong with the chart. Cook et al openly state that the "expertise" values they used were assigned in a subjective manner, with no objective criteria or formal guidelines. In effect, they just assigned whatever "expertise" level they felt like to each "consensus estimate," then plotted the results in a chart where they chose not to show the expertise levels they had assigned.
The arbitrary assignment of these "expertise" levels is an interesting topic I intend to cover in more detail in a future post, as a number of the decisions they made are rather bizarre. For today, though, I'd like to discuss something else Cook et al did: how they arbitrarily chose to exclude some results while including others.
This topic has come up before, but today I am not going to look at which papers Cook et al did and did not include in their analysis. Instead, I'm just going to look at which results from the papers they examined they chose to show, and which ones they didn't.
Doran & Zimmerman 2009
There are 16 points in this chart, taken from a total of ten different papers. For today, I'm going to look at two of those papers. The first of these papers is Doran & Zimmerman 2009. It is the source of data points DZ1, DZ2 and DZ3. DZ1 is shown as having the second lowest consensus value at 46.6% in Category 1. This data point is said to come from "Economic Geologists."
This is a peculiar entry. I hadn't heard of "Economic Geologists" before, and I couldn't think of any reason they would be singled out for a survey on global warming. The other two estimates from Doran & Zimmerman 2009 are for "Meteorologists" and "Publishing climate scientists," groups whose expertise has a clear bearing on global warming. Economic geologists, though? Very weird.
To try to figure out what is going on, we should read Doran & Zimmerman. It says:
With survey participants asked to select a single category, the most common areas of expertise reported were geochemistry (15.5%), geophysics (12%), and oceanography (10.5%). General geology, hydrology/hydrogeology, and paleontology each accounted for 5–7% of the total respondents. Approximately 5% of the respondents were climate scientists...
Clearly, economic geologists were not singled out for this survey. Why, then, did Cook et al choose to single them out for reporting? A possible answer can be found in Doran & Zimmerman:
In our survey, the most specialized and knowledgeable respondents (with regard to climate change) are those who listed climate science as their area of expertise and who also have published more than 50% of their recent peer-reviewed papers on the subject of climate change (79 individuals in total). Of these specialists, 96.2% (76 of 79) answered “risen” to question 1 and 97.4% (75 of 77) answered yes to question 2.... The two areas of expertise in the survey with the smallest percentage of participants answering yes to question 2 were economic geology with 47% (48 of 103) and meteorology with 64% (23 of 36).
Doran & Zimmerman singled out economic geologists as having the lowest "consensus estimate." There were only 103 of them, though, out of a total of 3,146 respondents. That means economic geologists made up only ~3% of the respondents, and they gave the lowest "consensus estimate." And they are the ones Cook et al singled out for inclusion in their paper, labeling this group as expertise Category 1.
Cook et al also show results for meteorologists (36 people) and for publishing climate scientists who answered a particular question (77 people). That totals 216 people. 3,146 people took this survey, yet Cook et al only showed results for 216 (~7%) of them. And they conveniently picked one group, economic geologists, whose opinion holds no special relevance; it was simply the most extreme example they could show of non-experts having a lower consensus estimate than experts.
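For anyone who wants to check the shares quoted above, here is the arithmetic, using only the respondent counts reported in Doran & Zimmerman 2009:

```python
# Quick check of the percentages quoted above, using the counts
# reported in Doran & Zimmerman 2009.
total = 3146  # all survey respondents
shown = {
    "economic geologists": 103,
    "meteorologists": 36,
    "publishing climate scientists": 77,  # those who answered question 2
}

subtotal = sum(shown.values())
print(subtotal)                                 # 216
print(round(100 * 103 / total, 1))              # 3.3 -> the "~3%" above
print(round(100 * subtotal / total, 1))         # 6.9 -> the "~7%" above
print(round(100 * (total - subtotal) / total))  # 93 -> share left unreported
```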
One could perhaps understand this decision, as Doran & Zimmerman didn't publish results for each group in their paper; they only published the results for the three groups Cook et al used. Cook et al could perhaps have been forgiven, as they might simply not have had access to any other results they could have included.
If this were simply a case of not having access to results, Cook et al should have warned people their sample was non-representative, but otherwise could be forgiven. However, the author list of the Cook et al paper is:
John Cook, Naomi Oreskes, Peter T Doran, William R L Anderegg, Bart Verheggen, Ed W Maibach, J Stuart Carlton, Stephan Lewandowsky, Andrew G Skuce, Sarah A Green
Peter T. Doran was the lead author of Doran & Zimmerman 2009. That means the authors of Cook et al (2016) had access to the results for every group that responded to the Doran & Zimmerman survey. They simply chose not to include 93% of those responses, picking out only the ones that most supported their views.
Stenhouse et al 2014
Another three data points in this chart come from Stenhouse et al 2014. These include the data point with the smallest value, S141 at 46.2%, assigned to expertise Category 1. The group for this data point is "Non-publishers (climate science)." The other two data points are S142, for "Publishing (other)," and S143, for "Publishing climate." They are assigned to Categories 3 and 5 respectively, with "consensus estimates" of 80.5% and 87.9%. These are taken from this table in the paper:
The values can be found in the first three columns by summing together the first two entries in each column. These entries are for responses to the prompt:
Is global warming (GW) happening? If so, what is its cause?
One strange aspect of this is that the two answers summed together to get these results are "Yes; Mostly human" and "Yes; Equally human and natural." Combining these two categories means the consensus represented by these data points is not that humans are the main cause of global warming, but merely that they are one of the main causes.
Cook et al don't explain why they chose to group these two categories, nor do they explain how that consensus position compares to those examined in other studies. This seems strange because, if they had only looked at the consensus given by Stenhouse et al (2014) on the idea that humans are the main cause of global warming, their results would have dropped from 46.2%, 80.5% and 87.9% to 38%, 71% and 78%. That is not a trivial change.
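To make the effect of that grouping concrete, here is a small sketch. The "mostly human" figures are the ones quoted above; the "equally human and natural" shares are inferred by subtraction, since I'm not reproducing Stenhouse et al's table here.

```python
# "combined" is the value Cook et al plotted; "mostly_human" is what the
# stricter consensus position would have given. The implied "equally human
# and natural" share is recovered by subtraction.
groups = {
    "Non-publishers (climate science)": {"mostly_human": 38.0, "combined": 46.2},
    "Publishing (other)":               {"mostly_human": 71.0, "combined": 80.5},
    "Publishing climate":               {"mostly_human": 78.0, "combined": 87.9},
}

for name, g in groups.items():
    equally = g["combined"] - g["mostly_human"]
    print(f"{name}: {g['mostly_human']}% + {equally:.1f}% = {g['combined']}%")
```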
There is a bigger issue, though. These consensus estimates cover only three columns. The table contains nine columns of consensus estimates from various groups (plus a tenth for all groups combined). Cook et al simply chose not to report results for six of these groups.
Three of those groups are listed as having a field of expertise of "Meteorology & Atmospheric Science." Cook et al showed results for meteorologists from Doran & Zimmerman, so clearly, the views of these groups should be relevant (they are certainly more relevant than those of economic geologists). Why didn't Cook et al report them?
I don't know. What I do know is these categories include 61 people who have mostly published on climate change in the last five years, 501 people who have published some work on climate change in the last five years, and 641 people who haven't published on climate change in the last five years. The consensus estimates given for these groups on the idea humans are the main cause of global warming are 61%, 57% and 35%. If we include the second answer, that natural effects have caused as much global warming as humans, those numbers go up to 71%, 67% and 46%. And that doesn't even address the remaining four columns, which provide information about the views of people who are neither climate scientists nor meteorologists.
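For reference, here are those three columns collected in one place, along with a person-weighted average across all the meteorologists. That pooled figure is my own back-of-the-envelope number, not one taken from either paper:

```python
# The three Meteorology & Atmospheric Science columns summarized above:
# (description, respondents, % mostly human, % including "equally" answer)
met_columns = [
    ("mostly published on climate change", 61,  61, 71),
    ("published some on climate change",   501, 57, 67),
    ("not published on climate change",    641, 35, 46),
]

n_total = sum(n for _, n, _, _ in met_columns)
pooled = sum(n * comb for _, n, _, comb in met_columns) / n_total

# A back-of-the-envelope figure, not a number from either paper.
print(f"{n_total} meteorologists, pooled 'combined' consensus ~{pooled:.0f}%")
```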
There is a wealth of information here, and Cook et al ignore most of it. I have no explanation for why. What I do have is a problem. Cook et al rated the climate scientists of Column 2 as expertise Category 3, the same as the meteorologists in Doran & Zimmerman. Why? Because those climate scientists hadn't published predominantly on climate change in the last five years; that got them rated as equal in expertise to meteorologists. The climate scientists of Column 3 were rated as expertise Category 1 because they hadn't published on climate change at all in the last five years. That put them equal in expertise to economic geologists.
Given that, how would Columns 4-6 be rated? They are for people whose field of expertise is Meteorology & Atmospheric Science. Would they all be rated as expertise Category 3, like the meteorologists of Doran & Zimmerman, regardless of how much they had published on climate change in the last five years? Would a meteorologist who hasn't published on climate change in the last five years be put in Category 3 alongside climate scientists who have published only a little on climate change in that period? Or would this meteorologist be put in a lower category, unlike the meteorologists from Doran & Zimmerman 2009?
And what about the people whose field of expertise isn't climate science, meteorology or atmospheric science? It would take a little bit of arithmetic to separate out that group's responses given Columns 7-9, but it'd be easy enough to do (a sketch of that arithmetic appears at the end of this post). And even if it weren't, Edward Maibach is an author on both the Stenhouse et al and Cook et al papers. He could have obtained the results for those groups directly.
So why didn't he? Why didn't Cook et al show these results? I don't know. What I do know is that if these results had been included, Cook et al would have had to change the expertise levels assigned to at least some of their data points if they wanted any sort of consistency. And had they done that, included these six extra results from Stenhouse et al, and included who knows how many more results from Doran & Zimmerman, there's no telling what their chart would have looked like.
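As a footnote, here is the sort of arithmetic I mean. If one of Stenhouse et al's columns covers everyone in a publication category, and we know the counts and consensus values for the climate-scientist and meteorologist subsets, the remainder falls out by subtraction. All the numbers below are placeholders, since I'm not reproducing the table here:

```python
def remainder_consensus(total_n, total_pct, subsets):
    """Back out the consensus of the group left over after removing the
    given (count, percent) subsets from a column's total."""
    agree = total_n * total_pct / 100  # people agreeing in the full column
    n_left = total_n
    for n, pct in subsets:
        agree -= n * pct / 100         # remove each subset's agreers
        n_left -= n
    return 100 * agree / n_left

# Placeholder numbers only: a column of 1,000 people at 60% consensus,
# minus a 200-person subset at 90% and a 300-person subset at 70%.
print(remainder_consensus(1000, 60.0, [(200, 90.0), (300, 70.0)]))  # 42.0
```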