Consensus Chart Craziness - Part 2

My last post highlighted a couple of peculiarities in the chart of a new paper, Cook et al (2016). This chart ostensibly shows that as climate science expertise increases, one becomes more likely to endorse the "consensus" position on global warming. There are a number of problems with this chart, the most fundamental of which I highlighted by adding lines to it:

[Figure: 4_13_scaling_example]

To show where each of the five categories used to represent "expertise" falls. As you can see, multiple points with the same x-value are plotted side by side, causing the categories to be unevenly spaced. As a result, Category 5 covers more than half the chart while Category 4 doesn't even appear. This is highly unusual. Had the data been displayed in a normal manner, the result would have been something like:

[Figure: 4_13_scaling_proper]

Which does not give the strong visual effect Cook et al (2016) gave in their chart. Additionally, there appear to be a number of problems with the data used in creating this figure. As I discussed in the last post in this series, Cook et al (2016) give two "consensus estimates" from one paper, Carlton et al (2015), as such:

[Figure: 4_20_Carlton_2]

And say:

Carlton et al (2015) adapted questions from Doran and Zimmerman (2009) to survey 698 biophysical scientists across various disciplines, finding that 91.9% of them agreed that (1) mean global temperatures have generally risen compared with pre-1800s levels and that (2) human activity is a significant contributing factor in changing mean global temperatures. Among the 306 who indicated that 'the majority of my research concerns climate change or the impacts of climate change', there was 96.7% consensus on the existence of AGW.

Even though Carlton et al (2015) clearly state only 5.50% of their respondents said, "The majority of my research concerns climate change or the impacts of climate change." Basic arithmetic shows you would need over 5,000 respondents, not fewer than 700, for 5.50% of them to come to 306. That makes it clear the 306 value used by Cook et al (2016) is wrong. However, there are more problems, and I intend to discuss some of them in this post.
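For anyone who wants to check the arithmetic themselves, here is a minimal sketch in Python. The variable names are my own; the only inputs are the figures quoted above (698 respondents, 5.50%, and Cook et al's claimed 306):

```python
# Sanity check on the Carlton et al (2015) figures as used by Cook et al (2016).
# 698 = reported survey sample, 5.50% = share saying the majority of their research
# concerns climate change, 306 = subgroup size claimed by Cook et al (2016).

total_respondents = 698
climate_share = 0.0550
claimed_subgroup = 306

print(climate_share * total_respondents)   # ~38.4 people, nowhere near 306
print(claimed_subgroup / climate_share)    # ~5563.6 respondents needed for 306 to be 5.50%
```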

Before I move on to other issues though, I want to quote a commenter (HaroldW) who wrote this on my last post:

I was bothered by the combination of the survey response number (N=698) and the helpfully very precise breakdown of the research area response (5.50% / 42.45% / 50.04%). As you wrote, 5.50% of 698 is *about* 38; in fact it's 38.39. One naturally expects to see an integer here. Fiddling with the number of responses N, I inferred from the percentages that the number of responses to this question (Q25) was N=636, with response distribution (35 / 270 / 331 ).

The 50.04% was a typo; he meant 52.04%. Otherwise, I believe he's spot on. It appears 62 of the people surveyed by Carlton et al did not answer this question, with 35 of the remaining 636 answering that the majority of their research concerns climate change or the impacts of climate change. Where Cook et al got the idea 306 people responded that way is beyond me. This is particularly strange as J Stuart Carlton, lead author of Carlton et al (2015), was an author on Cook et al (2016).
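HaroldW's inference can be reproduced with a short brute-force search. This is my own sketch, not his actual method: it tries every possible question-level sample size up to the full 698 and keeps the ones whose integer response counts round back to the reported 5.50% / 42.45% / 52.04% breakdown. N=636 with counts (35, 270, 331) comes out as a match, consistent with his numbers:

```python
# Brute-force reconstruction of the question-level sample size in Carlton et al (2015).
# Look for an N (up to the full sample of 698) whose integer response counts reproduce
# the reported 5.50% / 42.45% / 52.04% breakdown when rounded to two decimals.

reported_percentages = (5.50, 42.45, 52.04)

for n in range(1, 699):
    counts = [round(p / 100 * n) for p in reported_percentages]
    if sum(counts) != n:
        continue
    if all(round(c / n * 100, 2) == p for c, p in zip(counts, reported_percentages)):
        print(n, counts)   # 636 [35, 270, 331] shows up here, matching HaroldW's inference
```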

Expertise of Carlton et al (2015) Respondents

Another peculiarity arises from how Cook et al (2016) assign what they call expertise ratings to each "consensus estimate" they used in this figure. The figure above shows they rated the 35/306 people in question as being in the highest category, Category 5. The 698 people as a whole were rated as Category 3. These are the sorts of fields the 698 people work in:

[Figure: 4_22_Carlton3]

Astronomy, chemistry, engineering, physics, biological sciences. These are not fields one would associate very heavily with global warming. I am sure there is some overlap, but a lot of these people almost certainly have no real training, knowledge or expertise involving climate science.

If any random chemist falls in expertise Category 3, and people who do most of their research on climate science fall in expertise Category 5, who falls in Category 4? Nobody. Category 4 didn't get used. That would seem to mean the highest category is for people who spend most of their time on climate science, with the next highest category actually used being for people who are involved in any sort of physical science.

We can check this impression by examining other consensus estimates placed in Category 3. They are:

Doran & Zimmerman 2009 - Meteorologists
Stenhouse et al 2014 - Publishing (other)

A person might reasonably expect that meteorologists, who have to learn things like atmospheric science, would have a greater level of expertise regarding climate science than the average chemist or physicist. No explanation is given for why Cook et al (2016) feel that isn't the case.

The other line is peculiar as well. There were nine different categories in Stenhouse et al 2014, but for some reason Cook et al only reported results for three. I'll have more to say about this in a future post, but for now, what matters is that Stenhouse et al surveyed members of the American Meteorological Society (AMS) and explain:

Table 1 shows the proportion of survey respondents—divided by their area of expertise (climate change vs meteorology and atmospheric science) and their publishing record (publishing mostly on climate change vs publishing mostly on other topics vs nonpublishing)—who report each of several different views on whether global warming is happening and what is causing it.

Examining Table 1 shows the result being used for Category 3 by Cook et al has the area of expertise listed as "Climate Science" with the publication focus being "Mostly Other." They only looked at publication history for the last five years in this paper, meaning we're talking about people whose area of expertise is climate change who have published mostly on other topics in the last five years.

For Doran & Zimmerman (2009), meteorologists were rated as Category 3. For Stenhouse et al (2014), people whose expertise is climate science but who haven't focused on publishing on it in the last five years were rated as Category 3. For Carlton et al (2015), chemists, physicists, engineers and whatnot were rated as Category 3. There seems to be no consistency here.

It gets even stranger when you look at the other categories. Category 2 has this entry:

Farnsworth and Lichter 2012 - AMS/AGU members

Here we see the AMS again, but we also see the American Geophysical Union (AGU). Geophysics includes things like the study of fluid dynamics in the atmosphere and oceans, things that are very important for climate science. It also includes things like plate tectonics, which has much less bearing on climate science. Perhaps that's why Cook et al (2016) rates AMS/AGU members as belonging in Category 2.

But are chemists, engineers and physicists really greater experts on climate science than AMS/AGU members? Stenhouse et al (2014) shows AMS members include climate scientists. What about meteorologists as a whole? Doran & Zimmerman (2009) had meteorologists as a whole placed in Category 3, but Stenhouse et al (2014) didn't even report any results from the American Meteorological Society for its non-climate scientist members.

That's particularly weird as Cook et al (2016) lists Stenhouse et al (2014) as having a consensus estimate with Expertise Level 1:

Stenhouse et al 2014 - Non-publishers (climate science)

Remember, Stenhouse et al (2014) only looked at publication history for the last five years. That means anyone whose area of expertise is climate science but who hasn't published on it in the last five years is rated as having the lowest level of expertise possible. They are rated as having a lower level of expertise than meteorologists, chemists, engineers, astronomers and who knows what else. Are we supposed to believe that if you don't publish on climate science for five years, you forget so much or fall so far behind that a person who spends his days looking at the stars is far more of an expert than you?

While thinking about that, take a look at Table 1 of Stenhouse et al (2014). There are three columns in the Climate Science section. Those are what were reported by Cook et al (2016). There are six more categories though. Three are for AMS members whose field of expertise is Meteorology & Atmospheric Science, and the other three are for AMS members as a whole, including several hundred who didn't say their field of expertise is climate science, meteorology or atmospheric science.

What would those consensus estimates be rated as? We don't know, as Cook et al (2016) somehow ignored them. There doesn't seem to be a plausible answer though. A climate scientist who hasn't published on the topic in six years is Expertise Level 1, and meteorologists are Expertise Level 3. AMS and AGU members are Expertise Level 2, but AMS members who are climate scientists can be Expertise Level 5 if they've published mostly on climate science over the last five years. And apparently, if you're something like an engineer or astronomer, you're more of an expert than AMS/AGU members and as much of an expert as many climate scientists.

It gets even more bizarre if you look at the other consensus estimates rated as Expertise Level 5, but that's a topic for another post. The point of this post is just to highlight how meaningless the result given for Carlton et al (2015) is. The number of people Cook et al (2016) claims provide this consensus estimate seems completely off, with the real result being something like 35 instead of their claimed 306.

As for the expertise level assigned to the Carlton et al (2015) estimates, that doesn't seem meaningful either, as there doesn't appear to be any rhyme or reason to how these expertise levels were assigned. There's certainly no rubric or guidelines given to help people understand how to interpret these results.

On a final note, the 35/306 people rated as Expertise Level 5 for the Carlton et al (2015) paper are part of the total 698-person sample rated as Category 3. I don't know if the authors intended to double-count them like that, but since they don't describe or explain how they chose which results to use and/or which expertise levels to assign, it's worth taking note of.

For the next post in this series, I'll be taking another look at that Stenhouse et al (2014) paper and how John Cook and his colleagues managed to show only three of the nine consensus estimates it lists. Oh, and as a teaser, they may have kind of fudged the results they did show as well.
