Hey guys. It's time to resume the series of posts I'm writing about a series of papers, and the PhD dissertation based on them, which got halted because I've been playing too many games of Rock, Paper, Scissors (if you want to know why I've been playing that, see here). Today I will be discussing how the results the authors published are not only based upon an inappropriate methodology, but also fail a basic sanity check.
A couple months ago I contacted a scientist asking to examine the data used in three papers which made up the bulk of her PhD dissertation. The initial response contained this:
Thank you very much for your email and interest in our publications.
We follow ethical guidelines from the American Psychological Association, and we are happy to share our data to other competent researchers. Would you please indicate your background and outline how you plan to use the data?
Which struck me as odd, as I have no idea how one would determine which people are "competent researchers." I was pessimistic about this response as it seemed like it might be used as an excuse for not sharing data with me, but fortunately, the issue of whether or not I am a "competent" researcher never came up again.
After examining the data for these three papers, I came to the conclusion the papers were fundamentally flawed in a way which invalidated their analysis and conclusions. I informed the author of this thesis of my concerns and tried to give her time to examine the issue privately. I believe several months was long enough, so now I'd like to discuss the matter in public. Hopefully, this will demonstrate I am in fact competent.
A little while back I wrote a post asking if something was an example of self-plagiarism. A person had written a media article about a year ago. I noticed the text of that article had been copied nearly verbatim into a larger paper published in a scientific journal. I was uncertain if this would be considered self-plagiarism since the text originated in a non-journal article.
The obvious solution to me was to see what the journal had to say on self-plagiarism. I tried looking online to see what their policy was, but I couldn't find a clear answer. As such, I contacted the journal to ask what its policy on self-plagiarism is in regard to matters like this. Today I'd like to review their ruling on the matter.
Getting back to our discussion of the newest paper by Stephan Lewandowsky and John Cook, I'd like to discuss something about the paper I have found troubling since day one. I didn't bring this up before because I wanted to contact the journal about it first. You see, the paper is titled:
The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism
I immediately recognized this title because it was similar to one I had seen before:
'Alice through the Looking Glass' mechanics: the rejection of (climate) science
This is the title for a media article Lewandowsky published on October 23rd, 2015. Its text was copied nearly verbatim into the new paper. Today, I'd like to discuss whether or not that qualifies as self-plagiarism.
I'm growing a bit tired of repeating the same point over and over in regard to the recent paper by John Cook and Stephan Lewandowsky (that they repeatedly call things contradictions even though they are not), so I decided it would be a good time to take a break and discuss something else that has been bugging me. You guys may remember this tweet:
Which wasn't actually written by Barack Obama or by anyone representing him. The group using his name for the Twitter account is Organizing for Action, a non-profit advocacy group which explicitly denies any affiliation with any government. When asked, "Is OFA affiliated in any way with the federal or any other government, or funded with taxpayer dollars," the group says, "No."
Combined with the fact the account's profile says:
This account is run by Organizing for Action staff. Tweets from the President are signed -bo.
It should be clear President Obama had nothing to do with this tweet. Despite that, John Cook wrote this in his doctoral thesis:
Consequently, our study received a significant amount of media attention, including a number of tweets by President Obama (Cook, Bedford, & Mandia, 2014).
For today's post, I would like to discuss whether or not this was a lie.
Yesterday's post focused on Table 1 of a recent paper by John Cook and Stephan Lewandowsky named "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism." This came after a post focusing on Table 2 of the paper. These posts focused on these two tables because there are no other figures or tables in the paper, causing these two tables to have the largest visual impact.
It was suggested to me I was unfair in pointing out the authors offered absolutely no evidence anyone believes the contradictions in Table 1 exist, or even that the stated beliefs are contradictory. The reason given was that the authors did provide seven examples in their text, with arguments and sources to support them. There are only seven of these examples, however, whereas Table 1 is described as:
Over one hundred incoherent pairs of arguments can be found in contrarian discourse. (See www.skepticalscience.com/contradictions.php). In this article, we have explored a representative sample in some detail. For further illustration we show several other incoherent arguments in Table 1. Each of the arguments in the table is subject to the same critical analysis as the examples in the preceding sections.
Table 1 had some 20 different examples listed, and the text discussing it referred to there being over 100 examples in total. That seemed the most relevant topic to discuss. After all, even if all seven points of contradiction discussed in the body of the paper were real, that is only seven points on which various global warming skeptics disagree. That's hardly "incoherent." You could find just as many points of disagreement on most scientific issues.
Still, it is worth discussing those seven examples. As such, I will do so in today's post.
In our last post, we looked at how a recent paper by the proprietor of the Skeptical Science website, a man named John Cook (and two co-authors), claimed global warming skeptics hold "incoherent" beliefs by grossly misrepresenting and distorting a variety of quotes.
Specifically, Table 2 of the paper provided quotations from several different skeptics which supposedly showed those skeptics contradicting themselves. This was a key issue for the paper, which was titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism" based on the well-known quote from the story Alice in Wonderland:
“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
This is the key concept for the paper. Its entire argument rests on the idea skeptics hold "incoherent" beliefs because they are willing and able to hold contradictory beliefs at the same time. The evidence the authors offer to support this claim is bogus, though. We can tell just by looking at Nazis.
As I discussed in the last post, a new paper by John Cook and Stephan Lewandowsky titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism" has a number of problems, including Cook falsely claiming his own work and the work of others shows there is a consensus global warming is a "global problem." Cook and his co-authors know full well none of the work they cite shows anything of the sort.
Another issue I commented on is how the paper claims global warming "contrarians" have incoherent belief systems in which they are content to believe contradictory things. This concept is founded on a paper by Michael Wood in which he misused basic statistical tests to draw conclusions about groups of people he had no data for. Lewandowsky has used this same bogus statistical approach in papers to portray global warming skeptics as conspiracy nuts, even when his subjects overwhelmingly said they didn't believe in the conspiracies he smeared them with.
A related issue is how these authors give specific examples of how "contrarians" supposedly contradict themselves. In the previous post, I pointed out one key problem with this: the paper cites arguments from different people. That two different "contrarians" might hold contradictory beliefs is completely uninformative. Even climate scientists hold contradictory beliefs. It's called disagreement. It's a normal part of life.
Given that, the only real basis for this paper's headline is the set of examples where an individual supposedly contradicts himself. I discussed the headline example used in the paper in that last post, but today, I'm going to discuss a few of the other ones the authors offer.
Today I wasted $15. I had seen this tweet by Skeptical Science team member Andy Skuce:
So naturally, I took a look at the paper he's promoting. The paper begins with two quotes:
“CO2 keeps our planet warm ....”
— Ian Plimer, Australian climate “skeptic”, Heaven & Earth, p. 411
“Temperature and CO2 are not connected.”
— Ian Plimer, Australian climate “skeptic”, Heaven & Earth, p. 278
It makes hay of how these two quotes are contradictory, presenting them as a perfect example of how "contrarians" will believe multiple, contradictory things at the same time. This is a common meme people like Stephan Lewandowsky and John Cook have been trying to spread, and there is a history of them using completely bogus "evidence" to make their case.
Given that, I decided to check the quotations for myself. I needn't have bothered though. It turns out the issue here is exactly what you would likely expect. So you don't have to spend $15 yourself, I'll explain.
We've been discussing a strange chart from a recent paper published by John Cook of Skeptical Science and many others. The chart ostensibly shows the consensus on global warming increases with a person's expertise. To "prove" this claim, Cook et al assigned "expertise" levels to a variety of "consensus estimates" they took from various papers. You can see the results in the chart below, to which I've added lines to show each category:
As you can see, the "consensus estimates" are all plotted one next to the other, without concern for how the categories are spaced. The result is Category 4 doesn't exist in the chart, and Category 5 covers more than half of the chart. This creates a vastly distorted impression of the results.
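To see why this spacing matters, here is a minimal sketch (using hypothetical numbers, not the paper's actual estimates): when every estimate is plotted one next to the other, a category's share of the horizontal axis depends only on how many estimates happen to fall in it, so a category with no estimates vanishes from the axis entirely while a heavily populated category dominates it.

```python
def axis_share(categories, target):
    """Fraction of the horizontal axis a category occupies when estimates
    are plotted one-next-to-the-other (equal spacing per estimate),
    rather than at positions proportional to the category values."""
    return sum(1 for c in categories if c == target) / len(categories)

# Hypothetical expertise categories for ten estimates; note none fall in category 4.
cats = [1, 1, 2, 3, 5, 5, 5, 5, 5, 5]

print(axis_share(cats, 5))  # 0.6 -> category 5 spans 60% of the axis
print(axis_share(cats, 4))  # 0.0 -> category 4 does not appear at all
```

Plotted at positions proportional to the category values instead, each of the five categories would occupy a fixed, equal stretch of the axis regardless of how many estimates it contains.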
But while that is a damning problem in and of itself, there is much more wrong with the chart and corresponding paper. One of the key issues we've been looking at in this series is how Cook et al arbitrarily chose which results to report from the studies they examined and which to leave out. Today I'd like to discuss one of the most severe cases of this. It deals with the paper Verheggen et al (2014).