Last week, I demonstrated that the mathematics behind calculating "correlation scores" is relatively simple. Today, I'd like to look at some of the steps involved to better understand what correlation scores actually mean.
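Since this series will lean on that math, here is a minimal sketch of what such a score (a Pearson correlation, the most common kind) actually computes. The function name and sample data below are mine, invented purely for illustration:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: the covariance of the two variables,
    # scaled by the product of their standard deviations.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Toy data: ys is exactly twice xs, so r comes out at (or within
# floating-point error of) 1.0, a perfect positive correlation.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

A score near +1 means the two variables rise together, near -1 means one falls as the other rises, and near 0 means no linear relationship. That is all the number itself ever tells you.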
I've been struggling (a lot) with a series of posts I'm trying to write, and I recently realized the problem is I need to start at the beginning. These posts are supposed to be about how "correlation scores" are being misused and abused within the scientific community. The problem is, what are "correlation scores"? That's where we'll begin today.
People familiar with my writing know I have discussed work by a man named Stephan Lewandowsky quite a bit. The short version of the discussion is that he has behaved unethically, published false statements and, most importantly, generated bogus results by misusing some relatively simple mathematics.
I'm not the only person to say so, but the discussion has been spread out across many locations over several years. Today, I'd like to start collecting that information into a single resource, beginning with a discussion of the gross misuse of simple statistics.
Whatever one may believe about Lewandowsky and his behavior, the indisputable truth is that the methodology he relied upon in several papers fabricates results because of how he misused it. The results he published are completely and utterly without merit.
Hey guys. It's time to resume the series of posts I'm writing about a series of papers, and a PhD dissertation based on them, which got stalled because I've been playing too many games of Rock, Paper, Scissors (if you want to know why I've been playing that, see here). Today I will be discussing how the results the authors published are not only based upon an inappropriate methodology but also fail a basic sanity check.
A couple months ago I contacted a scientist asking to examine the data used in three papers which made up the bulk of her PhD dissertation. The initial response contained this:
Thank you very much for your email and interest in our publications.
We follow ethical guidelines from the American Psychological Association, and we are happy to share our data to other competent researchers. Would you please indicate your background and outline how you plan to use the data?
This struck me as odd, as I have no idea how one would determine which people are "competent researchers." I was pessimistic about this response since it seemed like it might be used as an excuse for not sharing data with me, but fortunately, the issue of whether or not I am a "competent" researcher never came up again.
After examining the data for these three papers, I came to the conclusion the papers were fundamentally flawed in a way which invalidated their analysis and conclusions. I informed the author of this thesis of my concerns and tried to give her time to examine the issue privately. I believe several months was long enough so now I'd like to discuss the matter in public. Hopefully, this will demonstrate I am in fact competent.
A little while back I wrote a post asking if something was an example of self-plagiarism. A person had written a media article about a year ago. I noticed the text of that article had been copied nearly verbatim into a larger paper published in a scientific journal. I was uncertain if this would be considered self-plagiarism since the text originated in a non-journal article.
The obvious solution, to me, was to see what the journal had to say on self-plagiarism. I tried looking online for their policy but couldn't find a clear answer, so I contacted the journal to ask what their policy on self-plagiarism is in matters like this. Today I'd like to review their ruling on the matter.
Getting back to our discussion of the newest paper by Stephan Lewandowsky and John Cook, I'd like to discuss something about the paper I have found troubling since day one. I didn't bring this up before because I wanted to contact the journal about it first. You see, the paper is titled:
The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism
I immediately recognized this title because it was similar to one I had seen before:
'Alice through the Looking Glass' mechanics: the rejection of (climate) science
This is the title of a media article Lewandowsky published on October 23rd, 2015. Its text was copied nearly verbatim into the new paper. Today, I'd like to discuss whether or not that qualifies as self-plagiarism.
Yesterday's post focused on Table 1 of a recent paper by John Cook and Stephan Lewandowsky titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism." This came after a post focusing on Table 2 of the paper. I focused on these two tables because there are no other figures or tables in the paper, giving them the largest visual impact.
It was suggested I was being unfair in pointing out the authors offered absolutely no evidence anyone believes the contradictions in Table 1 exist, or even that the stated beliefs are contradictory, because the authors did give seven examples in their text, with arguments and sources to support them. There are only seven such examples, however, whereas Table 1 is described as:
Over one hundred incoherent pairs of arguments can be found in contrarian discourse. (See www.skepticalscience.com/contradictions.php). In this article, we have explored a representative sample in some detail. For further illustration we show several other incoherent arguments in Table 1. Each of the arguments in the table is subject to the same critical analysis as the examples in the preceding sections.
Table 1 had some 20 different examples listed, and the text discussing it referred to there being over 100 examples in total. That seemed the most relevant topic to discuss. After all, even if all seven points of contradiction discussed in the body of the paper were real, that is only seven points on which various global warming skeptics disagree. That's hardly "incoherent." You could find just as many points of disagreement on most scientific issues.
Still, it is worth discussing those seven examples. As such, I will do so in today's post.
In our last post, we looked at how a recent paper by the proprietor of the Skeptical Science website, a man named John Cook (and two co-authors), claimed global warming skeptics hold "incoherent" beliefs by grossly misrepresenting and distorting a variety of quotes.
Specifically, Table 2 of the paper provided quotations from several different skeptics which supposedly showed those skeptics contradicting themselves. This was a key issue for the paper, which was titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism" based on the well-known quote from the story Alice in Wonderland:
“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
This is the key concept of the paper. Its entire premise rests on the idea that skeptics hold "incoherent" beliefs because they are willing and able to hold contradictory beliefs at the same time. The evidence offered to support this claim is bogus though. We can tell just by looking at Nazis.
As I discussed in the last post, a new paper by John Cook and Stephan Lewandowsky titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism" has a number of problems, including Cook falsely claiming his own work and the work of others shows there is a consensus that global warming is a "global problem." Cook and his co-authors know full well none of the work they cite shows anything of the sort.
Another issue I commented on is how the paper claims global warming "contrarians" have incoherent belief systems in which they are content to believe contradictory things. This concept is founded on a paper by Michael Wood, in which he misused basic statistical tests to draw conclusions about groups of people he had no data for. Lewandowsky has used this same bogus approach in his own papers to portray global warming skeptics as conspiracy nuts, even when his subjects overwhelmingly said they didn't believe in the conspiracies he smeared them with.
A related issue is how these authors give specific examples of how "contrarians" supposedly contradict themselves. In the previous post, I pointed out one key problem with this: the paper cites arguments from different people. That two different "contrarians" might hold contradictory beliefs is completely uninformative. Even climate scientists hold contradictory beliefs. It's called disagreement. It's a normal part of life.
Given that, the only real basis for the paper's headline claim is the set of examples where an individual supposedly contradicts himself. I discussed the paper's headline example in that last post; today, I'm going to discuss a few of the others the authors offer.