Hey guys. It's time to resume the series of posts I've been writing about a series of papers, and a PhD dissertation based on them, which got halted because I've been playing too many games of Rock, Paper, Scissors (if you want to know why I've been playing that, see here). Today I will be discussing how the results the authors published are not only based upon an inappropriate methodology, but also fail a basic sanity check.
Let's do a quick recap since it's been a little while since I discussed this subject. Here is how I began my last post about this topic:
I've written a post titled, "Correlation is Meaningless" once before. It makes the same basic point I made in a recent post discussing the PhD dissertation by one Kirsti Jylhä. I'm going to continue my discussion of Jylhä's work today to examine more of a phenomenon where people misuse simple statistics to come up with all sorts of bogus results. In Jylhä's case, it undercuts much of the value of her PhD.
To briefly recap the last post on this topic, the first paper Jylha relies on for her thesis has two studies in it. The results of the first study are given by this table in the paper:
As well as a discussion of a follow-up analysis. For now, I want to continue focusing on this first table. It supposedly demonstrates a statistically significant relationship between "climate change denial" and various political/social ideologies. However, when we visualize the data, we see this relationship is an artifact. Here is the data plotted to show the supposed relationship between "climate change denial" and "social dominance orientation," an ideology which accepts inequality amongst groups of people:
The data has a small jitter value added to it to enable us to see data density. As this graph shows, there is no meaningful relationship between climate change denial and social dominance orientation. People who strongly believe in global warming strongly oppose inequality, but there is no evidence the opposite is true.
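For anyone unfamiliar with the term, "jitter" just means adding a small random offset to each plotted point so identical survey responses don't stack on top of one another. A minimal sketch (the offset width is arbitrary, and the responses are invented for illustration):

```python
# "Jitter" nudges each value by a small random offset so that
# identical Likert responses become visible as separate points.
import random

random.seed(1)

def jitter(values, width=0.15):
    """Return values nudged by a uniform offset in [-width, width]."""
    return [v + random.uniform(-width, width) for v in values]

responses = [1, 1, 1, 2, 2, 5]   # hypothetical Likert answers
print(jitter(responses))
```

The offsets are purely cosmetic; any analysis is still done on the original values.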
The reason the authors can claim there is a "statistically significant" correlation between these two traits is they collected almost no data from anyone who "denies" climate change. The approach the authors have taken is to draw a line through their data, which is how you normally calculate the relationship between two variables, then extrapolate it out far beyond where their data extends.
There are a lot of ways of describing this approach. When I've previously said correlation is meaningless, I used an example in which I demonstrated a "statistically significant" correlation between belief in global warming and support for genocide. It was completely bogus. I was able to do it because I used the same approach the authors used. Namely:
1) Collect data for any group of people.
2) Determine views that group holds.
3) Find a group which is "opposite" the group you study.
4) Assume they must hold the opposite view of the group you studied on every issue.
This will work with literally any subject and any group of people. You can reach basically any conclusion you want because this approach doesn't require you have any data for the group of people you're drawing conclusions about.
This methodology is simple. If you want to say a group of people you disagree with is terrible, ask anyone but them questions, then assume they'd give the opposite answer. That's all there is to it. You dislike conservatives? Ask liberals questions. You dislike women? Ask men questions. You dislike black people? Ask white people questions.
That's all you have to do. It's irrational and inappropriate, but you can get tens, if not hundreds, of thousands of dollars in grant funding to do it. All you have to do is ask liberals if they like ice cream. When they say yes, you can conclude conservatives are the opposite of liberals and write a paper claiming to "prove" conservatives are evil jerks who hate ice cream.
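To make the ice-cream version concrete, here's a toy simulation of those four steps. Every number here is invented for illustration (hypothetical 1-7 scales, not the paper's data): sample only people at one end of a political scale, fit an ordinary least-squares line, then read off a "prediction" for the other end.

```python
# Fit a line on data drawn almost entirely from one end of a scale,
# then extrapolate far outside the observed range. All numbers are
# invented; this is not the paper's data.
import random
import statistics

random.seed(0)

# Hypothetical 1-7 political scale; nearly everyone sampled is a 1-3.
political = [random.choice([1, 2, 2, 3]) for _ in range(100)]
# "Likes ice cream" on a 1-7 scale, unrelated to politics by design.
ice_cream = [random.gauss(6, 0.8) for _ in political]

# Ordinary least-squares slope and intercept, computed by hand.
mx, my = statistics.mean(political), statistics.mean(ice_cream)
slope = sum((x - mx) * (y - my) for x, y in zip(political, ice_cream)) / \
        sum((x - mx) ** 2 for x in political)
intercept = my - slope * mx

# Extrapolate to political = 7, far beyond any observed respondent.
prediction = intercept + slope * 7
print(f"observed political range: {min(political)}-{max(political)}")
print(f"'predicted' ice-cream score at 7: {prediction:.2f}")
```

The fitted line says whatever noise in the sampled cluster happens to say, and the "prediction" at 7 rests on exactly zero data from anyone past 3 on the scale.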
I'm not going to harp on that any more today. It's crazy, but let's move past the insanity and look at the rest of this work. Let's even look past the impossible results the authors published. Let's look past all that so we can see just how strange this sort of work is.
To begin, let's consider the experiment design the authors came up with for the second study they discuss in their paper:
The climate change denial (α = .92) and political orientation measures were the same as in Study 1. For the RWA and SDO scales, we aimed to conduct pre- and post-measurement of the concepts. Therefore, we split each scale in three parcels and used parcel 1 and 2 for the pre- and parcel 2 and 3 for the post-manipulation measurement. Cronbach's alphas for pre-/post measurement were .69/.69 (9/10 items) and .90/.91 (11/11 items) for RWA and SDO respectively. Response alternatives for all items were the same as Study 1.
In the newscast condition, a climate-related video was shown. In order to ensure high ecological validity, we used a newscast which was originally shown in the national television (September 27, 2013). The video, 2.46 min long, communicated findings that are presented in the most recent IPCC report. In the video, researchers expressed that the conclusions in this rapport are more certain compared to those in the previous one. They particularly focused on the human impact on climate change and the importance of mitigation in order to prevent a rise above 2 degrees Celsius globally.
In the control condition, participants were given a word sorting task instead of watching the video. This was done in order to ensure equal study length across the two conditions. In the task, five separate lists containing four words were presented on the computer screen. Participants were asked to determine which word was unrelated to the other words (example of a word list: digital, analog, clock, computer). The words were selected so that they would be unrelated to climate and emotionally neutral. To control for the potential effects that the task might have on the outcomes specifically in the control condition, an additional sorting task was done in both conditions. This task had the same rationale as the first sorting task.
The test is fairly simple. The authors decided to look at how global warming skepticism correlates to political orientation, "right wing authoritarianism" and "social dominance orientation." As in the first study contained in the paper, discussed in our previous posts, the authors have practically no data from global warming skeptics, so their results are without merit (and due solely to the authors' use of a bogus methodology).
Try to look past that though. You see, the test the authors performed is somewhat interesting. The authors decided to examine how people's views on things change when presented with a video promoting mainstream global warming views. Such a video might be expected to change people's views on global warming, but what about issues like social equality? As the authors say:
The second aim of this study was to investigate if a newscast, that communicates evidence for climate change, has an impact on the levels of climate change denial or on the relation between denial and ideology. The results showed that climate change denial was significantly lower in the newscast compared to the control condition. Importantly, the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations. This relation could be seen as the source of stability of climate change denial or, more generally, environmental attitudes.
There's an important point I managed to miss the first few times I read this paper. Look at this portion of that paragraph:
The results showed that climate change denial was significantly lower in the newscast compared to the control condition.
The "newscast" refers to the group of people the researchers had watch the video promoting mainstream global warming views. As the paper notes, this group had "significantly lower" climate change denial after watching the video than the group who didn't watch it. Put in numbers, the post-manipulation averages were 1.79 for the newscast group and 2.19 for the control group. This is a change from the averages before watching the video of...
Well, um... I don't know what those averages were. The authors explain:
Participants completed the study online and began by responding to RWA, SDO, and political orientation (in that order). They then either watched the video (newscast condition) or conducted a word sorting task (control condition). Subsequently, participants in the newscast condition received a word sorting task and those in the control condition received another word sorting task. After the sorting task, participants responded to items measuring climate change denial followed by the second set of SDO and RWA items. Participants received a scratch card as a reward (3.5€).
Do you spot the problem? It's okay if you don't. It's easy to miss. I missed it myself. I only noticed it when I looked at the data. I wanted to see how people's views on global warming changed after seeing the video so I loaded up the intermediary results provided to me by Jylhä (despite requesting it, I was not provided the raw data) and saw the columns in it went:
There were more columns, and I spent a little while trying to figure out where the "Beforemanipulation" and "aftermanipulation" versions of the "Climate_change_denial" and "Political_orientation" values were. I assumed I was missing something obvious. The authors claimed to show watching a video changed people's views on global warming. Obviously, they would have asked people about their views on global warming before and after watching the video, right? Wrong. Reread what they said:
Participants completed the study online and began by responding to RWA, SDO, and political orientation (in that order).
According to this description, the authors didn't bother to ask the research participants what their views on global warming were before showing them a video about global warming. They only asked the participants about global warming after some of them watched this video.
Why? Who knows. The authors don't say. The authors don't explain why they asked the research participants one set of questions before showing a video then asked a different set of questions after showing the video. I'm not going to try to figure it out. It's bizarre. It's wrong. It's also maybe not entirely relevant. After all:
The analyses revealed significant differences between the two conditions in denial only [t(99) = 2.69, p = .008] (see Table 2). Specifically, denial was significantly lower in the newscast (M = 1.79, SD = 0.60) compared to the control (M = 2.19, SD = 0.89) condition, suggesting that the newscast effectively decreased the levels of denial.
The control group and the experimental group did not have "significant differences" except in regard to views on global warming. This could, perhaps, prove a point even though the researchers chose not to do the obvious thing of asking both groups for their views on global warming prior to the experiment.
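As a sanity check, the reported statistic can be roughly reproduced from the quoted means and standard deviations with a pooled-variance t-test. The group sizes are my assumption: t(99) implies 101 participants total for this kind of test, so I've guessed a 50/51 split.

```python
# Reproducing the reported [t(99) = 2.69] from the quoted summary
# statistics. Group sizes are an assumption: df = 99 implies
# n1 + n2 = 101 for a pooled-variance test; the 50/51 split is a guess.
import math

m1, sd1, n1 = 1.79, 0.60, 50   # newscast condition (n assumed)
m2, sd2, n2 = 2.19, 0.89, 51   # control condition (n assumed)

# Pooled variance, df = n1 + n2 - 2 = 99.
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (m2 - m1) / se
print(f"t({n1 + n2 - 2}) = {t:.2f}")   # → t(99) = 2.64, close to the reported 2.69
```

The small gap from the published 2.69 is plausibly rounding in the reported means and SDs. Note what this calculation assumes: normally distributed groups, which ~50 Likert-scale responses clustered at one end of the scale are not.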
I could raise questions about the math the authors use to determine what is a "significant difference." I mean, their math uses probability distribution functions based on groups of only ~50 people, groups whose data is not even close to normally distributed. I don't want to do that though. The math is interesting but not that important. What is important is the authors say:
Further, we examined the relation between the ideology variables and climate change denial in both conditions. We found significant zero-order correlations between the ideology variables and climate change denial in both conditions with one exception – the relation between RWA and denial did not reach the conventional significance level in the Newscast condition (see Table 2). This outcome is roughly in line with the results of Study 1 and our predictions.
Notice they say they found "significant" correlations between the variables with the exception of "the relation between RWA and [climate change] denial." That's missing an important detail - is it before or after the research participants watched the video?
Prior to watching (or not watching) the video, the control and experimental group should be the same. Both groups are just people who signed up to participate in this study. They're the same as one another until the experiment (watching the video or not) is performed. So when the authors say there's a "significant" difference between the groups, we would expect that to refer to the groups after the experiment is performed.
That's not the case. Here are the authors' results:
Take note of how there are two sections: Pre-manipulation and post-manipulation. The pre-manipulation section shows results prior to performing any experiment. The post-manipulation section shows results after performing the experiment.
As the authors noted, there is a "statistically significant" correlation between "right-wing authoritarianism" and "climate change denial" in the control group but not in the experimental group. Naturally, one might think the experiment caused the correlation between these two variables to change. That's not the case.
Look at which section did not have a "statistically significant" correlation between these two variables. It's the first one. It's the "pre-manipulation" section. That is, prior to performing any experiment on the research participants, the control group showed a "statistically significant" correlation of .40 between RWA and climate change denial while the experimental group showed a correlation of only .20, which was not statistically significant. That's why the authors say:
We found significant zero-order correlations between the ideology variables and climate change denial in both conditions with one exception – the relation between RWA and denial did not reach the conventional significance level in the Newscast condition
I'm not sure why the authors fail to highlight the fact this difference only exists in the pre-manipulation stage, at which point the experimental and control groups have been treated exactly the same. Personally, I think it should worry you if your correlation score can double from .20 to .40 based solely upon the chance of which group is picked as the control group.
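In fact, with samples this small, a gap like .20 versus .40 is well within sampling noise. A standard Fisher r-to-z comparison makes the point (the ~50 people per condition is my assumption, based on the reported t(99)):

```python
# Fisher r-to-z test for the difference between two independent
# correlations. Sample sizes of 50 per group are an assumption.
import math

def fisher_z_diff(r1, n1, r2, n2):
    """z-statistic for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z2 - z1) / se

z = fisher_z_diff(0.20, 50, 0.40, 50)
print(f"z = {z:.2f}")   # → z = 1.07, well under the 1.96 needed for p < .05
```

Which is exactly the problem: with ~50 people per group, whether a given correlation clears the significance bar is largely the luck of the draw.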
And remember, it's not just the RWA/climate change denial relationship that's different in the control group than in the experimental group. The "social dominance orientation" variable correlates to climate change denial at a level of .35 in the experimental group but .47 in the control group. I'd think a difference in correlation of 30% deserves a little attention. Instead, the authors say:
Importantly, the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations. This relation could be seen as the source of stability of climate change denial or, more generally, environmental attitudes.
They fail to mention the experimental group's correlation between denial and RWA went from .20 to .34 while the control group's went from .40 to .45. Similarly, they fail to mention the experimental group's correlation between SDO and denial went from .34 to .35 while the control group's went from .47 to .58. Even if one chalks these changes up to random variance, it's remarkable the correlation scores for the control group change so much.
Why would the results for a control group change? The point of a control group is to have an untreated group you can use to calibrate your results. How do you calibrate your results against a control group if the control group's results change?
Yeah, I know. There are a variety of reasons a control group's results might change. For instance, if research participants spend an hour on a study, they might get bored. While bored, they might be more inclined to pick neutral options on questions out of laziness. If they tended to pick more neutral options on every question, the correlations between the questions would tend to increase - which is what we see.
Is that the case here? Who knows? How are we supposed to tell? Remember, the authors didn't bother to ask the study participants what their views on global warming were prior to conducting their experiment. We don't know what their views on global warming were before the experiment. We know the control and experimental groups can have notably different results. Remember, on one issue the control group had a correlation score of .40 while the experimental group's correlation was only .20.
How can we possibly interpret these results? Even if we ignore that it is completely inappropriate to use simple correlation tests on data which does not have a (univariate, much less multivariate) normal distribution, how can we draw any conclusions about this data? How do we make a coherent comparison of results based upon post-experiment views on global warming to pre-experiment views on other matters? How can the authors possibly conclude:
the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations.
When they have no data for pre-experiment views on global warming? That post-experiment views on global warming might have a certain relationship with pre-experiment views on other matters doesn't mean pre-experiment views on global warming, which the authors simply chose not to collect, will have that same relationship.
I don't get this. Why would the authors choose not to ask what their study participants thought about global warming before running their experiment? They asked about everything except that. Then, they claimed the relationship with that was stable even though they had no data for what it was.
How is that supposed to make sense?