Failing at Control

Hey guys. It's time to resume the series of posts I've been writing about a set of papers, and the PhD dissertation built on them, a series which got put on hold because I've been playing too many games of Rock, Paper, Scissors (if you want to know why I've been playing that, see here). Today I will discuss how the results the authors published are not only based upon an inappropriate methodology, but also fail a basic sanity check.

Let's do a quick recap since it's been a little while since I discussed this subject. Here is how I began my last post about this topic:

I've written a post titled, "Correlation is Meaningless" once before. It makes the same basic point I made in a recent post discussing the PhD dissertation by one Kirsti Jylhä. I'm going to continue my discussion of Jylha's work today to examine more of a phenomenon where people misuse simple statistics to come up with all sorts of bogus results. In Jylha's case, it undercuts much of the value of her PhD.

To briefly recap the last post on this topic, the first paper Jylha relies on for her thesis has two studies in it. The results of the first study are given by this table in the paper:

As well as a discussion of a follow-up analysis. For now, I want to continue focusing on this first table. It supposedly demonstrates a statistically significant relationship between "climate change denial" and various political/social ideologies. However, when we visualize the data, we see this relationship is an artifact. Here is the data plotted to show the supposed relationship between "climate change denial" and "social dominance orientation," an ideology which accepts inequality amongst groups of people:

The data has a small jitter value added to it to enable us to see data density. As this graph shows, there is no meaningful relationship between climate change denial and social dominance orientation. People who strongly believe in global warming strongly oppose inequality, but there is no evidence the opposite is true.
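(For anyone who wants to reproduce this sort of plot, here is a minimal sketch of the idea in R. The data frame and column names are placeholders I made up, not the ones in Jylhä's file, and the jitter amount is arbitrary.)

# Minimal sketch: a jittered scatter plot of two 1-5 Likert variables so
# overlapping points become visible. Simulated placeholder data, not the study's.
set.seed(1)
dat <- data.frame(
  denial = sample(1:5, 200, replace = TRUE, prob = c(.55, .25, .12, .05, .03)),
  sdo    = sample(1:5, 200, replace = TRUE, prob = c(.40, .30, .18, .08, .04))
)
plot(jitter(dat$denial, amount = 0.15),
     jitter(dat$sdo, amount = 0.15),
     xlab = "Climate change denial (1-5)",
     ylab = "Social dominance orientation (1-5)")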

The reason the authors can claim there is a "statistically significant" correlation between these two traits is they collected almost no data from anyone who "denies" climate change. The approach the authors have taken is to draw a line through their data, which is how you normally calculate the relationship between two variables, then extrapolate it out far beyond where their data extends.

There are a lot of ways of describing this approach. When I've previously said correlation is meaningless, I used an example in which I demonstrated a "statistically significant" correlation between belief in global warming and support for genocide. It was completely bogus. I was able to do it because I used the same approach the authors used. Namely:

1) Collect data for any group of people.
2) Determine views that group holds.
3) Find a group which is "opposite" the group you study.
4) Assume they must hold the opposite view of the group you studied on every issue.

This will work with literally any subject and any group of people. You can reach basically any conclusion you want because this approach doesn't require you have any data for the group of people you're drawing conclusions about.
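If you want to see how that plays out numerically, here is a small simulation in R. It is a sketch of the general problem, not the authors' data: nearly everyone in the simulated sample sits at the "accepts climate change" end of the scale, a line fit through that cluster comes out "statistically significant," and the line then gets extrapolated to the skeptic end where there is essentially no data.

set.seed(42)
n <- 100
# Respondents piled up at the low (believer) end of a 1-5 denial scale.
denial <- pmin(pmax(round(rnorm(n, mean = 1.6, sd = 0.7)), 1), 5)
# A mild trend inside that cluster plus noise stands in for an ideology score.
ideology <- 2 + 0.5 * denial + rnorm(n, sd = 0.8)

table(denial)                 # almost nobody scores 4 or 5
cor.test(denial, ideology)    # yet the correlation typically comes out "significant"
fit <- lm(ideology ~ denial)
predict(fit, newdata = data.frame(denial = 5))  # extrapolating far past the data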

This methodology is simple. If you want to say a group of people you disagree with is terrible, ask anyone but them your questions, then assume they'd give the opposite answers. That's all there is to it. You dislike conservatives? Ask liberals questions. You dislike women? Ask men questions. You dislike black people? Ask white people questions.

That's all you have to do. It's irrational and inappropriate, but you can get tens, if not hundreds, of thousands of dollars in grant funding to do it. All you have to do is ask liberals if they like ice cream. When they say yes, you can conclude conservatives are the opposite of liberals and write a paper claiming to "prove" conservatives are evil jerks who hate ice cream.

I'm not going to harp on that any more today. It's crazy, but let's try to move past the insanity and look at the rest of this work. Let's even look past the impossible results the authors published. Let's look past all that so we can see just how strange this sort of work is.

To begin, let's consider the experiment design the authors came up with for the second study they discuss in their paper:

The climate change denial (α = .92) and political orientation measures were the same as in Study 1. For the RWA and SDO scales, we aimed to conduct pre- and post-measurement of the concepts. Therefore, we split each scale in three parcels and used parcel 1 and 2 for the pre- and parcel 2 and 3 for the post-manipulation measurement. Cronbach's alphas for pre-/post measurement were .69/.69 (9/10 items) and .90/.91 (11/11 items) for RWA and SDO respectively. Response alternatives for all items were the same as Study 1.

In the newscast condition, a climate-related video was shown. In order to ensure high ecological validity, we used a newscast which was originally shown in the national television (September 27, 2013). The video, 2.46 min long, communicated findings that are presented in the most recent IPCC report. In the video, researchers expressed that the conclusions in this rapport are more certain compared to those in the previous one. They particularly focused on the human impact on climate change and the importance of mitigation in order to prevent a rise above 2 degrees Celsius globally.

In the control condition, participants were given a word sorting task instead of watching the video. This was done in order to ensure equal study length across the two conditions. In the task, five separate lists containing four words were presented on the computer screen. Participants were asked to determine which word was unrelated to the other words (example of a word list: digital, analog, clock, computer). The words were selected so that they would be unrelated to climate and emotionally neutral. To control for the potential effects that the task might have on the outcomes specifically in the control condition, an additional sorting task was done in both conditions. This task had the same rationale as the first sorting task.
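(A side note on the Cronbach's alphas reported in the first paragraph of that excerpt: alpha is just a function of the item variances and the variance of the summed scale, and splitting a scale into "parcels" simply means computing it on subsets of the items. Here is a minimal sketch of the calculation in R, using simulated items rather than the actual RWA/SDO items.)

# Cronbach's alpha from its definition:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale)
cronbach_alpha <- function(items) {
  k <- ncol(items)
  (k / (k - 1)) * (1 - sum(apply(items, 2, var)) / var(rowSums(items)))
}

set.seed(7)
latent <- rnorm(100)
items  <- sapply(1:10, function(i) round(3 + latent + rnorm(100)))  # simulated items
cronbach_alpha(items)  # comes out around .9 with these settings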

The test is fairly simple. The authors decided to look at how global warming skepticism correlates to political orientation, "right wing authoritarianism" and "social dominance orientation." As in the first study contained in the paper, discussed in my previous posts, the authors have practically no data from global warming skeptics, so their results are without merit (and due solely to the authors' use of a bogus methodology).

Try to look past that though. You see, the test the authors performed is somewhat interesting. The authors decided to examine how people's views on things change when presented with a video promoting mainstream global warming views. Such a video might be expected to change people's views on global warming, but what about issues like social equality? As the authors say:

The second aim of this study was to investigate if a newscast, that communicates evidence for climate change, has an impact on the levels of climate change denial or on the relation between denial and ideology. The results showed that climate change denial was significantly lower in the newscast compared to the control condition. Importantly, the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations. This relation could be seen as the source of stability of climate change denial or, more generally, environmental attitudes.

There's an important point I managed to miss the first few times I read this paper. Look at this portion of that paragraph:

The results showed that climate change denial was significantly lower in the newscast compared to the control condition.

The "newscast" refers to the group of people the researchers had watch the video promoting mainstream global warming views. As the paper notes, this group had "significantly lower" climate change denial after watching the video than the group who didn't watch it. Put in numbers, the averages measured after the manipulation were 1.79 for the newscast group and 2.19 for the control group. This is a change from the averages before watching the video of...

Well, um... I don't know what those averages were. The authors explain:

Participants completed the study online and began by responding to RWA, SDO, and political orientation (in that order). They then either watched the video (newscast condition) or conducted a word sorting task (control condition). Subsequently, participants in the newscast condition received a word sorting task and those in the control condition received another word sorting task. After the sorting task, participants responded to items measuring climate change denial followed by the second set of SDO and RWA items. Participants received a scratch card as a reward (3.5€).

Do you spot the problem? It's okay if you don't. It's easy to miss. I missed it myself. I only noticed it when I looked at the data. I wanted to see how people's views on global warming changed after seeing the video so I loaded up the intermediary results provided to me by Jylhä (despite requesting it, I was not provided the raw data) and saw the columns in it went:

[1] "ID"
[2] "Experimental_condition"
[3] "Climate_change_denial"
[4] "Political_orientation"
[5] "Right_wing_authoritarianism_beforemanipulation"
[6] "Social_dominance_orientation_beforemanipulation"
[7] "Right_wing_authoritarianism_aftermanipulation"
[8] "Social_dominance_orientation_aftermanipulation"

There were more columns, and I spent a little while trying to figure out where the "beforemanipulation" and "aftermanipulation" versions of the "Climate_change_denial" and "Political_orientation" values were. I assumed I was missing something obvious. The authors claimed to show watching a video changed people's views on global warming. Obviously, they would have asked people about their views on global warming before and after watching the video, right? Wrong. Reread what they said:

Participants completed the study online and began by responding to RWA, SDO, and political orientation (in that order).

According to this description, the authors didn't bother to ask the research participants what their views on global warming were before showing them a video about global warming. They only asked the participants about global warming after some of them watched this video.

Why? Who knows. The authors don't say. The authors don't explain why they asked the research participants one set of questions before showing a video then asked a different set of questions after showing the video. I'm not going to try to figure it out. It's bizarre. It's wrong. It's also maybe not entirely relevant. After all:

The analyses revealed significant differences between the two conditions in denial only [t(99) = 2.69, p = .008] (see Table 2). Specifically, denial was significantly lower in the newscast (M = 1.79, SD = 0.60) compared to the control (M = 2.19, SD = 0.89) condition, suggesting that the newscast effectively decreased the levels of denial.

The control group and the experimental group did not have "significant differences" except in regard to views on global warming. This could, perhaps, prove a point even though the researchers chose not to do the obvious thing of asking both groups for their views on global warming prior to the experiment.
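For what it's worth, the reported t(99) = 2.69 is consistent with a standard pooled two-sample t-test on 101 participants split across the two conditions (df = n1 + n2 - 2). Here is a sketch in R using simulated scores shaped roughly like the reported means and standard deviations; these are not the study's actual responses.

set.seed(3)
newscast <- pmin(pmax(rnorm(51, mean = 1.79, sd = 0.60), 1), 5)  # simulated
control  <- pmin(pmax(rnorm(50, mean = 2.19, sd = 0.89), 1), 5)  # simulated

t.test(newscast, control, var.equal = TRUE)  # pooled t-test, df = 99

# shapiro.test() is one standard way to check the normality assumption
# the next paragraph questions:
shapiro.test(control)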

I could raise questions about the math the authors use to determine what is a "significant difference." I mean, their math uses probability distribution functions based on groups of only ~50 people, groups whose data is not even close to normally distributed. I don't want to do that though. The math is interesting but not that important. What is important is the authors say:

Further, we examined the relation between the ideology variables and climate change denial in both conditions. We found significant zero-order correlations between the ideology variables and climate change denial in both conditions with one exception – the relation between RWA and denial did not reach the conventional significance level in the Newscast condition (see Table 2). This outcome is roughly in line with the results of Study 1 and our predictions.

Notice they say they found "significant" correlations between the variables with the exception of "the relation between RWA and [climate change] denial." That's missing an important detail: is it before or after the research participants watched the video?

Prior to watching (or not watching) the video, the control and experimental groups should be the same. Both groups are just people who signed up to participate in this study. They're the same as one another until the experiment (watching the video or not) is performed. So when the authors say there's a "significant" difference between the groups, we would expect that to refer to the groups after the experiment is performed.

That's not the case. Here are the authors' results:

Take note of how there are two sections: Pre-manipulation and post-manipulation. The pre-manipulation section shows results prior to performing any experiment. The post-manipulation section shows results after performing the experiment.

As the authors noted, there is a "statistically significant" correlation between "right-wing authoritarianism" and "climate change denial" in the control group but not in the experimental group. Naturally, one might think the experiment caused the correlation between these two variables to change. That's not the case.

Look at which section did not have a "statistically significant" correlation between these two variables. It's the first one. It's the "pre-manipulation" section. That is, prior to performing any experiment on the research participants, the control group showed a "statistically significant" correlation of .40 between RWA and climate change denial while the experimental group showed a non-significant correlation of .20. That's why the authors say:

We found significant zero-order correlations between the ideology variables and climate change denial in both conditions with one exception – the relation between RWA and denial did not reach the conventional significance level in the Newscast condition

I'm not sure why the authors fail to highlight the fact this difference only exists in the pre-manipulation stage, at which point the experimental and control groups have been treated exactly the same. Personally, I think it should worry you if your correlation score can double from .20 to .40 based solely upon the chance of which group is picked as the control group.
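As a rough check of how noisy correlations from groups this small are, here is a sketch of a Fisher r-to-z comparison of two independent correlations in R. The figure of 50 people per condition is my assumption for illustration, not a number taken from the paper.

# Fisher r-to-z test for whether two independent correlations differ.
compare_correlations <- function(r1, n1, r2, n2) {
  z <- (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  c(z = z, p = 2 * pnorm(-abs(z)))
}

compare_correlations(0.20, 50, 0.40, 50)
# z is only about -1.07 (p ~ 0.28), so a jump from .20 to .40 is well within
# what chance alone produces with samples this small.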

And remember, it's not just the RWA/climate change denial relationship that differs between the control group and the experimental group. The "social dominance orientation" variable correlates to climate change denial at a level of .35 in the experimental group but .47 in the control group. I'd think a difference in correlation of roughly 30% deserves a little attention. Instead, the authors say:

Importantly, the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations. This relation could be seen as the source of stability of climate change denial or, more generally, environmental attitudes.

This fails to mention the experimental group's correlation between denial and RWA going from .20 to .34 while the control group's went from .40 to .45. Similarly, it fails to mention the experimental group's correlation between SDO and denial going from .34 to .35 while the control group's goes from .47 to .58. Even if one chalks these changes up to random variance, it's remarkable the correlation scores for the control group change so much.

Why would the results for a control group change? The point of a control group is to have an untreated group you can use to calibrate your results. How do you calibrate your results against a control group if the control group's results change?

Yeah, I know. There are a variety of reasons a control group's results might change. For instance, if research participants spend an hour on a study, they might get bored. While bored, they might be more inclined to pick neutral options on questions out of laziness. If they tended to pick more neutral options on every question, the correlations between the questions would tend to increase - which is what we see.

Is that the case here? Who knows? How are we supposed to tell? Remember, the authors didn't bother to ask the study participants what their views on global warming were prior to conducting their experiment, so we have no baseline to compare against. And we know the control and experimental groups can produce notably different results: on one issue the control group had a correlation score of .40 while the experimental group's correlation was only .20.

How can we possibly interpret these results? Even if we ignore that it is completely inappropriate to use simple correlation tests on data which does not have a (univariate, much less multivariate) normal distribution, how can we draw any conclusions about this data? How do we make a coherent comparison of results based upon post-experiment views on global warming to pre-experiment views on other matters? How can the authors possibly conclude:

the relation between denial and the ideological variables did not differ substantially between the two conditions. These findings show that the relation between ideology and climate change denial is stable across conditions/situations.

When they have no data for pre-experiment views on global warming? That post-experiment views on global warming might have a certain relationship with pre-experiment views on other matters doesn't mean pre-experiment views on global warming, which the authors simply chose not to collect, will have that same relationship.

I don't get this. Why would the authors choose not to ask what their study participants thought about global warming before running their experiment? They asked about everything except that. Then they claimed the relationship between denial and ideology was stable even though they had no pre-experiment data for denial.

How is that supposed to make sense?

5 comments

  1. MikeN, I bet you were wondering how rock, paper and scissors halted a PhD paper.

    Off subject, Brandon, I found a link to an article contemporaneous with Climategate that reveals quotes showing Chris Folland was putting pressure on Mann and Briffa "not to dilute the message" of the hockey stick, due to its propaganda importance.
    http://www.dailymail.co.uk/news/article-1235395/SPECIAL-INVESTIGATION-Climate-change-emails-row-deepens--Russians-admit-DID-send-them.html

  2. Ron Graf, there was some controversy over the coverage of that e-mail with people saying skeptics (including Steve McIntyre) were misrepresenting it. I was going to write a post about it a while back, but it seems at least one of the e-mails involved in the story is largely unavailable. None of the old searchable databases for the Climategate e-mails seem online anymore, and at least one of the e-mails doesn't appear to have been included in the full dossier released in 3.0. It's very strange.

    I need to find a copy of the Climategate 1.0 release and see how many e-mails in it aren't included in the "full" release. I can't remember if I have a copy of it or not. Once I got the "full" dossier I quit paying attention to the partial releases.

    For the moment, I recommend exercising caution when deciding what the truth is on this matter.

  3. Okay, it turns out all the e-mails may be in the "full" release after all. My computer finally finished searching the 200,000+ files and did find each of the e-mails in question. However, take a look at this post by McIntyre:

    No minutes of this meeting are available, but Climategate correspondence on Sep 22-23, 1999 provides some contemporary information about the meeting. Mann noted that “everyone in the room at IPCC was in agreement that the [decline in the Briffa reconstruction] was a problem”:

    Keith’s series… differs in large part in exactly the opposite direction that Phil’s does from ours. This is the problem we all picked up on (everyone in the room at IPCC was in agreement that this was a problem and a potential distraction/detraction from the reasonably concensus viewpoint we’d like to show w/ the Jones et al and Mann et al series. (Mann, Sep 22, 1999, 0938018124.txt)

    IPCC Chapter Author Folland of the U.K. Hadley Center wrote to Mann, Jones and Briffa that the proxy diagram was a “clear favourite” for the Summary Policy-makers, but that the existing presentation showing the decline of the Briffa reconstruction “dilutes the message rather significantly”. After telling the section authors about the stone in his shoe, Folland added that he only “wanted the truth”.

    A proxy diagram of temperature change is a clear favourite for the Policy Makers summary. But the current diagram with the tree ring only data [i.e. the Briffa reconstruction] somewhat contradicts the multiproxy curve and dilutes the message rather significantly. [We want the truth. Mike thinks it lies nearer his result (which seems in accord with what we know about worldwide mountain glaciers and, less clearly, suspect about solar variations). The tree ring results may still suffer from lack of multicentury time scale variance. This is probably the most important issue to resolve in Chapter 2 at present. (Folland, Sep 22, 1999, in 0938031546.txt)

    The entire post is worth reading, but the point I want to make involves these quotations. Or rather, the e-mails they're from. You can see a chain with each of these e-mails here. Note, it is given as e-mail: 938018124.txt. Since it quotes 0938031546.txt (or 938031546.txt), it must have been sent after the other. However, its number is smaller. The e-mails are ordered by time. A lower number should mean the e-mail was sent earlier. That's not the case here.

    Even stranger, there is no 938018124 e-mail in the "full" dossier. My search turned up the e-mail though. It's e-mail 938098920. That number makes more sense. What doesn't make sense is that it would be different. Why would the e-mail have been given a different number in the Climategate 1.0 release? Other e-mails weren't (or at least, some weren't). And even if they had been, how would that number have been chosen? Or was the number not changed in the release yet somehow mis-cited by many people and uploaded incorrectly to searchable websites?

    It's very strange. Unfortunately, it turns out I don't have a copy of the 1.0 release so I can't pursue this much further right now. What I will say is I don't think it is fair to claim Folland pressured anyone for propaganda purposes. The e-mails I see don't support that claim.
