Can You Explain This?

In my last post, I asked for help explaining correlations between the Rater IDs of people who took a survey and the responses they gave to that survey. The order in which people take a survey should not affect how they respond to it, yet according to a data set I was examining, it does.

Today I'd like to go further and show even more inexplicable results. I don't like accusing people of fraud or tampering with data, but I can't come up with any other explanation. Perhaps someone else can offer one.

I'm not going to go into much context today. I believe the work relied upon in this PhD dissertation is fundamentally flawed, but previous posts have covered that in enough detail. For today, I want to focus on this:

See the column and row labeled "ID"? That's just a number from 1 to 221 included in the data set so each survey respondent has their own ID number. After making a couple of these tables, I realized I should have filtered those values out, as there obviously shouldn't be any correlation between the order people take a survey in and what their results are.

Emphasis on the word "shouldn't." You see, while there should be no correlation between respondent ID and responses, these tables show there is. In fact, there is a "statistically significant" correlation between respondent ID and responses to a number of questions.

For men, there were only 75 respondents, so the statistical power of any test would necessarily be limited. Perhaps because of this, only two variables show a "statistically significant correlation" with respondent ID at a 90% level, and none do so at a 95% level (though one comes in at 94.9%).

There is far more data for women (146 respondents). Perhaps because of this, there are "statistically significant correlations" between female respondent ID and the "Climate Change Denial," "Social Dominance Orientation," "Domineering" and "Empathy" traits.

That's two out of nine pairings for men which reach the 90% level and four out of nine pairings for women which reach the 95% level. Moreover, these correlation scores:

I said that in my last post (full data tables are included in it). I also said:

So how did this happen? The results aren't sorted by any of the data columns. As far as I can tell, they're not sorted by anything. You're welcome to look for yourself to see if you can find a pattern. The data is available here (data for this paper is in the third tab). It only contains the averages for each set of questions as I wasn't given the raw data (even though I asked), but that shouldn't matter.

The order people take a survey in should not have any effect on their results. I don't know if all 221 people took the survey one by one in a single room, if they all took it at the same time in a large room, if they all took it online at different times, or what. It doesn't matter. There is absolutely no reason taking a survey after a hundred other people have taken it should make me more likely to deny global warming. I shouldn't become more or less empathetic just because you've asked other people how empathetic they are first.
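For anyone who wants to check this sort of thing themselves, the test is only a few lines of Python. This is just a sketch; the file name, sheet position, and column names are my stand-ins for whatever your copy of the spreadsheet uses:

import pandas as pd
from scipy.stats import pearsonr

# Load the survey averages; file and column names here are stand-ins,
# not the actual ones in the linked spreadsheet.
df = pd.read_excel("dissertation_data.xlsx", sheet_name=2)

# Correlate respondent ID with every response column.
for col in df.columns.drop("ID"):
    r, p = pearsonr(df["ID"], df[col])
    # p < 0.10 is the 90% level, p < 0.05 the 95% level
    flag = "**" if p < 0.05 else "*" if p < 0.10 else ""
    print(f"{col:>30}  r = {r:+.2f}  p = {p:.3f} {flag}")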

As a sanity check, I ran these same calculations on the first study of the first paper covered in this PhD dissertation; it doesn't have correlations like these. I also randomly re-ordered the respondent IDs, and no correlations like these appeared. I've done every test I can think of, and I simply cannot come up with a data set that has these sorts of correlations without manually altering the data.
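Here's a sketch of that shuffle check, continuing from the snippet above (same caveats about the names being stand-ins):

import numpy as np

rng = np.random.default_rng(42)

# Shuffle the IDs and re-run the same test. A random ordering should
# essentially never produce "significant" correlations with the responses.
shuffled = rng.permutation(df["ID"].to_numpy())
for col in df.columns.drop("ID"):
    r, p = pearsonr(shuffled, df[col])
    if p < 0.10:
        print(f"{col}: r = {r:+.2f}, p = {p:.3f}")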

Am I missing something? Could there maybe have been some sort of data processing error? Can anyone offer any explanation why we would find "statistically significant correlations" between a respondent's ID number and their responses to a survey other than, "Someone has tampered with the data"?

I don't want to accuse anyone of fraud. I just cannot fathom why the 146th woman to take this survey should be expected to give different responses than the first woman to take it.

Today I'd like to build upon this by examining the third paper discussed in that PhD dissertation. The dissertation explains the origin of the data for this paper:

The Brazilian data was collected as a part of a broader study (Cantal et al., 2015). The sample consisted of 367 participants (59% women, Mage = 29.7, SDage = 10.80) who completed an online survey in January 2014. In the Swedish sample, we used data that was collected for the Paper II (see above for the details of sample and procedure).

Paper II is the paper I was discussing in the last post. The Brazilian data is from an entirely different survey, carried out with an entirely different group of people. I thought it would be useful to compare the results of these two surveys to see if the same strange correlations appeared. They didn't.

I was going to write a post showing the difference in these two surveys to highlight how unusual the correlation between Rater IDs and rater responses was. Only, when I actually looked at the data, I discovered a bigger mystery. The mystery can perhaps best be shown by comparing this table from Paper II:

To this table in Paper III:

Okay, maybe this isn't the best way. The point was to compare the "Swedish data" (below the diagonal) to the data from Paper II, as it is said to be the same data. The tables show different variables though, which makes it a bit difficult to compare the two. However, we can compare the correlation between Climate Change Denial and Social Dominance Orientation:

Paper II: .37
Paper III: .29

For a more thorough comparison, here are tables showing the same three variables from each paper beginning with Paper II:

           cc_denial  SDO pol_orient
cc_denial       1.00 0.37       0.24
SDO             0.37 1.00       0.25
pol_orient      0.24 0.25       1.00

And continuing on with Paper III:

           cc_denial  SDO pol_orient
cc_denial       1.00 0.29       0.15
SDO             0.29 1.00       0.24
pol_orient      0.15 0.24       1.00
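For reference, tables like these take only a couple of lines to generate once the spreadsheet is loaded. A sketch, with the file and column names again being my stand-ins (the tab positions are the ones given at the end of this post, zero-indexed here):

import pandas as pd

# Paper II's data is in the second tab, the "Swedish data" in the fourth.
paper2 = pd.read_excel("dissertation_data.xlsx", sheet_name=1)
paper3 = pd.read_excel("dissertation_data.xlsx", sheet_name=3)

cols = ["cc_denial", "SDO", "pol_orient"]
print(paper2[cols].corr().round(2))
print(paper3[cols].corr().round(2))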

The results are similar, yet different. The mildest difference comes from the political orientation variable, shown for the two data sets in this image:

They are identical. It turns out the authors rounded a correlation score of 0.2449 to 0.25 in Paper II but rounded it to 0.24 in Paper III. It's a minor error that has no impact on anything save for raising a flag when trying to replicate/verify results. As such, we won't look at political orientation any further in today's post.
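Before moving on, I'll note one mechanical way a 0.2449 could legitimately print as 0.25: double rounding, where a value is rounded to three decimals first and only then to two. I have no way of knowing that's what happened here, but it shows this particular discrepancy doesn't require anyone touching the data:

from decimal import Decimal, ROUND_HALF_UP

r = Decimal("0.2449")

# Rounding straight to two decimals gives 0.24...
print(r.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 0.24

# ...but rounding to three decimals first (0.245), then to two, gives 0.25.
r3 = r.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
print(r3.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 0.25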

Instead, I want to focus on the other two variables: Climate Change Denial and Social Dominance Orientation. The correlation between these two variables changed from 0.37 to 0.29 between these two papers. That's unusual. Why did this change happen? I don't know the reason, but it's clear the data has changed. Look at the data for Social Dominance Orientation:

Because it can be difficult to eyeball differences in data sets, I've included a third chart showing those differences for you. The differences in values for this variable aren't large, but they aren't zero either. We could discuss whether they arose from something like rounding, but before we do that, let's look at the remaining variable:

I've got nothing. The data on Climate Change Denial for Paper III has clearly been rounded; the stratification makes that obvious. But rounding does nothing to explain how values could change by as much as 1.9 points. The data is on a 5-point scale. Changing values by 1.9 in one direction or 1.2 in the other is a big deal.
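To put numbers on what the difference chart shows, here's the kind of comparison involved, sketched with made-up values chosen to mirror the extremes I found (the real check subtracts the matched columns from the two tabs):

import numpy as np

# Illustrative stand-in values, not the actual data.
paper2_denial = np.array([1.0, 1.4, 2.1, 1.9, 3.2])
paper3_denial = np.array([1.0, 1.5, 2.0, 3.8, 2.0])

diff = paper3_denial - paper2_denial
print(diff)                 # per-respondent changes: [ 0.   0.1 -0.1  1.9 -1.2]
print(np.abs(diff).max())   # 1.9, on a 5-point scale

# Rounded data collapses onto a coarse grid of values; having only a few
# distinct values is what produces visible stratification in a chart.
print(np.unique(paper3_denial))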

Not only is this a big deal because data shouldn't be tampered with, it's a big deal because a central problem with these papers has been the fact they don't have data for the groups they're discussing. As in, they're drawing conclusions about groups of people without data for those groups of people.

You can see previous posts for a discussion of how that happens and how the authors have misused/abused statistics to do it. Today, the point is simply that more data for one of those groups now exists.

Paper II had practically no data on the upper half of the denial variable, meaning the authors had practically no data regarding people who deny global warming. Paper III claims to use the same data, yet suddenly it has more data for global warming deniers. How is this possible?
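That question is easy to quantify. Continuing the earlier sketch (with "cc_denial" still being my stand-in column name), one can count how many respondents land in the upper half of the denial scale in each data set:

# Count respondents in the upper half of the 5-point denial scale.
midpoint = 3.0
for name, data in [("Paper II", paper2), ("Paper III", paper3)]:
    upper = (data["cc_denial"] > midpoint).sum()
    print(f"{name}: {upper} respondents above the scale midpoint")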

How does the order people take a survey in change how they respond to it? How does a data set change to such a large extent between two papers that claim to use the same data? I don't want to say "data tampering." I don't want to say "fraud." I just can't come up with any other explanation. Can you?

For those who are interested, the data is available here. Paper II's data is in the second tab. The "Swedish data" is in the fourth tab.

7 comments

  1. Brandon:

    Your data links at the end aren't live (as in, there's no link in the word "here", which I assume is what you intended).

    Interesting series of posts. When you've gotten through them all, you might want to do a highlights reel that draws together your different strands of analysis, questions & criticisms.

    Cheers,

  2. Thanks guys. I did forget to include the link. I've edited the post to fix that.

    Ian, I am planning on discussing this thesis in an eBook I'm writing. The eBook originated from my discussions of Stephan Lewandowsky's work, which suffers the same fundamental problem. I started looking at this thesis because I wanted to have additional examples. It turns out the thesis had more problems than I expected, so I'm spending more time on it than I'd planned.

    The eBook should be available at the end of April. I don't know how much will be resolved regarding this thesis by then, but it will definitely get an entry. I'm more worried about what to call the eBook. I've been using the phrase "Correlation is meaningless" a lot, but I don't know if that would be a good title.

  3. I like that line HaroldW. I don't think it'd work for a title, but I might steal it for a subhead.
