A Quick Reminder

My last couple of posts have examined how data used in two scientific papers, which make up a significant portion of a PhD dissertation by Kirsti Jylha, appears to have been tampered with. I don't want that issue to dominate the discussion though. While data tampering would obviously be a serious problem, I want to remind people this work was complete nonsense even without any concerns of data tampering.

You see, the fundamental methodology of this paper is to use correlations between traits in one group of people to infer that a different group of people possesses the opposite traits. Suppose you survey men and find they like ice cream. You decide women are the opposite of men, so women must hate ice cream.

It's nonsense. There are technical reasons this happens, tied to the fact that naive correlation calculations assume bivariate normality in the data; if that assumption is violated, the test will likely no longer give accurate results. These correlation tests were not appropriate for this data. The authors of these papers should have known this, but it seems researchers often don't understand the tests they use.
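To see how badly skew can distort a naive correlation, here is a minimal sketch with invented numbers (nothing from the actual studies): nearly all respondents cluster at the low end of both scales with no relationship between the two traits, and a couple of respondents sit far out in the tail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not the papers' actual data):
# 98 respondents cluster at the low end of two 1-7 scales,
# with no relationship at all between the two traits...
x_main = rng.normal(1.5, 0.3, 98)
y_main = rng.normal(1.5, 0.3, 98)

# ...plus two respondents far out in the tail of both scales.
x = np.append(x_main, [6.5, 7.0])
y = np.append(y_main, [6.5, 7.0])

r_with_tail = np.corrcoef(x, y)[0, 1]
r_bulk_only = np.corrcoef(x_main, y_main)[0, 1]

print(f"r with the two tail points included: {r_with_tail:.2f}")
print(f"r for the 98-point bulk alone:       {r_bulk_only:.2f}")
```

Two points out of a hundred manufacture a large "correlation" where the bulk of the data shows essentially none. That is exactly the kind of thing the bivariate normality assumption is supposed to rule out.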

But most people don't care about technical details. For them, I'd like to make this problem as clear as I possibly can. A correlation score measures a relationship between two variables, a relationship that can be represented by drawing a line through the data. That line shows how, when one trait changes, the other trait changes along with it. Here is an animation showing these relationships for the first study Jylha was involved with:

Each frame of this animation shows the supposed relationship between climate change denial and some other characteristic (Social Dominance Orientation, Right Wing Authoritarianism, Liberal/Conservative political view). Each frame also shows significant amounts of white space. That white space shows where there is no data.

For instance, there is practically no data for people who deny climate change. There is also practically no data for people who exhibit Social Dominance Orientation, described as:

Societies consist of hierarchical layers, where some people enjoy more privileges and respect than others (Sidanius & Pratto, 1999). Individuals tend to have uneven access to, for example, power positions, social approval, economic resources, and well-being. Even though some hierarchies may depend on individual characteristics, such as high intelligence or athletic abilities, hierarchical positions are often allocated based on group membership. Countless criteria have been used to divide people into such hierarchical groups, many of which are quite arbitrary and socially constructed. For example, in all societies, men and adults tend to be ranked at higher power positions than women and children. Additionally, group-based hierarchies can be based on caste, religion, ethnicity, or any other salient characteristic (Sidanius & Pratto, 1999).

Building on these observations, Sidanius and Pratto (1999; Sidanius, 1993; Pratto et al., 1994) developed the Social Dominance Theory, which identifies the mechanism through which group-based hierarchies are produced and maintained in a society. According to Social Dominance Theory, some statements about inequality/hierarchies are commonly spread and believed in a society (Pratto et al., 1994). Such statements are called legitimizing myths and can be divided into two categories: hierarchy-enhancing legitimizing myths that promote hierarchies and stabilize oppression, and hierarchy-attenuating legitimizing myths that promote equality and support social change. According to Social Dominance Theory, group-based hierarchies can continue existing in a society due to public agreement on the hierarchy-enhancing statements.

The predisposition to endorse hierarchy-enhancing or attenuating myths varies between individuals, and this predisposition is called ‘social dominance orientation’.

I suspect many people would consider this a negative trait. That might help explain why practically nobody who took the researchers' survey endorsed it. Still, despite the fact practically nobody exhibited Social Dominance Orientation, and despite the fact practically nobody denied climate change, Jylha concludes:

The results showed that SDO was the strongest predictor of denial (β = .46, p < .001).

Because people who don't exhibit Social Dominance Orientation also don't deny climate change. I bet they also don't support nuclear warfare. If the researchers had asked them about it, they could have "found" that supporting nuclear warfare is a strong predictor of climate change denial. It's complete nonsense. So are the results of the second paper Jylha uses in her dissertation:

This time researchers included traits like showing empathy and being domineering. Apparently we're supposed to believe people who deny climate change have less empathy even though there was practically no data for people who show little empathy or deny climate change. Why? Because if you draw a line through the data for one group, you can then extend the line out indefinitely to cover a different group you have no data for.
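That extrapolation step can be sketched in a few lines. This is a hypothetical illustration with invented numbers on made-up 1-7 scales (not Jylha's data): fit a line to responses that all sit in a narrow low-end cluster, then "predict" what people at the far end of the scale believe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented survey responses on 1-7 scales: every respondent scores
# low on the predictor trait (call it "SDO"), and denial tracks it
# weakly within that narrow cluster.
trait = rng.uniform(1.0, 2.5, 200)              # no one scores above 2.5
denial = 0.5 * trait + rng.normal(0, 0.3, 200)

# Fit a straight line to that low-end cluster...
slope, intercept = np.polyfit(trait, denial, 1)

# ...then extend it out to trait = 7, a region where the survey
# collected essentially no data at all.
predicted_at_7 = slope * 7 + intercept
data_max = trait.max()

print(f"data covers trait scores only up to {data_max:.1f}")
print(f"line extrapolated to trait = 7 predicts denial = {predicted_at_7:.1f}")
```

The fitted line happily produces a "prediction" for people at the top of the scale, even though the sample contains no such people. Nothing in the regression output warns you that the number is pure extrapolation.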

This is complete nonsense. It doesn't matter if the data Jylha used was tampered with. It doesn't matter if data was made up. This entire approach is fundamentally wrong. The people doing this sort of work either don't understand what they're doing or they're intentionally claiming to be able to draw conclusions about groups of people without any data for those groups of people.

And this is "science." People are receiving tens of thousands of dollars in research grants to do this sort of thing. Entire careers are being built upon it. Some of the work is even receiving large amounts of media attention. It's ridiculous. People should speak up. Scientists should speak up.


  1. Stephan Lewandowsky wasn't the first person to use this approach, but I think he may have been the first one to bring it into the global warming debate. The other papers I've seen using it were all on different topics. Interestingly, Lewandowsky added an additional twist by using SEM (structural equation modeling). SEM uses correlation tables like those published in these papers as an input (as opposed to the actual data), so it suffers from the same problems. It just adds an additional layer to muddle things.

    Ultimately though, this all comes down to the same basic problem. People don't bother to understand the tests they're using, and consequently, they use them on data sets the tests cannot work on. This creates spurious results because the data sets are so heavily skewed.

    Or to put it more simply, people ask one group questions in order to determine what a different group believes.

  2. I remember trying to point this out with a link to your post about "do global warmers believe in genocide," but people couldn't be bothered to learn the details and just argued (for and) against the headline.

  3. It is rather interesting what lengths people will go to in order to misunderstand/ignore things which are inconvenient to them. I am still amazed at how Michael Wood responded to me when I pointed this problem out to him. He was the person who used the same methodology to claim to prove people believe contradictory conspiracy theories, and to call his response obtuse would be generous.

    Interestingly, his response was so incredible to me that one of my first posts on this site was about it.
