Richard Tol's Peculiar Argument

Richard Tol has a tendency to make nonsensical criticisms. I'm going to discuss an example from half a year ago. It's not particularly relevant nowadays, but I'd like to do it so there's a record.

I first came across this particular nonsense when I read this blog post by lucia at The Blackboard. It gave a link to a draft comment by Richard Tol criticizing a paper by John Cook and others at Skeptical Science. As one of the biggest critics of the paper, I was curious. I found a variety of things I perceived as wrong and commented on them. A lengthy argument stemmed from my response to Tol's claim that the source for Cook et al's data:

presents papers in an order that is independent of the contents of the abstract. The data should therefore be homoskedastic.

I found it hard to believe the order of the abstracts was random. My doubt turned out to be well founded as Tol responded by saying the abstracts were sorted:

On homoskedasticity, the Web of Science presents data in an order that is independent of its contents, namely the date of publication. Cook then randomized the order again, but presents data in the original order.

Yet Tol claimed the order was independent of the abstracts' contents. I couldn't fathom how the contents of papers' abstracts could be independent of the date they were published. People's views on scientific issues change over time. Nobody would expect abstracts written 20 years ago to be the same as abstracts written today.

It seemed obvious to me the data was not "independent of its contents" when Cook et al collected it. This became more obvious when I later found out Tol's description was inaccurate. Not only was the data not sorted by date (it was sorted only by year, not month and day), but each year's data was also placed in alphabetical order. Neither of these sorting methods was independent of the abstracts' contents.

On top of this, Tol noted the raters were presented abstracts in a random order. That would mean any sorting of the data when it was collected (or presented) was irrelevant to the order of the data when it was rated. The order of the data for each step was:

Abstracts Collected - Sorted by year published then alphabetical order
Abstracts Rated - Random order
Abstracts Presented - Sorted by year published then alphabetical order

Tol found patterns in the data given in the third step. He claimed this showed problems in the second step. That's silly. It is no surprise one would find patterns in data after sorting it. That's exactly what we should expect. We should expect views expressed in abstracts to change over time as views on global warming change over time. Finding patterns in sorted data does nothing to show there were problems with rating done on unsorted data.
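The three steps above can be sketched with a small simulation (hypothetical numbers, not Cook et al's actual data): when the underlying endorsement rate drifts upward over the years, a trend shows up in the year-sorted ratings even though the rating step itself was done in a random order, blind to the years.

```python
import random

random.seed(0)

# Hypothetical data (not Cook et al's): the endorsement rate among
# abstracts drifts upward with publication year.
years = range(1991, 2011)
abstracts = [(y, int(random.random() < 0.5 + 0.02 * (y - 1991)))
             for y in years for _ in range(100)]

# Step 2: rate in a random order. Each rating depends only on the
# abstract itself; the rater never sees the year or the position.
random.shuffle(abstracts)

# Step 3: present the ratings sorted by year. A "pattern" appears
# even though the rating step was blind to the ordering.
abstracts.sort(key=lambda r: r[0])
early = [e for y, e in abstracts if y < 2001]
late = [e for y, e in abstracts if y >= 2001]
print(sum(early) / len(early), sum(late) / len(late))
```

On this simulated data the late endorsement rate comes out well above the early one, which is exactly the kind of pattern sorting by year is guaranteed to surface when views change over time.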

Imagine you were doing ratings for Cook et al. Imagine you wanted to screw them up by rating abstracts from earlier years differently than abstracts from later years. How would you do it? You're given abstracts at random. You have no idea what year any particular abstract was published in. How could you possibly know which abstracts to rate in which ways? And if you couldn't screw it up on purpose, how could you possibly do it by mistake?

You couldn't. Tol's argument was simply nonsensical. He was claiming patterns in sorted data showed problems in unsorted data. That's impossible. Tol's claim was not just wrong. It could not possibly be true. Despite this, the user Carrick chimed in to say:

Brandon, seriously, you’re obviously wrong on this one.

I’m grappling with how you don’t see it.

He didn't offer any explanation. I was annoyed by his hand-waving, but I decided to just ask him to "explain how sorting by publication date is randomizing." This led to him saying:

Anyway, if you’re going to claim the data as presented to the rankers was heteroscedastic,

Which was a complete inversion of my position. (Heteroskedastic means the variability in the data is not constant throughout, the sort of pattern one expects sorting to produce.) Remember, the abstracts were presented to raters in a random order. The sorting was only in how the abstracts were collected and stored in the data file. I pointed this out and said he was arguing against a straw man. He responded:

I’d like to understand how you can say this, and remain convinced there’s anything left to talk about. Seems a bit contradictory to me.
Strawman arguments? Wtf.

That’s not exactly a charitable way of reading this disagreement: You are accusing me now of an intellectually dishonest argument because I don’t understand what you are trying to say, partly because you seem to be moving the goalpost all over the playing field.

The first part portrayed me as wrong but didn't explain how. It seemed Carrick was mixed up, but he provided no argument or explanation for his position. That made it difficult to figure out what he was claiming, much less discuss it.

The second part was just absurd. People argue against straw man arguments all the time. It almost always happens by mistake. No fair-minded reading of my remarks would say I was accusing him "of an intellectually dishonest argument." I tried to point this out, and he said:

Brandon, I’m done on this one.

You have this pattern of becoming very hostile when anybody challenges anything you say, and you started out needlessly bellicose to begin with.

You absolutely make no sense to me, but I find I don’t care.

I have no explanation for that. As far as I can tell, he was criticizing me for things that only existed in his imagination.

The more important point is he never even attempted to say what was wrong with my position. He put more effort into detailing my (supposed) personal flaws than in laying out what was wrong with what I said. To this day, I have no idea what he thought I got wrong.

Both Carrick and Richard Tol insisted I was wrong, yet neither even attempted to make a case. Neither said, "Your position is X. It is wrong because of Y." They instead resorted to hand-waving. They acted like their position was so obviously true it didn't need to be explained.

In reality, their position was insane. Cook et al went out of their way to make the point that views on global warming have changed over time with the consensus growing stronger. That'd mean we should expect patterns in data sorted by time. In other words, Tol and Carrick insisted expected results indicate a problem rather than, you know, being expected.
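As a concrete sketch of that point (made-up counts, standard library only, not Cook et al's actual figures): if the endorsement rate genuinely rises over time, a simple test of "no pattern" run on year-sorted data is all but guaranteed to reject, so the rejection says nothing about the rating step.

```python
import math

# Hypothetical counts: endorsements among early vs. late abstracts,
# reflecting a consensus that grows over time (not Cook et al's data).
early_n, early_endorse = 1000, 550
late_n, late_endorse = 1000, 750

# Two-proportion z-test of the "no pattern" null (equal rates).
p1 = early_endorse / early_n
p2 = late_endorse / late_n
pooled = (early_endorse + late_endorse) / (early_n + late_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / early_n + 1 / late_n))
z = (p2 - p1) / se

# |z| lands far beyond the 1.96 threshold: the "no pattern" null is
# rejected, exactly as expected for time-sorted data with a drifting rate.
print(round(z, 1))
```

The rejection here is driven entirely by the drift in the underlying rate, not by anything about how individual items were rated, which is why finding such a pattern in sorted data proves nothing about the rating process.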

Interestingly, Tol dropped this argument in later versions of his draft comment. Presumably that means he realized his mistake. He's never said so though.


  1. Short version, Richard Tol's argument was:

    1) Abstracts were rated in a random order.
    2) Ratings were sorted via year published then alphabetical order.
    3) Sorted ratings were non-random.
    4) ????
    5) Profit.

  2. @Izuru, Brandon
    This is a so-called placebo test, standard in current statistics.

    The null hypothesis is that there should not be any pattern. The test rejects the null. Something is amiss.

    Unfortunately, placebo tests are against an aspecific alternative.

    A test against a specific alternative was not possible with the data available at the time. Cook was pressured into releasing more data (partly, I believe, as a result of the above placebo test). So I have run the more informative test now. Same conclusion: The rating system is not stable.

    The source of instability (drift, fatigue, rater composition, data manipulation) cannot be detected without yet further data releases.

  3. Richard Tol, as far as I can see, what you said isn't wrong so much as nonsensical. What you did is nothing like a "placebo test." The null hypothesis was never "that there should not be any pattern." Everyone should have known there would be a pattern without even looking at the data.

    It is utterly absurd to claim sorted data should be expected to have no pattern.

  4. Brandon: I'm not gonna go over this again. I thought for a bit that Izuru was someone else. Anyway, the onus is on you to show that a random reordering of a random draw should show a pattern.

    All of this is academic now that Cook released the original data.

  5. I have no such onus. There was no "random re-ordering of a random draw." There was a sorted re-ordering of a random draw. And as one would expect of a sorted re-ordering, that sorted re-ordering had patterns in it.

    If there is any onus here, it is for you to stop making things up.
