My last post discussed a recent story being pushed by "skeptics" in which they claim an organization created for global warming advocacy committed fraud in order to obtain millions of dollars. The purpose of the post was to demonstrate most of what was said was just spin.
For today's post, I'm going to do something different. I'm going to discuss what does and does not constitute fraud while providing a couple examples. Fair warning, people on both sides may not like seeing these examples.
When considering whether each of the following scenarios is "fraud," keep in mind fraud has two components: 1) An intent to deceive; 2) The goal of obtaining some benefit. You cannot have fraud without both of these. To determine if the recent accusations of "fraud" are appropriate, let's consider what one of the reporters says:
close to 10% of the papers claimed by CCCEP as their own product — 24 out of approx. 260 (more in which the date details are not published are being investigated) — were completed, submitted or published before the CCCEP opened.
It is logically impossible for research to have been commissioned, executed, completed, written up, submitted to a journal and published before the institution that ‘supports’ it has opened.
That is the basic point. An organization applied for a grant to obtain funding. As part of its application, it included a list of publications published under its auspices. You can see the list here. It contains about 500 entries, 276 of which are papers published in scientific journals. Of these 276 papers, approximately 25 should not have been included.
That's it. Five percent of a list (or 10% if you only consider the papers in the list) should not have been included on it. There's no evidence indicating why these items were included. "Skeptics" are having a field day with this story based upon the assumption these items must have been included as an intentional attempt to deceive. Given there is no evidence to support this claim, I believe it was just an honest mistake.
A large part of why I don't believe these items were included as an intentional attempt to deceive is motivation. Why would this have been done? Would the CCCEP's chances of receiving funding have been better if their list was ~475 items (or ~250 papers) long instead of ~500 items (or ~275 papers)? That seems unlikely to me.
I have no doubt the CCCEP went back further in time than it should have when looking for papers to add to this list. That was wrong. It wasn't fraud though. For this to have been fraud, someone would have had to know these papers shouldn't be included and think adding a (relatively) small number of items to a long list would help them obtain funding. There is no evidence for this, and it seems highly improbable.
Let's compare this to two actual examples of fraud. I'll provide two, one from each side of the global warming debate. Since we've already discussed "skeptics," let's start with a famous example from the other side. That's right, let's look at Michael Mann's hockey stick. Yes, I'm going there.
For those who don't know, Mann's hockey stick was a reconstruction of temperatures over the last 1000 years which got its name from its relatively flat "shaft" from 1000-1850 AD followed by a sharp uptick in temperatures creating a "blade." This graph was created across two papers. In 1998, Mann and his co-authors created the graph going back to 1400. The following year, they wrote a new paper extending it back to 1000.
This work was highly popular and became an icon for the global warming movement after it was prominently included in an Intergovernmental Panel on Climate Change (IPCC) report. Mann was a lead author of a chapter of this report. Using this position, he ensured the chapter said this about his work:
Mann et al. (1998) reconstructed global patterns of annual surface temperature several centuries back in time. They calibrated a combined terrestrial (tree ring, ice core and historical documentary indicator) and marine (coral) multi-proxy climate network against dominant patterns of 20th century global surface temperature. Averaging the reconstructed temperature patterns over the far more data-rich Northern Hemisphere half of the global domain, they estimated the Northern Hemisphere mean temperature back to AD 1400, a reconstruction which had significant skill in independent cross-validation tests.
Mann and his co-authors had reported results for different statistical verification tests in their paper. One of these tests was for r2 verification. The details of what that is don't matter. What matters is they reported r2 scores only for time periods in which they were favorable. Here is a table showing them for each period of the reconstruction:
Prior to 1750, the r2 verification scores are terrible. Mann and his co-authors calculated all of these scores, yet they published only the favorable ones. That's deceptive. If people had known about the rest of the test results, they would have been much more skeptical of the hockey stick graph.
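To make concrete what an r2 verification score measures, here is a minimal Python sketch using invented data. The function shows one common form of the statistic (squared Pearson correlation between a reconstruction and observations over a verification period); the series here are hypothetical illustrations, not Mann's actual method or data.

```python
import numpy as np

def verification_r2(reconstruction, observed):
    """Squared Pearson correlation between a reconstruction and the
    observed values over a verification period (one common form of
    an r2 verification statistic)."""
    r = np.corrcoef(reconstruction, observed)[0, 1]
    return r ** 2

# Invented example data, purely for illustration.
rng = np.random.default_rng(0)
observed = np.linspace(0.0, 1.0, 50) + rng.normal(0, 0.05, 50)

good = observed[:25] + rng.normal(0, 0.02, 25)  # tracks observations closely
bad = rng.normal(0.5, 0.3, 25)                  # mostly unrelated noise

high = verification_r2(good, observed[:25])  # near 1
low = verification_r2(bad, observed[25:])    # much lower
```

The point of the post is not the formula itself but that scores like these were computed for every period of the reconstruction yet reported only where they were favorable.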
This point is particularly relevant for the IPCC report. Had Mann ensured his chapter in this report accurately described which tests his work passed and which it failed, people would have been skeptical of his results. The results wouldn't have been given so much prominence and publicity. By hiding the fact his temperature reconstruction failed one of his statistical verification tests (and failed it miserably), Mann was able to become famous. That's fraud.
As another example to consider, a central claim in the 1998 paper was:
the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.
This claim was false. In reality, the results were entirely dependent upon a relatively small amount of tree ring data from a single area in North America. Mann himself acknowledges this. In a book he wrote (The Hockey Stick and the Climate Wars), he discusses tests he and his co-authors performed after the publication of their paper:
The tests revealed that not all of the records were playing an equal role in our reconstructions. Certain proxy data appeared to be of critical importance in establishing the reliability of the reconstruction – in particular, one set of tree ring records spanning the boreal tree line of North America published by dendroclimatologists Gordon Jacoby and Rosanne D’Arrigo.
If a single set of tree ring records was of "critical importance in establishing the reliability of the reconstruction," the long-term trend could not be relatively robust to the inclusion of tree-ring data. Despite having performed these tests, Mann and his co-authors then went on to write their 1999 paper, extending the results of their 1998 paper without warning anyone they had discovered a central claim in their 1998 paper was false. Again, that is fraud.
Consider the difference between this example and the one with the CCCEP. With the CCCEP, we have a list with errors. We have no evidence of how those errors came about or what effect those errors would have had on anything. In the case of Michael Mann's hockey stick, we have the results of tests the authors performed and hid from public view. (If you want to see more details, check out my short eBook for a good overview of the topic.)
For another example, let's turn to a $100,000 contest promoted by a number of "skeptics." The purpose of this contest was to see if people could distinguish between time series created via a "random" process and ones created with an underlying trend. There were a variety of problems with this, but ultimately, it just turned out to be a scam.
Yes, I said it's a scam. When the contest was first announced, people were quick to notice the 1000 time series provided for the contest showed clear patterns. Namely, if you looked at the linear trends in them, it was easy to see the (general) difference between the "random" series and those with an underlying trend. Here's a histogram created by a user looking at this contest:
After people had spent a bit of time working on this contest, the person running it updated his website and data set, saying:
The Contest was announced on 18 November 2015. Shortly afterward, a few people pointed out to me that the PRNG I had used might not be good enough. In particular, it might be possible for researchers to win the Contest by exploiting weaknesses in the PRNG. I have been persuaded that the risk might be greater than I had previously realized.
The purpose of the Contest is to test researchers' claimed capability to statistically analyze climatic data. If someone were to win the Contest by exploiting a PRNG weakness, that would not conform with the purpose of the Contest. Ergo, I regenerated the 1000 series using a stronger PRNG, together with some related changes.
The 1000 regenerated series were posted online four days after the Contest was announced—on 22 November 2015. Each person who submitted an entry before then has been invited to submit a new entry with no fee. Everyone who plans to enter the Contest should ensure that they have the regenerated series.
This was a lie. The reason for these changes was not to address a problem with the random number generator used in the contest. We can see this by looking at what the histogram of linear trends in the data set now shows:
The histogram is significantly different than before. Similar graphs could be made showing significant differences in autocorrelation coefficients and other statistical properties of the data sets. Even the number of digits provided was changed. The new data set was rounded to fewer digits than the old one, giving people less information to work with.
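The check people were running, fitting a straight line to each of the 1000 series and looking at the distribution of the slopes, can be sketched in Python. This is a toy stand-in with made-up data (white noise, with a small trend added to roughly half the series); the series length and all parameters are illustrative assumptions, far simpler than the contest's actual data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series = 1000   # the contest provided 1000 series
length = 135      # assumed series length, for illustration only

years = np.arange(length)

# Toy stand-in for the data set: white noise, with a small trend of
# random sign added to roughly half the series. The contest's actual
# series were more complex; this only illustrates the check itself.
has_trend = rng.random(n_series) < 0.5
trend = np.where(has_trend, rng.choice([-1.0, 1.0], n_series) * 0.01, 0.0)
series = rng.normal(0, 0.1, (n_series, length)) + trend[:, None] * years

# Fit a straight line to each series and keep the slopes.
# np.polyfit accepts a 2D y, fitting each column independently.
slopes = np.polyfit(years, series.T, 1)[0]
```

With data this simple, a histogram of `slopes` clusters cleanly: trended series near +/-0.01 per step, untrended series near 0. That kind of visible clustering is what made the original data set's "random" and trended series distinguishable.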
It is clear the data set wasn't changed to fix some issue with a random number generator. It was changed to make the contest more difficult after people had begun showing how they would tackle the problem. The guy running the contest just lied about it. It's an obvious deception. Given the contest involves a $10 entry fee, which wasn't refunded to people after the data set was changed, this is fraud.
Again, notice the difference. In this case, we have clear evidence showing the person running this contest changed the nature of his contest to make it more difficult but lied to everyone about it. We have him taking money from people for the contest. That is fraud because there is an attempt to deceive and a benefit being sought with the deception. We don't have evidence of either of these things with the CCCEP.
I know people on both sides are likely to disagree with this post. That's why I made sure to give an example from both sides. There are real cases of fraud on both sides of the global warming debate. Neither side will acknowledge it when it's from "one of their own." That's a shame. The global warming debate shouldn't involve partisanship. It does though.
Do I expect "skeptics" to acknowledge the recent cries of fraud at the CCCEP are inappropriate? No. I don't expect them to acknowledge that any more than I expect "warmists" to acknowledge Michael Mann's deceptions regarding his famous hockey stick graph were fraud. The tribalism is too overwhelming.
My hope is there are people who aren't on either "side" of the debate who simply want to know the truth about things. For them, hopefully this post helps demonstrate what fraud is and what it isn't.