I've been struggling to find motivation to write posts lately. I'm tired of seeing an endless stream of obvious untruths paraded around as fact by people whom nobody will call out. In practical terms, it feels like it doesn't matter what is true or not. There are a ton of topics I want to talk about. I just struggle to see why I should put time and effort into writing when what I write will be no more accepted than if I took LSD and wrote out whatever delusional ideas I might wind up having.
I know that's not true. To some extent or another, getting things right actually matters. It just doesn't seem to matter much. And when it does matter, it seems honesty and accuracy only cause problems for oneself. To give a simple example of the sort of thing causing this mood, consider this post I wrote nearly a month and a half ago.
To summarize, a new "hockey stick" paper by Joelle Gergis and several other authors resurrected a paper they had to withdraw several years ago because the skeptic blogosphere demonstrated the authors hadn't done what they claimed to have done. Interestingly, the new version of the paper has a similar problem, with the authors once again claiming to have done one thing while actually doing another.
The paper has a bunch of other problems, and I've spent quite a bit of time examining it. I haven't written much about it though. The reason stems from what I discuss in that post from a month and a half ago, specifically this claim in a post by Steve McIntyre about the paper:
Gergis et al 2016 stated that they screened proxies according to significance of the correlation to local gridcell temperature. Law Dome d18O not only had a significant correlation to local temperature, but had a higher t-statistic (and correlation) to local instrumental temperature than:
24 of the 28 proxies retained by Gergis in her screened network;
either of the other two long proxies (Mt Read, Oroko Swamp tree ring chronologies);
Nonetheless, the Law Dome d18O series was excluded from the Gergis et al network. Gergis effected her exclusion of Law Dome not because of deficient temperature correlation, but through an additional arbitrary screening criterion, which excluded Law Dome d18O, but no other proxy in the screened network.
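To make the dispute concrete, the screening step McIntyre describes (correlate each proxy against its local gridcell temperature after detrending, retain only the significantly correlated ones) can be sketched roughly as below. This is a minimal illustration, not the actual code from Gergis et al. or McIntyre; the function names and the 0.3 retention threshold are my own assumptions for the sketch.

```python
import numpy as np

def detrended_correlation(proxy, temperature):
    """Correlate two equal-length series after removing their linear trends.

    Detrending first means the correlation reflects shared interannual
    variability rather than a common long-term trend.
    """
    years = np.arange(len(proxy))
    proxy_resid = proxy - np.polyval(np.polyfit(years, proxy, 1), years)
    temp_resid = temperature - np.polyval(np.polyfit(years, temperature, 1), years)
    return np.corrcoef(proxy_resid, temp_resid)[0, 1]

def screen_proxies(proxies, gridcell_temps, threshold=0.3):
    """Retain proxies whose detrended correlation with their local gridcell
    temperature clears a significance threshold (0.3 is illustrative).

    `proxies` and `gridcell_temps` map proxy names to 1-D arrays covering
    the same years (e.g. a 1931-1990 calibration window).
    """
    retained = {}
    for name, series in proxies.items():
        r = detrended_correlation(series, gridcell_temps[name])
        if abs(r) >= threshold:
            retained[name] = r
    return retained
```

The point of contention above is that Law Dome d18O passes exactly this kind of correlation test, yet was excluded by a separate criterion.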
As I said at the time:
This was a serious accusation he and I had actually discussed in e-mails before he wrote that post. As I told him in those e-mails, I couldn't find a way to replicate his results. I asked him to confirm the data he was using matched what I was using, but that didn't happen. When he wrote the post, I asked again. I asked again later via e-mail, again without success.
Mind you, McIntyre never said, "No," and I think he does intend to do this eventually. I tried to be patient, but given the seriousness of McIntyre's accusations and how they appear to be completely wrong, I think waiting over a month is more than sufficient.
It turns out things are worse than I realized. My post accurately predicted the reason I was unable to replicate McIntyre's results - he had used a HadCRUT3 data set instead of the HadCRUT3v data set the authors claimed to have used. To make matters worse, he did so while claiming to have used the HadCRUT3v one:
For Law Dome d18O over 1931-1990 for the central gridcell at lag zero i.e. without any Gergian data mining or data torture, using the HadCRUT3v version on archive, I obtained a detrended correlation of 0.529, with a t-statistic of 4.71 (for 37 degrees of freedom after allowing for autocorrelation using the prescribed technique).
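The calculation described in that quote - a detrended correlation whose degrees of freedom are reduced to allow for autocorrelation - uses what McIntyre later identifies as the formula from Bretherton et al. 1999. As a rough sketch of my own reading of that adjustment (not anyone's published code), the effective sample size shrinks when both series are autocorrelated:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def effective_n(x, y):
    """Effective sample size adjusted for autocorrelation, per the
    Bretherton et al. 1999 formula:
        N_eff = N * (1 - r1x * r1y) / (1 + r1x * r1y)
    where r1x, r1y are the lag-1 autocorrelations of the two series.
    Smooth, persistent series yield far fewer effective data points.
    """
    r1x, r1y = lag1_autocorr(x), lag1_autocorr(y)
    return len(x) * (1 - r1x * r1y) / (1 + r1x * r1y)

def t_statistic(x, y):
    """t-statistic for the correlation of x and y, with degrees of
    freedom reduced to N_eff - 2 to account for autocorrelation."""
    r = np.corrcoef(x, y)[0, 1]
    dof = effective_n(x, y) - 2
    return r * np.sqrt(dof / (1 - r ** 2)), dof
```

Small differences in the input data or in how this formula is implemented change the resulting t-value, which is why the choice of HadCRUT3 versus HadCRUT3v matters to the numbers quoted above.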
McIntyre responded to my post suggesting this explanation, writing:
I try to be careful in my analyses and suspect that differences may come from datasets or missing data. One of the reasons why I support code documentation is to resolve this sort of dispute. At the time of your initial request, I told you that I was getting ready to go to Europe for a week and that my computer had crashed and that I needed to reconfirm my script. I forgot about your request when I returned. Sorry about that, but you could also have reminded me, as I think that I have a very good track record of responding to requests and ensuring that results are documented.
In this case, as noted above, I got sandbagged by a computer crash, and need to crosscheck the version on my computer against my results. I still need to do this before responding.
When I got back from Europe, I made a resolution to finish a lengthy submission on Lewandowsky on which I'd been working off and on for a couple of years, but which I hadn't finished. Because I get tired quickly these days, I put other issues on the back burner and apologize for that. I would like to work on this undisturbed for a little while longer and would prefer not to revisit Law Dome for a couple of weeks if everyone doesn't mind waiting a while longer. I do not believe that there are any material issues with my Law Dome analysis, but will undertake to quadruple check, together with the relevant code.
I re-ran the code and once again got a correlation of 0.529 for Law Dome to summer temperatures, making a summer average temperature of available summer months. I'm not sure what you did, but I think that you're jumping the gun in assuming that my conclusions about Law Dome are "false".
For autocorrelation, Gergis et al used a formula from Bretherton et al 1999, which was new to me and which I implemented for the first time in this calculation. Re-running the code, I noticed an issue in my implementation of the Bretherton formula and the t-values are a little different. Tweaking this, I got a slightly lower t-value of 3.652 for Law Dome, a little lower than reported in my post, but not changing the conclusions of my post.
I'll post up my code after making it turnkey.
He then provided his code, which clearly identified his data source as HadCRUT3, just as I had suggested. Obviously, he could have saved us both some time simply by checking whether he had used the wrong data set in the first place.
For whatever reason, he instead posted his code (which showed I was correct in what I suggested) and then promptly stopped commenting for nearly a week. His last comment on my site was on September 10th. I responded within a few hours and quoted the portion of his code which clearly showed he used the wrong data set. This is where things get interesting.
For five days, McIntyre stopped commenting. Perhaps the timing was coincidental. I don't know. What I do know is McIntyre updated his post that day after discovering the error he mentions in that comment above. I initially thought little of it, as the text:
Seemed unremarkable since it was clearly indicated. It wasn't until later I discovered the post had undergone another change as well. Remember, I quoted an important claim from McIntyre's post as:
That is what I saw when I first read the post. It's the same thing most people would have seen. However, on September 10th, when McIntyre made the small changes noted above, he also changed this text to say:
There is nothing to indicate this change. Had I not quoted the original text in my post way back when, I might never have noticed it. Similarly, I might never have noticed the post's conclusions were changed from:
Or that this:
Was added to the post. There is nothing to indicate these changes were made. The only reason we can even see they were made is that we have an archived copy of the page from before the edits. If not for that, these changes would be impossible to notice. The original text would have disappeared into the aether.
That's not okay. Whether or not one thinks these changes are important, they came about because Steve McIntyre realized he had made a mistake that affected some of his claims. To fix this mistake, he changed the text of his post and didn't bother to indicate the changes. That's the sort of thing "skeptics" would call dishonest if done by a climate scientist.
Do the changes matter? Maybe, maybe not. That's up to each individual to decide. They just can't make an informed decision if McIntyre hides his errors from them. And hides them he does. You see, the error leading to these changes has nothing to do with the issue I raised. After he updated his post (without me realizing the extent of the change), I wrote several comments at his site (and more on Twitter, to which he responded) pointing out his code clearly showed he used the wrong data set. Four days after I initially demonstrated this, McIntyre commented:
Sorry not to have commented for a few days, as I’ve had some other things that I’ve had to look after.
In emulating G16, I downloaded data from the (now obsolete) HadCRUT3 webpage http://www.metoffice.gov.uk/hadobs/hadcrut3/data/download.html, downloading the HadCRUT3.nc version described as “Best estimate temperature anomalies”. I hadn’t taken note of the fact that Gergis et al 2016 had used the “variance adjusted” version, which, for Law Dome, is somewhat different than the “best estimate” version.
There was much more to this comment, as McIntyre defended his analysis as being appropriate despite being done on the wrong data set. I won't go into that discussion. You can read it if you're interested. The point is McIntyre clearly acknowledged he had used the wrong data set when doing his analysis.
That was on September 14th. It is now October 20th. The post makes no mention of this error. It has not been updated since the (mostly secret) changes of September 10th. There is nothing in the post to warn readers everything it tells them about this issue is flawed due to McIntyre having used the wrong data set. In fact, it continues to say:
For Law Dome d18O over 1931-1990 for the central gridcell at lag zero i.e. without any Gergian data mining or data torture, using the HadCRUT3v version on archive...
Even though McIntyre has acknowledged he did not use the HadCRUT3v data set, but rather the HadCRUT3 data set.
Remember, the same day McIntyre discovered an error distinct from the one I brought up, he updated several portions of his post (largely in secret) to take note of the relatively minor effect it had on what he wrote. It's now been several months since I first raised my concern about which data set he used. It's been over a month since he acknowledged he used the wrong data set. There's been no update.
Looking at my initial e-mails with McIntyre about this issue, I have spent three months trying to resolve this issue. Why? Why should I even bother?
Why should I spend months on an issue because McIntyre didn't do what he claimed, made no effort to reconcile results when discrepancies were brought up, secretly changed parts of his post after discovering a separate error and now simply chooses not to update what he wrote to acknowledge the error he knows he made?
And why should I spend time looking at how Joelle Gergis and her co-authors didn't do what they claimed? If it's this much trouble with McIntyre, a person routinely promoted as a champion for openness and honesty, what can I expect from anyone? It's not like anyone will ever care. Even if McIntyre and Gergis et al. updated their work to clearly acknowledge their errors, most people would never see it. Once they've read the erroneous text, most people will never go back and check it again.
There are many examples of things like this. Most of the ones I could point to are more serious. This one bothers me more than the rest though. People treat McIntyre as a paragon of virtue, ignoring things like this even as they would condemn other people for lesser offenses. Given that, what's the point?
Does anyone actually care about right and wrong? I'm starting to wonder. Everything I'm seeing makes it seem like the answer is, "Nope, not at all." All that seems to matter is saying things people like to hear.
Speaking of which, I have a new eBook mostly written. I'm not sure I'll finish it. I think my sanity might be better served by moving out to the woods and becoming a hermit. Or embracing the insanity of the world and voting Trump for president.