Category Archives: Surface Temperature Record

Accusing Scientists of Fraud While Committing Fraud

Steven Goddard is a blight on the skeptic community. Any skeptic who wants to be taken seriously should avoid him like the plague so as not to be tarnished by his infantile behavior and idiotic posts. Unfortunately, that's not what happens. If you want to see what Goddard is like, feel free to visit his blog. Or if you'd rather not subject yourself to that, you can read the first post I wrote about him last year, where I pointed out he accused dozens of scientists of committing fraud by adjusting ocean temperature data while pointing to results which didn't even use ocean temperature data.

Yeah, he's that incompetent. That's not what today's post is about though. I've normally tried to ignore Goddard as there's no point repeating myself over and over. What he says is stupid and disgusting. Accusing everyone you disagree with of having committed fraud based upon basically nothing is pathetic. What's even more pathetic, however, is the fact Goddard is incredibly dishonest himself.

That's what today's post is going to be about. Goddard, who constantly accuses every scientist he can find of having committed fraud, secretly edits his posts and deletes people's comments to cover up his mistakes. In layman's terms, secretly replacing a product to cover up its deficiencies might itself be considered fraud.
Continue reading

Why I Don't Trust BEST

The Berkeley Earth surface temperature (BEST) project was supposed to be a great thing. It was supposed to resolve the concerns skeptics had raised about the modern temperature record. It was supposed to resolve not just technical issues skeptics had raised, but also basic concerns about openness and transparency.

Once upon a time, people managing the modern temperature records wouldn't even share basic information like what temperature stations they used. It was disgraceful, and it caused a lot of distrust. It was also one of the main reasons BEST was formed. BEST was supposed to help resolve the trust issues by being completely open and transparent. BEST has promoted its openness and transparency time and time again, and it's one of the most touted aspects of their project. The problem is, it's a lie.
Continue reading

How Not to Find UHI

BEST team member Zeke Hausfather made an interesting remark in a discussion at blogger Anders's place. A commenter had mentioned BEST's work examining the Urban Heat Island (UHI) effect, pointing out BEST hadn't found evidence artificial warming from urban development significantly impacted its results. Zeke commented:

To be fair, the separate study that Berkeley did was on homogenized data, so a lack of detectable UHI mainly just indicates that it was effectively removed.

I find this remarkable. When I discussed BEST's uncertainty calculations, I pointed out they don't redo the homogenization calculations. The result is they don't even attempt to calculate the uncertainty in their homogenization process, meaning they know there is more uncertainty in their results than they claim.

The same problem arises with their approach to looking for a UHI effect. To look for a UHI effect, BEST compared rural to non-rural stations. Only, it did so after it had already homogenized its data. That means BEST modified its rural stations by using data from non-rural stations (and vice versa) then said it couldn't find a difference between the two.
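To see why that's a problem, here is a toy simulation. It is entirely made-up numbers and nothing like BEST's actual code: I give hypothetical urban stations an extra warming trend, "homogenize" every station toward the network-wide average as a crude stand-in for any adjustment that mixes information across stations, then compare the two groups before and after.

```python
# Toy illustration (not BEST's code): why comparing rural and urban stations
# *after* homogenizing them together can hide a UHI signal.
# All numbers here are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
n_rural, n_urban = 30, 30

base_trend = 0.01   # assumed background warming, deg C / year
uhi_trend = 0.02    # assumed extra urban warming, deg C / year

rural = base_trend * (years - years[0]) + rng.normal(0, 0.3, (n_rural, years.size))
urban = (base_trend + uhi_trend) * (years - years[0]) + rng.normal(0, 0.3, (n_urban, years.size))

def trend(series):
    """Least-squares trend in deg C per decade, averaged over stations."""
    return np.mean([np.polyfit(years, s, 1)[0] for s in series]) * 10

def homogenize(all_stations, weight=0.5):
    """Crude stand-in for homogenization: nudge every station toward the
    network-wide mean, mixing rural and urban information together."""
    network_mean = all_stations.mean(axis=0)
    return (1 - weight) * all_stations + weight * network_mean

print("Raw difference (urban - rural):",
      round(trend(urban) - trend(rural), 3), "deg C/decade")

combined = homogenize(np.vstack([rural, urban]))
rural_h, urban_h = combined[:n_rural], combined[n_rural:]
print("Post-homogenization difference:",
      round(trend(urban_h) - trend(rural_h), 3), "deg C/decade")
```

Run that and the urban-minus-rural trend difference shrinks after the adjustment even though the simulated UHI signal was put in by hand. The stronger the mixing, the smaller the "detectable" difference.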

Of course they couldn't find a difference between two data sets after using those data sets to make one another more similar. Continue reading

An Interesting Update to BEST's Standards

A couple weeks ago I suggested it is peculiar for groups like Berkeley Earth (BEST) to make claims about what years were or were not the hottest in the temperature record. Each time they update their results, values for previous months change, sometimes by more than the stated uncertainty in their results. My view was that uncertainty levels which can't even cover the changes between versions of a data set should be taken with a grain of salt.
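For anyone who wants to check that sort of thing themselves, the comparison is simple to script. The sketch below assumes you have two versions of a BEST monthly series saved locally; the file names and column layout are my assumptions, not anything BEST documents. It flags months where the value changed by more than the stated uncertainty:

```python
# A minimal sketch of the comparison described above: take two archived
# versions of a monthly series (file names and column layout are assumed,
# not BEST's documented format) and flag months where the value changed
# by more than the stated uncertainty.
import numpy as np

def load_series(path):
    # Assumed layout: year, month, anomaly, 95% uncertainty
    data = np.loadtxt(path, comments="%")
    keys = data[:, 0] * 100 + data[:, 1]          # e.g. 195003 for March 1950
    return {k: (v, u) for k, v, u in zip(keys, data[:, 2], data[:, 3])}

old = load_series("best_land_ocean_2013.txt")      # hypothetical archived copy
new = load_series("best_land_ocean_2015.txt")      # hypothetical current copy

for key in sorted(set(old) & set(new)):
    v_old, u_old = old[key]
    v_new, _ = new[key]
    if abs(v_new - v_old) > u_old:
        print(f"{int(key)//100}-{int(key)%100:02d}: "
              f"changed by {v_new - v_old:+.3f}, stated uncertainty ±{u_old:.3f}")
```

Of course, that only works if you have old versions to compare against, which is exactly the problem.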

There's a more interesting issue though. That post showed there are at least four different sets of results given by BEST calculations. In a later comment, I updated that number to seven. There are at least seven different sets of results published by BEST, and BEST hasn't archived a single one of them. BEST hasn't done a single thing to allow anyone to compare one version to another.

Yesterday, I discovered this problem goes even further. It turns out there are currently results from three different sets of calculations published on the BEST website. They're all published alongside one another as though they represent the same thing.
Continue reading

How BEST Overestimates its Certainty, Part 2

My last post had this footnote:

*This claim of independence is incredibly misleading. BEST estimates breakpoints prior to running “the entire Berkeley Average machinery.” It does so by examining every station in its data set, comparing each to the stations around it. This is effectively a form of homogenization (BEST even stores the code for it in a directory named Homogeniety).

That means BEST homogenizes its data prior to performing its jackknife calculations. Whatever series are removed in the jackknife calculations will still have influenced the homogeneity calculations, meaning they are not truly removed from the calculations as a whole.

It’s trivially easy to show homogenizing a data set prior to performing jackknife calculations means those calculations cannot reflect the actual uncertainty in that data set. I’m not going to do so here simply because of how long the post has already gotten. Plus, I really would like to get to work on my eBook again at some point.

It occurs to me I ought to demonstrate this is true rather than just claim it. I tried to show what effect this has on BEST's results by fixing BEST's mistake and rerunning the analysis, but I couldn't because my laptop doesn't have enough memory to handle all the processing. As such, I'll just provide a couple of excerpts from BEST's code to help show what is done.
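In the meantime, here is a toy demonstration of the underlying point. This is not BEST's code, and the "homogenization" step is just a crude stand-in: jackknife the data as-is, then pull every station toward the all-station mean first and jackknife again. Because the adjustment already mixed every station into every other, the second jackknife spread comes out smaller than the first, which is exactly the kind of understatement I'm describing.

```python
# Not BEST's code. A toy demonstration of the footnote's point: if you
# homogenize the data using *all* stations first, and only then do the
# jackknife, the left-out stations still influence every remaining series,
# so the jackknife spread understates the real uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n_stations = 40
stations = rng.normal(loc=0.5, scale=1.0, size=n_stations)  # made-up station values

def jackknife_se(values):
    """Standard jackknife estimate of the standard error of the mean."""
    n = len(values)
    leave_one_out = np.array([np.delete(values, i).mean() for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2))

# Case 1: jackknife on the raw stations.
print("Jackknife SE, raw data:         ", round(jackknife_se(stations), 4))

# Case 2: "homogenize" first (pull every station toward the all-station mean,
# a crude stand-in for a breakpoint/homogeneity adjustment), then jackknife.
homogenized = 0.5 * stations + 0.5 * stations.mean()
print("Jackknife SE after homogenizing:", round(jackknife_se(homogenized), 4))
```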
Continue reading

How BEST Overestimates its Certainty, Part 1

I'm supposed to be working on the follow-up to the little eBook I published last month explaining the Hockey Stick Controversy (see here). My goal has always been to get the second part finished by the end of January. Unfortunately, I keep getting distracted. It had been bothering me that I'd never gotten around to filing a complaint with the IPCC about the backchannel editing in its latest report, so I spent some time writing and sending that complaint (see here).

I've also been bothered by people saying 2014 was the hottest year, since that claim was based in part on the BEST temperature record, which has undergone a number of undisclosed/undocumented changes. I wrote a simple little post about that as well. Discussing that post got me interested in more issues with the BEST temperature record, and now I'm thoroughly distracted. I have no choice but to take some time to discuss just how wrong the BEST approach is.

The last post I wrote about BEST highlighted the fact there have been a multitude of different versions of the BEST temperature record, none of which have been archived by BEST for comparison purposes. It showed the differences between versions can exceed the stated uncertainty in the BEST record, calling into question BEST's claims of precision. A previous post questioned the breakpoint calculations used by BEST, suggesting they artificially inflate BEST's calculated precision.

Today's post is going to focus on a more central issue. This is a graph I made two years ago:

[Figure: BEST's published uncertainty levels from 1900 onward (Best_Unc_O)]

It shows the uncertainty levels published by BEST for its temperature record beginning in 1900. I made the graph to highlight the step change in BEST's uncertainty levels. At about 1960, the uncertainty levels plummet, meaning BEST is claiming we became more than twice as certain of our temperature estimates practically overnight. Here is an updated version of the graph, with better formatting and using more recent BEST results:

[Figure: updated BEST uncertainty graph using more recent results (1-26-BEST-Uncertainty)]
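For anyone who wants to reproduce a graph like this, a minimal sketch follows. It assumes a locally saved copy of BEST's monthly land+ocean summary with year, month, anomaly and 95% uncertainty columns; the file name and layout are my assumptions, not a documented format.

```python
# A sketch of how a graph like the one above can be drawn from BEST's
# published series. The file name and column layout are assumptions
# (a locally saved copy of the monthly land+ocean summary).
import numpy as np
import matplotlib.pyplot as plt

# Assumed columns: year, month, anomaly, 95% uncertainty
data = np.loadtxt("best_land_ocean_monthly.txt", comments="%")
year_frac = data[:, 0] + (data[:, 1] - 0.5) / 12.0
uncertainty = data[:, 3]

mask = year_frac >= 1900
plt.plot(year_frac[mask], uncertainty[mask], lw=0.8)
plt.xlabel("Year")
plt.ylabel("Stated 95% uncertainty (°C)")
plt.title("BEST monthly uncertainty, 1900-present")
plt.show()
```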
Continue reading

Waste of Time

I woke up this morning to a bunch of Twitter notifications saying I had been mentioned. I looked at them, and I found I had been brought up in a discussion beginning with this tweet:

Because someone thought they remembered me having written about the subject. I haven't. I'm not sure why they thought I had, but as I said in one of my responses:

Which isn't really true. I find the subject itself, changes made to temperature records, quite interesting. What I find completely boring is the incessant stream of stupid discussions about the subject. As an example, while both the tweet which started the thread and another tweet by Steven Goddard right after it:

blame Gavin Schmidt (@ClimateofGavin) for the changes which were made, Schmidt had absolutely nothing to do with them. Schmidt works for NASA's GISS, which does produce the temperature series referred to in those tweets (such as this graph Goddard complains about). However, GISS gets its input data (a data set called GHCN) from another organization, the National Climatic Data Center (NCDC).

If the NCDC changes its GHCN data set, that will change GISS's temperature series even if GISS does nothing new. That's what happened here. The GHCN data series changed, and Steven Goddard blamed Gavin Schmidt because of his involvement with GISS even though GISS had no involvement in the change.
Continue reading