Some Small Things

I often see things that merit comment but don't merit a blog post. This week, I happened to come across two separate posts I would classify that way, both by the same blogger. On their own, neither seemed to merit writing a post. Taken together, perhaps they do.

The blogger in question goes by the name Victor Venema. He's commented here before, and he's been the subject of several posts. Put simply, I think he writes nonsensical things. You can look at past examples to see if you agree, or you can read on to see new ones.

For one example, Venema responds to a piece written by Scott Adams, the creator of the well-known Dilbert comic. I don't think Adams wrote a good piece. In fact, I think it's kind of stupid. That doesn't justify Venema's misrepresentations though. There are a number of examples, but I'd like to focus on the most severe. Adams had written:

There is a severe social or economic penalty for having the “wrong” opinion in the field. As I already said, I agree with the consensus of climate scientists because saying otherwise in public would be social and career suicide for me even as a cartoonist. Imagine how much worse the pressure would be if science was my career.

I think the passive-aggressive approach Adams takes is stupid and pathetic, but Venema's response to this point is what is interesting. He begins:

It is clearly not career suicide for a cartoonist. If you claim that you only accept the evidence because of social pressure, you are saying you do not really accept the evidence.

Scott Adams sounds as if he would like scientists to first freely pick a position and then only then look for evidence. In science it should go the other way around.

This seems to be the main argument and shows that Scott Adams knows more about office workers than about the scientific community. If science was your career and you would peddle the typical nonsense that comes from the mitigation sceptical movement that would be bad for your career. In science you have to back up your claims with evidence. Cherry picking and making rookie errors to get the result you would like to get are not helpful.

Anyone who actually knows how science works would understand Venema is discussing the theory of science, not the practice. The reality is that a lot of bad work becomes popular because people like the conclusions. The infamous hockey stick created by Michael Mann is a perfect example, as people like Venema will never acknowledge it was complete rubbish. They'll never acknowledge the study behind it used a terrible and incorrect methodology which produced unjustifiable results, results that were accepted by people like him largely because of dishonesty on the part of the researchers. Instead, he'll say things like:

However, if you present credible evidence that something is different that is great, that is what you become scientist for. I have been very critical of the quality of climate data and our methods to remove data problems. Contrary to Adams' expectation this has helped my career. Thus I cannot complain how climatology treats real skeptics. On the contrary, a lot of people supported me.

This is where the problem becomes clear. Venema is a strict party-liner. He will disagree with established views in certain ways, but he will only do so in "safe" ways. Nobody minds that he might say, "Temperature station data suffers from a number of non-climatic influences that might skew results a bit. We should work on improving it." He'll never say, "Temperature station data suffers from a number of non-climatic influences that appear to exaggerate global warming by a non-trivial amount."

Now, I don't fault him for not stating the latter. I imagine he doesn't believe it. That's fine. The point is simply the social effect of stating the latter is very different from the social effect of stating the former. The latter, to some extent, attacks the "consensus" view of things. Doing that would cause Venema problems. Questioning things that don't challenge the "consensus" view won't get him in trouble. It's that simple.

Venema might disagree with that claim. That's fine. It's an argument I'd be willing to have. The problem is that's not what he's doing. Look at what he goes on to say:

Another climate scientist, Eric Steig, strongly criticized the IPCC. He wrote about his experience:

I was highly critical of IPCC AR4 Chapter 6, so much so that the [mitigation skeptical] Heartland Institute repeatedly quotes me as evidence that the IPCC is flawed. Indeed, I have been unable to find any other review as critical as mine. I know (because they told me) that my reviews annoyed many of my colleagues, including some of my [RealClimate] colleagues, but I have felt no pressure or backlash whatsoever from it. Indeed, one of the Chapter 6 lead authors said, "Eric, your criticism was really harsh, but helpful. Thank you!"

Eric Steig is a strict party-liner as well. He's part of the "hockey stick team" which steadfastly defended Mann's infamous hockey stick, as well as Mann's 2009 attempt at re-doing it, which even Steig's colleague Gavin Schmidt has walked back from after recognizing its flaws. Steig hasn't done the same. I doubt he would. I'm sure both he and Venema would tell us it "doesn't matter" if Mann's work was garbage because blah, blah, replicated, blah. Which is basically, "We don't condemn bad science, but that's okay because this other science says the same thing. You say that other science is bad too? Shut up, you dirty denier."

Given that, what was Steig's ever-so-important criticism? He had said:

I have four chief concerns with this chapter. First, there are numerous important references left out, and an over-emphasis on papers by the authors themselves, which do not accurately reflect the communities' view. In general, the certainty with which this chapter presents our understanding of abrupt climate change is overstated....

There is plenty more, but it all ties back to that same point. Steig "strongly criticized" the IPCC by... saying it overstated certainty about abrupt changes in climate thousands of years ago. First, that's not very strong criticism. It doesn't challenge the party line. Consider, for instance, the references Steig provides in his reviewer comments for the IPCC:

The characterization of the abrupt changes as "the South Atlantic warmed when the north warmed, and vice versa" is incorrect. Although this way of describing the data is popular, it is not very accurate. At the very least, the numerous papers pointing this out should be cited. Steig and Alley, 2002; Wunsch, 2003; Huybers, 2003; Schmittner et al., 2003; Roe and Steig, 2004. Furthermore, the purported relationship between N and S can only be demonstrated for the largest events, not for the events generally.
[Eric Steig (Reviewer’s comment ID #: 252-19)]

Steig's argument isn't, "The consensus is wrong." Steig's argument is, "You've mischaracterized the consensus position on this issue in a way which undercuts my work." The sad reality is there is a lot of jockeying in the IPCC reports over whose work and/or views will be given prominence. This is just more of that. The idea that this example shows someone skeptical of any important mainstream position would be welcome in the process is ludicrous.

There's much more that could be said about that post, but let's move on to a different one: the post Venema wrote just before it. In it, he says:

That something is statistically significant means that it is unlikely to happen due to chance alone. When we call a trend statistically significant, it means that it is unlikely that there was no trend, but that the trend you see is due to chance. Thus to study whether a trend is statistically significant, we need to study how large a trend can be when we draw random numbers.
...
If you draw 10 numbers and compute their trends many times, you can see the range of trends that are possible below in the left panel. On average these trends are zero, but a single realisation can easily have a trend of 0.2. Even higher values are possible with a very small probability. The statistical uncertainty is typically expressed as a confidence interval that contains 95% of all points. Thus even when there is no trend, there is a 5% chance that the data has a trend that is wrongly seen as significant.**

If you draw 20 numbers, 20 years of data, the right panel shows that those trends are already quite a lot more accurate, there is much less scatter.

And he shows this graph to demonstrate his point:

[Figure: histograms of trends fitted to random series of 10 points (left panel) and 20 points (right panel), from Venema's post]
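For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of simulation Venema describes. Note the standard normal white noise and the ordinary least squares fit are my assumptions; his post does not fully spell out the setup.

```python
# A minimal sketch of the simulation: draw short white-noise series many
# times, fit a trend to each, and see how large the fitted trends can get.
# Standard normal noise and OLS are assumed; the original setup may differ.
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(y):
    """OLS slope of y against time steps 0, 1, ..., len(y) - 1."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

trials = 20_000
for n in (10, 20):
    slopes = [ols_slope(rng.standard_normal(n)) for _ in range(trials)]
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"n = {n}: 95% of fitted trends fall in [{lo:+.3f}, {hi:+.3f}]")
```

With 10 points, the 95% interval comes out around ±0.2 per step, consistent with his remark that a single realisation "can easily have a trend of 0.2."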

That's not terrible. Venema fails to discuss how the particular form of "randomness" used to generate the data shapes the results in ways that wouldn't hold for real-world data, but as a general point, it's alright. The problem arises when Venema goes on to say:

To have a look at the trend error for a range of different lengths of the series. The above procedure was repeated for lengths between 5 and 140 random numbers (or years) in steps of 5 years. The confidence interval of the trend for each of these lengths is plotted below. For short periods the uncertainty in the trend is enormous. It shoots up.
...
In fact, the confidence range for short periods shoots up so fast that it is hard to read the plot. Thus let's show the same data with different (double-logarithmic) axis. Then the relationship look like a line. That shows that size of the confidence interval is a power law function of the number of years.

The exponent is -1.5. As an example that means that the confidence interval of a ten year trend is 32 (10^1.5) times as large as the one of a hundred year trend.
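Before getting to the problems, it's worth noting the scaling itself is easy to verify for pure white noise. For unit-variance noise at unit spacing, the analytic standard error of an OLS slope over n points is sqrt(12 / (n(n^2 - 1))), which behaves like n^-1.5 for large n. Here's a quick check (my derivation, not anything taken from his post):

```python
# Verify the ~n**-1.5 scaling of trend uncertainty for white noise using
# the analytic OLS slope standard error: sigma * sqrt(12 / (n * (n**2 - 1))).
import numpy as np

def slope_se(n):
    """Analytic OLS slope standard error for unit-variance white noise."""
    return np.sqrt(12.0 / (n * (n**2 - 1)))

print(f"n = 10:  SE = {slope_se(10):.5f}")   # ~0.110
print(f"n = 100: SE = {slope_se(100):.5f}")  # ~0.0035
print(f"ratio:   {slope_se(10) / slope_se(100):.1f}  # close to 10**1.5 = 31.6")
```

So within his own setup, the "32 times" figure checks out. The question is whether that setup tells us anything about the actual temperature record.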

This is where things start to go off the rails. The results and figures Venema provides are fine for showing how uncertainty works given the specific form of randomness he uses to generate series, but that doesn't mean the same thing would be true for the global temperature record:

Some people looking at the global mean temperature increase plotted below claim to see a hiatus between the years 1998 and 2013. A few years ago I could imagine people thinking: that looks funny, let's make a statistical test whether there is a change in the trend. But when the answer then clearly is "No, no way", and the evidence shows it is "mostly just short-term fluctuations from El Nino", I find it hard to understand why people believe in this idea so strongly that they defend it against this evidence.

Especially now it is so clear, without any need for statistics, that there never was anything like an "hiatus". But still some people claim there was one, but it stopped. I have no words. Really, I am not faking this dear colleagues. I am at a loss.

Maybe people look at the graph below and think, well that "hiatus" is ten percent of the data and intuit that the uncertainty of the trend is only 10 times as large, not realising that it is 32 times.

This is complete and utter nonsense. First, the "random" data Venema uses is almost nothing like the global temperature record. Calculations performed on one will not give the same results as calculations performed on the other.

Second, while Venema talks about how his test could show things about "10 years" of data, he only used 10 data points for that. Similarly, when he used 20 years, he used 20 data points. 100 years, 100 data points. That's awkward as, presumably, Venema knows there are 12 months in a year.* Every major global temperature record provides estimates for monthly temperatures (some even provide estimates for daily temperatures). As a result, we are not limited to 10 data points for the last ten years.

Venema knows this full well. I have no idea why he ignores it. Interestingly enough, I discussed this same issue recently, where a person trying to claim people can't prove global warming is happening insisted on using only annual temperatures. He responded to my criticism of that by saying:

Regarding the length of the time series, the main temperature series relied upon by the IPCC, in its most-recent Assessment Report (AR5), begins in 1880 and is annual.

That claim of his was a pure fabrication, but perhaps Venema would like to use the same defense. Or perhaps he'd like to explain why he pretended we can only use annual temperatures for an analysis he should have known was completely bogus.

The reality is we have far more data than Venema pretends, and if you account for that, his analysis changes a great deal. Using his approach, the difference in uncertainty between 120 data points (12 months times ten years) and 1,200 data points (12 months times 100 years) is minimal. That's because when using Venema's approach, once you get past 100 data points, there is almost no uncertainty left in absolute terms.
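To put rough numbers on that, here is a sketch using the same analytic formula as above, with slopes expressed per year so annual and monthly fits can be compared. It still assumes uncorrelated, unit-variance noise, which real monthly data certainly is not, so treat the output as illustrative only.

```python
# Absolute trend uncertainty (in per-year units) for annual vs. monthly
# sampling, still assuming uncorrelated unit-variance noise. Illustrative
# only; real monthly data is autocorrelated, which erodes much of the gain.
import numpy as np

def slope_se_per_year(n_points, dt_years):
    """Analytic OLS slope standard error, expressed per year."""
    x = np.arange(n_points) * dt_years
    return 1.0 / np.sqrt(np.sum((x - x.mean()) ** 2))

print(f"10 yr, annual   (n = 10):   {slope_se_per_year(10, 1.0):.4f}")
print(f"10 yr, monthly  (n = 120):  {slope_se_per_year(120, 1 / 12):.4f}")
print(f"100 yr, annual  (n = 100):  {slope_se_per_year(100, 1.0):.4f}")
print(f"100 yr, monthly (n = 1200): {slope_se_per_year(1200, 1 / 12):.4f}")
```

Under these assumptions, monthly sampling cuts the absolute uncertainty of a ten-year trend to roughly a third of the annual-data value, and everything past 100 data points is small in absolute terms. The relative ratio between ten-year and hundred-year trends only changes once autocorrelation enters the picture, which is the next issue.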

But again, Venema's "random" data doesn't begin to represent the real global temperature record. For instance, the effective number of data points in the real world is less than the total count because of a thing called "autocorrelation." Autocorrelation is basically self-similarity in the data. It exists because temperatures from month to month and year to year are determined partially by past temperatures. The more autocorrelation there is in your data, the less information each individual data point provides.

Venema's data shows no autocorrelation despite the fact annual temperatures do contain autocorrelation. That causes him to understate the uncertainty in trends. At the same time, his use of annual temperatures instead of monthly temperatures exaggerates it. Figuring out how the two would balance out is a tricky business (made trickier by the fact monthly temperatures have different and greater autocorrelation than annual ones).
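A common rule of thumb makes the effect concrete. For AR(1) noise with lag-1 autocorrelation rho, the "effective" number of independent samples is roughly n(1 - rho)/(1 + rho). Strictly, that formula applies to the uncertainty of a mean, but it is often used as a rough adjustment for trends as well. The rho values below are placeholders chosen for illustration, not estimates from any real temperature series.

```python
# Effective sample size for AR(1) autocorrelated data. The rho values are
# invented for illustration, not estimated from any temperature record.
def n_effective(n, rho):
    """Rule-of-thumb independent-equivalent sample size for AR(1) noise."""
    return n * (1 - rho) / (1 + rho)

print(n_effective(100, 0.2))   # 100 annual points, mild autocorrelation   -> ~67
print(n_effective(1200, 0.6))  # 1200 monthly points, strong autocorrelation -> 300
```

In other words, 1,200 monthly points may carry far fewer than 1,200 points' worth of independent information, though plausibly still more than 100 annual points' worth.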

Maybe that's why Venema didn't do the job properly. If so, he deceived his readers. Maybe there's some other reason. However, there's yet another issue Venema ignored despite being well aware of it. Consider this graph he shows of global, annual temperatures:

[Figure: GISTEMP global annual temperature estimates through October 2016, as shown in Venema's post]

Take note of how there is no uncertainty displayed in it. Let's compare it to a chart created by the people he took his data from:

[Figure: GISS land-ocean temperature index chart, with five-year lowess smooth and blue uncertainty bars]

The charts show the same data. The second chart also includes a smoothed line, but that's not important. What is important are the three blue bars you can see in it. Here is what the creators have to say about those:

Land-ocean temperature index, 1880 to present, with base period 1951-1980. The solid black line is the global annual mean and the solid red line is the five-year lowess smooth. The blue uncertainty bars (95% confidence limit) account only for incomplete spatial sampling.

Notice the word "uncertainty." The blue bars in that chart show you an estimate of part of the uncertainty in their data. That's not the uncertainty in any trend. It's a recognition of the fact we don't know exactly what each individual data point in the global temperature record ought to be. Venema ignores this. Instead, he creates data sets with absolutely no uncertainty in them.

If there is uncertainty in individual data points, that will obviously increase the uncertainty in any trends one might estimate. Venema ignores that. Even worse, he ignores a very important point these blue bars tell us. As the bars show, uncertainty about past temperatures is greater than uncertainty about more recent temperatures. This means trends estimated over the last 10 years will be less affected by uncertainty in the data than trends estimated over any period reaching further back into the past.
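To see how uncertainty in the individual points feeds into trend uncertainty, here is a small Monte Carlo sketch. Every noise magnitude in it is invented for illustration; none of these are GISS's published uncertainties.

```python
# Sketch: how measurement error in individual data points widens the spread
# of fitted trends. All noise magnitudes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100                  # 100 "annual" data points
x = np.arange(n)
true_slope = 0.01
# Measurement noise that shrinks toward the present, mimicking the larger
# uncertainty in older data indicated by the blue bars above.
meas_sigma = np.linspace(0.3, 0.05, n)

def fitted_slopes(extra_noise, trials=20_000):
    slopes = np.empty(trials)
    for i in range(trials):
        y = true_slope * x + 0.1 * rng.standard_normal(n)  # "internal variability"
        y += extra_noise * rng.standard_normal(n)           # measurement error
        slopes[i] = np.polyfit(x, y, 1)[0]
    return slopes

for label, noise in (("without measurement error", 0.0),
                     ("with measurement error   ", meas_sigma)):
    s = fitted_slopes(noise)
    lo, hi = np.percentile(s, [2.5, 97.5])
    print(f"{label}: 95% of fitted slopes in [{lo:.4f}, {hi:.4f}]")
```

The extra noise visibly widens the interval around the fitted slope even though the underlying trend is identical, which is the point being made here.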

How does all this affect Venema's results? That's a complex question, but in the crudest and simplest terms, it means his statement:

Maybe people look at the graph below and think, well that "hiatus" is ten percent of the data and intuit that the uncertainty of the trend is only 10 times as large, not realising that it is 32 times.

is complete nonsense. Even worse, Venema knows it is complete nonsense. Venema publishes work about the modern temperature record. He knows about all of these issues. It wouldn't surprise me to find out he knows more about them than I do. Why would he ignore them then? I don't know. I won't guess. What I will do is take note of the fact no party-liners are speaking out against this.

That might just be because few people read the blog post. I don't know. What I do know is if somebody on the "skeptic" side had written a post as misleading as this to attack the party line, Venema and his pals would be outraged. Venema doesn't have to worry about that though. Venema is staying safe behind the party line.

10 comments

  1. Here is a comment I left at Victor Venema's place:

    I figured I should let you know I've published a post which is highly critical of this. You can read the details if you want, but put simply, the "analysis" used in this post is garbage. The series used in it are so radically different from what we have for temperature records that the results of this "analysis" are meaningless.

    Two central differences are: 1) Temperature data is not limited to annual records. The idea we have only 10 data points for 10 years of temperatures is laughable and necessary to come up with the claim there is 32 times as much uncertainty in 10 year trends as in 100 year trends. 2) The trends in this "analysis" are estimated for data which has no uncertainty. In the real world, individual points of data have uncertainty, and that uncertainty increases the further back in time you go. The increased uncertainty of past temperatures over recent temperatures would necessarily increase the uncertainty in any 100 year trend relative to any trend during the 21st century.

    The "analysis" used in this post is highly misleading. As a consequence, the results are greatly exaggerated.

    I don't know if or when it will pass moderation, but I want to keep a record of it.

  2. Comment released. Might better have been two posts; it got rather long.

    Scott Adams post. Yes, like Eric Steig I prefer to make statements for which I have evidence; if that is following the party line, I am happy to be part of that party. I told my colleagues multiple times that I do not trust the papers on trends in the distribution of daily data. Even if you would like me to, I cannot claim yet that they are wrong; I do not have that evidence. At the moment I only have enough arguments to say we should study it.

    Having evidence is not the same as being right. We would not have scientific progress if we did not allow for non-trivial errors. That a pioneering paper like Michael Mann's does not immediately produce the truth or use perfect methods is acceptable. It got a new field of study going. Global results are within his error bars, even with much more data and better methods, on which Mann also worked. Pretty good contribution. Something different from regular WUWT contributor, new world order conspiracy theorist and greenhouse-effect-denier Tim Ball; I would not want to be part of the party that celebrates him.

    Trends and fluctuations post. Yes, for long periods inhomogeneities and how well we can correct them become more important. That is the argument I make at the end of my post. Happy to see some recognition from your side on the importance of my work.

    You are the second person this week protesting that I did not write the post he would have wanted me to write. Auto-correlations mean that you have effectively fewer samples and that short-term trends thus become even more uncertain. The auto-correlations for annual mean temperatures are modest. Monthly data has much stronger auto-correlations. Thus you do not get much more data when you go from annual to monthly data. You do get additional complications due to the uncertainty in the seasonal cycle and the fact that the fluctuations also have a seasonal cycle. I did not want to go there, but to show the principle.

    If you write a post about monthly data do let me know; I would be interested in how much difference that makes. I would be surprised if the difference in uncertainty between 10 and 100 years of data is not still a lot larger for trends than it is for averages. I would be surprised if, as in the only comparison you quoted, the uncertainty in 10-year trends were nearer to 10 times than to 32 times the uncertainty in a 100-year trend.

  3. Victor Venema:

    Having evidence is not the same as being right. We would not have scientific progress if we did not allow for non-trivial errors. That a pioneering paper like Michael Mann's does not immediately produce the truth or use perfect methods is acceptable. It got a new field of study going. Global results are within his error bars, even with much more data and better methods, on which Mann also worked. Pretty good contribution. Something different from regular WUWT contributor, new world order conspiracy theorist and greenhouse-effect-denier Tim Ball; I would not want to be part of the party that celebrates him.

    This is the exact response I would have predicted. It is the party-line position, nearly verbatim. It's also complete nonsense. The absolute worst part of this is you say Mann's hockey stick "got a new field of study going." That isn't even close to true. Everything else you say about his paper is wrong and/or misleading, but if we can't at least agree on trivial details of the history of the field, there's little point.

    Quite frankly, I doubt you have the slightest idea of what I am talking about when I criticize Michael Mann's work. I've laid out the issues quite clearly multiple times, with the best resource being the two eBooks I wrote on the subject. I'd be happy to provide you a free copy of each so you can see if there is any aspect of what I say you think is actually wrong.

    Trends and fluctuations post. Yes, for long periods inhomogeneities and how well we can correct them become more important. That is the argument I make at the end of my post. Happy to see some recognition from your side on the importance of my work.

    I have no idea what my "side" supposedly is or who might have recognized any importance of your work. Nobody I know has. I certainly haven't.

    You are the second person this week protesting that I did not write the post he would have wanted me to write. Auto-correlations mean that you have effectively fewer samples and that short-term trends thus become even more uncertain. The auto-correlations for annual mean temperatures are modest. Monthly data has much stronger auto-correlations. Thus you do not get much more data when you go from annual to monthly data. You do get additional complications due to the uncertainty in the seasonal cycle and the fact that the fluctuations also have a seasonal cycle. I did not want to go there, but to show the principle.

    You gave specific numerical results which you now admit you knew were based upon an analysis which could not support them. You offered your readers no caveats or indication to warn them they shouldn't believe the numbers you gave were representative of the truth of things. It would have been easy to include a paragraph warning your readers to take the post as only demonstrating concepts. You could have easily explained there were many complexities you were not covering which made your numbers inapplicable to the real world.

    You didn't. You intentionally wrote something you knew to be misleading. You knowingly omitted many relevant details. That's lying. You lied. That's all there is to it. You can say:

    If you write a post about monthly data do let me know; I would be interested in how much difference that makes. I would be surprised if the difference in uncertainty between 10 and 100 years of data is not still a lot larger for trends than it is for averages. I would be surprised if, as in the only comparison you quoted, the uncertainty in 10-year trends were nearer to 10 times than to 32 times the uncertainty in a 100-year trend.

    But that you lied to your readers does not mean I am obligated to do the analysis you pretended to do. You are the one who should do the analysis you claimed to do.

  4. I just discovered that, due to my current inability to access my old laptop's hard drive, I do not have PDF versions of my two (short) eBooks on Michael Mann's hockey stick on hand. I'll see about scrounging them up again, but in the meantime, I do have pre-print versions of them. They don't have some of the formatting of the final versions, and I think they're missing the table of contents. Other than that, the only differences between them and the final versions should be a handful of small corrections that shouldn't affect any substantive points (though I think one correction changes a 5% to 11%).

    I've uploaded them to this site, and anyone who wishes to see a case against Michael Mann's work in a clear and concise form can check them out for free here (for the first part) and here (for the second part). And as part of my standard offer, if anyone thinks I have gotten things wrong or given an inaccurate or incomplete portrayal, I will happily run a guest post from them free of any editorial control.* You can say whatever you want, and I will leave it untouched so readers can hear another side of things.

    *With the small exception of forbidding things like profanity and pornography. Those are generally not welcome on this site.

  5. I hate triple posting on my own site, but I want to make a note of the fact I edited my last comment slightly to fix an HTML tag. In the process, I also fixed two typos. I would prefer everyone be allowed to edit their comments for the first ~15 minutes after posting, but I haven't been able to find a plugin that enables this without looking hideous when applied to this site.

    But as an administrative thing, broken HTML is problematic. Since I was correcting something anyway, I figured I'd cheat a little and fix my typos too.

  6. "Those are generally not welcome on this site."
    Generally, so there are exceptions?
    Thanks for the links to your ebooks.

  7. Hoi Polloi, the rule is no cursing. I just reserve the right to make exceptions to rules in situations I feel merit them. As far as I can recall, I have never made an exception for a commenter. I have, however, made an exception for myself before. You can see it here. It marks a rather special occurrence as it is the first (and I believe only) time I ever used that particular language. I still feel it was warranted.

    Given an extreme enough circumstance, I might make an exception and allow cursing in either a post or comment. I might also allow the use of curse words for purposes other than cursing (such as during a grammatical discussion, with the words used as objects of discussion). I wouldn't count on it, but I wouldn't rule it out entirely.

  8. Has the IPCC walked away from the hockey stick in its latest report?
    Steve McIntyre noted that they used upside-down Mann from Finland for Southern Hemisphere temperatures, but others suggested that they have moved away.

  9. Paleoclimatology has received far less focus from the IPCC in the last two reports, presumably because the hockey stick controversy tarnished the iconic image. Answering your question would require knowing what you have in mind when you say "the hockey stick." Do you mean the original one, the concept of one, or what?

    The best off-the-cuff answer I can give is the latest IPCC report did use millennial temperature reconstructions but did not rely upon or promote controversial work anywhere near as much as previous reports. The work it did use is, in general, not as obviously flawed as the work relied upon by earlier reports. Make of that what you will. Nobody seems to have cared much about that chapter of the latest report, so I haven't paid too much attention to it myself.
