2011-03-30 17:57:52
Thoughts on McIntyre's obsession with obscure, decades-old Briffa data
John Cook

john@skepticalscience...
124.185.238.238

I heard someone say somewhere that we shouldn't respond to the McIntyre pre-1550 Briffa data controversy, that it's a distraction, irrelevant. But I'm thinking perhaps we should respond for that very reason. Our response shouldn't be primarily about the Briffa data, though. The driving narrative should be about skeptics like McIntyre, stuck in the past, digging through obscure data files, striving to drum up anything to relive his hockey-stick faux-glory, the tragedy being that his obsession with the obscure past means he isn't seeing the future and the dramatic impacts of future global warming.

So, a possible approach. Robert Way has done some impressive research into this subject and I think could write a mean advanced rebuttal that delves into the nuts and bolts of the technical issues. But the basic rebuttal should give only the basic technical details, just enough to understand what is going on (there aren't enough data points that far back in the past), with the driving narrative being how skeptics/McIntyre distract from the present and the future by trying to find meaning in meaningless, obscure old datasets. I'm sure the language could be "SkSified" (e.g. not too emotive) but that's the general sense of what I'm thinking. Perhaps Rob Honeycutt could work with Robert Way on writing a compelling narrative? (btw, I will shortly add dual authorship to blog posts so we can have more than one author.)

Anyway, just a thought :-)

2011-03-30 18:38:17
Rob Painting
Rob
paintingskeri@vodafone.co...
118.93.204.120

Funny, I was thinking the very same thing. It would be good to put McIntyre's obsession with this into context. It would be great to have a timeline of McIntyre's revelations placed alongside changes in the real world. For example: 2003, McIntyre comes out with some lame claim, while Europe is struck by a devastating heatwave; 2007, another bogus claim, while Arctic sea ice reaches its lowest summer minimum on record; etc, etc. In other words, it doesn't make the slightest damn difference to the warming we are experiencing, and won't change future warming one iota.

2011-03-30 18:46:06
Rob Painting
Rob
paintingskeri@vodafone.co...
118.93.204.120

Oh, and I disagree with ignoring these things; someone has to mount a rearguard defence. If left without rebuttal, these canards have a habit of spreading like a virus of stupid.

2011-03-30 20:55:58
Ari Jokimäki

arijmaki@yahoo...
192.100.112.210

I think this type of approach should be taken in a quiet phase. Right now, while McIntyre is actively pushing his claims, this approach would look like hand-waving (though that depends on how the approach is actually implemented).

2011-03-31 02:14:51
ditto
dana1981
Dana Nuccitelli
dana1981@yahoo...
64.129.227.4

I had a similar thought: McIntyre is obsessing over 12-year-old research, and we should highlight that fact. But I would also like to see a detailed refutation of his "hiding data" claims, so I think the basic/advanced rebuttal idea is a good approach.

2011-03-31 03:55:10
grypo

gryposaurus@gmail...
173.69.56.151

I really like the ideas being discussed here for dealing with McIntyre's crap.  My tack would be to hit the heart of the matter, which is the constant insinuation made by these claims.  If we take the pre-1550 deletion example, he says:

--Needless to say, one of the reasons for the reader being “uninformed” is the deletion of adverse data (both before 1550 and after 1960) to give the impression of “corroboration” of the “general validity” of the reconstructions

Now, he follows this up by recognizing the difference in sample size, so why the insinuation?  This isn't a call to use better statistics or find lost data or anything like that. He is saying that they deleted data not because of small sample size and poor statistical analysis; he is saying they are being purposely fraudulent in their display of the data. This is serious.  As much as he may not want to own that insinuation, it is there, and it is the reason these posts go viral overnight.  Example 2, in the same post, about why the reconstruction changed in B2001 even with the same data:

--The only reason that I can deduce is that the Briffa 2001 reconstruction had a rhetorical similarity to the Mann and Jones reconstructions in the 1400-1550 period – and therefore was shown, while the Briffa and Osborn 1999 version showed a major discrepancy – and was therefore not shown.

Once again, look at what he is saying.  This is akin to academic fraud.  I see no other way around these insinuations.  Then it gets even worse.

--The effect of using principal components on regional averages is to change the weights for individual sites, including the possibility of negative weights i.e. flipping the regional MXD series. In particular, the closing upticks in the Briffa 2001 reconstruction may well depend on the flipping of data – a point that I’ll try to examine in the future.

He's describing the method that Briffa used in 2001 as if flipping was the reason it looks the way it does.  Any cursory reading of the paper shows otherwise, but that's not his area.  He is in the business of making scientists look like incompetent frauds.  I think this is the behavior that has to be somehow tactfully exposed along with the other ideas proposed in this thread.

And to dispel anyone's notion that he isn't reading scientists' minds to make them look like frauds, he says in the previous post:

-- Climategate scientists were well aware of the importance of figures. Briffa and Osborn knew that the graphic with the deletion of the decline would leave a different impression than one that disclosed the decline.

2011-04-01 08:04:31
Shoyemore
Toby Joyce
tobyjoyce@eircom...
86.46.189.99

One of the reasons I despair of this argument is that the data being discussed is so old.

However, I think a post emphasising the "debate" at this point in time is so "academic" as to be trivial would be very useful indeed.

2011-04-01 08:53:51
Comment
Robert Way

robert_way19@hotmail...
134.153.163.105

The challenge with addressing Mc is that he does make a good point: the authors do not state "why" they chose 1550. I can hypothesize why, and Nick Stokes has also done so; there are compelling reasons why I myself, or he himself, would cut off around that time, but the selection of 1550 (or 40 proxies) is not supported by any statistical analysis. The question is: at what point should the authors have cut it off? The answer seems to be that the method they originally used was not good enough to extend further back, but their later (2001) method was. It is a challenge without hearing back from Briffa himself on the topic; we kind of have to reconstruct the "why" for him to rationalize it to Mc. That being said, Mc is ridiculous with his accusations... It was arbitrary to some degree, sure, but it's not an attack on science like Mc purports.

2011-04-01 09:13:35
Briffa
dana1981
Dana Nuccitelli
dana1981@yahoo...
64.129.227.4

Have you tried contacting Briffa to ask about his reasoning for 1550?

If we can't get an explanation from Briffa himself, I'd suggest giving some possible reasons why he cut off the data at 1550, and explaining that it's just a judgment call.  It's the same thing as the divergence problem: if you're trying to use data as a temperature proxy, you probably only want to use the data you're confident accurately represents temperatures.  Perhaps make the point that you agree with McIntyre that they should have explained why they chose 1550 as the cutoff, but that making assumptions about their motivations is inappropriate.

And definitely emphasize the point that we're talking about a paper from 12 freaking years ago, and the research field has advanced significantly since then.

2011-04-01 16:51:40
Rob Painting
Rob
paintingskeri@vodafone.co...
118.92.65.213

Robert Way, I think there should be two posts on this: one on the technical issues raised, and another on the big picture. What the hell does it really matter? Biiiiiiiigggggg trouble still headed our way!

2011-04-02 00:29:17
grypo

gryposaurus@gmail...
173.69.56.151

Robert,

If you want to tackle the pre-1550 cutoff without a complete explanation tied to that specific data, there is actually a lot of literature on processing data for low-frequency signals using RCS.  In particular, this paper, Esper et al 2003, describes the methodology used, although you'd need to get the more technical references therein to calculate a regional curve yourself with any data, I imagine.  The section 'influence of sample depth' gave me a better understanding of the reasoning behind Briffa's decision.  For each sample size you create a regional curve.  When that is done, create an RCS curve combining all the data (this is normal tree-ring width standardization).  Then compare the subsample curves against the full RCS curve to see at what sample size the curve loses the signal.

Briffa 1996 touches on these methods as well.
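For anyone who wants to play with the sample-depth idea, here's a toy sketch in Python. To be clear about what's made up: the data is synthetic, the helper names (make_tree, regional_curve, chronology) are my own invention, and this is a crude simplification of RCS for illustration only, not the actual method from Esper et al 2003 or anything Briffa used.

```python
# Toy sketch: regional curve standardization (RCS) plus a sample-depth check.
# Synthetic data, simplified method -- for building intuition only.
import random

random.seed(0)

N_YEARS = 200   # calendar years covered by the chronology
MAX_AGE = 100   # maximum cambial (ring) age per tree

# Fake "climate signal": a slow warming trend.
signal = [1.0 + 0.002 * yr for yr in range(N_YEARS)]

def make_tree(start_year):
    """Synthetic tree: age-related growth decline times the climate signal."""
    rings = []
    for age in range(MAX_AGE):
        yr = start_year + age
        if yr >= N_YEARS:
            break
        growth = 2.0 / (1.0 + 0.05 * age)            # biological decline curve
        width = growth * signal[yr] * random.uniform(0.8, 1.2)
        rings.append((yr, age, width))
    return rings

trees = [make_tree(random.randrange(N_YEARS - 20)) for _ in range(80)]

def regional_curve(sample):
    """Mean ring width at each cambial age, pooled across all trees."""
    sums, counts = [0.0] * MAX_AGE, [0] * MAX_AGE
    for tree in sample:
        for _, age, width in tree:
            sums[age] += width
            counts[age] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

def chronology(sample, rc):
    """Per-year mean of standardized indices (width / regional-curve value)."""
    sums, counts = [0.0] * N_YEARS, [0] * N_YEARS
    for tree in sample:
        for yr, age, width in tree:
            if rc[age]:
                sums[yr] += width / rc[age]
                counts[yr] += 1
    return [(s / c if c else None, c) for s, c in zip(sums, counts)]

rc_all = regional_curve(trees)
full = chronology(trees, rc_all)

# Sample-depth check: rebuild the chronology from progressively smaller
# subsamples and measure how far each drifts from the full-sample version.
for n in (40, 10, 3):
    sub = trees[:n]
    part = chronology(sub, regional_curve(sub))
    diffs = [abs(p[0] - f[0]) for p, f in zip(part, full)
             if p[0] is not None and f[0] is not None]
    print(n, round(sum(diffs) / len(diffs), 3))
```

The point the printout should illustrate is the one from the 'influence of sample depth' section: as the number of contributing series shrinks, the subsample chronology wanders away from the full one, which is a plausible (reconstructed, not confirmed) rationale for cutting a reconstruction off where sample depth collapses.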