Mann's Screw Up #2 - Non-Robust

The last item in this series discussed Mann responding to criticisms with (sometimes contradictory) lies. Today's item will show he resorts to dishonesty even when not criticized.

In 1998, Michael Mann and two co-authors published a paper (MBH98) claiming to reconstruct temperatures of the northern hemisphere back to 1400 AD. It claimed modern warmth was unprecedented in 600 years with “roughly a 99.7% level of certainty,” and it said this result was robust:

the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.

“Dendroclimatic indicators” means tree ring data, the primary source of data for MBH98. That, coupled with the strong statement of certainty, portrayed their results as beyond dispute. This was nonsense. You don't have to take my word for it. Michael Mann himself says so in his book. On page 51, he said after MBH98 was published he performed tests that:

revealed that not all of the records were playing an equal role in our reconstructions. Certain proxy data appeared to be of critical importance in establishing the reliability of the reconstruction–in particular, one set of tree ring records spanning the boreal tree line of North America published by dendroclimatologists Gordon Jacoby and Rosanne D’Arrigo.

If “one set of tree ring records” was “of critical importance in establishing the reliability of the reconstruction,” the reconstruction could not have been “relatively robust to the inclusion of dendroclimatic indicators.” And according to Michael Mann, he knew this shortly after MBH98 was published.

This raises all sorts of questions. Why did he only test his claim of robustness after he published a paper saying his results were robust? Once he found out his results weren't robust, why didn't Mann tell people? Why didn't Mann correct his paper? Most importantly, why did Mann keep silent about this when he published another paper (MBH99) the next year extending his results back to 1000 AD?

We can speculate as to Mann's motives as much as we want, but the reality is simple. He made a bold claim in his paper without testing to see if that claim was true. When he found out his claim was untrue, he made absolutely no effort to correct it. Instead, he continued working on the same project, acting as though his earlier claim was completely true.

After that, Mann spent years defending his work from critics. He argued point after point, even starting a blog with several colleagues to be better able to defend his work. And in all those years and all those arguments, not once did he admit his results weren't robust. Not once did he just come out and say his results were dependent entirely upon “one set of tree ring records.” With that in mind, read this paragraph Mann wrote:

The MBH98 reconstruction is indeed almost completely insensitive to whether the centering convention of MBH98 (data centered over 1902-1980 calibration interval) or MM (data centered over the 1400-1971 interval) is used. Claims by MM to the contrary are based on their failure to apply standard ‘selection rules’ used to determine how many Principal Component (PC) series should be retained in the analysis. Application of the standard selection rule (Preisendorfer’s “Rule N’“) used by MBH98, selects 2 PC series using the MBH98 centering convention, but a larger number (5 PC series) using the MM centering convention. Curiously undisclosed by MM in their criticism is the fact that precisely the same ‘hockey stick’ pattern that appears using the MBH98 convention (as PC series #1) also appears using the MM convention, albeit slightly lower down in rank (PC series #4) (Figure 1). If MM had applied standard selection procedures, they would have retained the first 5 PC series, which includes the important ‘hockey stick’ pattern. The claim by MM that this pattern arises as an artifact of the centering convention used by MBH98 is clearly false.

Your eyes glazed over, right? Who cares about PCA, Preisendorfer’s Rule N or whatever this argument is about? Mann could be completely right or completely full of it in this paragraph. Most people wouldn't be able to tell the difference, especially not when there are pages upon pages of text like this.
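For readers who do want a rough sense of the jargon: Preisendorfer's Rule N is a selection rule that retains only those principal components whose explained variance exceeds what random noise of the same size would produce. Here is a minimal sketch of the idea on synthetic data; this is an illustration of the general rule, not Mann's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_fractions(X):
    """Fraction of total variance explained by each PC (conventional PCA)."""
    Xc = X - X.mean(axis=0)
    s2 = np.linalg.svd(Xc, compute_uv=False) ** 2
    return s2 / s2.sum()

def rule_n(X, n_sims=200, quantile=0.95):
    """Sketch of Preisendorfer's Rule N: retain the leading PCs whose
    variance fractions exceed the corresponding 95th-percentile fractions
    obtained from PCA on random noise of the same shape."""
    n, p = X.shape
    null = np.array([variance_fractions(rng.normal(size=(n, p)))
                     for _ in range(n_sims)])
    threshold = np.quantile(null, quantile, axis=0)
    fractions = variance_fractions(X)
    keep = 0
    while keep < p and fractions[keep] > threshold[keep]:
        keep += 1
    return keep

# Toy network: 70 noisy series that all share one underlying signal.
signal = np.sin(np.linspace(0, 6, 500))
X = np.outer(signal, rng.normal(size=70)) * 3 + rng.normal(size=(500, 70))
retained = rule_n(X)
print("PC series retained by Rule N:", retained)
```

In this toy case the rule retains one PC, because only the shared signal rises above the noise floor. The point of contention in the Mann dispute was how many PCs such a rule keeps under different centering conventions.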

But now that we know Mann tested his results and found “one set of tree ring records” was “of critical importance in establishing the reliability of the reconstruction,” it's easy to see what Mann did. Look at the second to last sentence of that paragraph:

If MM had applied standard selection procedures, they would have retained the first 5 PC series, which includes the important ‘hockey stick’ pattern.

Ignore the technical issues. Just look at that and remember the reliability of Mann's results depended entirely upon "one set of tree ring records." Mann is saying you get a hockey stick if you keep the right "PC series, which includes the important 'hockey stick' pattern." What that means is, if you keep that "one set of tree ring records," you get a hockey stick.

Think about that. Years and years of technical discussions and arguments over details all boils down to one simple fact. Mann's results were entirely dependent upon a small amount of tree ring data. Remove it, and you don't get a hockey stick. Remove it, and Michael Mann doesn't get his name heard by millions of people. Remove it, and Michael Mann doesn't get fame and acclaim. Remove it, and Michael Mann is just another nobody toiling away in obscurity.

And he knew that. He knew his results were entirely dependent upon a small amount of tree ring data. That's why he vehemently argued, for years, that we have to include that data. He just couched it in technical jargon because he knew if he ever came out and told people, "Yeah, my results are entirely dependent upon a small amount of tree ring data," he'd have been laughed at.

Or maybe not. Maybe people would have thought it was cool. I mean, maybe Michael Mann just found an area with some magical trees which could tell us the temperature of the entire Northern Hemisphere.

Of course, if those trees really are magical thermometers, shouldn't he have highlighted that so people could investigate them further?


  1. Choosing, prior to any results, which data you are going to either exclude or weigh heavier (which Mann did with PC1) is not by any definition 'science.'

    Then turning around and consciously hiding what you did is, by every definition, fraud.

    Really, that's the whole case in a nutshell. Game over. The quarterback is toast.

(Tangentially related to this, and in Mann's defense (?!), Mann simply says he went back and checked on the robustness of the results. I've always been under the impression that he figured it out much later (i.e., after MM03). So when you suggest that he published MBH99 knowing about the lack of robustness...I'm not so sure about the chronology there. IOW, are you SURE about WHEN he knew?)


  2. Dr. C, Mann doesn't say when he did the tests in the part I quoted, but he does say it. The book is set out in chronological order, and the paragraph is in a section right after MBH98 was published. He explains the reason for doing the tests was Phil Jones and several others published a paper after MBH98 which claimed to reconstruct temperatures to 1000 AD. That made Mann and his co-authors re-examine their data and led to MBH99 being written.

Incidentally, the results of this testing are what was found in the CENSORED directory. Mann explicitly states that in his book. That resolves any outstanding disputes about what the directory was for or when it was made. I wanted to fit that point into this post, but I couldn't find a way to, and I had already taken longer than I wanted because of the couple non-Mann posts I wrote recently. I'll probably want to go back and revise this post sometime.

    Anyway, I think it's good the CENSORED directory issue is finally resolved. We now know, beyond any dispute, it contained the results of sensitivity tests Michael Mann performed which showed his hockey stick was dependent entirely upon a small amount of tree ring data. We also know this test was performed after MBH98 was published but prior to MBH99 being written.

  3. Ok, my ADD meds are wearing off, so I know I'm losing my focus here. I apologize in advance for my dunderheadedness....

I still am not seeing the evidence for exactly when he knew. How are you sure he knew before MBH99? (I understand what you said about the book being in chronological order, but can you be certain that Mann wasn't just off on a momentary tangent? No one, after all, has ever accused him of being the clearest of writers...)

    I guess the reason I'm pushing this is because it is SUCH a head scratcher....? I thought I had the chronology down on this pretty well, but if your chronology here is true, then this is a freaking BOMBSHELL. This isn't just hyperbolic fraud; this is real, dyed-in-the-wool, 24 karat fraud. The idea that he would incriminate himself in his own book just seems....hunh?

    Apart from that, how does the censored directory play into this at all? According to Mann, the dataset on which the HS depends was one from Jacoby & D'Arrigo. But the CENSORED directory only included datasets from Graybill & Woodhouse.

  4. I put this on WUWT. I just wanted to be sure you saw it.

    Gunga Din says:
    February 13, 2014 at 2:04 pm

    Brandon Shollenberger says:
    February 11, 2014 at 11:31 pm

    Gunga Din says:
    February 11, 2014 at 6:38 pm

    In the “HarryReadme” file wasn’t there a line of code that produced a hockey stick even if random numbers were entered? (The fudge factor.) Was that Mann’s code or somebody elses?

Gunga Din, definitely not. That file had nothing to do with Michael Mann’s work, and while his methodology definitely mines for hockey sticks, there’s no single line responsible for it. And it’s not because of any “fudge factor” (that’s a separate issue altogether).

    Thank you.
    Sometimes names or things are substituted for something related to them or that represents something that has a relationship to them. “Coal trains of death” for Hansen and CAGW or “The Hockey Stick” for deceptive or shoddy climate “science”. (I think the figure of speech is called Metonymy.)
    But I don’t want to have the actual facts mixed up.

  5. After Steve McIntyre's experiences with Mann, it seems reasonable to assume that everything he says or writes is not entirely what it appears to be. No matter what he says now, my guess is that he well knew the issues with the data when he was creating the papers.

Why do you think he did not want to provide data and code to Steve? Why do you think he referred to "his dirty laundry" when he reluctantly shared data and code with trusted colleagues? Why did he create new statistical procedures for the paper? How does one come up with short centered PCA over standard PCA unless one has tried the standard and found the results did not match the desired result? How did the "stepped" approach come about (which was not described in the original paper) unless one had tried standard approaches and found that they did not produce the desired result? Why did he decide to report some information from verification tests but not others, and then lie about it? It certainly seems like he knew the weaknesses of the study and fought very hard to manufacture meaningful appearing results while at the same time not disclosing what he actually had done. And, why now would he admit to weakness in his study, yet not retract the study? He's certainly acting like he's been caught and he's sort of trying to distance himself from the work, but isn't going to do the right thing and issue a retraction. He appears to be trying to eat his cake and have it, too.


  6. Dr. C, Michael Mann says the testing referred to in my post was part of what led to MBH99 being written. That could only be true if it were done prior to MBH99. There's no doubt about when the tests were done if you read pages 50-53. If you'd really like, I can provide some of the text to show this. It'd just be a bit of typing (since I have a hard copy of the book). As for your question:

    Apart from that, how does the censored directory play into this at all? According to Mann, the dataset on which the HS depends was one from Jacoby & D’Arrigo. But the CENSORED directory only included datasets from Graybill & Woodhouse.

    This is actually a very good question. I was curious if anyone would pick up on the fact I glossed over this. You'll note the quote does not actually say Mann's hockey stick disappeared when he did his testing. That's because it didn't.

    As you note, the "one set of tree ring records" is from Jacoby and D'Arrigo. If you remember, my original list included an item about the (misuse of the) Gaspe series. The Gaspe series is from Jacoby and D'Arrigo. That is the series Mann referred to in that quote.

    I intend to write my next post (#2.1) on the Gaspe series, and I plan to delve into this matter more in it. But basically, Mann's test found if he removed the 19 Graybill and 1 Woodhouse series, he could still get a hockey stick thanks to the Gaspe series. That's why it was "of critical importance in establishing the reliability of the reconstruction," not the results of the reconstruction.

I glossed over that in this post, but I think I managed to do so without ever saying anything incorrect. The hope was to explain the point in a straightforward way people can understand, without bogging them down with all the details.

    Basically, this post makes it sound like Mann removed 20 series and found his hockey stick disappeared. In actuality, Mann removed 20 series and found he could still get a hockey stick if he included one other (which he had misused).

  7. Brandon: Thank you for undertaking this series. In addition to possibly uncovering or bolstering a line of defense for Mr. Steyn, it is shaping up to be a concise tool for those of us who debate this topic across the Internet. Further, in the absolute, it is a reminder of the necessary relationship between science, integrity, and transparency.

Fabi, I'm glad to! I've actually been intending to do something like this for some time now. I've always loved Climate Audit, but I didn't like how difficult a resource it is to use if you're not well-acquainted with it. A person who didn't follow Climate Audit for years would struggle to really understand the issues. I've always thought a more user-friendly resource would be helpful.

    This isn't my ideal solution, but I'm hoping it'll be a step in the right direction.

  9. So as not to confuse people, you should mention that Yamal is not in MBH98, though it has its own single tree problem.

  10. HughMcdonough, how would pointing that out make things less confusing? Yamal hasn't been mentioned on this page. There's no reason anyone should be thinking about it. If anything, bringing it up would make matters more confusing as people would wonder what in the world I was talking about.

    Or am I missing something?

  11. Brandon, I thought that we clearly stated the problem of Graybill bristlecones being magic thermometers, as, for example, the following from Mc-Mc (2005 EE):

If the reader takes the (reasonable, we think) view that these unusual trees [strip bark bristlecones] are not mystical antennae for an elusive “climate signal” missed by all other proxy indicators, then each of the above problems and issues must be dealt with systematically, prior to any reliance being placed on bristlecone pine ring widths as the dominant arbiter of world climate history.

In contemporary controversy, Mann avoided discussion of bristlecones, instead trying to represent the dispute as about the "right" number of principal components to retain. By doing so, Mann tried to make it into a mathematics problem which nobody understood, rather than the obvious data analysis problem of whether Graybill bristlecones were magic thermometers. With the corollary that, if they were magic thermometers, then surely the reason for this remarkable property ought to have been of great interest to climate scientists.

    However, we ourselves did not make any claims on the "right" number of PCs. We catalogued results under different assumptions (and our catalogue agreed with theirs.) Both Wahl and Ammann and Mann ignored what we actually wrote on the topic, and accused us of not considering permutations that were clearly itemized in MM (2005 EE).

  12. The comment in Mann's book about the D'Arrigo treeline series is odd, given that the CENSORED directory is about bristlecones. Taken at face value, it points to some other analysis that was not on the UVA FTP site (which wasn't comprehensive.) Perhaps that analysis resulted in the unique extension of the Gaspe series to AD1400 to get it into the early network.

  13. Steve McIntyre, I thought you did too. The problem I see is most people will never read what you wrote. Your E&E paper was ~25 pages long without references, and your posts about the issue at Climate Audit are buried in with more than a decade of posts. Even if a person did find a post you wrote about the issue, it's likely the post would refer to past discussions which they wouldn't be familiar with.

    Most of what I've written (and will write) comes from me following your work for over a decade. I learned a lot by following the various arguments across papers and blogs. I'm trying to distill that into a form that's readily accessible.

    Little of what I write will be new, and I don't want to steal credit for any of it. I don't think it'd be possible to overstate the credit you deserve. I just don't know how you give proper credit to a body of work that spans hundreds of blog posts, comments, and papers. And that's without considering all the commenters at blogs who have added to my knowledge.

    Right now I'm just hoping I do you guys justice.

  14. On the issue of Mann's comment about the CENSORED directory, I agree his remark in the book seems odd. However, he also says:

    I performed a series of so-called sensitivity tests, in which various proxy records are removed or - to use standard statistical terminology, "censored" - from the network, and the sensitivity of the results to those records is gauged by noting how much of an effect their removal has on the result.

My understanding is the directory only showed results from one test, but here he claims he did multiple tests. My guess would be Mann tried removing different combinations of series, using that directory for his output. That meant his code saved results to the same spot each time; every time he tested a different combination, it overwrote his previous results. Because of that, the directory shows the last test he did, not all of them.

    That seems plausible because I find it difficult to imagine Mann removed those 20 series on his first test then did nothing more. Why would he have picked that specific set of series to remove? It seems unlikely he'd know in advance which to remove, and it seems impossible he'd guess them by chance. I'll admit I really don't know though. All I know is Mann's comment shows he knew his data wasn't robust to the removal of "bristlecones,"* much less all dendroclimatic indicators.

    *For those who don't know, "bristlecones" has often been used as shorthand for a specific set of data that isn't entirely from bristlecone trees. It also includes a couple foxtails and the Gaspe series.
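The kind of "censoring" sensitivity test Mann describes in the quoted passage can be sketched in a few lines. Everything here is invented for illustration (the series names, the data, and the stand-in `reconstruct` function); the point is only the shape of the procedure: remove candidate records, re-run the reconstruction, and gauge the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy network (names and values invented for illustration):
# 50 annual series, 200 years each.
proxies = {f"series_{i:02d}": rng.normal(size=200) for i in range(50)}

def reconstruct(network):
    """Stand-in for a real reconstruction method: a simple network mean."""
    return np.column_stack(list(network.values())).mean(axis=1)

baseline = reconstruct(proxies)

# "Censor" candidate sets of records and gauge how much their removal matters.
effects = {}
for removed in (("series_00",), ("series_01", "series_02")):
    censored = {k: v for k, v in proxies.items() if k not in removed}
    effects[removed] = float(np.abs(reconstruct(censored) - baseline).max())

for removed, effect in effects.items():
    print(removed, round(effect, 3))
```

A result is "robust" to a set of records when censoring that set leaves the reconstruction essentially unchanged; Mann's test, by his own account, showed the opposite for one set of tree ring records.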

  15. Trying to distill matters into a usable form is not a small job. I've had some inquiries about doing so. It's helpful for someone to do it with fresh eyes.

Locating topics in early CA posts is made more difficult because tags were not an option at the time (or at least I didn't use them). I have tagged a number of (say) verification r2 posts. I'll try to find out if there is a listing of tags, as that might be useful.

I think that it's helpful to distinguish (1) MBH98-99 issues; (2) IPCC TAR issues; (3) Mann et al 2008 issues; (4) other issues; as this division of topics can be related to the terms of reference of the various "investigations". The lack of connection of the various "investigations" to issues actually in play is really quite remarkable. Misrepresentations about non-robustness issues are an excellent example. None of the "investigations" even hint that they touched on these issues.

  16. Thanks for that link. It's been too long since I delved into MBH98's data. I really ought to refresh my memory.

By the way, I remember you discussing the 159/112 proxies issue over at The Blackboard in the topic which started this all. I'm curious, did you ever figure anything out about the 159 value? As best I can tell, Mann wasn't claiming there were 159 proxies so much as that you need 159 series to represent the 112 proxies because of the stepwise nature of his calculations. I also know PCs were recalculated for each interval.

    That makes it seem like he was saying you need extra series to represent the different versions of the PCs. As in, every time new data comes in and alters a PC calculation, it creates a new series. If that's true and his numbers are correct, it'd mean there are 47 times a calculated PC changes between steps.

    Or maybe I'm way off.

that's on the right track, but it's a little different. In the AD1820 step, there were 112 contributing series of which 31 (as I recall) were PC series for 6 networks and 71 were non-PC series. In earlier steps, PC series were calculated based on available trees. However, the PC series were not recalculated in every step, but irregularly. And the number of retained PCs in each step changed. The implication is that there were 71 non-PC series plus 88 different retained PC series over the 11 steps x 6 networks. We asked Mann to provide an explanation but he refused. We asked Nature and they required Mann to provide a Corrigendum SI listing of proxies including PC series for each step. But the total didn't add up to 159. I did the totals at the time and it's probably documented in an early CA post. I asked Nature to provide an explanation of the 159, but they begged off, saying that the number 159 didn't appear in the article and thus wasn't their responsibility. I presume that Mann put out an incorrect number, but was unwilling to simply acknowledge making an error.

  18. One point on the history. I could be wrong but I don't think Real Climate was founded specifically so Mann could defend his work. I'm sure that's at least one reason why, certainly a primary reason for him. But I doubt that a bunch of guys at Fenton Communications, in between meetings with Cindy Sheehan, said to each other "gosh, we should create a website to defend Mann!"

    I think a primary motivation for them was to counteract State of Fear.

    But maybe I'm not remembering right.

Incidentally, I should really go back and read that again. It really is a very good book. I especially enjoyed the character that is pretty obviously Lindzen.

  19. Steve, I was looking through the Mann data to reacquaint myself, and it seems to fit what I suggested. To test my idea, I went to the page listing the data used in each step of the reconstruction (after PC calculations). Starting with the 1400 step, I looked at the calculated values for each PC. I found:

    In the 1400 step, there's SWM PC1, NOAMER PC1 and NOAMER PC2. That's three series.
    In the 1450 step, the VAG PC1 joins. SWM PC1, NOAMER PC1 and NOAMER PC2 change. That's four more series.
    In the 1500 step, SWM PC2 and NOAMER PC3-6 join. SWM PC1 changes. That's six more series.
    In the 1600 step, SWM PC3&4, VAG PC2, NOAMER PC7, SAOMER PC1&2 and AUSTRAL PC1-3 join. SWM PC1&2, NOAMER PC1-6 and VAG PC1 change. That's 18 more series.

    This means by the point you reach the 1600 step, you need 31 unique indicators even though a maximum of 18 are used in any given step. I suspect if I continued counting like that up through the 1820 step, I'd find 88 unique series for the 31 PCs.

Lacking the code to automate most of the steps for this, I decided to try approaching the problem in reverse. I first calculated the number of unique predictors necessary to represent the PCs if each PC series changed in each step (effectively just a summation of the number of PC series in each step prior to 1820). That gave me 206.

    I then looked at the starting dates of the underlying series for the PC calculations, figuring a PC calculation wouldn't change between steps if no new data was added between them. The NOAMER series all start before 1700, so I was able to subtract them from consideration for the 1730 and later steps (206 - 9 * 5 = 161). The same was true for the SWM network (161 - 9 * 5 = 116). VAG is complete at 1750 (116 - 3 * 3 = 107). SAOMER is complete at 1600 (107 - 3 * 6 = 89). TXOK is complete at 1700 (89 - 3 * 5 = 74), as is AUSTRAL (74 - 4 * 5 = 54).

It turns out my assumption was a bit off because, as you noted, they did not recalculate each PC at each step. Still, it shows there are ~50 opportunities for PC series to change. That's very close to the number of missing series. It could be a coincidence, but that seems unlikely.

    Tomorrow, I might see if I can write the code to extract each step's calculated PCs so they can be compared to the PCs calculated at the other steps. That would make it easy to either prove, or disprove, my idea. It's a pain though because of how the data is formatted.
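The subtraction chain in the comment above can be replicated in a few lines. The PC counts and step counts below are taken straight from the comment itself, not from an independent check of the MBH98 data, so they inherit whatever approximations the comment made.

```python
# Reproduce the back-of-the-envelope count of unique PC predictors.
total = 206  # unique predictors if every PC series changed at every step

deductions = [            # (network, retained PCs, steps with no new data)
    ("NOAMER",  9, 5),
    ("SWM",     9, 5),
    ("VAG",     3, 3),
    ("SAOMER",  3, 6),
    ("TXOK",    3, 5),
    ("AUSTRAL", 4, 5),
]
for network, pcs, steps in deductions:
    total -= pcs * steps
    print(f"after {network}: {total}")

print("opportunities left for a PC series to change:", total)  # 54
```

The final figure of 54 is the "~50 opportunities" referred to above.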

timetochooseagain, Michael Mann says the impetus for the creation of Real Climate was "deniers increasingly making use of the internet as a means for purveying climate change disinformation" and that there was a "virtual flood of climate change disinformation" that "saturated the Internet." One of the main purposes "was to help fight the climate change disinformation campaign. The climate contrarians had huge amounts of industry funding and a seemingly infinite network of advocacy groups and PR professionals to spread their message."

    The hockey stick dispute wasn't the only topic where "disinformation" existed, but it was probably the biggest at the time. Defending it may not have been the reason for starting Real Climate, but it was certainly a major driving factor.

    Also, Mann is paranoid.

  21. timetochooseagain, I think that's probably true about everything Mann has done in the global warming debate.

    Also, I have to point out you're my favorite commenter. It has nothing to do with what you say. I just love your avatar (shameless fanboy).

  22. Seems your whole point of this post is a non sequitur. Just because one part is of critical importance does not mean the whole is not robust. Lying by taking partial quotes out of context proves nothing.

  23. Eric, Mann et al explicitly said their reconstruction was robust to the removal of tree ring data. There are over 200 tree ring series in their data set. This post shows Mann knew if you removed only 21 of them, he couldn't get his reconstruction. If his reconstruction truly were robust to removing 200+ tree ring series, it would be robust to the removal of 21 tree ring series.

    If you think I'm lying, do more than wave your hands. Tell us what standard of robustness the original hockey stick meets that I say it doesn't.

  24. From Climategate emails, here is part of Mann's first take on MM03 (before he started taking it seriously):

    They did not implement the Mann et al approach!!
    From the description of the method provided, it appears that the authors skipped the
    essential step of (1) applying an objective criterion (i.e., Presidendorffer's Rule N as
    used in Mann et al, '98) to determine the optimal size N of the subset of the full (16)
    candidate instrumental principal component series to retain in the calibration of the
    proxy data and (2) optimizing the calibration resolved variance with respect to all
    subsets of the leading PC series of size N. These crucial aspects of the procedure were
    clearly layed out in Mann et al (1998), and is perhaps one of the most essential
    steps---it is only the application of this objective criterion that prevents an obvious
    statistical overfitting problem--the authors *always* appear to use a subset of all 16
    PC series! However, the criterion used by Mann et al (1998) dictated the retention of
    a maximum of 11 PC series, only a few PC series prior to AD 1600, and only one prior to
    AD 1450. So the authors appear to have tried to fit 16 PC series to the reconstruction
    from AD 1400-AD 1450, when an objective criterion would only dictate 1!
    This is a really basic statistical error, and its likely this massive overfitting that
    is responsible for the wild behavior in their reconstruction prior to about AD 1600.
    Can't beleive they made such a basic error?

  25. This is insightful:

    Incidentally, MBH98 go to great depths to perform careful cross-validation
    experiments as a function of increasing sparseness of the candidate
    predictors back in time, to demonstrate statistically significant
    reconstructive skill even for their earlier (1400-1450) reconstruction
    interval. MM describe no cross-validation experiments. We wonder what the
    verification resolved variance is for their reconstruction based on their
    1400-1450 available network, during the independent latter 19th century
    period?

    It was from an early draft of a response to MM03 which Mann circulated to colleagues for comment.

  26. Brandon, an important point about principal components is that the numbering corresponds to the importance of each component. The PC that explains the largest fraction of variance in the data set is ranked #1, the PC that explains the next highest fraction is #2, etc. Mann's decentering method meant that the bristlecones accounted for almost all of the shape of the PC1 and the PC1 accounted for 38% of the total variance in the North American data set. It was on this basis that MBH99 focused so much on the PC1 from the North American network as the dominant component of variance. But doing centered PCs moved the hockey stick shape down to the 4th PC which only explains 8% of the variance in the North American network, and if the bristlecones are removed then none of the PCs have a hockey stick shape. So your point about robustness is correct. The HS result depends on bristlecones, which only account for a small fraction of the variance in the North American network, and therefore the HS result is not a robust feature of the underlying data. As we remarked at the time:

    The issue is robustness. If a low-order PC, representing less than 8% of the explained variance in a single regional proxy network, is going to be allowed to overturn the conclusion that would be indicated by the entire rest of the data set, why even include the rest of the data? In MBH98 it is just there for show, to create the illusion of a hemispheric data base, while the final results are simply the imprint of a sample of bristlecones (dubious as temperature proxies) from western USA.
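The centering effect Ross describes can be demonstrated on a toy example. The data below are invented purely for illustration (49 noise series plus one series with an uptick confined to a short "calibration" era); this is not the NOAMER network or Mann's algorithm, just the general mechanism by which calibration-only ("short") centering promotes a calibration-era pattern to a dominant PC1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series, calib = 580, 50, 80

# Toy network (invented for illustration): 49 pure-noise series plus one
# series whose values step up only during the last `calib` years.
data = rng.normal(size=(n_years, n_series))
data[-calib:, 0] += 3.0

def variance_shares(X, center_rows):
    """PCA after centering each column on its mean over `center_rows` only;
    returns the fraction of total variance explained by each PC."""
    Xc = X - X[center_rows].mean(axis=0)
    s2 = np.linalg.svd(Xc, compute_uv=False) ** 2
    return s2 / s2.sum()

full = variance_shares(data, slice(None))           # full-period centering
short = variance_shares(data, slice(-calib, None))  # calibration-only centering

# Short centering leaves the stepped series with a large apparent variance,
# so its shape is promoted to a dominant PC1.
print(f"PC1 share, full centering:  {full[0]:.0%}")
print(f"PC1 share, short centering: {short[0]:.0%}")
```

Under conventional centering the stepped series is just one modest contributor among fifty; under short centering its deviations from the calibration mean dominate, which is the mechanism behind the hockey stick pattern moving up the PC rankings.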


  27. Ross, it's remarkable how simple this really is. It's difficult to see how things have still not been resolved. Mann's defenders and friends even acknowledge the need to include specific data to get the hockey stick. They just say that's okay because you have to include PC4 because of whatever reason. How does that not give them pause? And why don't they ever clearly acknowledge the reconstruction depended so much on so little data?

    I understood the problem with their work when I was in high school (I began following the debate before Climate Audit existed). I could have done a better job back then too. That doesn't seem right.

  28. Brandon Shollenberger: "Ross, it’s remarkable how simple this really is."

    I write to suggest a way in which you could further share your insight with us lesser lights.

As a non-scientist, non-statistician who over the years has nonetheless had to deal intimately with technical issues of various sorts, I've found that in most (but not all) cases almost all the difficulties lie in decoding the jargon, resolving the ambiguities, and correcting the exposition errors. The ultimate facts are only occasionally very difficult to comprehend once those problems have been resolved (although the reader's latent incorrect assumptions sometimes have to be dispelled, too).

    To me it is therefore quite plausible that you're right in saying that the matter really is quite simple. And it appears that there are indeed those who have been able to follow everything Mr. McIntyre has written. But many of us have had difficulty with Mr. McIntyre's mode of exposition (his blog's last post being an exception). Some questions occasionally remain even after Dr. McKitrick's (rare but always gratefully received) explanations.

    Your efforts are therefore welcome, but one thing that could help the rest of us further is a reference series of block diagrams that depict the steps of processing tree-ring (varve, isotope, etc.) measurements to produce temperature reconstructions. Reference to those diagrams would enable those of us who are not insiders to know what "step," "chronology," "centering," etc. mean in the operational scheme about which you (or others) are commenting. Fig. 1 could be the overall operation, Fig. 2 could be an expansion of one of Fig. 1's blocks, etc. A verbal explanation would accompany the graphical (appropriately disambiguated by the relevant matrix equations), and then you and others could later refer to particular diagram blocks and accompanying text to provide context to whatever the current topic is.

Since I have experience with such exercises, I am under no illusion about how easily or quickly a creditable job could be done, and I consider it unlikely that you could make such an investment of time. In the unlikely event that such a resource becomes available, though, the likely result would be an orders-of-magnitude increase in the number of people who understand the issues well.

Joe Born, right now I'm working on a post which discusses Mann's methodology without actually touching on the details of how PCA works. My hope is to make the issue accessible to people without requiring them to understand the methodology. After that, I want to write a post explaining the methodology.

I figure that will let people choose how much detail they want. Also, it'll make for a better transition from the current posts. PCA was only used to combine the 415 series Mann et al started with into the 112 series of their network. You don't need to understand how those 112 series were created to look at them and see almost none of them have a hockey stick.

    Once that's been established, the few series which do show a hockey stick can be looked at. That's when examining PCA comes into play.

    Or at least, that's the plan.
