Strangest Chart Ever Created?

I think I may have found the strangest chart I have ever seen. You can see it below, taken from the newly published paper on the supposed "consensus on the consensus" on global warming:

[Image: consvexpertise2]

Now, I discussed this paper a bit yesterday, and there are probably a lot of things more important to discuss than this chart. Those other things aren't as funny though. You see, this chart is complete nonsense. Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?

You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.

Seriously. This is what the authors of the paper had to say about the chart:

Figure 1 uses Bayesian credible intervals to visualise the degree of confidence of each consensus estimate (largely a function of the sample size). The coloring refers to the density of the Bayesian posterior, with anything that isn’t gray representing the 99% credible interval around the estimated proportions (using a Jeffreys prior). Expertise for each consensus estimate was assigned qualitatively, using ordinal values from 1 to 5. Only consensus estimates obtained over the last 10 years are included.
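As an aside, the interval construction that quote describes is standard enough: with a Jeffreys prior, Beta(1/2, 1/2), the posterior for a proportion after k endorsements out of n respondents is Beta(k + 1/2, n - k + 1/2). Here is a minimal, standard-library-only sketch of the idea (it inverts the posterior CDF by brute-force numerical integration rather than a proper incomplete-beta routine, and assumes an equal-tailed interval; the paper may plot a highest-density region instead):

```python
import math

def jeffreys_interval(k, n, level=0.99, steps=100_000):
    """Equal-tailed credible interval for a proportion seen k times in n,
    under a Jeffreys prior Beta(1/2, 1/2) -> posterior Beta(k+1/2, n-k+1/2)."""
    a, b = k + 0.5, n - k + 0.5
    # Tabulate the (unnormalised) posterior density on a grid in (0, 1)
    # and integrate numerically to invert the CDF.
    xs = [(i + 0.5) / steps for i in range(steps)]
    ws = [math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)) for x in xs]
    total = sum(ws)
    tail = (1 - level) / 2
    cdf, lo, hi = 0.0, None, None
    for x, w in zip(xs, ws):
        cdf += w / total
        if lo is None and cdf >= tail:
            lo = x
        if hi is None and cdf >= 1 - tail:
            hi = x
            break
    return lo, hi

# e.g. a survey reporting 97 endorsements out of 100 respondents
low, high = jeffreys_interval(97, 100)
```

A larger sample shrinks the interval, which is all "largely a function of the sample size" means here.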

For today, let's ignore the part about the "coloring" and "credible intervals." Let's just focus on the part where it says the expertise values were "assigned qualitatively." What that means is there was no rigorous method to how they assigned these values. They just went with whatever felt right. That's why there is no rubric or guideline published for the expertise rankings.

Kind of weird, right? Well that's not too important. What is important is... there are five categories. Look at the chart. Where are they?

To answer this question, I did a quick tabulation of the table presented in the paper. I found the number of papers in each category is:

1 – 2
2 – 2
3 – 3
4 – 0
5 – 9

I was able to match the coded entries in the chart to those in the table to confirm this. Based on that, I was able to add lines to the chart showing where the categories are. Take a look:

[Image: 4_13_scaling_example]

I have no idea what to call that kind of scale. The 1 and 2 values on the x-axis each have two items, the 3 value has three items, the 4 value doesn't exist, and the 5 value covers more than half the chart. If you divided the chart in half, splitting "Higher" and "Lower" evenly, one category would fall in both halves. How does one even begin to interpret that?
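The distortion is easy to quantify from the tally above. If all 16 points are spaced evenly, which is what the chart's x-axis actually does, category 5 alone takes up slots 8 through 16, over half the axis, while category 4 takes up nothing:

```python
# papers per expertise category, from the tally above
counts = {1: 2, 2: 2, 3: 3, 4: 0, 5: 9}

# the category of each of the 16 points, in left-to-right chart order
categories = [c for c, n in sorted(counts.items()) for _ in range(n)]

# evenly spaced slots, which is what the chart's x-axis actually is
slots = list(range(1, len(categories) + 1))

slots_for_5 = [s for s, c in zip(slots, categories) if c == 5]
share_of_axis = len(slots_for_5) / len(slots)  # fraction of the axis category 5 occupies
```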

Here's what you would get if you just plotted each point by its category:

[Image: 4_13_scaling_proper]

It's not pretty, but it at least presents the results in a meaningful and honest manner. For instance, you can't just rearrange the points in the proper chart however you'd like. You largely can with the one these authors presented. For example, we could present the "consensus estimates" in this order:

[Image: 4_13_scaling_Cook_example1]

Or we could present them in this order:

[Image: 4_13_scaling_Cook_example2]

Both make just as much sense. All the "consensus estimates" for expertise level 1 are together, all of the "consensus estimates" for expertise level 2 are together, all of the "consensus estimates" for expertise level 3 are together, all of the "consensus estimates" for expertise level 4 are non-existent, and all of the "consensus estimates" for expertise level 5 are together.

Everything's the same, except the arbitrary order of the "consensus estimates" within a given category is changed. But hey, there's no right way to do that. The authors of the paper just picked the one they liked the look of best. We can do the same thing if we want.

But it gets worse. The authors of the paper don't just rely on this image to convey the idea that greater expertise leads to more belief in a "consensus" on global warming. They made it explicit by creating this image:

Which is even shown in animated form in this video created by John Cook:

Try imagining that same image on a realistic scale with each category spaced evenly. Or rather, don't try it, because you'll get a killer headache. Any coherent scaling of the x-axis in that chart would completely ruin the visual appeal it holds. The only way the authors could create a nice, neat image showing the "consensus" gets stronger as expertise levels rise is to do the equivalent of counting:

1. Beat. 2. Beat. 3. Beat. Beat. 5. Beat. Beat. Beat. Beat. Beat. Beat. Beat. Beat.

And don't ask me how the authors created the line for the chart Cook uses in his video. It's clearly not based on any sort of mathematics. It's difficult to do any sort of (logarithmic?) regression on unevenly spaced data when you have so little, but it would be practically impossible to come up with such a pretty regression after you compressed it to account for the spacing differences between the categories. Odds are the authors just hand-drew the line.

Sort of like how they just hand-picked whichever "Expertise" values they felt like picking for the various "consensus estimates." And like how they just hand-picked whichever order for the estimates within a category they felt made their results look best.
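To see how much the spacing choice matters, here is a toy calculation with made-up category-mean consensus values (not the paper's numbers). Fitting an ordinary least-squares line to the same y values twice, once against the real ordinal categories (1, 2, 3, 5) and once against evenly spaced slots (1, 2, 3, 4), gives noticeably different trends; compressing the gap at 4 steepens the line:

```python
def ols_slope(x, y):
    # ordinary least-squares slope of y on x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)

y = [81.0, 88.0, 93.0, 96.0]   # hypothetical mean consensus (%) per occupied category
x_ordinal = [1, 2, 3, 5]       # the actual expertise categories (4 is empty)
x_evenly = [1, 2, 3, 4]        # what evenly spaced plotting implies

slope_honest = ols_slope(x_ordinal, y)   # 3.6 points per expertise step
slope_chart = ols_slope(x_evenly, y)     # 5.0, a steeper-looking trend
```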

I'm sure there's plenty more to be said about this chart, but for now, I need to stop. I'm going to go see if I can figure out what this chart would look like if the five categories had been given equal spacing. It's probably pointless, but it's fun to imagine what an honest depiction of these results would show.

65 comments

  1. Brandon,

    Yeah. It's sort of what I've, right or wrong, come to expect though; it would shock me a lot more not to find crap like that.
    What I'm really curious to see is, will all this consensus messaging do the trick I guess John and crew think it will, in somehow swaying people to act? I get this notion from here, although I expect I could find it elsewhere.

    I watch and await the results of the 97% climate consensus messaging experiment with keen curiosity. And possibly just a small smackerel of skepticism. 🙂

  2. How do Cook et al. interpret their sample ("11944 climate abstracts from 1991–2011 matching the topics ‘global climate change’ or ‘global warming’" (from the abstract of Cook et al. 2013)) as showing that the surveyed papers demonstrated an expertise level of 5? (I see nothing in Cook et al 2013 that suggests they selected papers or authors by 'expertise').

    It looks like circular reasoning to me: assume "expert climate scientists" support the consensus, therefore high levels of consensus must signify expertise...

  3. Ruth Dixon, that's a good question. I suspect the answer, if they were being honest, would be, "Because it'd screw up our results otherwise."

    But that's just one of the many arbitrary choices the authors made that enhance their results. For instance, there are three estimates in that table from the paper Stenhouse et al (2014). They are given as 46.2%, 80.5% and 87.9%. However, if you look at the table for Stenhouse's results, you'll see you can only get those numbers by combining two response groups. The surveyed individuals were asked, "Is global warming (GW) happening? If so, what is its cause?" The responses for, "Yes; Mostly human" are 78%, 71% and 38%. The only way you get the results given in this study and chart is to include a second category where people responded, "Yes; Equally human and natural" which got 10%, 10% and 8% of the respondents.

    The authors don't explain why they chose to use "consensus estimates" from the paper which held humans are not the main cause of global warming. Even if that is an acceptable choice, it would surely be acceptable to choose to show "consensus estimates" for the idea humans are the main cause of global warming. The authors ignore this issue however, choosing to simply pick the approach which lets them increase three of their data points (S141, S142, S143) by ~10%.
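    For anyone who wants to check the arithmetic: pairing the figures in the order quoted above, only the combined response groups reproduce the paper's numbers (to rounding):

```python
# Stenhouse et al. (2014) response shares (%), in the order quoted above
mostly_human = [78.0, 71.0, 38.0]          # "Yes; Mostly human"
equally_human_natural = [10.0, 10.0, 8.0]  # "Yes; Equally human and natural"
paper_estimates = [87.9, 80.5, 46.2]       # as listed in the paper's table

combined = [m + e for m, e in zip(mostly_human, equally_human_natural)]
# combined is [88.0, 81.0, 46.0], within rounding of the paper's estimates
close = all(abs(c - p) < 1.0 for c, p in zip(combined, paper_estimates))
```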

    But then, there are tons of problems with this chart and analysis I haven't even begun to cover.

  4. I assume the published article did not undergo peer review?

    As you point out, the data points are evenly spaced along the x-axis even though an entire segment ("4") is absent. There also is no overlap between the data points (representing the various surveys) which suggests that all of the surveys targeted unique populations of respondents with internally identical levels of expertise.

    The vertically colored range ("99% credible interval") around each data point is "largely a function of the sample size" but seems to ignore relative population size or issues with random sample selection requirements.

    For example, Pew151 surveyed 3748 US-based AAAS members out of 19,984 attempted contacts (a subset of the full 120,000 global membership). http://www.pewinternet.org/2015/01/29/appendix-b-about-the-aaas-scientists-survey/ This produced a thin, straight line for the Pew151 99% interval. Yet A10T200 covered 200 out of 200 "top publishers" and its 99% interval is noticeably wider than the AAAS sampling survey.
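    A back-of-envelope sketch of the sample-size-only treatment (a plain normal approximation with an illustrative proportion of 0.97, not the paper's Bayesian computation): the n = 200 band comes out roughly sqrt(3748/200), about 4.3 times, wider than the n = 3748 band, so the picture described above is what you get if you ignore population size entirely. A finite-population correction, which a 200-of-200 census would drive to zero, is exactly what that treatment leaves out:

```python
import math

def interval_width(p, n, N=None, z=2.576):
    """Approximate width of a 99% interval for a sample proportion p with
    n respondents; if a finite population size N is given, apply the
    finite-population correction a sample-size-only treatment ignores."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # -> 0 for a full census (n == N)
    return 2 * z * se

w_pew = interval_width(0.97, 3748)               # large sample -> thin band
w_a10 = interval_width(0.97, 200)                # small sample -> wide band
w_a10_census = interval_width(0.97, 200, N=200)  # full census -> zero sampling width
```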

    This was a sloppy and misleading graphic. Do the authors even read their own publications?

  5. Seeing that chart, I was expecting to see Kate from ClimateSight in the author list as she previously had a post listing the credibility rankings.
    Al Gore was pretty high up.

  6. Oh and how about astrophysicists? Were they surveyed?

    Or are astrophysicists not experts because they are not climatologists?

    Besides, they think the Sun and the stars (cosmic particles) affect climate. It's not just Svensmark and Shaviv. Many astrophysicists think that variations in magnetism and cosmic ray flux can affect water cloud formation, causing variations in Bond albedo that climatologists treat as fixed.

  7. It took 16 authors to create this bit of squirrel scat. Sixteen of the usual suspects, conspiring to make us laugh ourselves to death at this deranged patchwork of drivel.

  8. This must be where the expression "Cooking the books" comes from ... but Cooking the papers would be more accurate. Maybe it's "Crooking the books".

  9. That last figure reminds me of a (semi)variogram. The x-axis is distance, in this case getting further from reality to the right, and the y-axis is variance, i.e. more made-up stuff, with a nugget effect of about 35%. So at reality there is 35% made-up stuff, and the further from reality you go the more made-up stuff there is, so by the time you become an expert it is nearly 100% made-up stuff. One could now do a kriging run with that variogram.

  10. Frederick Colbourne:

    Don't you expect astrophysicists to be represented by Edinburgh's very own ATTP?

  11. I count 5-6 co-authors of previous consensus papers, as co-authors of this consensus paper - truly recursive..!

    quoting from the press release:
    https://www.uq.edu.au/news/article/2016/04/consensus-consensus-97-of-experts-agree-people-are-changing-climate

    "Mr Cook said he hoped this latest finding, which he has termed “consensus on consensus”, will enable scientists to focus on the real work – addressing climate change."

    Funny, Cook seems incapable of original thought, as "The Consensus on the Consensus" was the title of the MSc thesis for the Doran/Zimmerman consensus paper (and Doran is a co-author of this nonsense).
    http://www.lulu.com/shop/m-r-k-zimmerman/the-consensus-on-the-consensus/ebook/product-17391505.html

    It is pure PR/marketing, dare I say propaganda, because they see the 97% of scientists soundbite as a gateway belief to persuade the public.

    Lewandowsky's blog post on this paper:
    http://www.shapingtomorrowsworld.org/lewandowskyConC.html

    "Given that recognition of the expert consensus is a gateway belief that determines the public’s attitudes toward climate policies, and given that informing people of the consensus demonstrably shifts their opinions, it is unsurprising that attempts continue to be made to deny the existence of this pervasive expert consensus."

  12. Self-declared experts who reject those who do not agree with them will build up a cohort with consensual opinion on whatever subject they think they are expert on.
    What makes someone a climate expert? A diploma, field of research, number of publications, peer recognition, political correctness?

    Is there any climatologist receiving generous enough grants to enable research on controversial aspects of the anthropogenic cause of climate change? Would such a person be counted as an expert in such useless but highly politically motivated studies?
    When agronomists and sociologists are declared as belonging to the climate expert horde, the consensus rate skyrockets.

  13. Kent:

    I assume the published article did not undergo peer review?

    It did undergo peer review, which is something of a damning indictment of the peer review process.

    frederick colbourne:

    Oh and how about astrophysicists? Were they surveyed?

    Or are astrophysicists not experts because they are not climatologists?

    Besides, they think the Sun and the stars (cosmic particles) affect climate. It's not just Svensmark and Shaviv. Many astrophysicists think that variations in magnetism and cosmic ray flux can affect water cloud formation, causing variations in Bond albedo that climatologists treat as fixed.

    Interestingly, one of the surveys, I think Carlton, has a question about people's view on the strength of the effect the Sun has on climate change. When asked how much they agreed the Sun was the main cause of global warming, 20% said they were undecided while others said they agreed. If one can take the results of these surveys at face value (I'd argue against doing so), that'd seem to make the consensus that humans are the main cause of global warming less than what is portrayed in the paper.

  14. Interesting that John Cook should refer to the consensus on Plate Tectonics in his little promo video. For the first 50 or so years of the 20th century, a non-expert (Wegener) who proposed continental drift was ostracised by the many so-called experts (geologists), who suggested amongst other things that continental drift could not occur because the strength of the oceanic crust was too great for the continents to plough through! It wasn't until the advent of palaeomagnetic measurements in the 1950s and the discovery of magnetic stripes on the ocean floor that sea-floor spreading became accepted and the ideas around plate tectonics gained acceptance.

  15. "To answer this question, I did a quick tabulation of the table presented in the paper. I found the number of papers in each category is:"

    How did you find which papers are in which category? I don't see what data is counted toward expertise.

  16. I have no idea how they came up with their ratings of which estimates represented what levels of expertise (some papers had multiple estimates), but they list the ratings they used in the Supplementary Material of their paper.

  17. Brandon, I'm not sure what you are trying to say with this:

    And don't ask me how authors created the line for the chart Cook uses in his video. It's clearly not based on any sort of mathematics. It's difficult to do any sort of (logarithmic?) regression on unevenly spaced data, but it would be practically impossible to come up with such a pretty regression after you compressed it to account for the spacing differences between the categories. Odds are the authors just hand-drew line.

    It's actually pretty easy to do regression (curve fitting) on unequally spaced data. The mathematical basis is least-squares optimization.

  18. Carrick, I meant to include a qualifier like "when you have so little" to refer to the fact that while there are 16 data points, there are only four points on the x-axis. You may be able to tell I re-wrote that section a couple times. Sometimes when you're cutting and pasting you wind up thinking something is there when it isn't (or not noticing you have extra words/phrases). I know that's why I dropped the word "the" in the last sentence there. My revision history shows I had referred to it as "a hand-drawn line" but changed the phrasing of the sentence to what it is now. Apparently in the process I left out a word.

    I'm sure there's a much better way to phrase the idea I was trying to convey, but I don't like to edit a post much after it's written, so I'm just going to make two minor changes which hopefully will make it more clear (one just to fix that annoying "typo").

  19. By the way, I know it isn't "difficult" to do a regression on a data set like this in the sense it only requires you run a few lines of code. You just have to be willing to accept your results will be basically meaningless. That's obviously not something scientists should be doing when displaying results.

    I'm not sure how to phrase the point I'm trying to make clearly and concisely though. Any thoughts?

  20. Once you have decided that "true" expertise will result in agreement with the 97% consensus meme, the diagram practically draws itself.

  21. Brandon:

    By the way, I know it isn't "difficult" to do a regression on a data set like this in the sense it only requires you run a few lines of code. You just have to be willing to accept your results will be basically meaningless

    Thanks, I was pretty sure you understood the formal curve fitting is trivial to do.

    What I'd say is it's technically very difficult to assign meaning (that is make uncertainty statements) when you treat ordinal scales as interval scales. Random article pulled out of hat here.

    That's a more basic controversy though (which, I checked, this paper does a decent job of describing).

    If I understand what they've done, they've basically invented additional intervals besides their qualitative rankings and assigned values to each point so they end up with a nice curve.

    So they have a fictitious ranking system that builds their personal bias into their results, and now they are fitting a curve to it.

    That's amazing, now that I think about it. Wow.

  22. It seems kind of amazing to this layman. They claim to show how consensus follows expertise and then just make up their result. If I have it right, they've rearranged the points to make it look nice? Am I being too harsh?

    Also, to boot, yet again Cook is just shamelessly boosting his own work. I really think he must think the huge number of papers he got his team to plow through must count for something, but putting it at the top right as if he did some work to filter for expertise seems such a clear act of faith on his part.

    Cook 2013 did nothing to measure expertise AFAICS. In fact his 97% consensus includes a large portion of papers whose research category was literally stated as being "Not climate-related".

    How does Cook get away with being so airy fairy and vague like this? It really is laughable. It literally makes me laugh 🙂

  23. Carrick, that this is an ordinal rating system would raise a number of issues, but the sheer sparsity of data (16 points spread over only four x values) makes any sort of regression questionable. Ones like logarithmic curves, if that is what they used for that graphic, are even worse as there's no way to establish the curve is an appropriate fit. Any number of other regressions could fit as well, if not better. The only reason it looks nice in this case is they misled viewers as to what their x-axis is.
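    To illustrate with made-up numbers (four category means, values purely hypothetical): with only four distinct x values and two free parameters per curve, a straight line and a logarithmic curve both track the points closely, so nothing in data this sparse can establish that one functional form is right:

```python
import math

def ols_fit(x, y):
    # least-squares fit y = a + b * x; returns (a, b)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

def rmse(x, y, a, b):
    return math.sqrt(sum((v - (a + b * u)) ** 2 for u, v in zip(x, y)) / len(x))

x = [1, 2, 3, 5]                # the four occupied expertise categories
y = [81.0, 88.0, 93.0, 96.0]    # hypothetical mean consensus (%) per category

a1, b1 = ols_fit(x, y)          # linear in x
lx = [math.log(u) for u in x]
a2, b2 = ols_fit(lx, y)         # "logarithmic": linear in ln(x)

fit_linear = rmse(x, y, a1, b1)
fit_log = rmse(lx, y, a2, b2)
# both residual errors are small relative to the ~15-point spread in y
```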

    Which ties into what you say you understand the situation to be. I'm sure that's not how they would describe what they did, but it is perfectly accurate. They only had five possible "expertise" values (one of which didn't get used). Creating 16 different points along the x-axis and claiming they reflect those "expertise" values is...

    There's a reason I said this may be the strangest chart I've seen. I'm sure there are stranger ones out there, but I'm not sure any would be as misleading.

  24. tlitb1, they couldn't rearrange the points completely at will, as their category 5 papers had to go to the right of the category 3 papers and so forth, but otherwise the order is completely arbitrary. And of course, their numerical ratings for the papers were arbitrary, so they could put the estimates in any order (but risk people questioning the numerical ranking).

    That said, there are no papers rated as "not climate related" in Cook's "97% consensus." Those papers were all filtered out. There were, however, a lot of papers that weren't climate related rated as endorsing the consensus. That's because basically any mention of anything remotely related to the climate was enough for them to keep the paper in, and if a paper did so much as acknowledge carbon dioxide is a greenhouse gas, it was rated as endorsing the consensus.

  25. @Brandon Shollenberger

    they couldn't rearrange the points completely at will, as their category 5 papers had to go to the right of the category 3 papers and so forth, but otherwise the order is completely arbitrary. And of course, their numerical ratings for the papers were arbitrary, so they could put the estimates in any order (but risk people questioning the numerical ranking).

    Sure, but since they decided to have a 5 point ordinal system it seems to me the only way they could graphically display their (rather vague) concept of expertise is in a form similar to your first variation above, with each ordinal numbered along the bottom and the gap at 4 clearly shown. Perhaps also they could have averaged the consensus to a single point at each ordinal and drawn lines through them to their heart's content? 😉

    What they have instead done seems utterly cargo cult naive at best, deceptive fakery at worst.

    That said, there are no papers rated as "not climate related" in Cook's "97% consensus." Those papers were all filtered out.

    Maybe I'm wrong or perhaps this is a misunderstanding because of Cook et al's imprecise use of language. It seems to me Cook et al uses the phrase "Not climate-related" in two ways. At one place in Cook et al 2013 they say:

    The ISI search generated 12 465 papers. Eliminating papers that were not peer-reviewed (186), not climate-related (288) or without an abstract (47) reduced the analysis to 11 944 papers written by 29 083 authors and published in 1980 journals.

    I understand from this that they eliminated some "not climate-related" papers *before* they did their assessments.
    However the remaining papers left in the study still included papers that were categorized at level (4) defined as

    (4) Not climate-related - Social science, education, research about people's views on climate

    I make it that about 500+ papers in Category (4) were assessed as endorsing the consensus. So somehow they eliminated 288 "not climate related" papers before their run and then had 500+ "not climate related" papers afterwards!?

    Later, when they talk of the self assessment, they say:

    After excluding papers that were not peer-reviewed, not climate-related or had no abstract, 2142 papers received self-ratings from 1189 authors

    I'm left wondering did they just eliminate Category 4 papers at this stage or make some fresh assessment of their non-climate-ness?

    Also, I note in this 'Synthesis' paper they class Cook et al 2013 as having a sample size of 1381. I assume that must be authors but that doesn't tally with the 1189 they initially said responded. When did they get more self assessments?

  26. Anders, while your explanation is correct, it would perhaps be helpful if you chose to address a more important point, like the serious criticism of the paper this post includes.

    Failing that, it would be helpful if you at least showed a little more courtesy and/or respect to other commenters. It is hardly surprising a person looking at a chart which gives 15 points taken from surveys of people would expect the 16th point to also be taken from views expressed by individuals. After all, the same table which gives these results gives the stated views of the people who responded to Cook et al (2013). Taking a moment to actually write a sentence or two of discussion would seem appropriate. Or, you know, just not giving people orders. Because that's rude.

  27. tlitb1:

    Sure, but since they decided to have a 5 point ordinal system it seems to me the only way they could graphically display their (rather vague) concept of expertise is in a form similar to your first variation above, with each ordinal numbered along the bottom and the gap at 4 clearly shown.

    Yup. What they did is simply wrong. There is no justification for that chart. It is not anything close to a fair or accurate portrayal of their results. And the chart used by John Cook and other authors of the paper, where they actually draw a line as though it is a regression for their results is beyond any level of misleading I can recall ever seeing in a chart before.

    I understand from this that they eliminated some "not climate-related" papers *before* they did their assessments.

    Nope. Those papers were not pre-filtered. The raters were supposed to mark any "not climate related" papers as such and assign them to endorsement level 4. They were then filtered out after the ratings were complete.

    (But again, what they considered "climate related" is as strained a definition as you could imagine. The idea the authors of these papers have the highest level of expertise on climate science is laughable.)

  28. Brandon.
    My comment was curt and to the point on purpose. It addressed an issue that I thought was possible to resolve and which - I think - has now been resolved. I have no interest in trying to address issues which I suspect we will not resolve in this forum. I don't think you're really in a position to criticise my courtesy, or lack thereof.

  29. @...and Then There's Physics

    Cook et al. (2013). Table 4. Add the numbers 1342 and 39 to each other.

    OK, thanks to one of the co-authors of the Synthesis paper I now see what numbers from Cook et al 2013 add up to the 1381 that matches the sample number.

    However, since those numbers come from the column labelled "% of all papers", I'm now a little surprised at the apparent duality of the concept of "sample" being used. Some of the other studies' samples listed are clearly not numbers of papers but numbers of actual people. For example, the Pew studies certainly are.

    If that dual concept of sample is common usage in science then fair enough, seriously, I'm just a layman. However, that only further illustrates Cook et al actually only focused on papers and had nothing to show about how they actually screened individual author "expertise".

  30. Anders, the only reason it would be impossible to resolve the issue of this chart here is that you would choose not to discuss the basis for presenting your results this way.

    Regardless, I offered not being rude to other commenters as an alternative rather than a separate statement because I felt it was likely you would refuse to discuss any substantive point here. Your response:

    I don't think you're really in a position to criticise my courtesy, or lack thereof.

    Is quite poor. Even if I were discourteous and/or lacking in respect like you seem to suggest, that would not justify being rude to other commenters here. I am unquestionably in a position to suggest to my commenters they treat one another with courtesy and/or respect. Whatever your feelings about me may be, they have no bearing on how you should treat other people on my site.

  31. @Brandon Shollenberger

    Nope. Those papers were not pre-filtered. The raters were supposed to mark any "not climate related" papers as such and assign them to endorsement level 4. They were then filtered out after the ratings were complete.

    Perhaps there's still a misunderstanding. I understand that about Endorsement level 4, however I was referring to Category number 4, which comes from:

    Table 1. Definitions of each type of research category.

    Defined as:

    4) Not climate-related Social science, education, research about people's views on climate

    Most of those were indeed rated by the Cook team as Endorsement level 4, but there were around 522 Category 4 abstracts that are part of the 3896 'Endorse AGW' from Table 3 of Cook et al. Which to my mind means, at least in their abstract ratings, that Cook et al literally included papers that were categorised as "Not climate-related"!

  32. Brandon,

    I suggested the alternative of not being rude

    Which might have been a valid point if I had been rude. Since I wasn't, it really isn't. Do you potentially see why short and curt is preferable?

  33. tlitb1, are you perhaps searching the data files for papers rated as Category 4? If so, that's the problem. The category numbers listed in the paper do not match the ones in the files. Category 4 in the data files is either Methods or Paleoclimate (I can't remember offhand if they skip number 1 in the data files).

    The data files don't actually have ratings for the papers rated as not climate related. Despite having repeatedly claimed to have released all data of scientific interest, Cook et al (2013) never released any data for the papers they filtered out. They didn't even identify the papers or abstracts they filtered out. That means unless you find some alternative method to get those abstracts, you can never know what got filtered out.

    I think most people would say not disclosing the data you filtered out in a study like this is wrong. I mean, in theory, Cook et al could have filtered out ~500 papers that reject AGW by just assigning them to one of the categories they filtered out, and nobody would be able to tell.

    (They didn't do that. Still, why they would repeatedly claim to have released all relevant data while not releasing that data is a mystery to me.)

  34. Anders, giving people orders is rude. That's true even if you don't think it is.

  35. @Brandon Shollenberger

    Yep, you're right. I had fired up an old python program and got it to show the number of Category 4, but I forgot about that wrinkle. Yeah, it's using the data with the reduced range as you say. Well remembered 🙂

  36. Glad to hear we got that sorted out. And that I was able to remember these things well enough to respond while in the grocery store. I won't be home for hours, so if I had to look details up, I'd have had to wait some time to respond.

  37. Brandon,

    Anders, giving people orders is rude. That's true even if you don't think it is.

    Ahh, I see, you think I was ordering people to add the numbers together. You could, of course, have chosen to remain ignorant, but I didn't make that clear. My apologies. What I should have said was "If you would like to resolve your issue with the number 1381, please add the numbers 1342 and 39 together. You will find them in Table 4 of Cook et al. (2013)", My sincerest apologies.

    Okay, you must get that I'm being seriously sarcastic, and - hopefully - you get that it is fully deserved. You - if you give this a moment's thought - will also realise why short and curt was my preference. We are now on an extended discussion of why I was correct but didn't present it how you would have liked (as if I should really care). My impression is that it is not possible to actually present this in a manner that would be acceptable to you, which might explain why no other author of the consensus paper has even a slight interest in discussing your issues with the paper with you. You have every right to highlight potential issues with a paper. You DO NOT have the right to expect the authors to discuss these issues with you. I would have thought this was obvious. It clearly is not.

  38. Having now looked at the methodologies of all the papers, Cook et al stands out as the only one to inflate its sample size in Table S1, by using the number of papers instead of the number of respondents.

    In case any of the authors ever feels the need to revise that number, all you have to do is follow this procedure:

    Cook et al. (2013). Table 4. Add the numbers 746 and 28 to each other. 😉
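    For anyone following along, both sums suggested in this thread are trivial to check - a one-line sketch, using the figures as quoted in these comments (I haven't re-checked them against Table 4 itself):

    ```python
    # Figures as quoted in this thread from Table 4 of Cook et al. (2013)
    abstract_count = 1342 + 39   # the disputed 1381 figure
    respondent_count = 746 + 28  # the suggested replacement sample size
    print(abstract_count, respondent_count)  # 1381 774
    ```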

    That said, there are a few other oddities with the numbers offered for a couple of other papers, such as '200' for Anderegg et al. 2010, which doesn't make sense IMO, and a couple of papers wherein the stated sample number can't be found.

    However I realise that supplementary sections of science papers are considered to be the comedy section (I still chuckle at having been lumped together with Richard Betts by Lewandowsky as having espoused conspiracy theories 🙂 ) so the concept of 'sample' in this paper being pretty flaky is no biggy I guess. 😉

  39. Anders, that response is unhinged. I didn't say anything to suggest I expected anyone to discuss anything with me. All I said is it would be helpful if you chose to.

    As for your stated impression, that is a very strange impression to hold. If you had merely said, "You can find the source of the 1381 number in Table 4," I wouldn't have thought anything negative of it. Nor would I have thought anything negative if you had added more detail.

    The only reason I said you were rude is you behaved in a manner that was rude. That's it. You yourself said you were curt, a word that literally means "rudely brief." Denying having been rude while saying you were "rudely brief" is...

  40. Brandon,
    Let's see. Commenting here always goes badly. Hence, I thought I would comment in a way that simply provided the information and contained so little else that it could not be misinterpreted or lead to any kind of contentious exchange. However, I forgot your world class skill at turning what should be a nice simple discussion into something best avoided. I really do think you are not really in a position to criticise the manner in which I commented. I realise that that isn't going to stop you from doing so, but it is amazing that you somehow think you can. Says it all really.

  41. Anders:

    Let's see. Commenting here always goes badly. Hence, I thought I would comment in a way that simply provided the information and contained so little else that it could not be misinterpreted or lead to any kind of contentious exchange. However, I forgot your world class skill at turning what should be a nice simple discussion into something best avoided.

    Above you explicitly said you wrote the comment curtly on purpose. That I interpreted the comment as being curt and said it was rude shouldn't surprise you. You can pretend I am "turning" this discussion from something "nice" into a contentious exchange or anything else like that if you want, but the reality is when you visit a site and are rude to people there, there generally won't be a useful discussion. Being rude tends to prevent useful discussions from being had.

    I really do think you are not really in a position to criticise the manner in which I commented. I realise that that isn't going to stop you from doing so, but it is amazing that you somehow think you can. Says it all really.

    Similar to the above, repeatedly using vague innuendo to smear people and challenge their suggestion you should treat other commenters with respect and/or civility is not going to allow for a fruitful discussion. The innuendo alone is poor form as if you have something to say you should actually say it, but effectively defending having been rude to other commenters based on how you feel about me is both illogical and lame.

    You can try to blame this all on me, but the reality is you came here and intentionally wrote a comment in a manner that would be fairly described as "rudely brief." Rather than provide information in a friendly or cordial manner, you provided it in the form of an order. You then denied having been rude while saying you were intentionally curt... which means "rudely brief." You then wrote an unhinged comment, complete with Caps Lock, saying I have no right to expect things I've never said or even suggested I expect.

    It may be true "[c]ommenting here always goes badly" for you, but that should be the expected outcome when you behave like this. When you visit a site and show not a single ounce of respect or courtesy to anybody, instead being rude and resorting to things like vague innuendo, you should expect discussions won't go well.

  42. Brandon,
    Let's just clarify; my first comment cleared up the confusion about the number 1381 in Cook et al. (2016). This led to a lengthy exchange in which you just complained about me not being quite as nice in that comment as you would have liked, without once - I think - acknowledging the clarification of that issue. Let's also bear in mind that the title of your previous post is "New Consensus Study Proves Its Authors Are Liars". I'm one of those authors. I'll leave others to decide if I should have tried to be more explicitly pleasant in my comment that clarified the confusion about the number 1381. Let's also imagine what could have happened if you had simply acknowledged the clarification of the 1381 number issue and asked, pleasantly, if I could clarify anything else about this paper. Okay, sorry, that's such a highly unrealistic scenario that there is probably no real point in actually considering it.

  43. ATTP, out of curiosity, what did you guys write in the funding application for this study?

    I ask because I just noticed that Naomi Oreskes twat that she wished the paper weren't necessary. A bizarre remark, when you think about it—not the kind of thing one'd ever expect to hear from a researcher, at least not if we assume they're engaged in the work we're paying them to do: the endless search for knowledge.

    Please remind us, if you can: what was the scientific, academic or scholarly purpose of this work?

  44. I'm away from the house so commenting will be light as it is more difficult and leads to a lot of typos (the ones in my comments from yesterday are still annoying me), but I wanted to make sure I said this right away: Brad Keyes, please tell me "twat" was a typo.

  45. Brandon, thank you for making the effort to say that before leaving the house, lest unchaperoned minors read my comment at face value and assume that's the kind of joint you're running here. All I can say in my defence is that it's not the first time I've struggled with the conjugation of the verb 'to tweet.'

  46. Brad Keyes, I actually posted that comment from the bowling alley I'm at 😀

    But yeah, I assumed it was a typo (the phrasing wouldn't make sense if not). I just wanted to make sure I said something because I know when I got an alert for the comment, I glanced at it and thought, "Wait, he called her what?"

  47. Anders, it is rather strange you say you think I didn't even acknowledge your clarification was correct when the first words out of my mouth were, "Anders, while your explanation is correct." It doesn't seem like it should have been difficult for you to remember that or perhaps just to glance at how the discussion began.

    Regardless, you are still making the absurd argument you've advanced this entire time. You are portraying my actions as justifying your behavior toward people other than me. That's not how it works. You want to be rude to me, no surprise. I expect it. I probably won't even comment on it. If you want to be rude to other commenters though, I will speak up. And no matter what you may say about me, it will not justify treating somebody else rudely.

    Yes, I called you a liar. I called your co-authors liars. The paper you all wrote is dishonest, in multiple ways. You may not like that I point this out, but that I point out when you have been dishonest does not make me a bad person. It doesn't make me rude either. Even if it did though, that doesn't justify being rude to anyone but me.

    So treat me like garbage if you want. Just understand, I'm responsible for my actions. Nobody else is. You may find it weird for people to take responsibility for their actions (you weren't "rude," you were "curt"!), but I do.

  48. Brandon Shollenberger
    April 15, 2016 at 9:37 pm
    "Anders, that response is unhinged."

    Um...

    I think there's a reason for that...

  49. The biggest surprise for me with this 'Synthesis' paper is that it seems now that most of the remaining well known consensus authors have somehow found themselves persuaded to be brought under the umbrella of Cook. Cook seems to be now the de facto consensus authority.

    Does anyone else think this weird?

    Personally, I was always of the position - having gone through the experience of reading, digesting, and critiquing Cook 2013 in depth - that while it was a clearly risible, naive parvenu attempt to garner attention through scale, the other consensus papers, from what I knew of them, had at least tried to directly survey authors and had not been so ostentatious in stating prior political beliefs.

    Thanks to this paper I've now downloaded all of the sampled participant papers from Sci-Hub and found they were methodical and interested in actual respondents - except... guess what? 😉

    Well, now the concept of consensus has been ****ed.

    Now that all those vaguely known authors have lined up with Cook and Lewandowsky, we now see no daylight between reality and the pathetic pseudo graph that Brandon has eviscerated above.

    I think this paper has permanently hamstrung the concept of climate consensus. (which I am fine with)

    You think differently?

    Here's a hypothetical scenario.

    What happens in the future when some hypothetical groovy evidence-based writer for the Guardian - let's call him Ben Goldacre - says "Hey, deniers... look at Doran and Zimmerman"? All we have to do is just turn around and say "Hey Ben, have you seen the made up graph that Brandon shows here that Doran signed up to? You know, a bit like ****ing homeopathy bull****?" etc etc

    It really is amazing they sold their soul so cheap.

  50. tlitb1, please refrain from cursing. I've edited your comment to address this, but that is not something I want to do on a regular basis.

    Also, while there are problems with the other consensus papers, they tend to be the normal sort of problems you might expect in scientific papers. The issue has been more in how the results were presented to the public. Cook et al is different. It is, by far, the worst paper of the bunch. Via their intentional misrepresentation of what their "consensus" means, the authors of that paper moved (far) into the realm of dishonesty. As far as I know, that isn't true for any of the other papers.

    But now that all these people are signing onto John Cook's work, things are definitely different. I suspect this is really just a case of resume padding/author count inflation/whatever, with most of the new authors not actually contributing much of anything (other than their data). I bet some of them didn't even look at how this chart was made.

    But once you sign your name to a paper, you bear responsibility for that paper - especially in regard to the paper's central graphic. If the authors are unaware of how this chart was made, then they failed at doing their job in a fundamental way. If they do know how it was made and believe the chart is an accurate and fair representation of the paper's results... I don't know what to say.

  51. If I said your conversational progress with Anders looks like a failed mutual masturbatory negotiation would that be OK?

  52. Perhaps I should be more clear. As a rule, cursing is not allowed on this site. There may be rare occasions where I make an exception for situations involving extreme provocation, but I wouldn't expect it to happen.

    When it appears a person is simply unaware of this rule or has forgotten it, I will likely censor the cursing and caution them. If a person willfully breaks the rule though, I will just delete their comments. This site has very few moderation rules, but one of those is users are expected not to post obscene or pornographic content.

    Aside from that and a couple other simple rules that shouldn't be relevant here, people can post what they want. At least as far as moderation goes. There is always the possibility what they say will get them criticized.

  53. ATTP,

    Now that you've had a weekend to think about it please remind us, Ken: what was the scientific, academic or scholarly purpose of this study, to which you put your name as a coauthor?

    I know what the political/demagogic purpose was; John Cook has been touring the world's press admitting it to anyone who will listen, without a hint of embarrassment.

    What I'm asking about is its scholarly purpose.

    I'm assuming it had one, because otherwise you all owe your employers a refund.

  54. They are obsessed by you, Brandon, on ATTP, but none of them dares debate here in the open. Regardless, you are wrong and mistaken and wrong, but no one can say why, not even the co-author of this pile of tripe. It is really strange.

  55. It must be that I'm so obviously wrong, it'd be a waste of time to explain what my error is. Right?

  56. Brandon,
    your error evidently falls into the same so-obvious-we-can't-remember-what-it-is category as the scholarly purpose of the paper itself.
