Tag Archives: John Cook

Consensus Chart Craziness - Part 2

The last post in this series highlighted a couple of peculiarities in the chart of a new paper, Cook et al (2016). This chart ostensibly shows that as climate science expertise increases, one becomes more likely to endorse the "consensus" position on global warming. There are a number of problems with this chart, the most fundamental of which I highlighted by adding lines to it:

4_13_scaling_example

These lines show where each of the five categories used to represent "expertise" falls. As you can see, multiple points with the same x-value are plotted side by side, causing the categories to be unevenly spaced. As a result, Category 5 covers more than half the chart while Category 4 doesn't even appear. This is highly unusual. Had the data been displayed in a normal manner, the result would have been something like:

4_13_scaling_proper

Which does not give the strong visual effect Cook et al (2016) gave in their chart. Additionally, there appear to be a number of problems with the data used in creating this figure. As I discussed in the last post in this series, Cook et al (2016) give two "consensus estimates" from one paper, Carlton et al (2015), like so:

4_20_Carlton_2

And say:

Carlton et al (2015) adapted questions from Doran and Zimmerman (2009) to survey 698 biophysical scientists across various disciplines, finding that 91.9% of them agreed that (1) mean global temperatures have generally risen compared with pre-1800s levels and that (2) human activity is a significant contributing factor in changing mean global temperatures. Among the 306 who indicated that 'the majority of my research concerns climate change or the impacts of climate change', there was 96.7% consensus on the existence of AGW.

Even though Carlton et al (2015) clearly state only 5.50% of their respondents said, "The majority of my research concerns climate change or the impacts of climate change." Basic arithmetic shows you would need over 5,000 respondents, not fewer than 700, for 5.50% of them to be 306. That makes it clear the 306 value used by Cook et al (2016) is wrong. However, there are more problems, and I intend to discuss some in this post.
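To spell out that arithmetic, here is a quick sanity check (a minimal sketch; the 698 and 5.50% figures come from Carlton et al (2015) as quoted above, and the 306 comes from Cook et al (2016)):

```python
# Check whether 306 people can plausibly be 5.50% of 698 respondents.
respondents = 698
share_climate_focused = 0.055  # 5.50%, per Carlton et al (2015)

subgroup = respondents * share_climate_focused
print(f"5.50% of 698 is about {subgroup:.0f} people")  # ~38, nowhere near 306

needed = 306 / share_climate_focused
print(f"306 people would be 5.50% of about {needed:.0f} respondents")  # ~5,564
```

Thirty-eight people, not 306. The two numbers cannot both be right.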
Continue reading

Consensus Chart Craziness - Part 1

There's a new paper out claiming to find a "consensus on [the] consensus" on global warming. It concludes:

We have shown that the scientific consensus on AGW is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology. This is supported by multiple independent studies despite variations in the study timing, definition of consensus, or differences in methodology including surveys of scientists, analyses of literature or of citation networks.

Its one and only figure is used to demonstrate the claim:

Figure 1 demonstrates that consensus estimates are highly sensitive to the expertise of the sampled group. An accurate estimate of scientific consensus reflects the level of agreement among experts in climate science; that is, scientists publishing peer-reviewed research on climate change. As shown in table 1, low estimates of consensus arise from samples that include non-experts such as scientists (or non-scientists) who are not actively publishing climate research, while samples of experts are consistent in showing overwhelming consensus.

If you've followed the discussion about this paper so far, you may have seen my recent post discussing this chart:

consvexpertise2

In which I explained:

Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?

You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.

Seriously. This is what the authors of the paper had to say about the chart:

Figure 1 uses Bayesian credible intervals to visualise the degree of confidence of each consensus estimate (largely a function of the sample size). The coloring refers to the density of the Bayesian posterior, with anything that isn’t gray representing the 99% credible interval around the estimated proportions (using a Jeffreys prior). Expertise for each consensus estimate was assigned qualitatively, using ordinal values from 1 to 5. Only consensus estimates obtained over the last 10 years are included.
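As an aside, here is roughly what computing a 99% credible interval with a Jeffreys prior involves. This is a minimal sketch with made-up numbers, since the paper does not publish its calculation:

```python
# Sketch of a 99% Bayesian credible interval for a consensus
# proportion using a Jeffreys prior, Beta(1/2, 1/2). The sample
# figures below are illustrative, not taken from the paper.
from scipy.stats import beta

n, k = 698, 642                           # sample size, number endorsing
posterior = beta(k + 0.5, n - k + 0.5)    # Jeffreys posterior
lo, hi = posterior.ppf([0.005, 0.995])    # equal-tailed 99% interval
print(f"Estimate: {k/n:.1%}, 99% credible interval: ({lo:.1%}, {hi:.1%})")
```

The width of the interval shrinks as the sample size grows, which is what the authors mean by the intervals being "largely a function of the sample size."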

For today, let's ignore the part about the "coloring" and "credible intervals." Let's just focus on the part where it says the expertise values were "assigned qualitatively." What that means is there was no rigorous method to how they assigned these values. They just went with whatever felt right. That's why there is no rubric or guideline published for the expertise rankings.

Kind of weird, right? Well that's not too important. What is important is... there are five categories. Look at the chart. Where are they?

I then showed what the chart would look like if you labeled the various categories in it:

4_13_scaling_example

One category (5) covers more than half the chart's range while another category (4) doesn't even appear on the chart. Any claim "consensus estimates are highly sensitive to the expertise of the sampled group" based on this chart is heavily biased by the authors' decision to present their data in a misleading way. Had they simply shown their data by category, they would have gotten a chart like this:

4_13_scaling_proper

Which doesn't make for anywhere near as compelling an image, and it wouldn't allow the authors to create graphics like the one they have used to promote their conclusions.

By choosing not to label the values on their x-axis, and by choosing to place every point next to another rather than grouping the data by category, the authors of this paper were able to create the visual impression of a relationship between expertise level and the size of the consensus estimate.
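To see how much this plotting choice matters, here is a small sketch of the two approaches using entirely made-up data (the values below are hypothetical, not the paper's):

```python
# Compare plotting points side by side (no real x-scale) against
# grouping them by their ordinal expertise category. Data invented
# for illustration only.
import matplotlib.pyplot as plt
import numpy as np

expertise = np.array([1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5])
consensus = np.array([78, 82, 85, 88, 90, 91, 94, 95, 96, 97, 97, 98])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Left: each point gets its own x-position, so repeated categories
# sprawl across the axis and the spacing carries no meaning.
ax1.scatter(np.arange(len(consensus)), consensus)
ax1.set_xticks([])
ax1.set_xlabel("Expertise")
ax1.set_title("Side by side (no x-scale)")

# Right: points placed at their actual ordinal category values.
ax2.scatter(expertise, consensus)
ax2.set_xticks([1, 2, 3, 4, 5])
ax2.set_xlabel("Expertise")
ax2.set_title("Grouped by category")

ax1.set_ylabel("Consensus estimate (%)")
plt.tight_layout()
plt.show()
```

In the left panel the repeated categories stretch out to fill the axis and produce a smooth upward sweep; in the right panel the gaps and clustering are plainly visible.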

That alone should be damning, but it turns out there are many other problems with this chart as well. To highlight them, I am going to run a little mini-series of posts under the title of this one. The series will demonstrate how data used in this chart has been cherry-picked, adjusted, and in one case seemingly pulled out of thin air.

Because this post is already running long, I'll close it out with one of the more peculiar aspects of this chart. It's a mystery I cannot unravel. Continue reading

Remarkable Remarks by Cook et al

There was apparently an AMA on Reddit yesterday with the authors of the Cook et al (2016) paper I've recently discussed. I missed out on it, which is a shame even though I expect I would have just been censored. Oh well. At least we get to see what the authors of the paper have to say about their work.

That's what this post will be for. I'm going to just highlight comments by these authors I see which seem remarkable and give a brief description of what is noteworthy about them. Feel free to do the same in the comments section.
Continue reading

Strangest Chart Ever Created?

I think I may have found the strangest chart I have ever seen. You can see it below, taken from the newly published paper on the supposed "consensus on the consensus" on global warming:

consvexpertise2

Now, I discussed this paper a bit yesterday, and there are probably a lot of things more important to discuss than this chart. Those other things aren't as funny though. You see, this chart is complete nonsense. Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?

You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.
Continue reading

New Consensus Study Proves Its Authors Are Liars

This post isn't going to be exhaustive, and I will likely have much more to say within the next few days, but I wanted to get this out right away. As you probably know, last month I found a CONFIDENTIAL draft version of a new paper by:

John Cook, Naomi Oreskes, Peter T. Doran, William R. L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, Ken Rice

This is just about everybody publishing on the consensus messaging approach to the global warming debate, where they repeatedly say there is a consensus and expect that to convince people we need to take action on global warming. There is one exception, an exception which is quite notable now that the final version of the paper has been published. That exception is a man named James Powell. Powell claims to have shown there is a 99.9% consensus on global warming, yet he is not a co-author of this paper, and his result doesn't show up anywhere in the study. The complete lack of mention of his results is rather notable since the draft version talked about them quite a bit. In fact, Powell's work is featured heavily in the abstract of the draft version:

This 97% result has been criticised for being both too high (Tol 2015) and too low (Powell 2015). In some cases, Tol assumes that when the cause of global warming is not explicitly stated ("no position"), this represents non-endorsement, while Powell assumes the opposite. Neither assumption is robust: as argued by Powell, Tol’s approach would reject the consensus on well-established theories such as plate tectonics. On the other hand, Cook et al surveyed authors of the studies considered and some full papers rejected the consensus even when their abstracts were classified as "no position", contradicting Powell's assumption.

In fact, Powell's name comes up 43 times in the draft version, yet it only shows up twice in the final version. One might think the authors simply changed the focus of their paper, yet they explicitly state this in their new abstract:

We examine the available studies and conclude that the finding of 97% consensus in published climate research is robust and consistent with other surveys of climate scientists and peer-reviewed studies.

The authors must know this statement is incredibly deceptive. They devoted a great deal of time and effort to discussing the study by Powell, yet after completely excising such discussions from the published version of this paper, they still claim to "examine the available studies." There is no doubt the authors are aware of the Powell study. There is no doubt the Powell study is available. As such, there is no excuse for them to ignore it while simultaneously claiming to "examine the available studies." They must know that statement is a lie.

Don't believe it is a lie? Rather than quote all the text showing the authors were aware of Powell's study when they wrote the draft version, we can see more direct proof. This is from the draft version's table listing the studies they examined and the results those studies had:

4_12_draft_table

Powell's study with its 99.9% result is right there, plain as day. This is from the same table in the final version:

4_12_final_table

The draft version's table gives results for studies by Stenhouse et al, Verheggen et al, Pew Research Center, Powell and Carlton et al. The final version gives results for studies by Stenhouse et al, Verheggen et al, Pew Research Center and Carlton et al. Powell (2015) has simply been disappeared.

But it gets worse. The authors of this paper haven't just excluded the results of Powell (2015) from all their analysis and excised the significant amount of discussion of those results that was present in their draft version. They've done that while still citing the paper to support their criticisms of a paper by one Richard Tol (which, to be fair, was terrible):

Powell (2015) shows that applying Tol’s method to the established paradigm of plate tectonics would lead Tol to reject the scientific consensus in that field because nearly all current papers would be classified as taking ‘no position’.

So while claiming to examine the available studies, the authors intentionally exclude results from one study they knew is available then turn around and cite that study to support their views on a different matter. It's obscene.

I can't think of any word to describe this other than "lying." The authors apparently feel free to claim to "examine the available studies" while simply ignoring some studies. Not only is that wrong and dishonest, it raises the important question: What other results have they ignored? If the authors are willing to lie about the existence of one study, who knows what other studies they might have pretended don't exist?

Or for that matter, who knows what else they might have lied about? How can anyone know they accurately described the studies they did include? We can't. In fact, we have strong reason to believe we shouldn't trust their descriptions of studies. The central study for this paper, created by John Cook and many other co-authors, described its methodology saying:

Each abstract was categorized by two independent, anonymized raters.

One of the authors of that study, Sarah Green, is also a co-author of this paper. She said this when carrying out the study:

But, this is clearly not an independent poll, nor really a statistical exercise. We are just assisting in the effort to apply defined criteria to the abstracts with the goal of classifying them as objectively as possible. Disagreements arise because neither the criteria nor the abstracts can be 100% precise. We have already gone down the path of trying to reach a consensus through the discussions of particular cases. From the start we would never be able to claim that ratings were done by independent, unbiased, or random people anyhow.

Despite saying she and her colleagues could never describe the ratings as having been "done by independent" raters, she co-authored a paper which explicitly stated the ratings were done by "two independent, anonymized raters." No explanation was ever given for that contradiction, but in the Supplementary Material of the new paper, the authors say:

Tol (2015) questions what procedures were adopted to prevent communication between raters. Although collusion was technically possible, it was - in practice - virtually impossible. The rating procedure was designed so that each rater was assigned 5 abstracts selected at random from a set of more than 12,000. Consequently, the probability two raters being assigned the same abstract at the same time was infinitesimal making collusion practically impossible.
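For what it's worth, the per-assignment overlap probability the authors call "infinitesimal" is easy to estimate from the figures they give (a sketch based on the quoted numbers, not the authors' actual computation):

```python
# Probability two raters, each assigned a batch of 5 abstracts drawn
# at random from a pool of ~12,000, share at least one abstract.
# The 5 and 12,000 come from the quoted passage; this illustrates
# the authors' claim, it is not their code.
from math import comb

pool, batch = 12_000, 5
p_overlap = 1 - comb(pool - batch, batch) / comb(pool, batch)
print(f"P(shared abstract between two batches) = {p_overlap:.3%}")  # ~0.208%
```

Small as that number is, it only speaks to overlap on individual abstracts, which, as I explain below, was never the form of collusion at issue.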

When I discovered the draft version of this paper, I summarized why this is not a fair or reasonable description. We can see the number of raters responsible for most of the work in the study was small from an image John Cook created and published for everyone in his forum to see.

So I said:

There were a total of 24 raters, but given how few people were responsible for the majority of the ratings, collusion would have been perfectly possible. It would have been easy for people to talk to one another about how to game the system. This new paper tries to play that off by saying people wouldn't have been able to collude on individual papers, but nobody said that's what happened.
Whether or not people collude on how to rate specific papers doesn't determine whether or not they colluded in general. It would have been easy for raters to talk to one another and say things like, "Hey, whenever you and I do a rating, let's always add +1 to our score." If they did, their ratings wouldn't be independent of one another even though they never discussed a specific paper.
The authors made no effort to prevent such a thing from being possible for their project. They made no effort to monitor for such happening. The raters all had each others' e-mail addresses meaning they could easily contact one another directly and privately, but the authors want everyone to just assume that wasn't an issue. All while offering false excuses of how any communication between raters was merely to seek clarifications/amendments to the rating process.
The reality is the raters themselves recognized the problems created by talking to one another about how to rate specific items, with one rater saying:

Giving the objective to being as close to double blind in methodology as possible, isn’t in inappropriate to discuss papers on the forum until all the ratings are complete?

John Cook, the head of this project, was well aware of this issue. He ran the forum where people talked to one another about how to rate individual papers. He ran the forum where people acknowledged they had cheated the study's methodology by looking up extra information about the papers they were rating. Rather than speak out against any of it, he actively participated in those forums and consequently encouraged this sort of behavior.

That last part was a reference to how the authors of the Cook et al (2013) study claimed their ratings were based solely on the title and abstracts of papers even though they had discussed in their forums how they looked up the author names and full text of papers they were rating. This new paper now admits they did this, meaning they lied when they claimed to have only used titles and abstracts for their ratings:

During the rating process of C13, raters were presented only with the paper title and abstract to base their rating on. Tol (2015) queries what steps were taken to prevent raters from gathering additional information. While there was no practical way of preventing such an outcome, raters conducted further investigation by perusing the full paper on only a few occasions, usually to clarify ambiguous abstract language.

So first the authors lied and claimed they had only used the abstracts and titles of papers for their ratings. Then, after enough people had pointed out that the raters discussed looking up author names and full texts of papers in the forum John Cook (the head of the project) ran, they admit they cheated and looked beyond the titles and abstracts, but claim they only did so a few times.

There's bound to be a ton more to say about this paper, but for now, I'm signing off to await the media flurry I expect will come and manage to completely ignore how the authors of the paper blatantly lied.

Responding to Skeptical Science's Libel

I didn't think this would become a thing again, but the Skeptical Science group is apparently accusing me of hacking them again. There's a long history of these accusations, with the head of the group John Cook even getting his university, the University of Queensland, to send me a threatening letter including the accusation. His university even said they would report me to the police over it. (They never did.)

But I'm not here to rehash all that. Today, I just want to look at the latest accusations being made. Or at least, one of them. A Skeptical Science team member recently wrote this about the material I found:

Intent: John Cook certainly did not intend the files to be public. In fact, I am informed that:
– There were no public links to the files.
– URLs had to be obtained from the database of redirect URLs.
– The database was password-protected: So it required active effort to obtain a username/password match and search the database to find target URLs before the files could be accessed. Once you have the URL, you can give it to anyone. But that’s true of a metal door-lock key as well: Once you have a key, you can make copies and give them to everyone. The block to the public was that decent people don’t try to hack username/password pairs on other people’s systems.

If these claims were true, my actions would have been criminal. They are not true. They are obviously false. Despite this, a few days later King went on to say:

2) Checking out the methods described for getting the two files, fairly thoroughly. My SkS folks agreed that it worked that way the 1st time (for the Photoshops), but they claim it wouldn’t have worked that way the 2nd time, because they put a password system on it. Lucia did a demo, but then she reported a problem with it, so I’m not sure of the situation. It may be that what each side is saying is compatible with the other, but that’s not the way it looks. Maybe something is being de-emphasized in the story: either the effort required for some task, or the possibility of some work-around. I need some 1st-hand information.

No examination of the issue could possibly have supported the claims being made. My explanation of how I found the material in question is easy to check, and it is absolutely indisputable there was no password protection involved. I don't know which "SkS folks" are spreading false information here, but to demonstrate beyond any doubt that what they claim is false, I've made a video demonstrating exactly how I found what I found. I encourage anyone who has any doubt the Skeptical Science group is full of it when they claim I hacked them to watch the demonstration.

Oh, and for the record, falsely accusing people of committing felonies is libel. It's kind of a big deal.

A New Secret Skeptical Science Paper and a New eBook

Hey guys. Today's post is an interesting one. As you guys may know, I've been accused of hacking Skeptical Science on occasion, and while it isn't true, I have had a history of finding things they post in publicly accessible locations which they would like nobody to see.

I've done that again. This time, I found a "CONFIDENTIAL" manuscript (archived here) the Skeptical Science group has apparently submitted for publication with a scientific journal. I don't know if the manuscript has been accepted, rejected or is still under review, but the fact they posted it in a publicly accessible location when it was supposed to be kept confidential is rather amusing.

I also found a copy of John Cook's PhD thesis (archived here), which I find incredibly lackluster. If it can earn him a PhD, then I don't think PhDs mean much of anything. I imagine he'll update it and improve it before actually submitting it, but I can't imagine any way in which it could be made not to... well, suck. And that's not just because he's wrong in a lot of what he says in it. Even if I agreed with his conclusions, I'd still say it was unimpressive.

In any event, this latest discovery has given me the motivation and material to finish an eBook I've been wanting to publish for a while now. You can find it here:

It's a bit more personal than the last two eBooks I wrote, as I was directly involved in much of what it discusses, but I'd like to think I found a good balance to keep it from just being a mini-biography. I hope you'll agree. If you don't want to risk 99 cents to find out, you can download a free PDF copy here.

Now, like my last two eBooks, this one is ~10,000 words long, so it shouldn't take too long to get through. Unlike the last two, it doesn't really cover any technical subjects, so it should be easier to follow (though I'd like to think the others were easy enough to follow). It also doesn't cover everything, as there are tons of topics and points I'd have liked to discuss but only so much room. I'd like to think I hit the most important points though.

Of course, with me having only recently discovered the latest paper by the Skeptical Science group, this eBook doesn't cover all the issues it might have. Because of that, I highly recommend people check out the paper themselves. The author list alone should prove it will be interesting:

John Cook, Naomi Oreskes, Peter T. Doran, William R. L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, Ken Rice

The normal Cook et al group is there, but so are people like Ken Rice, also known as the blogger Anders, and Stephan Lewandowsky, famous for finding global warming skeptics are conspiracy nuts by taking basically no data and just assuming the lack of data proved his preconceived beliefs.

But what makes this paper truly remarkable is what these people say. For instance, while the Skeptical Science group had previously portrayed their consensus findings as being based on people having only read the title and abstracts for various papers, this paper now admits:

During the rating process of C13, raters were presented only with the paper title and abstract to base their rating on. Tol (2015) queries what steps were taken to prevent raters from gathering additional information. While there was no practical way of preventing such an outcome, raters conducted further investigation by perusing the full paper on only a few occasions, usually to clarify ambiguous abstract language.

The raters cheated. They looked at information they weren't supposed to look at when doing their ratings. They openly discussed having done so in their forums, with neither John Cook nor any other author of the paper speaking up to say it was wrong. And then, for years, they pretended this never happened.

But now, they insist everything is okay because the raters only cheated a few times. They offer no evidence for this claim, and it would be completely impossible to know the claim is true. Even so, they want to publish this with the expectation that people should just trust them.

Similarly, they both acknowledge and distort another issue:

Raters had access to a private discussion forum which was used to design the study, distribute rating guidelines and organise analysis and writing of the paper. As stated in C13: "some subjectivity is inherent in the abstract rating process. While criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves". These "specific situations" were raised in the forum.

The raters didn't just talk to one another about clarifications and amendments. That's an obvious misrepresentation, as anyone who actually read what they said to one another in their forums would know. On a number of occasions, raters simply asked one another how they would rate papers, without saying a word about wanting any standards or guidelines clarified.

But even with that distortion in place, this admission is huge. The original Skeptical Science consensus paper stressed that the raters were independent of one another. That's a huge stretch given they were all members of the same activist group, were mostly friends with one another and were in direct communication with one another. It's an impossible stretch, however, once you admit they were talking to one another about how to perform the ratings they were supposedly doing independently.

What's perhaps most interesting, however, is Table 1 of this new paper. It lists a number of papers supposedly finding a consensus on global warming, and in it there is a column for "Definition of consensus." This would have been a perfect opportunity to highlight and contrast the various definitions of the global warming consensus, explicitly stating what Cook et al had found. The paper doesn't take that opportunity. Instead of giving any explicit definition, the authors just copy the rating categories:

1. Explicitly states that humans are the primary cause of recent global warming
2. Explicitly states humans are causing global warming
3. Implies humans are causing global warming.
4a. Does not address or mention the cause of global warming
4b. Expresses position that human’s role on recent global warming is uncertain/undefined
5. Implies humans have had a minimal impact on global warming without saying so explicitly
6. Explicitly minimizes or rejects that humans are causing global warming
7. Explicitly states that humans are causing less than half of global warming

The authors intentionally avoid explaining what consensus definition you get when you combine these categories. This is interesting mostly because if one looks at the rest of Table 1, no other paper gets a 97% consensus without using a weak definition or arbitrarily limiting which portions of its results to use. Instead, you get values as low as 40% or as high as 93%. In many ways, this paper shows there is no meaningful 97% consensus.

Of course, its authors would never say so. They'll try to spin everything they find to support their consensus message, even if that means trying to excuse what were basically lies about the methodology of papers. Cook's PhD thesis is perhaps worse, repeating a number of falsehoods and even re-using at least one quote he knows full well has been fabricated.

But to be honest, the thing I find most fascinating is I found these documents in the exact same way I found the Skeptical Science consensus paper's data. The Skeptical Science group called me a criminal who had hacked into their server to get that data. If what I did then was hacking, why would they still allow anyone to do it and find new material? Why are they posting "CONFIDENTIAL" material in publicly accessible locations then handing out the URL to that material?

It's mind-boggling. I'm sure some people will claim I've "hacked" Skeptical Science again, but come on! It's been over a year since I described exactly how I found the secret material last time. Why can I still find more secret material in the exact same way?! That the consensus message is being crafted by people this incompetent is dumbfounding.

Anyway, feel free to give my new eBook a look and tell me what you think. It's fine even if you want to tell me it is complete garbage. I think most writers tell themselves the same thing plenty of times about most things they write.

Skeptical Science Online Course is a Stunning... 6% Success

So you guys might remember the Skeptical Science group put on an online course, with the help of the University of Queensland, titled "Making Sense of Climate Science Denial." If nothing else, you might remember the laughs we had over a nutjob there going off on wild rants because I disagreed with her, to the point she labeled the IPCC a right wing fringe group.

Or perhaps you'll remember how the course instructors didn't take issue with this, nor the user's repeated insults directed at me where she labeled me a sociopath and various other... offensive things. If you remember that, you'll likely remember how the course instructors instead threatened to ban me from their course because I dared to show people what was going on in their forum.

But whether you remember the course or not, I think we can all be interested in something I discovered today. The Skeptical Science group routinely promoted the number of people who enrolled in the course as a sign of how important the course was going to be. They made tons of remarks about how 10,000+ people had already signed up, and things like that. They never really talked much about how many people actually completed the course though.

As it happens, I might know why. I recently found a report discussing things like the student activity for the course, and it says only 962 of the 16,861 enrolled students actually completed the course. That is, 94% of their students abandoned the course before it was finished.
Continue reading

John Cook is a Low Down Dirty Liar

I've previously established John Cook, proprietor of the Skeptical Science website and lead author of multiple scientific papers promoting the consensus on global warming, is a liar. I've demonstrated dishonesty on his part multiple times, including when I showed his scientific publications are built entirely upon an intentional campaign of deception.

Today I'd like to revisit one of the more baffling examples, Cook's tendency to fabricate quotes. Continue reading