Tag Archives: Skeptical Science

This Might be Pointless, but...

As you guys may remember, last month I asked "What Should a Person Do?" when confronted with a situation where the authors of a paper published something they knew to be false. I still don't have a good answer, but today I took one step toward potentially addressing the issue by contacting the journal that published the paper in that particular example. I thought I'd post it here as well so people could see it. Maybe I should have done that first so I could get feedback?
Continue reading

Is It Just Me?

I've mostly recovered from a recent illness, and I've been working on a new post in line with what I've been discussing recently. I'm still a bit tired, though. As such, rather than worry about technical discussions, I wanted to ask a question. John Cook is the proprietor of the Skeptical Science website. Here is a picture of him:

For the last few years, this picture has bugged me. Every time I saw a picture of Cook, I felt like I had seen him somewhere before. Today, I finally realized why he seemed so familiar. It might be silly/crazy, but... doesn't he kind of look like Donny Osmond?

Continue reading

Did John Cook Lie in His Doctoral Thesis?

I'm growing a bit tired of repeating the same point over and over in regard to the recent paper by John Cook and Stephan Lewandowsky (namely, that they repeatedly call things contradictions even though they are not), so I decided it would be a good time to take a break and discuss something else that has been bugging me. You guys may remember this tweet:

Which wasn't actually written by Barack Obama or by anyone representing him. The group using his name for the Twitter account is Organizing for Action, a non-profit advocacy group that explicitly denies any affiliation with any government. When asked, "Is OFA affiliated in any way with the federal or any other government, or funded with taxpayer dollars?" the group answers, "No."

Combine that with the fact that the account's profile says:

This account is run by Organizing for Action staff. Tweets from the President are signed -bo.

and it should be clear President Obama had nothing to do with this tweet. Despite that, John Cook wrote this in his doctoral thesis:

Consequently, our study received a significant amount of media attention, including a number of tweets by President Obama (Cook, Bedford, & Mandia, 2014).

For today's post, I would like to discuss whether or not this was a lie.
Continue reading

The (Socialist) Nazis Did It!

In our last post, we looked at how a recent paper by the proprietor of the Skeptical Science website, a man named John Cook (and two co-authors), claimed global warming skeptics hold "incoherent" beliefs, a claim it supported by grossly misrepresenting and distorting a variety of quotes.

Specifically, Table 2 of the paper provided quotations from several different skeptics which supposedly showed those skeptics contradicting themselves. This was a key issue for the paper, which was titled "The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism" after the well-known quote from Alice in Wonderland:

“Why, sometimes I’ve believed as many as six impossible things before breakfast.”

This is the key concept for the paper. Its entire argument rests on the idea that skeptics hold "incoherent" beliefs because they are willing and able to hold contradictory beliefs at the same time. The evidence offered to support this claim is bogus, though. We can tell just by looking at Nazis.
Continue reading

Consensus Chart Craziness - Part 4

We've been discussing a strange chart from a recent paper published by John Cook of Skeptical Science and many others. The chart ostensibly shows that the consensus on global warming increases with a person's expertise. To "prove" this claim, Cook et al assigned "expertise" levels to a variety of "consensus estimates" they took from various papers. You can see the results in the chart below, to which I've added lines to show each category:

4_13_scaling_example

As you can see, the "consensus estimates" are all plotted one next to the other, without concern for how the categories are spaced. The result is that Category 4 doesn't appear in the chart at all, while Category 5 covers more than half of it. This creates a vastly distorted impression of the results.

But while that is a damning problem in and of itself, there is much more wrong with the chart and corresponding paper. One of the key issues we've been looking at in this series is how Cook et al arbitrarily chose which results from the studies they examined to report and which to leave out. Today I'd like to discuss one of the most severe cases of this. It deals with the paper Verheggen et al (2014).
Continue reading

Consensus Chart Craziness - Part 3

Our last post continued examining a very strange chart from a recent paper by John Cook of Skeptical Science and many other people. The chart ostensibly shows that the consensus on global warming increases with a person's expertise. To "prove" this claim, Cook et al assigned "expertise" levels to a variety of "consensus estimates" they took from various papers. You can see the results in the chart below, to which I've added lines to show each category:

4_13_scaling_example

As you can see, the "consensus estimates" are all plotted one next to the other, without concern for how the categories are spaced. The result is that Category 4 doesn't appear in the chart at all, while Category 5 covers more than half of it. This creates a vastly distorted impression of the results, which, if displayed properly, would look something like:

4_13_scaling_proper

The last post in this little series showed there is even more wrong with the chart. Cook et al openly state the "expertise" values they used were assigned in a subjective manner, with no objective criteria or formal guidelines. In effect, they just assigned whatever "expertise" level they felt like to each "consensus estimate," then plotted the results in a chart where they chose not to show the expertise levels they had assigned.

The arbitrary assignment of these "expertise" levels is an interesting topic I intend to cover in more detail in a future post, as a number of the decisions they made are rather bizarre. For today, though, I'd like to discuss something else Cook et al did. Namely, I'd like to discuss how Cook et al arbitrarily chose to exclude some results while including others.
Continue reading

Consensus Chart Craziness - Part 2

Our last post highlighted a couple of peculiarities in the chart of a new paper, Cook et al (2016). This chart ostensibly shows that as climate science expertise increases, one becomes more likely to endorse the "consensus" position on global warming. There are a number of problems with this chart, the most fundamental of which I highlighted by adding lines to it:

4_13_scaling_example

These lines show where each of the five categories used to represent "expertise" falls. As you can see, multiple points with the same x-value are plotted side by side, causing the categories to be unevenly spaced. As a result, Category 5 covers more than half the chart while Category 4 doesn't even appear. This is highly unusual. Had the data been displayed in a normal manner, the result would have been something like:

4_13_scaling_proper

Which does not give the strong visual effect Cook et al (2016) created in their chart. Additionally, there appear to be a number of problems with the data used in creating this figure. As I discussed in the last post in this series, Cook et al (2016) give two "consensus estimates" from one paper, Carlton et al (2015), like so:

4_20_Carlton_2

And say:

Carlton et al (2015) adapted questions from Doran and Zimmerman (2009) to survey 698 biophysical scientists across various disciplines, finding that 91.9% of them agreed that (1) mean global temperatures have generally risen compared with pre-1800s levels and that (2) human activity is a significant contributing factor in changing mean global temperatures. Among the 306 who indicated that 'the majority of my research concerns climate change or the impacts of climate change', there was 96.7% consensus on the existence of AGW.

Even though Carlton et al (2015) clearly state only 5.50% of their respondents said, "The majority of my research concerns climate change or the impacts of climate change." Basic arithmetic shows you would need over 5,000 respondents, not fewer than 700, for 5.50% of them to be 306. That makes it clear the 306 value used by Cook et al (2016) is wrong. However, there are more problems, and I intend to discuss some in this post.
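For anyone who wants to check the arithmetic themselves, here is a minimal sketch of the calculation. The figures (698 respondents, 306 in the subgroup, 5.50%) are taken from the text above; everything else is just division.

```python
# Quick sanity check on the numbers discussed above.
respondents = 698        # biophysical scientists surveyed by Carlton et al (2015)
reported_subgroup = 306  # subgroup size Cook et al (2016) attribute to that survey
stated_share = 0.055     # the 5.50% share Carlton et al (2015) actually report

# If 306 of 698 respondents were in the subgroup, the share would be ~44%, not 5.50%.
print(f"{reported_subgroup / respondents:.1%}")    # -> 43.8%

# 5.50% of 698 respondents is only about 38 people.
print(round(respondents * stated_share))           # -> 38

# For 5.50% of a sample to equal 306, the sample would need ~5,564 respondents.
print(round(reported_subgroup / stated_share))     # -> 5564
```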
Continue reading

Consensus Chart Craziness - Part 1

There's a new paper out claiming to find a "consensus on [the] consensus" on global warming. It concludes:

We have shown that the scientific consensus on AGW is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology. This is supported by multiple independent studies despite variations in the study timing, definition of consensus, or differences in methodology including surveys of scientists, analyses of literature or of citation networks.

Its one and only figure is used to demonstrate this claim:

Figure 1 demonstrates that consensus estimates are highly sensitive to the expertise of the sampled group. An accurate estimate of scientific consensus reflects the level of agreement among experts in climate science; that is, scientists publishing peer-reviewed research on climate change. As shown in table 1, low estimates of consensus arise from samples that include non-experts such as scientists (or non-scientists) who are not actively publishing climate research, while samples of experts are consistent in showing overwhelming consensus.

If you've followed the discussion about this paper so far, you may have seen my recent post discussing this chart:

consvexpertise2

In which I explained:

Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?

You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.

Seriously. This is what the authors of the paper had to say about the chart:

Figure 1 uses Bayesian credible intervals to visualise the degree of confidence of each consensus estimate (largely a function of the sample size). The coloring refers to the density of the Bayesian posterior, with anything that isn’t gray representing the 99% credible interval around the estimated proportions (using a Jeffreys prior). Expertise for each consensus estimate was assigned qualitatively, using ordinal values from 1 to 5. Only consensus estimates obtained over the last 10 years are included.

For today, let's ignore the part about the "coloring" and "credible intervals." Let's just focus on the part where it says the expertise values were "assigned qualitatively." What that means is there was no rigorous method to how they assigned these values. They just went with whatever felt right. That's why there is no rubric or guideline published for the expertise rankings.

Kind of weird, right? Well that's not too important. What is important is... there are five categories. Look at the chart. Where are they?

I then showed what the chart would look like if you labeled the various categories in it:

4_13_scaling_example

One category (5) covers more than half the chart's range while another category (4) doesn't even appear on the chart. Any claim that "consensus estimates are highly sensitive to the expertise of the sampled group" based on this chart is heavily biased by the authors' decision to present their data in a misleading way. Had they simply shown their data by category, they would have gotten a chart like this:

4_13_scaling_proper

Which doesn't make for anywhere near as compelling an image, and it wouldn't allow the authors to create graphics like this one, which they use to promote their conclusions:

By choosing not to label the values on their x-axis, and by choosing to place every point directly next to the previous one rather than grouping the data by category, the authors of this paper were able to create the visual impression of a relationship between expertise level and the size of the consensus estimate.
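To see how much that plotting choice matters, here is a minimal sketch of the two approaches. The numbers are made up for illustration, not taken from the paper; the only difference between the two panels is whether each point is placed in its own sequential slot or at its actual category value.

```python
# Illustration of how side-by-side plotting distorts unevenly distributed categories.
# The data below are invented for demonstration purposes only.
import matplotlib.pyplot as plt

expertise = [1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5]   # ordinal "expertise" levels (note: no 4s)
consensus = [78, 82, 84, 87, 89, 90, 91, 93, 94, 95, 96, 97]  # "consensus estimates" (%)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Left panel: each estimate gets its own slot, so Category 5 sprawls across
# half the axis and the empty Category 4 silently vanishes.
ax1.plot(range(len(consensus)), consensus, "o")
ax1.set_title("Points plotted side by side")
ax1.set_xlabel("Expertise")

# Right panel: each estimate sits at its actual category value, so the empty
# Category 4 stays visibly empty and same-level estimates stack vertically.
ax2.plot(expertise, consensus, "o")
ax2.set_xticks([1, 2, 3, 4, 5])
ax2.set_title("Points grouped by category")
ax2.set_xlabel("Expertise")

ax1.set_ylabel("Consensus estimate (%)")
plt.tight_layout()
plt.show()
```

The left panel produces the same kind of smooth upward march seen in the paper's figure; the right panel makes the gap at Category 4 and the pile-up at Category 5 obvious.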

That alone should be damning, but it turns out there are many other problems with this chart as well. To highlight them, I am going to run a little mini-series of posts under the title of this one. The series will demonstrate how data used in this chart has been cherry-picked, adjusted, and in one case seemingly pulled out of thin air.

Because this post is already running long, I'll close it out with one of the more peculiar aspects of this chart. It's a mystery I cannot unravel. Continue reading

Remarkable Remarks by Cook et al

There was apparently an AMA on Reddit yesterday with the authors of the Cook et al (2016) paper I've recently discussed. I missed out on it, which is a shame, even though I expect I would have just been censored. Oh well. At least we get to see what the authors of the paper have to say about their work.

That's what this post will be for. I'm going to highlight comments by these authors that seem remarkable and give a brief description of what is noteworthy about each. Feel free to do the same in the comments section.
Continue reading

Strangest Chart Ever Created?

I think I may have found the strangest chart I have ever seen. You can see it below, taken from the newly published paper on the supposed "consensus on the consensus" on global warming:

consvexpertise2

Now, I discussed this paper a bit yesterday, and there are probably a lot of things more important to discuss than this chart. Those other things aren't as funny though. You see, this chart is complete nonsense. Look at the x-axis. See how it says "Expertise"? Tell me, what scale do you think that's on?

You're wrong. It doesn't matter what your answer might have been; it's wrong. It's wrong because there is no scale for the x-axis on this chart.
Continue reading