2012-02-21 20:44:47 Things to address in analysis and discuss in paper
Ari Jokimäki

arijmaki@yahoo...
194.251.119.196

1. Language barrier. (Abstracts are in English, but some raters and some abstract writers are non-native English speakers.)

2. Rating system usage (how easy it is to click an unintended rating due to the rating system's design).

 

2012-02-21 23:13:33
John Cook

john@skepticalscience...
121.222.175.176

1. Suggested action?

2. Will program up an "edit my ratings" page

2012-02-21 23:39:45
Ari Jokimäki

arijmaki@yahoo...
194.251.119.196

1. During the analysis and while writing the paper, an effort should be made to estimate the effect of the language barrier (very difficult, I know). I started this thread just to write down possible issues to discuss in the paper.

2. That's good, but how do you identify wrong ratings afterwards without reading the abstracts again? By the way, in recent rating work I noticed at least one wrong rating soon after I clicked it, so we already have empirical evidence that unintended wrong ratings do happen. But how do we quantify it?

2012-02-22 07:43:49 Wrong ratings
John Cook

john@skepticalscience...
121.222.175.176

You mean you clicked a rating then before clicking save, you noticed the rating had changed?

2012-02-22 17:03:57
Ari Jokimäki

arijmaki@yahoo...
194.251.119.198

Yes, I noticed that the rating was different from what I intended, probably because I mis-clicked while rating. Luckily I caught this before saving, but my point here is that we need to acknowledge this possible source of uncertainty in the paper somehow. So there's nothing you need to do in this thread currently; these issues are here so we remember them later while doing the analysis and writing the paper.

3. Understanding the rating instructions differently (for example, the recent push to put impact-neutral papers into implicit endorsements).

2012-03-01 23:04:15
Ari Jokimäki

arijmaki@yahoo...
194.251.119.198

I have thought about estimating the uncertainty in the rating process. Consider a case where 10 papers are rated by two people and the ratings are either neutral (0) or implicit endorsement (1). Example ratings are as follows:

1 1 1 1 1 0 0 0 0 0

1 1 1 1 0 1 0 0 0 0

There are four papers that both raters have marked as implicit endorsement, so at least 4 papers belong in that category in this case. Two additional papers were rated as implicit endorsement by only one rater; if we include these, we get a maximum of 6 implicit endorsements. What I'm saying is that we can use the cases where the ratings differ as indicators of uncertainty, so we can even derive some sort of quantification of our rating uncertainty from these differences.
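This lower/upper bound idea can be sketched in a few lines (a minimal illustration of the example above, not part of the rating system):

```python
# Ratings for 10 papers by two raters: 1 = implicit endorsement, 0 = neutral.
rater_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
rater_b = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

# Lower bound: papers both raters marked as implicit endorsement.
lower = sum(a == 1 and b == 1 for a, b in zip(rater_a, rater_b))
# Upper bound: papers at least one rater marked as implicit endorsement.
upper = sum(a == 1 or b == 1 for a, b in zip(rater_a, rater_b))

print(f"implicit endorsements: {lower} to {upper}")  # 4 to 6
```

The gap between the two bounds is exactly the set of disagreements, which is what would feed into an uncertainty estimate.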

Let's assume further that a third person is assigned to rate the two papers with differing ratings. Suppose that person rates one as neutral and one as implicit, so our situation would be:

1 1 1 1 1 0 0 0 0 0

1 1 1 1 0 1 0 0 0 0

x x x x 0 1 x x x x

I'm not sure what the correct method to derive the uncertainty would be, but we know the end result would be 5 implicit endorsements. We could use the original situation, where the range was 4-6, and say that there are 5 +/- 1 implicit endorsements (this is a bad example in the sense that we could have derived that result even without the third person). Other thoughts?
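The tiebreak step above can be sketched the same way (again just illustrating the worked example, with the third rater's two verdicts hard-coded):

```python
rater_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
rater_b = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

# Papers where both agree on implicit endorsement.
agreed = sum(a == b == 1 for a, b in zip(rater_a, rater_b))
# The third rater settles only the papers where A and B disagree.
disagreements = [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]
tiebreaks = {4: 0, 5: 1}  # third rater's verdicts, keyed by paper index

final = agreed + sum(tiebreaks[i] for i in disagreements)
print(final)  # 5, inside the original 4-6 range
```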

2012-03-04 21:06:12
Ari Jokimäki

arijmaki@yahoo...
91.154.109.19

I have started wondering if some journals are missing from our sample, or something like that, because I have now rated 1300 papers and I think I have encountered only a few papers that are actually relevant to the issue of AGW. There are lots and lots of impacts and mitigation papers, but I haven't seen many papers actually studying global warming itself. This might be something to consider and check after the rating phase.

2012-03-04 23:38:23
Riccardo

riccardoreitano@tiscali...
2.33.129.107

Biologists dominate the scene; that's not new. Also, many climate papers (correctly) end up in the neutral category, for example when they study the tropopause response to warming (any warming). The meaning of the neutral papers is going to be a delicate issue during the analysis.

2012-03-05 07:32:58
Sarah
Sarah Green
sarah@inlandsea...
67.142.177.21

It would be interesting to check how many of the papers referenced by the IPCC show up in our sample. Certainly a good number of basic papers on climate science are older than 1991. 

 "Impact-neutral" papers are by authors who (i) find the potential effects of climate change worthy of study, and (ii) feel that the consensus is strong enough that they don't need to revisit it. So this category is really an implicit endorsement of the consensus by scientists who do not study climate themselves, but have been convinced by the arguments of climate scientists.

I assume the number of those is growing over time, and increasing as a fraction of the total. That would be useful to track in the results.

 

2012-03-05 07:39:22
Sarah
Sarah Green
sarah@inlandsea...
67.142.177.21

On the uncertainty issue:

The sample is large enough that a statistical analysis would be possible comparing the ratings of each of us. I am imagining a bar graph showing the number I have put in each category, and seeing how that compares to others. Do I skew more toward (or against) endorsement than Ari?

A good statistician (not me!) could find some clever ways of doing this.
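The per-rater comparison Sarah describes could be as simple as tallying each rater's category counts side by side. A minimal sketch with made-up ratings (the names and category labels are placeholders, not real data):

```python
from collections import Counter

# Hypothetical ratings per rater; real data would come from the ratings database.
ratings = {
    "rater_1": ["endorse", "endorse", "neutral", "neutral", "neutral", "reject"],
    "rater_2": ["endorse", "neutral", "neutral", "neutral", "neutral", "reject"],
}

categories = ["endorse", "neutral", "reject"]
for rater, rs in ratings.items():
    counts = Counter(rs)
    # Crude text bar chart: one '#' per paper in each category.
    for cat in categories:
        print(f"{rater} {cat:8s} {'#' * counts[cat]}")
```

Side-by-side counts like these would show at a glance whether one rater skews toward or against endorsement relative to another; a statistician could then formalise it (e.g. with an inter-rater agreement statistic).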

2012-03-05 08:44:23
Riccardo

riccardoreitano@tiscali...
2.33.129.107

Sarah's idea could be good to determine the uncertainty in the rating.

2012-03-05 15:59:47
Sarah
Sarah Green
sarah@inlandsea...
67.142.177.24

I assume that one output will be the number of different institutions/authors under each category. That can address the notion that a few scientists at a few (left wing) institutions are conspiring to push AGW.

It would also be interesting to see if denialism is concentrated in particular places.

2012-03-05 18:54:33 Giving the Denialists their moment in the Sun...
Glenn Tamblyn

glenn@thefoodgallery.com...
58.165.89.133

An interesting point to consider: not the academic merit of the TCP project broadly, but its capacity to then be used in the AGW PR war. Does it include the major skeptic papers (Soon & Baliunas etc.)? Secondly, does it include PopTech's 900 papers? That list probably should not be included in the academic analysis, but in terms of minimising the scale of the blowback from deniers it might be important.

If they can claim that '100's of papers counter to the Alarmists AGW Agenda have been excluded from this biased study....' that will make a lot of noise that will wash over into the MSM.

So how does the methodology both maintain academic rigor in its methods AND minimise the blowback?

The only approach I can see is to use the methodology it currently has as its primary approach. Then, as a secondary analysis, perhaps not in the main paper, perhaps even on SkS, explore the extent to which it captures these other sources.

It isn't the main game of the study, but giving this side issue some attention at the end may reap further rewards.

Also, we need to clearly draw a distinction: the response in the Blogosphere is predictable (look up the word 'vitriolic' in the dictionary), but it is the broader MSM impact that matters.

2012-03-05 20:04:05 Sarah's idea of authors/institutions
John Cook

john@skepticalscience...
121.222.175.176

Definitely, we'll show information about the # of denier authors vs endorsement authors.

What I would *love* to show is the countries where endorsement authors come from but that info is not so easy to obtain. But it would be great to say something like 20,000 authors from 150 countries endorse the consensus, with a sexy infographic driving home how widespread, huge and diverse this global consensus is.

Glenn, I can see the "You didn't include paper X or paper Y" objection being a major critique of TCP. The answer is that TCP is not a comprehensive list of every rejection paper ever written but a sampling of the literature, albeit a very large sample (the biggest ever, perhaps?). We've chosen a strong methodology - selecting a very specific sample from the ISI database that results in a huge 12000+ sample size. Nevertheless, we need to come up with a strong response to this.

One approach which perhaps is along the lines you're suggesting is for us to have a look at rejection papers that didn't appear in our sample, see what keywords they use then work out how many papers would come up in our database if we'd expanded our search to include those keywords. The result will be a much larger sample, more consensus papers and hence the same result - a growing gap between endorsements and rejections.

2012-03-05 21:23:11
logicman

logicman_alf@yahoo.co...
109.150.152.138

A listing by journal might be useful to implicitly highlight journals which are more receptive of denialist BS papers attempting to rebut AGW.

A listing by country would sit well with a recent Oxford study showing that the US and UK are more receptive to denialist BS rebuttals of AGW than the other countries surveyed.

2012-03-06 00:35:24
Sarah
Sarah Green
sarah@inlandsea...
67.142.177.24

The argument "you didn't include paper X" should be addressed by 

1. Identifying how many papers in the IPCC report are in TCP, e.g. to say TCP includes 50% of the IPCC papers.

2. Doing the same for a denialist list.

Based on the number of denialist abstracts I've seen so far, the stats are going to look pretty good. Even if we just add in the 900 denialist papers and accept their implied ratings, they will only be a few percent. So another tack is to figure out how many papers they'd have to come up with to make any impression, and ask where those papers are. (And why aren't they in the peer-reviewed literature?)
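The "few percent" point is easy to check as back-of-envelope arithmetic, taking the 12,000+ sample size mentioned upthread and PopTech's 900 papers at face value:

```python
# Rough share of the sample the 900 papers would represent if all were
# added and all counted as rejections (both generous assumptions).
tcp_sample = 12000   # TCP sample size cited earlier in the thread
denial_list = 900    # PopTech's list, taken at face value
share = denial_list / (tcp_sample + denial_list)
print(f"{share:.1%}")  # 7.0%
```

Even under those generous assumptions the list stays in single digits as a percentage, which is the point being made.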