2012-01-19 11:00:28
Should we email the authors?
John Cook

john@skepticalscience...
130.102.158.12

In the original thread, Rob H suggests a way to deflect the accusation that SkS is biased and therefore our analysis is biased:

Would it make sense to perform a double check on how each paper is classified by sending an email to the lead author of each paper?  Something as simple as a quick note saying that you have read "suchandsuch" paper and have categorized it as "whatever."  Ask the author if they agree with the interpretation.  

One big critique I would see coming out of this is deniers saying the group categorizing the papers is biased.  Never mind if it's true or not.  They're going to say it.  If we get at least a significant number of authors saying that they agree with how their papers have been categorized then that might nip that argument in the bud.

When I suggested this was a huge workload, Rob suggested a possible procedure:

I was thinking something very simple.  Each paper needs to have some form of review, right?  Someone has to make a determination on each of the 12,000 papers.  If part of that process includes entering an email address (when available) of the lead author then it's easy.  Sending out an email can be done at the end of the process.  Emailing 12,000 people is easy.

Well, I'm not so sure that emailing 12,000 people is that easy, particularly as I think a form letter would not suffice - it would need to be individually tailored if you wanted an adequate response rate. So the reward-to-effort ratio here would be very small. However, if obtaining the author responses was a dealbreaker for making TCP bulletproof, I would suggest we just have to bite the bullet (hmm, mixing my metaphors) and do it.
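
For what it's worth, the sending itself is the trivial part. A minimal sketch of the kind of batch mailer Rob describes - the CSV columns, sender address and SMTP host here are hypothetical placeholders, not anything we've actually set up:

```python
# Minimal batch-mailer sketch. Everything specific -- the CSV layout, the
# sender address, the SMTP host -- is a hypothetical placeholder.
import csv
import smtplib
from email.message import EmailMessage

TEMPLATE = """Dear Dr. {author},

We have read your paper "{title}" and categorized it as: {category}.
Do you agree with that interpretation?

Regards,
The Consensus Project team
"""

def send_batch(csv_path: str, host: str = "smtp.example.org") -> None:
    # Expects columns: author, email, title, category (collected during rating).
    with open(csv_path, newline="", encoding="utf-8") as f, \
            smtplib.SMTP(host) as smtp:
        for row in csv.DictReader(f):
            if not row["email"]:
                continue  # no address was collected for this paper
            msg = EmailMessage()
            msg["From"] = "tcp@example.org"  # placeholder sender
            msg["To"] = row["email"]
            msg["Subject"] = "Categorization of your paper: " + row["title"]
            msg.set_content(TEMPLATE.format(**row))
            smtp.send_message(msg)
```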

However, I think the criticism of analysis bias is just as effectively (or more so) answered by making our results transparent on SkS - by publishing the results in an interactive, user-friendly fashion. One thing Naomi never did was publish her data. We can publish the papers and our ratings - I can make it linkable, searchable, interactive, sortable, visual, animated - and challenge people to test our results for themselves.
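
Concretely, the first step is just getting our ratings out as machine-readable data that an interactive page can load. A sketch - the field names and the two example entries are invented, purely illustrative:

```python
# Sketch: export our ratings as JSON that an interactive, sortable page on
# SkS could load. Field names and the two entries are invented examples.
import json

ratings = [  # in practice, pulled straight from the ratings database
    {"title": "Example paper A", "year": 2004, "category": "implicit endorsement"},
    {"title": "Example paper B", "year": 2009, "category": "neutral"},
]

with open("tcp_ratings.json", "w", encoding="utf-8") as f:
    json.dump(ratings, f, indent=2)
```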

A key aspect of our result is that it's robust - rejection papers have had a negligible impact and there's a growing gap between endorsements and rejections no matter how generously you rate rejections. So if deniers want to quibble over this or that paper, let them - it doesn't affect the end result. We must be clear from the start that the conclusion of this paper isn't "There are only 23 rejection papers" but that "rejection papers have had a proportionally negligible impact on our understanding of the science" and that "there is a growing gap between papers endorsing the consensus and papers rejecting the consensus - the consensus is strengthening".
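
To make the robustness point concrete, here is the shape of the calculation - every per-period count below is an invented placeholder, not our real data:

```python
# Robustness sketch: cumulative endorsements minus cumulative rejections,
# with a "generosity" multiplier inflating the rejection count. All the
# per-period counts below are invented placeholders, not our real data.
from itertools import accumulate

endorsements = [50, 120, 300, 700, 1400]  # hypothetical counts per period
rejections   = [2, 3, 4, 5, 6]            # hypothetical counts per period

def cumulative_gap(generosity: float) -> list[float]:
    e = accumulate(endorsements)
    r = accumulate(x * generosity for x in rejections)
    return [a - b for a, b in zip(e, r)]

for g in (1, 2, 5):
    # Even counting every rejection five times over, the gap keeps growing.
    print(g, cumulative_gap(g))
```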

But discussion welcome if others have different thoughts.

2012-01-19 12:49:10
Glenn Tamblyn

glenn@thefoodgallery.com...
143.238.233.30

One advantage of emailing the authors is that we can send them a standard form which includes our categories and ask them how they rate their own paper against those categories. Could that be used to supersede our own ratings? The reply would then constitute part of the database, and we would only have to rate a paper ourselves if the author doesn't respond or declines to rate it.

Addendum: we would have to explicitly request that the authors comment on what the paper's findings support, rather than on what their personal opinion is.
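
Schematically, something like the record below, where an author's reply supersedes our rating - the field names and category strings are just an illustration, not a real schema:

```python
# Sketch of a ratings record where an author's self-rating supersedes ours.
# Field names and category strings are illustrative, not an actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperRating:
    paper_id: int
    our_category: str                      # e.g. "explicit endorsement"
    author_category: Optional[str] = None  # filled in only if the author replies

    @property
    def final_category(self) -> str:
        # The author's own rating takes precedence; otherwise fall back to ours.
        return self.author_category or self.our_category
```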

2012-01-19 18:32:42
Rob Painting
Rob
paintingskeri@vodafone.co...
118.93.201.79

I like the idea, especially as Rob H has kindly volunteered to do the grunt work. Don't know how responsive the authors may be, but it adds a certain level of bombproofing to the work. Very difficult for anyone to claim we've miscategorized a paper when that's exactly what the paper's authors state too.

Let's face it, the deniers will want to cast doubt, and the easiest way is to find one or two papers whose categorization could legitimately be called into question, then use the 'house of cards' meme to cast aspersions on the rest of the study. If we don't give them even a bone to chew on, they'll saunter off empty-handed.

2012-01-19 19:16:03
MarkR
Mark Richardson
m.t.richardson2@gmail...
134.225.187.225

There will be one or two 'skeptics' who've published stuff and who're likely to leak this if they think they can gain anything from it.

2012-01-19 19:51:08
Rob Painting
Rob
paintingskeri@vodafone.co...
118.93.201.79

You mean leak it and garner publicity for it? Hmm....

Just leave the skeptic paper e-mails until right near the end. If they wish to leak this, then they'll do a lot of the publicising for us. 

2012-01-20 01:44:31
Rob Honeycutt

robhon@mac...
12.202.9.2

My thinking is you don't need a high return rate on these.  I'm certainly willing to do the grunt work.  I think, like Rob P says, it should be a form letter.  Short explanation of what we're doing.  List the paper title and how we have categorized it.  Ask if they agree or disagree with our determination.  

This would function more like a poll.  If we got 500 responses out of several thousand emails we could make a strong case for the validity of the results.  And, hey, if we're getting a lot of scientists disagreeing with the classification of their papers then that would clearly be good information to get!  We don't want a bunch of scientists on OUR side popping up going, "Well, yes, but not exactly right."
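
Back-of-the-envelope, the arithmetic supports that. A quick sketch of the margin of error on the agreement rate - the 500 replies and the 90% agreement figure are hypothetical, purely to show the calculation:

```python
# Margin-of-error sketch for the poll-style check. The sample size and
# agreement rate are hypothetical, just to show the arithmetic.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    # Half-width of an approximate 95% confidence interval for a proportion.
    return z * math.sqrt(p * (1 - p) / n)

n, p = 500, 0.90  # hypothetical: 500 replies, 90% agreeing with our rating
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # about 90% +/- 2.6%
```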

I think of this all from my perspective as a manufacturing engineer.  You always want error checks if you want to ensure quality of the end product.

2012-01-23 14:45:06
Tom Curtis

t.r.curtis@gmail...
112.213.206.248

I have suggested emailing authors as a first step, rather than a second, on the "nuts and bolts" thread. The advantage is that, by not informing the authors of our rating, we gain a more independent check of our ratings, and can separately report author ratings in the paper.

2012-01-23 16:41:16
Glenn Tamblyn

glenn@thefoodgallery.com...
138.130.64.114

I agree with Tom on the methodology. Leading with our rating and then asking for their response adds a psychological biasing pressure, which is a no-no. A bit like most medical research, where trials are randomised, double-blind and placebo-controlled.

Another aspect of the crowd-sourcing of ratings would be to have the allocation of papers to raters done randomly. Don't just dot the i's and cross the t's - be seen to be doing so. The quality of the methodology will be one of this project's biggest strengths.
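
The random allocation itself is only a few lines - a sketch in which the rater names, paper IDs and the two-raters-per-paper choice are all placeholders:

```python
# Sketch of randomly allocating papers to raters, two raters per paper so
# ratings can be cross-checked. Names, IDs and counts are placeholders.
import random

def allocate(paper_ids: list[int], raters: list[str],
             per_paper: int = 2, seed: int = 42) -> dict[int, list[str]]:
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    return {pid: rng.sample(raters, per_paper) for pid in paper_ids}

print(allocate(list(range(1, 6)), ["rater1", "rater2", "rater3", "rater4"]))
```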