|2012-03-03 13:01:15||Discussion of the public interactive feature|
The "will TCP be the biggest ever survey of peer-reviewed climate papers?" thread got hijacked by discussion of how the public interactive feature will work, but once we started discussing it, I got thinking about the issue more. It's a very exciting idea and could be a very powerful communication tool, as well as unique and novel (it might attract some media interest). So I'm moving that discussion here.
There are so many ways we can handle the public interactive feature. Currently, this is my thinking:
On a connected note:
|I think that publishing the ratings one by one in a public online archive will be essential. The questions are: 1) do we, can we, publish the abstracts themselves, or just the titles, journals, authors and dates (and raters' IDs)? 2) should we make it easy for anyone in the public to rate papers? Neither is required for publication purposes. My gut feeling is that we need to be transparent, but there's no need to make things easy for our critics. We should do a double check on all papers by authors identified as skeptics to make sure none have wrongly slipped through. Poptech's list and Anderegg et al.'s list of skeptics will be useful here. I've got some other ideas for quality control: 1) when our two ratings differ by more than one rating step, or by a different classification plus one rating step, we should regrade them using five raters; 2) where the scientist's self-rating is one step more skeptical, or two or more steps more affirming, than ours, we should have a second look (blind to the raters) to see if we made an error. I think if we take these extra steps, our critics will have a very hard time even with heroic cherry-picking. Even if they find a hundred clear-cut examples of questionable ratings, which would be a huge effort for them, that won't budge the overall stats significantly. Besides, our team will be able to quickly crowd-source a detailed rebuttal for each alleged case of error.|
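The two quality-control rules in the quote above can be expressed as simple checks. The sketch below is purely illustrative: the function names, the integer rating scale, and the direction convention (higher number = more skeptical) are all assumptions, not part of any actual system described here.

```python
# Hypothetical sketch of the two quality-control rules described above.
# Assumptions: ratings are integers on an endorsement scale where a
# higher number means more skeptical; "category" is the paper's
# classification. All names here are illustrative.

def needs_five_rater_regrade(rating_a, rating_b, category_a, category_b):
    """Rule 1: send to five raters if the two initial ratings differ
    by more than one step, or the classifications differ and the
    ratings also differ by a step."""
    diff = abs(rating_a - rating_b)
    if diff > 1:
        return True
    if category_a != category_b and diff >= 1:
        return True
    return False

def needs_blind_second_look(our_rating, self_rating):
    """Rule 2: flag when the author's self-rating is one or more steps
    more skeptical, or two or more steps more affirming, than ours."""
    delta = self_rating - our_rating
    return delta >= 1 or delta <= -2

# A two-step disagreement between raters triggers rule 1:
print(needs_five_rater_regrade(2, 4, "explicit", "explicit"))  # True
# A self-rating one step more affirming than ours does not trigger rule 2:
print(needs_blind_second_look(our_rating=3, self_rating=2))    # False
```

Nothing in the thread fixes these thresholds in code, of course; the point is only that both rules are mechanical enough to automate when flagging papers for a second pass.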