2012-02-29 21:52:17
A brief moment to pat SkSers on the back
John Cook

john@skepticalscience...
121.222.175.176
Visited UNSW today while I was in Sydney and had lunch with Tim Lambert. I talked to him about TCP - as he's a computer scientist, I was interested in his thoughts on how we might do the public interactive bit after the launch. We also talked about his attempt at his own consensus project a few years ago. Basically, he took the 928 papers from Naomi Oreskes' study, imported them into a blog, reprogrammed the blog so readers could rate papers, then stepped back and let the crowdsourcing roll. The response was minimal and the effort was never completed. So kudos to SkSers - in a short time we've already rated 5 times the Oreskes sample. We've still got a long way to go, and a finished paper is a lot of analysis, literature review and writing away, but we've already achieved heaps!

Side note - by the time we're done, we will have rated all the papers in the Oreskes study so will be interesting comparing our results to Naomi's.

Hmm, just had a thought - I wonder if I should ask Naomi for her database so we can compare our final ratings directly.

2012-02-29 22:08:11
Ari Jokimäki

arijmaki@yahoo...
194.251.119.196

Oreskes' database would be very helpful. I was about to finish my rating for today, but now I'm wondering if I should do another 28.

Speaking of Oreskes, I hope we will do better than her in our analysis. Her paper doesn't include any kind of uncertainty analysis - I think I would have rejected it if I had been refereeing it. However, seeing that it is marked as an "essay", I wonder if it was peer-reviewed at all?

2012-02-29 22:35:13
Ari Jokimäki

arijmaki@yahoo...
194.251.119.196

You say we are going through all of the papers in Oreskes' sample, but she used the search phrase "climate change" while we used the phrase "global climate change", if I have understood correctly. So I'm wondering: do we really have all of Oreskes' papers in our sample?

2012-02-29 22:55:40
John Cook

john@skepticalscience...
121.222.175.176

Oreskes used "global climate change". The initial write-up said "climate change" but that was a typo.

In our database, there are 932 "global climate change" papers from 1991 to 2001 - i.e. Web of Science has added 4 more papers to its database since Naomi did her analysis. They do that: add papers to earlier years after the fact.

She told me in an email that it was peer-reviewed - originally it was going to be just an essay but then Science had it peer-reviewed.

Oh and nice touch finishing on 928 tonight :-)

2012-03-01 04:38:18
dana1981
Dana Nuccitelli
dana1981@yahoo...
64.129.227.4

Yeah one big plus with TCP is going to be its comprehensive nature - over 12,000 papers, each rated twice to ensure as much accuracy as possible, and with better categories too ;-)  Would definitely be useful to compare to Oreskes' ratings.

Way to get to 928 Ari.  I'm having a hard time just keeping half your pace!

2012-03-01 19:44:26
Glenn Tamblyn

glenn@thefoodgallery.com...
121.216.121.61

Just to throw in a reality check, to ward off the naysayers: how many of PopTech's 900 papers are in the list? I know, I can hear the reaction already. But this may be a useful measure of how inclusive the original search criteria are. We have already pulled in papers that are obviously not climate related. What steps might you take as part of the study to look for climate-related papers possibly missed by the search criteria you are using?

I am thinking of this both in terms of reducing denialist criticisms and of raising the standing of the final results (and also your personal standing as a researcher into this, which, apart from the kudos, is a useful tool in the war).

You have a basic methodology, but you also explore outliers and other factors. That's professional.

Unfortunately we can't rate the quality of the research - that might change the whole perspective. I wish. Oh God, I wish!

2012-03-02 07:49:34
Reality check
John Cook

john@skepticalscience...
121.222.175.176
Anticipating every possible line of attack is absolutely essential so naysay to your heart's content! My hope is TCP will make a deep, abiding impact on actual public perception of consensus, not just another temporary strike in the blog war. If we promote this well, the backlash will be proportionally intense.

The beauty of TCP is we can look at all the attacks directed at Oreskes 2004 as an indicator of what we will face - a bit like looking at past years' exams as a clue to what to expect in an upcoming exam.

I don't know how many of Poptech's papers appear, but be aware that if we applied our rating system to Poptech's list, we would find few rejections - he rates papers that study past climate change as skeptical! In fact, that might even be a follow-up project: we rate Poptech's list in the same fashion as we did this list. First, a good chunk of the 900 will have been covered in the TCP effort. Second, once we get all his papers into the database, it will be really quick to knock off 900 papers. Hell, Ari just went past 1000! Something to think about :-)

Inclusiveness is something we will address in the paper. Naomi found 0 rejection papers because she had a sample size of only 928. We have found some rejections because of the increased sample size. If we broadened the sample (e.g. added "climate change" papers, swelling it to 70,000 papers), we would find more rejections but also many more endorsements, so our result would still be the same. We must be very clear in communicating that this isn't about finding every rejection paper but about comparing the proportion of rejections to endorsements, like any survey. It's not about finding absolute numbers but about measuring proportions.
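The proportional argument above can be put in a toy calculation. All the numbers below are made up for illustration (they are not TCP or Oreskes figures); the point is only that broadening a sample adds papers on both sides, leaving the measured proportion roughly unchanged:

```python
# Illustrative sketch with hypothetical counts: a survey measures the
# *share* of rejection papers, not the absolute number, so a broader
# sample with the same underlying mix gives the same answer.

def rejection_share(rejections, endorsements):
    """Fraction of position-taking papers that reject the consensus."""
    return rejections / (rejections + endorsements)

# Narrow sample (e.g. one search phrase) - made-up counts.
narrow = rejection_share(rejections=30, endorsements=3970)

# Broadened sample - more rejections AND many more endorsements,
# in roughly the same underlying ratio.
broad = rejection_share(rejections=210, endorsements=27790)

print(f"narrow: {narrow:.3%}, broad: {broad:.3%}")
```

Both samples give 0.750%, even though the broad sample contains seven times as many rejection papers in absolute terms.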

Lastly, we can rate the quality of research. Citation count is a metric for quality - not foolproof, but a good proxy. That too will definitely be examined in the final analysis. All the citation data is in the system - not only how many citations each paper receives but how that evolves over time, so we can plot citations of rejection papers vs citations of endorsement papers over time as a measure of their impact on science.
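A rough sketch of what that citation comparison could look like. The record layout and numbers here are assumptions for illustration only, not the actual TCP database schema:

```python
# Hypothetical record format: (paper_id, category, year, citations_that_year).
# Summing per year and accumulating gives a cumulative-citation curve per
# category, which could be plotted for rejection vs endorsement papers.
from collections import defaultdict

def cumulative_citations(records, category):
    """Cumulative citation count per year for papers in one category."""
    per_year = defaultdict(int)
    for paper_id, cat, year, cites in records:
        if cat == category:
            per_year[year] += cites
    total, curve = 0, {}
    for year in sorted(per_year):
        total += per_year[year]
        curve[year] = total
    return curve

# Made-up example data.
records = [
    ("p1", "endorse", 1995, 10), ("p1", "endorse", 1996, 25),
    ("p2", "reject", 1995, 2),   ("p2", "reject", 1996, 1),
]
print(cumulative_citations(records, "endorse"))  # {1995: 10, 1996: 35}
print(cumulative_citations(records, "reject"))   # {1995: 2, 1996: 3}
```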

2012-03-02 07:59:21
dana1981
Dana Nuccitelli
dana1981@yahoo...
64.129.227.4

I like the poptech idea, except that it would draw attention to his list.  I haven't seen anyone reference it in ages.  Probably better to let it die a quiet death.

2012-03-02 09:43:44
John Cook

john@skepticalscience...
121.222.175.176

Fair point. We started crowdsourcing Poptech's list, but that effort died out, and possibly rightly so. His is a marginal blog meme, while we are aiming to get this into peer review and into the public consciousness - we don't want to elevate it to that level.

But damn, it would be fun to do. Like Steve Goddard, just have to grit my teeth and turn away.

2012-03-02 10:18:43
Andy S

skucea@telus...
209.121.15.232

Poptech is sure to spring back to life once the study comes out. It would be great to have a study of his stuff ready in our back pockets to slap him down. By that time, we'll have duplicate ratings, including some self-ratings by the first authors, and we should be able to crush him with reliable data.

But we shouldn't consider his study worthy of including in the formal report.

2012-03-02 10:36:33
dana1981
Dana Nuccitelli
dana1981@yahoo...
64.129.227.4

Yeah, it might not hurt to categorize the poptech list just to have it in our back pockets.  If it comes up, maybe just do a blog post about it.  900 papers would only take us a couple days anyway.

2012-03-02 10:55:40
John Cook

john@skepticalscience...
121.222.175.176

The main workload with Poptech would be getting his papers into the database. It requires identifying which of his papers are already listed then adding the others. Going through all 900 will take some time. I wonder how we might crowd source that. Perhaps copy and paste his list into a google doc then as we add each paper, delete it from the google doc? Or has someone already converted his list to a database or spreadsheet?
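The "identify which papers are already listed" step above could be sketched as a title match against the existing database. The titles and the normalisation rule below are illustrative assumptions, not the actual TCP tooling:

```python
# Minimal de-duplication sketch: normalise titles (lowercase, strip
# punctuation, collapse whitespace) and check each candidate against
# the set of titles already in the database.
import re

def normalize(title):
    """Crude canonical form of a title for matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def split_new_papers(candidate_titles, existing_titles):
    """Return (already_in_db, needs_adding) for a list of candidate titles."""
    existing = {normalize(t) for t in existing_titles}
    already, new = [], []
    for t in candidate_titles:
        (already if normalize(t) in existing else new).append(t)
    return already, new

# Made-up example titles.
existing = ["A Study of Global Climate Change", "Solar Cycles and Climate"]
candidates = ["A study of global climate change!", "Cosmic Rays and Clouds"]
already, new = split_new_papers(candidates, existing)
print(already)  # ['A study of global climate change!']
print(new)      # ['Cosmic Rays and Clouds']
```

Exact-match-after-normalisation like this would miss retitled or subtly different entries, so a manual pass (e.g. the Google doc approach) would still be needed for the leftovers.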

Once they're in there, we'd shoot through the ratings within a few days, particularly if a bunch of them have already been rated by TCP.

2012-03-02 17:25:25
Ari Jokimäki

arijmaki@yahoo...
194.251.119.197

Forget about "Poptech" already, we are doing science here :) . Don't muddy our sample with some additional cherry-picked batch. We have objective sample selection via the search words. If you include these additional papers, which have been selected with a bias in one direction, you will introduce yet another dose of subjectivity (and uncertainty) into the sample. Why waste time with that? Just for fear that someone will start throwing accusations at us on the Internet? They will do that anyway, regardless of whether we have all the Poptech papers or not.

2012-03-02 17:49:52
John Cook

john@skepticalscience...
121.222.175.176

I'm certainly not saying we include Poptech's papers in the TCP analysis. That has to stick rigidly to the ISI database results matching the searches 'global warming' and 'global climate change'.

If, down the track, SkSers haven't had a gutful of rating and crowdsourcing papers, we can do this as a separate thing that would only be a blog post, not a peer-reviewed paper - but only if there are a few people keen to do it. I wouldn't call it a high priority, but I do confess a degree of fascination with Poptech's list, and it is tempting to want to apply some rigorous analysis to it.