|2012-01-19 10:39:24||Introduction to TCP|
Here is an overview of The Consensus Project (TCP), taken from this earlier thread (I will be updating this as the project develops). Please keep any discussion of TCP within the SkS forum as it's all formative and not ready for public exposure yet.
It's essential that the public understands that there's a scientific consensus on AGW. So Jim Powell, Dana and I have been working on something over the last few months that we hope will have a game-changing impact on the public perception of consensus. Basically, we hope to establish that not only is there a consensus, there is a strengthening consensus. Deniers like to promote the myth that the consensus is crumbling, that the tide is turning. However, our survey of the peer-reviewed literature shows that the opposite is true - the consensus is getting stronger, and the gap between those who accept and those who reject the consensus is widening. What we have in mind is an extended campaign over 2012 (and beyond).
Phase 1: Publishing a paper on the negligible impact of climate denial in the peer-reviewed literature
TCP is basically an update and expansion of Naomi Oreskes' survey of the peer-reviewed literature, with deeper analysis. In 2004, Naomi surveyed 928 articles in the Web of Science matching the search "global climate change" from 1993 to 2003. We've expanded the time period (1991 to 2011) and added papers matching the search "global warming". We ended up with 12,272 papers. I imported the details of each paper (including abstracts) into the SkS database and set up a simple crowd-sourcing system allowing us to rate the category of each paper using Naomi's initial categories (impacts, mitigation, paleoclimate, methods, rejection, opinion). We did find some rejection papers in the larger sample, but the number was negligible. The number of citations the rejection papers received was proportionally even smaller, indicating the negligible impact of AGW denial in the peer-reviewed literature. Jim and I wrote these initial results up into a short Brevia article that we just submitted to Science (so please don't mention these results outside of this forum yet, lest it spook Science, who freak out if there's any mention of a submitted paper before publication). Of course, Science has a 92% rejection rate, so the chances are very slim - we'll try other journals if rejected there.
When the paper is published, we would announce it on SkS as the beginning of the public launch of TCP. It will also be promoted through the communications dept at the Global Change Institute, although their press releases only go to Australian media, so we will have to explore other promotion ideas.
Phase 2: SkS team rates endorsements
For Phase 1, we didn't rate the actual # of endorsements of AGW - the focus was on the proportion and impact of rejection articles. So Phase 2 will be about tallying the # of endorsements and comparing it to the # of rejections in a variety of ways. This is where it gets exciting. A simple comparison of the # of endorsement papers vs rejection papers tells a vivid story of a strengthening consensus. Even more telling is the # of citations of endorsement papers vs rejection citations. And - this is something I haven't crunched any data for yet - simply adding up the # of authors who have written endorsement papers vs rejection authors will, I imagine, tell another interesting tale.
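The three comparisons above (paper counts, citations, distinct authors) could be computed with a small tally routine. A minimal sketch - the record format, function name and sample numbers are all made up for illustration, not real TCP data:

```python
from collections import Counter

def tally(papers):
    """Compare endorsement vs rejection papers three ways:
    paper counts, total citations, and distinct authors.
    Each record is (stance, citation_count, author_list)."""
    counts = Counter()
    citations = Counter()
    authors = {"endorse": set(), "reject": set()}
    for stance, cites, names in papers:
        counts[stance] += 1
        citations[stance] += cites
        if stance in authors:
            authors[stance].update(names)
    # Count each author once per stance, however many papers they wrote
    return counts, citations, {k: len(v) for k, v in authors.items()}

# Made-up example records, purely to show the shape of the output
sample = [
    ("endorse", 120, ["Smith", "Jones"]),
    ("endorse", 45, ["Lee"]),
    ("neutral", 10, ["Park"]),
    ("reject", 3, ["Doe"]),
]
```

Running `tally(sample)` would give all three comparisons at once, so the same pass over the 12,000 ratings can feed each of the stories above.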
What I'm thinking of doing is crowd-sourcing the rating of the 12,000 papers among the SkS team. By rating, we are actually going beyond what Naomi did. Her rating was one-dimensional - just the 6 categories. We decided we wanted to collect more information about each paper and have defined two dimensions, or two aspects of each paper, that we want to capture: the category (impacts, mitigation, paleoclimate, methods, opinion) and the endorsement level (from explicit endorsement down to explicit rejection). So I'll program up a crowd-sourcing system allowing SkSers to rate papers - the goal being that every paper gets at least 2 ratings from different people, for consistency.
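The two-ratings-per-paper rule could be enforced with a simple round-robin assignment. A minimal sketch, assuming nothing about the actual SkS system - the function and rater names here are hypothetical:

```python
import itertools

def assign_ratings(paper_ids, raters, ratings_per_paper=2):
    """Assign each paper to `ratings_per_paper` distinct raters,
    cycling through the rater pool so workloads stay roughly even."""
    if len(raters) < ratings_per_paper:
        raise ValueError("need at least as many raters as ratings per paper")
    assignments = {r: [] for r in raters}
    pool = itertools.cycle(raters)
    for pid in paper_ids:
        chosen = set()
        while len(chosen) < ratings_per_paper:
            chosen.add(next(pool))  # set membership keeps the raters distinct
        for r in chosen:
            assignments[r].append(pid)
    return assignments
```

Because the raters for each paper are drawn as a set, no one rates the same paper twice, and every paper ends up with exactly two independent ratings to check against each other.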
The end goal of Phase 2 is publishing the results in a peer-reviewed paper. As for co-authorship, I was thinking a practical approach would be this: to be a co-author on the paper, you rate at least 2000 papers - that seems a fair requirement to get your name on a peer-reviewed paper. Co-authors would of course also have input into the writing of the paper - we'll need to anticipate all the various attacks our results will get, as this result will be highly threatening to the denialosphere.
The result is we'll have 12,000 papers with category and endorsement-level ratings. We can analyse this data in a variety of ways to tell many interesting stories - but judging from what I've rated so far, I'm guessing we'll find around 50% of the papers are explicit or implicit endorsements and the rest are neutral (with the tiniest fraction being rejections). Note - and this is an important note - this result is based just on the abstract text, not the full paper, and hence is an underestimate of the actual number of endorsements.
Phase 3: Publicly crowd source the categorisation of neutral papers
When we publish the Phase 2 paper, it will strongly emphasise that the endorsement percentage is based just on the abstract text and hence underestimates the true number of papers endorsing the consensus. I anticipate there will be around 6000 "neutral" papers. So what I was thinking of doing next is a public crowd-sourcing project where the public are given the list of neutral papers and links to the full papers - if they find evidence of an endorsement, they submit it to SkS (I'll have an easy-to-use online form) with the excerpted text. The SkS team would check incoming submissions and, if they check out, make the endorsement official. Thus over time, we would gradually process the 6000 neutral papers, converting many of them to endorsement papers - and make regular announcements like "hey, the consensus just went from 99.75% to 99.8%, here are the latest papers with quotes". The final result will be a definitive, comprehensive survey of the number of endorsements of AGW in the literature over the last 21 years.
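The running consensus figure in an announcement like that is just endorsements as a share of position-taking papers, with neutrals excluded from the denominator. A minimal sketch - the counts below are illustrative only, chosen to reproduce the 99.75% and 99.8% figures, not actual TCP tallies:

```python
def consensus_percentage(endorse, reject):
    """Consensus share among papers that take a position;
    neutral papers are excluded from the denominator."""
    return 100.0 * endorse / (endorse + reject)

# Illustrative counts only, picked to hit the percentages in the text:
# 5985 endorsements vs 15 rejections  -> 99.75%
# after converting more neutrals to endorsements, 7485 vs 15 -> 99.8%
before = consensus_percentage(5985, 15)
after = consensus_percentage(7485, 15)
```

Each batch of converted neutral papers adds to the numerator and denominator while the rejection count stays fixed, which is why the percentage only creeps upward as Phase 3 progresses.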
Phase 4: Repeat each year
Fingers crossed, Phase 3 will be complete by the end of 2012. Then in early 2013, we can repeat the process for all papers published in 2012 to show that the consensus is still strengthening. We beat the consensus drum regularly and make SkS the home of the strengthening consensus.
|2012-01-20 11:35:17||I think this fits here|
This study on the supposed global cooling consensus in the 1970s is a classic example of how deniers deceived the public through the media: they claimed the scientific consensus at the time was that we were heading for an ice age, when the consensus was the exact opposite. One of the deniers' main arguments is that the public can't trust scientific consensus - and, as the Consensus Project shows, they are still making it.
Put it somewhere in the introduction to get people and the media in the mood.
Edit: I see this paper is in the SkS arguments - along with others. But my suggestion stands: presenting it as a great example of how the truth about scientific consensus is totally misrepresented by deniers is a good lead-in to what the project shows is still happening - and has got worse.
Just a thought/question, and definitely back-burner stuff for now. Once the rating of these papers has all been done, will we start looking at not just new papers, but also older papers? Could we progressively work backwards, a few years at a time? Much of the consensus among climate scientists at the coal face about the validity of AGW significantly pre-dates 1991. The more we can show that this isn't just a new 'alarm' but has been thought about for decades, the more we might reach people. And I imagine the interesting movement in the percentage of acceptance happened during the 60s and 70s.
Besides, what a cool tool to have available. Every Climate Change paper ever written!
|2012-01-23 20:23:07||Pre 1991|
In 1991, we have about 160 papers, so the sample size gets pretty small as you go further back. The reason we go back to 1991 is that WoS only added abstracts back to 1991. When Naomi Oreskes did her survey in 2004, they only had abstracts going back to 1993, which is why her survey covers 1993 to 2003. We've extended the period to 1991 to 2011, adding 10 years to the analysis.
So doing pre-1991 means tracking down abstracts for fairly inconsequential sample sizes. I question the reward-vs-effort ratio, but if someone with library access is keen, they're welcome to have a look down the track.