2012-01-19 12:15:42 - Endorsement categories
John Cook


Before we can start rating papers, we need to pin down the endorsement categories. This needs to be done carefully - Dana and I have been discussing this and have slightly different views. My experience has been that the SkS forum has such a high signal-to-noise ratio that contentious discussions invariably produce more light than heat, so my hope is that discussion here can help bring clarity to the issue.

To provide some context, here's a history of how we got to where we are now. Initially, we planned to replicate Naomi's methodology of sorting papers into 7 categories:

  1. Endorsement of the consensus
  2. Rejection of the consensus
  3. Impacts
  4. Mitigation
  5. Paleoclimate
  6. Methods
  7. Opinion (not an official category, more a way of eliminating certain papers that weren't peer-reviewed)

We came to realise there were two limitations with this categorisation. Firstly, she was measuring two different things - the level of endorsement and the type of research - but lumping them into the one basket. So we thought it made more sense to measure two different aspects of each paper - the category of research and the level of endorsement.

Secondly, Naomi made assumptions about the level of endorsement. She assumed all mitigation and impacts papers implicitly endorsed the consensus, so adding the explicit endorsements to the mitigation and impact papers, she concluded 75% of papers endorsed the consensus. However, having read many abstracts, I can say many impact papers don't implicitly endorse the consensus, and even some mitigation papers aren't about mitigating CO2 emissions but other forms of non-GHG pollution. She also assumed all Methods and Paleoclimate papers are neutral, but I've found some of them implicitly endorse the consensus. So rather than assume the number of endorsements, why not quantify it? That's what we set out to do.

So we've broken down our ratings into categories and endorsement, with the following options:


  1. Impacts
  2. Mitigation
  3. Paleoclimate
  4. Methods
  5. Opinion 

And here's the crucial part, where our crowd-sourcing will initially be focused - the level of endorsement. I tentatively suggest the following options:

  1. Explicit Rejection
  2. Possible Rejection
  3. Neutral
  4. Implicit Endorsement
  5. Explicit Endorsement
  6. Evidence for AGW

Some notes:

Firstly, in Phase 1, we identified many papers that definitely rejected AGW. Eg - Chilingar's paper that CO2 actually causes cooling. Then there were possible rejections that didn't quite go as far as rejecting AGW but minimised the role of AGW by saying the sun or ocean cycles were the dominant factor in global warming. But this grey area of "possible rejections" is the area of discussion between Dana and myself - he thinks they should be labelled as endorsing AGW because they do accept that human activity has a role, they merely minimise it. So if we collapsed all options into simple rejection/neutral/endorsement, the possible rejections that minimise AGW would fall on the endorsement side. I'm thinking that in the spectrum of endorsement, possible rejections lie on the other side of neutral and should fall on the rejection side. Thoughts, comments?

The other comment is that option 6, evidence for AGW, came about because as I was reading papers that explicitly endorse AGW, sometimes I would see a kick-arse paper that doesn't just say humans are causing GW but provides evidence for it. I wanted some way of flagging those special papers. For all practical purposes, we would probably collapse "explicit endorsement" and "evidence for AGW" into the one category, but I just wanted a way of attaching a sticky note to those papers saying "this one is important".

Re Glenn's idea of weighting a paper's level of support, there is potential for that in this system. Even now, the database table of endorsement levels uses the numbering system above - 1 for rejections, 5 for explicit endorsements, etc. So if you wanted to add up a score, you could use that numbering or come up with an indexing system based on those numbers. That's one of the strengths of this system: all the data is in there, and it's just a matter of doing the ratings and deciding how we want to output the results.
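
For illustration, Glenn's weighting idea could be sketched directly from those stored level numbers. A minimal sketch, assuming a plain list of endorsement-level ratings; the -1..+1 scaling and the sample ratings below are my own invention, not anything decided for the project:

```python
# Sketch of a weighted "endorsement index" built on the stored level numbers
# (1 = explicit rejection ... 5 = explicit endorsement; level 6, "evidence
# for AGW", is folded into 5). The scaling choice is an assumption.

def endorsement_index(ratings):
    """Map levels 1-5 onto -1..+1 (3 = neutral = 0) and average them."""
    if not ratings:
        raise ValueError("no ratings supplied")
    return sum((min(r, 5) - 3) / 2 for r in ratings) / len(ratings)

ratings = [5, 5, 4, 3, 3, 1]  # made-up example ratings
print(endorsement_index(ratings))  # 0.25
```

Any other indexing scheme is just a different mapping applied to the same stored numbers, which is the point: the raw levels stay in the database and the weighting is a reporting decision.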

There are specific details on how we identify implicit endorsements but before we get into that nitty gritty, I'd like to discuss how we should organise the options for endorsement level.

2012-01-19 15:47:47
Brian Purdue


Is the project going to include the National Academies, institutions and companies that endorse/agree with the consensus? This list is a few years old now, so it would have to be updated – that could be a fair job if someone hasn’t updated it recently. Maybe the list has grown, so that could be pointed out as well. I think it would be best to leave companies out and just have the science bodies.

Could it be the introduction to the scientific papers' consensus section, or to the whole project?

The consensus has been explicitly endorsed by:

Academia Brasiliera de Ciências (Brazil), Royal Society of Canada, Chinese Academy of Sciences, Académie des Sciences (France), Deutsche Akademie der Naturforscher Leopoldina (Germany), Indian National Science Academy, Accademia dei Lincei (Italy), Science Council of Japan, Russian Academy of Sciences, Royal Society (United Kingdom), National Academy of Sciences (United States of America), Australian Academy of Sciences, Royal Flemish Academy of Belgium for Sciences and the Arts, Caribbean Academy of Sciences, Indonesian Academy of Sciences, Royal Irish Academy, Academy of Sciences Malaysia, Academy Council of the Royal Society of New Zealand, Royal Swedish Academy of Sciences.

In addition to these national academies, the following institutions specializing in climate, atmosphere, ocean, and/or earth sciences have endorsed these conclusions: NASA's Goddard Institute of Space Studies (GISS), National Oceanic and Atmospheric Administration (NOAA), National Academy of Sciences (NAS), State of the Canadian Cryosphere (SOCC), Environmental Protection Agency (EPA), Royal Society of the United Kingdom (RS), American Geophysical Union (AGU), American Institute of Physics (AIP), National Center for Atmospheric Research (NCAR), American Meteorological Society (AMS), Canadian Meteorological and Oceanographic Society (CMOS).

These organizations also agree with the consensus: The Earth Institute at Columbia University, Northwestern University, University of Akureyri, University of Iceland, Iceland GeoSurvey, National Centre for Atmospheric Science UK, Climate Group, Climate Institute, Climate Trust, Wuppertal Institute for Climate Environment and Energy, Royal Meteorological Society, Community Research and Development Centre Nigeria, Geological Society of London, Geological Society of America, UK Centre for Social and Economic Research on the Global Environment, Pew Center on Global Climate Change, American Association for the Advancement of Science, National Research Council, Juelich Research Centre, US White House, US Council on Environmental Quality, US Office of Science Technology Policy, US National Climatic Data Center, US Department of Commerce, US National Environmental Satellite, Data, and Information Service, The National Academy of Engineering, The Institute of Medicine, UK Natural Environment Research Council, Office of Science and Technology Policy, Council on Environmental Quality, National Economic Council, Office of Management and Budget, Australian Government Bureau of Meteorology, Engineers Australia, American Chemical Society, American Association of Blacks in Energy, World Petroleum Council, The Weather Channel, National Geographic.

The following companies agree with the consensus: ABB Air France Alcan Alcoa Allian American Electric Power Aristeia Capital BASF Bayer BP America Inc. Calvert Group Canadian Electricity Association Caterpilliar Inc. Centrica Ceres Chevron China Renewable Citigroup ConocoPhillips Covanta Holding Corporation Deutsche Telekom Doosan Babcock Energy Limited Duke Energy DuPont EcoSecurities Electricity de France North America Electricity Generating Authority of Thailand Endesa Energettech Austraila Pty Ltd Energy East Corporation Energy Holding Romania Energy Industry Association Eni Eskorn ETG International Exelon Corporation F&C Asset Management FPL Group General Electric German Electricity Association Glitnir Bank Global Energy Network Institute, Iberdrola ING Group Institute for Global Environmental Strategies Interface Inc. International Gas Union International Paper International Power Marsh & McLennan Companies Massachusetts Municipal Wholesale Electric Company MEDIAS-France MissionPoint Capital Partners Munich Re National Grid National Power Company of Iceland NGEN mgt II, LLC NiSource NRG Energy PG&E Corporation PNM Resources Reykjavik Energy Ricoh Rio Tinto Energy Services Rockefeller Brothers Fund Rolls-Royce Societe Generale de Surveillance (SGS Group) Stora Enso North America Stratus Consulting Sun Management Institute Swiss Re UCG Partnership US Geothermal Verde Venture Partners Volvo In addition, the scientific consensus is also endorsed by the CEO's of the following companies: A. O. Smith Corporation Abbott Laboratories Accenture Ltd. ACE Limited ADP Aetna Inc. Air Products and Chemicals, Inc. AK Steel Corporation Alcatel-Lucent Allstate Insurance Company ALLTEL Corporation Altec Industries, Inc. American Electric Power Company, Inc. American Express Company American International Group, Inc. Ameriprise Financial AMR Corporation/American Airlines Anadarko Petroleum Corporation Apache Corporation Applera Corporation Arch Coal, Inc. 
Archer Daniels Midland Company ArvinMeritor, Inc. AstraZeneca Pharmaceuticals LP Avery Dennison Corporation Avis Budget Group, Inc. Bechtel Group, Inc. BNSF Railway Boeing Company Brink's Company CA Carlson Companies, Inc. Case New Holland Inc. Ceridian Corporation Chemtura Corporation Chubb Corporation CIGNA Corporation Coca-Cola Company Constellation Energy Group, Inc. Convergys Corporation Con-way Incorporated Corning Incorporated Crane Co. CSX Corporation Cummins Inc. Deere & Company Deloitte Touche Tohmatsu Delphi Corporation Dow Chemical Company Eastman Chemical Company Eastman Kodak Company Eaton Corporation EDS Eli Lilly and Company EMC Corporation Ernst & Young, L.L.P. Fannie Mae FedEx Corporation Fluor Corporation FMC Corporation Freddie Mac General Mills, Inc. General Motors Corporation Goldman Sachs Group, Inc. Goodrich Corporation Harman International Industries, Inc. Hartford Financial Services Group Home Depot, Inc., The Honeywell International, Inc. HSBC - North America Humana Inc. IBM Corporation Ingersoll-Rand Company International Textile Group ITT Corporation Johnson Controls, Inc. JP Morgan Chase & Co. KPMG LLP Liberty Mutual Group MassMutual MasterCard Incorporated McGraw-Hill Companies McKesson Corporation MeadWestvaco Corporation Medco Health Solutions, Inc. Merck & Co., Inc. Merrill Lynch & Company, Inc. MetLife, Inc. Morgan Stanley Motorola, Inc. Nasdaq Stock Market, Inc. National Gypsum Company Nationwide Navistar International Corporation New York Life Insurance Company Norfolk Southern Corporation Northwestern Mutual Life Insurance Company Nucor Corporation NYSE Group, Inc. Office Depot, Inc. Owens Corning (Reorganized) Inc. Pactiv Corporation Peabody Energy Corporation Pfizer Inc PPG Industries, Inc. Praxair, Inc. PricewaterhouseCoopers LLP Principal Financial Group Procter & Gamble Company Prudential Financial Realogy Corporation Rockwell Automation, Inc. Ryder System, Inc. SAP America, Inc. Sara Lee Corporation SAS Institute Inc. 
Schering-Plough Corporation Schneider National, Inc. ServiceMaster Company Siemens Corporation Southern Company Springs Global US, Inc. Sprint Nextel St. Paul Travelers Companies, Inc. State Farm Insurance Companies Tenneco Texas Instruments Incorporated Textron Incorporated Thermo Fisher Scientific Inc. TIAA-CREF Tyco Electronics Tyco International Ltd. Union Pacific Corporation Unisys Corporation United Technologies Corporation UnitedHealth Group Incorporated USG Corporation Verizon Communications W.W. Grainger, Inc. Western & Southern Financial Group Weyerhaeuser Company Whirlpool Corporation Williams Companies, Inc. Xerox Corporation YRC Worldwide Inc


2012-01-19 16:28:43 - Other endorsements
John Cook


The plan is to submit a paper containing the results of Phase 2. As part of the introduction, I was planning to mention that there are a number of organisations that have endorsed the consensus - as one of several lines of evidence indicating a scientific consensus. 

What would be really cool is if the section talking about science bodies endorsing the consensus could actually emphasise that this indicates a 'strengthening consensus'. Eg - one idea might be plotting a time series of the number of organisations endorsing the consensus, showing more and more bodies signing on over time. Or is that a bit of a stretch?

2012-01-19 16:31:16
Alex C


It doesn't seem that modelling work would fall well under any of those categories.  Are you expecting Methods to house such studies?

Edit - I haven't read much into Naomi's work, was that the way she had implemented such categorization?

2012-01-19 16:42:22 - Modelling
John Cook


Generally, they fall under Methods unless they're modelling impacts.

Good point re Naomi - that paper should be required reading. Will start a new thread on this.

UPDATE: have added new thread: Required reading for everyone involved in TCP

2012-01-19 18:21:59
Glenn Tamblyn


If modelling for example falls under Methods, then Methods is the wrong name. Modelling is wrong as well, since it is too limited - what about radiative transfer theory, ocean acidification theory and so on?


As a more general point, putting on my old IT guy's hat: it is HARD to add data to a large database post hoc. Better to get it right the first time. How this applies to the categories question is to not get locked into the idea that the list of categories that might match the final analysis is the category system that has to be adopted inside the database.

The History of AGW database basically has 3 categories: Pro, Neutral and Anti. However, there could reasonably be many finer grades and scales of category that could slice & dice that.

Suppose, hypothetically, we had a scheme with 24 categories in our database. We could still assign 8 of those categories to Pro, 8 to Neutral and 8 to Anti, and then analyse on 3 categories. But if we want to analyse sub-details like the % of Anti that is absolute rejection vs the % that is partial acceptance, we can't do it with the 3-category version but could with the 24-category version.

This is the oldest problem in the book in the IT world: reconciling what the user wants from the system with what the system needs internally to deliver it. And listening to the conversations here, I am hearing users trying to design their system - no offense intended to anyone. But this is core to maximising the benefits of this project. Overly restricting the categories may severely limit what we can do with it, whereas a broader range of categories can be narrowed down to simpler questions using SQL queries on the database.

Don't confuse the way the data may be represented internally in the database with the ways in which we may then use it for external reporting. Not putting enough information into the database is REALLY hard to rectify post hoc.
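
A toy sqlite example of that separation, purely for illustration: the database keeps a fine-grained code per paper, and the collapse to Pro/Neutral/Anti happens only in the reporting query. The table name, code numbers and rows are all invented, not the project's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, code INTEGER)")
# Hypothetical fine-grained codes: 1-2 rejection flavours, 3 neutral,
# 4-6 endorsement flavours. The rows are made-up sample data.
con.executemany("INSERT INTO papers (code) VALUES (?)",
                [(6,), (5,), (4,), (3,), (3,), (2,), (1,)])

# The collapse lives in the query, not in the stored data, so a later
# analysis can still slice the detailed codes any other way.
rows = con.execute("""
    SELECT CASE WHEN code <= 2 THEN 'Anti'
                WHEN code = 3 THEN 'Neutral'
                ELSE 'Pro' END AS bucket,
           COUNT(*)
    FROM papers
    GROUP BY bucket
    ORDER BY bucket
""").fetchall()
print(rows)  # [('Anti', 2), ('Neutral', 2), ('Pro', 3)]
```

A different reporting question just means a different CASE mapping over the same stored codes, which is exactly the flexibility Glenn is arguing for.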

2012-01-19 18:35:53
Ari Jokimäki


A study that shows that global warming is happening (for example) doesn't seem to fall into any of those categories, unless you have a very broad definition of "impacts".

I wonder how neutral and implicit endorsement are separated from each other? If some evidence is presented that doesn't go against a theory, then it is implicit endorsement because it fits the theory. So basically anything could be counted as implicit endorsement unless it goes implicitly/explicitly against AGW or has a completely unrelated study subject. I think you need to define these categories very strictly so that you don't introduce too much subjectivity into the classification phase (different people see these categories differently).

2012-01-19 18:37:18
Rob Painting

What about the number of authors of each paper? Are we going to compile information on that? I expect it would show a staggeringly high number of publishing scientists accept the consensus (or whatever you want to call it).

I know we're going to have a number of repeats, but if there's some capacity to capture the author names, I'm willing to do the grunt work on that. That is of course if others feel it's worthwhile. 

2012-01-19 19:18:00
Mark Richardson

I like the two dimensional idea, with one 'axis' being:

  1. Impacts
  2. Mitigation
  3. Paleoclimate
  4. Methods
  5. Opinion

Where I guess 'impacts' would include things like estimates of temperature change, changing glaciers, observations of moving plants and animals etc. It would be more 'observations and impacts'.

But how do we ensure we include things like calculations of the sensitivity from observations, model outputs etc?


And the other axis sounds best if it's strong rejection to strong acceptance, where we have to define a 'consensus' position for each point. E.g. climate sensitivity below 1.5 °C, or measurements saying there is no global warming.

2012-01-19 19:56:07 - Number of authors
John Cook


Rob, that info is all in the SkS database - I have an "authors" field imported straight from WoS. The authors are semicolon-separated, so I can automate stripping the authors into a separate table and then generate the number of authors endorsing the consensus. Looking forward to seeing what those numbers tell us.
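
The stripping step is simple to automate. A sketch, assuming records of (authors string, endorsement level) pairs; the function name, the level-4 cutoff and the sample names are my own assumptions for illustration:

```python
# Split the semicolon-separated WoS 'authors' field and count distinct
# author names on endorsing papers. Note this counts name strings, not
# people - the same name on two papers may or may not be the same person.

def endorsing_authors(records, min_level=4):
    """records: iterable of (authors_string, endorsement_level) pairs.

    min_level=4 assumes levels 4 and above count as endorsements,
    matching the 6-level scheme discussed in this thread.
    """
    authors = set()
    for author_field, level in records:
        if level < min_level:
            continue
        for name in author_field.split(";"):
            name = name.strip()
            if name:
                authors.add(name)
    return authors

records = [
    ("Smith, J; Jones, A", 5),   # explicit endorsement
    ("Jones, A; Wu, L", 4),      # implicit endorsement
    ("Doe, K", 3),               # neutral - excluded from the count
]
print(len(endorsing_authors(records)))  # 3
```

The duplicate-name caveat in the comment is the "repeats" Rob mentions; de-duplicating the set handles exact repeats, but disambiguating people behind identical name strings would still be the grunt work.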

2012-01-19 20:21:32
Rob Painting


2012-01-19 22:02:39 - Addressing Glenn and MarkR's comments
John Cook


Glenn is right that the design of the database is important and it's best to get it right beforehand, hence this thread. I did have in mind that we capture more information and then have the option of collapsing it into a simpler system afterwards if we want... or not. So for example, using this system:

  1. Explicit Rejection
  2. Possible Rejection
  3. Neutral
  4. Implicit Endorsement
  5. Explicit Endorsement
  6. Evidence for AGW

We can collapse 4,5,6 into one "Endorsement" category if we wanted to compare Endorsements to Rejections.
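
That collapse is just a lookup applied at analysis time, so nothing is lost by storing all six levels. A sketch (the sample ratings are made up, and placing level 2 on the rejection side follows my suggested reading above, which is exactly the point still under discussion with Dana):

```python
from collections import Counter

# Collapse map: levels 4, 5 and 6 all fold into "Endorsement", mirroring
# the rejection side, while the database keeps the full six levels.
COLLAPSE = {
    1: "Rejection",    # Explicit Rejection
    2: "Rejection",    # Possible Rejection
    3: "Neutral",
    4: "Endorsement",  # Implicit Endorsement
    5: "Endorsement",  # Explicit Endorsement
    6: "Endorsement",  # Evidence for AGW
}

levels = [6, 5, 5, 4, 3, 2, 1]  # made-up ratings
tally = Counter(COLLAPSE[x] for x in levels)
print(dict(tally))
```

If Dana's view prevailed, only the map would change (2 pointing at "Neutral" or "Endorsement") - the stored ratings would stay the same.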

But it's also useful to have implicit vs explicit - seeing how those two figures evolve proportionally over time could tell some interesting stories too.

Ari, this study doesn't concern itself with whether global warming is happening - but what is causing global warming.

How do we distinguish between neutral and implicit? That is the key question that's been occupying me for a while as I've been rating papers and building up some guidelines. Here's what I've got so far:

Guidelines for determining the 'Endorsement Level' of a paper

  • Explicit Endorsement: Mention of 'anthropogenic global warming' or 'anthropogenic climate change' as a given fact. Mention of increased CO2 leading to higher temperatures without including 'anthropogenic' or a reference to human influence/activity relegates the paper to 'implicit endorsement'.
  • Implicit Endorsement: Mitigation papers that examine GHG emission reduction or carbon sequestration
  • Implicit Endorsement: Climate modelling papers that talk about emission scenarios in the abstract implicitly endorse that GHGs cause warming
  • Implicit Endorsement: Paleoclimate papers that link CO2 to temperature change
  • Implicit Endorsement: Papers about climate policy unless they restrict their focus to non-GHG issues like CFC emissions in which case they're neutral
  • Implicit Endorsement: Modelling of increased CO2 effect on regional temperature - not explicitly saying global warming but implying warming from CO2
  • Implicit Endorsement: Reference to IPCC is usually an implicit endorsement
  • Neutral: If a paper merely mentions 'global climate change' or 'global warming', this isn't sufficient to imply anthropogenic global warming
  • Neutral: Mitigation papers talking about non-GHG pollutants are not about AGW
  • Neutral: Research into the direct effect of CO2 on plant growth without including the warming effect of CO2
  • Neutral: Anthropogenic impact studies about direct human influence like land use changes (eg - not about anthropogenic GHG emissions)
  • Neutral: Research into metrics of climate change (surface temperature, sea level rise) without mention of causation (eg - GHGs)
  • Reject Consensus: explicitly rejects anthropogenic warming (to ensure no misses, if the abstract is ambiguous, the full paper is read)

2012-01-19 23:53:01
Ari Jokimäki


"Ari, this study doesn't concern itself with whether global warming is happening - but what is causing global warming."

Yes, but your search will most likely return papers addressing the question of whether global warming is happening or not. I was just asking about their category, as impacts or methods don't seem to fit those papers.

"Implicit Endorsement: Mitigation papers that examine GHG emission reduction or carbon sequestration"

I think that these issues can be researched without believing that AGW is correct. There are plenty of occasions where a scientist has written a paper on some theory they don't believe in. It can be just an academic exercise, for example. Here we also deal with the development of technologies which might have very little to do with AGW itself. I'm not necessarily suggesting that you should get rid of this classification, but at least you need to discuss the potential bias that these papers might not be actual endorsements. But I do think that it would be best to put these papers into the neutral bin.

"Implicit Endorsement: Climate modelling papers that talks about emission scenarios in the abstract implicitly endorse that GHGs cause warming"

I disagree with this. Surely you can have a study that uses emission scenarios and shows that there's no warming when you feed them to a climate model. They probably have a crappy model or parametrizations or something like that, but that's beside the point. A paper like that doesn't endorse AGW even if it uses emission scenarios.

"Implicit Endorsement: Papers about climate policy unless they restrict their focus to non-GHG issues like CFC emissions in which case they're neutral"

Except those climate policy papers that suggest we shouldn't do anything because AGW is not true.

"Implicit Endorsement: Reference to IPCC is usually an implicit endorsement"

I'm not sure what you mean by this. Are you suggesting that papers should be put in the implicit endorsement bin just because they mention the IPCC?

"Neutral: Research into metrics of climate change (surface temperature, sea level rise) without mention of causation (eg - GHGs)"

I think that here the line between implicit endorsement and neutral is very difficult to determine. For example, mentioning that global warming is showing up in surface temperatures "as expected" might be an implicit endorsement, but it mentions causation not explicitly but implicitly (via the "expected"). So this example is an implicit implicit endorsement. This is a very difficult part of the classification that needs to be handled at least in the discussion section of the paper.


2012-01-20 14:47:16
Glenn Tamblyn


Each one of John's bullets could be a separate category, for example, and perhaps more. Then these can be folded into a smaller number of categories. Alternatively, we could have a code for each category, then separate sub-codes for the different reasons a paper is in a category. This means there is less of a personal value judgement about a paper. We start with an initial set of codes or codes/sub-codes. Then reviewers can only use existing codes. If a paper is found that doesn't fit any code, it can be discussed whether to 'squeeze' it into an existing code or create a new one. Only once consensus is reached about a new code is that code added and the paper rated.

In which case I would suggest something like JC's original codes, then sub-codes for the various reasons, but with a few others to keep the scheme symmetrical. To keep this study strictly impartial we shouldn't have any biases in the structure of the categories we use. If it turns out that no papers fit a category, that tells us something too. So:

  1. Explicit Rejection
  2. Implicit Rejection
  3. Explicit Possible or Partial Rejection
  4. Implicit Possible or Partial Rejection
  5. Implicit Neutral
  6. Explicit Neutral
  7. Explicit Possible or Partial Endorsement
  8. Implicit Possible or Partial Endorsement
  9. Implicit Endorsement
  10. Explicit Endorsement
  11. Evidence for AGW

I have uncertainties about 11 - evidence. Is this evidence of warming? The Anthropogenic part? Evidence of cooling? The Anthropogenic part? Again we need impartiality in our codes - let empty code buckets tell their tale.

2012-01-20 15:15:21 - Making it too complicated
John Cook


I'm wary of letting it get too complicated with too many categories. 

The 'Evidence for AGW' category isn't essential - it was just a passing thought. I'd see papers like the ones Dana highlights in his 'Causes of Global Warming' post, which not only explicitly state AGW but provide evidence for it, and think "these papers are gold, we should highlight these". For symmetry, we might not even include that option in the final analysis but collapse it into the "Explicit Endorsement" option - but then at least at the end of the rating effort, we'd also have a solid list of papers providing evidence for AGW (note the A in AGW - not just evidence of warming but evidence that humans are causing it). So in response to the concern that the categories aren't symmetrical and hence unfair: the raw structure of the database need not equate to the final results presented, and it could be a simple case of showing rejection/neutral/endorsement.

Ari makes some good points and let me offer an updated version of the guidelines with some clearer language and some comments added:

Guidelines for determining the 'Endorsement Level' of a paper

Explicit Endorsement

  • Mention of 'anthropogenic global warming' or 'anthropogenic climate change' as a given fact. Mention of increased CO2 leading to higher temperatures without including 'anthropogenic' or a reference to human influence/activity relegates the paper to 'implicit endorsement'.

Implicit Endorsement

  • Mitigation papers that examine GHG emission reduction or carbon sequestration (one important element of this survey is that we're not speculating on what the scientist believes but can only go on the words of the published paper - if the paper examines the issue of reducing GHG emissions, the research implicitly endorses AGW regardless of the scientists' feelings).
  • Climate modelling papers that talk about emission scenarios and subsequent warming in the abstract implicitly endorse that GHGs cause warming (note - added "and subsequent warming or other climate impacts from increased CO2")
  • Paleoclimate papers that link CO2 to temperature change
  • Papers about climate policy (specifically mitigation of GHG emissions) unless they restrict their focus to non-GHG issues like CFC emissions in which case they're neutral. (added "specifically mitigation of GHG emissions")
  • Modelling of increased CO2 effect on regional temperature - not explicitly saying global warming but implying warming from CO2
  • Endorsement of IPCC findings is usually an implicit endorsement. (updated this so it's more than just reference to IPCC but actual endorsement of IPCC)

Neutral

  • If a paper merely mentions 'global climate change' or 'global warming', this isn't sufficient to imply anthropogenic global warming
  • Mitigation papers talking about non-GHG pollutants are not about AGW
  • Research into the direct effect of CO2 on plant growth without including the warming effect of CO2
  • Anthropogenic impact studies about direct human influence like urban heat island and land use changes (eg - not about GHG emissions)
  • Research into metrics of climate change (surface temperature, sea level rise) without mention of causation (eg - GHGs)

Reject Consensus

  • explicitly rejects anthropogenic warming (to ensure no misses, if the abstract is ambiguous, the full paper is read)

2012-01-20 15:18:53


Simplify, simplify. Three categories (+, -, neutral) in each should be sufficient, and avoid second-guessing.

+, -, N on AGW.

Various categories on attribution, mitigation, sensitivity, methods, opinion. Don't over-divide it.

2012-01-20 16:33:40 - Simplify
John Cook


The reasons we have the extra categories - implicit endorsements and possible rejections - are:

  1. Naomi talks about explicit and implicit endorsements and in one sense, we're replicating and expanding on her work. I don't like the fact that she just assumes all mitigation and impact papers are implicit endorsements, so it would be nice to quantify it. Plus critiques of Naomi DID quantify the number of implicit endorsements, so we should get ahead of potential criticisms.
    More importantly, many papers do endorse the consensus without saying it explicitly so to get a true state of the consensus in the literature, you need to include implicit endorsements.
  2. Re possible rejections, the reason we did these was an organisational thing at first - we highlighted all "possible rejections", then had a closer look at each paper to see whether it was a definite rejection or not. Some possibles, while they "smelt" like rejections, once you looked at the paper it was apparent that they didn't reject AGW (one even explicitly endorsed AGW in the text, so it pays to look closely). So it was a procedural thing, to ensure we didn't miss any. It was also useful for the publishing of our final results to be able to say "we found 23 rejections of AGW - we also identified 19 other possible rejections that we decided didn't go so far as to reject AGW, but even if you include them in the rejection list, our conclusion of a negligible denial impact and strengthening consensus is still robust".

    This is actually a key result from this survey - even if you include all the disputed papers that *might* be rejections, the end result of a strengthening consensus still stands. So that's why we're happy to publish our ratings in an open, transparent fashion, challenging others to reproduce our work, confident that the results will stand.

2012-01-20 19:03:45
Glenn Tamblyn


To reiterate the point from my previous comment. Your categories MUST be impeccably impartial. If you have 2 categories above Neutral you MUST have 2 categories below it. Otherwise you are open to accusations of bias in how the study is set up.

It isn't an issue of what Oreskes did or anything else. If this study is to have the cut-through impact we hope it will, it must cross every t and dot every i in the objectivity stakes.

You MUST approach this as if you are completely dispassionate and detached from what the outcome will be. The point of the study must NOT be to show that there is a consensus. It must be to find out if there IS a consensus. We may have a view about what the outcome of the study will be, but the methodology MUST be brutally dispassionate.

2012-01-20 22:10:58 - Symmetrical endorsements
John Cook


Consider the "Evidence for AGW" category as synonymous with "Explicit Endorsement" - for all intents and purposes, they're the same thing and we will collapse the two into one category for the analysis. I just figured, while we're looking at 12,000 papers, might as well tag the especially cool papers, perhaps for a future analysis.

This then boils down to one unresolved question - how do we handle Possible Rejections. Are they "Implicit Rejections" and if so, what does that mean? Is a paper that minimises the role of AGW while not completely rejecting it a rejection?

2012-01-21 06:31:34
Dana Nuccitelli

I like John's latest guidelines.  Eventually it's probably going to collapse down to explicit, implicit, and rejections, but we can also say some useful things about the sub-categories.

I would vote to keep 'possible rejections' for now, to leave them open for future discussion.  Hopefully our rejections will match up with those in the Jim/John paper, but if there's a question about a possible rejection, we should use the same process and keep it flagged for a final discussion.

I still suggest we add a category for papers that admit AGW is happening, but that the anthropogenic effect is less than 50%, or less than the IPCC range, or less than the consensus, or some similar statement.  Basically a paper that explicitly says AGW is a minimal effect.  I would tentatively classify these as neutral, with the subset being "minimizing endorsement" or something similar.

2012-01-21 14:04:52Here are the papers currently listed as Rejections or Possible Rejections
John Cook


It might help at this point to see papers that we've categorised as either Rejections or Possible Rejections - 

Two points of note. Mouse over the paper title and you'll see the abstract - this is a feature we will include when we publish our results online.

I've also included the Notes where Jim and I debate the various papers, whether they should be included as rejections or not. So the internal debates we've had should give some idea of the kind of issues we've grappled with. Looking at the types of Possible Rejections we considered might help us devise a more formal definition other than the pornography definition (I know it when I see it).

Links removed, no peeking, good point Ari!

2012-01-21 19:20:38
Ari Jokimäki


You want us to classify these papers but give results beforehand? That might mess up our objectivity, if we see that a certain paper has been classified as a rejection by you. The classification should be done with no information about how others have classified the papers, to ensure decisions are unaffected.

2012-01-21 19:32:17
Kevin C



Another methodological point. I've got a colleague who has done an analogous study - getting experts to classify images as a training set for a machine learning algorithm.

You need to store some extra data:

 - The person who did the classification in each case, and

 - Some kind of self-evaluation of the expertise of each person doing classifications.

The first allows you to do a consistency check - for every classifier, calculate how well they agree with others classifying the same paper. You'll probably pick up a few people who have misunderstood the categories. Comparing this score against expertise (even self-evaluated) tells you how expertise affects accuracy of classification.

Expertise evaluation questions can be general or specific: specific examples would be about formal qualifications, number of primary research papers read, etc.
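Kevin's consistency check can be made concrete. Here's a minimal sketch, assuming ratings are stored as (rater, paper_id, category) tuples - the function name and data shape are illustrative assumptions, not the project's actual database layout:

```python
from collections import defaultdict

def rater_agreement(ratings):
    """For each rater, the fraction of their pairwise comparisons
    (against other raters of the same paper) that agree.
    `ratings` is a list of (rater, paper_id, category) tuples.
    Illustrative sketch only; names and data shape are assumptions."""
    by_paper = defaultdict(list)
    for rater, paper, category in ratings:
        by_paper[paper].append((rater, category))

    matches = defaultdict(int)       # agreeing comparisons per rater
    comparisons = defaultdict(int)   # total comparisons per rater
    for entries in by_paper.values():
        for i, (rater_a, cat_a) in enumerate(entries):
            for rater_b, cat_b in entries[i + 1:]:
                comparisons[rater_a] += 1
                comparisons[rater_b] += 1
                if cat_a == cat_b:
                    matches[rater_a] += 1
                    matches[rater_b] += 1

    return {r: matches[r] / comparisons[r] for r in comparisons}
```

A rater whose score is far below the group average is a candidate for having misunderstood the categories, exactly as Kevin describes; the scores can then be plotted against the self-evaluated expertise answers.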

2012-01-21 20:50:28Methodology on who rates what
John Cook


The database will record who rates what so there is plenty of opportunity to examine that data.

We will get into more detail on methodology in another upcoming thread but was hoping to finish defining the categorisations first. 

2012-01-22 10:44:15Don't forget the applied sciences.


When an engineer applies for a patent, or when a scientist speaks of a discovery which does, or which could, lead to a new technological advance, and it is explicitly stated in the patent application or paper that this new technology can help address global warming / GHG issues, then I would say that this adds to the scientific consensus.

It is not just scientists working directly in the climate-related observational sciences whose opinions matter.  If it can be shown that more and more people working in applied science are aiming specifically to help mitigate climate change / GHG emissions etc., then - imho - that is the strongest possible endorsement of the climate consensus since it implies financial investment in the future based on current knowledge.


my 2 cents, but, as an engineer I may be biased. ;-)



By way of example, here is but one of literally many millions of published papers which are not about global warming per se but which implicitly or specifically accept its validity.  In the case of the specific paper below, global warming is mentioned right at the start.

Development of a solar-powered passive ejector cooling system

V.M. Nguyen, S.B. Riffat, P.S. Doherty.
Institute of Building Technology, School of the Built Environment, Nottingham University.


pdf here:

2012-01-22 14:48:50Additional thoughts on possible rejections
John Cook


Have been thinking over the last day on how to shoehorn my existing classifications of Possible Rejections and Rejections into our new Endorsement Level system. I finally realised that the two systems are two separate things - I was trying to fit a square peg into a round hole.

The possible rejections were in essence a temporary state on the way to the final destination. The process Jim, Dana and I took was to identify possible papers that might reject AGW. As we scrutinised those, we gradually relegated some to the definite Rejection category. The rest that weren't Rejection were then, in effect, Neutral papers. So Possible Rejection was not a final category but just a temporary state we put papers into on their way to either Rejection or Neutral.

So forget Possible Rejection. I suggest we rate papers as falling into one of three categories (five states, once you count the explicit/implicit subdivision):

  1. Rejection (explicit or implicit)
  2. Neutral
  3. Endorsement (explicit or implicit)

So for example, all the Rejection papers identified by Jim, Dana and myself would probably subdivide into explicit or implicit rejections. Technically, we wouldn't even have to worry about subdividing into explicit or implicit - but I would like to capture this information as it's interesting and could yield some illuminating stories.

The next question then is how do you define the consensus position on AGW. Naomi Oreskes defined it as "most of global warming is caused by humans". Doran defined it as a "significant contribution" from human activity. The problem is most endorsements don't get as specific as quantifying the proportion of human contribution. They just say "humans are changing climate", "humans are causing global warming", etc. And rejection papers don't say "humans have ZERO impact on climate", they usually just minimise the role of AGW. Where do you draw the line?

Here's a speculative hand-wavy approach - papers that generally say "humans are causing global warming" (without quantification) are an endorsement, while papers that restrict AGW to less than 50% of warming are a rejection. Thoughts, comments?

2012-01-22 16:38:45
Dana Nuccitelli

Hmm the problem there is in the assumption - papers/scientists that endorse AGW don't necessarily believe the AGW contribution is greater than 50%.  We could get into trouble making that assumption.

The less than 50% AGW is still the problematic category.  I rather wish we could just put those into their own category.  Neither endorsement nor rejection, but rather disputing the magnitude of the effect.  Ideally I'd like to have the categories:

  • Explicit endorsement
  • Implicit endorsement
  • Neutral
  • Disputes AGW magnitude
  • Rejection

Is it really a problem to put that fourth group into a separate category?

The problem is that technically the 'consensus' has to be the simple existence of AGW, since the vast majority of papers aren't specific about the magnitude of the human contribution.  If we assume every paper talking about AGW is saying the effect is >50% of the observed warming, we're over-reaching.

But then papers that say humans are contributing, but the contribution is small, are technically endorsements.  So then you get deniers like Scafetta in the endorsement category, and that weakens the whole argument.

So I still think putting them in their own category - neither endorse nor reject - is the best way to go.
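Dana's proposed scheme, and the way it collapses down to the coarse endorse/neutral/reject split while keeping the "disputes magnitude" papers out of both camps, can be sketched like this (the enum and function names are illustrative assumptions, not anything the project has settled on):

```python
from enum import Enum

class Endorsement(Enum):
    """Dana's proposed five-way scheme (names are assumptions)."""
    EXPLICIT_ENDORSEMENT = 1
    IMPLICIT_ENDORSEMENT = 2
    NEUTRAL = 3
    DISPUTES_MAGNITUDE = 4
    REJECTION = 5

def collapse(cat):
    """Collapse to the coarse endorse/neutral/reject split, keeping
    the 'disputes magnitude' papers out of both endorse and reject."""
    if cat in (Endorsement.EXPLICIT_ENDORSEMENT,
               Endorsement.IMPLICIT_ENDORSEMENT):
        return "endorse"
    if cat is Endorsement.REJECTION:
        return "reject"
    return "neutral"
```

The point of the sketch is that the fine-grained categories can always be recorded at rating time and collapsed later, so the "minimizers" question doesn't have to be settled before rating begins.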

2012-01-22 18:09:14
Ari Jokimäki


The number of papers giving the percentage of human contribution is minimal, so I don't think it can be a decider here. Well, ok, it can be a decider for those few papers that actually give the number, but that is quite meaningless in the big picture.

2012-01-22 18:11:09So what are you suggesting Ari?
John Cook


In that survey you did, was your approach just endorse, neutral, reject? What did they endorse/reject? And where did you draw the line on rejection?

2012-01-22 20:01:39Two fields instead of one?


To give you more flexibility, how about setting up not just one field to rate endorsement but two in the database?

The first field could be for "Endorsement, Rejection, Neutral" and the second could have values for the kind of endorsement/rejection: "Explicit, Implicit, Unclear (or to be decided)". If you work with two separate fields you have greater flexibility and you can add more values if needed.

Two fields should also make it easier later to extract the data by either just one of the fields or a combination of the two (all endorsements or all endorsements which are also explicit).

And, it might be best to set up small tables where you assign values (e.g. numbers or abbreviations) to the descriptive terms shown. This makes the system more flexible.
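A rough sketch of this two-field design with small lookup tables, using an in-memory SQLite database - the table and column names are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

# Two separate rating fields plus lookup tables mapping short codes
# to descriptive terms, as suggested. Schema names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE endorsement_level (
    code  TEXT PRIMARY KEY,   -- 'E', 'N', 'R'
    label TEXT NOT NULL       -- 'Endorsement', 'Neutral', 'Rejection'
);
CREATE TABLE endorsement_kind (
    code  TEXT PRIMARY KEY,   -- 'X', 'I', 'U'
    label TEXT NOT NULL       -- 'Explicit', 'Implicit', 'Unclear'
);
CREATE TABLE rating (
    paper_id INTEGER,
    rater    TEXT,
    level    TEXT REFERENCES endorsement_level(code),
    kind     TEXT REFERENCES endorsement_kind(code)
);
""")
conn.executemany("INSERT INTO endorsement_level VALUES (?, ?)",
                 [("E", "Endorsement"), ("N", "Neutral"), ("R", "Rejection")])
conn.executemany("INSERT INTO endorsement_kind VALUES (?, ?)",
                 [("X", "Explicit"), ("I", "Implicit"), ("U", "Unclear")])
conn.execute("INSERT INTO rating VALUES (1, 'rater1', 'E', 'X')")

# Extracting by one field or a combination, as described:
explicit_endorsements = conn.execute(
    "SELECT COUNT(*) FROM rating WHERE level = 'E' AND kind = 'X'"
).fetchone()[0]
```

Because the two fields are independent, queries like "all endorsements" or "all explicit endorsements" are a single WHERE clause, and new kinds can be added to the lookup table without touching the rating rows.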

2012-01-22 20:41:11
Ari Jokimäki


My survey is still ongoing, and will be for quite a while. In fact, I'm thinking of joining the two projects together, so that I would select papers for classification from the journals I'm planning to go through in my own project. My categories are just pro-AGW, neutral, and against AGW. The biggest point of my survey, in my opinion, is that I'm going through climate journals systematically, so there are no biases relating to search words, for example.

What I have noticed while classifying is that there seems to be no escape from introducing a lot of subjectivity into the classifications. Every paper seems to be a genuinely individual case, and it seems that no guiding line will be sufficient for classification. I have no idea where my line for rejection is, other than it's where I feel papers start rejecting the consensus. So I'm not sure what to suggest, but the best approach would seem to be having several classifiers for each paper, so that statistically you get the correct classification. I was actually thinking of introducing my project here at some point in search of more classifiers, but you beat me to it. :)

One other suggestion might be to avoid too complicated a classification system. The simpler the classification system, the less room for subjectivity. For example, fewer categories means fewer borders between categories for people to disagree on. I think the three-category system (pro, neutral, against) might be the simplest there is. You can always increase the complexity in future projects, when we have learned how to do this properly.

2012-01-22 21:51:39Ah, the pornography methodology?
John Cook

Ari, so you're operating under the "You can't define rejection but you know it when you see it" policy :-)

Essentially, we boil it down to endorse/neutral/rejection. The implicit/explicit is background details and not essential to the final result although I'm very keen to capture the info and curious about the results.

Your project sounds interesting and there is some synergy between both projects. Our 12,000 papers come from 2060 different journals. If after rating 12,000 papers we're feeling energetic, we can always group the 2060 journals into climate vs non-climate categories in order to see whether the endorsement % is different in climate journals. That should answer the same question you're seeking to answer.

Baerbel, I'm keen to keep things as simple as possible (but not too simple). We already have two drop downs for each paper, am reluctant to overload our raters with another one.

Dana, this in my mind is the main unresolved issue - "Disputes AGW magnitude" vs "implicit AGW". What makes me uncomfortable about "disputes" is it operates at a different level to all the other categories. All the others are binary (or trinary) - it either endorses or it doesn't. Then you introduce "it kind of endorses" which is messy. Also, many papers might "dispute the magnitude" in the sense that they're trying to quantify the human element but still endorse AGW. So the terminology is problematic. Perhaps "minimizes AGW"?

Generally, I favour the simple approach with a binary endorsement vs rejection, perhaps with rejection being "minimizing the human role in global warming". But what does minimize mean exactly? Because many papers don't provide specific quantification, this is difficult, as Ari has found.

2012-01-22 22:06:50
Ari Jokimäki


Well, yes, I'm taking the learning-by-doing approach because I see no other way. It would be possible to limit the sample to only certain kinds of studies and then define a tailor-made classification scheme for only those papers (for example, the 50% scheme suggested above would apply quite well to the few papers that actually give an estimate of that number).

It remains to be seen whether the answer by journal is the same for your search-word-limited sample and my complete sample. The journal approach I'm taking looks first at the climate journals, i.e. what the work of climate scientists shows. The next stage is to take related journals (meteorology and environmental journals, for example) and see what the work of scientists in closely related fields says. Then move even further from climate science, and so on.

2012-01-22 22:12:33Interesting idea
John Cook

Reminds me of Doran's survey, where consensus grows as expertise in climate grows. I don't think you'll find as strong a signal across journals though. In fact, the signal might go either way. But it will be interesting to see.
2012-01-23 10:48:43More thoughts on Ari's consensus by journal idea
John Cook


As we have 12,000 papers spread over 2060 journals (i.e. an average of only about half a dozen papers per journal), you might not get significant results over the whole 2060 journals. However, if we focus on the top-tier journals that contain a significant number of papers in our sample, some signal might emerge from the noise. I suggest that down the track, I'll punch out the # of papers per journal and we can target the biggest journals, categorise them, then see how the consensus measures across the categories. Depending on how quickly we get through the initial paper rating, we could have an answer on this in a fairly short time.
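Punching out the paper counts per journal is a one-liner with a tally. A minimal sketch, assuming the sample is available as (paper_id, journal) pairs - the function name and data shape are assumptions:

```python
from collections import Counter

def biggest_journals(papers, top_n=20):
    """Given an iterable of (paper_id, journal) pairs, return the
    journals with the most papers in the sample, most frequent first,
    so rating effort can target where a signal might emerge.
    Illustrative sketch; data shape is an assumption."""
    counts = Counter(journal for _, journal in papers)
    return counts.most_common(top_n)
```

With an average of about six papers per journal, most of the 2060 journals will contribute too few papers for a per-journal result, so restricting the climate-vs-non-climate comparison to the head of this list is what gives it any statistical power.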

2012-01-23 15:11:20
Dana Nuccitelli

I suppose we could classify the minimizers as rejections, but personally I'd still strongly prefer to keep them separate as their own category as I said in this comment.

The minimizers would be any paper that makes an explicit statement that the role of AGW is less than most scientists think.  For example, if they quantify it as less than 50%, or say it's less than in the IPCC report, or less than the consensus, or some similar explicit minimizing statement.

I see those as a different category than those that say humans have no role at all, which are our full-on rejections.

2012-01-23 22:26:22
Ari Jokimäki


It's a good start for this journal point of view to use your sample. However, I have some extra angles in mind where your sample doesn't quite fit in:

- percentage of papers taking a stand on AGW (your sample contains only papers that contain the relevant search words - my sample also contains all the truly neutral papers, you know, the ones that are just climate science, not climate change science)

- how a journal's take on AGW evolves through time (your sample might give an idea about this for at least some journals, but I get "exact" numbers)

- are there some journals that are especially pro/against AGW? (it's hard to say how your sample can handle this, as the biases relating to the use of search words are difficult to determine)

(Also on the plus side, eventually I end up with quite a thorough database of climate science papers. It takes a lot of time, though.)

2012-01-24 13:08:34Just curious, Ari
John Cook


If you don't mind me asking (and I understand if you don't answer), roughly how many papers have you rated and how long have you been doing it?

2012-01-24 17:41:48
Ari Jokimäki


I don't mind answering, and I don't understand why you'd understand if I didn't answer. ;) You ask, I answer. I have no need to keep anything secret here.

I haven't had much time to do it - I've been doing it occasionally, a few minutes at a time, for at least a few months. It has been a background project for me. I have gone through very roughly 500+ papers (I don't have all my files here, but I have done 350 Journal of Climate papers, currently going through the year 1991).