Categorising skeptic arguments
2011-04-01 10:59:39
John Cook

john@skepticalscience...
124.185.238.238

There is a whole discussion on how to categorise skeptic arguments on the Planet 3.0 group - I'm not sure we want to get into any of this for now, but I'm saving some of the interesting thoughts for future reference. I like the idea of recording how long ago a skeptic argument was first debunked - but it's a high-maintenance idea!

Bob Grumbine:

I've thought some about taxonomies for the errors, but have never gotten towards anything that I liked. The fundamental problem being that biological structures form a nested hierarchy and, as best I've seen, the errors regarding climate do not.

Some time back, namely graduate school, when I first encountered the Library of Congress system, I was distinctly unhappy. Dewey at least tried to organize the contents of the library. LC merely provides catalogers a means of assigning a number without having to know much, if anything, about the contents of the work being classified. Hence my dissertation did not wind up in the science library, for instance.

I then put some work into my own system. A linear system, like Dewey, I rapidly realized has other significant problems. What I wound up with was to make it a multidimensional classification. I forget details now, but it included things like 'level' (professional research publication on down to early childhood readers), 'scale' (a single person/proton/atom/...), and some others.

For more thorough, and easier in some respects, classification of error, I'd think another multidimensional classification would work if we could figure out a good set of axes. But for practical use, collapsing onto a single number (say the distance from the origin) would be needed.

One dimension I'll suggest is 'how far back was this known to be an error?' In that respect, denying that there is a greenhouse effect gets a very high score, given Fourier 1824/7. Saying that Arctic ice is not declining is rather lower (not really beyond statistical debate until the mid 1990s), but still more major than denying the acceleration of sea level rise due to Greenland and Antarctica (2007).

To that end, by the way, I'll encourage folks to mention some of the earliest papers when they take on errors. I've had people say that if all the references are very recent, they figure it can't be much of an error -- it took until last week to publish the correction.

A different possible axis is 'how much math/science/... do you have to know to understand that this is in error?' It's worse to blow the existence of the greenhouse effect, which takes practically no knowledge to understand, than to misunderstand some details of how exactly it works. The acceleration of Greenland melt is another that takes little background to understand. So it'd be one that rates low for recency, but high for background requirement (meaning there isn't much background required to see the error).
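As a rough sketch of what a scoring along Bob's two suggested axes might look like in code (Python here; the axis names, the 0-1 normalisation and the example values are my own assumptions for illustration, not anything Bob specified):

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class ErrorScore:
    """One skeptic argument scored along two of the suggested axes.
    Both axes are normalised to 0-1 purely for illustration."""
    recency: float     # how far back the point was settled (1.0 = settled very long ago)
    background: float  # how little background is needed to spot the error (1.0 = none needed)

    def collapsed(self) -> float:
        """Collapse to a single number as the distance from the origin,
        as suggested above for practical use."""
        return hypot(self.recency, self.background)

# Invented values, not real assessments.
no_greenhouse_effect = ErrorScore(recency=1.0, background=0.9)      # Fourier 1824/7, easy to grasp
arctic_ice_not_declining = ErrorScore(recency=0.4, background=0.5)  # settled mid 1990s

print(no_greenhouse_effect.collapsed())      # ~1.35
print(arctic_ice_not_declining.collapsed())  # ~0.64
```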

Response from Brian Dupuis:

I would suggest using a categorical system instead. John Quiggin attempted something similar for denialists themselves (http://johnquiggin.com/index.php/archives/2009/05/27/a-taxonomy-of-delusion/; it links to an older, more comprehensive attempt by John Mashey). There's also the semi-famous Denialism Blog's About Denialism article (http://scienceblogs.com/denialism/about.php), which identifies common components in denialist arguments in general, scientific or not. Identifying these may prove useful. However, its resolution isn't high enough for our purposes (i.e. "fake experts" covers everything from Lindzen to Monckton). Specific statements, particularly those about empirically-verifiable information, aren't generally classified in any meaningful way, especially to non-experts.
Perhaps something like this would serve as a starting point for that last one? This is in response to hearing a specific, incorrect statement about a scientific topic.
-Mistake (i.e. Waxman's claim or Bob's example above; the conclusion is reasonably correct but not "accurate" - the kind of mistake an undergrad unfamiliar with the material would make, and that a grader might give partial credit for. Due to the degree of this mistake, specialists are probably more likely to notice it than amateurs, unless it's botched terminology like Waxman's "evaporate".)
-Inaccurate/Wrong (similar to above, but due to the order of magnitude or the nature of the mistake involved, the conclusion itself is wrong or misleading. Example: Plotting all the major temperature records on the same scale, or misstating sea level rise by an order of magnitude. To some extent, this also covers the arithmetic errors originally present in early UAH measurements. Usually, when I find well-meaning but non-scientist greens making misleading statements, they're also in this category. This kind of statement would generally be worthy of a retraction or correction.)
-Distortion (Claiming something says one thing when it in fact says something contradictory - i.e. it can be checked simply by looking at the source and doesn't necessarily require any expertise in the field to verify. Think Monckton or Lomborg's citation tricks here. This category implies deliberate deception on the part of the speaker or whoever informed the speaker, unlike the ones above, since anyone looking at the source sees the opposite conclusion. It's the only one reflecting intent, and it's narrow enough that only blatant attempts will register in it.)
-PRATT (Previously Refuted A Thousand Times, anything that a simple SkS link could handle. Intended for talking points that have already been addressed comprehensively, rather than new statements. People making these remarks could have been convinced by them, or they simply "did not get the memo"; no intent is implied here.)
-Not Even Wrong (logical contradictions, and other statements that cannot actually be evaluated or are patently false at face value even to a lay audience - i.e. even simpler than 'weather is not climate'. If conspiracy theories are included, they're likely to end up here.)
There's a loose spectrum here, but they aren't all along the same dimension, and there's some degree of overlap (so it's possible a statement is both a distortion and wrong; only Not Even Wrong is necessarily exclusive). It clearly needs some refinement, if for no other reason than to remove subjectivity. The model I'm working on isn't a "tree" so much as a network - a scientific conclusion stems from multiple chains of reasoning connecting different observations and earlier conclusions, and it's the conclusion "nodes" and reasoning "connections" that are evaluated rather than the argument as a whole. 
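A quick sketch of how these overlapping categories and the conclusion/reasoning network could be represented (Python; only the category names come from the list above - the class and field names are placeholders I've made up):

```python
from dataclasses import dataclass
from enum import Flag, auto
from typing import List, Optional

class Category(Flag):
    """The categories above. A statement may carry several at once;
    only NOT_EVEN_WRONG is treated as exclusive."""
    MISTAKE = auto()
    INACCURATE_WRONG = auto()
    DISTORTION = auto()
    PRATT = auto()
    NOT_EVEN_WRONG = auto()

@dataclass
class Conclusion:
    """A 'node': a scientific conclusion evaluated on its own."""
    text: str
    categories: Optional[Category] = None  # None = not judged to be in error

@dataclass
class Reasoning:
    """A 'connection': a chain of reasoning linking observations or
    earlier conclusions to a further conclusion."""
    premises: List[Conclusion]
    conclusion: Conclusion
    categories: Optional[Category] = None

def tag(node: Conclusion, cats: Category) -> None:
    """Attach categories, enforcing that Not Even Wrong stands alone."""
    if Category.NOT_EVEN_WRONG in cats and cats != Category.NOT_EVEN_WRONG:
        raise ValueError("Not Even Wrong is exclusive of the other categories")
    node.categories = cats

# Example: a single statement can be both a distortion and wrong.
claim = Conclusion("hypothetical statement that misquotes its source by an order of magnitude")
tag(claim, Category.DISTORTION | Category.INACCURATE_WRONG)
```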
The other assumption I'm making is that this is all taken from the perspective of the lay audience - i.e. the very targets that the politicians are trying to sway. A Mistake is dead wrong for a scientist (my own advisor is fond of saying "there's blood in the water" over minor errors in arithmetic!), but let's face it, most politicians and their audience aren't scientists, and they're being evaluated by voters, not a thesis committee. If they can explain the principles "more or less" right without any serious order-of-magnitude errors, it's still a mistake, but it's not "wrong". (Good example: Think back to how the non-scientific media countered Fox News' "Snowmageddon" stories during the last two winters. Did any of them, particularly on TV, get it entirely right? On the other hand, did they do an effective job of explaining the basic idea without making any egregious errors?)
I'd also like to thank Bob for reminding me of Relativity of Wrong. I've been using a quote from there for ages (including just this morning, by coincidence), and it would make the absolute perfect lead-in or subtitle for an article discussing this very subject:
"When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
Timothy Chase uses a lot of big words :-)

I attempted a phylogenetic analysis of basic informal fallacies to complement a corresponding analysis of the central epistemic norms.  I am fairly certain that wouldn't have helped even if I had completed it.  However, I presume there is a basic, largely hierarchical structure to our knowledge of the physics of global warming -- with cross-referencing links between distant branches.  I even laid out much of the structure in a quasi-linear fashion here some time ago:
Anyway, there is only one reality but many different ways of denying it.  If you want structure I would suggest that reality offers a better means of hanging your hat.  Then organize the fallacies (primarily) around that.
Followup from Timothy:
Another dimension -- related to both yet distinct -- would be the extent to which what is being denied is fundamental. There is the chronological order, where the earlier something became an established part of the consensus, the graver the error. There is the degree to which one must be educated in order to recognize the error, where the earlier in one's education one should be able to recognize it, the greater the error. But there is also the breadth of the principles that must be denied in order to entertain the fallacy.
The broader the principle (e.g., the conservation of energy) the greater the error -- along this axis. Along these lines, most of the fallacies based upon "observation" (e.g., "it's been a cold winter in Europe, therefore global warming isn't happening") would actually fall into the denial of the principles of statistics, and from this perspective at least would seem to be more grave than the denial of principles of physics. Anyway, even if one doesn't actually reduce things to numbers, keeping multiple dimensions in mind would no doubt be helpful in illuminating the severity of the error.
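To make the multi-dimensional view concrete without reducing anything to a single number, here is a small hypothetical record along the lines Timothy describes (the field names and values are invented purely for illustration):

```python
from typing import NamedTuple

class Severity(NamedTuple):
    """Three of the dimensions discussed in the thread, kept separate
    rather than collapsed into one score."""
    years_settled: int     # chronology: how long the point has been established
    education_stage: str   # when in one's education the error should become obvious
    principle_denied: str  # breadth of the principle that must be denied

# Invented illustration: "it's been a cold winter in Europe, therefore
# global warming isn't happening" mainly denies basic statistics.
cold_winter_fallacy = Severity(
    years_settled=100,
    education_stage="secondary school",
    principle_denied="elementary statistics (weather vs climate)",
)
```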
Me personally, I'm quite satisfied with grouping them under "It's not happening", "It's not us", etc.
2011-04-01 16:06:38
Ari Jokimäki

arijmaki@yahoo...
192.100.112.210

This depends on what the categories are for. If they are for making certain arguments easier to find, then for me the most sensible system would be to categorize them the same way climate science is categorized. If an argument is about water vapour feedback, then I would look for it under water vapour feedback. I don't see the point of categorizing by how long an argument has been known to be false; for that to be practical, you would have to memorize when each subject became crystal clear to us.

However, some arguments would be easier to find by location. For example, there are quite a lot of arguments dealing with Greenland (surprisingly many, actually).
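A minimal sketch of the kind of double index Ari describes, assuming a simple in-memory list of arguments tagged by topic and (optionally) location; the argument titles and field names are placeholders:

```python
from collections import defaultdict

# Placeholder entries, each tagged with a climate-science topic and,
# where relevant, a location.
arguments = [
    {"title": "Water vapour argument A", "topic": "water vapour feedback", "location": None},
    {"title": "Greenland argument B", "topic": "ice sheets", "location": "Greenland"},
    {"title": "Greenland argument C", "topic": "paleoclimate", "location": "Greenland"},
]

by_topic = defaultdict(list)
by_location = defaultdict(list)
for arg in arguments:
    by_topic[arg["topic"]].append(arg["title"])
    if arg["location"]:
        by_location[arg["location"]].append(arg["title"])

print(by_topic["water vapour feedback"])  # look up by subject area
print(by_location["Greenland"])           # several arguments cluster here
```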