2011-10-31 12:34:13 How do models test their results?
John Cook


Just got this email:

I have a question from a friend regarding the reliability of climate models. In response to his reading of your website statement, "Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future," he writes: "A proper scientific method would be to estimate and calibrate models using historical data up to 1980, but not using the 1980-2010 data. Then the model would be tested against a part of the data that was not used to create the model -- the 'out of sample' part of 1980-2010. If you test the model against the out-of-sample data and it fails, then you need a new out-of-sample dataset. Because otherwise, if you constantly remake the model and retest it until it fits the 1980-2010 data, then you have effectively brought that data 'in-sample'."

I wonder if you could comment on this in terms of fitting the model to match the data over a period. I would have thought the scientists would have tested the model as he suggests as part of their normal verification.

I'm not sure this process is done - omitting observational data when calibrating your model doesn't sound like the right way to go. So what verification processes do modellers use besides hindcasting to test their results?
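For what it's worth, the out-of-sample procedure the friend describes can be sketched in a few lines. This is only a toy illustration with made-up numbers, not real climate data or a real climate model: a simple linear trend is fitted to synthetic "observations" up to 1980 and then scored only on the held-back 1980-2010 portion.

```python
# Toy sketch of out-of-sample testing: calibrate on pre-1980 data,
# evaluate on 1980-2010 data that played no part in the fit.
# The years, trend, and noise level below are illustrative assumptions.
import random

random.seed(0)

# Synthetic "observations": a linear trend plus noise, 1900-2010.
years = list(range(1900, 2011))
temps = [0.01 * (y - 1900) + random.gauss(0, 0.1) for y in years]

# Calibrate using data up to 1980 only; hold back the rest.
train = [(y, t) for y, t in zip(years, temps) if y < 1980]
test = [(y, t) for y, t in zip(years, temps) if y >= 1980]

def fit_trend(data):
    """Ordinary least-squares slope and intercept."""
    n = len(data)
    mx = sum(y for y, _ in data) / n
    my = sum(t for _, t in data) / n
    slope = (sum((y - mx) * (t - my) for y, t in data)
             / sum((y - mx) ** 2 for y, _ in data))
    return slope, my - slope * mx

slope, intercept = fit_trend(train)

# Score the fitted trend only against the held-out 1980-2010 data.
rmse = (sum((t - (slope * y + intercept)) ** 2 for y, t in test)
        / len(test)) ** 0.5
print(f"out-of-sample RMSE: {rmse:.3f}")
```

The key point of the procedure is in the last step: the 1980-2010 data contribute nothing to the fit, so a low out-of-sample error is evidence the model generalises rather than merely curve-fits - and, as the friend notes, repeatedly tweaking the model against that score would quietly bring the data back "in-sample".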

2011-11-01 13:30:21 comment
Robert Way


I know statistical models do omit observational data in random ways to assess whether the relationship continues to hold. The term for that is cross-validation.

But either way, the observational data shouldn't significantly affect the model, because it is built from the radiative forcings and short-term variability, no matter its relationships with the observations.
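The random-omission check described above can be sketched as a simple k-fold cross-validation on toy data (everything here - the linear relationship, the noise level, the fold count - is an illustrative assumption): leave out each random fifth of the data in turn, refit, and see whether the fitted relationship stays put.

```python
# Toy sketch of cross-validation: randomly partition the observations,
# refit on each subset, and check the relationship is stable.
import random

random.seed(1)

# Synthetic data: y depends linearly on x (slope 2.0) plus noise.
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + random.gauss(0, 0.2) for x in xs]
data = list(zip(xs, ys))

def fit_slope(pairs):
    """Least-squares slope through the origin (toy relationship)."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# 5-fold cross-validation: hold out each fifth in turn, fit on the rest.
random.shuffle(data)
k = 5
fold = len(data) // k
slopes = []
for i in range(k):
    kept = data[:i * fold] + data[(i + 1) * fold:]
    slopes.append(fit_slope(kept))

# If the relationship is robust, the slope barely moves across folds.
spread = max(slopes) - min(slopes)
print(f"slopes: {[round(s, 3) for s in slopes]}, spread: {spread:.3f}")
```

A small spread across folds suggests the fitted relationship isn't an artefact of any particular chunk of the observations, which is the point of omitting data in random ways.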