[R-sig-eco] glm-model evaluation

David Hewitt dhewitt37 at gmail.com
Sat May 31 06:15:50 CEST 2008


We've mostly gotten out of the area where I know enough statistically to
speak with confidence, but I'll risk some lumps anyway...

I always thought that retaining a portion of the data for validation was a
good idea. I asked David Anderson about this personally, and he said he
couldn't see any reason to do that. Using likelihood, he thought the best
approach was to use all the data to determine the best model.

I'm pretty muddy on the difference between selecting a good model with AIC
(which is sometimes described as predictive in nature) and what is meant by
post-hoc validation of predictive ability (aside from testing on another
data set). I've often seen the "leave-one-out" approach used to "validate" a
model. If anyone has a good reference that differentiates the two with an
example, I'd really appreciate it.
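
For concreteness, the kind of contrast I'm trying to understand looks
something like this in R. This is only a rough sketch on made-up data (the
models and numbers don't mean anything): AIC from the full-data fits versus
a leave-one-out CV error from cv.glm() in the boot package.

  ## Rough sketch on simulated data: AIC vs. leave-one-out CV error
  library(boot)                        # for cv.glm()

  set.seed(1)
  dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
  dat$y <- rpois(100, exp(0.5 + 0.8 * dat$x1))     # x2 is pure noise

  m1 <- glm(y ~ x1,      family = poisson, data = dat)
  m2 <- glm(y ~ x1 + x2, family = poisson, data = dat)

  AIC(m1, m2)                          # in-sample criterion (penalized fit)

  ## Leave-one-out CV prediction error (K defaults to n); smaller is better
  cv.glm(dat, m1)$delta[1]
  cv.glm(dat, m2)$delta[1]

As I understand it, the delta values are an explicit estimate of
out-of-sample prediction error, whereas AIC is computed from the full-data
fit, and I can't quite articulate when the two views should or shouldn't
agree.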



> I think that's a different (though not unrelated) issue -- namely,
> model selection.  Asymptotically, AIC is equivalent to leave-one-out
> cross-validation, Mallows' Cp, and some other methods for model
> selection.  However I don't see using a model selection method as
> equivalent to validating the predictive ability of a model.
> 
> As far as how to show predictive ability - I think that's context
> dependent. Along with various quantitative measures, I've found
> plotting to be useful.  For example, for each fold of a k-fold cross
> validation plotting the observed vs predicted in a scatter plot, using
> color to identify an important categorical variable (e.g. sex,
> species, region) and pch to identify another.  Or, if it's
> spatial data, actually mapping the RMSEs of the cross-validations to
> get an idea of where the model is performing well/poorly.
> Conditional plots and parallel coordinate plots can be good tools for
> these types of 'validation' as well. One thing to remember -- if these
> methods are used as part of the model selection process, there should
> be a final hold-out dataset that was never used in any way in making
> modeling decisions.  This is a luxury, but if there's enough data it
> can provide strong evidence for the model's predictive ability.
> 
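
In case it helps anyone following along, here is roughly how I would code
the k-fold picture described above, with a final hold-out kept completely
out of the fitting. The data, variable names (sex, site), and the model are
all made up purely for illustration.

  ## Rough sketch: hold-out split, 5-fold CV on the rest, obs vs. pred plot
  set.seed(2)
  n   <- 250
  dat <- data.frame(x    = rnorm(n),
                    sex  = factor(sample(c("F", "M"),       n, replace = TRUE)),
                    site = factor(sample(c("A", "B", "C"),  n, replace = TRUE)))
  dat$y <- 2 + 1.5 * dat$x + 0.5 * (dat$sex == "M") + rnorm(n)

  ## Set aside a hold-out that plays no part in any modeling decisions
  hold  <- sample(n, size = round(0.2 * n))
  test  <- dat[hold, ]
  train <- dat[-hold, ]

  ## 5-fold cross-validation on the training data
  k    <- 5
  fold <- sample(rep(1:k, length.out = nrow(train)))
  train$pred <- NA
  for (i in 1:k) {
      fit <- glm(y ~ x + sex, data = train[fold != i, ])
      train$pred[fold == i] <- predict(fit, newdata = train[fold == i, ])
  }

  ## Observed vs. out-of-fold predicted: colour = sex, pch = site
  plot(train$pred, train$y, col = train$sex, pch = as.numeric(train$site),
       xlab = "Predicted (out-of-fold)", ylab = "Observed")
  abline(0, 1, lty = 2)

  ## Only after all selection is finished: one look at the hold-out
  final <- glm(y ~ x + sex, data = train)
  sqrt(mean((test$y - predict(final, newdata = test))^2))   # hold-out RMSE

Whether that counts as "validation" in the sense I was asking about is
exactly my question.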


-----
David Hewitt
Research Fishery Biologist
USGS Klamath Falls Field Station (USA)


