[R-sig-eco] glm-model evaluation

David Hewitt dhewitt37 at gmail.com
Fri May 30 01:51:26 CEST 2008




> I'd add that showing predictive ability is very important if the goal of
> the modeling process is to make predictions (and even if it's not, showing
> predictive ability provides support for the model).  Frank Harrell has
> tools in the Design library for efficient internal validation and
> calibration via the bootstrap (see the 'validate' and 'calibrate'
> functions), but these will not work on a model produced by glm.nb.  However,
> it's easy to code a cross-validation in R, and I believe MASS shows a
> 10-fold cross-validation for the cpus example.
> 
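For what it's worth, a 10-fold cross-validation for a glm.nb fit can indeed be
hand-rolled in a few lines. The sketch below is only an illustration of that
idea: the quine count data from MASS, the model formula, and held-out mean
squared error are stand-ins of my choosing, not anything from the thread.

## Minimal sketch: 10-fold cross-validation of a negative binomial GLM.
## Data, formula, and loss are placeholders -- substitute your own.
library(MASS)

set.seed(1)
dat  <- quine                              # example count data from MASS
k    <- 10
fold <- sample(rep(1:k, length.out = nrow(dat)))

cv.err <- sapply(1:k, function(i) {
  train <- dat[fold != i, ]
  test  <- dat[fold == i, ]
  fit   <- glm.nb(Days ~ Eth + Sex + Age + Lrn, data = train)
  pred  <- predict(fit, newdata = test, type = "response")
  mean((test$Days - pred)^2)               # held-out mean squared error
})

mean(cv.err)                               # cross-validated prediction error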

IIRC, there's a section in Burnham & Anderson (2002) that points out and
demonstrates that AIC model selection is asymptotically equivalent to
leave-one-out cross-validation, a result that goes back to Stone (1977). They
also discuss more involved simulation-based (bootstrap) methods for complex
models.
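To make that concrete (my own toy illustration, not an example from Burnham &
Anderson): candidate models ranked by AIC should, at least in large samples,
be ranked much the same by leave-one-out predictive log-likelihood. Again the
quine data and the two formulas are arbitrary stand-ins.

## Rough sketch: compare AIC with leave-one-out predictive log-likelihood
## for two candidate negative binomial GLMs (higher LOO value is better).
library(MASS)

loo.logLik <- function(formula, data) {
  sum(sapply(seq_len(nrow(data)), function(i) {
    fit <- glm.nb(formula, data = data[-i, ])
    mu  <- predict(fit, newdata = data[i, ], type = "response")
    y   <- model.response(model.frame(formula, data[i, , drop = FALSE]))
    dnbinom(y, mu = mu, size = fit$theta, log = TRUE)
  }))
}

m1 <- Days ~ Eth + Sex + Age + Lrn
m2 <- Days ~ Eth + Age

c(AIC1 = AIC(glm.nb(m1, data = quine)),
  AIC2 = AIC(glm.nb(m2, data = quine)))    # lower AIC preferred
c(loo1 = loo.logLik(m1, quine),
  loo2 = loo.logLik(m2, quine))            # higher LOO log-likelihood preferred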

-----
David Hewitt
Research Fishery Biologist
USGS Klamath Falls Field Station (USA)


