[R-sig-eco] glm-model evaluation

Ben Bolker bolker at zoology.ufl.edu
Thu May 29 19:47:31 CEST 2008


  ICtab() and friends in bbmle (on CRAN) will do this, although these 
capabilities aren't tremendously well tested.  I'd be interested in 
feedback.
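
  For example (a minimal sketch; the fitted models m1 and m2 and the
data frame dat are placeholders, nothing specific to this thread):

  library(MASS)   ## glm.nb
  library(bbmle)  ## AICtab, AICctab

  m1 <- glm.nb(y ~ x1, data = dat)
  m2 <- glm.nb(y ~ x1 + x2, data = dat)

  AICtab(m1, m2, weights = TRUE)                    ## AIC, delta-AIC, weights
  AICctab(m1, m2, nobs = nrow(dat), weights = TRUE) ## small-sample (AICc) version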

   Ben Bolker


Brianne Addison wrote:
> Manuel,
> 
> If you are looking for a package or command in R that will produce AIC
> tables for you, I haven't found one.  Once I produce my AIC scores I
> compute the rest of my table values (usually AICc scores, delta
> values, weights, and parameter weights from model averaging) by hand
> in R, using the formulas in Burnham & Anderson.  Maybe someone else
> has a better way.  If so, I'd love to know it.  Good luck!
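> 
> For example, by hand (a sketch; the vector aic, the parameter
> counts k, and the sample size n are placeholders for your own
> values):
> 
>   aicc  <- aic + 2 * k * (k + 1) / (n - k - 1)  ## small-sample AICc
>   delta <- aicc - min(aicc)                     ## delta values
>   w     <- exp(-delta/2) / sum(exp(-delta/2))   ## Akaike weights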
> 
> BriAnne
> 
> 2008/5/29 Ben Bolker <bolker at ufl.edu>:
> Manuel Spínola wrote:
> | Dear list members,
> |
> | I am fitting negative binomial models with the glm.nb function (MASS
> | package).
> | I ran several models and did model selection using AIC.
> | What is a good way to evaluate how good the selected model is
> | (lowest AIC and considerable Akaike weight)?
> | Are model diagnostics a good approach?
> | Thank you very much in advance.
> |
> | Best,
> |
> | Manuel Spínola
> |
> 
> ~   Manuel,
> 
> ~  I'm not absolutely sure what your question is.
> 
> ~  If you're talking about evaluating the relative merit of
> the selected model, it's a question of delta-AIC (or delta-AICc);
> follow the usual rules of thumb -- a difference of less than 2
> means the models are approximately equivalent, more than 6 means
> one is a lot better, and more than 10 means the better model is
> so much better that you can probably discard the worse ones.
> (See Shane Richards' nice papers on the topic.)
> 
> ~  If you have several models within delta-AIC of 10 (or 6) of each
> other, Burnham and Anderson would say you should really be
> averaging model predictions etc. rather than selecting a single
> best model.
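> 
> ~  For instance, a minimal by-hand sketch of averaging predictions
> across two fitted models (m1 and m2 here are placeholders, and
> averaging on the response scale is just one choice):
> 
>   aic   <- c(AIC(m1), AIC(m2))
>   delta <- aic - min(aic)
>   w     <- exp(-delta/2) / sum(exp(-delta/2))  ## Akaike weights
>   ## model-averaged predicted means
>   pred  <- w[1] * predict(m1, type = "response") +
>            w[2] * predict(m2, type = "response")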
> 
> ~  If you're talking about a global goodness-of-fit test, then the
> answer's a little bit different.  You should do the global GOF
> evaluation on the most complex model, not on a less complex model
> that was selected for having a better AIC.  The standard recipes
> for GOF (checking residual deviance etc.) don't work here, because
> the negative binomial soaks up any overdispersion -- those recipes
> are geared toward Poisson/binomial data with fixed scale parameters.
> You should do the "usual" graphical diagnostic checking on the
> most complex model: make sure that relationships are linear on
> the scale of the linear predictor, that scaled variances are
> homogeneous, that distributions within groups follow the expected
> distribution, that there are no gross outliers or points with
> large leverage, and so on; plot(model) will show you many of
> these diagnostics.
> However, there isn't a simple way to get a p-value for the goodness
> of fit of the global model in this case.  If this is really
> important, you can pick a summary statistic, calculate it for your
> fitted model, then simulate 'data' from the fitted model many times,
> calculate the summary statistic for each simulated data set (these
> represent the null hypothesis that the data really do come from the
> fitted model), and see where your observed statistic falls in that
> distribution.
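> 
> ~  A rough sketch of that simulation approach (the model formula,
> data, and choice of summary statistic are all placeholders):
> 
>   library(MASS)
>   fit <- glm.nb(y ~ x1 + x2, data = dat)  ## most complex model
> 
>   ## summary statistic: sum of squared Pearson residuals
>   stat <- function(m) sum(residuals(m, type = "pearson")^2)
> 
>   obs  <- stat(fit)
>   sims <- replicate(1000, {
>     simdat   <- dat
>     simdat$y <- rnbinom(nrow(dat), mu = fitted(fit), size = fit$theta)
>     stat(update(fit, data = simdat))  ## refit to simulated data
>   })
>   mean(sims >= obs)  ## approximate p-value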
> 
> ~    cheers
> ~     Ben Bolker

