[R] Optimisation and NaN Errors using clm() and clmm()
rhbc at imm.dtu.dk
Sat Apr 20 10:53:21 CEST 2013
On 18 April 2013 18:38, Thomas Foxley <thomasfoxley at aol.com> wrote:
> Thank you very much for your response.
> I don't actually have the models that failed to converge from the first
> (glmulti) part as they were not saved with the confidence set. glmulti
> generates thousands of models so it seems reasonable that a few of these may
> not converge.
> The clmm() model I provided was just an example - not all models have 17
> parameters. There were only one or two that produced errors (the example I
> gave being one of them); perhaps overparameterisation is the root of the problem.
> Regarding incomplete data - there are only 103 (of 314) records where I have
> data for every predictor. The number of observations included will obviously
> vary for different models, models with fewer predictors will include more
> observations. glmulti acts as a wrapper for another function, meaning (in
> this case) NAs are treated as they would be in clm(). Is there a way around
> this (apart from filling in the missing data)? I believe it's possible to
> limit model complexity in the glmulti call - which may or may not increase
> the number of observations - how would this affect interpretation of the results?
Since the likelihood (and hence also AIC-like criteria) depends on the
number of observations, I would make sure that only models with the
same number of observations are compared using model selection
criteria. This means that I would make a data.frame with complete
observations either by just deleting all rows with one or more missing
predictors or by imputing some data points. If one or a couple of
variables are responsible for most of the missing observations, you
could disregard these variables before deleting rows with NAs.
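The approach above can be sketched in a few lines of R. The data frame and variable names here are hypothetical placeholders (not from the original thread); the point is simply to count the NAs each predictor contributes, optionally drop the worst offender, and then take complete cases so every candidate model sees the same observations.

```r
## Hypothetical example data: an ordered response and three predictors,
## two of which contain missing values. Names are illustrative only.
set.seed(1)
dat <- data.frame(
  resp = factor(sample(1:4, 20, replace = TRUE), ordered = TRUE),
  x1   = c(rnorm(15), rep(NA, 5)),   # 5 missing values
  x2   = c(NA, rnorm(19)),           # 1 missing value
  x3   = rnorm(20)                   # complete
)

## Which predictors account for most of the missingness?
colSums(is.na(dat))

## Option 1: drop the worst offender (x1), then keep complete cases
dat_drop <- na.omit(dat[, setdiff(names(dat), "x1")])

## Option 2: complete cases across all predictors
dat_cc <- na.omit(dat)

## Dropping x1 first retains more rows for model comparison
nrow(dat_drop)  # 19 rows
nrow(dat_cc)    # 14 rows
```

Either resulting data frame can then be passed to glmulti/clm(), ensuring all compared models are fitted to identical observations so their AIC values are on the same scale.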
As I said, I am no expert in model averaging or glmulti usage, so
there might be better approaches or other opinions on this.