Dear list,

I think my question might pertain more to model selection than to mixed
effects modeling, so please let me know if you think it would be more
appropriate for another list.

I am examining the effect of oak leaftying caterpillars on herbivory
intensity. The response variable is percent leaf skeletonization,
log-transformed (log.Skel).

There are 5 fixed effects:
tree species (Species) - 8 levels
leaftie treatment (Treat) - 4 levels
condensed tannin (Condensed) - continuous
hydrolyzable tannin (Hydrolyzable) - continuous
total phenolics (Total) - continuous

The random effect is individual tree (Tree), each with a unique ID, N = 10
trees per species. Each tree has its own set of values for the three
phenolics, and each tree contains all four levels of the leaftie treatment.
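
One way to verify that structure (just a sketch, assuming the same data
frame and column names, esppskel, Tree, Species and Treat, that appear in
the models below) would be:

> xtabs(~ Tree + Treat, data=esppskel)     # each tree should cover all four levels
> xtabs(~ Species + Treat, data=esppskel)  # counts per species and treatment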

I constructed a set of models with a random intercept for tree and an
uncorrelated random effect of treatment within tree:
> mix2 <- lmer(log.Skel ~ Treat*Condensed*Hydrolyzable
+              + (1|Tree) + (0+Treat|Tree), REML=FALSE, data=esppskel)
> mix3 <- lmer(log.Skel ~ Treat*Condensed*Total
+              + (1|Tree) + (0+Treat|Tree), REML=FALSE, data=esppskel)
> mix5 <- lmer(log.Skel ~ Treat*Species*Condensed
+              + (1|Tree) + (0+Treat|Tree), REML=FALSE, data=esppskel)
> mix6 <- lmer(log.Skel ~ Treat*Species*Condensed*Total
+              + (1|Tree) + (0+Treat|Tree), REML=FALSE, data=esppskel)
> mix7 <- lmer(log.Skel ~ Treat*Species*Condensed*Hydrolyzable
+              + (1|Tree) + (0+Treat|Tree), REML=FALSE, data=esppskel)
and compared them using anova():

      Df    AIC    BIC  logLik  Chisq Chi Df Pr(>Chisq)
mix2  28 1938.4 2072.0 -941.21
mix3  28 1941.0 2074.6 -942.51  0.000      0    1.00000
mix5  76 1977.8 2340.2 -912.87 59.274     48    0.12755
mix6 140 2013.2 2680.9 -866.58 92.578     64    0.01124 *
mix7 140 2041.7 2709.5 -880.86  0.000      0    1.00000
I understand that I cannot compare these models with a likelihood-ratio
test because they are not nested, so I should use AIC instead. It is also
my understanding that models with ΔAIC > 10 are considered clearly
different. Based on the output above, mix6 and mix7, for example, differ by
more than 25 AIC units, so should I ignore that p-value of 1.0 and consider
them different? Also, is there any meaning in the order of the models in
these outputs? Here the models are ordered by ascending AIC, but in a
related study my models came out ordered by descending AIC.
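
As a check on the ordering question, I suppose the AIC values could also be
tabulated directly rather than read off the anova() output; a minimal
sketch using the fitted objects above:

> aics <- AIC(mix2, mix3, mix5, mix6, mix7)
> aics <- aics[order(aics$AIC), ]        # sort explicitly, smallest AIC first
> aics$dAIC <- aics$AIC - min(aics$AIC)  # delta AIC relative to the best model
> aics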

Am I approaching this the right way at all? Thank you in advance for any
pointers.

Regards,

George Wang
