[R-sig-ME] laplace deviance crossed factors glm

Ben Pelzer b.pelzer at maw.ru.nl
Mon Feb 22 16:25:40 CET 2010


Dear all,

Recently I ran into a counterintuitive result related to the Laplace 
deviance value produced by lmer for a logistic regression model.

For logistic regression models, one of the methods lmer can use to
derive an (approximate) deviance value is called the 'Laplace
approximation'. As explained in the lme4 package's documentation, this
Laplace deviance is equal to the deviance one would obtain using
'adaptive Gauss-Hermite quadrature' (AGQ) with only 1 quadrature point.
With AGQ, however, one can use more quadrature points to increase the
precision of the approximation to the true deviance. Hence the Laplace
deviance is 'worse' than an AGQ deviance that uses, say, five
quadrature points. So far so good.
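To illustrate the distinction, here is a small sketch using simulated data and the current lme4 interface, where glmer() is used for binomial models and nAGQ sets the number of quadrature points (nAGQ > 1 requires a single scalar random effect; the data, effect sizes and grouping structure below are all made up for illustration):

```r
library(lme4)

# Simulated data: one grouping factor g, a covariate x, binary response y.
set.seed(1)
g <- factor(rep(1:30, each = 20))
x <- rnorm(600)
y <- rbinom(600, 1, plogis(0.5 * x + rnorm(30, sd = 1)[g]))

# nAGQ = 1 is the Laplace approximation; nAGQ = 5 uses 5-point adaptive
# Gauss-Hermite quadrature, a more precise deviance approximation.
m_laplace <- glmer(y ~ x + (1 | g), family = binomial, nAGQ = 1)
m_agq     <- glmer(y ~ x + (1 | g), family = binomial, nAGQ = 5)

deviance(m_laplace)
deviance(m_agq)
```

The two deviances will typically be close but not identical, the AGQ value being the better approximation.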

My question is to what extent it is safe to use the Laplace deviance to
compare the fit of two nested GLMMs. Since the Laplace approximation
can be fairly rough, the Laplace deviance difference between two nested
models may not be as nicely chi-square distributed as we would like it
to be. Is there literature on this subject?

This question was inspired by fitting a model in which two random
factors A and B are crossed, with 214 and 40 levels, respectively. For
each (A,B) combination there is exactly 1 observation in my data, so a
random A-by-B interaction cannot be modeled. In total there are
214 * 40 - 1 = 8559 observations; the "minus 1" is because for one
particular (A,B) combination the dependent variable Y is missing.
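The design can be sketched in a few lines of base R (the factor levels here are just placeholders):

```r
# Fully crossed design: every level of A paired with every level of B,
# one observation per cell.
d <- expand.grid(A = factor(1:214), B = factor(1:40))
nrow(d)       # 8560 cells in the full cross

# Drop the single cell whose response Y is missing.
d <- d[-1, ]
nrow(d)       # 8559 observations remain
```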

The nice thing about the Laplace approximation is that it can be
applied to such crossed-factor GLMMs, while AGQ (with more than one
quadrature point) cannot. However, when comparing the deviances of two
nested crossed-factor models, I discovered that the more restricted
model has a smaller (closer to zero) deviance than the full model! This
doesn't make sense, of course. There appear to be no convergence
problems, the models being relatively simple.

The restricted and full models differ as follows. There are two
(independent) groups of observations in the data; the indicator
variables (0/1) gr1 and gr2 show the group to which an observation
belongs. In the full model, the random-effect variances of A and B are
allowed to differ across the two groups, so in total there are 4
variances to be estimated: var(A) for group 1, var(A) for group 2,
var(B) for group 1 and var(B) for group 2. In the restricted model,
both groups have the same random-effect variance for A, but each group
has a different variance for B, so there are three variances to be
estimated: var(A), var(B) for group 1 and var(B) for group 2. In R
syntax:

# full model; deviance = 7095.
M1 <- lmer(Y ~ 1 + (gr1 + 0 | A) + (gr2 + 0 | A) + (gr1 + 0 | B) + (gr2 + 0 | B) + gr1,
           family = binomial(link = "logit"), reml = FALSE)

# restricted model; deviance = 7076.
M2 <- lmer(Y ~ 1 + (gr1 + 0 | A) + (gr2 + 0 | A) + (1 | B) + gr1,
           family = binomial(link = "logit"), reml = FALSE)
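For what it's worth, if both fits had reached their optima, the nested comparison could be done directly with anova(), which reports the deviance (likelihood-ratio) difference and a chi-square p-value; the snippet below assumes the M1 and M2 objects fitted above:

```r
# Likelihood-ratio comparison of the nested fits (here 1 df difference).
# If both optimizations converged to the global optimum, the restricted
# model's deviance can never be below the full model's:
#   deviance(M2) >= deviance(M1)
# A violation of this inequality usually signals that one of the fits
# stopped at a poor local optimum rather than a problem with the test.
anova(M2, M1)
```

Refitting each model from several different starting values would show whether the anomalous deviance ordering is an optimization artifact rather than a property of the Laplace approximation itself.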

Now I'm wondering where this unexpected reduction of the deviance may
come from. Could it be related to the fact that the Laplace deviance
should not be used for comparing nested models? Or am I simply
overlooking something that has nothing to do with the Laplace deviance
at all? Any tip would be greatly appreciated. Kind regards,

Ben
