[R-sig-ME] lme4a, glmer and all that
Douglas Bates
bates at stat.wisc.edu
Thu Mar 4 17:11:40 CET 2010
Two further comments. It is only the results from fitting generalized
linear mixed models with the current lme4 that I have cause to doubt.
The results from linear mixed models do check out.
Here is an example of the inconsistency. In lme4a a model fit to the
cbpp data set provides
> (m1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
+ family = binomial, data = cbpp))
Generalized Linear mixed model fit by maximum likelihood
AIC BIC logLik deviance
115.2 127.4 -51.62 103.2
Random effects:
Groups Name Variance Std.Dev.
herd (Intercept) 0.76381 0.87396
Number of obs: 56, groups: herd, 15
Fixed effects:
Estimate Std. Error t value
(Intercept) -1.4277 0.1449 -9.852
period2 -0.9693 0.2915 -3.326
period3 -1.1064 0.3129 -3.536
period4 -1.5573 0.4131 -3.770
Correlation of Fixed Effects:
(Intr) perid2 perid3
period2 -0.497
period3 -0.463 0.230
period4 -0.351 0.174 0.163
(As I mentioned previously, those standard errors are not
appropriately updated to the parameter estimates and should be
ignored.)
The corresponding results from the current lme4 are
> (m1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
+ family = binomial, data = cbpp))
Generalized linear mixed model fit by the Laplace approximation
Formula: cbind(incidence, size - incidence) ~ period + (1 | herd)
Data: cbpp
AIC BIC logLik deviance
110.1 120.2 -50.05 100.1
Random effects:
Groups Name Variance Std.Dev.
herd (Intercept) 0.4125 0.64226
Number of obs: 56, groups: herd, 15
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.3985 0.2279 -6.137 8.42e-10 ***
period2 -0.9923 0.3054 -3.249 0.001156 **
period3 -1.1287 0.3260 -3.462 0.000537 ***
period4 -1.5804 0.4288 -3.686 0.000228 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) perid2 perid3
period2 -0.351
period3 -0.329 0.267
period4 -0.249 0.202 0.186
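Both fits rest on a Laplace approximation to the integral over the random
effects (Stata's intpoints(1) below is exactly that), so it may help to see
what the approximation actually computes. Here is a base-R sketch comparing
the Laplace approximation of one group's marginal likelihood contribution
against direct numerical integration; the values of y, n, beta0 and sigma are
made up for illustration (merely on the scale of the cbpp fits above, not
taken from them):

```r
## Illustrative (hypothetical) values, roughly on the scale of the fits above
y <- 4; n <- 14; beta0 <- -1.4; sigma <- 0.64

## log of the integrand: binomial likelihood times the random-effect density
logh <- function(b)
    dbinom(y, n, plogis(beta0 + b), log = TRUE) + dnorm(b, 0, sigma, log = TRUE)

## "exact" marginal likelihood contribution by adaptive quadrature
L_exact <- integrate(function(b) exp(logh(b)), -10, 10)$value

## Laplace approximation: Gaussian expansion of logh about its mode
bhat <- optimize(logh, c(-5, 5), maximum = TRUE)$maximum
eps  <- 1e-4                      # numerical second derivative at the mode
d2   <- (logh(bhat + eps) - 2 * logh(bhat) + logh(bhat - eps)) / eps^2
L_laplace <- exp(logh(bhat)) * sqrt(2 * pi / -d2)

c(exact = L_exact, laplace = L_laplace)
```

For a unimodal integrand like this one the two values agree closely. Within
glmer itself one could also refit with a larger nAGQ value, replacing the
Laplace approximation by adaptive Gauss-Hermite quadrature, as a cross-check.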
Fitting this model in Stata provided
. xtmelogit incidence i.period || herd: , binomial(size) intpoints(1)
Refining starting values:
Iteration 0: log likelihood = -93.798792
Iteration 1: log likelihood = -92.108205
Iteration 2: log likelihood = -92.078723
Performing gradient-based optimization:
Iteration 0: log likelihood = -92.078723
Iteration 1: log likelihood = -92.026831
Iteration 2: log likelihood = -92.026282
Iteration 3: log likelihood = -92.026282
Mixed-effects logistic regression Number of obs = 56
Binomial variable: size
Group variable: herd Number of groups = 15
Obs per group: min = 1
avg = 3.7
max = 4
Integration points = 1 Wald chi2(3) = 24.95
Log likelihood = -92.026282 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
incidence | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
period |
2 | -.9923328 .3066425 -3.24 0.001 -1.593341 -.3913245
3 | -1.128672 .3266379 -3.46 0.001 -1.768871 -.4884737
4 | -1.580314 .4274366 -3.70 0.000 -2.418074 -.7425537
|
_cons | -1.398532 .232472 -6.02 0.000 -1.854169 -.9428953
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
herd: Identity |
sd(_cons) | .6422615 .1785622 .3724431 1.107551
------------------------------------------------------------------------------
LR test vs. logistic regression: chibar2(01) = 14.01 Prob>=chibar2 = 0.0001
Note: log-likelihood calculations are based on the Laplacian approximation.
.
.
end of do-file
which, I now see, are more consistent with the results from lme4, not
lme4a. I misunderstood the message I received regarding the Stata
results.
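One loose end: the lme4 log-likelihood (-50.05) and the Stata log likelihood
(-92.03) disagree by about 42 even though the parameter estimates agree to
several digits, so the difference must be a constant that does not depend on
the parameters. My guess (an assumption, not something verified here) is the
binomial normalizing constant, which one package includes and the other
drops; it can be computed from the data alone:

```r
## Hedged check: if lme4's logLik includes sum(log choose(size, incidence))
## and Stata's xtmelogit omits it, this constant should be close to
## -50.05 - (-92.03) = 41.98.
library(lme4)                       # only for the cbpp data set
const <- sum(lchoose(cbpp$size, cbpp$incidence))
const                               # compare with 41.98
-50.05 - const                      # should then be near Stata's -92.03
```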
OK, this might have all been a red herring. I'll look into it some more.
At one time on Saturday Night Live, Gilda Radner played a citizen
commentator on the Weekend Update news segment who was confused about the
issue and, after a long harangue, would realize she had gotten it all
wrong. She would end by saying "Never mind". That might be the case here too.