[R-sig-ME] testing fixed effects in binomial lmer...again?
Dimitris Rizopoulos
dimitris.rizopoulos at med.kuleuven.be
Tue Jan 8 18:13:47 CET 2008
----- Original Message -----
From: "Douglas Bates" <bates at stat.wisc.edu>
To: "Achaz von Hardenberg" <fauna at pngp.it>
Cc: <r-sig-mixed-models at r-project.org>
Sent: Tuesday, January 08, 2008 3:10 PM
Subject: Re: [R-sig-ME] testing fixed effects in binomial lmer...again?
> On Jan 8, 2008 5:38 AM, Achaz von Hardenberg <fauna at pngp.it> wrote:
>> Dear all,
>
>> I know that this may be an already debated topic, but even searching
>> the R-help and r-sig-mixed-models archives I cannot find a reply to
>> my doubts (but see Ben Bolker's reply to my similar question on
>> R-help).
>
>> I am performing a binomial GLMM analysis using the lmer function in
>> the lme4 package (latest release, just downloaded). I am using the
>> "Laplace method".
>
> Yes, that is the best choice in lmer. (In the development version it
> is, in fact, the only choice.)
>
>> However, I am not sure what I should do to test for the
>> significance of fixed effects in the binomial case: is it correct to
>> test a full model against a model from which I remove the fixed
>> effect I want to test, using the anova(mod1.lmer, mod2.lmer) method,
>> and then rely on the model with the lower AIC (or on the
>> log-likelihood ratio test)?
>
> The change in the log-likelihood between two nested models is, in my
> opinion, the most sensible test statistic for comparing the models.
> However, it is not clear how one should convert this test statistic
> to a p-value. The use of the chi-squared distribution is based on
> asymptotic results and can give an "anti-conservative" p-value
> (i.e. lower than would be obtained through a randomization test or
> via simulation) for small samples. As far as I can see, the
> justification for the use of AIC as a comparison criterion is even
> more vague.
>
> For linear fixed-effects models one can compensate for small samples
> by changing from z-tests to t-tests and from chi-squared tests to F
> tests. The exact theory breaks down for mixed-effects models or for
> generalized linear models and is even more questionable for
> generalized linear mixed models. As Ben Bolker mentioned, I think
> that one way to deal with the hypothesis testing question while
> preserving the integrity of the model is to base inferences on a
> Markov-chain Monte Carlo sample from the (Bayesian) posterior
> distribution of the parameters.
>
> Code for MCMC samples for parameters in GLMMs is not yet fully
> developed (or documented). In the meantime I would use the
> likelihood ratio tests but exercise caution in reporting p-values
> for small-sample cases.
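[Editorial sketch of the MCMC approach described above, for the linear
case only, since mcmcsamp did not handle GLMMs at the time; the model
uses the sleepstudy data shipped with lme4, and the exact behaviour of
mcmcsamp in a given lme4 version is an assumption:

```r
## Sketch only: mcmcsamp applied to a *linear* mixed model, not a GLMM.
## 'sleepstudy' is an example data set shipped with lme4.
library(lme4)
fm <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
samp <- mcmcsamp(fm, n = 1000)  # posterior sample of the parameters
HPDinterval(samp)               # highest posterior density intervals
```

An interval for a fixed effect that excludes zero then plays the role
of a significance test, without appealing to asymptotic chi-squared
theory.]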
What about the bootstrap (parametric or not)? Would it be useful in
this case?
(For instance, something along the following lines:
library(lme4)
form.null <- # formula under null
form.altr <- # formula under alternative
fm1 <- lmer(form.null, family = binomial, data = data)
fm2 <- lmer(form.altr, family = binomial, data = data)
# observed value of the LRT
Tobs <- anova(fm1, fm2)$Chisq[2]
B <- 199
Tvals <- numeric(B)
# 'id' is the grouping variable
unq.ids <- unique(data$id)
for (b in 1:B) {
    dat.new <- # a sample with replacement from the original subjects
    fm1.b <- lmer(form.null, family = binomial, data = dat.new)
    fm2.b <- lmer(form.altr, family = binomial, data = dat.new)
    Tvals[b] <- anova(fm1.b, fm2.b)$Chisq[2]
}
# estimated p-value
(1 + sum(Tvals >= Tobs)) / (B + 1)
If the estimated p-value is near the significance level, 'B' can be
increased accordingly.)
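[A parametric variant of the same idea, as a sketch: simulate response
vectors from the fitted null model and recompute the LRT statistic on
each simulated data set. This assumes simulate() is available for the
fitted model class and that the response column is named 'y'; neither
is shown in the thread, and the formulas and fits are those from the
code above:

```r
## Parametric bootstrap sketch: generate responses under the null
## model 'fm1' and redo the LRT for each simulated response vector.
## Assumptions: simulate() works for this model class, and the
## response variable in 'data' is named 'y'.
ysim <- simulate(fm1, nsim = B)        # responses under the null
for (b in 1:B) {
    data$y <- ysim[[b]]
    fm1.b <- lmer(form.null, family = binomial, data = data)
    fm2.b <- lmer(form.altr, family = binomial, data = data)
    Tvals[b] <- anova(fm1.b, fm2.b)$Chisq[2]
}
(1 + sum(Tvals >= Tobs)) / (B + 1)     # estimated p-value
```

Unlike resampling subjects, this keeps the grouping structure of the
original data intact in every bootstrap replicate.]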
Best,
Dimitris
>> Would you advise me to use the glmmML function instead? (I am not
>> sure where the differences with lmer are.)
>>
>> I thank you in advance for your help!
>>
>> best regards,
>> Achaz von Hardenberg
>>
>> Ben Bolker wrote:
>> >The short answer is that testing fixed effects in GLMMs
>> >is difficult and dangerous. Likelihood ratio tests on fixed
>> >effect differences [which is generally what anova() does]
>> >in a random-effects model are unreliable
>> >(see Pinheiro and Bates 2000). Most of the world does
>> >F tests with various corrections on the denominator
>> >degrees of freedom, but this is contentious (in particular,
>> >Doug Bates, the author of lme4, disagrees). lme4 will
>> >eventually let you use an MCMC sampling method to test
>> >fixed effects but that may or may not be working
>> >in the current version.
>>
>> >I would try this question again on the r-sig-mixed
>> >mailing list.
>>
>> > good luck,
>> > Ben Bolker
>>
>> Dr. Achaz von Hardenberg
>> --------------------------------------------------------------------
>> Centro Studi Fauna Alpina - Alpine Wildlife Research Centre
>> Servizio Sanitario e della Ricerca Scientifica
>> Parco Nazionale Gran Paradiso, Degioz, 11, 11010-Valsavarenche (Ao),
>> Italy
>> --------------------------------------------------------------------
>>
>>
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm