[R-sig-ME] lmer vs glmmPQL
Kingsford Jones
kingsfordjones at gmail.com
Fri Jun 26 22:28:09 CEST 2009
On Fri, Jun 26, 2009 at 12:21 PM, Ben Bolker <bolker at ufl.edu> wrote:
> That's really interesting and kind of scary.
> Do you have any thoughts on why this should be so?
> I know of a few simulation studies (Browne and Draper, Breslow) that
> test PQL and generally find reasonably "significant" bias for binary
> data with large random variance components. I guess I had simply
> assumed that Laplace/AG(H)Q would be better. (There are also some
> theoretical demonstrations (Jiang?) that PQL is asymptotically
> inconsistent, I think ...)
>
> * Are you working in a different regime from previous studies
> (smaller data sets, or some other point)?
> * Does considering RMSE rather than bias give a qualitatively
> different conclusion (i.e., PQL is biased but has lower variance)?
> * ?
>
> Since in a recent paper I recommended Laplace/AGHQ out of principle,
> and Wald tests out of pragmatism, and thought the former recommendation
> was reliable but the latter was not, it's interesting to be having
> my world turned upside down ...
>
> Would welcome opinions & pointers to other studies ...
Hi Ben -- Here's another simulation study showing lower bias but
higher MSE for Laplace compared with PQL:
@article{diaz_comparison_2007,
author = {Diaz, Rafael E.},
title = {Comparison of PQL and Laplace 6 estimates of hierarchical
linear models when comparing groups of small incident rates in cluster
randomised trials},
journal = {Comput. Stat. Data Anal.},
volume = {51},
number = {6},
year = {2007},
issn = {0167-9473},
pages = {2871--2888},
doi = {10.1016/j.csda.2006.10.005},
publisher = {Elsevier Science Publishers B. V.},
address = {Amsterdam, The Netherlands},
abstract = {The variances of the random components in hierarchical
generalised linear models (HGLMs) with binary outcomes have been
reported to have a considerable downward bias when estimated with the
commonly used penalised quasilikelihood (PQL) technique. The more
recently proposed Laplace 6 approximation promises to reduce this
bias. This study compares the performance of these two techniques when
estimating the parameters of a particular HGLM. This comparison is
performed via Monte Carlo simulations in which the difference between
two groups of proportions, modelled after those appearing in many
epidemiological cluster randomised interventions, is tested using
this model. The Laplace 6 approximation does reduce the bias mentioned
above, but at the price of a higher mean square error. The results of
this study suggest that the optimal solution involves using a
combination of these two techniques. This combination is illustrated
by analysing a data set from a real cluster randomised intervention.}
}
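In case anyone wants to poke at this themselves, below is a quick
sketch along the lines of Fabian's setup (my own toy code, not his,
and using glmer's ordinary Laplace and AGQ rather than the Laplace 6
of the paper above; m, ni and nsim are arbitrary choices):

library(MASS)    # glmmPQL (fits via nlme's lme machinery)
library(lme4)    # glmer: Laplace (default) and adaptive GH quadrature

m    <- 10     # number of subjects (arbitrary; vary to taste)
ni   <- 10     # observations per subject
nsim <- 100    # simulation replicates

one.run <- function() {
  id <- factor(rep(seq_len(m), each = ni))
  b  <- rnorm(m, 0, 1)             # b_i ~ N(0, 1), so the true sd is 1
  x  <- runif(m * ni, -1, 1)       # x_ij ~ U[-1, 1]
  y  <- rbinom(m * ni, 1, plogis(x + b[id]))
  d  <- data.frame(y, x, id)
  ## no error handling here -- glmmPQL can occasionally fail to
  ## converge on datasets this small
  pql <- glmmPQL(y ~ x, random = ~ 1 | id, family = binomial,
                 data = d, verbose = FALSE)
  lap <- glmer(y ~ x + (1 | id), family = binomial, data = d)
  agq <- glmer(y ~ x + (1 | id), family = binomial, data = d, nAGQ = 11)
  ## estimated sd of the random intercepts under each method
  ## (nlme::VarCorr called explicitly since lme4's VarCorr is attached)
  c(PQL = as.numeric(nlme::VarCorr(pql)["(Intercept)", "StdDev"]),
    LA  = as.numeric(attr(VarCorr(lap)$id, "stddev")),
    AGQ = as.numeric(attr(VarCorr(agq)$id, "stddev")))
}

res <- replicate(nsim, one.run())   # 3 x nsim matrix of sd estimates
rowMeans(res) - 1                   # bias (true sd = 1)
sqrt(rowMeans((res - 1)^2))         # rmse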
hth,
Kingsford Jones
>
> Ben Bolker
>
>
> @article{browne_comparison_2006,
> title = {A comparison of Bayesian and likelihood-based methods for
> fitting multilevel models},
> volume = {1},
> url = {http://ba.stat.cmu.edu/journal/2006/vol01/issue03/draper2.pdf},
> number = {3},
> journal = {Bayesian Analysis},
> author = {William J. Browne and David Draper},
> year = {2006},
> pages = {473--514}
> }
>
> @incollection{breslow_whither_2004,
> title = {Whither {PQL?}},
> isbn = {0387208623},
> booktitle = {Proceedings of the second Seattle symposium in
> biostatistics: Analysis of correlated data},
> publisher = {Springer},
> author = {N. E. Breslow},
> editor = {Danyu Y. Lin and P. J. Heagerty},
> year = {2004},
> pages = {1--22}
> }
> Fabian Scheipl wrote:
>> Ben Bolker said:
>>> My take would be to pick lmer over glmmPQL every time, provided
>>> it can handle your problem -- in general it should be more accurate.
>>
>> That's what I wanted to demonstrate to my students last week, so I did
>> a small simulation study with a logit-model with random intercepts:
>>
>> logit(P(y_ij = 1)) = x_ij + b_i;
>> b_i ~ N(0, 1);
>> x_ij ~ U[-1, 1];
>> i = 1, ..., m;
>> j = 1, ..., n_i
>>
>> The PDFs with the results are attached (m subjects, ni obs/subject;
>> RPQL is PQL with iterated REML fits on the working observations
>> instead of ML; nAGQ = 11 for AGQ).
>> The results surprised me:
>> - For the estimated standard deviation of the random intercepts, PQL
>> actually has (much) lower rmse for small and medium-sized data sets
>> and bias is about the same for LA, AGQ and PQL for small datasets.
>> - There were no relevant differences in rmse or bias for the estimates
>> of the fixed effects.
>>
>> Differences for Poisson data should be even smaller, since their
>> likelihood is more normal-ish.
>> glmer may still be preferable since it's much faster and more stable
>> than glmmPQL, but PQL's accuracy may be better for smaller datasets.
>>
>> Best,
>> Fabian
>>
>
>
> --
> Ben Bolker
> Associate professor, Biology Dep't, Univ. of Florida
> bolker at ufl.edu / www.zoology.ufl.edu/bolker
> GPG key: www.zoology.ufl.edu/bolker/benbolker-publickey.asc
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>