[R-sig-ME] Suggestions on how to correct a misapprehension?

Hein van Lieverloo
Tue Dec 13 18:49:30 CET 2022


Hi Douglas,

First: 😊on the form of your 
Second: thanks for the details and support for your quest! 
Third: anyone can get a login on Wikipedia; I have made some changes myself (not on the page you refer to 😊). This also explains why Wikipedia is 'less than perfect' and should never be used as a sole source.

Best,

Hein

-----Original Message-----
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> On Behalf Of Douglas Bates
Sent: Tuesday, 13 December 2022 18:40
To: R Mixed Models <r-sig-mixed-models at r-project.org>
Subject: [R-sig-ME] Suggestions on how to correct a misapprehension?

So my family is having to live through a "Someone is wrong on the internet" (https://xkcd.com/386/) moment. In the past couple of days I have twice encountered the same mistaken characterization of how the parameter estimates in lmer and in the MixedModels.jl package are evaluated.

As we documented in our 2015 paper (http://dx.doi.org/10.18637/jss.v067.i01), in lme4 the REML or ML estimates of the parameters of a linear mixed-effects model are evaluated by constrained optimization of a profiled log-likelihood or profiled log-restricted-likelihood.  The parameters being optimized directly are the elements of the relative covariance factors.  The profiling involves solving a penalized least squares (PLS) problem.  This PLS representation, together with the use of sparse matrices, is what allows models to be fit with random effects associated with crossed or partially crossed grouping factors, such as "subject" and "item".  For many users this capability is one of the big selling points of lme4.
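To make the approach above concrete, here is a small numpy sketch of the profiled-deviance idea for a toy random-intercept model with a single scalar relative covariance parameter theta. All names and the simulated data are illustrative; this is a dense, minimal caricature of the ML case, not lme4's actual sparse, multi-parameter implementation. For each theta, beta and sigma^2 are profiled out by solving a penalized least squares problem, leaving a one-dimensional constrained (theta >= 0) optimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Illustrative data: random-intercept model, 6 groups of 5 observations
n_groups, per = 6, 5
n = n_groups * per
group = np.repeat(np.arange(n_groups), per)
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # fixed-effects design
Z = np.zeros((n, n_groups))
Z[np.arange(n), group] = 1.0                                # random-effects design
y = X @ np.array([1.0, 2.0]) + Z @ rng.normal(scale=0.8, size=n_groups) \
    + rng.normal(scale=0.5, size=n)

def profiled_deviance(theta, X, Z, y):
    """ML deviance with beta and sigma^2 profiled out via penalized
    least squares, for a scalar relative covariance factor theta."""
    n, q = Z.shape
    p = X.shape[1]
    ZL = Z * theta                          # Z Lambda_theta (Lambda = theta * I)
    # Jointly solve the PLS problem: minimize ||y - ZL u - X beta||^2 + ||u||^2
    A = np.hstack([ZL, X])
    P = np.zeros((q + p, q + p))
    P[:q, :q] = np.eye(q)                   # penalty applies to u only
    coef = np.linalg.solve(A.T @ A + P, A.T @ y)
    u, beta = coef[:q], coef[q:]
    r2 = np.sum((y - A @ coef) ** 2) + np.sum(u ** 2)       # penalized RSS
    # log|Lambda' Z' Z Lambda + I| via its Cholesky factor L
    L = np.linalg.cholesky(ZL.T @ ZL + np.eye(q))
    dev = 2.0 * np.sum(np.log(np.diag(L))) + n * (1.0 + np.log(2 * np.pi * r2 / n))
    return dev, beta, r2 / n                # deviance, beta-hat, sigma^2-hat

# Constrained optimization over the single remaining parameter
res = minimize_scalar(lambda t: profiled_deviance(t, X, Z, y)[0],
                      bounds=(0.0, 10.0), method="bounded")
theta_hat = res.x
```

The point of the representation is that the high-dimensional (u, beta) solve is a linear-algebra problem done exactly at each candidate theta, so the nonlinear optimizer only ever sees the low-dimensional covariance parameters; in lme4 the same solve is performed with a sparse Cholesky factorization, which is what makes crossed random effects feasible.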

In our paper we explain in great detail why this approach is, in our opinion, superior to earlier approaches.  And if someone doesn't believe us, both lme4 and MixedModels.jl are Open Source projects so anyone who wants to do so can just go read the code to find out what actually is done.

So it came as a surprise, when reading the Wikipedia entry on mixed models (https://en.wikipedia.org/wiki/Mixed_model), to learn that lme4 and MixedModels.jl supposedly use an EM algorithm that Mary Lindstrom and I described in 1988 (https://doi.org/10.1080%2F01621459.1988.10478693).  It is possible that in the early days of lme4 there was such an implementation, but not in the last 15 years, and there has definitely never been such an implementation in MixedModels.jl.  I noticed that the Python package statsmodels is described, both in the Wikipedia article and in its own documentation (https://www.statsmodels.org/stable/mixed_linear.html), as using that EM algorithm.  I didn't verify this in the code, because reading code based on numpy and scipy causes me to start ranting and raving to the extent that family members need to take away my laptop and put me in a quiet room with the window shades drawn until I promise to behave myself.

Anyway, the Python statsmodels documentation claims that lme4 uses this method, which it does not.

I have never gone through the process of proposing an edit to a Wikipedia article.  As I understand it, I would need to create a login, etc.  Would anyone who does have such a login be willing to propose an edit and save me the steps?


_______________________________________________
R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
