[R-sig-ME] Equivalence of (0 + factor|group) and (1|group) + (1|group:factor) random effect specifications in case of compound symmetry

Reinhold Kliegl reinhold.kliegl at gmail.com
Sat Sep 23 13:41:40 CEST 2017


The models are on a continuum of complexity with respect to the
random-effects structure. Specifically:

m1 estimates k variance components and k(k-1)/2 correlation parameters for
the k levels of factor f.

m2 estimates one variance component for the intercept and one variance
component for the k-1 contrasts defined for the k levels of factor f; that
is, it constrains the variances of the k-1 contrasts to the same value. The
correlation parameters are forced to zero.
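
To see why m2 implies compound symmetry: two observations from the same
group always share the (1 | group) effect, and observations from the same
group-by-factor cell additionally share the (1 | group:factor) effect. Here
is a minimal sketch of the implied k x k covariance matrix of the factor
levels within a group, using made-up variance values (not estimates from any
of the models discussed here):

s2_group <- 20   # variance of (1 | group), shared by all observations in a group
s2_cell  <- 10   # variance of (1 | group:factor), shared only within a cell
k <- 3
Sigma_cs <- matrix(s2_group, k, k) + diag(s2_cell, k)
Sigma_cs         # equal variances on the diagonal, equal covariances off it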

The continuum becomes transparent if one re-parameterizes m1 as m1a (see
below), with one variance component for the intercept, k-1 variance
components for the k-1 contrasts of factor f, and k(k-1)/2 correlation
parameters. m1 and m1a have the same number of parameters and identical
deviances.

m1a <- lmer(y ~ factor + (1 + factor | group))

There is an additional model specification on this continuum between m1 and
m2. If the contrasts are converted to numeric covariates, one can force the
correlation parameters to zero while still estimating separate variance
components for the intercept and the k-1 contrasts of factor f. We call this
the zero-correlation-parameter model.

# the two Machine contrasts as numeric covariates (m1 is fit in the full script below)
cB_A <- model.matrix(m1)[,2]
cC_A <- model.matrix(m1)[,3]

m.zcp <- lmer(score ~ 1 + Machine + (1 + cB_A + cC_A  ||  Worker),
 data=Machines, REML=FALSE)
print(summary(m.zcp), corr=FALSE)

Note that m.zcp without the double bar (i.e., m1b) is equivalent to m1 and m1a:

m1b <- lmer(score ~ 1 + Machine + (1 + cB_A + cC_A | Worker),
 data=Machines, REML=FALSE)
print(summary(m1b), corr=FALSE)

Of course, there is also a simpler model than m2, the
varying-intercept-only model:

m.vio <- lmer(score ~ 1 + Machine + (1 | Worker), data=Machines,
REML=FALSE)
print(summary(m.vio), corr=FALSE)

Here is some R code demonstrating all this for the Machines data.

####
library(lme4)
#library(RePsychLing)

data("Machines", package = "MEMSS")

# OP m1
m1 <- lmer(score ~ 1 + Machine + (0 + Machine | Worker), data=Machines,
REML=FALSE)
print(summary(m1), corr=FALSE)

# re-parameterization of m1
m1a <- lmer(score ~ 1 + Machine + (1 + Machine | Worker), data=Machines,
REML=FALSE)
print(summary(m1a), corr=FALSE)

# alternative specification of m1a
cB_A <- model.matrix(m1)[,2]
cC_A <- model.matrix(m1)[,3]

m1b <- lmer(score ~ 1 + Machine + (1 + cB_A + cC_A  | Worker),
 data=Machines, REML=FALSE)
print(summary(m1b), corr=FALSE)

anova(m1, m1a, m1b)

# zero-correlation parameter LMM
m.zcp <- lmer(score ~ 1 + Machine + (1 + cB_A + cC_A  || Worker),
 data=Machines, REML=FALSE)
print(summary(m.zcp), corr=FALSE)

# OP m2
m2 <- lmer(score ~ 1 + Machine + (1 | Worker) + (1 | Machine:Worker),
data=Machines, REML=FALSE)
print(summary(m2), corr=FALSE)

# varying-intercept-only LMM
m.vio <- lmer(score ~ 1 + Machine + (1 | Worker), data=Machines, REML=FALSE)
print(summary(m.vio), corr=FALSE)

anova(m1, m.zcp, m2, m.vio)

sessionInfo()
####
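
If you want to check the compound-symmetry claim numerically, here is a
small sketch (assuming the script above has been run) comparing the
unstructured within-Worker covariance matrix estimated by m1 with the
compound-symmetric one implied by m2:

vc2 <- as.data.frame(VarCorr(m2))
s2_cell   <- vc2$vcov[vc2$grp == "Machine:Worker"]  # variance of (1 | Machine:Worker)
s2_worker <- vc2$vcov[vc2$grp == "Worker"]          # variance of (1 | Worker)
Sigma_m2  <- matrix(s2_worker, 3, 3) + diag(s2_cell, 3)
Sigma_m2                 # constant diagonal, constant off-diagonal
VarCorr(m1)[["Worker"]]  # unstructured 3 x 3 covariance from m1, for comparison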

You may also want to look at this RPub:
http://www.rpubs.com/Reinhold/22193

On Sat, Sep 23, 2017 at 11:43 AM, Maarten Jung <Maarten.Jung at mailbox.tu-dresden.de> wrote:

> Hello everyone,
>
> I have a question regarding the equivalence of the following models:
>
> m1 <- lmer(y ~ factor + (0 + factor|group))
> m2 <- lmer(y ~ factor + (1|group) + (1|group:factor))
>
> Douglas Bates states (slide 91 in this presentation [1])  that these models
> are equivalent in case of compound symmetry.
>
> 1. I realized that I don't really understand the random slope by factor
> model (m1) and especially why it reduces to m2 given compound symmetry.
> Also, why is there no random intercept in m1?
> Can anyone explain the difference between the models and how m1 reduces to
> m2 in an intuitive way?
>
> 2. If m1 is a special case of m2 – this could be an interesting option for
> model reduction but I’ve never seen something like m2 in papers. Instead
> they suggest dropping the random slope and thus the interaction completely
> (e.g. Matuschek et al. 2017 [3]).
> What do you think about it?
>
> Please note that I asked the question on Stack Exchange [2] but some
> consider it as off-topic. So, I hope one of you can help me out.
>
>
> Best regards,
> Maarten
>
> [1] http://www.stat.wisc.edu/~bates/UseR2008/WorkshopD.pdf
> [2] https://stats.stackexchange.com/q/304374/136579
> [3] https://doi.org/10.1016/j.jml.2017.01.001
>
