[R-sig-ME] R-sig-mixed-models Digest, Vol 136, Issue 41
Maarten Jung
Maarten.Jung at mailbox.tu-dresden.de
Wed May 2 00:27:06 CEST 2018
Sorry, I forgot that lmer() (unlike lmer_alt() from the afex package)
does not convert factors to numeric covariates when using the
double-bar notation!
The model I was talking about would be:
m_zcp5 <- lmer_alt(angle ~ recipe + (recipe || replicate), cake)
VarCorr(m_zcp5)
 Groups      Name        Std.Dev.
 replicate   (Intercept) 6.2359
 replicate.1 re1.recipe1 1.7034
 replicate.2 re1.recipe2 0.0000
 Residual                5.3775
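As far as I understand, lmer_alt() roughly does something like the
following before expanding the double bar (just my sketch, not the
actual afex code; the re1. names and sum-to-zero contrasts mimic the
output above):
library(lme4)                       # lmer() and the cake data
data(cake, package = "lme4")
X <- model.matrix(~ recipe, cake, contrasts.arg = list(recipe = "contr.sum"))
cake$re1.recipe1 <- X[, "recipe1"]  # numeric contrast columns instead of the factor
cake$re1.recipe2 <- X[, "recipe2"]
m_zcp5_manual <- lmer(angle ~ recipe + (1 | replicate) +
                        (0 + re1.recipe1 | replicate) +
                        (0 + re1.recipe2 | replicate), data = cake)
VarCorr(m_zcp5_manual)              # should mirror the m_zcp5 structure above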
This model seems to differ (and I don't really understand why) from
m_zcp6, which I think is equivalent to your m_zcp4:
m_zcp6 <- lmer_alt(angle ~ recipe + (0 + recipe || replicate), cake)
VarCorr(m_zcp6)
 Groups      Name        Std.Dev.
 replicate   re1.recipeA 5.0429
 replicate.1 re1.recipeB 6.6476
 replicate.2 re1.recipeC 7.1727
 Residual                5.4181
anova(m_zcp6, m_zcp5, refit = FALSE)
Data: data
Models:
m_zcp6: angle ~ recipe + ((0 + re1.recipeA | replicate) + (0 + re1.recipeB |
m_zcp6: replicate) + (0 + re1.recipeC | replicate))
m_zcp5: angle ~ recipe + ((1 | replicate) + (0 + re1.recipe1 | replicate) +
m_zcp5: (0 + re1.recipe2 | replicate))
       Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)
m_zcp6  7 1781.8 1807.0 -883.88   1767.8
m_zcp5  7 1742.0 1767.2 -863.98   1728.0 39.807      0  < 2.2e-16 ***
Do m_zcp5 and Model3b estimate the same random effects in this case?
If not, what is the difference between m_zcp5 and Model3b (apart from
the fact that the variance depends on the recipe in m_zcp5), and which
one is the more complex model?
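For concreteness, this is how I would translate Model3b to the cake
data, using your recipe = machines and replicate = operators mapping
(Model3b_cake is just a name I made up for it):
Model3b_cake <- lmer(angle ~ recipe + (1 | replicate) + (1 | recipe:replicate), cake)
VarCorr(Model3b_cake)
anova(Model3b_cake, m_zcp5, refit = FALSE)  # 6 vs. 7 df; whether they are nested is part of my question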
I would be glad if you could elaborate on this and help me and the
others understand these models.
Cheers,
Maarten
On Tue, May 1, 2018 at 9:53 PM, Rune Haubo <rune.haubo at gmail.com> wrote:
> On 1 May 2018 at 15:45, Maarten Jung <Maarten.Jung at mailbox.tu-dresden.de> wrote:
>> Dear Rune,
>>
>> am I right in thinking of Model3b as a zero-correlation-parameter
>> model (m_zcp) but with the variances of the operators-related effects
>> constrained to equality?
>> Specifically, is the difference between Model3b and m_zcp that m_zcp
>> estimates variance components for each level of the factor 'machines'
>> and Model3b assumes equal variances across the levels of machines and
>> estimates only one variance for all levels?
>>
>> Model3b <- lmer(Y ~ machines + (1 | operators) + (1 | machines:operators))
>> m_zcp <- lmer(Y ~ machines + (machines || operators))
>>
>> Cheers,
>> Maarten
>
> Hmm, well, that only makes partial sense to me. You probably want to compare
>
> Model2 <- lmer(Y ~ machines + (1 | machines:operators))
>
> with m_zcp. These two models have the same number of random effects,
> the difference being that Model2 assumes a single variance for all of
> them, while m_zcp assumes that the vectors of random effects for each
> operator come from a multivariate normal distribution whose dimension
> matches the number of machines.
>
> This is probably easier to think about if we look at a concrete example, say
> using the cake data from lme4. Here recipe=machines and replicate=operators;
> there are 3 levels for recipe and 15 replicates.
>
> First, '(recipe || replicate)' is the same as/expands to
> '(1 | replicate) + (0 + recipe | replicate)', which is just an
> over-parameterized version of '(0 + recipe | replicate)', which in
> turn is a re-parameterization of '(recipe | replicate)'. These all
> represent the same model (all on 10 df, though lmer() is misled and
> thinks that m_zcp has 11 df):
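> As a quick check of the expansion (it only rewrites the formula, no
> model is fitted), lme4's expandDoubleVerts() spells this out:
>
> expandDoubleVerts(angle ~ recipe + (recipe || replicate))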
>
>> library(lmerTest)
>> m_zcp <- lmer(angle ~ recipe + (recipe || replicate), cake)
>> VarCorr(m_zcp)
>  Groups      Name        Std.Dev. Corr
>  replicate   (Intercept) 0.0000
>  replicate.1 recipeA     5.0692
>              recipeB     6.7300   0.951
>              recipeC     7.2107   0.902 0.991
>  Residual                5.3622
>> m_zcp2 <- lmer(angle ~ recipe + (0 + recipe | replicate), cake)
>> VarCorr(m_zcp2)
>  Groups    Name    Std.Dev. Corr
>  replicate recipeA 5.0692
>            recipeB 6.7300   0.951
>            recipeC 7.2107   0.902 0.991
>  Residual          5.3622
>> m_zcp3 <- lmer(angle ~ recipe + (recipe | replicate), cake)
>> anova(m_zcp, m_zcp2, m_zcp3, refit=FALSE)
> Data: cake
> Models:
> m_zcp2: angle ~ recipe + (0 + recipe | replicate)
> m_zcp3: angle ~ recipe + (recipe | replicate)
> m_zcp: angle ~ recipe + (recipe || replicate)
>        Df  AIC    BIC logLik deviance Chisq Chi Df Pr(>Chisq)
> m_zcp2 10 1741 1777.0 -860.5     1721
> m_zcp3 10 1741 1777.0 -860.5     1721     0      0          1
> m_zcp  11 1743 1782.6 -860.5     1721     0      1          1
>
> If we want to enforce a diagonal covariance matrix for the random effects,
> we can use the dummy function:
> m_zcp4 <- lmer(angle ~ recipe +
> (0 + dummy(recipe, "A") | replicate) +
> (0 + dummy(recipe, "B") | replicate) +
> (0 + dummy(recipe, "C") | replicate), data=cake)
> VarCorr(m_zcp4)
>  Groups      Name               Std.Dev.
>  replicate   dummy(recipe, "A") 5.0429
>  replicate.1 dummy(recipe, "B") 6.6476
>  replicate.2 dummy(recipe, "C") 7.1727
>  Residual                       5.4181
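> To see what enforcing the diagonal structure costs, one could compare
> the diagonal model against the full covariance model, 7 vs. 10 df
> (output omitted here):
>
> anova(m_zcp4, m_zcp2, refit = FALSE)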
>
> Now we have something closer to what I think you were thinking of. Here,
>
> Model2 <- lmer(angle ~ recipe + (1 | recipe:replicate), data=cake)
>
> and m_zcp4 estimate the same random effects, but Model2 assumes they
> all have the same variance while m_zcp4 lets the variance depend on
> the recipe; neither model includes correlation parameters.
>
> In this case the difference in variance between recipes is small and
> the random-effect estimates are very similar (as is the test for the
> fixed effect of recipe):
>
> head(matrix(unlist(ranef(Model2)[[1]]), ncol=3))
>            [,1]      [,2]        [,3]
> [1,] 10.444800 13.695174 15.22126404
> [2,]  9.106993 13.397883 13.43752216
> [3,]  5.242219  4.181884  3.47829667
> [4,] -2.487329  1.357626  4.22152245
> [5,]  2.269316  1.654916 -3.21073538
> [6,] -6.054813 -2.953084 -0.08918709
>
> head(ranef(m_zcp4)$replicate)
>   dummy(recipe, "A") dummy(recipe, "B") dummy(recipe, "C")
> 1           9.821502          13.824869        15.58456445
> 2           8.563530          13.524764        13.75824831
> 3           4.929388           4.221487         3.56131649
> 4          -2.338897           1.370483         4.32228155
> 5           2.133894           1.670588        -3.28736906
> 6          -5.693489          -2.981050        -0.09131581
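> The fixed-effect tests can be compared in the same way; with lmerTest
> loaded, anova() on a single fit gives Satterthwaite F tests (output
> omitted here):
>
> anova(Model2)
> anova(m_zcp4)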
>
> Cheers
> Rune