[R-sig-ME] Removing random intercepts before random slopes

Maarten Jung maarten.jung at mailbox.tu-dresden.de
Sat Sep 1 15:49:58 CEST 2018


> thanks for your answer, makes sense to me. I think removing the random intercepts should mostly inflate the residual error and thus
> increase the SEs for the fixed effects. Is this correct?

FWIW, this quick test with the Machines data seems to support my speculation:

data("Machines", package = "MEMSS")
d <- Machines
xtabs(~ Worker + Machine, d)  # balanced

mm <- model.matrix(~ 1 + Machine, d)  # treatment coding, Machine A as baseline
c1 <- mm[, 2]  # dummy for Machine B
c2 <- mm[, 3]  # dummy for Machine C

summary(lmerTest::lmer(score ~ 1 + c1 + c2 + (1 + c1 + c2 | Worker), d))
# Fixed effects:
#             Estimate Std. Error     df t value Pr(>|t|)
# (Intercept)   52.356      1.681  5.000  31.151  6.4e-07 ***
# c1             7.967      2.421  5.000   3.291 0.021693 *
# c2            13.917      1.540  5.000   9.036 0.000277 ***

summary(lmerTest::lmer(score ~ 1 + c1 + c2 + (0 + c1 + c2 | Worker), d))
### SEs of the slope terms (c1, c2) increased; the intercept SE decreased:
# Fixed effects:
#             Estimate Std. Error      df t value Pr(>|t|)
# (Intercept)  52.3556     0.6242 41.0000  83.880  < 2e-16 ***
# c1            7.9667     3.5833  5.3172   2.223 0.073612 .
# c2           13.9167     1.9111  6.2545   7.282 0.000282 ***

summary(lmerTest::lmer(score ~ 1 + c1 + c2 + (1 + c1 + c2 || Worker), d))
# Fixed effects:
#             Estimate Std. Error     df t value Pr(>|t|)
# (Intercept)   52.356      1.679  5.004  31.188 6.31e-07 ***
# c1             7.967      2.426  5.002   3.284 0.021833 *
# c2            13.917      1.523  5.004   9.137 0.000262 ***

summary(lmerTest::lmer(score ~ 1 + c1 + c2 + (0 + c1 + c2 || Worker), d))
### SEs of the slope terms (c1, c2) increased; the intercept SE decreased:
# Fixed effects:
#             Estimate Std. Error      df t value Pr(>|t|)
# (Intercept)  52.3556     0.6242 41.0000  83.880  < 2e-16 ***
# c1            7.9667     3.5833  5.3172   2.223 0.073612 .
# c2           13.9167     1.9111  6.2545   7.282 0.000282 ***
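To check the speculation directly, one can compare the residual SDs of the two fits: if dropping the random intercepts pushes the between-worker baseline variance into the residual, sigma() should be larger in the intercept-free model. A quick sketch (the model names m_full/m_noint are mine, not from the original post):

```r
library(lmerTest)

data("Machines", package = "MEMSS")
d <- Machines
mm <- model.matrix(~ 1 + Machine, d)  # treatment coding, Machine A as baseline
c1 <- mm[, 2]
c2 <- mm[, 3]

m_full  <- lmer(score ~ 1 + c1 + c2 + (1 + c1 + c2 | Worker), d)
m_noint <- lmer(score ~ 1 + c1 + c2 + (0 + c1 + c2 | Worker), d)

# Residual SD should be larger once the worker baselines are no
# longer absorbed by a random intercept.
sigma(m_full)
sigma(m_noint)
```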

Still, I would be glad to hear any thoughts on this question:

> Why exactly would it be conceptually strange to have random slopes but not random intercepts?
> Is it because intercepts often represent some kind of baseline, and, say, subjects will probably have different baselines (and thus a corresponding variance component estimated as > 0) if their slopes (i.e. effects) vary? Or is there some other statistical reason why most people remove the random slopes first?
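One related observation (my own sketch, not from the thread): with a factor predictor, whether "(0 + ...)" really removes baseline variation depends on the coding. Using the factor itself rather than separate dummies, (0 + Machine | Worker) merely switches to cell-means coding of the random effects and still absorbs between-worker baseline differences, so it should be an equivalent fit to (1 + Machine | Worker):

```r
library(lmerTest)

data("Machines", package = "MEMSS")
d <- Machines

# Cell-means random effects: one random deviation per Machine level.
# This re-parameterizes, rather than removes, the worker baselines.
m_cell <- lmer(score ~ Machine + (0 + Machine | Worker), d)
m_int  <- lmer(score ~ Machine + (1 + Machine | Worker), d)

logLik(m_cell)
logLik(m_int)
```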
