[R-sig-ME] Removing random intercepts before random slopes
Phillip Alday
phillip@@ld@y @ending from mpi@nl
Wed Aug 29 12:41:10 CEST 2018
Focusing on just the last part of your question:
> And, is there any difference between LMMs with categorical and LMMs
> with continuous predictors regarding this?
Absolutely! Consider the trivial case of a single categorical predictor
with dummy coding and no continuous predictors in a fixed-effects model.
Then ~ 0 + cat.pred and ~ 1 + cat.pred produce equivalent models in some
sense, but in the former each level of the predictor is estimated as an
"absolute" value (a cell mean), while in the latter one level is
absorbed into the intercept and estimated as an "absolute" value, and
the remaining levels are coded as offsets from that value.
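You can see the two codings directly in the design matrices; a minimal sketch with a made-up three-level factor (the object name f is mine):

```r
# Hypothetical three-level factor to illustrate the two parameterizations
f <- factor(c("a", "b", "c"))

# With an intercept: first level is the baseline, others are offset columns
model.matrix(~ 1 + f)   # columns: (Intercept), fb, fc

# Without an intercept: one indicator column per level (cell means coding)
model.matrix(~ 0 + f)   # columns: fa, fb, fc
```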
For a really interesting example, try this:
data(Oats, package = "nlme")
summary(lm(yield ~ 1 + Variety, Oats))
summary(lm(yield ~ 0 + Variety, Oats))
Note that the residual error is identical, but the summary statistics --
R^2, the F statistic -- differ.
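To check concretely that the two parameterizations describe the same fit, you can recover the cell means from the dummy-coded model; a sketch using the same Oats data (the object names m1 and m0 are mine):

```r
data(Oats, package = "nlme")
m1 <- lm(yield ~ 1 + Variety, Oats)  # intercept + offsets
m0 <- lm(yield ~ 0 + Variety, Oats)  # one cell mean per level

# The dummy-coded coefficients recover the cell means:
# intercept = first level; intercept + offset = remaining levels
cellmeans <- c(coef(m1)[1], coef(m1)[1] + coef(m1)[-1])
unname(cellmeans)   # same numbers as coef(m0), different labels
unname(coef(m0))

# Identical fitted values, hence identical residual error
all.equal(fitted(m1), fitted(m0))
```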
Best,
Phillip
On 08/29/2018 11:21 AM, Maarten Jung wrote:
> Dear list,
>
> Does it make sense to remove random intercepts before one removes
> random slopes (regarding the same grouping factor)?
>
> Barr et al. (2013, [1]) suggest that a model "missing within-unit
> random intercepts is preferable to one missing the critical random
> slopes" (p. 276).
> However, I wonder whether this procedure makes sense from a
> conceptual perspective and whether it is reconcilable with the
> principle of marginality.
>
> And, is there any difference between LMMs with categorical and LMMs
> with continuous predictors regarding this?
>
> Best regards,
> Maarten
>
> [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3881361/
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>