[R-sig-ME] New paper by Bates et al: Parsimonious mixed models
singmann at psychologie.uzh.ch
Fri Jun 19 23:45:13 CEST 2015
It seems my first mail did not get through, so I am trying again.
Sorry for the double posting.
Thanks a lot for sending this paper around, I was eagerly awaiting it
and really enjoyed reading it.
Nevertheless, I would like to start a discussion on your response to
Barr et al.'s (2013) suggestion to "keep it maximal". You conclude that
"it is not necessary to aim for maximality when the interest is in a
confirmatory analysis of factorial contrasts" (p. 25).
This conclusion is based on your reanalysis of several data sets showing
that the maximal models for those cases are overparameterized. As a
remedy you suggest a simplification strategy with the goal of reaching a
model that only contains parameters "actually well-supported by the
information in the data" (p. 25). Nevertheless, for all cases discussed,
the conclusions regarding the fixed effects agree between the
overparameterized models and the simplified models.
In other words, there actually does not seem to be a downside to using
maximal models for testing fixed effects. Conversely, the only upside of
the simplified model is that one avoids superfluous variance and
correlation parameters. While the latter is clearly desirable from a
statistical perspective, from the perspective of a researcher whose
goal is to avoid false positives (i.e., anticonservative fixed-effects
estimates) this does not sound overly convincing. Furthermore, your
suggested simplification strategy, albeit clearly principled, is still a
stepwise procedure and must share some of the problems stepwise
procedures usually have. Specifically, I cannot see how it could
guarantee that one does not occasionally exclude a random-effects
component that, once omitted, results in anticonservative estimates.
Based on this, I would perhaps suggest a somewhat different conclusion
from your analyses for those interested in "confirmatory analysis":
1. Test your fixed effects on the maximal model.
2. Test your fixed effects on the optimal model found with the suggested
simplification strategy.
If both results agree, everything is fine.
If not, things get more difficult: either make convincing arguments why
one of the results must be "more correct" (e.g., by analyzing the fixed-
effects estimates across a wider range of random-effects structures),
perhaps collect more data, or ...
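As a concrete illustration, the two-step comparison above might look
like the following in lme4. This is only a minimal sketch: the data
frame `d`, the response `y`, the factor `A`, and the grouping factors
`subj` and `item` are hypothetical, and which terms end up in the
simplified model would of course depend on the actual data.

```r
library(lme4)

## Step 1: the maximal model -- random slopes for A by both
## subjects and items, including the correlation parameters.
m_max <- lmer(y ~ A + (A | subj) + (A | item), data = d)

## Step 2: a hypothetical simplified ("optimal") model, e.g. after
## dropping the correlations (lme4's double-bar syntax) and any
## variance components not supported by the data (here, the
## by-item slope is dropped purely for illustration).
m_opt <- lmer(y ~ A + (A || subj) + (1 | item), data = d)

## Compare the fixed-effect estimates and standard errors from
## both fits; if they agree, the conclusion is robust.
summary(m_max)$coefficients
summary(m_opt)$coefficients
```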
(I have avoided discussing the "hidden complexities" as it goes beyond
the issue of maximal or optimal LMMs.)
I am really interested in any thoughts on this matter, and thanks again
for the paper.
PS: I found two small typos:
- In section 3.3 (reanalysis of Kronmüller and Barr) you introduce the
cognitive load manipulation as "L", whereas the formulas given later as
well as the figures refer to it as "C".
- In the same section, subsection "Dropping non-significant variance
components" (p. 11), you write "compared to the other standard-deviation
estimates (> 64)." Inspection of the results from this reanalysis (e.g.,
in the KB vignette of RePsychLing) shows that there are dropped random
effects with SDs above 64 (e.g., C) but also below 64 (e.g., SP).
On 17.06.2015 at 12:24, Shravan Vasishth wrote:
> Dear all,
> People on this list will be interested in the following paper by Douglas
> Bates et al:
> This relates to discussions about model fitting that sometimes happen on
> this list.
> Shravan Vasishth
> Professor for Psycholinguistics and Neurolinguistics
> Department of Linguistics
> University of Potsdam, Germany