[R-sig-ME] New paper by Bates et al: Parsimonious mixed models

Jake Westfall jake987722 at hotmail.com
Sat Jun 20 19:17:53 CEST 2015


Henrik, Shravan, and everyone,

I have been eagerly awaiting this paper, but like Henrik, I was a little puzzled by this draft. I hope the comments that I offer below will be useful to the authors.

The paper assumes "that the experimental hypotheses relate to the fixed effects, not the random-effects structure" (p. 5), which I agree is usually the case, and more importantly is the situation considered by Barr et al. But what is conspicuously missing is a discussion of the actual negative consequences that can arise *in interpreting the fixed effects* when one fits an overparameterized, maximal model (which nevertheless successfully converges and has no obviously wacky parameter estimates). The paper describes a nice, principled procedure for simplifying maximal models, but seems to simply take for granted that this is a useful and worthwhile thing to do--I think that this point is, at the least, NOT self-evident.
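
To make the worry concrete, here is a minimal sketch in lme4 of the kind of fit I have in mind (all data and variable names are hypothetical):

library(lme4)

# Hypothetical design: response 'rt', one within-subject, within-item
# factor 'cond', crossed random factors 'subj' and 'item'.
# The Barr-et-al.-style maximal model:
m_max <- lmer(rt ~ cond + (1 + cond | subj) + (1 + cond | item),
              data = dat)

# Even without convergence warnings, the fit can sit on the boundary
# of the parameter space:
VarCorr(m_max)  # look for variances near zero, correlations near +/-1

Supposing nothing here looks wacky, what concretely goes wrong for the fixed-effect inferences if one simply stops at m_max?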

(Of course, often the maximal model won't converge and one will be pretty much forced to simplify the model--but then no one was ever saying otherwise!)

I am also not sure what to make of the section on "hidden complexities." It's very interesting, but it's hard to see what it has to do with the maximal model vs. parsimonious model issue. As best I can tell, the argument is this: if one takes the maximal philosophy seriously, one is led to consider hopelessly complex models, such that the theoretically maximal model would virtually never be well-supported by data sets of the typical size in cognitive science. Which is all fine I guess, but if it's a response to Barr et al. then it's sort of a straw man, since they obviously never did and never would advocate fitting such complex models as a matter of default routine. And if it's not a response to Barr et al. and the maximal philosophy, then it's not really clear why it's in the paper, as interesting as it is.

For me, the big contribution of Barr et al. is to provide workbench cognitive scientists with a model-selection strategy that (a) will typically provide stringent tests of the fixed effects in the model, (b) reduces the "researcher degrees of freedom" that are kind of inherent in the specification of the random part of the model, and (c) is easy to understand and implement for researchers who, for better or worse (but I guess we know which it is), have little understanding of how to find the "optimal" random effects structure and are unlikely to take the time to learn. Bates et al. do provide an alternative model-selection strategy, but from what I can tell, it seems to do no better or worse at (a) and will usually do worse at (b) and (c). I think it is obvious to most who have a good understanding of LMMs that the "best" random effects structure for almost any particular LMM will usually be something in between the simplest model and the theoretically maximal model. But I really don't see many compelling reasons in the current draft for why we should NOT continue to recommend keeping it maximal as the default strategy for researchers who don't have a great understanding of LMMs, which surely must be the majority.
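
(By "something in between" I mean, for example, the zero-correlation-parameter model, which recent versions of lme4 can express with the double-bar syntax -- again with hypothetical names:

m_zcp <- lmer(rt ~ cond + (1 + cond || subj) + (1 + cond || item),
              data = dat)

with the caveat that || only does what one expects when 'cond' is numeric, e.g. a +/-0.5 contrast, since the double-bar notation does not decompose factor terms.)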

And finally, I have saved the most pressing and important comment for last: The reference to my 2014 paper in the opening paragraph is wrong :) The author list, year, and journal are correct (assuming that's the paper you intended to cite), but the given title is from a different paper of ours published in 2015 in another journal. The correct references can be found on my website.

Jake
> To: r-sig-mixed-models at r-project.org
> From: singmann at psychologie.uzh.ch
> Date: Fri, 19 Jun 2015 23:45:13 +0200
> Subject: Re: [R-sig-ME] New paper by Bates et al: Parsimonious mixed models
> 
> It seems my first mail did not get through so I am trying it again. 
> Sorry for double posting.
> 
> Dear Shravan,
> 
> Thanks a lot for sending this paper around; I was eagerly awaiting it 
> and really enjoyed reading it.
> 
> Nevertheless, I would like to start a discussion on your response to 
> Barr et al.'s (2013) suggestion to "keep it maximal". You conclude that 
> "it is not necessary to aim for maximality when the interest is in a 
> confirmatory analysis of factorial contrasts" (p. 25).
> 
> This conclusion is based on your reanalysis of several data sets showing 
> that the maximal models for those cases are overparameterized. As a 
> remedy you suggest a simplification strategy with the goal of reaching a 
> model that only contains parameters "actually well-supported by the 
> information in the data" (p. 25). Nevertheless, for all cases discussed, 
> the conclusions regarding the fixed effects agree between the 
> overparameterized models and the simplified models.
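> 
> In lme4 terms, my reading of the suggested strategy is roughly the 
> following sketch (all data and variable names are hypothetical; rePCA() 
> is from the RePsychLing package accompanying the paper):
> 
> library(lme4)
> library(RePsychLing)
> 
> # Maximal model, then inspect its random-effects structure by PCA
> m_max <- lmer(rt ~ cond + (1 + cond | subj) + (1 + cond | item),
>               data = dat)
> summary(rePCA(m_max))  # near-zero components suggest overparameterization
> 
> # Drop the correlation parameters, then test by likelihood ratio
> m_zcp <- lmer(rt ~ cond + (1 + cond || subj) + (1 + cond || item),
>               data = dat)
> anova(m_zcp, m_max)    # do the correlation parameters earn their keep?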
> 
> In other words, there actually does not seem to be a downside to using 
> maximal models for testing fixed effects. Conversely, the only upside of 
> the simplified model is that one avoids superfluous variance and 
> correlation parameters. Whereas the latter is clearly desirable from a 
> statistical perspective, from the perspective of a researcher whose goal 
> is to avoid false positives (i.e., anticonservative fixed-effects 
> estimates) this does not sound overly convincing. Furthermore, your 
> suggested simplification strategy, albeit clearly principled, is still a 
> stepwise procedure that must share some of the problems stepwise 
> procedures usually have. Specifically, I cannot see how it could 
> guarantee that one does not occasionally exclude a random effects 
> component that, once omitted, results in anticonservative estimates.
> 
> Based on this, I would perhaps suggest a somewhat different conclusion 
> from your analyses for those interested in "confirmatory analysis" (a 
> short sketch in code follows below):
> 1. Test your fixed effects on the maximal model.
> 2. Test your fixed effects on the optimal model found with the suggested 
> simplification strategy.
> If both results agree, everything is fine.
> If not, things get more difficult: either make a convincing argument why 
> one of the results is "more correct" (e.g., by analyzing the fixed 
> effects estimates across a wider range of random effects structures), 
> perhaps collect more data, or ...
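> 
> A minimal sketch of this comparison (again, all names hypothetical):
> 
> library(lme4)
> 
> # Step 1: the maximal model
> m_max <- lmer(rt ~ cond + (1 + cond | subj) + (1 + cond | item),
>               data = dat)
> # Step 2: the simplified ("optimal") model, e.g.
> m_opt <- lmer(rt ~ cond + (1 + cond || subj) + (1 | item),
>               data = dat)
> 
> # Compare the fixed-effects estimates and t values of the two fits
> coef(summary(m_max))
> coef(summary(m_opt))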
> 
> (I have avoided discussing the "hidden complexities" as it goes beyond 
> the issue of maximal or optimal LMMs.)
> 
> I am really interested in any thoughts on this matter and thanks again 
> for the paper.
> 
> Cheers,
> Henrik
> 
> 
> PS: I found two small typos:
> - In section 3.3 (reanalysis of Kronmüller and Barr) you introduce the 
> cognitive load manipulation as "L" whereas the formulas given later as 
> well as the figures refer to it as "C".
> - In the same section, subsection "Dropping non-significant variance 
> components" (p. 11) you write "compared to the other standard-deviation 
> estimates (> 64)." Inspection of the results from this reanalysis (e.g., 
> in the KB vignette of RePsychLing) shows that there are dropped random 
> effects with SDs above 64 (e.g., C) but also below 64 (e.g., SP).
> 
> 
> On 17.06.2015 at 12:24, Shravan Vasishth wrote:
> > Dear all,
> >
> > People on this list will be interested in the following paper by Douglas
> > Bates et al:
> >
> > http://arxiv.org/abs/1506.04967
> >
> > This relates to discussions about model fitting that sometimes happen on
> > this list.
> >
> > best,
> >
> > --
> > Shravan Vasishth
> > Professor for Psycholinguistics and Neurolinguistics
> > Department of Linguistics
> > University of Potsdam, Germany
> > http://www.ling.uni-potsdam.de/~vasishth
> >
> 