[R-sig-ME] likelihood-ratio tests in conflict with coefficients in maximal random effect model
Emilia Ellsiepen
emilia.ellsiepen at gmail.com
Mon Mar 3 10:47:15 CET 2014
2014-02-28 18:29 GMT+01:00 Ben Bolker <bbolker at gmail.com>:
> On 14-02-28 11:27 AM, Douglas Bates wrote:
>> On Fri, Feb 28, 2014 at 10:04 AM, Emilia Ellsiepen <
>> emilia.ellsiepen at gmail.com> wrote:
>>
>>> Dear list members,
>>>
>>> In analyzing a data set using lmer fits with maximal random-effects
>>> structure and subsequent likelihood-ratio tests (LRTs) following Barr
>>> et al. 2013, I ran into the following problem: In some of the LRTs, it
>>> turned out that the simpler model (only main effects) has a higher
>>> likelihood than the more complicated model (including interaction),
>>> resulting in a reported Chisq of 0. If I simplify the models by taking out the
>>> interactions in the two random effect terms, the LRT for the
>>> interaction has a highly significant result.
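[To make the setup concrete, here is a rough sketch of the comparison described above. The data frame `d` and the variables `rt`, `cond1`, `cond2`, `subj`, and `item` are simulated placeholders, not the actual data from the thread.]

```r
library(lme4)

## simulated stand-in data: 20 subjects x 12 items, 2x2 within design
set.seed(1)
d <- expand.grid(subj = factor(1:20), item = factor(1:12),
                 cond1 = c(-0.5, 0.5), cond2 = c(-0.5, 0.5))
d$rt <- 500 + rnorm(nrow(d), sd = 50)

## maximal random-effects structure in the sense of Barr et al. 2013:
## by-subject and by-item random slopes for both factors and their interaction
m_full <- lmer(rt ~ cond1 * cond2 +
                 (cond1 * cond2 | subj) + (cond1 * cond2 | item), data = d)
m_main <- lmer(rt ~ cond1 + cond2 +
                 (cond1 * cond2 | subj) + (cond1 * cond2 | item), data = d)

## likelihood-ratio test for the fixed-effect interaction;
## anova() refits both models with ML (instead of REML) before comparing
anova(m_main, m_full)
```

[With such a rich random-effects structure and noise-only data, the fits themselves may already trigger convergence or singularity warnings, which is exactly the problem discussed below.]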
>
> [snip]
>
>> It shows that the more complex model has not converged to the optimum
>> parameter values. This can be because the optimizer being used is a bad
>> choice (in recent versions of the lme4 package the default was a
>> Nelder-Mead optimizer that can declare convergence to values that are not
>> the optimal values) or it can be because the model is too complex. We say
>> that such models are over-parameterized.
>>
>> This is why the Barr et al. advice is dangerous. In model selection there
>> are two basic strategies: forward and backward. Forward selection starts
>> with a simple model and adds terms until they are no longer justified.
>> Backward selection starts with the most complex model and drops terms. It
>> is well known that backward selection is problematic when you can't fit the
>> most complex model. Yet Barr et al. say unequivocally that you must use
>> backward selection. The result will be that many researchers, especially
>> in linguistics, will encounter these problems of complex models providing
>> worse fits than simpler models.
>>
>> I wish that Barr et al. had provided software that is guaranteed to
>> fit even the most complex model to a global optimum when they stated their
>> "one size fits all" strategy. They didn't, and those with experience in
>> numerical optimization can tell you why. It is not possible to guarantee
>> convergence to an optimum in over-parameterized models.
>>
>
> [snip]
>
> I would be very interested to know whether the more thorough
> convergence tests that we have implemented in the development version of
> lme4 would correctly report that there are convergence problems with the
> model containing the maximal RE structure ...
>
> Ben Bolker
>
Yes, it does! After installing the development version, I get the
following warning messages:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 12.213 (tol = 0.001)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge: degenerate Hessian with 3 negative eigenvalues
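[In case it helps others who hit the same warnings: one cheap check, sketched below with lme4's built-in sleepstudy data rather than my actual model, is to refit with a different optimizer such as bobyqa and compare log-likelihoods. If the two fits disagree noticeably, at least one of them has stopped short of the optimum.]

```r
library(lme4)

## illustrative only: sleepstudy stands in for the real (maximal) model
fm_nm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
              control = lmerControl(optimizer = "Nelder_Mead"))

## refit the same model with the bobyqa optimizer instead of Nelder-Mead
fm_bq <- update(fm_nm, control = lmerControl(optimizer = "bobyqa"))

## the log-likelihoods should agree closely if both fits reached the optimum
c(logLik(fm_nm), logLik(fm_bq))
```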
Thank you,
Emilia
-----------------------------------------
Emilia Ellsiepen
Institut für Linguistik
Goethe-Universität Frankfurt