[R-sig-ME] Statistical significance of random-effects (lme4 or others)
Phillip Alday
Mon Sep 7 15:27:33 CEST 2020
Yes, you're spot on -- I oversimplified a bit. :) The deeper issue is
indeed the edge of the parameter space, and the p/2 trick also breaks
down for non-trivial cases, as do many other asymptotic results in mixed
models -- the big one being the denominator degrees of freedom. There is
a big question not just in defining what the DoF are, but also in
whether it's reasonable to use the F distribution at all, given that it
rests on asymptotics.
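For the simple case of testing a single variance component, the boundary correction amounts to halving the usual 1-df chi-square p-value. A minimal numeric sketch (in Python for illustration; the observed LRT value is purely hypothetical):

```python
import math

def chi2_1_sf(x):
    # Survival function of a 1-df chi-square: P(chi^2_1 > x),
    # using chi^2_1 = Z^2 with Z standard normal.
    return math.erfc(math.sqrt(x / 2.0))

lrt_obs = 2.5                       # hypothetical observed LRT statistic
p_naive = chi2_1_sf(lrt_obs)        # naive chi^2_1 reference distribution
p_boundary = 0.5 * p_naive          # 50-50 mixture of chi^2_0 and chi^2_1
print(p_naive, p_boundary)
```

Halving works here because the chi^2_0 component of the mixture puts all its mass at 0 and so never contributes to the tail.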
On 7/9/20 2:29 pm, Emmanuel Curis wrote:
> There is a point I don't understand in your answer:
> On Mon, Sep 07, 2020 at 07:52:21AM +0000, Alday, Phillip wrote:
>> * The p/2 for LRT on the random effects comes from the standard LRT
>> being a two-sided test, but because variances are bounded at zero, you
>> actually need a one-sided test.
> I thought the LRT test was always one-sided, because under the
> null hypothesis that the additional parameters are all unneeded, the
> two models have the same likelihood, hence the ratio should be 1 and
> its log 0; the chi-square can only be positive by nature (which is
> consistent with the likelihood always being higher for a model with
> more parameters), hence the test is by nature one-sided - that is,
> p = P(LRT > lrt_obs) and not p = P(LRT > |lrt_obs|) + P(LRT < -|lrt_obs|).
> Wasn't the p/2 because the asymptotic distribution of the LRT in this
> special case is *not* a 1-df chi-square, since the special case of
> sigma² = 0 is at the boundary of the parameter space and not
> "inside" it? Instead, it is a 50-50 mixture of a 1-df chi-square and
> an almost-surely constant 0 - an asymptotic result that does not hold
> for more complex cases.
> Am I wrong? Or maybe it is just a matter of what is called a 1- or
> 2-sided test?
> Best regards,
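The 50-50 mixture described above can be seen in a toy boundary problem: testing mu = 0 against mu > 0 for N(mu, 1) data, where the constrained MLE is max(0, xbar), so the LRT statistic is exactly 0 whenever the sample mean falls below the boundary. A Python simulation sketch (this normal-mean setup is an illustrative analogue of the sigma² >= 0 constraint, not a mixed model):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_obs = 20000, 50

# Data under H0: X_ij ~ N(0, 1); test H0: mu = 0 vs H1: mu > 0,
# with the boundary constraint mu >= 0 (analogue of sigma^2 >= 0).
x = rng.normal(0.0, 1.0, size=(n_sims, n_obs))
z = np.sqrt(n_obs) * x.mean(axis=1)      # ~ N(0, 1) under H0

# Constrained MLE is max(0, xbar), so the LRT statistic is
# z^2 when z > 0 and exactly 0 otherwise.
lrt = np.where(z > 0.0, z**2, 0.0)

prop_zero = np.mean(lrt == 0.0)          # mass at 0: close to 1/2
tail_05 = np.mean(lrt > 3.8415)          # chi^2_1 critical value at 0.05
print(prop_zero, tail_05)
```

The tail probability comes out near 0.025 rather than 0.05, which is exactly why referring the statistic to a plain chi^2_1 is conservative and halving the p-value fixes it in this simple case.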