# [R-sig-ME] nAGQ > 1 in lme4::glmer gives unexpected likelihood

D. Rizopoulos d.rizopoulos at erasmusmc.nl
Sun Apr 26 08:51:54 CEST 2020

I would say that you can compare a linear model with a linear mixed model using a likelihood ratio test. Under maximum likelihood you integrate the random effects out. Hence, you are testing whether some variance components are zero, i.e., the linear model is nested within the linear mixed model. The technical problem is that the distribution of the statistic will not be the classic chi-squared distribution because for the variance parameters the null hypothesis lies on the boundary of the corresponding parameter space.
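For the common case of testing a single variance component (e.g. one random intercept), the boundary-null distribution is the familiar 50:50 mixture of chi^2_0 and chi^2_1, so the naive chi^2_1 p-value is simply halved. A minimal Python sketch of that correction (the statistic value 3.84 is hypothetical, chosen for illustration):

```python
import math

# Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x/2))
def chi2_1_sf(x):
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical observed likelihood-ratio statistic
lrt = 3.84

# Naive p-value, pretending the null distribution is chi^2_1
p_naive = chi2_1_sf(lrt)

# Boundary-corrected p-value: under H0 the statistic is a 50:50 mixture
# of a point mass at 0 (chi^2_0) and chi^2_1, so the naive tail is halved
p_boundary = 0.5 * p_naive

print(round(p_naive, 4), round(p_boundary, 4))
```

For more than one variance component the mixture weights change, and they are not always easy to work out; simulation of the null distribution is a common fallback.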

Best,
Dimitris

Dimitris Rizopoulos
Professor of Biostatistics
Erasmus University Medical Center
The Netherlands
________________________________
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> on behalf of Rolf Turner <r.turner at auckland.ac.nz>
Sent: Sunday, April 26, 2020 1:23:22 AM
To: Martin Maechler <maechler at stat.math.ethz.ch>
Cc: r-sig-mixed-models at r-project.org <r-sig-mixed-models at r-project.org>
Subject: Re: [R-sig-ME] nAGQ > 1 in lme4::glmer gives unexpected likelihood

On 26/04/20 12:33 am, Martin Maechler wrote:

>
>>     I think they're both 'correct' in the sense of being proportional to
>> the likelihood but use different baselines, thus incommensurate (see the
>> note pointed out below).
>
> Well, I'm sorry to be a spoiler here, but I think we should
> acknowledge that the (log) likelihood is a uniquely well-defined
> function.
> I remember how we've fought with the many definitions of
> deviance and have (in our still unfinished glmm-paper -- hint!!)
> very nicely tried to define the many versions
> (conditional/unconditional and more), but I find it too sloppy
> to claim that likelihoods are only defined up to a constant
> factor -- even though of course it doesn't matter for ML (and
> REML) if the objective function is "off by a constant factor".
>
> Martin

OK.  Let me display my ignorance:  I was always under the impression
that likelihood is defined with respect to an *underlying measure*.
Change the measure and you get a different likelihood (with the log
likelihoods differing by a constant).

To beat it to death:  Pr(X <= x) = \int_{-\infty}^x f(t) d\mu(t),
where f() is the density of X (which can be interpreted as the
likelihood) *with respect to* the measure \mu().

E.g. if you have a random variable X with probability mass function
P(x;\theta) defined, say, on the positive integers, you could make use
of the atomic measure \mu_1() having point mass 1 at each positive
integer, or you could make use of the atomic measure \mu_2() having
point mass 1/2^n at each positive integer n.

Since \mu_2() puts mass 1/2^x at x, the density of X with respect to
\mu_2() is P(x; \theta) 2^x.  So the log likelihood of \theta given an
iid sample {x_1, ..., x_N} is

\sum_{i=1}^N \log P(x_i; \theta)

with respect to \mu_1() and is

\sum_{i=1}^N [\log P(x_i; \theta) + x_i \log(2)]

with respect to \mu_2(), so the two log likelihoods differ by
\log(2) \times \sum_{i=1}^N x_i.
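A quick numeric check of that claim, using a hypothetical geometric pmf on the positive integers and a made-up sample:

```python
import math

# Hypothetical pmf on the positive integers: geometric with success prob. theta
def pmf(x, theta):
    return theta * (1 - theta) ** (x - 1)

theta = 0.3
sample = [1, 2, 3, 5, 2]  # made-up iid sample

# log likelihood w.r.t. mu_1 (point mass 1 at each positive integer)
ll_mu1 = sum(math.log(pmf(x, theta)) for x in sample)

# w.r.t. mu_2 (point mass 1/2^n at n) the density is pmf(x) * 2^x,
# so each observation contributes an extra x * log(2)
ll_mu2 = sum(math.log(pmf(x, theta)) + x * math.log(2) for x in sample)

# the two log likelihoods differ by exactly log(2) * sum(x_i)
print(ll_mu2 - ll_mu1, math.log(2) * sum(sample))
```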

Of course \mu_1() is the "natural" measure and \mu_2() is contrived and
artificial.  But they are both perfectly legitimate measures.  And I
have a distinct impression that situations arise in which one might
legitimately wish to make a change of measure.

Moreover, is it not the case that the likelihood that one gets from
fitting a linear model with no random effects, using lm(), is not
(directly) comparable with the likelihood obtained from fitting a linear
mixed model using lmer()?  (Whence one cannot test for the
"significance" of a random effect by comparing the two fits via a
likelihood ratio test, as one might perhaps naïvely hope to do.)

Please enlighten me as to the ways in which my thinking is confused.

cheers,

Rolf

--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

_______________________________________________
R-sig-mixed-models at r-project.org mailing list