[R-sig-ME] nAGQ > 1 in lme4::glmer gives unexpected likelihood

Rolf Turner r.turner at auckland.ac.nz
Sun Apr 26 01:23:22 CEST 2020


On 26/04/20 12:33 am, Martin Maechler wrote:

> 
>>     I think they're both 'correct' in the sense of being proportional to
>> the likelihood but use different baselines, thus incommensurate (see the
>> note pointed out below).
> 
> Well, I'm sorry to be a spoiler here, but I think we should
> acknowledge that the (log) likelihood is a uniquely well-defined
> function.
> I remember how we fought with the many definitions of
> deviance and have (in our still unfinished glmm-paper -- hint!!)
> very nicely tried to define the many versions
> (conditional/unconditional and more), but I find it too sloppy
> to claim that likelihoods are only defined up to a constant
> factor -- even though of course it doesn't matter for ML (and
> REML) if the objective function is "off by a constant factor".
> 
> Martin

OK.  Let me display my ignorance:  I was always under the impression 
that likelihood is defined with respect to an *underlying measure*.
Change the measure and you get a different likelihood (with the log 
likelihoods differing by a constant).

To beat it to death:  Pr(X <= x) = \int_{-\infty}^x f(t) d\mu(t),
where f() is the density (which can be interpreted as the likelihood)
of X *with respect to* the measure \mu().

E.g. if you have a random variable X with probability mass function 
P(x;\theta) defined, say, on the positive integers, you could make use 
of the atomic measure \mu_1() having point mass 1 at each positive 
integer, or you could make use of the atomic measure \mu_2() having 
point mass 1/2^n at each positive integer n.

The log likelihood of \theta given an iid sample {x_1, ..., x_N} is

\sum_{i=1}^N \log P(x_i; \theta)

with respect to \mu_1() and is

\sum_{i=1}^N [\log P(x_i; \theta) + x_i \log(2)]

with respect to \mu_2(), so the two log likelihoods differ by the
additive constant \log(2) \times \sum_{i=1}^N x_i, which does not
involve \theta.
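
For concreteness, here is a toy R illustration (my own, not from the
thread), with a shifted Poisson playing the role of P(x;\theta): the
maximiser is the same under either measure, and the two log likelihoods
differ by exactly \log(2) \sum_{i=1}^N x_i.

set.seed(1)
x <- rpois(20, lambda = 3) + 1L  # an iid sample on the positive integers

## pmf P(x; theta) = dpois(x - 1, theta); log likelihoods w.r.t. mu_1, mu_2
loglik1 <- function(theta) sum(dpois(x - 1, theta, log = TRUE))
loglik2 <- function(theta) loglik1(theta) + log(2) * sum(x)

opt1 <- optimize(loglik1, c(0.01, 20), maximum = TRUE)
opt2 <- optimize(loglik2, c(0.01, 20), maximum = TRUE)
c(opt1$maximum, opt2$maximum)    # identical MLEs (up to tolerance)
opt2$objective - opt1$objective  # equals log(2) * sum(x)
log(2) * sum(x)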

Of course \mu_1() is the "natural" measure and \mu_2() is contrived and 
artificial.  But they are both perfectly legitimate measures.  And I 
have a distinct impression that situations arise in which one might 
legitimately wish to make a change of measure.

Moreover, is it not the case that the likelihood that one gets from 
fitting a linear model with no random effects, using lm(), is not 
(directly) comparable with the likelihood obtained from fitting a linear 
mixed model using lmer()?  (Whence one cannot test for the 
"significance" of a random effect by comparing the two fits via a 
likelihood ratio test, as one might perhaps naïvely hope to do.)
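
As a concrete toy version of that comparison (my own example, using the
sleepstudy data shipped with lme4, and with REML = FALSE so that both
objective functions are at least ML log likelihoods):

library(lme4)
fm0 <- lm(Reaction ~ Days, sleepstudy)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
logLik(fm0)
logLik(fm1)
## naive LRT statistic; even on the ML scale its null distribution is
## not the usual chi-squared, because the null (zero random-effect
## variance) lies on the boundary of the parameter space
2 * (as.numeric(logLik(fm1)) - as.numeric(logLik(fm0)))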

Please enlighten me as to the ways in which my thinking is confused.

cheers,

Rolf

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276


