[R-sig-ME] (no subject)

Martin Maechler maechler at stat.math.ethz.ch
Thu Aug 2 08:53:59 CEST 2018


>>> On Wed, Aug 1, 2018 at 10:57 PM, Ben Bolker <bbolker at gmail.com> wrote:
>>> 

>>>> I'm pretty sure that lmer and lm models are commensurate, in case that
>>>> helps.  Here's an example rigged to make the random-effects variance
>>>> equal to zero, so we can check that the log-likelihoods etc. are
>>>> identical.
>>>> 
>>>> set.seed(101)
>>>> dd <- data.frame(y=rnorm(20),x=rnorm(20),f=factor(rep(1:2,10)))
>>>> library(lme4)
>>>> m1 <- lmer(y~x+(1|f),data=dd,REML=FALSE) ## estimated sigma^2_f=0
>>>> m2 <- lm(y~x,data=dd)
>>>> all.equal(c(logLik(m1)),c(logLik(m2))) ## TRUE
>>>> all.equal(fixef(m1),coef(m2))
>>>> anova(m1,m2)
>>>> 

Then, Peter replied

    >> Sorry, you estimated it to be very close to zero, I see.
    >> Peter

and Ben again (Thu, 2 Aug 2018 01:26:27 -0400):

    > Yes.  I think you can specify a fixed residual variance in blme::blmer, but
    >      not to exactly zero. 
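
[Aside on Ben's blme remark: a minimal, *untested* sketch of what I think
that would look like, assuming blme's point() prior interface -- the exact
argument names and scale should be checked against ?blme::blmer :

    library(blme)
    ## fix the residual variance/SD via a point prior (here at 1; see the
    ## blme docs for the scale); per Ben's remark, fixing it exactly at
    ## zero is not possible
    bm <- blmer(y ~ x + (1 | f), data = dd, REML = FALSE,
                resid.prior = point(1.0),
                cov.prior   = NULL)  ## no prior on the RE covariance
    VarCorr(bm)
]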

Peter: in the lmer example above it is estimated to be  *exactly*  zero,
not just close to zero:

  > VarCorr(m1)$f == 0
              (Intercept)
  (Intercept)        TRUE
  > 

  (yes, these are always matrices, here of dimension  1x1)
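
  To pull the number out of that 1x1 matrix, the usual lme4 accessors
  work, e.g.

    VarCorr(m1)$f[1, 1]          ## exactly 0
    as.data.frame(VarCorr(m1))   ## long form; the 'vcov' column is 0 for f
                                 ## and sigma^2 for the Residual row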

This has been one of the major features of lme4::lmer() compared to
nlme::lme(): \hat{\sigma}_j^2 = 0 is naturally possible because of the
parametrization used.
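
Concretely, lmer() optimizes the profiled criterion over the relative
covariance factor theta (here a single number, theta = sigma_f / sigma),
whose lower bound is 0, so the optimizer can land *exactly* on the
boundary -- whereas, if I remember correctly, lme()'s log-Cholesky
parametrization would need -Inf to get there:

    getME(m1, "theta")   ## f.(Intercept) : 0   (exactly on the boundary)
    getME(m1, "lower")   ## the corresponding lower bound: 0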

Martin


