[R-sig-ME] Random effects variances in R and SPSS not matching

Simon Harmel sim.harmel at gmail.com
Fri Apr 2 05:57:39 CEST 2021


Dear Ben,

I should add that using `control = lmeControl(sigma = 1e-5, opt = "optim")`
does make the random-effects variances from `lme()` match those of SPSS,
but it completely changes -2*logLik(model).

Specifically, before adding `control = lmeControl(sigma = 1e-5,
opt = "optim")`, -2*logLik(model_SPSS) = -2*logLik(model_lme).
But after adding it, -2*logLik(model_SPSS) is NOT equal to
-2*logLik(model_lme).
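
For reference, a minimal sketch of the two fits being compared (same data
URL and model as in my original post, quoted below):

library(nlme)

dat <- read.csv('https://raw.githubusercontent.com/hkil/m/master/mv.l.csv')

## default fit: residual variance estimated freely
m2 <- lme(value ~ 0 + name, random = ~ 0 + name | Student,
          data = dat, method = "ML")

## alternative fit: residual sd effectively pinned near zero
m2_fixed <- lme(value ~ 0 + name, random = ~ 0 + name | Student,
                data = dat, method = "ML",
                control = lmeControl(sigma = 1e-5, opt = "optim"))

## the two deviances no longer agree once sigma is pinned
-2 * logLik(m2)
-2 * logLik(m2_fixed)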

Thanks,
Simon

On Thu, Apr 1, 2021 at 10:19 PM Simon Harmel <sim.harmel at gmail.com> wrote:

> Thank you so very much for your assistance.
>
> On Wed, Mar 31, 2021 at 4:18 PM Phillip Alday <me at phillipalday.com> wrote:
>
>> I don't think the optimum is well defined:
>>
>> > library("lattice")
>> > library("lme4")
>> > m2.4 <- lmer(value ~ 0 + name + (0 + name | Student), data = dat,
>> +             REML = FALSE,
>> +             control = lmerControl(optimizer = "bobyqa",
>> +                                   check.nobs.vs.nRE = "warning"))
>> Warning messages:
>> 1: number of observations (=1600) <= number of random effects (=1600)
>> for term (0 + name | Student); the random-effects parameters and the
>> residual variance (or scale parameter) are probably unidentifiable
>> 2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
>>   Model is nearly unidentifiable: large eigenvalue ratio
>>  - Rescale variables?
>> > p <- profile(m2.4)
>> There were 50 or more warnings (use warnings() to see the first 50)
>> > xyplot(p)
>>
>> Those are some very bad profiles! Also, this goes back to the lme4
>> safety check that I had to disable. The fundamental problem is that
>> there isn't enough data to completely distinguish the residual variance
>> from the RE variances, so you get different answers for the RE variances
>> depending on how much variance you attribute to the residual.
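>>
>> A minimal sketch of that trade-off (assuming Simon's `dat` from below
>> is loaded): pin the residual sd at a few values and refit; the RE
>> variances shift to absorb whatever the residual doesn't claim.
>>
>> library(nlme)
>>
>> for (s in c(1e-5, 1, 5)) {
>>   fit <- lme(value ~ 0 + name, random = ~ 0 + name | Student,
>>              data = dat, method = "ML",
>>              control = lmeControl(sigma = s, opt = "optim"))
>>   cat("sigma fixed at", s, "\n")
>>   print(VarCorr(fit))  # RE variances change with the fixed sigma
>> }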
>>
>> I also tried to do this with MCMC and flat priors (always a bad idea,
>> but...) and also ran into bad convergence issues.
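>>
>> A hypothetical sketch of such an MCMC fit (brms is an assumption here,
>> the sampler I used isn't named above; brms puts flat improper priors on
>> the fixed effects by default):
>>
>> library(brms)
>>
>> m_mcmc <- brm(value ~ 0 + name + (0 + name | Student),
>>               data = dat, chains = 4)
>> summary(m_mcmc)  # watch for Rhat > 1.01 and divergent transitions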
>>
>> Phillip
>>
>>
>>
>> On 31/3/21 10:43 pm, Simon Harmel wrote:
>> > Thank you. I'll be happy to give more info. The SPSS model syntax is
>> > shown in Table 14.5, p. 585 (type `606` into the page slot) of this
>> > book (http://docshare02.docshare.tips/files/31719/317194846.pdf).
>> >
>> > The SPSS output is shown on p. 588 (type `606` into the page slot).
>> >
>> > I should add that the covariance between `Y1` and `Y2` matches
>> > exactly, and the log-likelihoods seem to be almost identical. But the
>> > variances differ by a lot. SPSS is using "ML".
>> >
>> > Please let me know if I can provide any further information.
>> >
>> > Thank you for your prompt reply,
>> > Simon
>> >
>> >
>> >
>> >
>> > On Wed, Mar 31, 2021 at 3:36 PM Phillip Alday <me at phillipalday.com> wrote:
>> >
>> >     Without more information, we don't know for sure that the models
>> >     are the same in both languages.
>> >
>> >     It's too much of a time sink for a human to change model details
>> >     randomly until the output matches some expected output, but you
>> >     could probably do something with genetic programming or simulated
>> >     annealing to do that....
>> >
>> >     But if you can get more information, I would start by making sure
>> >     - that the contrasts are truly the same
>> >     - that the assumed covariance structures are the same
>> >     - that one language isn't dropping some observations that the
>> >       other is keeping (check the reported number of observations and
>> >       the levels of the grouping var)
>> >     - that the estimation method is the same across languages (ML,
>> >       REML; hopefully SPSS isn't using something like quasi-likelihood)
>> >     - that different optimizers (if available) give the same result
>> >       across languages (i.e. make sure you're not in a local optimum)
>> >     - cross-checking the result against yet another software package
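>> >
>> >     A few of these can be checked quickly on the R side; a sketch,
>> >     assuming the `m2` fit from Simon's post:
>> >
>> >     contrasts(factor(dat$name))   # contrast coding actually used
>> >     m2$dims$N                     # observations used in the fit
>> >     length(unique(dat$Student))   # levels of the grouping factor
>> >     m2$method                     # "ML" vs "REML"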
>> >
>> >     For example, cross-checking against lme4 immediately hints that this
>> >     model might not be advisable / have a well-defined optimum:
>> >
>> >     > m2.4 <- lmer(value ~ 0 + name + (0 + name | Student),
>> >     +             data = dat, REML = FALSE)
>> >     Error: number of observations (=1600) <= number of random effects
>> >     (=1600) for term (0 + name | Student); the random-effects
>> >     parameters and the residual variance (or scale parameter) are
>> >     probably unidentifiable
>> >
>> >     Phillip
>> >
>> >     On 31/3/21 10:15 pm, Simon Harmel wrote:
>> >     > Dear All,
>> >     >
>> >     > For my reproducible model below, SPSS gives variance components
>> >     > of 119.95 for Y1 and 127.90 for Y2.
>> >     >
>> >     > But in `nlme::lme()` my variance components are 105.78 for Y1
>> >     > and 113.73 for Y2.
>> >     >
>> >     > Can we make `lme()` reproduce SPSS's variance components?
>> >     >
>> >     > #======= Data and R code:
>> >     > dat <- read.csv('https://raw.githubusercontent.com/hkil/m/master/mv.l.csv')
>> >     >
>> >     > library(nlme)
>> >     >
>> >     > m2 <- lme(value ~ 0 + name, random = ~ 0 + name | Student,
>> >     >           data = dat, method = "ML")
>> >     >
>> >     > Random effects variance-covariance matrix
>> >     >           nameY1   nameY2
>> >     > nameY1   105.780   60.869
>> >     > nameY2    60.869  113.730
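>> >     >
>> >     > (A small usage note: that matrix can be pulled straight from the
>> >     > fit with `getVarCov()` from nlme.)
>> >     >
>> >     > getVarCov(m2, type = "random.effects")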
>> >     >
>> >
>>
>



