[R-sig-ME] results lme unstructured covariance matrix, again
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer sending from maastrichtuniversity.nl
Mon Jul 18 17:56:06 CEST 2022
Dear Ben,
The model is overparameterized. There are infinitely many solutions that lead to the same marginal variance-covariance structure and the same log-likelihood. lme() is giving you one solution, but which solution you get is essentially arbitrary and may depend on the optimizer used.
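To see why: for each person, the marginal 2x2 matrix has only three distinct elements (two variances and one covariance), but the model uses four parameters (two random-effect SDs, their correlation, and the residual SD). Here is a minimal sketch, with arbitrary illustrative values (not the fitted ones), of two different parameter sets that imply the identical marginal matrix: variance is shifted from the residual into the random effects, and the correlation is adjusted so the covariance stays fixed.

# marginal 2x2 var-cov matrix implied by (tau1, tau2, rho, sigma)
marg <- function(tau1, tau2, rho, sigma)
  matrix(c(tau1^2 + sigma^2, rho*tau1*tau2,
           rho*tau1*tau2,    tau2^2 + sigma^2), nrow=2)
marg(0.9, 1.0, 0.6, 0.3)
marg(sqrt(0.89), sqrt(1.08), 0.54/sqrt(0.89*1.08), 0.1) # identical matrix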
One way to show this is to examine profile likelihood plots. Although metafor wasn't designed for analyzing raw data (it's a package for meta-analysis), we can fit the same model using its rma.mv() function and then obtain profile likelihood plots. You will find that they are flat for large portions of the profiles.
# simulate 10 persons, each observed once under t1 and once under t2
set.seed(123)
t1 <- c(rep(1, 10), rep(0, 10))
t2 <- 1 - t1
person <- rep(1:10, 2)
y <- t2 + rnorm(20)
da <- data.frame(y, t1, t2, person)
da <- da[order(da$person),]
library(nlme)
model1 <- lme(y ~ t2, random = ~ 0 + t1 + t2 | person, data=da)
summary(model1)
getVarCov(model1, type="marginal", individuals="1")
# observation-level id; the ~ 1 | id term plays the role of the residual
da$id <- 1:nrow(da)
library(metafor)
# REMLf=FALSE so the restricted log-likelihood is comparable to lme()'s
model2 <- rma.mv(y ~ t2, V=0, random = list(~ 0 + t1 + t2 | person, ~ 1 | id),
                 struct="GEN", data=da, control=list(REMLf=FALSE))
model2
# same log likelihoods even though the parameter estimates are different
logLik(model1)
logLik(model2)
# same marginal var-cov structure
round(vcov(model2, type="obs")[1:2,1:2], 4)
# profile likelihood plots
profile(model2)
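As an aside, a (untested) sketch of how the redundancy could be removed: if the observation-level ~ 1 | id term is dropped (assuming rma.mv() accepts V=0 without it), the three remaining parameters, two variances and one correlation, should be just-identified by the three observed (co)variances.

model3 <- rma.mv(y ~ t2, V=0, random = ~ 0 + t1 + t2 | person,
                 struct="GEN", data=da, control=list(REMLf=FALSE))
model3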
So the specific variance component estimates cannot really be interpreted in any meaningful way; put differently, they are just one of many solutions that lead to the same marginal var-cov structure.
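As a quick check using the numbers from John's output quoted below: the two lme() parameterizations give different random-effect SDs and residual SDs, yet the implied marginal variance at t1 (random-effect variance plus residual variance) is the same:

0.8964136^2 + 0.3258015^2 # m1 (~ 0 + t1 + t2): 0.9097
0.8787887^2 + 0.3707215^2 # m2 (~ 1 + t2):      0.9097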
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-mixed-models [mailto:r-sig-mixed-models-bounces using r-project.org] On
>Behalf Of ben pelzer
>Sent: Monday, 18 July, 2022 12:39
>To: John Fox
>Cc: r-sig-mixed-models using r-project.org
>Subject: Re: [R-sig-ME] results lme unstructured covariance matrix, again
>
>Dear John,
>
>Thank you for answering my question and for your nice example with the
>alternative model formulations! But I'm still in doubt about the results.
>The fact that there are only three observed (co)variances, which are
>reproduced by four parameter estimates from lme, leaves me confused.
>Actually, I would think that there is no unique solution without some
>constraint being applied, but I could not find anything about such a
>constraint in the lme documentation. The random effects of t1 and t2, or
>of the intercept and t2 in your alternative model, should be sufficient to
>reproduce the observed (co)variances, so the residual variance is
>"unnecessary". I guess that is the reason that lmer is not able to
>estimate the model.
>
>library(lme4)
>m3 <- lmer(y ~ 1+t2+(1+t2|person), data=da)
>
>Error: number of observations (=20) <= number of random effects (=20)
>for term (1 + t2 | person); the random-effects parameters and the
>residual variance (or scale parameter) are probably unidentifiable
>
>It's interesting to note that the two model specifications (t1+t2 and
>1+t2) lead to different residual variances estimated by lme. What do these
>residual variances mean? And, of course, what do the random-effect
>variances estimated by both model formulations actually mean or stand for?
>Do you have any ideas?
>
>Kind regards,
>
>Ben.
>
>On Sat, 16 Jul 2022 at 16:50, John Fox <jfox using mcmaster.ca> wrote:
>
>> Dear Ben,
>>
>> First, I'll make this into a reproducible example:
>>
>> > set.seed(123)
>> > t1 <- c(rep(1, 10), rep(0, 10))
>> > t2 <- 1 - t1
>> > person <- rep(1:10, 2)
>> > y <- t2 + rnorm(20)
>> > da <- data.frame(y, t1, t2, person)
>>
>> > library(nlme)
>>
>> Then note that the random-effect specification 0 + t1 + t2 is simply a
>> reparametrization of 1 + t2 (i.e., 1 = t1 + t2), which produces the same
>> fit to the data (same fixed effects, same restricted log-likelihood):
>>
>> > m1 <- lme(y ~ 1 + t2, random = ~ 0 + t1 + t2 | person, data=da)
>> > m2 <- lme(y ~ 1 + t2, random = ~ 1 + t2 | person, data=da)
>> > m1
>> Linear mixed-effects model fit by REML
>> Data: da
>> Log-restricted-likelihood: -25.92726
>> Fixed: y ~ 1 + t2
>> (Intercept) t2
>> 0.07462564 1.13399632
>>
>> Random effects:
>> Formula: ~0 + t1 + t2 | person
>> Structure: General positive-definite, Log-Cholesky parametrization
>> StdDev Corr
>> t1 0.8964136 t1
>> t2 0.9856215 0.647
>> Residual 0.3258015
>>
>> Number of Observations: 20
>> Number of Groups: 10
>>
>> > m2
>> Linear mixed-effects model fit by REML
>> Data: da
>> Log-restricted-likelihood: -25.92726
>> Fixed: y ~ 1 + t2
>> (Intercept) t2
>> 0.07462564 1.13399632
>>
>> Random effects:
>> Formula: ~1 + t2 | person
>> Structure: General positive-definite, Log-Cholesky parametrization
>> StdDev Corr
>> (Intercept) 0.8787887 (Intr)
>> t2 0.7540826 -0.302
>> Residual 0.3707215
>>
>> Number of Observations: 20
>> Number of Groups: 10
>>
>> Finally, it's unnecessary to supply the intercept 1 in the model formula
>> since the intercept is implied if it's not explicitly excluded:
>>
>> > m3 <- lme(y ~ t2, random = ~ t2 | person, data=da)
>> > m3
>> Linear mixed-effects model fit by REML
>> Data: da
>> Log-restricted-likelihood: -25.92726
>> Fixed: y ~ t2
>> (Intercept) t2
>> 0.07462564 1.13399632
>>
>> Random effects:
>> Formula: ~t2 | person
>> Structure: General positive-definite, Log-Cholesky parametrization
>> StdDev Corr
>> (Intercept) 0.8787887 (Intr)
>> t2 0.7540826 -0.302
>> Residual 0.3707215
>>
>> Number of Observations: 20
>> Number of Groups: 10
>>
>> I hope this helps,
>> John