# [R-sig-ME] results lme unstructured covariance matrix, again

ben pelzer benpelzer at gmail.com
Mon Jul 18 12:39:19 CEST 2022

```
Dear John,

Thank you for answering my question and for your nice example with the
alternative model formulations! But I'm still in doubt about the results.
The fact that there are only three observed (co)variances, which are
reproduced by four parameter estimates from lme, leaves me confused.
Actually, I would think that there is no unique solution without some
constraint being applied, but I could not find anything about such a
constraint in the lme documentation. The random effects of t1 and t2, or of
the intercept and t2 in your alternative model, should be sufficient to
reproduce the observed (co)variances, so the residual variance is
"unnecessary". I guess that is the reason that lmer is not able to estimate
the model.

library(lme4)
m3 <- lmer(y ~ 1+t2+(1+t2|person), data=da)

Error: number of observations (=20) <= number of random effects (=20)
for term (1 + t2 | person); the random-effects parameters and the
residual variance (or scale parameter) are probably unidentifiable
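
To make the non-uniqueness concrete, here is a small sketch (the numbers
are the estimates from your m1 fit quoted below; c0 is an arbitrary shift
I chose for illustration): moving any amount c0 of variance from the
residual into both random-effect variances leaves the implied 2x2 marginal
covariance of a person's two observations unchanged, so a whole family of
parameter sets reproduces the three observed (co)variances equally well:

G  <- matrix(c(0.8964136^2, 0.647*0.8964136*0.9856215,
               0.647*0.8964136*0.9856215, 0.9856215^2), 2, 2)
s2 <- 0.3258015^2                               # residual variance
c0 <- 0.05                                      # arbitrary shift
Sigma1 <- G + diag(s2, 2)                       # implied marginal covariance
Sigma2 <- (G + diag(c0, 2)) + diag(s2 - c0, 2)  # shifted parameters
all.equal(Sigma1, Sigma2)                       # TRUE: same marginal fit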

It's interesting to note that the two model specifications (t1+t2 and
1+t2) lead to different residual variance estimates from lme. What do these
residual variances mean? And of course also: what do the random-effect
variances estimated by the two model formulations actually mean or stand
for? Do you have any ideas? Kind regards,

Ben.

On Sat, 16 Jul 2022 at 16:50, John Fox <jfox at mcmaster.ca> wrote:

> Dear Ben,
>
> First, I'll make this into a reproducible example:
>
>  > set.seed(123)
>  > t1 <- c(rep(1, 10), rep(0, 10))
>  > t2 <- 1 - t1
>  > person <- rep(1:10, 2)
>  > y <- t2 + rnorm(20)
>  > da <- data.frame(y, t1, t2, person)
>
>  > library(nlme)
>
> Then note that the random-effect specification 0 + t1 + t2 is simply a
> reparametrization of 1 + t2 (i.e., 1 = t1 + t2), which produces the same
> fit to the data (same fixed effects, same restricted log-likelihood):
>
>  > m1 <- lme(y ~ 1 + t2, random = ~ 0 + t1 + t2 | person, data=da)
>  > m2 <- lme(y ~ 1 + t2, random = ~ 1 + t2 | person, data=da)
>  > m1
> Linear mixed-effects model fit by REML
>    Data: da
>    Log-restricted-likelihood: -25.92726
>    Fixed: y ~ 1 + t2
> (Intercept)          t2
>   0.07462564  1.13399632
>
> Random effects:
>   Formula: ~0 + t1 + t2 | person
>   Structure: General positive-definite, Log-Cholesky parametrization
>           StdDev    Corr
> t1       0.8964136 t1
> t2       0.9856215 0.647
> Residual 0.3258015
>
> Number of Observations: 20
> Number of Groups: 10
>
>  > m2
> Linear mixed-effects model fit by REML
>    Data: da
>    Log-restricted-likelihood: -25.92726
>    Fixed: y ~ 1 + t2
> (Intercept)          t2
>   0.07462564  1.13399632
>
> Random effects:
>   Formula: ~1 + t2 | person
>   Structure: General positive-definite, Log-Cholesky parametrization
>              StdDev    Corr
> (Intercept) 0.8787887 (Intr)
> t2          0.7540826 -0.302
> Residual    0.3707215
>
> Number of Observations: 20
> Number of Groups: 10
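>
> To see the reparametrization concretely: since t1 + t2 = 1 in these
> data, writing u0 and u2 for the intercept and t2 random effects,
> u0 + u2*t2 = u0*t1 + (u0 + u2)*t2, so the two random-effect design
> matrices span the same column space. A quick check (X1, X2, and the
> 2 x 2 transformation matrix are just illustrative, not part of the
> fits above):
>
>  > X1 <- cbind(t1, t2)
>  > X2 <- cbind(1, t2)
>  > all.equal(unname(X2), unname(X1 %*% matrix(c(1, 1, 0, 1), 2, 2)))  # TRUE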
>
> Finally, it's unnecessary to supply the intercept 1 in the model formula
> since the intercept is implied if it's not explicitly excluded:
>
>  > m3 <- lme(y ~ t2, random = ~ t2 | person, data=da)
>  > m3
> Linear mixed-effects model fit by REML
>    Data: da
>    Log-restricted-likelihood: -25.92726
>    Fixed: y ~ t2
> (Intercept)          t2
>   0.07462564  1.13399632
>
> Random effects:
>   Formula: ~t2 | person
>   Structure: General positive-definite, Log-Cholesky parametrization
>              StdDev    Corr
> (Intercept) 0.8787887 (Intr)
> t2          0.7540826 -0.302
> Residual    0.3707215
>
> Number of Observations: 20
> Number of Groups: 10
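>
> As a further check, all three fits imply the same marginal
> (co)variance matrix per person; compare (output not shown here):
>
>  > getVarCov(m1, type = "marginal", individuals = "1")
>  > getVarCov(m2, type = "marginal", individuals = "1")
>  > getVarCov(m3, type = "marginal", individuals = "1")
>
> For instance, the implied Var(y) at time 1 is 0.8964136^2 +
> 0.3258015^2 under m1 and 0.8787887^2 + 0.3707215^2 under m2, both
> about 0.910, even though the individual components differ.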
>
> I hope this helps,
>   John
>
> On 2022-07-16 10:05 a.m., ben pelzer wrote:
> > Sorry, my previous mail contained another question which is
> > irrelevant... I deleted that now.
> >
> >
> > Hi all,
> >
> > I have a question about results from lme of package nlme.
> >
> > Suppose the data consists of repeated measures at two fixed time points.
> >
> > I used the following equation:
> >
> >
> >
> > Model1 <- lme ( y ~ 1+t2 , random = ~ 0 + t1+t2|person, data=da)
> >
> >
> >
> > y is the dependent, t1 and t2 are binary dummy variables, valued 0 or 1,
> > indicating the time point.  Model1 is estimated without any convergence
> > problems and the reproduced (co)variances found with
> >
> >
> >
> > getVarCov(Model1, type="marginal", individuals="1")
> >
> >
> >
> > are identical to the observed (co)variances.
> >
> >
> > My question is:  how can lme estimate 4 (co)variances with only 3 known
> > (co)variances?
> >
> >
> >
> > The 4 estimates concern:
> >
> > - std. deviation of the random effect of dummy t1
> > - std. deviation of the random effect of dummy t2
> > - covariance of the random effects of the dummies t1 and t2
> > - residual std. error
> >
> >
> >
> > Related to the question above: how can the variances of the random
> > effects and the residual std. error be interpreted?
> >
> >
> >
> > Thanks for any help,
> >
> >
> >
> > Ben.
> >
> > _______________________________________________
> > R-sig-mixed-models at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> --
> John Fox, Professor Emeritus
> McMaster University
> Hamilton, Ontario, Canada
> web: https://socialsciences.mcmaster.ca/jfox/
>
>


```