[R-sig-ME] using lme for cov structures?
Ben Pelzer
b.pelzer at maw.ru.nl
Fri Jul 28 10:51:54 CEST 2017
Dear Thierry,
After all the effort and thinking about the trick with the "one" variable, I
realised (but only after your previous mail with the syntax for the "group"
variable) that I do not need the trick at all. With a grouping/context
variable, lme can be used, and without such a grouping/context variable, gls
will do. Why didn't I realise this earlier?
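
To make that concrete, a minimal sketch of the two specifications (assuming
the opposites data and the "group" variable from the earlier mails; the
object names are only illustrative):

library(nlme)

# With a grouping/context variable: a genuine random group intercept, while
# the heterogeneous compound symmetry stays in the residuals within each id.
fit.lme <- lme(opp ~ 1, data = opposites,
               random = ~ 1 | group,
               correlation = corCompSymm(form = ~ wave | group/id),
               weights = varIdent(form = ~ 1 | wave), method = "REML")

# Without a grouping/context variable: the same residual structure via gls.
fit.gls <- gls(opp ~ 1, data = opposites,
               correlation = corCompSymm(form = ~ wave | id),
               weights = varIdent(form = ~ 1 | wave), method = "REML")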
As to your last mail, the syntax you included looks very intriguing; which
language did you use to evaluate the covariance matrix?
Best regards, Ben.
On 27-7-2017 15:13, Thierry Onkelinx wrote:
> Dear Ben,
>
> I would look at the variance-covariance matrix of the model. Below is
> the var-covar matrix for the model lme(opp ~ 1, random = ~1|id,
> correlation = corCompSymm(form = ~wave), weights = varIdent(form = ~1|wave)).
> I worked it out for three id groups, each with three waves. The
> submatrices indicate the grouping by the random effect.
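>
> In practice the fitted marginal matrix can also be pulled straight from the
> model object; a small sketch, assuming the opposites data is loaded and
> with an illustrative object name:
>
> library(nlme)
> fit <- lme(opp ~ 1, data = opposites, random = ~ 1 | id,
>            correlation = corCompSymm(form = ~ wave),
>            weights = varIdent(form = ~ 1 | wave))
> # Marginal covariance (random effect plus correlated, heteroscedastic
> # residuals) for the first id; an 'individuals' argument selects other ids.
> getVarCov(fit, type = "marginal")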
>
> Is this the structure you are looking for? If not please provide the
> required structure.
>
> Having a random effect with only one level is nonsense. IMHO it should
> yield an error, or at least a warning. I have no idea how lme
> estimates the reported variance.
>
> Best regards,
>
> Thierry
>
> $\sigma^2_i$: random effect variance
> $\sigma^2_e$: variance of the noise
> $\rho$: correlation of the compound symmetry
> $w_2$ and $w_3$: relative variance of the 2nd and 3rd wave compared to
> the 1st wave
>
> $$
> \begin{pmatrix}
> \begin{bmatrix}
> \sigma^2_i+\sigma^2_e & \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+w_2\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e & \sigma^2_i+w_3\sigma^2_e
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} \\
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i+\sigma^2_e & \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+w_2\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e & \sigma^2_i+w_3\sigma^2_e
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} \\
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i\\
> \sigma^2_i & \sigma^2_i & \sigma^2_i
> \end{bmatrix} &
> \begin{bmatrix}
> \sigma^2_i+\sigma^2_e & \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_2}\rho\sigma^2_e & \sigma^2_i+w_2\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e \\
> \sigma^2_i+\sqrt{w_3}\rho\sigma^2_e & \sigma^2_i+\sqrt{w_2w_3}\rho\sigma^2_e & \sigma^2_i+w_3\sigma^2_e
> \end{bmatrix}
> \end{pmatrix}
> $$
>
>
> ir. Thierry Onkelinx
> Instituut voor natuur- en bosonderzoek / Research Institute for Nature
> and Forest
> team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
> Kliniekstraat 25
> 1070 Anderlecht
> Belgium
>
> To call in the statistician after the experiment is done may be no
> more than asking him to perform a post-mortem examination: he may be
> able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does
> not ensure that a reasonable answer can be extracted from a given body
> of data. ~ John Tukey
>
> 2017-07-27 11:16 GMT+02:00 Ben Pelzer <b.pelzer at maw.ru.nl>:
>
> Dear Thierry,
>
> Thanks for your help with these models. I wasn't sure how to formulate
> them. At the department of sociology where I work (Radboud University),
> longitudinal data are becoming more common. And very often we'd like to
> estimate random country or municipality influences for repeated measures
> on the same person. This is not a big problem as long as we use growth
> modelling with a random intercept and a random time influence, but the
> covariance structure implied by such models is, say, limited. Knowing how
> to specify the covariance structures in R is really helpful and broadens
> the perspective.
>
> After fiddling around with all the possibilities, the picture is slowly
> getting clearer here. I was puzzled by the heterogeneous compound symmetry
> structure and how to specify this. With the specification:
>
> heteroCS1 <- lme(opp ~ 1, opposites,
>                  random = ~ 1 | id,
>                  correlation = corCompSymm(form = ~ wave),
>                  weights = varIdent(form = ~ 1 | wave), method = "REML")
>
> the random person effect is estimated apart from the residuals. On the
> other hand, with:
>
> heteroCS2 <- lme(opp ~ 1, opposites,
>                  random = ~ 1 | one,
>                  correlation = corCompSymm(form = ~ wave | one/id),
>                  weights = varIdent(form = ~ 1 | wave), method = "REML")
>
> the random person effect is kept as part of the residuals, because it is
> not estimated explicitly. As a result, heteroCS2 gives the same results as
> the more straightforward gls specification:
>
> heteroCS3 <- gls(opp ~ 1, opposites,
>                  correlation = corCompSymm(form = ~ wave | id),
>                  weights = varIdent(form = ~ 1 | wave))
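>
> A quick numerical check of that equivalence (a sketch; it assumes the two
> model objects above have been fitted and simply compares their fits):
>
> # Identical REML log-likelihoods and fixed intercepts would confirm that
> # the two parameterisations describe the same marginal model.
> logLik(heteroCS2)
> logLik(heteroCS3)
> cbind(lme = fixef(heteroCS2), gls = coef(heteroCS3))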
>
> In general, I think I would prefer to have the random person effect as
> part of the residual term instead of separating it from the residual. That
> is, by cutting the random person effect away from the residual, you remove
> the very part that causes correlation between the observations over time.
> The only specification (I could think of) that keeps the random person
> effect IN the residual is heteroCS2. But maybe something less artificial
> can be found...
>
> And the strange thing that remains is: how can lme estimate a random
> effect variance in the case of a single group, as in:
>
> strangemodel <- lme(opp ~1, random = ~1|one, opposites)
>
> which produces the summary:
>
> Linear mixed-effects model fit by REML
> Data: opposites
> AIC BIC logLik
> 1473.5 1482.303 -733.7498
>
> Random effects:
> Formula: ~1 | one
> (Intercept) Residual
> StdDev: 10.50729 46.62148
>
> Fixed effects: opp ~ 1
> Value Std.Error DF t-value p-value
> (Intercept) 204.8143 11.22179 139 18.25148 0
>
>
> Do you have any thoughts about this strange model's estimated intercept
> standard deviation of 10.50729?
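>
> One way to probe that question (a sketch, assuming the opposites data and
> the constant "one" variable defined further down in this thread):
>
> # With a single level, the random intercept cannot be separated from the
> # fixed intercept, so compare the one-group lme with a plain gls fit and
> # look at the log-likelihoods and the intercept standard errors.
> strangemodel <- lme(opp ~ 1, random = ~ 1 | one, data = opposites)
> plainmodel <- gls(opp ~ 1, data = opposites)
> logLik(strangemodel)
> logLik(plainmodel)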
>
> Best regards, Ben.
>
>
> On 26-7-2017 22:05, Thierry Onkelinx wrote:
> > Dear Ben,
> >
> > The correlation structure always works at the most detailed level of
> > the random effects, e.g. at the id level in the example below, not at
> > the group level. The correlation is only effective among observations
> > from the same id. Two observations within the same group but with
> > different ids have uncorrelated residuals by definition. Likewise, two
> > residuals from different groups are uncorrelated. The correlation matrix
> > of the residuals is hence always a block diagonal matrix, with blocks
> > defined by the most detailed level of the random effects.
> >
> > opposites <- read.table("https://stats.idre.ucla.edu/stat/r/examples/alda/data/opposites_pp.txt",
> >                         header = TRUE, sep = ",")
> > opposites$group <- opposites$id %% 6
> > library(nlme)
> > lme(opp ~ 1, random = ~ 1 | group/id, data = opposites)
> > lme(opp ~ 1, random = ~ 1 | group/id, data = opposites,
> >     correlation = corAR1(form = ~ wave))
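> >
> > A small sketch of how the two fits could be compared, since they share
> > the same fixed effects (the object names fit0/fit1 are only illustrative):
> >
> > fit0 <- lme(opp ~ 1, random = ~ 1 | group/id, data = opposites)
> > fit1 <- update(fit0, correlation = corAR1(form = ~ wave))
> > # Likelihood-ratio test of the fit with and without AR(1) residuals.
> > anova(fit0, fit1)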
> >
> > Best regards,
> >
> >
> > ir. Thierry Onkelinx
> > Instituut voor natuur- en bosonderzoek / Research Institute for Nature
> > and Forest
> > team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
> > Kliniekstraat 25
> > 1070 Anderlecht
> > Belgium
> >
> > To call in the statistician after the experiment is done may be no
> > more than asking him to perform a post-mortem examination: he may be
> > able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
> > The plural of anecdote is not data. ~ Roger Brinner
> > The combination of some data and an aching desire for an answer does
> > not ensure that a reasonable answer can be extracted from a given body
> > of data. ~ John Tukey
> >
> > 2017-07-26 21:45 GMT+02:00 Ben Pelzer <b.pelzer at maw.ru.nl>:
> >
> > Dear Thierry and Wolfgang,
> >
> > Thanks for responding so quickly. Here is the reproducible example of
> > the two models that I'm interested in.
> >
> > # Read data.
> > opposites <- read.table("https://stats.idre.ucla.edu/stat/r/examples/alda/dat/opposites_pp.txt",
> >                         header = TRUE, sep = ",")
> >
> > # Open library.
> > library(nlme)
> >
> > # Define group "one" with value 1 for all cases.
> > one <- rep(1, 140)
> >
> > #----- Model estimated with gls.
> > comsym.gls <- gls(opp~1,opposites,
> > correlation=corCompSymm(form = ~ 1 |id),
> > method="REML")
> > summary(comsym.gls)
> >
> >
> > #----- Same model estimated with lme.
> > comsym.lme <- lme(opp~1,opposites,
> > random= ~1|one,
> > correlation=corCompSymm(form = ~ 1 |one/id),
> > method="REML")
> > summary(comsym.lme)
> >
> >
> > #----- Wolfgang's gls suggestion for heterogeneous CS.
> >
> > summary(gls(opp ~ 1, opposites,
> >             correlation = corCompSymm(form = ~ 1 | id),
> >             weights = varIdent(form = ~ 1 | wave)))
> >
> > # Does not work with lme.
> > summary(hetercom <- lme(opp ~ 1, opposites,
> >                         correlation = corCompSymm(form = ~ 1 | id),
> >                         weights = varIdent(form = ~ 1 | wave),
> >                         method = "REML"))
> >
> > # But does work with the "one" trick.
> > summary(lme(opp ~ 1,opposites,
> > random = ~1|one,
> > correlation=corCompSymm(form = ~ 1 |one/id),
> > weights=varIdent(form = ~1|wave), method="REML"))
> >
> >
> >
> > The main reason for my mailing to the list is this. I was wondering
> > whether with lme it is also possible to estimate models with the
> > individuals nested in higher levels like schools or hospitals, while at
> > the same time letting the residuals correlate within individuals over
> > time with one of the nice covariance structures. However, I do NOT have
> > a reproducible example of such more complex models at the moment. I only
> > hoped that if the lme version of the model presented above has no
> > further problems than a (slightly) different AIC, BIC and standard error
> > of the fixed intercept, it would be meaningful to proceed in this way,
> > i.e. using lme instead of gls, since lme provides the important
> > possibility of additional random effects in the model.
> >
> > As to Wolfgang's suggestion for heterogeneous CS using:
> >
> > gls(opp ~ 1, correlation = corCompSymm(form = ~ 1 | id),
> >     weights = varIdent(form = ~ 1 | timepoint))
> >
> > I didn't find a way to estimate such a model with lme, except for the
> > method with the "trick". Using:
> >
> > lme(opp ~ 1,opposites,
> > correlation=corCompSymm(form = ~ 1 |id),
> > weights=varIdent(form = ~1|wave), method="REML")
> >
> > leads to the error message:
> >
> > Error in lme.formula(opp ~ 1, opposites, correlation = corCompSymm(form
> >   = ~1 | : incompatible formulas for groups in 'random' and 'correlation'
> >
> >
> > whereas
> >
> > lme(opp ~ 1,opposites,
> > random = ~1|one,
> > correlation=corCompSymm(form = ~ 1 |one/id),
> > weights=varIdent(form = ~1|wave), method="REML")
> >
> > gives results similar to those of your gls suggestion.
> >
> >
> > Best regards, Ben.
> >
> >
> > On 26-7-2017 14:44, Thierry Onkelinx wrote:
> > > Dear Ben,
> > >
> > > Please be more specific about the kind of model you want to fit. That
> > > would lead to a more relevant discussion than your potential misuse of
> > > lme. A reproducible example is always useful...
> > >
> > > Best regards,
> > >
> > >
> > > ir. Thierry Onkelinx
> > > Instituut voor natuur- en bosonderzoek / Research Institute for Nature
> > > and Forest
> > > team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
> > > Kliniekstraat 25
> > > 1070 Anderlecht
> > > Belgium
> > >
> > > To call in the statistician after the experiment is done may be no
> > > more than asking him to perform a post-mortem examination: he may be
> > > able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
> > > The plural of anecdote is not data. ~ Roger Brinner
> > > The combination of some data and an aching desire for an answer does
> > > not ensure that a reasonable answer can be extracted from a given body
> > > of data. ~ John Tukey
> > >
> > > 2017-07-26 13:36 GMT+02:00 Ben Pelzer <b.pelzer at maw.ru.nl>:
> > >
> > > Dear list,
> > >
> > > With longitudinal data, the package nlme offers the possibility to
> > > specify particular covariance structures for the residuals. In the
> > > examples below I used data from 35 individuals measured at 4 time
> > > points, so we have 4 observations nested in each of 35 individuals.
> > > These data are discussed in Singer and Willett, Applied Longitudinal
> > > Data Analysis, chapter 7.
> > >
> > > In several sources I found that e.g. compound symmetry can easily be
> > > obtained with gls from package nlme, by using the correlation structure
> > > corCompSymm, as in:
> > >
> > > comsym <- gls(opp ~ 1, opposites,
> > >               correlation = corCompSymm(form = ~ 1 | id),
> > >               method = "REML")
> > >
> > > where id is the subject identifier for the individual. With gls,
> > > however, one cannot specify random effects, as opposed to lme. To have
> > > lme estimate compound symmetry is simple; one would not even need to
> > > specify a particular correlation structure, it would suffice to say
> > > "random=~1|id". However, for more complex covariance structures, e.g.
> > > heterogeneous compound symmetry, I was only able to find syntax for
> > > gls, but not for lme.
> > >
> > > Then I thought that tricking lme might be an option by having a kind
> > > of "fake" random effect. That is, I constructed a variable "one" which
> > > takes value 1 for all cases, and let the intercept be random across
> > > "all one groups". This led to the following lme model:
> > >
> > > comsym.lme <- lme(opp ~ 1, opposites, random = ~ 1 | one,
> > >                   correlation = corCompSymm(form = ~ 1 | one/id),
> > >                   method = "REML")
> > >
> > > And actually, to my surprise, the results of this lme are highly
> > > similar to those of the gls above. The output of both is attached
> > > below. The log-likelihoods are identical, but the AIC and BIC are not,
> > > which I can understand, as the lme has an extra variance to be
> > > estimated compared to the gls. Also, the fixed intercept's standard
> > > error is different, the one of the lme being about twice as large.
> > >
> > > I added some independent variables (not shown below) but the
> > > similarities between gls and lme remain, with only the AIC and BIC and
> > > the standard error of the fixed intercept being different for the two
> > > models.
> > >
> > > My question is threefold.
> > > 1) Suppose the individuals (say pupils) were nested in schools. Then I
> > > assume that with lme I could add school as a random effect and run a
> > > "usual" model, with pupils nested in schools and observations in
> > > pupils, and then use any of the possible residual covariance structures
> > > for the observations in pupils. Would you agree with this? (With gls
> > > one cannot use an additional random effect, like e.g. school.)
> > > 2) Are the lme results indeed to be trusted when using this "fake"
> > > random effect, apart from the differences with gls mentioned? Could you
> > > imagine a situation where lme with this trick would produce wrong
> > > results?
> > > 3) I don't understand the variance of the intercept in the lme output.
> > > Even when I specify a very simple model:
> > > lme(opp ~ 1, random = ~ 1 | one, opposites, method = "REML"),
> > > lme is able to estimate the intercept variance... but what is this
> > > variance estimate expressing?
> > >
> > > Thanks a lot for any help!!!
> > >
> > > Ben.
> > >
> > >
> > >
> > >
> > >
> > > #----------------- gls ----------------------------------------------
> > >
> > > > comsym <- gls(opp~1,opposites,
> > > correlation=corCompSymm(form = ~ 1 |id),
> > > method="REML")
> > > > summary(comsym)
> > >
> > > Generalized least squares fit by REML
> > > Model: opp ~ 1
> > > Data: opposites
> > > AIC BIC logLik
> > > 1460.954 1469.757 -727.4769
> > >
> > > Correlation Structure: Compound symmetry
> > > Formula: ~1 | id
> > > Parameter estimate(s):
> > > Rho
> > > 0.2757052
> > >
> > > Coefficients:
> > > Value Std.Error t-value p-value
> > > (Intercept) 204.8143 5.341965 38.34063 0
> > >
> > > Standardized residuals:
> > > Min Q1 Med Q3 Max
> > > -2.75474868 -0.71244027 0.00397158 0.56533908 2.24944156
> > >
> > > Residual standard error: 46.76081
> > > Degrees of freedom: 140 total; 139 residual
> > >
> > >
> > > #------------------ lme ---------------------------------------------
> > >
> > > > comsym.lme <- lme(opp~1,opposites, random= ~1|one,
> > > correlation=corCompSymm(form = ~ 1 |one/id),
> > > method="REML")
> > > > summary(comsym.lme)
> > >
> > > Linear mixed-effects model fit by REML
> > > Data: opposites
> > > AIC BIC logLik
> > > 1462.954 1474.692 -727.4769
> > >
> > > Random effects:
> > > Formula: ~1 | one
> > > (Intercept) Residual
> > > StdDev: 10.53875 46.76081
> > >
> > > Correlation Structure: Compound symmetry
> > > Formula: ~1 | one/id
> > > Parameter estimate(s):
> > > Rho
> > > 0.2757052
> > > Fixed effects: opp ~ 1
> > > Value Std.Error DF t-value p-value
> > > (Intercept) 204.8143 11.81532 139 17.33463 0
> > >
> > > Standardized Within-Group Residuals:
> > > Min Q1 Med Q3 Max
> > > -2.75474868 -0.71244027 0.00397158 0.56533908 2.24944156
> > >
> > > Number of Observations: 140
> > > Number of Groups: 1
> > >
> > >