[R-sig-ME] using lme for cov structures?
Thierry Onkelinx
thierry.onkelinx at inbo.be
Wed Jul 26 14:44:14 CEST 2017
Dear Ben,
Please be more specific about the kind of model you want to fit. That would
lead to a more relevant discussion than your potential misuse of lme. A
reproducible example is always useful...
Best regards,
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey
2017-07-26 13:36 GMT+02:00 Ben Pelzer <b.pelzer at maw.ru.nl>:
> Dear list,
>
> With longitudinal data, the package nlme offers the possibility to
> specify particular covariance structures for the residuals. In the
> examples below I used data from 35 individuals measured at 4 time
> points, so we have 4 observations nested in each of 35 individuals.
> These data are discussed in Singer and Willett, Applied Longitudinal
> Data Analysis, chapter 7.
>
> In several sources I found that e.g. compound symmetry can easily be
> obtained with gls from package nlme, by using the correlation structure
> corCompSymm, as in:
>
> comsym <- gls(opp~1,opposites,
> correlation=corCompSymm(form = ~ 1 |id), method="REML")
>
> where id is the subject identifier for the individual. With gls, however,
> one cannot specify random effects, as opposed to lme. Having lme estimate
> compound symmetry is simple: one would not even have to specify a
> particular correlation structure, it would suffice to say
> "random=~1|id". However, for more complex covariance structures, e.g.
> heterogeneous compound symmetry, I was only able to find syntax for gls,
> but not for lme.
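>
> (To illustrate what I mean by heterogeneous compound symmetry, here is
> a minimal sketch of the gls syntax I have in mind. It assumes the four
> measurement occasions are stored in a factor called "wave", which may
> not be the actual variable name in the data:
>
> hetcs <- gls(opp ~ 1, data = opposites,
>              correlation = corCompSymm(form = ~ 1 | id),  # common correlation
>              weights = varIdent(form = ~ 1 | wave),  # occasion-specific variances
>              method = "REML")
>
> As far as I can tell, lme accepts the same weights argument, but I am
> not sure whether combining it with random = ~ 1 | id reproduces exactly
> this heterogeneous compound symmetry structure.)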
>
> Then I thought that tricking lme might be an option by using a kind of
> "fake" random effect. That is, I constructed a variable "one" which
> takes the value 1 for all cases, and let the intercept be random across
> this single "group". This led to the following lme model:
>
>
> comsym.lme <- lme(opp~1,opposites, random= ~1|one,
> correlation=corCompSymm(form = ~ 1 |one/id),
> method="REML")
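>
> (For completeness, a minimal sketch of how the constant might be
> constructed, assuming the data frame is called opposites as above:
>
> opposites$one <- 1  # the same value for every row, so a single "group"
>
> so that random = ~ 1 | one fits one random intercept for the whole
> sample.)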
>
> And, to my surprise, the results of this lme are highly similar to
> those of the gls above.
> The output of both is attached below. The log-likelihoods are identical;
> the AIC and BIC are not, which I can understand, as the lme has an extra
> variance to be estimated compared to the gls. Also, the fixed
> intercept's standard error differs, the lme's being about twice as
> large.
>
> I added some independent variables (not shown below), but the
> similarities between gls and lme remain, with only the AIC, the BIC and
> the standard error of the fixed intercept differing between the two
> models.
>
> My question is threefold.
> 1) Suppose the individuals (say pupils) were nested in schools. Then
> I assume that with lme I could add school as a random effect and run a
> "usual" model, with pupils nested in schools and observations nested in
> pupils, and then use any of the available residual covariance structures
> for the observations within pupils; see the sketch after these questions.
> Would you agree with this? (With gls one cannot use an additional random
> effect, such as school.)
> 2) Are the lme results indeed to be trusted when using this "fake"
> random effect, apart from the differences with gls mentioned? Could you
> imagine a situation where lme with this trick would produce wrong results?
> 3) I don't understand the variance of the intercept in the lme output.
> Even when I specify a very simple model: lme(opp ~ 1, random = ~1|one,
> opposites, method="REML")
> lme is able to estimate the intercept variance...but what is this
> variance estimate expressing?
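>
> (For question 1, a minimal sketch of the kind of model I have in mind.
> It assumes a hypothetical grouping variable "school", which is not in
> the opposites data, and an integer time index "wave":
>
> mod.school <- lme(opp ~ 1, data = opposites,
>                   random = ~ 1 | school/id,  # pupils nested in schools
>                   correlation = corAR1(form = ~ wave | school/id),
>                   method = "REML")
>
> i.e. random intercepts for schools and for pupils within schools, plus
> an AR(1) residual correlation over the waves within each pupil.)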
>
> Thanks a lot for any help!!!
>
> Ben.
>
>
>
>
>
> #----------------- gls ----------------------------------------------------
>
> > comsym <- gls(opp~1,opposites,
> correlation=corCompSymm(form = ~ 1 |id), method="REML")
> > summary(comsym)
>
> Generalized least squares fit by REML
> Model: opp ~ 1
> Data: opposites
> AIC BIC logLik
> 1460.954 1469.757 -727.4769
>
> Correlation Structure: Compound symmetry
> Formula: ~1 | id
> Parameter estimate(s):
> Rho
> 0.2757052
>
> Coefficients:
> Value Std.Error t-value p-value
> (Intercept) 204.8143 5.341965 38.34063 0
>
> Standardized residuals:
> Min Q1 Med Q3 Max
> -2.75474868 -0.71244027 0.00397158 0.56533908 2.24944156
>
> Residual standard error: 46.76081
> Degrees of freedom: 140 total; 139 residual
>
>
> #------------------ lme ---------------------------------------------------
>
> > comsym.lme <- lme(opp~1,opposites, random= ~1|one,
> correlation=corCompSymm(form = ~ 1 |one/id),
> method="REML")
> > summary(comsym.lme)
>
> Linear mixed-effects model fit by REML
> Data: opposites
> AIC BIC logLik
> 1462.954 1474.692 -727.4769
>
> Random effects:
> Formula: ~1 | one
> (Intercept) Residual
> StdDev: 10.53875 46.76081
>
> Correlation Structure: Compound symmetry
> Formula: ~1 | one/id
> Parameter estimate(s):
> Rho
> 0.2757052
> Fixed effects: opp ~ 1
> Value Std.Error DF t-value p-value
> (Intercept) 204.8143 11.81532 139 17.33463 0
>
> Standardized Within-Group Residuals:
> Min Q1 Med Q3 Max
> -2.75474868 -0.71244027 0.00397158 0.56533908 2.24944156
>
> Number of Observations: 140
> Number of Groups: 1
>
>