[R-sig-ME] Contrasts among estimates of fixed effects
Seth W. Bigelow
seth at swbigelow.net
Thu Nov 1 20:00:57 CET 2012
Aha. That makes a lot of sense. As it turns out, there is an a priori reason
for thinking that variance may differ among Exams -- this is a forestry
study, and one of the criticisms of the kind of treatment I am investigating
is that it may 'homogenize' the forest. I see now that this is reflected in
much smaller variance for P1 and P5 (i.e., post-treatment) than for P0, the
'pre-treatment' exam.
I set the random term to '~Exam|SettingID', which estimates a separate
random-effect variance for each level of Exam (along with their correlations):
m1 <- lme(BA ~ Exam, data=s2, random = ~Exam|SettingID, method="ML")
Then compared the likelihood to that of the lumped-variance model --
m0 <- lme(BA ~ Exam, data=s2, random = ~1|SettingID, method="ML")
anova(m0,m1)
   Model df      AIC      BIC    logLik   Test  L.Ratio p-value
m0     1  5 1758.785 1775.072 -874.3924
m1     2 10 1678.630 1711.205 -829.3151 1 vs 2 90.15449  <.0001
The result is that the separate-variance model fits far better than the
pooled-variance model. When I refit the separate-variance model under
restricted maximum likelihood (as m2) and apply the desired contrast statement,
> contrast(m2, list(Exam='P1'), list(Exam='P5'))
lme model parameter contrast
   Contrast     S.E.     Lower     Upper    t  df Pr(>|t|)
1 -4.013438 1.179034 -6.324301 -1.702574 -3.4 182    8e-04
Voila! The contrast is significant! I feel that is a reasonably defensible
procedure. Thank you very much, Paul Thompson, for providing the insights to
set me in the right direction. --Seth W. Bigelow
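A self-contained sketch of this comparison follows (the data are simulated, since s2 itself is not posted; the true effects and SDs below are illustrative guesses based on the numbers in this thread). It also shows nlme's varIdent() weights, an alternative way to allow one residual SD per Exam level rather than the random = ~Exam | SettingID formulation:

```r
# Sketch only: simulated stand-in for s2 (64 settings x 3 exams), with
# larger scatter at P0 than at P1/P5, as described in the thread.
library(nlme)

set.seed(42)
s2 <- data.frame(
  SettingID = factor(rep(1:64, each = 3)),
  Exam      = factor(rep(c("P0", "P1", "P5"), times = 64))
)
setting_eff <- rnorm(64, sd = 25)                 # random setting intercepts
exam_eff    <- c(P0 = 0,  P1 = -15, P5 = -11)     # illustrative fixed effects
resid_sd    <- c(P0 = 25, P1 = 10,  P5 = 10)      # heterogeneous residual SDs
s2$BA <- 72 + setting_eff[as.integer(s2$SettingID)] +
  exam_eff[as.character(s2$Exam)] +
  rnorm(nrow(s2), sd = resid_sd[as.character(s2$Exam)])

# Pooled-variance model vs. a heterogeneous-residual-variance model:
m0 <- lme(BA ~ Exam, data = s2, random = ~1 | SettingID, method = "ML")
mv <- lme(BA ~ Exam, data = s2, random = ~1 | SettingID,
          weights = varIdent(form = ~1 | Exam),   # one residual SD per Exam
          method = "ML")
anova(m0, mv)   # likelihood-ratio test, analogous to anova(m0, m1) above
```

The varIdent() route keeps the single random intercept and puts the heterogeneity directly in the residual variance, which is often easier to interpret (and to converge) than a full random-slope covariance for Exam.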
-----Original Message-----
From: Thompson,Paul [mailto:Paul.Thompson at SanfordHealth.org]
Sent: Thursday, November 01, 2012 11:23 AM
To: Seth W. Bigelow
Subject: RE: [R-sig-ME] Contrasts among estimates of fixed effects
I don't use R, but do a lot of stuff with contrasts in SAS. So, here are my
comments.
When you have a model with three levels like this, you construct an error
estimate from all 3. This error estimate is used to assess the significance
of the difference between levels. If you exclude a level, as you do with a t
test involving only P1 and P5, you are constructing a different error term. My
suspicion in this case is that the within cell variability for P0 is higher
than for P1 and P5. If that is the case, then you would obtain the results
that you see. If the within cell variability is HUGELY greater, you may be
justified in looking at P1-P5 alone.
Otherwise, I think that the contrast approach is the better one. You
designed the experiment with 3 levels, and all should contribute to the
error term, as is the case with a contrast evaluation of the differences
between groups.
-----Original Message-----
From: r-sig-mixed-models-bounces at r-project.org
[mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Seth W.
Bigelow
Sent: Thursday, November 01, 2012 10:07 AM
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] Contrasts among estimates of fixed effects
Hello mixed-modelers:
I am confused about detection of differences among fixed effect estimates in
mixed models. I am using lme() to compare the effect of the factor Exam (3
levels: P0,P1,P5) on the continuous variable BA. Measurements were done on
64 Plots. This is modeled with the statement m0 <- lme(BA ~ Exam, data = s2,
random = ~1 | Plot). summary(m0) tells me that Exam has a highly significant
effect, and that both P1 and P5 differ highly significantly from P0, but not
whether P1 and P5 differ. If I use a contrast statement (library(contrast);
contrast(m0, list(Exam='P1'), list(Exam='P5'))), the contrast is not
significant (p=0.16). However, if I just use a paired t-test, the
difference is significant at p=0.001. I also get significant results with
lme if I just delete the 'P0' level. I would be grateful if anyone could
point me in the right direction for making a robust (yet sensitive)
determination of whether these factor levels differ. I possess a copy of the
Ur-text for R mixed models (i.e., Pinheiro & Bates) but it has not yet
provided insight. Appreciatively, Seth W. Bigelow
Linear mixed-effects model fit by REML
Data: s2
AIC BIC logLik
1746.913 1763.122 -868.4565
Random effects:
Formula: ~1 | Plot
(Intercept) Residual
StdDev: 25.779 16.18827
Fixed effects: BA ~ Exam
                Value Std.Error  DF   t-value p-value
(Intercept)  72.01422  3.805048 126 18.925968   0e+00
ExamP1      -15.01266  2.861708 126 -5.246047   0e+00
ExamP5      -10.99922  2.861708 126 -3.843585   2e-04
Correlation:
(Intr) ExamP1
ExamP1 -0.376
ExamP5 -0.376 0.500
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-1.74478652 -0.54972545 0.02464379 0.41020121 3.24236603
Number of Observations: 192
Number of Groups: 64
CONTRAST statement
lme model parameter contrast
   Contrast     S.E.     Lower     Upper    t  df Pr(>|t|)
1 -4.013438 2.861708 -9.622282  1.595407 -1.4 187   0.1624
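One quick cross-check on the P1-vs-P5 question, independent of the contrast package, is to relevel Exam so that P1 is the reference level; the ExamP5 row of summary() then tests P5 - P1 directly. A minimal sketch with simulated data (object and column names assumed from the thread):

```r
# Sketch: read the P1-vs-P5 comparison off summary() by releveling the factor.
# Data are simulated, since s2 itself is not posted.
library(nlme)

set.seed(1)
s2 <- data.frame(
  Plot = factor(rep(1:64, each = 3)),
  Exam = factor(rep(c("P0", "P1", "P5"), times = 64))
)
s2$BA <- 72 + rnorm(64, sd = 25)[as.integer(s2$Plot)] +
  c(P0 = 0, P1 = -15, P5 = -11)[as.character(s2$Exam)] +
  rnorm(nrow(s2), sd = 16)

s2$Exam <- relevel(s2$Exam, ref = "P1")       # make P1 the baseline
m0 <- lme(BA ~ Exam, data = s2, random = ~1 | Plot)
summary(m0)$tTable["ExamP5", ]                # now the P5 - P1 comparison
```

Note this still pools the residual variance across all three Exam levels, so it answers a different question from the separate-variance fit discussed later in the thread.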