[R-sig-ME] p-values from model fitting or glht?
Sorkin, John
jsorkin at som.umaryland.edu
Thu Jun 7 02:14:50 CEST 2018
Cristiano,
Is the difference in the p-values simply due to the difference in the tests being done? The summary() output uses a Student's t-test, which with 15 degrees of freedom has a two-tailed critical value of 2.131 at p = 0.05, while the post-hoc test uses a z-test, whose two-tailed critical value at p = 0.05 is 1.96. One should also note that the SE for the summary statistic is 1.42 while the SE for the post-hoc test is 1.37. I don't know why these values differ; I suspect they are calculated differently.
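If it helps, both p-values can be reproduced from the numbers in your output; here is a quick sketch (the estimate and standard errors below are copied from your summary() and glht() printouts):

est <- 3.027705                           # fixed-effect estimate for grf1
2 * pt(-abs(est / 1.4158346), df = 15)    # t reference, 15 df -> ~0.0493
2 * pnorm(-abs(est / 1.372))              # z reference        -> ~0.027 (glht reports 0.0274)
qt(0.975, df = 15)                        # t critical value, 2.131
qnorm(0.975)                              # z critical value, 1.960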
For small sample sizes the Wald statistic (the difference of two means divided by the SE of the difference, which when applied to the post-hoc test is equivalent to a z-test) can be too liberal. In such cases a likelihood ratio test has been recommended to me.
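If you want to try the likelihood ratio route, here is a minimal sketch reusing the formula from your message (I am assuming dat_trf, values, grf and id are as in your code, and I have dropped your control = lCtr argument; both fits use method = "ML" so the likelihoods are comparable):

library(nlme)
fit_full <- lme(values ~ grf, random = ~1|id, data = dat_trf,
                na.action = na.omit, method = "ML")
fit_null <- lme(values ~ 1, random = ~1|id, data = dat_trf,
                na.action = na.omit, method = "ML")
anova(fit_null, fit_full)   # likelihood ratio test for the grf effect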
I hope this helps, or at least does not confuse.
John
John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)
________________________________
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> on behalf of Cristiano Alessandro <cri.alessandro at gmail.com>
Sent: Wednesday, June 6, 2018 5:30 PM
To: R Mixed Models
Subject: [R-sig-ME] p-values from model fitting or glht?
Hi all,
I am running a mixed model to compare two groups in a repeated-measures
design. The model is very simple:
library(nlme)   # lme() is from nlme; lCtr holds my lmeControl() settings
linM11 <- lme(values ~ grf, random = ~1|id, data = dat_trf,
              na.action = na.omit, method = "ML", control = lCtr)
where grf is a factor with two levels. I am asking if the two levels are
significantly different. When I call summary() I obtain (among the other
things):
> summary(linM11)
Fixed effects: values ~ grf
                Value Std.Error DF  t-value p-value
(Intercept) -8.513064 0.9908567 16 -8.59162  0.0000
grf1         3.027705 1.4158346 15  2.13846  0.0493
which makes me think that the two groups are barely significantly
different. But if I run this post-hoc test, I get this other (quite
different) result:
> library(multcomp)   # glht() is from multcomp
> ph_conditional <- c("grf1 = 0")
> linM.ph <- glht(linM11, linfct = ph_conditional)
> summary(linM.ph)
Linear Hypotheses:
          Estimate Std. Error z value Pr(>|z|)
grf1 == 0    3.028      1.372   2.206   0.0274 *
Which one should I trust? I am always confused about whether I should use
the p-values from the model fit or those from the post-hoc tests. If I had
multiple tests, I would certainly run the post-hoc tests and adjust for
multiple comparisons (with glht). But here there is only one test, and I am
not sure why I get such different results, nor which one I should trust.
Thanks
Cristiano