[R-sig-ME] adjusted values
Rune Haubo
rune.haubo at gmail.com
Thu Mar 22 21:16:59 CET 2018
Maybe we are confusing ourselves here. Cristiano, you say that you
are using lme4, but the output looks more like that from lme (nlme
package). If the latter is the case, the lmerTest package is not
directly related to your situation.
Otherwise I agree with Ben that whether MC corrections are appropriate
depends on the context. And about the coefficients: they are not
adjusted or corrected.
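You can check this directly. Below is a minimal sketch on simulated data
(the data, effect sizes, and grouping are invented for illustration, not
Cristiano's dataset): glht() defaults to the single-step method, which uses
the joint distribution of the test statistics, and you can switch the
adjustment off, or to Bonferroni, with adjusted():

```r
## Minimal sketch -- simulated data, invented for illustration only
library(lme4)      # lmer()
library(multcomp)  # glht(), mcp(), adjusted()

set.seed(1)
d <- data.frame(
  des_days = factor(rep(c(-1, 1, 14, 48), each = 60)),
  ratID    = factor(rep(1:12, times = 20))
)
d$y <- 0.8 + rnorm(nrow(d), sd = 0.05)

fit <- lmer(y ~ des_days + (1 | ratID), data = d)

## Dunnett-type contrasts: each level compared to the reference (-1),
## i.e. the same comparisons as the model's own coefficients
gh <- glht(fit, linfct = mcp(des_days = "Dunnett"))

summary(gh)                                 # default: single-step, not Bonferroni
summary(gh, test = adjusted("none"))        # unadjusted; matches the model's z-tests
summary(gh, test = adjusted("bonferroni"))  # explicit Bonferroni (more conservative)
```

With adjusted("none") the z-tests reproduce the unadjusted coefficient
tests, which shows that the coefficients themselves carry no correction.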
Cheers
Rune
On 22 March 2018 at 19:08, Ben Bolker <bbolker at gmail.com> wrote:
>
> summary() via lmerTest incorporates finite-size corrections, but not
> multiple-comparisons corrections. glht does the opposite. In this case
> your finite-size corrections are pretty much irrelevant though (in this
> context 962 ≈ infinity).
>
> By convention, people don't usually bother with MC corrections when
> they're testing pre-defined contrasts from a single model, but I don't
> know that there's a hard-and-fast rule (if I were testing the effects of a
> large number of treatments within a single model I might indeed use MC;
> I probably wouldn't bother for n=4).
>
> I don't know exactly what kind of MC correction glht does, but it
> probably shouldn't be Bonferroni (which is very conservative, and
> ignores correlations among the tests).
>
> On 18-03-22 01:28 PM, Cristiano Alessandro wrote:
>> Hi all,
>>
>> I am fitting a linear mixed model with lme4 in R. The model has a single
>> factor (des_days) with 4 levels (-1, 1, 14, 48), and I am using random
>> intercepts and slopes.
>>
>> Fixed effects: data ~ des_days
>> Value Std.Error DF t-value p-value
>> (Intercept) 0.8274313 0.007937938 962 104.23757 0.0000
>> des_days1 -0.0026322 0.007443294 962 -0.35363 0.7237
>> des_days14 -0.0011319 0.006635512 962 -0.17058 0.8646
>> des_days48 0.0112579 0.005452614 962 2.06469 0.0392
>>
>> I can clearly use the previous results to compare the estimate for each
>> "des_days" level to the intercept, using the provided t-statistics. Alternatively,
>> I could use post-hoc tests (z-statistics):
>>
>>> ph_conditional <- c("des_days1 = 0",
>> "des_days14 = 0",
>> "des_days48 = 0");
>>> lev.ph <- glht(lev.lm, linfct = ph_conditional);
>>> summary(lev.ph)
>>
>> Simultaneous Tests for General Linear Hypotheses
>>
>> Fit: lme.formula(fixed = data ~ des_days, data = data_red_trf, random
>> = ~des_days |
>> ratID, method = "ML", na.action = na.omit, control = lCtr)
>>
>> Linear Hypotheses:
>> Estimate Std. Error z value Pr(>|z|)
>> des_days1 == 0 -0.002632 0.007428 -0.354 0.971
>> des_days14 == 0 -0.001132 0.006622 -0.171 0.996
>> des_days48 == 0 0.011258 0.005441 2.069 0.101
>> (Adjusted p values reported -- single-step method)
>>
>>
>> The p-values of the coefficient estimates and those of the post-hoc tests
>> differ because the latter are adjusted with Bonferroni correction. I wonder
>> whether there is any form of correction in the coefficient estimates of the
>> LMM, and which p-values are more appropriate to use.
>>
>> Thanks
>> Cristiano
>>
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
>