[R-sig-ME] p-correction for effects in LMM

marko mtoncic sending from ffri.uniri.hr
Tue Dec 7 23:32:14 CET 2021


I think that the only viable option (at least the only one I know of; 
someone from the group, please back me up on this) is to compare 
competing/nested models (the ones with and without some specific 
parameters, e.g. the model without and with the interaction parameters) 
via an LR test ("anova(m1, m2)" in R).
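
For example, a minimal sketch with lme4 (the data frame "dat", the 
response "y" and the factors "cond", "rep" and "subject" are just 
placeholders borrowed from later in this thread, not your actual data):

library(lme4)

## model without the interaction ...
m1 <- lmer(y ~ cond + rep + (1 | subject), data = dat, REML = FALSE)
## ... and with it
m2 <- lmer(y ~ cond * rep + (1 | subject), data = dat, REML = FALSE)

## likelihood-ratio test of the interaction terms
anova(m1, m2)

(anova() refits REML fits with ML before the comparison anyway, but 
fitting with REML = FALSE from the start makes the comparison explicit.)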

As an estimate of effect size, you can compute omega^2 (even though it 
is just a pseudo-R^2 measure, essentially a squared correlation between 
predicted and observed values) for those competing/nested models. See 
more in:

Xu, R. (2003). Measuring explained variation in linear mixed effects 
models. Statistics in Medicine, 22(22), 3527–3541. 
https://doi.org/10.1002/sim.1572

I think that some of this is implemented in the package "sjstats", if 
that might be of any help.
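
By hand, the two quantities can be sketched like this (again just a 
sketch, reusing the hypothetical models from above):

## squared correlation between fitted and observed values (the pseudo-R^2
## described above)
r2 <- cor(fitted(m2), dat$y)^2

## Xu's (2003) Omega^2: 1 minus the ratio of the residual variance of the
## full model to that of an intercept-only model with the same random effect
m0 <- lmer(y ~ 1 + (1 | subject), data = dat, REML = FALSE)
omega2 <- 1 - sigma(m2)^2 / sigma(m0)^2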

Cheers,

Marko



On 12/6/21 11:04 PM, Bojana Dinic wrote:
> Dear Marko,
> Yes, I need effect sizes for the F tests, or a p-adjustment for them. So, 
> is there any procedure to obtain effect sizes? And if I use p-adjustment, 
> I am not sure whether I need to include the random effect in the 
> calculation or not.
> Thank you.
>
> Regards,
> Bojana
>
> On 04-Dec-21 23:38, marKo wrote:
>> I must admit that I do not understand what it is that you are asking. 
>> Those CIs are for the parameters of your model (a 3x4 model + random 
>> effects: subject + residual).
>> The reference levels here are cond2 and rep1.
>> Maybe the problem is that you would like to have an F statistic for the 
>> main effects and for the interaction; I do not know.
>>
>> Marko
>>
>>
>> On 04. 12. 2021. 11:26, Bojana Dinic wrote:
>>> Dear Marko,
>>>
>>> Thank you. I have a question: these are CIs for which statistics? (I 
>>> have 2 factors, cond and rep, and their interaction.)
>>>                           2.5 %    97.5 %
>>> .sig01        8.6062568 12.035500
>>> .sigma       12.8489647 14.841375
>>> (Intercept)  -2.1807253  9.047992
>>> cond1        -4.1126524 11.296070
>>> cond3        -5.8346317  7.649526
>>> rep2         -8.0280001  5.220439
>>> rep3         -4.0846168  9.194793
>>> rep4          6.1875602 18.878770
>>> cond1:rep2  -11.2367698  8.031134
>>> cond3:rep2   -7.9112913  7.491230
>>> cond1:rep3   -8.2536791 10.129607
>>> cond3:rep3   -5.9766989 10.071404
>>> cond1:rep4   -0.9371539 17.551132
>>> cond3:rep4   -4.2738610 11.020311
>>>
>>> Kind regards,
>>> Bojana
>>>
>>> On 26-Nov-21 20:29, marKo wrote:
>>>> On 26. 11. 2021. 08:41, Bojana Dinic wrote:
>>>>> Dear colleagues,
>>>>>
>>>>>      I use linear mixed models with 1 random effect (subject), 2 fixed
>>>>>      factors (one is a between factor and the other is repeated) and one
>>>>>      covariate, and explore all the main effects, the 2-way interactions
>>>>>      and one 3-way interaction.
>>>>>      Depending on the software used, I sometimes get an effect of the
>>>>>      intercept and sometimes not. A reviewer asks me to use a
>>>>>      p-adjustment for these effects. My dilemma is: should I apply the
>>>>>      p-correction for 7 tests or for 8 (including the random intercept
>>>>>      for subjects)?
>>>>>
>>>>>      The output does not contain an F for the random effect, only a
>>>>>      variance. Also, the output does not contain effect sizes. CIs are
>>>>>      available only for the betas for specific levels (and level
>>>>>      combinations) of both fixed effects and the covariate, but since I
>>>>>      have 3 levels for the between and 4 for the repeated effect, the
>>>>>      output is not helpful + there is no possibility to change the
>>>>>      reference group.
>>>>>      Thus, I'm stuck with the p-adjustment.
>>>>>
>>>>>     Any help is welcomed.
>>>>>      Thank you.
>>>>>
>>>>
>>>> As I understand it, p-values are somewhat unreliable in LMMs. As a 
>>>> sensible alternative, maybe you could compute bootstrap CIs and use 
>>>> them to infer the significance of specific effects (if I have 
>>>> understood your problem correctly).
>>>> If you use lme4 or nlme, this should not be a problem.
>>>>
>>>> You can use (for a model m)
>>>>
>>>> confint(m, level=0.95, method="boot", nsim=No.of.SIMULATIONS)
>>>>
>>>> or even use some multi-core processing to speed things up:
>>>>
>>>> confint(m, level=0.95, method="boot", parallel = "multicore",
>>>>         ncpus = No.of.CORES, nsim=No.of.SIMULATIONS)
>>>>
>>>> Replace No.of.SIMULATIONS with the desired number of simulations 
>>>> (1000 or so) and No.of.CORES with the desired number of cores 
>>>> (this depends on your machine).
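>>>>
>>>> For instance, a minimal self-contained sketch (the data frame "dat", 
>>>> the response "y" and the grouping factor "subject" below are only 
>>>> placeholders, not your actual data):
>>>>
>>>> library(lme4)
>>>> ## hypothetical model with the 3x4 fixed-effect structure from this thread
>>>> m <- lmer(y ~ cond * rep + (1 | subject), data = dat)
>>>> ## parametric-bootstrap CIs on 4 cores with 1000 simulations
>>>> confint(m, level = 0.95, method = "boot",
>>>>         parallel = "multicore", ncpus = 4, nsim = 1000)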
>>>>
>>>> Hope it helps.
>>>>
>>>>
>>>
>>
>
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models


