[R-sig-ME] p-correction for effects in LMM

marKo mtoncic at ffri.uniri.hr
Wed Dec 1 13:48:20 CET 2021


Not sure what to say in this regard, as those methods will produce very 
similar results. If I recall correctly, Douglas Bates suggests computing 
a profile CI.
I usually do a bootstrap and do not think much about it (sorry to say 
that, actually).
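
For what it's worth, here is a minimal sketch of both approaches in lme4 
(the model below uses lme4's built-in sleepstudy data purely as a 
placeholder; substitute your own lmer fit for m):

library(lme4)

## placeholder model, just for illustration
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## likelihood-profile CIs (the approach attributed to Douglas Bates above)
confint(m, level = 0.95, method = "profile")

## parametric-bootstrap CIs for comparison
confint(m, level = 0.95, method = "boot", nsim = 1000)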

As for the number of cores (CORES in the mentioned code), it depends 
on the processor you have. To establish the maximum, on Windows open 
the Task Manager and see how many logical processors (threads) you have. 
On Linux, you can use a system monitor to check that.
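
You can also query this from within R itself; a small sketch (the 
parallel package ships with base R):

library(parallel)

## logical cores (threads) reported by the operating system
detectCores()

## a common conservative choice: leave one core free for the system
n.cores <- max(1, detectCores() - 1)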

Hope it helps,

Marko

  On 30. 11. 2021. 16:40, Victoria Pattison-Willits wrote:
> Hi there
> Thank you to the OP for sharing this question; I am following this
> thread as I was wondering which CIs are best to go with for mixed
> models. I have been calculating three different types (Wald, bootstrap and
> profile) and was not really sure which to use for mixed models (in my case
> very similar: an lmer with a nested random effect crossed with a second
> random effect and 8 fixed effects, no interaction terms). I have been on a
> massive learning curve, so I am still a little hazy on how the three
> approaches differ in how they calculate the CIs. I have been reporting the
> bootstrap CIs in my project, although there was only a very small difference
> when all three were plotted across all my fixed effects. In light of this
> question, I just want to check that this is the correct approach!
> 
> Also, one quick question related to the original question: how do you
> determine the number of CORES if I want to include that code? Does it
> depend on processing speed, etc.?
> 
> Cheers! This is my first question and I am still a relative novice, so
> thanks in advance for your patience with probably very simple questions! :)
> 
> Vicki PW
> 
> On Fri, Nov 26, 2021 at 2:30 PM marKo <mtoncic using ffri.uniri.hr> wrote:
> 
>> On 26. 11. 2021. 08:41, Bojana Dinic wrote:
>>> Dear colleagues,
>>>
>>>      I use a linear mixed model with one random effect (subject), two fixed
>>>      factors (one between-subjects, one repeated) and one covariate, and
>>>      explore all main effects, the 2-way interactions and one 3-way
>>>      interaction.
>>>      Depending on the software used, I sometimes get an effect for the
>>>      intercept and sometimes not. The reviewer asks me to apply a
>>>      p-adjustment to these effects. My dilemma is: should I apply the
>>>      p-correction to 7 tests or to 8 (including the random intercept for
>>>      subjects)?
>>>
>>>      The output does not contain an F for the random effect, only its
>>>      variance. The output also does not contain effect sizes. CIs are
>>>      available only for the betas for specific levels of the two fixed
>>>      effects and the covariate, but since I have 3 levels for the between
>>>      factor and 4 for the repeated factor, the output is not helpful, and
>>>      there is no possibility to change the reference group.
>>>      Thus, I'm stuck with the p-adjustment.
>>>
>>>      Any help is welcomed.
>>>      Thank you.
>>>
>>
>> As I understand it, p-values are somewhat unreliable in LMMs. As a
>> sensible alternative, maybe you could compute bootstrap CIs and use them
>> to infer the significance of specific effects (if I have understood
>> your problem correctly).
>> If you use lme4 or nlme, this should not be a problem.
>>
>> For a model m you can use
>>
>> confint(m, level = 0.95, method = "boot", nsim = No.of.SIMULATIONS)
>>
>> or even use multi-core processing to speed things up:
>>
>> confint(m, level = 0.95, method = "boot", parallel = "multicore",
>>         ncpus = No.of.CORES, nsim = No.of.SIMULATIONS)
>>
>> Replace No.of.SIMULATIONS with the desired number of bootstrap
>> replicates (1000 or so) and No.of.CORES with the desired number of
>> cores (this depends on your machine).
>>
>> Hope it helps.
>>
>>
>> --
>> Marko Tončić, PhD
>> Assistant professor
>> University of Rijeka
>> Faculty of Humanities and Social Sciences
>> Department of Psychology
>> Sveucilisna avenija 4, 51000 Rijeka, CROATIA
>> e-mail: mtoncic using ffri.uniri.hr
>>
> 
> 
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>


