[R-sig-ME] Minimum detectable effect size in linear mixed model
varin sacha
Sat Jul 4 18:07:12 CEST 2020
Hi,
Is the question about post hoc power analysis?
Post hoc power analyses are generally discouraged (see, for example, "The Abuse of Power" by Hoenig & Heisey).
You should do an a priori power analysis instead. If you run a small-sample study and obtain a negative result, you have no idea why; you are stuck.
That is why I always tell people not to do a study where everything rides on a significant result. It is an unnecessary gamble.
It is always better to carry out an a priori power analysis, so that you know the Type II error rate and the power in case the test is not significant.
Also, it is very easy to estimate, a priori, the power to detect, say, a medium effect size, so there is little reason not to do that from the start.
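As an illustration of that a priori calculation, base R's power.t.test() gives the power for a simple two-sample design (a sketch only, not specific to mixed models; the sample size and medium effect of d = 0.5 are assumptions for the example):

```r
# A priori power for a two-sample t-test, assuming a medium
# standardized effect (Cohen's d = 0.5) and n = 64 per group
p <- power.t.test(n = 64, delta = 0.5, sd = 1, sig.level = 0.05)
p$power  # about 0.80
```

For mixed models the same question is usually answered by simulation, as in the code further down this thread.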
Best,
Sacha
Sent from my iPhone
> On 4 Jul 2020, at 01:04, Patrick (Malone Quantitative) <malone using malonequantitative.com> wrote:
>
> No, because I don't think it can be. That's not how power analysis works.
> It's bad practice.
>
>> On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh using umich.edu> wrote:
>>
>> Hi Pat,
>>
>> Thanks for your quick reply. Yes, I already have the data and the actual
>> effects, and the analysis was suggested by a reviewer. Can you elaborate on
>> when you think such an analysis might be justified?
>>
>> Thanks!
>> Han
>>
>> On Fri, Jul 3, 2020 at 6:34 PM Patrick (Malone Quantitative) <
>> malone using malonequantitative.com> wrote:
>>
>>> Han,
>>>
>>> (1) Usually, yes, but . . .
>>>
>>> (2) If you have an actual effect, does that mean you're doing post hoc
>>> power analysis? If so, that's a whole can of worms, for which the best
>>> advice I have is "don't do it." Use the size of the confidence
>>> interval of your estimate as an assessment of sample adequacy.
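A minimal sketch of that confidence-interval check, plugging in the Days estimate (10.467) and standard error (1.546) from the lmer output quoted later in the thread (this is a Wald interval; confint() on the fitted merMod object would be more exact):

```r
# Wald 95% CI for the Days fixed effect, using the estimate and
# standard error reported by summary(model) below
est <- 10.467
se  <- 1.546
ci  <- est + c(-1, 1) * qnorm(0.975) * se
round(ci, 2)  # about 7.44 to 13.50
```

A narrow interval relative to the effect sizes you care about suggests the sample was adequate; a wide one suggests it was not.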
>>>
>>> Pat
>>>
>>> On Fri, Jul 3, 2020 at 6:27 PM Han Zhang <hanzh using umich.edu> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I'm trying to find the minimum detectable effect size (MDES) given my
>>>> sample, alpha (.05), and desired power (90%) in a linear mixed model
>>>> setting. I'm using the simr package for a simulation-based approach.
>>>> What I did was change the original effect size to a series of
>>>> hypothetical effect sizes and find the minimum effect size that has a
>>>> 90% chance of producing a significant result. Below is a toy example:
>>>>
>>>> library(lmerTest)
>>>> library(simr)
>>>>
>>>> # fit the model
>>>> model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
>>>> summary(model)
>>>>
>>>> Fixed effects:
>>>>             Estimate Std. Error      df t value Pr(>|t|)
>>>> (Intercept)  251.405      6.825  17.000  36.838  < 2e-16 ***
>>>> Days          10.467      1.546  17.000   6.771 3.26e-06 ***
>>>>
>>>>
>>>> Here is the code for minimum detectable effect size:
>>>>
>>>> # define a set of plausible effect sizes
>>>> es <- seq(0, 10, 2)
>>>>
>>>> # preallocate a vector for the power estimates
>>>> pwr <- numeric(length(es))
>>>>
>>>> # loop through the effect sizes
>>>> for (i in seq_along(es)) {
>>>>   # replace the fitted effect size with the hypothetical one
>>>>   fixef(model)["Days"] <- es[i]
>>>>   # run the simulation to obtain a power estimate
>>>>   # (nsim = 100 is coarse; use more for a stable estimate)
>>>>   pwr.summary <- summary(powerSim(
>>>>     model,
>>>>     test = fixed("Days", "t"),
>>>>     nsim = 100,
>>>>     progress = TRUE
>>>>   ))
>>>>   # store the estimated power (the "mean" column of the summary)
>>>>   pwr[i] <- pwr.summary$mean
>>>> }
>>>>
>>>> # display results
>>>> cbind("Coefficient" = es,
>>>> Power = pwr)
>>>>
>>>> Output:
>>>>
>>>> Coefficient Power
>>>> [1,] 0 0.09
>>>> [2,] 2 0.24
>>>> [3,] 4 0.60
>>>> [4,] 6 0.99
>>>> [5,] 8 1.00
>>>> [6,] 10 1.00
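Given a power curve like that table, one way to read off an approximate MDES is to find where the curve crosses the 90% target (a sketch using linear interpolation on the simulated values above; a finer grid of effect sizes and more simulations per point would sharpen the answer):

```r
# Simulated power curve from the table above
es  <- c(0, 2, 4, 6, 8, 10)
pwr <- c(0.09, 0.24, 0.60, 0.99, 1.00, 1.00)

# Smallest grid value reaching 90% power (conservative)
mdes_grid <- min(es[pwr >= 0.90])  # 6

# Linear interpolation of the 90% crossing point
mdes <- approx(x = pwr, y = es, xout = 0.90, ties = "ordered")$y
round(mdes, 2)  # about 5.54
```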
>>>>
>>>> My questions:
>>>>
>>>> (1) Is this the right way to find the MDES?
>>>>
>>>> (2) I have some trouble making sense of the output. Can I say the
>>>> following: because the estimated power when the effect = 6 is 99%, and
>>>> because the actual model has an estimate of 10.47, the study is
>>>> sufficiently powered? Conversely, if the actual estimate were 3.0,
>>>> could I say the study is insufficiently powered?
>>>>
>>>> Thank you,
>>>> Han
>>>> --
>>>> Han Zhang, Ph.D.
>>>> Department of Psychology
>>>> University of Michigan, Ann Arbor
>>>> https://sites.lsa.umich.edu/hanzh/
>>>>
>>>>
>>>> _______________________________________________
>>>> R-sig-mixed-models using r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>
>>>
>>>
>>> --
>>> Patrick S. Malone, Ph.D., Malone Quantitative
>>> NEW Service Models: http://malonequantitative.com
>>>
>>> He/Him/His
>>>
>>
>>
>> --
>> Han Zhang, Ph.D.
>> Department of Psychology
>> University of Michigan, Ann Arbor
>> https://sites.lsa.umich.edu/hanzh/
>>
>
>