[R-sig-ME] Minimum detectable effect size in linear mixed model

varin sacha varinsacha using yahoo.fr
Mon Jul 6 02:16:23 CEST 2020


 Dear Han,

I agree with your interpretation: a sensitivity analysis showing that a correlation of .6 would be needed for the desired power, in a situation where .2 would be typical, indicates an underpowered study. To achieve the desired sensitivity, we could increase the sample size, increase alpha (i.e., use .10 instead of .05), reduce our desired power (perhaps be satisfied with .80 or less instead of .90 or .95), or try to increase the effect size, perhaps by using better measures or a more intense treatment.

If we wish to determine an appropriate sample size, we specify alpha, power, and the effect size. Setting the effect size is tricky because we don't know the actual effect. A logical approach is to set the effect size at the smallest value that is considered to be important. If the effect size is larger, we will have even more power. If the effect size is smaller, we don't care much if the result is not statistically significant. 
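
In the mixed-model setting discussed in this thread, a rough sketch of that approach using simr might look like the following; the slope of 5 stands in for a hypothetical smallest effect of practical importance, and the subject counts in breaks are placeholders:

library(lmerTest)
library(simr)

# start from a fitted model (or one constructed with simr::makeLmer)
model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# set the fixed effect to the smallest effect size of practical importance
fixef(model)['Days'] <- 5

# enlarge the design, then ask how power grows with the number of subjects
model_big <- extend(model, along = "Subject", n = 60)
pc <- powerCurve(model_big, test = fixed('Days', "t"),
                 along = "Subject", breaks = c(20, 30, 40, 50, 60),
                 nsim = 100)
print(pc)

The smallest number of subjects whose simulated power reaches the target is then the required sample size for that minimal effect.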

I used the acronym BEAN to help people remember the four quantities involved in a power analysis:

B = beta error, where power = (1 - beta error)
E = effect size
A = alpha error rate
N = sample size

If you know any three, you can compute the fourth. 
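
As a simple illustration outside the mixed-model setting (a two-sample t-test via base R's power.t.test; the numbers below are arbitrary), leaving exactly one of the four unspecified makes the function solve for it:

# B: power, given effect size, alpha, and n per group
power.t.test(n = 50, delta = 0.5, sd = 1, sig.level = 0.05)

# E: detectable effect size, given n, alpha, and power
power.t.test(n = 50, sd = 1, sig.level = 0.05, power = 0.90)

# A: alpha (sig.level must be set to NULL explicitly, since it has a default)
power.t.test(n = 50, delta = 0.5, sd = 1, sig.level = NULL, power = 0.90)

# N: sample size per group, given effect size, alpha, and power
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90)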

Best 
Sacha

Sent from my iPhone

> On Jul 4, 2020, at 23:26, Han Zhang <hanzh using umich.edu> wrote:
> 
> 
> Hi Sacha,
> 
> Correct me if I'm wrong, but I tend to think this is more like a sensitivity analysis (given alpha, power, and N, solve for the required effect size). If the minimum detectable effect size at 80% power ends up so large that it exceeds the typical range in the field (say,  a .6 correlation is the minimum whereas a .2 is typically expected), then we may say the study is underpowered. So I think I made a mistake with question (2) - the MDES should be compared to an effect size with practical importance, not the observed effect size.
> 
> Han
> 
>> On Sat, Jul 4, 2020 at 12:07 PM varin sacha <varinsacha using yahoo.fr> wrote:
>> Hi,
>> 
>> Is the question about post hoc power analysis ?
>> 
>> Post hoc power analyses are usually not recommended (see, for example, Hoenig & Heisey, "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis").
>> You should do an a priori power analysis instead. If you skip it, run a small-sample study, and obtain a negative result, you have no idea why; you are stuck.
>> 
>> That is why I always tell people not to do a study where everything rides on a significant result.  It is an unnecessary gamble. 
>> 
>> It is always better to carry out an a priori power analysis, so that you know the Type II error rate and the power in case the test is not significant.
>> 
>> Also, it is very easy to estimate, a priori, the power for, say, a medium effect size, so there is little reason not to do that at the beginning.
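>> 
>> As a rough sketch of that in the mixed-model setting (the design, the assumed "medium" slope of 5, and the variance components below are all made-up placeholders), simr's makeLmer lets you build the model before any data exist:
>> 
>> library(simr)
>> 
>> # hypothetical design: 20 subjects measured on 10 days
>> dat <- data.frame(Subject = factor(rep(1:20, each = 10)),
>>                   Days    = rep(0:9, times = 20))
>> 
>> # assumed parameters: intercept, slope (the "medium" effect),
>> # random-intercept variance, residual SD
>> beta   <- c(250, 5)
>> model0 <- makeLmer(Reaction ~ Days + (1 | Subject),
>>                    fixef = beta, VarCorr = 25^2, sigma = 30, data = dat)
>> 
>> # estimated power to detect the assumed slope with this design
>> powerSim(model0, test = fixed('Days', "t"), nsim = 100)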
>> 
>> Best,
>> Sacha 
>> 
>> Sent from my iPhone
>> 
>> > On Jul 4, 2020, at 01:04, Patrick (Malone Quantitative) <malone using malonequantitative.com> wrote:
>> > 
>> > No, because I don't think it can be. That's not how power analysis works.
>> > It's bad practice.
>> > 
>> >> On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh using umich.edu> wrote:
>> >> 
>> >> Hi Pat,
>> >> 
>> >> Thanks for your quick reply. Yes, I already have the data and the actual
>> >> effects, and the analysis was suggested by a reviewer. Can you elaborate on
>> >> when you think such an analysis might be justified?
>> >> 
>> >> Thanks!
>> >> Han
>> >> 
>> >> On Fri, Jul 3, 2020 at 6:34 PM Patrick (Malone Quantitative) <
>> >> malone using malonequantitative.com> wrote:
>> >> 
>> >>> Han,
>> >>> 
>> >>> (1) Usually, yes, but . . .
>> >>> 
>> >>> (2) If you have an actual effect, does that mean you're doing post hoc
>> >>> power analysis? If so, that's a whole can of worms, for which the best
>> >>> advice I have is "don't do it." Use the size of the confidence
>> >>> interval of your estimate as an assessment of sample adequacy.
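>> >>> 
>> >>> For example, with the sleepstudy model fitted in the quoted code below,
>> >>> something like this (a sketch using lme4's confint method) gives an
>> >>> interval for the Days slope:
>> >>> 
>> >>> confint(model, parm = "Days", method = "Wald")     # quick Wald interval
>> >>> confint(model, parm = "Days", method = "profile")  # profile likelihood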
>> >>> 
>> >>> Pat
>> >>> 
>> >>> On Fri, Jul 3, 2020 at 6:27 PM Han Zhang <hanzh using umich.edu> wrote:
>> >>>> 
>> >>>> Hello,
>> >>>> 
>> >>>> I'm trying to find the minimum detectable effect size (MDES) given my
>> >>>> sample, alpha (.05), and desired power (90%) in a linear mixed model
>> >>>> setting. I'm using the simr package for a simulation-based approach.
>> >>>> What I did was change the original effect size to a series of
>> >>>> hypothetical effect sizes and find the minimum effect size that has a
>> >>>> 90% chance of producing a significant result. Below is some toy code:
>> >>>> 
>> >>>> library(lmerTest)
>> >>>> library(simr)
>> >>>> 
>> >>>> # fit the model
>> >>>> model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
>> >>>> summary(model)
>> >>>> 
>> >>>> Fixed effects:
>> >>>>            Estimate Std. Error      df t value Pr(>|t|)
>> >>>> (Intercept)  251.405      6.825  17.000  36.838  < 2e-16 ***
>> >>>> Days          10.467      1.546  17.000   6.771 3.26e-06 ***
>> >>>> 
>> >>>> 
>> >>>> Here is the code for minimum detectable effect size:
>> >>>> 
>> >>>> # define a set of reasonable effect sizes
>> >>>> es <- seq(0, 10, 2)
>> >>>> 
>> >>>> # vector to hold one power estimate per effect size
>> >>>> pwr <- rep(NA_real_, length(es))
>> >>>> 
>> >>>> # loop through the effect sizes
>> >>>> for (i in seq_along(es)) {
>> >>>>   # replace the original fixed effect with the hypothetical one
>> >>>>   fixef(model)['Days'] <- es[i]
>> >>>>   # run the simulation to obtain a power estimate
>> >>>>   pwr.summary <- summary(powerSim(
>> >>>>     model,
>> >>>>     test = fixed('Days', "t"),
>> >>>>     nsim = 100,
>> >>>>     progress = TRUE
>> >>>>   ))
>> >>>>   # store the estimated power (third element of the summary)
>> >>>>   pwr[i] <- as.numeric(pwr.summary)[3]
>> >>>> }
>> >>>> 
>> >>>> # display results
>> >>>> cbind(Coefficient = es, Power = pwr)
>> >>>> 
>> >>>> Output:
>> >>>> 
>> >>>>      Coefficient Power
>> >>>> [1,]           0  0.09
>> >>>> [2,]           2  0.24
>> >>>> [3,]           4  0.60
>> >>>> [4,]           6  0.99
>> >>>> [5,]           8  1.00
>> >>>> [6,]          10  1.00
>> >>>> 
>> >>>> My questions:
>> >>>> 
>> >>>> (1) Is this the right way to find the MDES?
>> >>>> 
>> >>>> (2) I have some trouble making sense of the output. Can I say the
>> >>>> following: because the estimated power when the effect = 6 is 99%, and
>> >>>> because the actual model has an estimate of 10.47, the study is
>> >>>> sufficiently powered? Conversely, if the actual estimate were 3.0,
>> >>>> could I say the study is insufficiently powered?
>> >>>> 
>> >>>> Thank you,
>> >>>> Han
>> >>>> --
>> >>>> Han Zhang, Ph.D.
>> >>>> Department of Psychology
>> >>>> University of Michigan, Ann Arbor
>> >>>> https://sites.lsa.umich.edu/hanzh/
>> >>>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> --
>> >>> Patrick S. Malone, Ph.D., Malone Quantitative
>> >>> NEW Service Models: http://malonequantitative.com
>> >>> 
>> >>> He/Him/His
>> >>> 
>> >> 
>> >> 
>> >> --
>> >> Han Zhang, Ph.D.
>> >> Department of Psychology
>> >> University of Michigan, Ann Arbor
>> >> https://sites.lsa.umich.edu/hanzh/
>> >> 
>> > 
>> 
> 
> 
> -- 
> Han Zhang, Ph.D.
> Department of Psychology
> University of Michigan, Ann Arbor
> https://sites.lsa.umich.edu/hanzh/
