[R-sig-ME] Minimum detectable effect size in linear mixed model

Thierry Onkelinx thierry.onkelinx at inbo.be
Sat Jul 4 23:47:31 CEST 2020


Dear Han,

As mentioned earlier, a power analysis is only relevant _before_ you do the
study, to avoid running an underpowered study in the first place. Doing a
post-hoc power analysis on an underpowered study is putting the cart before
the horse.

Once you have done the analysis, look at the confidence intervals of the
estimates instead (see the sketch after this list):
- non-significant, and the values in the CI are small compared to the
practically relevant range: sufficient power
- non-significant, and the values in the CI are similar to or larger than
the practically relevant range: underpowered
- significant, and the values in the CI are similar to or larger than the
practically relevant range: sufficient power
- significant, and the values in the CI are small compared to the
practically relevant range: overpowered
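
For example, a minimal sketch of this check using the sleepstudy model from
the thread below (the practical threshold of 5 is made up for
illustration):

library(lme4)

model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Wald confidence intervals for the fixed effects (the random-effect rows
# are NA with this method; use method = "profile" or "boot" for those)
ci <- confint(model, method = "Wald")
ci["Days", ]

# hypothetical smallest slope that would matter in practice
practical <- 5

# significant (the CI excludes 0) and the whole CI lies above the
# practical threshold: sufficient power by the checklist above
ci["Days", ] > practical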

Note that you should not vary only the coefficient of interest. At the very
least, also take the uncertainty of the random-effect variances into
account; don't underestimate their effect on the power. The uncertainty on
these variances can be substantial, especially when the design has a small
number (< 200) of levels for the random effect.
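
As a sketch of what that could look like: simr also provides a VarCorr<-
replacement function (see help("modify", package = "simr")), so you can
cross hypothetical fixed-effect sizes with scaled random-effect variances.
The grid values below are arbitrary, and I am assuming VarCorr<- accepts a
plain covariance matrix for a single grouping factor:

library(lmerTest)
library(simr)

model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# fitted 2x2 random-effect covariance matrix for Subject;
# [ , ] drops the stddev/correlation attributes, leaving a plain matrix
vc <- VarCorr(model)[["Subject"]][ , ]

# arbitrary grid: hypothetical slopes x scalings of the covariance matrix
grid <- expand.grid(es = c(2, 4, 6), vc_scale = c(0.5, 1, 2))
grid$power <- NA

for (i in seq_len(nrow(grid))) {
  m <- model
  fixef(m)["Days"] <- grid$es[i]
  VarCorr(m) <- vc * grid$vc_scale[i]
  grid$power[i] <- summary(powerSim(
    m, test = fixed("Days", "t"), nsim = 100, progress = FALSE
  ))$mean
}

grid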

Best regards,

ir. Thierry Onkelinx
Statisticus / Statistician

Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND
FOREST
Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
thierry.onkelinx at inbo.be
Havenlaan 88 bus 73, 1000 Brussel
www.inbo.be

///////////////////////////////////////////////////////////////////////////////////////////
To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey
///////////////////////////////////////////////////////////////////////////////////////////




On Sat, 4 Jul 2020 at 23:27, Han Zhang <hanzh at umich.edu> wrote:

> Hi Sacha,
>
> Correct me if I'm wrong, but I tend to think this is more like a
> sensitivity analysis (given alpha, power, and N, solve for the required
> effect size). If the minimum detectable effect size at 80% power ends up so
> large that it exceeds the typical range in the field (say, a .6
> correlation is the minimum whereas a .2 is typically expected), then we may
> say the study is underpowered. So I think I made a mistake with question
> (2): the MDES should be compared to an effect size of practical
> importance, not the observed effect size.
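>
> For instance, with the pwr package (a quick sketch; the numbers are made
> up): leave r unspecified and pwr.r.test() solves for the minimum
> detectable correlation.
>
> library(pwr)
>
> # given N = 100 and alpha = .05, the correlation detectable with 80% power
> pwr.r.test(n = 100, sig.level = .05, power = .80)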
>
> Han
>
> On Sat, Jul 4, 2020 at 12:07 PM varin sacha <varinsacha at yahoo.fr> wrote:
>
> > Hi,
> >
> > Is the question about post hoc power analysis?
> >
> > Post hoc power analyses are usually not recommended (see, for example,
> > "The Abuse of Power" by Hoenig & Heisey).
> > You should do an a priori power analysis. If you then do the small-sample
> > study and obtain a negative result, you have no idea why; you are stuck.
> >
> > That is why I always tell people not to do a study where everything rides
> > on a significant result. It is an unnecessary gamble.
> >
> > It is always better to carry out an a priori power analysis, so that you
> > know the Type II error rate and the power in case the test is not
> > significant.
> >
> > Also, it is very easy to estimate, a priori, the power to detect, say, a
> > medium effect size. So there is little reason not to do that at the
> > beginning.
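> >
> > For example, in base R (a sketch; delta = 0.5 with sd = 1 corresponds to
> > a medium effect, Cohen's d = 0.5):
> >
> > # sample size needed per group for 80% power at alpha = .05
> > power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)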
> >
> > Best,
> > Sacha
> >
> > Sent from my iPhone
> >
> > > On 4 Jul 2020, at 01:04, Patrick (Malone Quantitative)
> > > <malone at malonequantitative.com> wrote:
> > >
> > > No, because I don't think it can be. That's not how power analysis
> > > works. It's bad practice.
> > >
> > >> On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh at umich.edu> wrote:
> > >>
> > >> Hi Pat,
> > >>
> > >> Thanks for your quick reply. Yes, I already have the data and the
> > >> actual effects, and the analysis was suggested by a reviewer. Can you
> > >> elaborate on when you think such an analysis might be justified?
> > >>
> > >> Thanks!
> > >> Han
> > >>
> > >> On Fri, Jul 3, 2020 at 6:34 PM Patrick (Malone Quantitative) <
> > >> malone at malonequantitative.com> wrote:
> > >>
> > >>> Han,
> > >>>
> > >>> (1) Usually, yes, but . . .
> > >>>
> > >>> (2) If you have an actual effect, does that mean you're doing post
> > >>> hoc power analysis? If so, that's a whole can of worms, for which
> > >>> the best advice I have is "don't do it." Use the size of the
> > >>> confidence interval of your estimate as an assessment of sample
> > >>> adequacy.
> > >>>
> > >>> Pat
> > >>>
> > >>> On Fri, Jul 3, 2020 at 6:27 PM Han Zhang <hanzh at umich.edu> wrote:
> > >>>>
> > >>>> Hello,
> > >>>>
> > >>>> I'm trying to find the minimum detectable effect size (MDES) given
> > >>>> my sample, alpha (.05), and desired power (90%) in a linear mixed
> > >>>> model setting. I'm using the simr package for a simulation-based
> > >>>> approach. What I did was change the original effect size to a series
> > >>>> of hypothetical effect sizes and find the minimum effect size that
> > >>>> has a 90% chance of producing a significant result. Below is a toy
> > >>>> example:
> > >>>>
> > >>>> library(lmerTest)
> > >>>> library(simr)
> > >>>>
> > >>>> # fit the model
> > >>>> model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
> > >>>> summary(model)
> > >>>>
> > >>>> Fixed effects:
> > >>>>            Estimate Std. Error      df t value Pr(>|t|)
> > >>>> (Intercept)  251.405      6.825  17.000  36.838  < 2e-16 ***
> > >>>> Days          10.467      1.546  17.000   6.771 3.26e-06 ***
> > >>>>
> > >>>>
> > >>>> Here is the code for finding the minimum detectable effect size:
> > >>>>
> > >>>> # define a set of plausible effect sizes
> > >>>> es <- seq(0, 10, 2)
> > >>>>
> > >>>> # preallocate the vector of power estimates
> > >>>> pwr <- numeric(length(es))
> > >>>>
> > >>>> # loop through the effect sizes
> > >>>> for (i in seq_along(es)) {
> > >>>>   # replace the original effect size with the hypothetical one
> > >>>>   fixef(model)["Days"] <- es[i]
> > >>>>   # run the simulation to obtain a power estimate
> > >>>>   pwr.summary <- summary(powerSim(
> > >>>>     model,
> > >>>>     test = fixed("Days", "t"),
> > >>>>     nsim = 100,
> > >>>>     progress = TRUE
> > >>>>   ))
> > >>>>   # store the estimated power (the "mean" column of the summary)
> > >>>>   pwr[i] <- pwr.summary$mean
> > >>>> }
> > >>>>
> > >>>> # display results
> > >>>> cbind("Coefficient" = es,
> > >>>>      Power = pwr)
> > >>>>
> > >>>> Output:
> > >>>>
> > >>>>      Coefficient Power
> > >>>> [1,]           0  0.09
> > >>>> [2,]           2  0.24
> > >>>> [3,]           4  0.60
> > >>>> [4,]           6  0.99
> > >>>> [5,]           8  1.00
> > >>>> [6,]          10  1.00
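> > >>>>
> > >>>> One could then interpolate along this curve to approximate the
> > >>>> effect size at 90% power (ties = min collapses the duplicated 1.00
> > >>>> power values):
> > >>>>
> > >>>> approx(x = pwr, y = es, xout = 0.90, ties = min)$y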
> > >>>>
> > >>>> My questions:
> > >>>>
> > >>>> (1) Is this the right way to find the MDES?
> > >>>>
> > >>>> (2) I have some trouble making sense of the output. Can I say the
> > >>>> following: because the estimated power when the effect = 6 is 99%,
> > >>>> and because the actual model has an estimate of 10.47, the study is
> > >>>> sufficiently powered? Conversely, if the actual estimate were 3.0,
> > >>>> could I say the study is insufficiently powered?
> > >>>>
> > >>>> Thank you,
> > >>>> Han
> > >>>> --
> > >>>> Han Zhang, Ph.D.
> > >>>> Department of Psychology
> > >>>> University of Michigan, Ann Arbor
> > >>>> https://sites.lsa.umich.edu/hanzh/
> > >>>>
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> Patrick S. Malone, Ph.D., Malone Quantitative
> > >>> NEW Service Models: http://malonequantitative.com
> > >>>
> > >>> He/Him/His
> > >>>
> > >>
> > >>
> > >> --
> > >> Han Zhang, Ph.D.
> > >> Department of Psychology
> > >> University of Michigan, Ann Arbor
> > >> https://sites.lsa.umich.edu/hanzh/
> > >>
> > >
> >
> >
>
> --
> Han Zhang, Ph.D.
> Department of Psychology
> University of Michigan, Ann Arbor
> https://sites.lsa.umich.edu/hanzh/
>
>



