[R-sig-ME] lme4 sample size analysis / power analysis by simulation ...
Steven McKinney
smckinney at bccrc.ca
Tue Oct 22 21:44:44 CEST 2013
> -----Original Message-----
> From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-
> bounces at r-project.org] On Behalf Of Kevin E. Thorpe
> Sent: October-22-13 10:51 AM
> To: David Winsemius
> Cc: Lenth, Russell V; r-sig-mixed-models at r-project.org
> Subject: Re: [R-sig-ME] lme4 sample size analysis / power analysis by
> simulation ...
>
> On 10/22/2013 01:45 PM, David Winsemius wrote:
> >
> > On Oct 22, 2013, at 6:35 AM, Lenth, Russell V wrote:
> >
> >> The reviewers were NOT correct in questioning whether you had
> >> sufficient power. Power is the probability of rejecting a null
> >> hypothesis. You have the data, you did your analysis, so you know
> >> which hypotheses were rejected (retrospectively, the power of those
> >> is 1) and those you did not (retrospective power of 0). There is no
> >> more information about power to be gleaned with respect to those
> >> data and analyses. You can use power calculations to decide sample
> >> size for a future study only.
> >
> > Don't we need to know what conclusions were being questioned when we
> > say this? I don't disagree about the vacuity of doing post-hoc power
> > analyses, especially when the study of a rare condition will
> > effectively place a hard limit on sample size. However, if
> > conclusions were being submitted about "no difference" for the
> > features that were "not significant", isn't it possible that
> > questions about power would have validity?
>
> I guess the obvious response to this is "power for what?" In such
> situations, I think a careful consideration of confidence intervals in
> the context of clinical significance is far more helpful.
>
> Kevin

If the study found differences with small p-values, there's no
power question to ask. The confidence intervals will not cover values
such as 0 (for a difference) or 1 (for a ratio) that indicate no
difference between/among groups. A definitive assertion of a difference
can be made, subject to the specified type I error rate (often labeled
alpha, and often set to 0.05).
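
For concreteness, a minimal sketch of inspecting such intervals from an
lme4 fit, using the sleepstudy data shipped with the package (the model
here is purely illustrative, not anyone's actual analysis):

library(lme4)

## Illustrative fit: effect of Days on Reaction, with subject-specific
## intercepts and slopes
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## Profile confidence intervals for the fixed effects; an interval that
## excludes 0 is the interval analogue of a small p-value for that term
confint(fit, parm = "beta_", method = "profile")
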
The only legitimate power question the reviewers can ask arises when
p-values were large, and the corresponding confidence intervals covered
values indicating no difference. In that case the question is:
"Did you specify a difference of scientific interest that you wanted to detect,
and did you perform a power analysis, with data at hand prior to this study,
to determine a minimum sample size yielding sufficient power to detect that
difference of scientific interest?"

If the answer is yes, then a null finding can be definitively declared to be a
sound finding of no difference of scientific interest.

If the answer is no, then the authors can only conclude "We fail to reject
the null hypothesis", not "we accept the null hypothesis". This is the reason
statisticians came up with this oddly phrased expression - failing
to reject is not equivalent to accepting the null hypothesis when a priori
power calculations were not undertaken to ensure a sample large enough
to detect a difference of scientific interest with sufficiently high
probability (power, or 1 - type II error rate).
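
For what it's worth, such an a priori power calculation can be done by
simulation. Below is a minimal sketch with lme4, assuming a hypothetical
design in which treatment is assigned by centre and modelled with a random
intercept per centre; the effect size, variance components and sample sizes
are placeholders for the difference of scientific interest and pilot-data
estimates, not values from any actual study:

library(lme4)

## Estimate power to detect a given group difference by simulating data
## from the assumed model, refitting, and counting rejections
sim_power <- function(n_centre = 20, n_per_centre = 10, effect = 0.5,
                      sd_centre = 0.5, sd_resid = 1,
                      nsim = 500, alpha = 0.05) {
  pvals <- replicate(nsim, {
    dat <- expand.grid(centre = factor(seq_len(n_centre)),
                       subj   = seq_len(n_per_centre))
    ## half the centres get treatment (coded 1), half control (coded 0)
    dat$group <- as.numeric(as.integer(dat$centre) <= n_centre / 2)
    u <- rnorm(n_centre, sd = sd_centre)          # centre-level random effects
    dat$y <- effect * dat$group + u[as.integer(dat$centre)] +
             rnorm(nrow(dat), sd = sd_resid)
    fit0 <- lmer(y ~ 1     + (1 | centre), data = dat, REML = FALSE)
    fit1 <- lmer(y ~ group + (1 | centre), data = dat, REML = FALSE)
    anova(fit0, fit1)$"Pr(>Chisq)"[2]             # likelihood ratio test p-value
  })
  mean(pvals < alpha)                             # proportion of rejections = power
}

## e.g. sim_power(n_centre = 30, nsim = 1000) for a larger design
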
Steven McKinney, Ph.D.
Statistician
Molecular Oncology and Breast Cancer Program
British Columbia Cancer Research Centre
>
> >
> >>
> >> Russ
> >>
> >> -- Russell V. Lenth - Professor Emeritus Department of Statistics
> >> and Actuarial Science The University of Iowa - Iowa City, IA
> >> 52242 USA Dept office (319)335-0712 - FAX (319)335-3017
> >> russell-lenth at uiowa.edu - http://www.stat.uiowa.edu/~rlenth/
> >>
> >> ... The paper was accepted with revisions which is where we are
> >> now. The reviewers correctly questioned to what extent we had
> >> sufficient power to come to the conclusions we did. I do not want
> >> to perform a post-hoc power analysis because from what I have read
> >> and seen on R discussions it is discouraged. ...
> >>
>
>
> --
> Kevin E. Thorpe
> Head of Biostatistics, Applied Health Research Centre (AHRC)
> Li Ka Shing Knowledge Institute of St. Michael's
> Assistant Professor, Dalla Lana School of Public Health
> University of Toronto
> email: kevin.thorpe at utoronto.ca Tel: 416.864.5776 Fax: 416.864.3016
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models