[R-meta] Sample size and continuity correction

ne gic negic4 at gmail.com
Thu Aug 27 16:07:35 CEST 2020


Many thanks for the insights, Wolfgang.

Apologies for my imprecise questions. By "agreed upon" and "what
conclusions/interpretations", I was wondering whether there is a minimum
number of studies whose pooled estimate can be considered reliable enough
to support robust inferences, e.g. inferences drawn from just 2 studies
can be drastically changed by the publication of a third study. It seems
there is no such threshold, so readers have to check this for themselves
to assess how much weight they can place on the conclusions of a specific
meta-analysis.

Again, I appreciate it!

Sincerely,
nelly

On Thu, Aug 27, 2020 at 3:43 PM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:

> Dear nelly,
>
> See my responses below.
>
> >-----Original Message-----
> >From: R-sig-meta-analysis
> >[mailto:r-sig-meta-analysis-bounces at r-project.org]
> >On Behalf Of ne gic
> >Sent: Wednesday, 26 August, 2020 10:16
> >To: r-sig-meta-analysis at r-project.org
> >Subject: [R-meta] Sample size and continuity correction
> >
> >Dear List,
> >
> >I have general meta-analysis questions that are not
> >platform/software related.
> >
> >*=======================*
> >*1. Issue of few included studies *
> >* =======================*
> >It seems common to see published meta-analyses with few studies, e.g.:
> >
> >(A) An analysis of only 2 studies.
> >(B) Subgroup analyses that end up with only one study in one of the
> >subgroups.
> >
> >Nevertheless, they still end up providing a pooled estimate in their
> >respective forest plots.
> >
> >So my question is: is there an agreed-upon minimum number of studies (or
> >a rule of thumb, or your own view) below which meta-analysis becomes
> >unacceptable?
>
> Agreed upon? Not that I am aware of. Some may want at least 5 studies (per
> group or overall), some 10, and others may be fine even if one group only
> contains 1 or 2 studies.
>
> >What interpretations/conclusions can one really draw from such analyses?
>
> That's a vague question, so I can't really answer this in general. Of
> course, estimates will be imprecise when k is small (overall or within
> groups).
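>
> To illustrate the point about precision, here is a small sketch with
> metafor (the effect sizes and sampling variances below are made up purely
> for illustration):
>
> library(metafor)
>
> # ten hypothetical effect size estimates with known sampling variances
> yi <- c(0.2, 0.5, 0.1, 0.4, 0.3, 0.6, 0.0, 0.2, 0.5, 0.3)
> vi <- rep(0.04, 10)
>
> # k=2: the CI for the pooled estimate is very wide and tau^2 is barely
> # estimable; a third study could change the conclusions entirely
> summary(rma(yi[1:2], vi[1:2]))
>
> # k=10: the pooled estimate is considerably more precise
> summary(rma(yi, vi))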
>
> >*===================*
> >*2. Continuity correction *
> >* ===================*
> >
> >In studies of rare events, zero events tend to occur, and it seems common
> >to add a small value so that the zeros are handled somehow.
> >
> >If, for instance, the inclusion of this small value via a continuity
> >correction leads to differing results, e.g. from non-significant results
> >when not using the correction to significant results when using it, what
> >does one make of that? Can we trust such results?
>
> If this happens, then the p-value is probably fluctuating around 0.05 (or
> whatever cutoff is used for declaring results as significant). The
> difference between p=.06 and p=.04 is itself very unlikely to be
> significant (Gelman & Stern, 2006). Or, to use the words of Rosnow and
> Rosenthal (1989): "[...] surely, God loves the .06 nearly as much as the
> .05".
>
> Gelman, A., & Stern, H. (2006). The difference between "significant" and
> "not significant" is not itself statistically significant. American
> Statistician, 60(4), 328-331.
>
> Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the
> justification of knowledge in psychological science. American Psychologist,
> 44, 1276-1284.
>
> >If one opts instead to calculate a risk difference, and tests that for
> >significance, would this be a better solution (a more reliable result?)
> >to the continuity correction problem above?
>
> If one is worried about the use of 'continuity corrections', then I think
> the more appropriate reaction is to use 'exact likelihood' methods (such as
> (mixed-effects) logistic regression models or beta-binomial models) instead
> of switching to risk differences (nothing wrong with the latter, but risk
> differences are a fundamentally different effect size measure compared to
> risk/odds ratios).
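>
> In metafor, for instance, a mixed-effects logistic regression model can be
> fitted with rma.glmm(), which models the counts directly and hence needs
> no continuity correction (same made-up data layout as above; this model
> requires the lme4 package):
>
> res_glmm <- rma.glmm(measure="OR", ai=ai, bi=bi, ci=ci, di=di,
>                      data=dat, model="UM.FS")
> summary(res_glmm)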
>
> >Looking forward to hearing your views as diverse as they may be in cases
> >where there is no consensus.
> >
> >Sincerely,
> >nelly
>
