[R-meta] Sample size and continuity correction
neg|c4 @end|ng |rom gm@||@com
Wed Aug 26 10:15:56 CEST 2020
I have some general meta-analysis questions that are not specific to R.
*1. Issue of few included studies*
It seems common to see published meta-analyses with very few studies, e.g.:
(A) an analysis of only 2 studies;
(B) in another, subgroup analyses ending up with only one study in one of
the subgroups.
Nevertheless, they still end up providing a pooled estimate in their
respective forest plots.
So my question is: is there an agreed-upon minimum number of studies (or a
rule of thumb, or in your view) below which meta-analysis becomes
inadvisable?
What interpretations/conclusions can one really draw from such analyses?
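For what it's worth, a pooled estimate can be computed mechanically from as few as two studies, e.g. with the metafor package; the data below are purely hypothetical (yi = observed effect sizes, vi = their sampling variances):

```r
library(metafor)

# Hypothetical effect sizes from two studies
dat <- data.frame(yi = c(0.35, 0.58), vi = c(0.04, 0.09))

# Random-effects model; with k = 2 the between-study variance (tau^2)
# is estimated from a single degree of freedom, so the pooled estimate
# and especially tau^2 are very imprecisely estimated.
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)
```

The model runs without complaint, which is presumably why such forest plots get published; the question of whether the numbers mean much is a separate (statistical) one.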
*2. Continuity correction*
In studies of rare events, zero cells tend to occur, and it seems common to
add a small value (e.g. 0.5) so that the zeros are handled somehow.
If, for instance, the inclusion of this small value via a continuity
correction leads to different results, e.g. from non-significant results
without the correction to significant results with it, what does one make
of that? Can we trust such results?
If one instead opts to calculate a risk difference, and tests that for
significance, would this be a better (more reliable) solution to the
continuity-correction problem above?
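To make the comparison concrete, here is a sketch with hypothetical 2x2 counts (ai/bi = events/non-events in the treatment arm, ci/di = in the control arm). metafor's escalc() applies the 0.5 correction via its add/to arguments when computing log odds ratios, while measure = "RD" gives risk differences, which are defined even when a cell is zero:

```r
library(metafor)

# Hypothetical rare-event data; study 1 has zero events in the treatment arm
dat <- data.frame(ai = c(0, 1, 2), bi = c(100, 99, 98),
                  ci = c(2, 3, 1), di = c(98, 97, 99))

# Log odds ratios, adding 0.5 only to studies that contain a zero cell
or <- escalc(measure = "OR", ai = ai, bi = bi, ci = ci, di = di,
             data = dat, add = 1/2, to = "only0")

# Risk differences, no continuity correction needed
rd <- escalc(measure = "RD", ai = ai, bi = bi, ci = ci, di = di,
             data = dat)

rma(yi, vi, data = or)  # pooled log odds ratio
rma(yi, vi, data = rd)  # pooled risk difference
```

Comparing the two fits (and perhaps rerunning the OR analysis with a different `add` value) shows directly how sensitive the conclusion is to the correction.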
Looking forward to hearing your views as diverse as they may be in cases
where there is no consensus.