[R-meta] Sample size and continuity correction

Nelson Ndegwa nelson.ndegwa using gmail.com
Thu Aug 27 18:49:27 CEST 2020


Haha, sorry, I was editing a response that included your signature and
forgot to exclude it :-)

nelson

On Thu, 27 Aug 2020 at 18:47, ne gic <negic4 using gmail.com> wrote:

> Wait, are you also Nelly @Nelson?
>
> On Thu, Aug 27, 2020 at 6:44 PM Nelson Ndegwa <nelson.ndegwa using gmail.com>
> wrote:
>
>> Dear Gerta,
>>
>> I agree with you. In the interest of playing the devil's advocate - and
>> my (and some list members) learning more, what would your opinion be if the
>> CI of the 2 studies did not overlap?
>>
>> Appreciate your response.
>>
>> Sincerely,
>> nelly
>>
>> On Thu, 27 Aug 2020 at 18:21, Gerta Ruecker <ruecker using imbi.uni-freiburg.de>
>> wrote:
>>
>>> Dear Nelly and all,
>>>
>>> With respect to (only) the first question (sample size):
>>>
>>> I think nothing is wrong, at least in principle, with a meta-analysis of
>>> two studies. We analyze single studies, so why not combine two of them?
>>> They may even include hundreds of patients.
>>>
>>> Of course, it is impossible to obtain a decent estimate of the
>>> between-study variance (heterogeneity) from two or three studies. But
>>> if the confidence intervals overlap, I don't see any reason to
>>> mistrust the pooled effect estimate.
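>>>
>>> For concreteness, a minimal sketch in R with metafor (the data below
>>> are made up purely for illustration, not from any real studies):
>>>
>>> library(metafor)
>>>
>>> # two hypothetical studies (log odds ratios and sampling variances)
>>> dat <- data.frame(yi = c(0.35, 0.48), vi = c(0.04, 0.09))
>>>
>>> # random-effects model; tau^2 must be estimated from only k = 2 studies
>>> res <- rma(yi, vi, data = dat, method = "REML")
>>> res
>>>
>>> # profile-type confidence interval for tau^2 (and I^2, H^2);
>>> # with k = 2 this interval is typically extremely wide
>>> confint(res)
>>>
>>> The pooled estimate itself may look reasonable, but the confidence
>>> interval for tau^2 will usually cover almost any amount of
>>> heterogeneity, which is exactly the point above.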
>>>
>>> Best,
>>>
>>> Gerta
>>>
>>>
>>>
>>> On 27.08.2020 at 16:07, ne gic wrote:
>>> > Many thanks for the insights Wolfgang.
>>> >
>>> > Apologies for my imprecise questions. By "agreed upon" & "what
>>> > conclusions/interpretations", I was wondering whether there is a
>>> > minimum sample size whose pooled estimate can be considered reliable
>>> > enough to produce robust inferences; e.g., inferences drawn from just
>>> > 2 studies can be drastically changed by the publication of a third
>>> > study. But it seems there isn't. I guess readers then have to check
>>> > this for themselves to assess how much weight they can place on the
>>> > conclusions of specific meta-analyses.
>>> >
>>> > Again, I appreciate it!
>>> >
>>> > Sincerely,
>>> > nelly
>>> >
>>> > On Thu, Aug 27, 2020 at 3:43 PM Viechtbauer, Wolfgang (SP) <
>>> > wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>>> >
>>> >> Dear nelly,
>>> >>
>>> >> See my responses below.
>>> >>
>>> >>> -----Original Message-----
>>> >>> From: R-sig-meta-analysis
>>> >>> [mailto:r-sig-meta-analysis-bounces using r-project.org]
>>> >>> On Behalf Of ne gic
>>> >>> Sent: Wednesday, 26 August, 2020 10:16
>>> >>> To: r-sig-meta-analysis using r-project.org
>>> >>> Subject: [R-meta] Sample size and continuity correction
>>> >>>
>>> >>> Dear List,
>>> >>>
>>> >>> I have general meta-analysis questions that are not
>>> >>> platform/software related.
>>> >>>
>>> >>> *=======================*
>>> >>> *1. Issue of few included studies*
>>> >>> *=======================*
>>> >>> It seems common to see published meta-analyses with few studies, e.g.:
>>> >>>
>>> >>> (A). An analysis of only 2 studies.
>>> >>> (B). In another, subgroup analyses ending up with only one study in
>>> >>> one of the subgroups.
>>> >>>
>>> >>> Nevertheless, they still end up providing a pooled estimate in their
>>> >>> respective forest plots.
>>> >>>
>>> >>> So my question is: is there an agreed-upon (or rule-of-thumb, or in
>>> >>> your view) minimum number of studies below which meta-analysis
>>> >>> becomes unacceptable?
>>> >> Agreed upon? Not that I am aware of. Some may want at least 5 studies
>>> >> (per group or overall), some 10, and others may be fine even if one
>>> >> group only contains 1 or 2 studies.
>>> >>
>>> >>> What interpretations/conclusions can one really draw from such
>>> >>> analyses?
>>> >> That's a vague question, so I can't really answer this in general. Of
>>> >> course, estimates will be imprecise when k is small (overall or within
>>> >> groups).
>>> >>
>>> >>> *===================*
>>> >>> *2. Continuity correction*
>>> >>> *===================*
>>> >>>
>>> >>> In studies of rare events, zero events tend to occur, and it seems
>>> >>> common to add a small value so that the zero is taken care of somehow.
>>> >>>
>>> >>> If, for instance, the inclusion of this small value via continuity
>>> >>> correction leads to differing results, e.g. from non-significant
>>> >>> results when not using the correction to significant results when
>>> >>> using it, what does one make of that? Can we trust such results?
>>> >> If this happens, then the p-value is probably fluctuating around 0.05
>>> >> (or whatever cutoff is used for declaring results significant). The
>>> >> difference between p=.06 and p=.04 is (very, very likely) not itself
>>> >> significant (Gelman & Stern, 2006). Or, to use the words of Rosnow and
>>> >> Rosenthal (1989): "[...] surely, God loves the .06 nearly as much as
>>> >> the .05".
>>> >>
>>> >> Gelman, A., & Stern, H. (2006). The difference between "significant"
>>> >> and "not significant" is not itself statistically significant.
>>> >> American Statistician, 60(4), 328-331.
>>> >>
>>> >> Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the
>>> >> justification of knowledge in psychological science. American
>>> >> Psychologist, 44, 1276-1284.
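>>> >>
>>> >> As a side note, in metafor the continuity correction is controlled by
>>> >> the 'add' and 'to' arguments, so the sensitivity of a result to the
>>> >> correction is easy to check directly. A minimal sketch (the 2x2
>>> >> counts below are hypothetical):
>>> >>
>>> >> library(metafor)
>>> >>
>>> >> # hypothetical counts with a zero cell in study 2
>>> >> dat <- data.frame(ai = c(2, 0), n1i = c(120, 150),
>>> >>                   ci = c(9, 4),  n2i = c(115, 145))
>>> >>
>>> >> # default: add 1/2 only to studies with a zero cell
>>> >> res1 <- rma(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i,
>>> >>             data=dat, add=1/2, to="only0")
>>> >>
>>> >> # alternative: add 1/2 to all studies
>>> >> res2 <- rma(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i,
>>> >>             data=dat, add=1/2, to="all")
>>> >>
>>> >> # compare the p-values for the pooled log odds ratio
>>> >> c(res1$pval, res2$pval)
>>> >>
>>> >> If the two p-values straddle .05, that is exactly the fragile
>>> >> situation described above.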
>>> >>
>>> >>> If one instead opts to calculate a risk difference, and to test
>>> >>> that for significance, would this be a better solution (a more
>>> >>> reliable result?) to the continuity correction problem above?
>>> >> If one is worried about the use of 'continuity corrections', then I
>>> >> think the more appropriate reaction is to use 'exact likelihood'
>>> >> methods (such as (mixed-effects) logistic regression models or
>>> >> beta-binomial models) instead of switching to risk differences
>>> >> (nothing wrong with the latter, but risk differences are really a
>>> >> fundamentally different effect size measure compared to risk/odds
>>> >> ratios).
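>>> >>
>>> >> For example, a minimal sketch of the 'exact likelihood' route using
>>> >> rma.glmm() from metafor (hypothetical counts again; model "CM.EL" is
>>> >> the conditional model with exact likelihood, so no continuity
>>> >> correction is involved; it can be slow and may warn with very small k):
>>> >>
>>> >> library(metafor)
>>> >>
>>> >> # hypothetical counts with a zero cell; no 'add'/'to' needed here
>>> >> dat <- data.frame(ai = c(2, 0), n1i = c(120, 150),
>>> >>                   ci = c(9, 4),  n2i = c(115, 145))
>>> >>
>>> >> # mixed-effects conditional logistic model, exact likelihood
>>> >> res <- rma.glmm(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i,
>>> >>                 data=dat, model="CM.EL")
>>> >> res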
>>> >>
>>> >>> Looking forward to hearing your views, as diverse as they may be,
>>> >>> in cases where there is no consensus.
>>> >>>
>>> >>> Sincerely,
>>> >>> nelly
>>>
>>> --
>>>
>>> Dr. rer. nat. Gerta Rücker, Dipl.-Math.
>>>
>>> Institute of Medical Biometry and Statistics,
>>> Faculty of Medicine and Medical Center - University of Freiburg
>>>
>>> Stefan-Meier-Str. 26, D-79104 Freiburg, Germany
>>>
>>> Phone:    +49/761/203-6673
>>> Fax:      +49/761/203-6680
>>> Mail:     ruecker using imbi.uni-freiburg.de
>>> Homepage:
>>> https://www.uniklinik-freiburg.de/imbi-en/employees.html?imbiuser=ruecker
>>>
>>
