[R-meta] Fail-safe numbers of non-significant effects
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Jul 6 16:35:37 CEST 2020
Dear Arne,
Nothing can be considered right or wrong here. This is a purely hypothetical exercise, so whatever approach you adopt is fine as long as you can convince readers (and yourself) that this is informative and your assumptions are defensible.
Best,
Wolfgang
>-----Original Message-----
>From: Arne Janssen [mailto:arne.janssen at uva.nl]
>Sent: Monday, 06 July, 2020 16:05
>To: Viechtbauer, Wolfgang (SP)
>Cc: 'r-sig-meta-analysis at r-project.org'
>Subject: Fail-safe numbers of non-significant effects
>
>Dear Wolfgang,
>
>The simulation of new effects you suggested seems to work, but in the
>case of my data, I will have to add more than 2000 cases to the 68 cases
>to get a power of 80%. Also, the observed effect size is slightly
>positive (0.085; 95% CI: -0.2798 to 0.4496), so the simulations assess
>the number of cases needed for the effect to become significantly
>positive, whereas the expectation is that the effect should be negative.
>
>I therefore took a slightly different approach: instead of adding new
>simulated cases based on the effect size and the s.d. of the entire
>dataset, I added simulated cases based on the effect size and s.d. of
>all cases with negative effects, resulting in a "fail-safe number" of 35
>cases. Obviously, the same could be done with the subset of all cases
>with a positive effect. I would highly appreciate your opinion on this.
>
>With best wishes,
>Arne Janssen
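[Editorial note: the subset-based variant described above could be sketched along the following lines. This is an illustration, not code from the thread; since the actual 68-case dataset is not shown, the data below are simulated stand-ins, and the names (mu.neg, sd.neg, yi.fsn) are hypothetical.]

```r
library(metafor)

# illustrative data (NOT the dataset from the thread): a slightly
# positive average effect with some negative observed effects
set.seed(1234)
yi <- rnorm(20, 0.05, 0.3)    # illustrative observed effects
vi <- runif(20, 0.005, 0.02)  # illustrative sampling variances

res <- rma(yi, vi, method="DL")

# mean and SD of only the cases with negative effects
mu.neg <- mean(yi[yi < 0])
sd.neg <- sd(yi[yi < 0])

iters <- 100
maxj  <- 40
power <- rep(NA, maxj)
pvals <- rep(NA, iters)

for (j in 1:maxj) {
   for (l in 1:iters) {
      # simulate j added cases from the negative-effect subset
      yi.fsn <- c(yi, rnorm(j, mu.neg, sd.neg))
      vi.fsn <- c(vi, rep(1/mean(1/vi), j))
      pvals[l] <- rma(yi.fsn, vi.fsn, method="DL")$pval
   }
   power[j] <- mean(pvals <= .05)
}

# smallest j with power >= .80 (Inf if never reached within maxj)
min(which(power >= .80))
```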
>
>On 29-Jun-20 16:37, Viechtbauer, Wolfgang (SP) wrote:
>> Dear Arne,
>>
>> Please keep the mailing list in cc.
>>
>> Indeed, adding 'extreme' effects could drive up the heterogeneity to the
>> point that reaching a significant result becomes difficult or even
>> impossible. And yes, you could fix the variance component(s) to avoid this.
>> Alternatively, instead of adding very large effects, one could add effects
>> that have the same size as the average effect estimated from the initial
>> model. That would have the opposite effect, driving down heterogeneity as
>> more and more such effects are added.
>>
>> Even better would be an approach where we simulate new effects taking the
>> amount of heterogeneity and the sampling variability into consideration.
>> For a given number of 'new' effects to be added, one would then repeat this
>> many times, checking in what proportion of cases the combined effect is
>> significant. By increasing the number of new effects to be added, one could
>> then figure out how many effects need to be added such that power to find a
>> significant effect is at least 80% (or some other %). Here is an example of
>> this idea:
>>
>> library(metafor)
>>
>> yi <- c(0.22, -0.12, 0.41, 0.13, 0.08)
>> vi <- c(0.008, 0.002, 0.019, 0.010, 0.0145)
>>
>> res <- rma(yi, vi, method="DL")
>> res
>>
>> iters <- 1000
>>
>> maxj <- 20
>>
>> power <- rep(NA, maxj)
>> pvals <- rep(NA, iters)
>>
>> set.seed(42)
>>
>> for (j in 1:maxj) {
>>    print(j)
>>    for (l in 1:iters) {
>>       yi.fsn <- c(yi, rnorm(j, coef(res), sqrt(res$tau2 + 1/mean(1/vi))))
>>       vi.fsn <- c(vi, rep(1/mean(1/vi), j))
>>       pvals[l] <- rma(yi.fsn, vi.fsn, method="DL")$pval
>>    }
>>    power[j] <- mean(pvals <= .05)
>> }
>>
>> plot(1:maxj, power, type="o")
>> abline(h=.80, lty="dotted")
>> min(which(power >= .80))
>>
>> So, 15 effects would have to be added to reach 80% power. Note that the
>> line is a bit wiggly, but one could just increase the number of iterations
>> to smooth it out.
>>
>> Best,
>> Wolfgang