[R-meta] Interpreting the overall effect adjusted for publication bias, in the absence of publication bias
Viechtbauer, Wolfgang (NP)
Mon Feb 19 11:08:23 CET 2024
Hi Daniel,
The intercept in PET is an extrapolation to a study with an infinite sample size (i.e., where the standard error / sampling variance is equal to 0). Given that the studies are typically far away from having an infinitely large sample size, such an extrapolation leads to a large SE for the intercept term and hence a very wide CI / low power for the test of H0: intercept = 0.
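For reference, the PET meta-regression can be written as follows (notation is mine, not from the original message; the predictor is the standard error, i.e., the square root of the sampling variance):

```latex
y_i = \beta_0 + \beta_1 \sqrt{v_i} + \varepsilon_i, \qquad \varepsilon_i \sim N(0, v_i)
```

Here \(\beta_0\) is the predicted effect for a study with \(\sqrt{v_i} = 0\), which is the extrapolation described above.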
Here is an illustration of this issue:
library(metafor)
# make the simulation reproducible
set.seed(1234)
k <- 20
iters <- 10000
pval1 <- rep(NA, iters)
pval2 <- rep(NA, iters)
for (i in 1:iters) {
# simulate data (without any publication bias)
vi <- runif(k, .01, .1)
yi <- rnorm(k, 0.2, sqrt(vi))
# fit the standard equal-effects model and save the p-value
res1 <- rma(yi, vi, method="EE")
pval1[i] <- res1$pval
# fit a meta-regression model with the standard errors as predictor and
# save the p-value for the intercept term
res2 <- rma(yi, vi, mods = ~ sqrt(vi), method="FE")
pval2[i] <- res2$pval[1]
}
# power of the tests
mean(pval1 <= .05)
mean(pval2 <= .05)
I kept things simple by not simulating any heterogeneity. Roughly, the data were simulated as if we are dealing with standardized mean differences where studies have sample sizes between 40 and 400 participants and the true standardized mean difference is 0.2. The standard equal-effects model has almost 100% power, while the test of the intercept in the meta-regression model has only around 28% power. Quite a dramatic difference.
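The correspondence between the simulated sampling variances and those sample sizes follows from the rough approximation vi ≈ 4/n for a standardized mean difference from a two-group comparison with equal group sizes and a small true effect. This is just a back-of-the-envelope check, not part of the simulation above:

```r
# rough SMD approximation: vi ≈ 4/n (equal group sizes, small true effect),
# so the simulated vi range maps to implied total sample sizes via n = 4/vi
vi_range <- c(0.01, 0.1)  # range of simulated sampling variances
n_range  <- 4 / vi_range  # implied total sample sizes
n_range                   # 400 40
```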
So, unless there is evidence of publication bias, I would caution against using the significance of the intercept term (or some other 'corrected' estimate) for decision making.
Best,
Wolfgang
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Daniel Foster via R-sig-meta-analysis
> Sent: Wednesday, January 17, 2024 18:47
> To: Daniel Foster via R-sig-meta-analysis <r-sig-meta-analysis using r-project.org>
> Cc: Daniel Foster <daniel.foster using utoronto.ca>
> Subject: [R-meta] Interpreting the overall effect adjusted for publication bias,
> in the absence of publication bias
>
> Hi all,
>
> I am carrying out a multi-level meta-analysis in which I have conducted a
> FAT-PET to ascertain whether there is evidence of publication bias and to
> estimate the overall effect after accounting for it.
>
> The results, uncorrected for publication bias, indicate that there is a
> significant association between the two variables I am looking at. The FAT was
> not significant, suggesting the absence of publication bias. However, the
> results from PET indicate that after accounting for publication bias, the
> relationship is no longer significant.
>
> Given that I did not find evidence of publication bias, can I conclude that
> there was a significant effect (i.e., using the findings from the model
> uncorrected for publication bias)? Or should I emphasize the PET findings in my
> discussion? How is this commonly dealt with?
>
> Thank you!
>
> Daniel