[R-meta] Calculation of p values in selmodel
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer using maastrichtuniversity.nl
Thu Mar 28 12:41:09 CET 2024
If you passed two-sided p-values to the function but the simulated selection process was based on the significance of one-sided tests (i.e., significance plus the direction of the effects), then the two do not match up, and it should come as no surprise that the model cannot correct for the selection process.
Best,
Wolfgang
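
For concreteness, a minimal sketch of matching the supplied p-values to the assumed selection direction, using metafor's selmodel() with its default alternative="greater" and the pval= option discussed below (the data are made up purely for illustration):

library(metafor)

# made-up example data: observed mean changes (yi), sampling variances (vi),
# and per-study sample sizes (ni)
yi <- c(0.42, 0.10, 0.55, 0.31, 0.05, 0.48, 0.22, 0.39)
vi <- c(0.040, 0.055, 0.030, 0.045, 0.060, 0.035, 0.050, 0.042)
ni <- c(25, 20, 35, 22, 18, 30, 24, 26)

res <- rma(yi, vi)

# one-sided p-values from t-tests, in the direction the selection model assumes
# (alternative = "greater": selection favors significant positive effects)
p1 <- pt(yi / sqrt(vi), df = ni - 1, lower.tail = FALSE)

# with one-sided p-values, steps = 0.025 corresponds to two-sided significance
# at alpha = 0.05 in the positive direction
selmodel(res, type = "stepfun", steps = 0.025, pval = p1)

# supplying two-sided p-values here, i.e.
# 2 * pt(abs(yi / sqrt(vi)), df = ni - 1, lower.tail = FALSE),
# would not match the assumed one-sided selection process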
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Will Hopkins via R-sig-meta-analysis
> Sent: Thursday, March 28, 2024 01:47
> To: 'R Special Interest Group for Meta-Analysis' <r-sig-meta-analysis using r-
> project.org>
> Cc: Will Hopkins <willthekiwi using gmail.com>
> Subject: Re: [R-meta] Calculation of p values in selmodel
>
> Thanks for the response, Wolfgang. I was using the usual p values for the
> two-sided nil-hypothesis test calculated via the t statistic for a mean
> change in a normally distributed continuous variable. I simulated
> publication bias by excluding a defined proportion (e.g., 90%) of study
> estimates that were NOT(P<0.05 AND positive), i.e., by excluding a
> proportion of study estimates that were P>0.05 OR negative. In other words,
> given the simulated means and sampling variability, very occasionally a
> statistically significant negative effect was included. I think this is the
> way publication based on significance would work: when everyone is expecting
> positive effects, the occasional significant negative effect would be as
> unlikely to get published as a non-significant effect. If I had excluded all
> significant negative effects, and I then wanted to simulate only minor
> publication bias (e.g., exclude only 10% of non-significant effects), it
> would be unrealistic to still exclude ALL significant negative effects, so I
> would have to start making an arbitrary decision about including some
> significant negative effects. Anyway, there were very few significant
> negative effects in the simulations. selmodel() worked fine for adjusting
> the bias when using steps=c(0.025). When the only change
> I made was to add the pval= option, the adjusted mean effects showed more
> bias, not less, as I described in my earlier posting. So I dunno what's
> going on.
>
> Will
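
A minimal sketch of the selection process described above (parameter values made up; mean changes assumed normally distributed, with the usual two-sided one-sample t-test per study):

library(metafor)
set.seed(1234)

k     <- 2000   # number of studies generated before selection
n     <- 20     # sample size per study
mu    <- 0.2    # true mean change
sigma <- 1      # within-study standard deviation

# observed mean change and its standard error in each study
means <- rnorm(k, mean = mu, sd = sigma / sqrt(n))
sei   <- (sigma * sqrt(rchisq(k, df = n - 1) / (n - 1))) / sqrt(n)

# two-sided p-values from the one-sample t-test
pval2 <- 2 * pt(abs(means / sei), df = n - 1, lower.tail = FALSE)

# always keep studies that are significant AND positive; keep only 10% of the
# rest, i.e., exclude 90% of studies that are P > 0.05 OR negative
sig.pos <- pval2 < 0.05 & means > 0
keep    <- sig.pos | (runif(k) < 0.10)

dat <- data.frame(yi = means[keep], vi = sei[keep]^2)

res <- rma(yi, vi, data = dat)
selmodel(res, type = "stepfun", steps = 0.025)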
>
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On
> Behalf Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
> Sent: Thursday, March 28, 2024 1:17 AM
> To: R Special Interest Group for Meta-Analysis
> <r-sig-meta-analysis using r-project.org>
> Cc: Viechtbauer, Wolfgang (NP)
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Subject: Re: [R-meta] Calculation of p values in selmodel
>
> In the simulation study I did, I did not find any noteworthy difference when
> I used the standard Wald-type test p-values versus those from a 'proper'
> t-test (for a meta-analysis of standardized mean differences). Note that
> when supplying p-values, they should correspond in type (one-sided versus
> two-sided, and in the proper direction when one-sided) to what is being
> used/assumed in the selection model. Otherwise, the results would be total
> nonsense. Not sure what kind of p-values you were supplying to the function.
>
> Best,
> Wolfgang
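
For reference, a minimal sketch contrasting the two kinds of per-study p-values mentioned in the quoted message (Wald-type, based on the standard normal distribution, versus a t-test with n - 1 degrees of freedom); the data are the same made-up values as in the sketch near the top:

yi <- c(0.42, 0.10, 0.55, 0.31, 0.05, 0.48, 0.22, 0.39)             # mean changes
vi <- c(0.040, 0.055, 0.030, 0.045, 0.060, 0.035, 0.050, 0.042)     # sampling variances
ni <- c(25, 20, 35, 22, 18, 30, 24, 26)                             # sample sizes

zi <- yi / sqrt(vi)

p.wald <- pnorm(zi, lower.tail = FALSE)            # one-sided Wald-type p-values
p.t    <- pt(zi, df = ni - 1, lower.tail = FALSE)  # one-sided t-test p-values

round(cbind(p.wald, p.t), 4)

# either set could be supplied to selmodel() via pval=, as long as the
# sidedness and direction match what the selection model assumes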