[R-meta] Selection models from *reported p-values*
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Mar 5 14:16:51 CET 2024
Dear Yashvin,
I haven't thought this all the way through, but the problem is that p-values enter the model in two different ways. There are indeed the actually observed p-values of the studies, but in the integration step (which is needed to compute the log likelihood), we also need to compute p-values. Those are not fixed, but arise from integrating over the density (assumed to be normal) of the effect size estimates. These p-values (which then enter the weight function) are computed as a function of yi/sqrt(vi). If we use one way of computing the observed p-values and a different way of computing the p-values in this integration step, then there is a mismatch, and I am not sure about the consequences of that. So for consistency, one would then also have to compute the p-values in the integration step in a corresponding manner, but this is very case/measure/test specific, and trying to fine-tune it for every specific measure and way of testing it becomes extremely difficult implementation-wise.
We can see a bit of this in Iyengar and Greenhouse (1988), where the weight function is based on a t instead of a normal distribution (analogous to a t-test versus a z-test). But this leads to the extra, headache-inducing complexities in their appendix. I (and others) decided to avoid all of this by making the simplifying assumption that the p-values are always computed based on Wald-type tests of the form 'estimate / SE'.
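To make that convention concrete, here is a minimal sketch (the dataset, the two-sided alternative, and the single cutpoint at .05 are just illustrative choices on my part) using the dat.hackshaw1998 example data that ship with metafor:

library(metafor)

# Wald-type two-sided p-values computed from yi / sqrt(vi)
dat <- dat.hackshaw1998
pvals <- with(dat, 2 * pnorm(abs(yi / sqrt(vi)), lower.tail = FALSE))

# fit a random-effects model and then a step-function selection model;
# selmodel() recomputes the p-values internally from yi / sqrt(vi)
res <- rma(yi, vi, data = dat)
sel <- selmodel(res, type = "stepfun", alternative = "two.sided", steps = 0.05)
sel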
This simplifying assumption should not be too far off in many cases, especially if the sample sizes within studies are not small. For example, pnorm(2, lower.tail=FALSE) and pt(2, df=100, lower.tail=FALSE) differ very little in practical terms. Also, selection models are really rough approximations to a much more complex data generating mechanism anyway, so trying to fine-tune this part of the model is like taking a ruler to align something to millimeter accuracy before taking a sledgehammer to smash it.
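For instance, running those two calls shows that the tail probabilities differ only in the third decimal place:

pnorm(2, lower.tail=FALSE)        # approx. 0.0228
pt(2, df=100, lower.tail=FALSE)   # approx. 0.0241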
It's a bit like the bias correction for d-values: whether you put d = 0.53 or g = 0.52 into your model makes very little difference compared to all the other inaccuracies and infidelities we accept in putting together our meta-analytic datasets in the first place.
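As a quick illustration of how small that correction typically is (a sketch assuming a hypothetical two-group design with 25 participants per group):

d  <- 0.53                      # hypothetical standardized mean difference
df <- 25 + 25 - 2               # two groups of 25 (made-up sample sizes)
J  <- 1 - 3 / (4 * df - 1)      # approximate small-sample correction factor
g  <- J * d                     # Hedges' g
round(g, 2)                     # 0.52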
But those are just my two cents.
Best,
Wolfgang
> -----Original Message-----
> From: R-sig-meta-analysis <mailman-bounces at stat.ethz.ch> On Behalf Of Seetahul,
> Yashvin
> Sent: Tuesday, March 5, 2024 13:09
> To: r-sig-meta-analysis at r-project.org
> Cc: r-sig-meta-analysis-owner at r-project.org
> Subject: Selection models from *reported p-values*
>
> Dear R meta-analysis community,
>
> I have a question regarding selection models based on p-values.
>
> Is it possible to do the selection model based on reported p-values directly
> rather than the p-values calculated from the effect size and SE?
>
> In many cases, meta-analyses require transformations, or sometimes corrections.
> However, if we assume that there is a selection process in publishing papers
> that is based on the p-values, it would make more sense to consider the p-values
> that are reported in the papers, would it not?
>
> How would one proceed to do this? I believe the selmodel() function in metafor
> works with objects fitted with the rma() function; therefore, the p-values are
> recalculated only from the effect size and SE. Assuming I have the reported
> p-values (to three decimal places) for all the studies included in my
> meta-analysis, is it possible to test for selection of studies based on the
> reported p-values and then correct the effect size?
>
> I hope my question makes sense,
>
> Thank you for your help,
>
> Yashvin Seetahul