[R-meta] Calculation of p values in selmodel

Will Hopkins willthekiwi using gmail.com
Thu Mar 28 01:47:19 CET 2024


Thanks for the response, Wolfgang. I was using the usual p values for the
two-sided nil-hypothesis test calculated via the t statistic for a mean
change in a normally distributed continuous variable. I simulated
publication bias by excluding a defined proportion (e.g., 90%) of study
estimates that were NOT(P<0.05 AND positive), i.e., by excluding a
proportion of study estimates that were P>0.05 OR negative. In other words,
given the simulated means and sampling variability, very occasionally a
statistically significant negative effect was included. I think this is the
way publication based on significance would work: when everyone is expecting
positive effects, the occasional significant negative effect would be as
unlikely to get published as a non-significant effect. If I had excluded all
significant negative effects, and I then wanted to simulate only minor
publication bias (e.g., exclude only 10% of non-significant effects), it
would be unrealistic to still exclude ALL significant negative effects, so I
would have to start making an arbitrary decision about including some
significant negative effects. Anyway, there were very few significant
negative effects in the simulations. selmodel with steps=(0.025) worked fine
for adjusting the bias in these simulations. When the only change I made was
to add the pval= option, the adjusted mean effects showed more bias, not
less, as I described in my earlier posting. So I dunno what's going on.
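[Editor's note: a minimal R sketch of the selection rule described above; all names and numbers here are hypothetical illustrations, not the actual SAS simulation.]

```r
# Sketch of the selection rule: a study is always kept if it is
# significant AND positive; anything else (P > 0.05 OR negative,
# including the occasional significant negative effect) is excluded
# with probability pbias (e.g., 0.90).
set.seed(42)
k    <- 1000
yi   <- rnorm(k, mean = 1, sd = 1)      # simulated study estimates
sei  <- rep(0.5, k)                     # their standard errors
pval <- 2 * pnorm(-abs(yi / sei))       # two-sided nil-hypothesis p-values
sig_pos <- pval < 0.05 & yi > 0
pbias   <- 0.90                         # proportion of the rest to exclude
keep    <- sig_pos | (runif(k) > pbias)
dat     <- data.frame(yi, sei, pval)[keep, ]
```

The mean of dat$yi will sit above the true mean of 1, which is exactly the upward publication bias that selmodel is then asked to adjust away.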

Will

-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On
Behalf Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
Sent: Thursday, March 28, 2024 1:17 AM
To: R Special Interest Group for Meta-Analysis
<r-sig-meta-analysis using r-project.org>
Cc: Viechtbauer, Wolfgang (NP)
<wolfgang.viechtbauer using maastrichtuniversity.nl>
Subject: Re: [R-meta] Calculation of p values in selmodel

In the simulation study I did, I did not find any noteworthy difference when
I used the standard Wald-type test p-values versus those from a 'proper'
t-test (for a meta-analysis of standardized mean differences). Note that
when supplying p-values, they should correspond in type (one-sided versus
two-sided, and in the proper direction when one-sided) to what is being
used/assumed in the selection model; otherwise, the results will be
nonsense. I am not sure what kind of p-values you were supplying to the
function.
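[Editor's note: a sketch of that correspondence, assuming the selection model expects right-tailed one-sided p-values, which is the usual convention for step-function selection models; 'ti' and 'dfi' are hypothetical names.]

```r
# Converting two-sided t-test p-values into right-tailed one-sided
# p-values (the type a step selection model typically assumes).
ti    <- c(2.5, -1.0, 0.3)                        # t-statistics
dfi   <- c(18, 25, 12)                            # degrees of freedom
p_two <- 2 * pt(-abs(ti), df = dfi)               # two-sided p-values
p_one <- ifelse(ti > 0, p_two / 2, 1 - p_two / 2) # right-tailed one-sided
# equivalent direct computation:
# p_one <- pt(ti, df = dfi, lower.tail = FALSE)
```

Supplying two-sided p-values to a model whose steps are one-sided cutpoints (or vice versa) is exactly the mismatch warned about above.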

Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> 
> On Behalf Of Will Hopkins via R-sig-meta-analysis
> Sent: Friday, March 22, 2024 02:52
> To: 'R Special Interest Group for Meta-Analysis' 
> <r-sig-meta-analysis using r-project.org>
> Cc: Will Hopkins <willthekiwi using gmail.com>
> Subject: Re: [R-meta] Calculation of p values in selmodel
>
> ATTACHMENT(S) REMOVED: ATT00001.txt | results pb90 3PSM pval.txt | 
> results pb90 3PSM.txt | analyze SAS simulated meta data 24-03-22.R
>
> Yes, I had followed your instructions, Wolfgang, but in the welter of 
> warnings and options, I somehow managed not to update metafor. I did 
> it all again, and this time it worked. Your concern that things might 
> go horribly wrong with the pval= option you added to selmodel is 
> justified, at least for this combination of study characteristics and 
> number of studies in each simulation.  What follows is an explanation 
> of the simulations and results of bias and coverage without and with
> pval=.
>
> I won't bother you with the details of errors of measurement and 
> sample sizes in the simulated studies, which were what I would expect 
> in my discipline for an uncontrolled study of a particular kind of 
> training on athlete endurance performance. The female and male true 
> mean changes (effectively in percent
> units) were 3.0 (borderline small-moderate) and 1.0 (borderline 
> trivial-small, i.e., the minimum practically/clinically important 
> difference), and the residual heterogeneity (SD) was 0.5 (borderline 
> trivial-small, half the smallest important for a mean change).  I 
> generated the data in SAS, such that 90% of non-significant studies 
> were deleted.  There had to be at least 10 studies in each 
> meta-analysis, at least one of which was non-significant. Of the 2500 
> initial simulations, 2172 satisfied these criteria. The number of
> non-significant studies ranged from 1-9 (0-3 for females and 0-9 for
> males). The number of female and male studies ranged from 1-8 and 3-16
> (10-22 total). This time I didn't tweak the standard errors, so there 
> were some simulations where the only non-significant effects would 
> have been significant with a z score.  I analyzed the data in SAS 
> without any adjustment for publication bias, and then with the 
> standard error squared as a predictor (interacted with Sex), i.e., the 
> so-called PEESE adjustment. And of course, I imported the data into R 
> for analysis with rma, then with selmodel(..., type="step",
> steps=(0.025)) (the so-called 3PSM approach), and finally with confint
> to get the tau2 confidence limits. I then repeated the analysis with
> pval= included in selmodel.
>
> I have attached two text files showing the results without pval 
> (results pb90
> 3PSM.txt) and with pval (results pb90 3PSM pval.txt). I have also 
> included the error messages and warnings with each of these. The R
> program is also attached.
> I ran it without and with pval by commenting off and uncommenting the 
> appropriate lines. I apologize for any crudeness in the programming, 
> which was produced by an R newbie (me) with the unbelievably amazing
> help of ChatGPT. Here's a summary.
>
> With rma, all 2172 simulations resulted in fixed-effect mean estimates 
> with CLs and heterogeneity SD estimates (tau). Averaged over the
> simulations, the female and male means were estimated as 3.38 and 1.85,
> showing (surprisingly) only trivial upward publication bias compared with
> the true values of 3.0 and 1.0. Coverage of the females' 90% CIs wasn't
> too bad (83%), but males' coverage was way off (28%). There was only
> slight upward publication bias for tau (0.53 vs the true value of 0.50).
> I didn't bother with coverage of the unadjusted estimates for tau.
>
> The usual selmodel (i.e., no pval) produced adjusted estimates for the 
> fixed effects and tau for 2169 of the 2172 simulations, but 151 
> (2169-2018) lacked CLs for the means. (I suppose I could get those 151
> CLs with confint, but I haven't done that yet.) The adjusted estimates
> for the female and male means showed improvements to 3.13 and 1.28,
> and the coverage improved a little (86%) for females, but was still
> bad for the males (74%). Selmodel over-adjusted the mean tau of the
> 2169 simulations a little, to 0.42, but the coverage of the CIs
> produced by confint (with all 2169 simulations) was good (93%). I also
> produced CIs for tau using the SE for tau^2 produced by selmodel and 
> by assuming a normal sampling distribution. Unfortunately, over 700 of 
> the 2169 simulations produced no SE, so it's not practical to get the 
> CLs for tau, at least not for study characteristics like those 
> simulated here. In SAS, the PEESE approach was practically perfect for 
> correcting bias and for coverage of the female and male means, but it 
> worked for only 1872 of the 2172 simulations. The adjusted heterogeneity
> SD was hopeless (0.10 vs the true 0.50), and the coverage was bad (62%
> using a z distribution, 77% using a t distribution).
>
> What about selmodel with the pval= option? Once again it gave adjusted 
> point estimates for 2169 of the 2172 simulations, but publication bias
> was made *worse* for females (3.51 vs true 3.0) and males (2.04 vs 
> true 1.0). CLs were now lacking for only 86 (2169-2083) simulations, 
> but the coverage was hopeless (females 72%, males 27%). The point 
> estimate for tau was 0.36 (vs true 0.50), i.e., over-adjusted even 
> more than without pval; CLs were produced for only 1877 simulations,
> and the coverage wasn't good (82%).
>
> So including pval= in selmodel doesn't work, assuming I have not made 
> an error in its implementation. Thanks for going to all the trouble of 
> adding it to selmodel, Wolfgang. Maybe you can see how to make it work 
> better, assuming I have used it correctly.
>
> Will
>
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> 
> On Behalf Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
> Sent: Thursday, March 21, 2024 11:31 PM
> To: R Special Interest Group for Meta-Analysis
> <r-sig-meta-analysis using r-project.org>
> Cc: Viechtbauer, Wolfgang (NP) 
> <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Subject: Re: [R-meta] Calculation of p values in selmodel
>
> Did you actually install the devel version as instructed under the 
> link I posted? When you do library(metafor), which version is being 
> loaded? If it is 4.4-0, then you are not using the devel version (which
> has version 4.5-x).
_______________________________________________
R-sig-meta-analysis mailing list -- R-sig-meta-analysis using r-project.org
To manage your subscription to this mailing list, go to:
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
