[R-meta] Calculation of p values in selmodel

Will Hopkins willthekiwi@gmail.com
Sat Mar 30 00:12:34 CET 2024


Sorry about my simplistic approximate approach to the p value, and for
replying (by mistake) to you, Wolfgang, and not to the list. (BTW I do wish
the R Powers That Be would migrate to googlegroups lists, which would be
substantially better in several ways than the current text-only
dinosaur/snake.)

FYI, I have now calculated the one-sided p values (in SAS as pValue1sided =
1-probt(tValue,SampleSize-1)) and fed them into selmodel without and with
the pval option, for the simulated metas I have been using over the last few
days (female and male true means of 3.0 and 1.0; true residual heterogeneity
SD of 0.5; lots of non-significant effects, 90% of which are randomly
excluded to simulate publication bias; 10-24 studies per meta). The 3PSM
selmodel results with and without pval= are practically identical: the
adjustment is slightly better without pval (female mean 3.10, male mean
1.28, hetero SD 0.41; coverage of the 90% CIs 87%, 72%, and 91%,
respectively) than with pval (female mean 3.12, male mean 1.35, hetero SD
0.40; coverage 86%, 69%, 92%). Confidence limits for the fixed effects were
produced for 2084/2233 sims without pval and for 2085/2233 with pval;
confidence limits for the hetero were produced by confint for all but 2 of
the sims.
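
For anyone doing this in R rather than SAS, here is a minimal sketch of the
equivalent one-sided p value calculation (tval and n are hypothetical
stand-ins for the per-study t values and sample sizes, not the actual
simulation code):

tval <- c(2.8, 1.4, 3.1, 0.6, 2.2)  # hypothetical per-study t values
n    <- c(12, 20, 15, 25, 18)       # hypothetical per-study sample sizes

# upper-tail (one-sided) p values, the R analogue of
# pValue1sided = 1 - probt(tValue, SampleSize - 1)
p1 <- pt(tval, df = n - 1, lower.tail = FALSE)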

The analyses included the PEESE approach, which works a bit worse than
selmodel for females and hetero, and a bit better for males (female, male
means and hetero SD 2.70, 1.10, 0.23; coverage 83%, 88%, 75%), and it
produced confidence limits for all 2233 sims. I need to run sims with many
other study characteristics, including within-study hetero, but currently
I'm leaning towards PEESE for fixed effects and selmodel for hetero, because
inferences using inferiority, superiority and equivalence testing are based
on confidence limits, not point estimates.
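
For reference, a minimal sketch of a PEESE fit in metafor, assuming a data
frame dat with escalc()-style yi (effect sizes) and vi (sampling variances)
columns; this is a single-mean illustration, not the actual female/male
simulation model:

library(metafor)

# PEESE: regress the effect sizes on their sampling variances; the model
# intercept estimates the effect for a (hypothetical) study with zero
# sampling variance, i.e. the bias-adjusted mean
res.peese <- rma(yi, vi, mods = ~ vi, data = dat)
predict(res.peese, newmods = 0)  # adjusted estimate with confidence limits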

Thanks again for your patience and engagement, Wolfgang. 

Will

-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces@r-project.org>
On Behalf Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
Sent: Saturday, March 30, 2024 12:06 AM
To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis@r-project.org>
Cc: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer@maastrichtuniversity.nl>
Subject: Re: [R-meta] Calculation of p values in selmodel

Please always respond to the list, not just the individual that replied to
you.

Halving the p-value from two-sided tests is not the right way to compute
one-sided p-values.

Say you do an independent samples t-test with H1: mu1 > mu2 versus H0: mu1
<= mu2. Then:

pt(2.34, df=20, lower.tail=FALSE)

and

pt(-2.34, df=20, lower.tail=FALSE)

will give you the correct one-sided p-values, depending on whether mean1 >
mean2 (in the first case) or mean1 < mean2 (in the second case).

In a two-sided test (i.e., H1: mu1 != mu2 versus H0: mu1 = mu2), we would
compute the p-value with:

2*pt(abs(2.34), df=20, lower.tail=FALSE)

and

2*pt(abs(-2.34), df=20, lower.tail=FALSE)

for these two cases, but dividing these by 2 does not work in the second
case.
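
As a quick numerical check of that last point (using t = -2.34 with df = 20):

2*pt(abs(-2.34), df=20, lower.tail=FALSE)  # two-sided p, approx. 0.030
pt(-2.34, df=20, lower.tail=FALSE)         # one-sided p for H1: mu1 > mu2, approx. 0.985

Halving the two-sided p would give approx. 0.015, nowhere near the correct
one-sided value of approx. 0.985.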

If you use selmodel(..., alternative="greater"), then you really should also
pass one-sided p-values to the function, where the p-values are computed for
an alternative hypothesis with the appropriate directionality.
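
For example, as a sketch only (assuming res is an rma() fit and p1 holds
one-sided p values computed with the appropriate directionality, as above;
the single cutpoint at .025 is the usual 3PSM specification and not
necessarily the exact model discussed earlier):

library(metafor)

# 3PSM: step selection function with one cutpoint at p = .025, supplying
# user-defined one-sided p values that match alternative = "greater"
sel <- selmodel(res, type = "stepfun", steps = 0.025,
                alternative = "greater", pval = p1)
sel
confint(sel)  # profile likelihood CIs, including for tau^2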

Best,
Wolfgang

> -----Original Message-----
> From: Will Hopkins <willthekiwi@gmail.com>
> Sent: Friday, March 29, 2024 01:04
> To: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer@maastrichtuniversity.nl>
> Cc: 'Will Hopkins' <willthekiwi@gmail.com>
> Subject: RE: [R-meta] Calculation of p values in selmodel
>
> Oh, I just assumed that it was appropriate to pass the usual p value 
> into selmodel with your new pval= option. Halving the p value did the
trick.
>
> I ran it with 2172 simulations in which 90% of non-significant effects
> were omitted. The coverage and confidence limits were not quite as good
> as with the usual method, but practically the same. The usual method
> produced confidence limits in 2018 of the 2172 sims, whereas the pval
> method produced them in 2007, a negligible difference. I had downloaded
> the latest metafor from GitHub, and it's showing 4.7-0.
>
> Thanks again for your expertise and engagement, Wolfgang!
>
> Will
>
> -----Original Message-----
> From: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer@maastrichtuniversity.nl>
> Sent: Friday, March 29, 2024 12:41 AM
> To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis@r-project.org>
> Cc: Will Hopkins <willthekiwi@gmail.com>
> Subject: RE: [R-meta] Calculation of p values in selmodel
>
> If you passed two-sided p-values to the function but the simulated
> selection process was based on the significance of one-sided tests
> (i.e., the significance plus the direction of the effects), then this
> doesn't match up, and it should not be a surprise that the model
> cannot correct for the selection process.
>
> Best,
> Wolfgang

_______________________________________________
R-sig-meta-analysis mailing list - R-sig-meta-analysis@r-project.org
To manage your subscription to this mailing list, go to:
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
