[R-meta] Constraint error when using Wald_test_cwb

James Pustejovsky jepusto at gmail.com
Tue Sep 17 16:37:47 CEST 2024


Hi Wolfgang,

Responses inline below.

Cheers,
James

On Tue, Sep 17, 2024 at 2:34 AM Viechtbauer, Wolfgang (NP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> A follow-up question (for James) on my part.
>
> I saw this reponse by James:
>
> "If you only have 1 coefficient, then you don't need to go to the trouble
> of cluster wild bootstrapping--you can just use RVE with small sample
> corrections" (
> https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635
> )
>
> with respect to a model that includes an intercept and 1 moderator. So, if
> I understand this correctly, then cluster wild bootstrapping only becomes
> relevant (i.e., has some advantages over RVE) when the model includes more
> than 1 moderator?
>
>
A more precise statement is that the difference between CWB and RVE (with
small-sample corrections as implemented in clubSandwich) hypothesis tests
appears to be quite small for null hypotheses involving a single
constraint. Probably the major use case where this holds is tests of a
single coefficient in a meta-regression model (regardless of whether that
model has a single predictor or multiple predictors). The difference in
calibration between CWB and RVE becomes more apparent for null hypotheses
involving _multiple_ constraints, such as omnibus tests of a model or
tests that the average effects are equal across levels of a categorical
predictor with 3 or more categories. My statements here are supported by
simulation findings in Joshi, Pustejovsky, and Beretvas (2022;
https://jepusto.com/publications/cluster-wild-bootstrap-for-meta-analysis/index.html).
The reason you might expect worse calibration for RVE-based tests of
multiple-constraint hypotheses is that it is harder to find a good
small-sample approximation to the null distribution of the test statistic
when you're dealing with more than one constraint (in the simple case of a
single constraint, the Satterthwaite approximation is well understood and
works pretty well). The CWB test does better here by substituting
brute-force computation for the hard mathy bits.
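
For concreteness, here is a minimal sketch of the kind of multiple-constraint
test I have in mind, using the dat.bcg example data from metafor. The alloc
moderator, the CR2 adjustment, and the particular seed are illustrative
choices on my part, not anything from your analysis:

library(metafor)
library(clubSandwich)
library(wildmeta)

# Meta-regression with a 3-level categorical moderator (allocation method)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma.mv(yi, vi, mods = ~ 0 + alloc, random = ~ 1 | trial, data=dat)

# RVE-based test (CR2 correction, small-sample HTZ F test) of the
# multiple-constraint null that the three average effects are equal:
Wald_test(res, constraints = constrain_equal(1:3), vcov = "CR2")

# Cluster wild bootstrap version of the same multiple-constraint test:
Wald_test_cwb(full_model = res, constraints = constrain_equal(1:3),
              R = 999, seed = 20240917)

This three-constraint setting is where the simulations suggest CWB's
calibration advantage shows up.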


> But then I also saw this issue:
>
> https://github.com/meghapsimatrix/wildmeta/issues/18
>
> about the possibility to do CWB for a model with just an intercept term.
> So is CWB in principle useful for this scenario?
>
>
I was interested in this as a hypothetical edge case more so than as a
practical issue. It would become relevant if one were trying to use CWB to
construct a confidence interval for an average effect size, because you'd
need to profile across different values of beta. But the wildmeta package
does not yet implement confidence intervals, so this is just a hypothetical
at this point.
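
To make the profiling idea concrete: a confidence interval could in
principle be built by test inversion, collecting the values of beta that a
CWB test fails to reject. Below is a purely hypothetical sketch (the
function cwb_ci and its arguments are mine; it assumes metafor,
clubSandwich, and wildmeta are loaded, that Wald_test_cwb() could handle an
intercept-only model, and that its result stores the p-value in a column
named p_val; none of this is currently implemented functionality):

# Hypothetical test-inversion CI for the average effect size
cwb_ci <- function(dat, beta_grid, alpha = 0.05, R = 1999) {
  accept <- vapply(beta_grid, function(b0) {
    dat$yi_c <- dat$yi - b0  # shift outcomes so H0: beta = b0 becomes beta = 0
    fit <- rma.mv(yi_c, vi, random = ~ 1 | study, data = dat)
    test <- Wald_test_cwb(full_model = fit, constraints = constrain_zero(1),
                          R = R, seed = 1)
    test$p_val > alpha  # keep the b0 values that the test does not reject
  }, logical(1))
  range(beta_grid[accept])  # approximate endpoints of a (1 - alpha) CI
}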


> And to answer the question in that issue: As was just discussed in another
> thread, it is possible to fit an rma.mv() model with only an intercept
> term where the coefficient is constrained to 0, with rma.mv(..., beta=0).
> This is undocumented, experimental, and implemented right now in a rather
> crude manner, but I am actually working on rma.mv() right now so that
> instead of profiling out the fixed effects (and then optimizing only over
> the variance/correlation components of the model), one can optimize over
> the fixed effects as well.


This is good to know! Thanks for the worked example.


> Then one can also constrain coefficients in meta-regression models to 0.
> This is already possible with rma() when fitting location-scale models:
>
> library(metafor)
>
> dat <- dat.bcg
> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
>
> res1 <- rma(yi, vi, mods = ~ 1, scale = ~ 1, data=dat, method="ML", optbeta=TRUE)
> res0 <- rma(yi, vi, mods = ~ ablat, scale = ~ 1, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
> res1
> res0
> fitstats(res0, res1)
>
> (with method="REML", leaving out a moderator is not the same as
> constraining its coefficient to zero, since in REML the model matrix
> affects the restricted log-likelihood, but under ML as shown above, these
> two approaches are identical).
>
> Best,
> Wolfgang
>
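
As a footnote to the REML caveat above: a quick way to see the difference,
assuming the experimental optbeta=/beta= arguments also accept
method="REML" (which I have not verified), would be to rerun the same
comparison on the same dat under REML, where the two fits should no longer
give the same restricted log-likelihood:

# Under REML, dropping ablat and constraining its coefficient to 0
# are no longer equivalent, so fitstats() should differ:
res1r <- rma(yi, vi, mods = ~ 1, scale = ~ 1, data=dat, method="REML", optbeta=TRUE)
res0r <- rma(yi, vi, mods = ~ ablat, scale = ~ 1, data=dat, method="REML", optbeta=TRUE, beta=c(NA,0))
fitstats(res0r, res1r)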
> > -----Original Message-----
> > From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org>
> > On Behalf Of Pearl, Brendan via R-sig-meta-analysis
> > Sent: Sunday, September 15, 2024 01:13
> > To: R Special Interest Group for Meta-Analysis
> > <r-sig-meta-analysis using r-project.org>
> > Cc: Pearl, Brendan <Brendan.Pearl using mh.org.au>
> > Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
> >
> > Hi James,
> >
> > Thank you for this.
> >
> > For anyone else who stumbles on this question, I also found that James
> > answered it here:
> > https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635
> >
> > ________________________________
> > From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org>
> > on behalf of James Pustejovsky via R-sig-meta-analysis
> > <r-sig-meta-analysis using r-project.org>
> > Sent: Saturday, 14 September 2024 10:54:55 PM
> > To: R Special Interest Group for Meta-Analysis
> > Cc: James Pustejovsky
> > Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
> >
> > Wald_test_cwb() is for testing null hypotheses specified by a constraint
> > or constraints on the model coefficients. In your MWE, you fit a summary
> > meta-analysis with only one beta coefficient, so the constraints can only
> > refer to that first coefficient (hence “Constraint indices must be less
> > than or equal to 1”).
> >
> > As an aside, note that the same error would occur if you use the more
> > basic Wald_test() from clubSandwich.
> >
> > James
> >
> > > On Sep 14, 2024, at 7:32 AM, Michael Dewey via R-sig-meta-analysis
> > > <r-sig-meta-analysis using r-project.org> wrote:
> > >
> > > Dear Brendan
> > >
> > > When I run your MWE (after inserting library(metafor)) I get
> > >
> > > Error in Wald_test_cwb(full_model = meta_analysis_output, constraints =
> > >   constrain_equal(1:3), : could not find function "Wald_test_cwb"
> > >
> > > Michael
> > >
> > >> On 14/09/2024 10:28, Pearl, Brendan via R-sig-meta-analysis wrote:
> > >> Hello,
> > >> I am trying to run a cluster wild bootstrap, but am getting the
> following
> > error:
> > >> Error in constrain_zero(constraints = constraints, coefs = coefs) :
> > >>   Constraint indices must be less than or equal to 1.
> > >> Question: What does this mean?
> > >> Thank you,
> > >> Brendan
> > >> Background (if relevant):
> > >> I am running a purely exploratory series of meta-analyses of the
> > >> relationships between several predictors and outcomes (i.e., n x m
> > >> meta-analyses).
> > >> There is non-independence within each predictor-outcome pair (some
> > >> studies report multiple effect sizes for the same group of
> > >> participants), and the effect sizes are nested.
> > >> I am following the general workflow outlined here:
> > >> https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures
> > >> and want to use cluster wild bootstrapping because some analyses have
> > >> very few studies (and cluster-robust inference methods led to very
> > >> wide confidence intervals).
> > >> Minimal working example:
> > >> ```{r}
> > >> dat_temp_mwe <- structure(list(
> > >>   study = c("A", "B", "C", "D", "E", "E", "F", "F", "F", "F", "G"),
> > >>   effect_id = c(11, 28, 73, 93, 115, 232, 236, 242, 252, 266, 284),
> > >>   Paper = c("AA", "BB", "CC", "DD", "EE1", "EE2", "FF1", "FF2", "FF3", "FF4", "GG"),
> > >>   Mean_age_when_outcome_measured = c(21, 19, 26, 19, 19, 21, 21, 21, 21, 19, 19),
> > >>   yi = structure(c(-0.0401817896328312, -0.0700000000000002, -0.151002873536528,
> > >>     -0.113328685307003, -0.139761942375159, -0.0392207131532808, -0.0487901641694324,
> > >>     -0.05, -0.041141943331175, -0.0421011760186351, -0.604315966853329),
> > >>     ni = c(1566, 844, 624, 355, 7449, 2135, 2410, 4853, 6912, 7842, 1202),
> > >>     measure = "GEN"),
> > >>   vi = c(0.00014647659162424, 0.000527143487544687, 0.00336452354486442,
> > >>     0.00116040765035603, 0.00667694039383453, 9.6295107522168e-05,
> > >>     9.44692075770055e-05, 0.000100003675148229, 2.50009187870589e-05,
> > >>     2.50009187870581e-05, 0.0292124283937479)),
> > >>   row.names = c(NA, 11L), class = c("escalc", "data.frame"),
> > >>   yi.names = "yi", vi.names = "vi",
> > >>   digits = c(est = 4, se = 4, test = 4, pval = 4, ci = 4, var = 4,
> > >>     sevar = 4, fit = 4, het = 4))
> > >>
> > >> V <- vcalc(vi,
> > >>   cluster = study,
> > >>   obs = effect_id,
> > >>   time1 = Mean_age_when_outcome_measured,
> > >>   data = dat_temp_mwe,
> > >>   rho = 0.8,
> > >>   phi = 0.9)
> > >>
> > >> meta_analysis_output <- rma.mv(
> > >>   yi,
> > >>   V = V,
> > >>   random = ~ 1 | study / Paper / effect_id,
> > >>   data = dat_temp_mwe,
> > >>   control = list(rel.tol = 1e-8))
> > >>
> > >> Wald_test_cwb(full_model = meta_analysis_output,
> > >>   constraints = constrain_equal(1:3),
> > >>   R = 99,
> > >>   seed = 20201229)
> > >> ```
>



