[R-meta] Constraint error when using Wald_test_cwb

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer using maastrichtuniversity.nl
Tue Sep 17 18:17:42 CEST 2024


Thanks for the feedback!

Just as a follow-up:

I just pushed an update to GitHub that makes an (undocumented) 'optbeta' argument available for rma.mv() as well. When optbeta=TRUE, the optimization is carried out not only over the variance/correlation components of the model but also over the fixed effects (by default, the fixed effects are profiled out). This also makes it possible to constrain a fixed effect to a given value in a meta-regression model. With this, these two models yield (essentially) identical results:

rma.mv(yi, vi, random = ~ 1 | trial, data=dat, method="ML", optbeta=TRUE)
rma.mv(yi, vi, mods = ~ ablat, random = ~ 1 | trial, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
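For completeness, a self-contained sketch of this comparison (assuming the development version of metafor from GitHub, since 'optbeta' is undocumented and not yet on CRAN; the data setup is taken from the location-scale example quoted further down in this thread):

```r
# Sketch: requires the development version of metafor from GitHub,
# where rma.mv() accepts the (undocumented) 'optbeta' argument.
library(metafor)

dat <- dat.bcg
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)

# Intercept-only model vs. meta-regression with the 'ablat'
# coefficient constrained to 0 (NA = freely estimated):
res1 <- rma.mv(yi, vi, random = ~ 1 | trial, data=dat, method="ML",
               optbeta=TRUE)
res0 <- rma.mv(yi, vi, mods = ~ ablat, random = ~ 1 | trial, data=dat,
               method="ML", optbeta=TRUE, beta=c(NA,0))
fitstats(res1, res0)  # (essentially) identical fit statistics
```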

Best,
Wolfgang

> -----Original Message-----
> From: James Pustejovsky <jepusto using gmail.com>
> Sent: Tuesday, September 17, 2024 16:38
> To: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Cc: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
>
> Hi Wolfgang,
>
> Responses inline below.
>
> Cheers,
> James
>
> On Tue, Sep 17, 2024 at 2:34 AM Viechtbauer, Wolfgang (NP)
> <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> > A follow-up question (for James) on my part.
> >
> > I saw this response by James:
> >
> > "If you only have 1 coefficient, then you don't need to go to the trouble of
> cluster wild bootstrapping--you can just use RVE with small sample corrections"
> (https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635)
> >
> > with respect to a model that includes an intercept and 1 moderator. So, if I
> > understand this correctly, then cluster wild bootstrapping only becomes relevant
> > (i.e., has some advantages over RVE) when the model includes more than 1
> > moderator?
>
> A more precise statement is that the difference between CWB and RVE (with small-
> sample corrections as implemented in clubSandwich) hypothesis tests appears to
> be quite small for null hypotheses involving a single constraint. Probably the
> major use-case where this holds is tests of a single coefficient in a meta-
> regression model (regardless of whether that model has a single predictor or
> multiple predictors). The difference in calibration between CWB and RVE becomes
> more apparent for null hypotheses involving _multiple_ constraints, such as for
> omnibus tests of a model or for tests that the average effects are equal across
> levels of a categorical predictor with 3 or more categories. My statements here
> are supported by simulation findings in Joshi, Pustejovsky, and Beretvas (2022;
> https://jepusto.com/publications/cluster-wild-bootstrap-for-meta-analysis/index.html).
> The reason you might expect worse calibration for RVE-based tests of
> multiple-constraint hypotheses is that it is harder to find a good small-sample
> approximation to the null distribution of the test statistic when you're dealing
> with more than one constraint (in the simple case of a single constraint, the
> Satterthwaite approximation is well understood and works pretty well). The CWB
> test does better here by replacing the hard mathy bits with brute-force
> computation.
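To make the contrast concrete, a hedged sketch of the two kinds of tests described above, using the dat.bcg data with the categorical 'alloc' moderator (three levels, so the omnibus test involves two constraints; the calls assume the clubSandwich and wildmeta interfaces):

```r
# Sketch: RVE with small-sample corrections (clubSandwich) vs.
# cluster wild bootstrapping (wildmeta) for a multi-constraint test.
library(metafor)
library(clubSandwich)
library(wildmeta)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma.mv(yi, vi, mods = ~ alloc, random = ~ 1 | trial, data=dat)

# Omnibus test that both 'alloc' coefficients are zero (2 constraints):
Wald_test(res, constraints = constrain_zero(2:3), vcov = "CR2")  # RVE (HTZ test)
Wald_test_cwb(res, constraints = constrain_zero(2:3), R = 999)   # CWB
```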
>
> > But then I also saw this issue:
> >
> > https://github.com/meghapsimatrix/wildmeta/issues/18
> >
> > about the possibility of doing CWB for a model with just an intercept term. So is
> > CWB in principle useful for this scenario?
>
> I was interested in this as a hypothetical edge case more so than as a practical
> issue. It would become relevant if one were trying to use CWB to construct a
> confidence interval for an average effect size, because you'd need to profile
> across different values of beta. But the wildmeta package does not yet implement
> confidence intervals, so this is just a hypothetical at this point.
>
> > And to answer the question in that issue: As was just discussed in another
> > thread, it is possible to fit an rma.mv() model with only an intercept term
> > where the coefficient is constrained to 0, with rma.mv(..., beta=0). This is
> > undocumented, experimental, and implemented right now in a rather crude manner,
> > but I am currently working on rma.mv() so that instead of profiling out
> > the fixed effects (and then optimizing only over the variance/correlation
> > components of the model), one can optimize over the fixed effects as well.
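A minimal sketch of the constrained intercept-only fit described here (again assuming the development version of metafor, since the 'beta' argument is undocumented and experimental):

```r
# Sketch: intercept-only model with the coefficient constrained to 0,
# so only the variance component is estimated (development metafor).
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma.mv(yi, vi, random = ~ 1 | trial, data=dat, beta=0)
res
```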
>
> This is good to know! Thanks for the worked example.
>
> > Then one can also constrain coefficients in meta-regression models to 0. This
> > is already possible with rma() when fitting location-scale models:
> >
> > library(metafor)
> >
> > dat <- dat.bcg
> > dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
> >
> > res1 <- rma(yi, vi, mods = ~ 1, scale = ~ 1, data=dat, method="ML", optbeta=TRUE)
> > res0 <- rma(yi, vi, mods = ~ ablat, scale = ~ 1, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
> > res1
> > res0
> > fitstats(res0, res1)
> >
> > (with method="REML", leaving out a moderator is not the same as constraining
> > its coefficient to zero, since in REML the model matrix affects the restricted
> > log-likelihood, but under ML as shown above, these two approaches are
> > identical).
> >
> > Best,
> > Wolfgang
>
> > -----Original Message-----
> > From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf Of Pearl, Brendan via R-sig-meta-analysis
> > Sent: Sunday, September 15, 2024 01:13
> > To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-project.org>
> > Cc: Pearl, Brendan <Brendan.Pearl using mh.org.au>
> > Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
> >
> > Hi James,
> >
> > Thank you for this.
> >
> > For anyone else who stumbles on this question, I also found that James
> > answered it here:
> > https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635

