[R-meta] robust error is smaller than model-based error

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Fri Feb 16 13:42:46 CET 2024


Hi all,

Chiming in here. Sure, it can happen that the model-based test is not significant and the robust one is. There is no guarantee that robust variance estimation is going to lead to a more conservative estimate of the standard error(s). I don't think that this is immediately weird. In fact, the example below illustrates that this can happen (the model-based test is not significant, while robust(res, cluster=article) yields a significant test).

And yes, robust(res, cluster=article) runs just fine in the case below, but here the robust variance estimation doesn't capture all of the dependencies that are assumed to be present according to the working model. By using 'article' as the cluster variable, RVE will assume that estimates from different articles are independent, while the working model allows for dependency across different articles due to the crossed species random effects. That is apparently an important source of dependency, since if we ignore this, the model-based SE also changes quite dramatically:

# same model, but without the crossed random effect for species
res0 <- rma.mv(yi, vi, random = list(~ 1 | article, ~ 1 | esid), data=dat)
res0

And now the model-based SE is quite similar to robust(res, cluster=article) or robust(res0, cluster=article). But according to an LRT comparing res0 with res, one should prefer the latter (i.e., the model that includes the species random effect):

# LRT: does adding the crossed species random effect improve the fit?
anova(res, res0)
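To make the comparison concrete, one can rerun the full example from this thread and collect the standard errors side by side (a sketch; it assumes the dat.lim2014 dataset used in Yefeng's example below is available in your installed version of metafor):

```r
library(metafor)

# reproduce the setup from the thread
dat <- dat.lim2014$o_o_unadj
dat <- escalc(measure="ZCOR", ri=ri, ni=ni, data=dat)
dat$esid <- 1:nrow(dat)

# working model with crossed species random effect, and the reduced model
res  <- rma.mv(yi, vi, random = list(~ 1 | article, ~ 1 | esid, ~ 1 | species), data=dat)
res0 <- rma.mv(yi, vi, random = list(~ 1 | article, ~ 1 | esid), data=dat)

# model-based and cluster-robust SEs side by side
round(c(model_based       = res$se,
        model_based_no_sp = res0$se,
        robust_res        = robust(res,  cluster = article)$se,
        robust_res0       = robust(res0, cluster = article)$se), 4)
```

This just lines up the four SEs discussed above in one vector, so the point about the model-based SE from res0 being close to the robust ones is easy to see.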

As James mentioned, one would need to use cluster-robust methods that allow for multi-way clustered standard errors, but such methodology has not been developed in the context of such models.

Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Yefeng Yang via R-sig-meta-analysis
> Sent: Friday, February 16, 2024 06:02
> To: James Pustejovsky <jepusto using gmail.com>; R Special Interest Group for Meta-
> Analysis <r-sig-meta-analysis using r-project.org>
> Cc: Yefeng Yang <yefeng.yang1 using unsw.edu.au>
> Subject: Re: [R-meta] robust error is smaller than model-based error
>
> Hi James,
>
> Thanks for your reply.
>
> On point 1, what I mean is that if we use the model-based SE to test the null
> hypothesis for the average effect, we get a non-significant result (p > 0.05). But
> if we use the robust error (as returned by robust()), we get a significant effect.
> The result itself is a bit weird.
>
> Regarding your comment on `robust()`, let me use a reproducible example to
> explain what I mean (`metafor` is amazing - you can find all sorts of data
> structures you might be interested in).
>
> # load package and data
> library(metafor)
> library(clubSandwich)
> dat <- dat.lim2014$o_o_unadj
>
> # calculate zr and sampling variances
> dat <- escalc(measure="ZCOR", ri=ri, ni=ni, data=dat)
>
> # create effect size id variable
> dat$esid <- 1:nrow(dat)
>
> # fit a multilevel model with a non-nested random-effects structure
> res <- rma.mv(yi, vi,  random = list(~ 1 | article, ~ 1 | esid, ~ 1 | species),
> data=dat)
>
> But we can still use `robust()` to calculate the cluster-robust error with the
> CR1 adjustment:
> # robust error with CR1 adjustment
> robust(res, cluster = article, adjust = TRUE)
>
> As expected, clubSandwich cannot be used to calculate the robust error for such
> a model:
> coef_test(res, vcov = "CR1", cluster = dat$article)
>
> Also, if we use the CR2 adjustment, `robust()` does not work either:
> # robust error with CR2 adjustment
> robust(res, cluster = article, clubSandwich = TRUE)
>
> I might have misunderstood something or made a mistake, but I would be grateful
> if you could explain a bit.
>
> Best,
> Yefeng
> ________________________________
> From: James Pustejovsky <jepusto using gmail.com>
> Sent: 16 February 2024 15:18
> To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-
> project.org>
> Cc: Yefeng Yang <yefeng.yang1 using unsw.edu.au>
> Subject: Re: [R-meta] robust error is smaller than model-based error
>
> Hi Yefeng,
>
> On point 1, I am not sure what your question is. From inspecting the source code
> of metafor::robust(), the function is not set up to handle models with crossed
> random effects. I'm not at all sure what it does if you feed it a model with
> crossed random effects, but I would be very cautious about interpreting the
> output. Perhaps Wolfgang can comment on whether robust() is meant to accommodate
> models with crossed random effects.
>
> On point 2, I can verify that clubSandwich does not support CRVE for models with
> crossed random effects. Cameron, Gelbach, and Miller (2011) describe multi-way
> clustered standard errors, but only for ordinary least squares models. As far as
> I am aware, the statistical theory for multi-way clustered standard errors has
> not been developed for models that have crossed random effects and the extension
> from Cameron, Gelbach and Miller is not obvious. So if you want to stay on solid
> ground in terms of statistical theory, I think your best approach might be just
> to do a good job of developing and checking the model, and then rely on the
> model-based SEs for inference.
>
> James
>
> On Thu, Feb 15, 2024 at 7:37 PM Yefeng Yang via R-sig-meta-analysis <r-sig-meta-
> analysis using r-project.org<mailto:r-sig-meta-analysis using r-project.org>> wrote:
> Dear community,
>
> I (or, more precisely, my collaborator) am helping with a meta-analysis with
> dependent effect sizes. We used a multilevel model with effect size ID, study
> ID, and species ID as random effects. We also used RVE to calculate the
> robust error. I have two questions.
>
>   1.
> The test of the model coefficient based on RVE indicates a significant effect (p <
> 0.05), while the test based on the model-based error (we call it the naive/original
> error) shows a non-significant effect (p > 0.05). I used `robust()` in `metafor`
> with the `CR1` correction (`clubSandwich` is not working in my case; see below).
> Sorry, I do not have the raw data, so there is no reproducible example.
>   2.
> How can one calculate the robust error for models with a non-nested random-effects
> structure? This issue has troubled me for a long time. Precisely, in my case,
> because effect size ID is nested within study ID, it is easy to calculate the
> robust error (using either `robust()` or `clubSandwich`). However, I also have
> species ID as a random effect (a kind of crossed random effect). In such a case
> `clubSandwich` does not work. `robust()` still works, but we can only use the
> `CR1` correction.
>
> Regards,
> Yefeng

