[R-meta] Small-sample adjustment in robust versus coef_test function

James Pustejovsky jepusto at gmail.com
Tue May 12 05:01:55 CEST 2020


Hi Sarah,

There are two differences between the methods:

- the metafor robust() method applies an ad hoc small-sample adjustment
to the standard errors and then uses (k - p) as the degrees of freedom
for t-tests, where k is the number of independent studies (clusters)
and p is the number of predictors in the model. These are rough
corrections: they generally work okay for overall models with no
moderators, but their justification breaks down in many instances when
you're running meta-regressions with moderator variables.
- the clubSandwich CR2 method applies the bias-reduced linearization
(BRL) adjustment to the standard errors and then uses a Satterthwaite
approximation for the degrees of freedom of the t-tests. Both of these
corrections "adapt" to the model that you're trying to estimate (see
the sketch after this list).

Tipton (2015) compared both of these methods and found that the CR2
correction (the method implemented in the clubSandwich package)
outperformed the small-sample correction implemented in metafor, in that
it provides hypothesis tests with Type I error rates closer to the
nominal level. In Tipton & Pustejovsky (2015), we argued that the BRL
correction and Satterthwaite degrees of freedom should become the
default methods because they work well in instances where small-sample
correction is needed and they fade away (becoming essentially identical
to the ad hoc corrections or large-sample methods) in instances where it
is not.

One way to interpret what's going on is to look at the Satterthwaite
degrees of freedom of the t-tests. If any are much smaller than
(k - p), that means there will be differences between the ad hoc
corrections and the BRL + Satterthwaite corrections, and (we argued) it
would be better to report the latter.
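
A quick way to check (a sketch, reusing the hypothetical fit above):

# compare the Satterthwaite df from the CR2 tests against k - p
ct <- coef_test(fit, cluster = dat$study, vcov = "CR2")
ct                               # df well below k - p flags a divergence
k <- length(unique(dat$study))   # number of independent clusters
p <- length(coef(fit))           # number of model coefficients
k - p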

James


Tipton, E. (2015). Small sample adjustments for robust variance
estimation with meta-regression. *Psychological Methods*, *20*(3),
375–393. https://doi.org/10.1037/met0000011

Tipton, E., & Pustejovsky, J. E. (2015). Small-sample adjustments for tests
of moderators and model fit using robust variance estimation in
meta-regression. *Journal of Educational and Behavioral Statistics*, *40*(6),
604–634. https://doi.org/10.3102/1076998615606099

On Mon, May 11, 2020 at 5:23 PM Sarah Roesch <roesch at cbs.mpg.de> wrote:

>
> Dear community,
>
> I am currently running a three-level meta-analysis and I aim for cluster
> robust tests and confidence intervals.
>
> Therefore, I applied the "robust" function to my rma.mv object.
>
> I then wanted to validate my results using the "coef_test" function
> from the clubSandwich package, applying it to the rma.mv object just
> as I did with the robust function.
>
> For my overall model (including NO moderators), I get perfectly identical
> results.
>
> However, when it comes to moderator analyses, the results differ
> slightly depending on whether robust or coef_test is used.
> Specifically, the estimates are still identical; however, the standard
> errors seem to be slightly larger for the coef_test function, leading
> to smaller t-values.
>
> For instance, the calls I used are:
>
> valenceeff <- rma.mv(yi, vi, mods = ~ valencesimple,
>                      random = list(~ 1 | esID, ~ 1 | searchID),
>                      tdist = TRUE, data = alldata)
> summary(robust(valenceeff, cluster = alldata$searchID, adjust = TRUE))
> coef_test(valenceeff, cluster = alldata$searchID, vcov = "CR2")
>
> I guess that this difference occurs by virtue of different small-sample
> adjustments?
>
> In coef_test, I applied the bias-reduced linearization adjustment proposed
> by Bell and McCaffrey (2002) and Pustejovsky and Tipton (2017),
> whereas robust uses the factor n/(n−p) as a small-sample adjustment.
>
> Can anyone explain the difference to me and point out which
> small-sample adjustment is preferable?
>
> Thank you so much!
>
> Sarah
>



