[R-meta] Small-sample adjustment in robust versus coef_test function

Sarah Roesch roesch at cbs.mpg.de
Mon May 11 12:21:58 CEST 2020


Dear community,

I am currently running a three-level meta-analysis, and I aim to obtain cluster-robust tests and confidence intervals.

Therefore, I applied the "robust" function to my rma.mv object.

I then wanted to validate my results using the "coef_test" function from the clubSandwich package,
applying it to the rma.mv object just as I did with the robust function.

For my overall model (including NO moderators), I get perfectly identical results.

However, when it comes to moderator analyses, the results differ slightly depending on whether I use robust or coef_test:
the estimates are still identical, but the standard errors from coef_test are slightly larger, leading to smaller t-values.

For instance, the calls I used are:
library(metafor)      # for rma.mv() and robust()
library(clubSandwich) # for coef_test()
# three-level model: random intercepts for effect sizes (esID) and searches (searchID)
valenceeff <- rma.mv(yi, vi, mods = ~ valencesimple, random = list(~ 1 | esID, ~ 1 | searchID), tdist=TRUE, data=alldata)
summary(robust(valenceeff, cluster=alldata$searchID, adjust=TRUE))
coef_test(valenceeff, cluster=alldata$searchID, vcov="CR2")
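In case it helps, here is a self-contained version of the same comparison using the dat.konstantopoulos2011 dataset that ships with metafor (my own choice of dataset and moderator, purely for illustration; the pattern should be analogous):

library(metafor)
library(clubSandwich)
dat <- dat.konstantopoulos2011
# three-level model with a moderator; schools nested within districts
res <- rma.mv(yi, vi, mods = ~ year, random = ~ 1 | district/school, tdist=TRUE, data=dat)
summary(robust(res, cluster=dat$district, adjust=TRUE))
coef_test(res, cluster=dat$district, vcov="CR2")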

My guess is that this difference arises from the different small-sample adjustments?

In coef_test, I applied the bias-reduced linearization (CR2) adjustment proposed by Bell and McCaffrey (2002) and Pustejovsky and Tipton (2017),
whereas robust uses the factor n/(n−p) as its small-sample adjustment, where n is the number of clusters and p the number of model coefficients.
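If I understand the two adjustments correctly, the following sketch should make the relationship explicit (V0, n, and p are just my own illustration names; "CR0" is clubSandwich's unadjusted sandwich estimator):

# unadjusted ("CR0") cluster-robust variance-covariance matrix
V0 <- vcovCR(valenceeff, cluster = alldata$searchID, type = "CR0")
n  <- length(unique(alldata$searchID))  # number of clusters
p  <- length(coef(valenceeff))          # number of fixed-effect coefficients
sqrt(diag(V0 * n / (n - p)))            # n/(n-p)-adjusted SEs; should match robust()
sqrt(diag(vcovCR(valenceeff, cluster = alldata$searchID, type = "CR2")))  # CR2 SEs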

Can anyone explain the difference to me and point out which small-sample adjustment is preferable?

Thank you so much!

Sarah


