[R-meta] rma, sandwich correction and very small data sets
Valeria Ivaniushina
v@|v@n|u@h|n@ @end|ng |rom gm@||@com
Wed Dec 9 16:21:52 CET 2020
Dear Wolfgang,
Thank you VERY much!
Thank you for correcting my code -- indeed, a random effect at the 1st level
is definitely needed!
A couple more questions, if I may:
1. There are too few cases for such a complex data structure, and that is a
serious limitation.
But I hope that even if the results can be considered only descriptive, they
still point in the right direction?
Especially since all three subsamples show quite similar results.
Is that a valid interpretation?
2. Given that the sample is small (and 3-level!), I assume that an outlier
analysis would be excessive. Is that right?
3. Does the same go for a publication bias analysis? (As James points out,
these tests do not have much power:
www.jepusto.com/publication/selective-reporting-with-dependent-effects/ )
4. And there is no power for a mediation analysis, so I shouldn't even
attempt it?
5. A question about estimators:
The robust() function in metafor uses a sandwich-type estimator, and with
adjust = TRUE it applies a small-sample adjustment.
The clubSandwich package provides a number of estimators with different
small-sample corrections. They give somewhat different results; some are
very close to the robust() output.
Is clubSandwich's CR2 (for example) better than robust.rma?
Or, if the CR estimators from clubSandwich are not clearly preferable, can I
just use robust.rma?
Best,
Valeria
On Wed, Dec 9, 2020 at 1:28 PM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> Dear Valeria,
>
> Unless you have very good reasons to assume that estimates within studies
> are homogeneous, you should always add a random effect at the estimate
> level to the model. See:
>
> http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011
>
> and esp. the "A Common Mistake in the Three-Level Model" section.
>
> So, I would do:
>
> wb$ID_estimate <- 1:nrow(wb)
>
> random = list(~ 1 | ID_estimate, ~ 1 | ID_study, ~ 1 | ID_database)
>
> Also, if you use data=wb, you do not need wb$ in the model call.
>
> Finally, SE_Influence sounds like this is a variable for the standard
> errors. The second argument of rma.mv() is for specifying the sampling
> *variances* (or an entire var-cov matrix).
>
> So, to summarize:
>
> eff1 <- rma.mv(yi=EFFECT_SIZE_Influence, V=SE_Influence^2,
> random = list(~ 1 | ID_estimate, ~ 1 | ID_study, ~ 1 |
> ID_database),
> tdist=TRUE, data=wb)
>
> However, with the number of levels you show, I would indeed be worried
> about fitting such a complex model with so little data. You won't get
> precise estimates of the variance components and hence they can be all over
> the place.
>
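> For example, you can check how well these variance components are actually
> identified with profile likelihood plots and the corresponding CIs (just a
> quick sketch based on the model above):
>
> profile(eff1, sigma2=1)  # repeat for sigma2=2 and sigma2=3
> confint(eff1, sigma2=1)  # profile-likelihood CI for the first component
>
> Flat profiles and very wide CIs indicate that the data cannot really pin
> the variance components down.
>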
> Also, cluster-robust inference methods work asymptotically, that is, when
> the number of levels of the clustering variable gets large. With 5 or 7
> levels for 'databases', I would say we are rather far away from
> 'asymptotically'. The clubSandwich package you are using for this includes
> small-sample corrections which should help a bit, but I would still
> question the use of such methods with such low k at the clustering level.
> Maybe James Pustejovsky (the author of clubSandwich) can chime in here.
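>
> For what it's worth, the Satterthwaite degrees of freedom that clubSandwich
> reports make the problem visible (a sketch based on your coef_test() call):
>
> library(clubSandwich)
> coef_test(eff1, vcov = "CR2", cluster = wb$ID_database, test = "Satterthwaite")
>
> With only 5-7 databases, the df in that output will be small, which is
> exactly the concern.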
>
> As for combining the results of multiple (independent) meta-analyses, see:
>
> http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
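>
> A minimal sketch of the idea (res1, res2, res3 stand for the three fitted
> models; the names are placeholders):
>
> dat.comp <- data.frame(estimate = c(coef(res1), coef(res2), coef(res3)),
>                        stderror = c(res1$se, res2$se, res3$se),
>                        meta     = c("eff1", "eff2", "eff3"))
>
> # pooled estimate across the three meta-analyses
> rma(estimate, sei=stderror, method="FE", data=dat.comp)
>
> # Wald-type test of whether the three estimates differ
> rma(estimate, sei=stderror, mods = ~ meta, method="FE", data=dat.comp)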
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: R-sig-meta-analysis [mailto:
> r-sig-meta-analysis-bounces using r-project.org]
> >On Behalf Of Valeria Ivaniushina
> >Sent: Monday, 07 December, 2020 18:00
> >To: R meta
> >Subject: [R-meta] rma, sandwich correction and very small data sets
> >
> >Dear colleagues,
> >
> >I am doing a 3-level meta-analysis with a small number of studies and a
> >small number of clusters.
> >1st level - model, 2nd level - study, 3rd level - database.
> >
> >The effect I am interested in can be specified in different ways. Experts
> >in the field advised me to run a separate meta-analysis for each
> >specification and then combine the results -- a kind of meta-meta-analysis.
> >
> >I have several questions:
> >
> >1) Is this code correct?
> >First I do REML:
> >eff1 <- rma.mv(yi=wb$EFFECT_SIZE_Influence,
> >V=wb$SE_Influence,random = list(~1 | ID_study, ~1 | ID_database),
> >tdist=TRUE, data=wb)
> >
> >Then with this object I use sandwich, to get cluster-robust standard
> >errors, clustering at the highest level of nesting:
> >coef_test(eff1, vcov = "CR2",cluster = wb$ID_database)
> >
> >2) I am worried that the numbers of clusters are too small -- are the
> >results reliable?
> >eff1: 17 models, 12 studies, 5 databases
> >eff2: 8 models, 5 studies, 5 databases
> >eff3: 11 models, 9 studies, 7 databases
> >
> >3) The variance distribution is vastly different between the three models
> >-- what does that tell me?
> >eff1: 1st level 4%, 2nd level 0%, 3rd level 96%
> >eff2: 1st level 100%, 2nd level 0%, 3rd level 0%
> >eff3: 1st level 15%, 2nd level 0%, 3rd level 85%
> >
> >4) How can I combine the results of three meta-analyses?
> >
> >Best,
> >Valeria
>