[R-meta] RVE or not RVE in meta-regressions with small number of studies?

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Apr 20 17:32:01 CEST 2023


Dear Sebastian,

I was hoping James (or Megha, but I am not sure if she is on this mailing list) would have jumped in here. But just some brief thoughts from my side:

My first question would be: What is the alternative? I assume the idea is to forgo RVE and just use the results from the multilevel model with correlated sampling errors in the V matrix (I think this is what you mean by CRVE). That approach might be fine but one has to hope that it provides a reasonable approximation to the underlying data generating mechanism. Also, Type I error rates might be inflated unless the number of studies (within each category) is sufficiently large.
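
To make this concrete, a minimal sketch of such a CHE model with CRVE in metafor/clubSandwich could look something like the following (using the dat.assink2016 example data that comes with metafor and 'deltype' as an illustrative categorical moderator; the assumed sampling correlation of 0.6 is just a placeholder to be varied in sensitivity analyses):

library(metafor)
library(clubSandwich)

dat <- dat.assink2016

# V matrix with correlated sampling errors within studies
# (rho = 0.6 is an assumed placeholder; vary it in sensitivity analyses)
V <- vcalc(vi, cluster = study, obs = esid, data = dat, rho = 0.6)

# CHE model with a categorical moderator
che <- rma.mv(yi, V, mods = ~ deltype, random = ~ 1 | study/esid, data = dat)
summary(che) # model-based (non-robust) inferences

# cluster-robust (CR2) inferences for the same model
robust(che, cluster = dat$study, clubSandwich = TRUE)

# robust omnibus test of the moderator (coefficients 2:3 are the
# non-reference levels of 'deltype' here)
Wald_test(che, constraints = constrain_zero(2:3), vcov = "CR2", cluster = dat$study)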

I am not a fan of simply pretending the V matrix is diagonal (I think this is what you mean by the HE model) because we know a priori that the model is then rather misspecified (all models are of course misspecified, but I would argue that this one is more misspecified than others).
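
In model terms, that amounts to something like the following (same data and moderator as in the sketch above), i.e., keeping the multilevel random-effects structure but treating the known within-study sampling covariances as zero:

# 'HE'-type model: sampling variances passed as a diagonal V, i.e., sampling
# errors within a study are (incorrectly) treated as independent
he <- rma.mv(yi, vi, mods = ~ deltype, random = ~ 1 | study/esid, data = dat)
summary(he)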

With RVE, you have to worry less about whether the model captures the dependencies appropriately, but you have to hope that the 'asymptotics' kick in for the approach to provide appropriate inferences. For testing, cluster wild bootstrapping should help to provide better control of the Type I error rate even when the number of studies is too small for the asymptotics to have kicked in.
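
For example, with the wildmeta package this could look something like the following sketch (assuming the CHE model with the moderator from above; the number of bootstrap replicates is an arbitrary choice):

library(wildmeta)

# cluster wild bootstrap test of the categorical moderator
# (coefficients 2:3 are the non-reference levels of 'deltype' in the model above)
set.seed(1234)
Wald_test_cwb(che, constraints = constrain_zero(2:3), R = 1999)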

One thing I hope people do not do is run RVE, find that results are no longer significant, and then conclude that RVE must lack power for their analysis. That might be the case, or RVE might be picking up dependencies that are not accounted for in the working model (which typically leads to larger SEs and more conservative inferences).

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of Röhl, Sebastian via R-sig-meta-analysis
>Sent: Tuesday, 18 April, 2023 16:13
>To: R Special Interest Group for Meta-Analysis
>Cc: Röhl, Sebastian
>Subject: [R-meta] RVE or not RVE in meta-regressions with small number of
>studies?
>
>Dear all,
>
>I came across an article in RER that argues that one could or should forgo RVE
>for the analysis of categorical moderators when the number of studies is small:
>
>Cao, Y., Grace Kim, Y.‑S., & Cho, M. (2022). Are Observed Classroom Practices
>Related to Student Language/Literacy Achievement? Review of Educational Research,
>003465432211306. https://doi.org/10.3102/00346543221130687
>
>Page 10: “We acknowledge the superiority of robust variance estimation (RVE) for
>handling dependent effect sizes. However, it has a few important limitations.
>First, it neither
>models heterogeneity at multiple levels nor provides corresponding hypothesis
>tests. Second, the power of the categorical moderator highly depends on the
>number of studies and features of the covariate (Tanner-Smith, Tipton, & Polanin,
>2016). When the number of studies is small, the test statistics and confidence
>intervals based on RVE can have inflated Type I error (Hedges et al., 2010;
>Tipton & Pustejovsky, 2015). Relating to our cases, many of our moderators had
>imbalanced distributions […]. Consequently, tests of particular moderators may be
>severely underpowered.”
>
>Of course, the first argument can be countered by using correlated hierarchical
>effects (CHE) models with RVE. However, in my experience the second argument is
>very relevant.
>
>How is this viewed here on the mailing list?
>
>In the social sciences, after all, we more often conduct meta-analyses with
>relatively small study corpora (n < 100 or even n < 50). In high-ranking journals
>in this research field (e.g., Psychological Bulletin, Review of Educational
>Research, Educational Research Review…), I very rarely see RVE / CRVE being used.
>
>In the mentioned types of moderator analyses with a small number of studies in
>one category, I also often face the problem that effects become non-significant
>under CRVE as soon as moderator levels are populated with fewer than 10-15
>studies. Joshi et al. (2022) also discuss RVE being (too) conservative in these
>cases. I have used cluster wild bootstrapping for significance testing of
>individual effects in such situations; however, this comes with the problems of
>missing SEs and CIs as well as long computation times.
>
>Right now, I am again facing the problem of model selection for a meta-analysis
>with about 50 studies and 500 effect sizes (correlations). Since we are dealing
>with multiple effect sizes within studies, I would choose a correlated
>hierarchical effects (CHE) model with CRVE, which also works very well for the
>main effects but again leads to the very large SEs mentioned above for the
>moderators. With a pure CHE model (which in my opinion still fits better than the
>pure HE model in the above-mentioned article by Cao et al.), the SEs are of
>course somewhat more moderate.
>
>Do you have any tips or hints for an alternative?
>
>Thank you for your help and comments!
>Kind regards,
>Sebastian
>
>****************************
>Dr. Sebastian Röhl
>Eberhard Karls Universität Tübingen
>Institut für Erziehungswissenschaft
>Tübingen School of Education (TüSE)
>Wilhelmstraße 31 / Raum 302
>72074 Tübingen
>
>Phone: +49 7071 29-75527
>Fax: +49 7071 29-35309
>E-Mail: sebastian.roehl at uni-tuebingen.de
>Twitter: @sebastian_roehl  @ResTeacherEdu
