[R-meta] Multivariate data: RVE imputing covariance matrices

Bernard Fernou bernard.fernou sending from gmail.com
Wed Apr 21 21:57:19 CEST 2021


Dear James

Thank you so much for your support!

Indeed, the NaNs are returned only for the categories assessed by one or
two studies.
I also ran into an error with the Wald_test function when trying to run
the robust F-test, but this again seems to be related to the outcome
categories with very few studies. Thanks to your reply, I noticed that
everything works fine once those categories are excluded.

Your help is truly appreciated

All the best
Bernard

On Thu, Apr 15, 2021 at 7:19 PM James Pustejovsky <jepusto using gmail.com>
wrote:

> Hi Bernard,
>
> Responses inline below, marked with JEP.
>
> Kind Regards,
> James
>
> On Fri, Apr 9, 2021 at 4:36 AM Bernard Fernou <bernard.fernou using gmail.com>
> wrote:
>
>>
>> *Question 1.*
>>
>> Could you confirm that the robust variance estimation is appropriate for
>> our data (given that dependence between effect sizes is produced not only
>> by the presence of several outcomes, but also by the presence of several
>> independent variable measures)?
>>
>
> JEP: Yes. If your data are bivariate correlations between x and y, it
> makes no difference (statistically speaking) whether you interpret one
> variable as the IV and the other as the outcome.  If you have multiple
> correlations estimated from a common sample of observations, then there
> will be dependence in the effect size estimates.
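>
> For illustration, a minimal sketch of imputing a working covariance matrix
> for such dependent estimates, assuming a data frame dat with sampling
> variances vi and a study identifier study, and an assumed within-study
> correlation of r = 0.6 (all names and the value of r are placeholders):
>
>     library(clubSandwich)
>
>     # Impute a block-diagonal covariance matrix, treating effect sizes
>     # from the same study as correlated at an assumed level r
>     V_mat <- impute_covariance_matrix(vi = dat$vi,
>                                       cluster = dat$study,
>                                       r = 0.6)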
>
>
>> *Question 2.*
>>
>> Is there an approach that should be clearly preferred (we tend to
>> believe that the CSE approach would be the most suitable), and does the
>> implementation of the various models use an appropriate syntax?
>>
>>
> JEP: First, the issue of returning NaNs. I can't say for sure without
> access to your data, but this may be happening because some outcome
> category is never observed in the data, or is observed only very
> infrequently. Have you counted how many effect size estimates you have in
> each category?
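>
> A quick way to check this, assuming a data frame dat with a category
> variable and a study identifier (both placeholder names):
>
>     # Number of effect size estimates per category
>     table(dat$category)
>
>     # Number of distinct studies contributing to each category
>     tapply(dat$study, dat$category, function(s) length(unique(s)))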
>
> JEP: Second, to the question of which working model to use. Either the CHE
> or the SCE model seems like it could be appropriate here. One of the
> main differences between the two models is that CHE uses a study-level
> random effect that is common across categories. As a consequence, it treats
> the effect sizes from one category as *partially informative* about the
> effect sizes from other categories. The literature on multi-level modeling
> talks about this phenomenon as "partial pooling" or "borrowing of strength"
> across effect sizes in the same study. Along with "borrowing of strength,"
> CHE also assumes that there is a common structure to the heterogeneity
> within each category, i.e., the between-study variance in true effect sizes
> is the same across categories. In contrast, the SCE model uses ONLY the
> effect size estimates within each category to estimate the corresponding
> average effect. Using only the direct information can be
> cleaner and easier to explain, but will usually yield less precise
> estimates than an approach that involves borrowing of strength. SCE also
> allows each category of effects to have a different between-study variance,
> which can be useful (if the flexibility is needed) but costs something in
> precision, especially if there are only a few effect sizes within a
> category.
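>
> As a rough sketch of fitting the two working models with metafor (again
> assuming columns yi, vi, study, esid, and category in dat, plus the V_mat
> imputed above; the correlation r = 0.6 is a placeholder):
>
>     library(metafor)
>
>     # CHE: study-level random effect shared across categories, plus a
>     # within-study (effect-size-level) random effect
>     che_fit <- rma.mv(yi, V = V_mat,
>                       mods = ~ 0 + category,
>                       random = ~ 1 | study/esid,
>                       data = dat, sparse = TRUE)
>
>     # Robust (CR2) tests of the category-specific average effects
>     coef_test(che_fit, vcov = "CR2")
>
>     # Robust F-test that the average effects are equal across categories
>     Wald_test(che_fit,
>               constraints = constrain_equal(1:nlevels(factor(dat$category))),
>               vcov = "CR2")
>
>     # SCE: fit the working model separately within each category, so
>     # nothing is pooled across categories. Categories with only one or
>     # two studies may fail here, as noted above.
>     sce_fits <- lapply(split(dat, dat$category), function(d) {
>       V_d <- impute_covariance_matrix(vi = d$vi, cluster = d$study, r = 0.6)
>       rma.mv(yi, V = V_d, random = ~ 1 | study/esid, data = d)
>     })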
>
>
>> *Question 3.*
>>
>> Is it correct to anticipate within-study heterogeneity in true effect
>> sizes across the measures of the outcome/independent variable, even though
>> most of the studies (70%) used only one combination of outcome and
>> independent variable measures?
>>
>
> JEP: Yes. The within-study heterogeneity term captures variation in the
> true effect sizes within a study, above and beyond variation that is
> explained by the category of the outcome/IV. If the categories explain all
> of the within-study variation, then within-study heterogeneity should be
> estimated as something near zero.
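>
> With the CHE model sketched above, the two estimated variance components
> can be inspected directly (che_fit is the placeholder name used earlier):
>
>     # First component: between-study variance; second component:
>     # within-study variance
>     che_fit$sigma2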
>
>
