[R-meta] rma.mv meta-regression

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Jan 5 09:38:41 CET 2021


Further comments from my side below as well.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
>On Behalf Of James Pustejovsky
>Sent: Monday, 04 January, 2021 22:25
>To: Danka Puric
>Cc: R meta
>Subject: Re: [R-meta] Dealing with effect size dependence with a small
>number of studies
>
>Hi Danka,
>
>Responses inline below.
>
>Kind Regards,
>James
>
>On Mon, Jan 4, 2021 at 5:41 AM Danka Puric <djaguard at gmail.com> wrote:
>
>> Hi everyone,
>>
>> Apologies for the long post and lots of questions.
>>
>> We are doing a meta-analysis where a single study sometimes included more
>> than one subsample, and the same subsample (the same group of participants)
>> sometimes yielded more than one effect size.
>>
>> 1. Following the Berkey et al. (1998) example in metafor, we tried fitting
>> the following "basic" model:
>> nested_UN <- rma.mv(ES_corrected, SV, random = ~ IDeffect | IDstudy,
>> struct = "UN", data = MA_dat_raw)
>> where individual effect sizes are nested within studies. This model,
>> however, produces profile likelihood plots which have flat parts (both for
>> sigma2.1 and sigma2.2), which (if I'm not mistaken) indicates model
>> overparametrization. We believe this is most likely due to a small number
>> of effect sizes (k = 69, from 53 subsamples, from 20 studies).

This model should not have any variance components called "sigma2.1" or "sigma2.2". When using the "random = ~ IDeffect | IDstudy" notation, you should get "tau2" and "rho" values.

However, this model doesn't make much sense. I assume that different values of "IDeffect" are just used to differentiate multiple effects within the same study, but the levels are not meaningful in themselves (as opposed to the Berkey example, where the two levels of the 'inner' factor differentiate the two different outcomes). It would make more sense to use 'random = ~ IDeffect | IDstudy, struct = "CS"' which is in essence the same as 'random = ~ 1 | IDstudy / IDeffect'. See:

http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011
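
For concreteness, a minimal sketch of the two equivalent specifications (using the variable names from your post):

library(metafor)

# 'inner | outer' notation with a compound symmetric structure ...
res_cs <- rma.mv(ES_corrected, SV,
                 random = ~ IDeffect | IDstudy, struct = "CS",
                 data = MA_dat_raw)

# ... is in essence the same model as the nested (multilevel) notation
res_ml <- rma.mv(ES_corrected, SV,
                 random = ~ 1 | IDstudy/IDeffect,
                 data = MA_dat_raw)

# the fits are identical; the parameterizations map onto each other via
# tau^2 = sigma^2.1 + sigma^2.2 and rho = sigma^2.1 / tau^2
logLik(res_cs); logLik(res_ml)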

>> We tried a similar model with random = ~ IDeffect | IDsubsample, but this
>> model did not even converge (I assume because the number of effect sizes
>> per subsample is even smaller than the number of ES per study).

This is probably again related to using struct="UN", which is (probably) not appropriate here.
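
To diagnose this directly, you can profile the variance components of the fitted model; a profile that is flat (or that only peaks when zoomed in, as you describe) is the telltale sign. A sketch, assuming the nested specification above:

library(metafor)

res <- rma.mv(ES_corrected, SV,
              random = ~ 1 | IDstudy/IDeffect,
              data = MA_dat_raw)

# profile each variance component in turn; a flat likelihood profile
# indicates that the component is not identifiable from the data
profile(res, sigma2 = 1)                    # between-study component
profile(res, sigma2 = 2)                    # within-study component
profile(res, sigma2 = 2, xlim = c(0, 0.01)) # zoomed in, as in your post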

>> Are we correct in concluding that a multi-level model cannot be properly
>> fit with the data that we have, and that an alternative approach (RVE or
>> effect size aggregation) is better suited to the data?
>
>You have the wrong syntax here. If you want to specify a multi-level
>meta-analysis model in which effect sizes are nested within studies, use the "/"
>character to indicate nesting:
>  nested_UN <-rma.mv(ES_corrected, SV, random = ~ IDstudy / IDeffect,
>data=MA_dat_raw)
>Or if you want to include sub-samples as an intermediate level:
>  nested_UN <-rma.mv(ES_corrected, SV, random = ~ IDstudy / IDsubsample /
>IDeffect, data=MA_dat_raw)

It should be: "random = ~ 1 | IDstudy / IDeffect" or "random = ~ 1 | IDstudy / IDsubsample / IDeffect".
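
That is, in runnable form:

library(metafor)

# two-level model: effect sizes nested within studies
nested2 <- rma.mv(ES_corrected, SV,
                  random = ~ 1 | IDstudy/IDeffect,
                  data = MA_dat_raw)

# three-level model: effect sizes nested within subsamples within studies
nested3 <- rma.mv(ES_corrected, SV,
                  random = ~ 1 | IDstudy/IDsubsample/IDeffect,
                  data = MA_dat_raw)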

>Both of these will give estimates of average effect size and variance
>component estimates. However, the corresponding standard errors of the
>average effect sizes are based on the assumption that the entire model is
>correctly specified. RVE relaxes that assumption. Thus, the decision to use
>RVE or not should be based on a judgement about the plausibility of the
>model's assumptions (rather than on whether you can get a model to
>converge).
>
>> 2. If we want to use RVE, would the following model which includes random
>> effects at all three levels (effect size, subsample, study) be appropriate
>> in combination with clubSandwich package robust coefficient estimates?
>> model <- rma.mv(ES_corrected, SV, random = ~ 1 | IDstudy / IDsubsample /
>> IDeffect, data = MA_dat_raw)
>> coef_test(model, vcov = "CR2")
>> Or should something else be done in order to adequately address the issue
>> of effect size dependence?
>
>This seems fine. One step better would be to consider whether the effect
>size estimates within a given sub-sample have correlated sampling errors.
>This would be the case, for instance, if the effect sizes are for different
>outcome measures (or measures of the same outcome at different points in
>time), assessed on the same sub-sample of individual participants. Details
>on how to do this can be found here:
>https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/
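
To sketch that approach in code (the sampling correlation r = 0.6 below is just a placeholder assumption, not a recommendation; see James's post for guidance on choosing or varying it):

library(metafor)
library(clubSandwich)

# construct a block-diagonal sampling covariance matrix, assuming effect
# sizes from the same subsample have correlated sampling errors
V <- impute_covariance_matrix(vi = MA_dat_raw$SV,
                              cluster = MA_dat_raw$IDsubsample,
                              r = 0.6)

# then pass V in place of the vector of sampling variances
model_V <- rma.mv(ES_corrected, V,
                  random = ~ 1 | IDstudy/IDsubsample/IDeffect,
                  data = MA_dat_raw)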
>
>> 3. The variances for this model are:
>>
>> Variance Components:
>>             estim    sqrt  nlvls  fixed                        factor
>>
>> sigma^2.1  0.0589  0.2427     20     no                       IDstudy
>> sigma^2.2  0.0250  0.1583     53     no           IDstudy/IDsubsample
>> sigma^2.3  0.0014  0.0373     69     no  IDstudy/IDsubsample/IDeffect
>>
>> In other words, there is very little variance at the level of IDeffect,
>> after Study and Subsample have been taken into account. The profile
>> likelihood plot for sigma^2.3 does, however, appear to peak at the
>> corresponding value when "zoomed in" (with xlim=c(0,0.01)).
>> Should we consider this a satisfactory model, or is the variance at the
>> level of IDeffect too small to be meaningful? Presumably, this has to do
>> with the fact that most subsamples (43 out of 53) contribute only one
>> effect size to the meta-analysis, 8 subsamples contribute 2 effect sizes
>> each, and two subsamples contribute 5 each.
>>
>> Would an acceptable alternative model be:
>> nested <- rma.mv(ES_corrected, SV, random = ~ 1 | IDstudy/IDeffect,
>> data=MA_dat_raw)
>> Here, we've excluded random effects at the subsample level, because it made
>> more sense to include random effects at the level of individual effect
>> sizes, and the two variables overlap substantially. The variances for this
>> model seem adequate (and their profile plots look fine, too).
>>
>> Variance Components:
>>             estim    sqrt  nlvls  fixed            factor
>>
>> sigma^2.1  0.0678  0.2604     20     no           IDstudy
>> sigma^2.2  0.0150  0.1223     69     no  IDstudy/IDeffect
>>
>The nice thing about RVE is that the standard errors for the average effect
>are calculated in a way that does not require the correct specification of
>the random effects structure. As a result, you should get very similar
>standard errors regardless of whether you include random effects for all
>three levels or whether you exclude a level. However, the variance
>component estimates are still based on an assumption that the model is
>correctly specified. I think it would therefore be preferable to use the
>model that captures the theoretically relevant levels of variation, so in
>this case, all three levels.

Agree. I would go with the IDstudy/IDsubsample/IDeffect model.
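
In runnable form, that model plus the robust tests from point 2 would be:

library(metafor)
library(clubSandwich)

# three-level model: effect sizes nested in subsamples nested in studies
model <- rma.mv(ES_corrected, SV,
                random = ~ 1 | IDstudy/IDsubsample/IDeffect,
                data = MA_dat_raw)

# CR2 robust test of the average effect; cluster at the highest level
# (studies), since subsamples are nested within studies
coef_test(model, vcov = "CR2", cluster = MA_dat_raw$IDstudy)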

>> 4. Finally, we are also interested in examining the effects of a moderator
>> variable which defines different outcomes. So, in cases when one subsample
>> produces more than one effect size - sometimes these effect sizes belong to
>> the same level of the moderator variable (same outcome under different
>> circumstances) and sometimes they belong to different levels of the
>> moderator (different outcomes). Theoretically, we would expect "same-level"
>> ESs to be more correlated than "different-level" ones, but with the small
>> number of subsamples that report more than one ES, this seems impossible to
>> model. Does the use of clubSandwich robust coefficient tests already take
>> care of this?
>
>It depends on what you mean by "take care of" this issue. RVE does not
>really solve the problem of how to model within- versus between-sample
>variation in a predictor, but it does mean that you can be less worried
>about getting the variance structure exactly correct. To address the issue
>you raise, one thing you could do is include a version of the moderator
>that is centered within each study, in addition to the study-level mean of
>the moderator. This would let you parse out "same-level" versus
>"different-level" variation in the moderator. However, with so few studies
>that have more than one level of the moderator, the within-study version of
>the predictor will have very little variation and so it will come with a
>large standard error.
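
To make that suggestion concrete, a sketch (the moderator name 'outcome_type' is made up for illustration; assume it is numeric or dummy-coded):

library(metafor)
library(clubSandwich)

# split the hypothetical moderator into a between-study component (the
# study mean) and a within-study deviation from that mean
MA_dat_raw$out_between <- ave(MA_dat_raw$outcome_type, MA_dat_raw$IDstudy)
MA_dat_raw$out_within  <- MA_dat_raw$outcome_type - MA_dat_raw$out_between

mod <- rma.mv(ES_corrected, SV,
              mods = ~ out_between + out_within,
              random = ~ 1 | IDstudy/IDsubsample/IDeffect,
              data = MA_dat_raw)
coef_test(mod, vcov = "CR2", cluster = MA_dat_raw$IDstudy)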


