[R-meta] multilevel meta-analysis using metafor
brauldeq
brauldeq at hu-berlin.de
Tue Sep 5 13:25:25 CEST 2017
I was wondering whether the weights are assigned correctly when specifying
rma.mv(yi, vi, random = ~ 1 | sample_nr/effect_nr, data = data) and
afterwards using
robust(model, cluster = data$sample_nr, adjust = TRUE) for a
(cluster-)robust estimation of the variances and hence more accurate
standard errors.
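In full, the two-step approach I have in mind looks like this (just a
sketch; yi, vi, sample_nr, and effect_nr are the columns of my data
frame):

library(metafor)

# multilevel model: random effects for samples and for effects nested
# within samples
model <- rma.mv(yi, vi, random = ~ 1 | sample_nr/effect_nr, data = data)

# cluster-robust standard errors (with small-sample adjustment),
# clustering by sample
robust(model, cluster = data$sample_nr, adjust = TRUE)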
Just a little reminder concerning my data structure: I have multiple
effect sizes (all measured in the same sample) per study. The number of
effect sizes within one study varies greatly, from just 1 up to 30.
I am asking because, until recently, I was quite certain that by
specifying a two-level random-effects model, the dependency of effect
sizes within studies would automatically be accounted for. Now I know
that this is not the case and that I additionally need to account for
this dependency when estimating the standard errors, using the robust()
function. However, I am now wondering whether the weights are assigned
correctly with the approach described above, or whether I need to adjust
the weights of studies with multiple (dependent) effect sizes manually.
Specifically, are studies with more effect sizes weighted more heavily
than studies with fewer effect sizes? This would be very problematic in
my case.
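For what it is worth, I have been inspecting the weights like this (just
a sketch; I am not sure this is the right way to summarize them per
study):

w <- weights(model)             # weight given to each effect size estimate
tapply(w, data$sample_nr, sum)  # rough total weight per sample/study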
Thanks for providing me with more details concerning the weighting of
effect sizes within two-level random-effects models using the rma.mv()
function.
Best regards,
Denise
On 30.08.2017 16:31, Wolfgang Viechtbauer wrote:
> Yes, this is indeed how you can approach this.
>
> And yes, if the var-cov structure is misspecified (which it is in
> your case), then the fixed effects are still estimated unbiasedly
> (although not as efficiently). The problem is that the SEs of the
> fixed effects will not be correct. Using robust() allows you to get
> more appropriate estimates of the SEs (and hence more appropriate
> tests/CIs).
>
> Best,
> Wolfgang
>
> On 08/30/2017 03:10 PM, brauldeq wrote:
> Following
> https://stackoverflow.com/questions/44811867/multilevel-meta-analysis-using-metafor
>
> I understand that I could first specify my model using model <-
> rma.mv(yi, vi, random = ~ 1 | sample_nr/effect_nr, data = data). To
> address the problem concerning the covariances of the sampling errors,
> I would then use robust(model, cluster = data$sample_nr, adjust = TRUE).
> Would this approach solve my problem?
>
> Am I right to assume that rma.mv(yi, vi, random = ~ 1 |
> sample_nr/effect_nr) would yield a proper estimate of the pooled effect
> size but is problematic with respect to the sampling errors?
>
> Thanks,
> Denise
>
> On 30.08.2017 14:05, Wolfgang Viechtbauer wrote:
> Please keep the mailing list in cc.
>
> Yes, this means the subjects overlap, that is, the correlations are
> computed based on the same sample. In that case, the correlations are
> correlated. Equations for computing the covariances can be found in:
>
> Steiger, J. H. (1980). Tests for comparing elements of a correlation
> matrix. Psychological Bulletin, 87(2), 245-251.
>
> There are various cases. Let's say there are four variables: x1, x2,
> x3, and x4, all measured in the same sample. Then we have the case of
> non-overlapping variables:
>
> cov(cor(x1,x2), cor(x3,x4))
>
> To compute that covariance, you will need the full 4x4 correlation
> matrix.
>
> And there is the case of partially overlapping variables, for example:
>
> cov(cor(x1,x2), cor(x1,x3))
>
> To compute the covariance here, you will additionally need cor(x2,x3)
> (you already have cor(x1,x2) and cor(x1,x3); otherwise you would not
> be interested in their covariance).
>
> Again, the necessary equations can be found in Steiger (1980).
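> To make this concrete, here is a small R sketch of the general
> asymptotic covariance expression used there (please double-check it
> against Steiger, 1980, before relying on it; R denotes the full
> correlation matrix of the variables and n the sample size):
>
> cov_rr <- function(R, n, i, j, k, l) {
>   # asymptotic covariance between cor(x_i, x_j) and cor(x_k, x_l)
>   (0.5 * R[i,j] * R[k,l] * (R[i,k]^2 + R[i,l]^2 + R[j,k]^2 + R[j,l]^2) +
>      R[i,k] * R[j,l] + R[i,l] * R[j,k] -
>      R[i,j] * (R[i,k] * R[i,l] + R[j,k] * R[j,l]) -
>      R[k,l] * (R[i,k] * R[j,k] + R[i,l] * R[j,l])) / n
> }
>
> # non-overlapping case:        cov_rr(R, n, 1, 2, 3, 4)
> # partially overlapping case:  cov_rr(R, n, 1, 2, 1, 3)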
>
> If you do not have the information to compute the covariances, then we
> are back to the situation where the covariances between the outcomes
> cannot be computed. See previous posts on how to deal with that. For
> example:
>
> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000097.html
>
> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000094.html
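>
> For illustration, one common fallback is to assume a plausible value
> for the correlation among the sampling errors within a sample, build an
> approximate V matrix from it, and then combine this with cluster-robust
> inference (just a sketch; the value of rho is a placeholder that should
> be varied in a sensitivity analysis, and the rows of 'data' are assumed
> to be sorted by sample_nr so that V lines up with the data):
>
> rho <- 0.6  # assumed correlation of sampling errors within a sample
> blocks <- lapply(split(data$vi, data$sample_nr), function(v) {
>   S <- sqrt(v) %o% sqrt(v)                # products of sampling SDs
>   R <- matrix(rho, length(v), length(v))
>   diag(R) <- 1
>   R * S                                   # approximate var-cov block
> })
> V <- as.matrix(Matrix::bdiag(blocks))     # needs the Matrix package
>
> res <- rma.mv(yi, V, random = ~ 1 | sample_nr/effect_nr, data = data)
> robust(res, cluster = data$sample_nr, adjust = TRUE)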
>
>
> Best,
> Wolfgang