[R-meta] Meta-analytical test of mediation model including dependent tests - looking to resolve metafor issue or find alternative approach

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Fri Dec 11 18:32:44 CET 2020


Dear Lukas,

It is to be expected that the results from separate analyses will differ from the multilevel model. This issue, albeit in a somewhat different modeling context, is discussed here:

http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates

Also, 1/N is not quite how the sampling variances of correlation coefficients should be calculated (for raw correlations, the usual large-sample variance is (1-r^2)^2/(n-1)), but given that the correlations are not that large, this probably does not matter much here. One can also debate whether one should meta-analyze raw correlation coefficients in the first place (as opposed to r-to-z transformed values), but let's leave this issue aside for now.
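
A minimal sketch of what I mean, assuming the sample sizes are available (e.g., recovered as N = 1/inv_N from your data frame):

dat$N <- 1 / dat$inv_N
dat <- escalc(measure="COR", ri=r, ni=N, data=dat)     # vi = (1 - r^2)^2 / (N - 1)
# or, for r-to-z transformed values:
# dat <- escalc(measure="ZCOR", ri=r, ni=N, data=dat)  # vi = 1 / (N - 3)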

But the results don't look strange to me. It's also a rather small dataset, so changes in the modeling approach can lead to noticeably different results.

I am not sure if I would agree with the general approach here to deal with the multilevel structure though. Let's take the first study:

   study        measure   pair        r   inv_N
 1 UK_mediation affective pos_att -0.38  0.00446
 2 UK_mediation cognitive pos_att -0.2   0.00446
 3 UK_mediation affective neg_att  0.18  0.00446
 4 UK_mediation cognitive neg_att  0.21  0.00446

So, as far as I can tell, there are 4 variables that were measured in this study: affective, cognitive, pos_att, and neg_att. If so, there should be 6 correlations in total, but you are showing only 4 of them. If one also knew the affective-cognitive and the pos_att-neg_att correlations, then one could construct the full 6x6 var-cov matrix of the 6 correlations (or of their r-to-z transformed values). The 'devel' version of metafor has a function for this, rcalc():

https://wviechtb.github.io/metafor/reference/rcalc.html
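
In rough form, assuming a long-format data frame with columns ri, ni, var1, var2, and study (the naming follows the help page, not your dataset):

tmp <- rcalc(ri ~ var1 + var2 | study, ni=ni, data=dat)
V   <- tmp$V    # var-cov matrix of the dependent correlations
dat <- tmp$dat  # data in the format expected by rma.mv()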

One can then use a 'proper' multivariate model. See here for an example:

https://wviechtb.github.io/metafor/reference/dat.craft2003.html
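
In sketch form (the yi and V objects come from rcalc() above; the var1.var2 labels and struct="UN" follow that example, so adapt as needed):

res <- rma.mv(yi, V, mods = ~ var1.var2 - 1,
              random = ~ var1.var2 | study, struct="UN", data=dat)
res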

However, with only 3 studies, I might even just consider using a model with a properly constructed V matrix and no further random effects. There doesn't seem to be a huge amount of heterogeneity in these data in the first place.
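
That is, something along these lines (again just a sketch based on the objects returned by rcalc()):

res <- rma.mv(yi, V, mods = ~ var1.var2 - 1, data=dat)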

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
>On Behalf Of Lukas Wallrich
>Sent: Friday, 11 December, 2020 17:54
>To: r-sig-meta-analysis at r-project.org
>Subject: [R-meta] Meta-analytical test of mediation model including
>dependent tests - looking to resolve metafor issue or find alternative
>approach
>
>Dear list members,
>
>I am trying to run a meta-analysis that includes dependent effect sizes
>(correlations), nested into studies and then into measures, for which I
>ultimately want to estimate a mediation model. For that, I am trying to
>follow Wilson et al. (2016,
>https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1199) who propose to
>collate a long dataset of each correlation value with a factor that
>indicates which correlation it is testing, and then to use that factor as a
>moderator.
>
>However, the results differ from separate estimates of the correlation
>coefficients and, more importantly, the confidence intervals shrink quite a
>lot. I am not sure why this would happen and whether I can trust the
>results.
>
>Below the code and results - I would very much appreciate any pointers
>regarding what is going wrong, or suggestions for an alternative approach.
>Please note that I am cross-posting this from Stackexchange where I have
>not received a response within a week:
>https://stats.stackexchange.com/q/499690/240420
>
>Many thanks,
>
>Lukas
>
>library(metafor)
>#> Loading required package: Matrix
>#> Loading 'metafor' package (version 2.4-0). For an overview
>#> and introduction to the package please type: help(metafor).
>
>dat <- tibble::tribble(
>  ~study, ~measure, ~pair, ~r, ~inv_N,
>  "UK_mediation", "affective", "pos_att", -0.38, 0.00446428571428571,
>  "UK_mediation", "cognitive", "pos_att", -0.2, 0.00446428571428571,
>  "UK_mediation", "affective", "neg_att", 0.18, 0.00446428571428571,
>  "UK_mediation", "cognitive", "neg_att", 0.21, 0.00446428571428571,
>  "DE_mediation", "only", "pos_att", -0.43, 0.000381970970206264,
>  "DE_mediation", "only", "neg_att", 0.26, 0.000381970970206264,
>  "longit", "T2_prej", "pos_att", -0.221742469419445, 0.004739336492891,
>  "longit", "T2_prej", "neg_att", 0.136975390214378, 0.004739336492891,
>  "longit", "T1_therm", "neg_att", 0.148356343157473, 0.004739336492891,
>  "longit", "T1_therm", "pos_att", -0.325301851215349, 0.004739336492891
>)
>
>#Approach per Wilson et al. - correlation pair as moderator
>summary(rma.mv(r,
>               inv_N,
>               random = ~ 1 | measure/study,
>               data = dat,
>               method = "REML",
>               mods = ~factor(pair)-1))
>#>
>#> Multivariate Meta-Analysis Model (k = 10; method: REML)
>#>
>#>   logLik  Deviance       AIC       BIC      AICc
>#>   2.9182   -5.8363    2.1637    2.4814   15.4970
>#>
>#> Variance Components:
>#>
>#>             estim    sqrt  nlvls  fixed         factor
>#> sigma^2.1  0.0000  0.0055      5     no        measure
>#> sigma^2.2  0.0000  0.0055      5     no  measure/study
>#>
>#> Test for Residual Heterogeneity:
>#> QE(df = 8) = 25.1654, p-val = 0.0015
>#>
>#> Test of Moderators (coefficients 1:2):
>#> QM(df = 2) = 726.4284, p-val < .0001
>#>
>#> Model Results:
>#>
>#>                      estimate      se      zval    pval    ci.lb    ci.ub
>#> factor(pair)neg_att    0.2389  0.0179   13.3561  <.0001   0.2038   0.2739  ***
>#> factor(pair)pos_att   -0.3917  0.0179  -21.8970  <.0001  -0.4267  -0.3566  ***
>#>
>#> ---
>#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
>#Basic approach for single correlation
>summary(rma.mv(r,
>       inv_N,
>       random = ~ 1 | measure/study,
>       data = dat[dat$pair == "pos_att",],
>       method = "REML"))
>#>
>#> Multivariate Meta-Analysis Model (k = 5; method: REML)
>#>
>#>   logLik  Deviance       AIC       BIC      AICc
>#>   3.4054   -6.8107   -0.8107   -2.6519   23.1893
>#>
>#> Variance Components:
>#>
>#>             estim    sqrt  nlvls  fixed         factor
>#> sigma^2.1  0.0040  0.0631      5     no        measure
>#> sigma^2.2  0.0040  0.0631      5     no  measure/study
>#>
>#> Test for Heterogeneity:
>#> Q(df = 4) = 19.1211, p-val = 0.0007
>#>
>#> Model Results:
>#>
>#> estimate      se     zval    pval    ci.lb    ci.ub
>#>  -0.3224  0.0478  -6.7481  <.0001  -0.4160  -0.2287  ***
>#>
>#> ---
>#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
>*Lukas Wallrich* // *Goldsmiths*, University of London
>PhD candidate in Social Psychology
>Department of Psychology // 216 Whitehead Building
>New Cross // London SE14 6NW
>l.wallrich at gold.ac.uk // +44 7591 975294


