[R-meta] Accounting for dependencies of effect sizes in meta-analysis including various study designs

Maximilian Steininger maximilian.steininger at univie.ac.at
Tue Apr 16 17:47:18 CEST 2024


Dear all,

First of all, thank you for this mailing list and the work that has gone into the responses and the materials linked so far.

I have tried to use the previous answers to solve my specific problem, but I am unsure if my conclusion is correct and appropriate and would appreciate further feedback.

I am a PhD student – so relatively inexperienced – currently running a systematic review and meta-analysis for the first time. My meta-analysis includes 60 studies (99 effects in total) that all use the same dependent variable but differ in design and thus in the form of their dependencies. I have three types of studies:

a) Between-participant designs comparing one (or more) intervention group to a control group.

b) Within-participant designs comparing one (or more) condition to a control condition.

c) Pre-post control group designs comparing one (or more) intervention group (tested pre- and post-intervention) to a control group (also tested at both time points).

As indicated above, some studies report more than one effect, so there is effect-size dependency and/or sampling-error dependency. Some studies have multiple intervention groups, some have multiple comparison groups, and the within-participant studies (b) have "multiple follow-up times", meaning that each participant is tested repeatedly on the same outcome. I am a bit confused about how best to model these dependencies, since I have come across several approaches.

Initially I wanted to run a multilevel (three-level) meta-analysis with participants (level 1) nested within outcomes (level 2) nested within studies (level 3). However, reading through the archives of this group, I gathered that this model does not appropriately account for sampling-error dependency.

To deal with this, I came across the suggestion to construct a "working" variance-covariance matrix and supply it to my three-level model (using, e.g., the approach described at https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/). I would then fit this "working model" in metafor and pass it to the clubSandwich package to perform robust variance estimation (RVE). Of course, I would run sensitivity analyses to check whether assuming different dependencies (i.e., different correlation coefficients) in the variance-covariance matrix makes a difference. Q1) Is this the "best" approach to deal with my dependencies?
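Concretely, a minimal sketch of what I have in mind (assuming a data frame dat with columns study, esid, yi, and vi – all names hypothetical – and a placeholder correlation of r = 0.6 to be varied in the sensitivity analyses):

```r
library(metafor)
library(clubSandwich)

# "working" variance-covariance matrix: effects from the same study
# are assumed to be correlated at r = 0.6 (a guess, varied later)
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)

# multilevel working model: effects (esid) nested within studies
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)

# robust variance estimation with the small-sample CR2 correction
coef_test(res, vcov = "CR2", cluster = dat$study)
```

Is this roughly the intended workflow?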

Alternatively, I came across the multivariate meta-analysis approach, again coupled with a "working" variance-covariance matrix. However, I am unsure whether this makes sense, because I do not have multiple dependent variables.

Furthermore, I have a couple of questions regarding my dependencies:

Q2) To construct a "guesstimate" of the variance-covariance matrix I need a correlation coefficient, and, as (almost) always, none is reported in the original studies. Would it be plausible to use the test-retest reliability of my dependent variable (which is reported in a number of other studies not included in the analysis) as a guess for this correlation?

Q3) For my meta-analysis I use yi and vi values (i.e., the effect sizes and their variances). I calculate these beforehand from the descriptive statistics of my studies, using the formulas suggested by Morris & DeShon (2002). For the effect sizes of the within-participant (b) and pre-post control group designs (c), I already use the test-retest reliability of the dependent variable to estimate the variances of these effect sizes. If I now use these "corrected" effect-size variances and run the model, should I use this same correlation to compute the variance-covariance matrix? Am I not then, overly conservatively, "controlling" for this dependency twice (once when estimating the variance of each effect size and once in the model)?
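To make Q3 concrete: the test-retest correlation already enters each vi when computing a standardized mean change. A sketch with made-up numbers, using metafor's escalc() with measure = "SMCR" (which parallels the raw-score metric of Morris & DeShon, 2002):

```r
library(metafor)

# standardized mean change, raw-score standardization;
# all numbers below are hypothetical
escalc(measure = "SMCR",
       m1i = 22,   # post-intervention mean
       m2i = 18,   # pre-intervention mean
       sd1i = 5,   # pre-intervention SD (the standardizer)
       ni = 30,    # sample size
       ri = 0.7)   # assumed test-retest correlation; shrinks vi
```

So the same ri = 0.7 would then also appear as the assumed correlation in the working variance-covariance matrix, which is what worries me.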

Q4) For between-participant studies it is suggested to correct the sample size of the control group (dividing by the number of comparisons) if it is compared more than once to an intervention group. Do I still have to do this if I construct a variance-covariance matrix (which should take care of these dependencies already)? Is it enough to compute the variance-covariance matrix and then use a multilevel or multivariate approach? If it is not enough, do I also have to correct the sample size for within-participant designs (b) (e.g., all participants undergo all conditions, so I would divide the overall sample size by the number of conditions)?

Q5) Can I combine multivariate and multilevel models with each other and would that be appropriate in my case?

Or is all of this utter nonsense and a completely different approach would be the best way to go?

Thank you very much for your time and kindness in helping a newcomer to the method.

Best and many thanks,
Max
——

Mag. Maximilian Steininger
  PhD candidate

  Social, Cognitive and Affective Neuroscience Unit
  Faculty of Psychology
  University of Vienna

  Liebiggasse 5
  1010 Vienna, Austria

  e: maximilian.steininger at univie.ac.at
  w: http://scan.psy.univie.ac.at
