[R-meta] Question about Meta analysis

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer using maastrichtuniversity.nl
Tue Apr 23 11:11:08 CEST 2024


Ah, now I get it. Then let me answer your other post here and maybe this will be of use to all.

As noted in my answer to Sevilay, this part of the metafor documentation is relevant:

https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures

This is in essence your Q1, and yes, this is good practice. Not sure if this is 'best' practice. In general, how such complex cases should be handled depends on many factors.

Not sure what distinction you are making between this approach and the use of multivariate meta-analysis (combined with RVE), since the three-level model can also be seen as a multivariate meta-analysis, as discussed in these examples:

https://www.metafor-project.org/doku.php/analyses:konstantopoulos2011
https://www.metafor-project.org/doku.php/analyses:crede2010
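
To illustrate that point, here is a minimal sketch showing the three-level model written in both forms (assuming a hypothetical data frame 'dat' with columns yi, vi, study, and an estimate identifier esid; these names are placeholders, not from your data):

library(metafor)

# three-level model: random effects for studies and for estimates within studies
res.ml <- rma.mv(yi, vi, random = ~ 1 | study/esid, data = dat)

# the same model written in multivariate form, with a compound symmetric
# structure for the estimates within studies (rho then corresponds to the
# proportion of the total heterogeneity that is at the study level)
res.mv <- rma.mv(yi, vi, random = ~ esid | study, struct = "CS", data = dat)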

A major challenge in cases where there is sampling error dependency is the construction of the V matrix. Many will not even attempt this and will rely on / hope that RVE fixes up the standard errors of the fixed effects. Roughly, this is at least asymptotically true as long as the cluster variable used in RVE encompasses estimates that are potentially dependent (due to whatever reason). In principle, the vcalc() function can handle quite a number of different types of dependencies for constructing the V matrix, but I even struggle at times trying to make it fit a particular case. For example, this example shows this to some extent:

https://wviechtb.github.io/metadat/reference/dat.knapp2017.html
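
To make this workflow concrete, here is a minimal sketch, assuming a constant correlation of 0.6 among the sampling errors of estimates from the same study (again with the hypothetical data frame 'dat' and variables study and esid):

library(metafor)

# construct a 'working' V matrix, assuming rho = 0.6 within studies
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)

# fit the multilevel working model
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)

# cluster-robust inference (CR2 adjustment via clubSandwich), clustering on studies
robust(res, cluster = study, clubSandwich = TRUE)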

The other challenge is the choice of the random effects. Often, people just use a 'simple' three-level model, but more complex structures are certainly possible and may provide a better reflection of the dependency structure. An example where we did not use a V matrix (which would have been hopelessly complex) but used a more complex random effects structure is this:

https://wviechtb.github.io/metadat/reference/dat.mccurdy2020.html
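
As a hedged illustration of the general idea (the variable names below are hypothetical and not those from the dataset above), one can combine nested and crossed random effects in rma.mv(); whether all of these components are estimable depends on the data at hand:

library(metafor)

# estimates nested within samples nested within studies, plus a crossed random
# effect for a factor (here called 'task') that induces dependency across studies
res <- rma.mv(yi, vi,
              random = list(~ 1 | study/sample/esid, ~ 1 | task),
              data = dat)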

With respect to your other questions:

Q2) Yes, I would say the test-retest reliability can be a decent proxy for estimating the correlation between estimates that are obtained at multiple time points (assuming that the time lags are similar).
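
A sensitivity analysis around such a guesstimate is then straightforward. A minimal sketch (hypothetical variable names; the values for rho are placeholders around an assumed test-retest reliability of about 0.7):

library(metafor)

rhos <- c(0.5, 0.6, 0.7, 0.8, 0.9)
fits <- lapply(rhos, function(r) {
   # rebuild the working V matrix under each assumed correlation
   V <- vcalc(vi, cluster = study, obs = esid, rho = r, data = dat)
   res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
   robust(res, cluster = study, clubSandwich = TRUE)
})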

Q3) As you note, the pre-post correlation is needed to correctly compute the sampling variance of a standardized mean change (with raw score standardization). That is a different issue than using a correlation coefficient to account for the dependency between two such effect sizes. So no, you are not being overly conservative in doing so.
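
To make the distinction explicit, here is a sketch (hypothetical column names; ri is the assumed pre-post correlation, e.g. the test-retest reliability): the correlation in escalc() enters the sampling variance of each individual estimate, while the correlation in vcalc() fills in the covariances between estimates from the same study.

library(metafor)

# standardized mean change with raw score standardization; ri affects vi
dat <- escalc(measure = "SMCR", m1i = m_post, m2i = m_pre, sd1i = sd_pre,
              ni = ni, ri = ri, data = dat)

# a (possibly identical) correlation used for the dependency *between* estimates
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.7, data = dat)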

Q4) You do not need to 'correct' the control / common comparator group sample size when you account for the dependency via their covariance in the V matrix.

Q5) Hard to say without digging into the details of your data. But again, the three-level model *is* already a particular type of multivariate model. This aside, yes, these two ideas -- that there are multiple levels plus multiple types of outcomes -- can certainly be combined.
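
For completeness, a hedged sketch of one way these ideas can be combined (hypothetical variables 'outcome' and 'esid'; whether all components are identifiable depends on your data structure):

library(metafor)

# correlated random effects for the outcome types at the study level, plus a
# random effect per estimate to capture remaining within-study heterogeneity
res <- rma.mv(yi, V,
              mods = ~ outcome - 1,
              random = list(~ outcome | study, ~ 1 | esid),
              struct = "UN", data = dat)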

In general, I would say you are asking the right questions and are on the right track, but it is hard to say more without further details.

Best,
Wolfgang

> -----Original Message-----
> From: Maximilian Steininger <maximilian.steininger using univie.ac.at>
> Sent: Tuesday, April 23, 2024 10:16
> To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-
> project.org>
> Cc: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Subject: Re: [R-meta] Question about Meta analysis
>
> Dear Wolfgang, dear Sevilay,
>
> I think Sevilay was referring to my longer message from a few days ago (see
> below). However, as I am only just starting to familiarise myself with the
> method, I am unfortunately unable to provide Sevilay with any conclusive/helpful
> answers.
>
> I had hoped that my open questions from back then might still be answered, but
> perhaps they are too obvious or uninformed (or simply too long) and can be
> answered through more literature research on my part.
>
> Many thanks in any case for the link, Wolfgang.
>
> @Sevilay: You can write me a direct message via
> maximilian.steininger using univie.ac.at, and then I can share with you a detailed
> list of all the resources I used.
>
> Best,
> Max
>
> > On 16.04.2024 at 17:47, Maximilian Steininger via R-sig-meta-analysis
> <r-sig-meta-analysis using r-project.org> wrote:
> >
> > Dear all,
> >
> > First of all, thank you for this mailing list and the work that has gone into
> the responses and the materials linked so far.
> >
> > I have tried to use the previous answers to solve my specific problem, but I
> am unsure if my conclusion is correct and appropriate and would appreciate
> further feedback.
> >
> > I am a PhD student – so relatively inexperienced – currently running a
> systematic review and meta-analysis for the first time. My meta-analysis
> includes 60 studies (with 99 effects overall) that all use the same dependent
> variable, but that have different designs and thus different forms of
> dependencies. I have three types of studies:
> >
> > a) Between-participant designs comparing one (or more) intervention group to a
> control group.
> >
> > b) Within-participant designs comparing one (or more) condition to a control
> condition.
> >
> > c) Pre-Post control group designs comparing one (or more) intervention group
> (tested pre- and post-intervention) to a control group (also tested pre- and
> post-control).
> >
> > As indicated above, there are studies that report more than one effect. Hence,
> there is effect-size dependency and/or sampling error dependency. Some studies
> have multiple intervention groups, some studies have multiple comparison groups,
> and the within-participant studies (b) have “multiple follow-up times”, meaning
> that each participant is tested multiple times on the same outcome. I am a bit
> confused about how best to model these dependencies, since I have come across
> several approaches.
> >
> > Initially I wanted to run a multilevel (three-level) meta-analysis with
> participants (level 1) nested within outcomes (level 2) nested within studies
> (level 3). However, reading through the archives of this group I figured that
> this model does not appropriately deal with sampling error dependency.
> >
> > To deal with this I came across the solution of constructing a "working"
> variance-covariance matrix and inputting it into my three-level meta-analysis
> model (using e.g. this approach:
> https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/).
> Then I would fit this “working model” using metafor and feed it into the
> clubSandwich package to perform robust variance estimation (RVE). Of course I
> would conduct sensitivity analyses to check whether feeding different
> dependencies (i.e. correlation coefficients) into my variance-covariance matrix
> makes a difference. Q1) Is this the “best” approach to deal with my
> dependencies?
> >
> > Alternatively, I came across the approach to use multivariate meta-analysis,
> again coupled with constructing a “working” variance-covariance matrix. However,
> I am unsure whether this makes sense because I don’t have multiple dependent
> variables.
> >
> > Furthermore, I have a couple of questions regarding my dependencies:
> >
> > Q2) To calculate a “guesstimate” for the variance-covariance matrix I need a
> correlation coefficient. As is (almost) always the case, none is provided in the
> original studies. Would it be a plausible approach to use the test-retest
> reliability of my dependent variable (which is reported in a number of other
> studies not included in the analysis) to guess the correlation?
> >
> > Q3) For my meta-analysis I use the yi and vi values (i.e. effect sizes and
> their variances). I calculate these beforehand using the descriptive stats of my
> studies and formulas suggested by Morris & DeShon (2002). For my effect sizes of
> the within- (b) as well as pre-post control group designs (c), I already use the
> test-retest reliability of the dependent variable to estimate the variances of
> these effect sizes. If I now use these “corrected” effect size variances and run
> the model, would I use this same correlation to compute my variance-covariance
> matrix? Am I not then, overly conservatively, “controlling” for this dependency
> twice (once in the estimation of the individual variance of the effect sizes and
> once in the model)?
> >
> > Q4) For between-participant studies (a) it is suggested to correct the sample
> size of the control group (by the number of comparisons) if it is compared more
> than once to an intervention group. Do I also have to do this if I calculate a
> variance-covariance matrix (which should take care of these dependencies
> already)? Is it enough to calculate the variance-covariance matrix and then use
> a multilevel or multivariate approach? If it is not enough, do I also have to
> correct the sample size for within-participant designs (b) as well (e.g., all
> participants undergo all conditions, so I must correct N by dividing the overall
> sample size by the number of conditions)?
> >
> > Q5) Can I combine multivariate and multilevel models with each other and would
> that be appropriate in my case?
> >
> > Or is all of this utter nonsense, and would a completely different approach be
> the best way to go?
> >
> > Thank you very much for your time and kindness in helping a newcomer to the
> method.
> >
> > Best and many thanks,
> > Max
>
> > On 23.04.2024 at 09:56, Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
> <r-sig-meta-analysis using r-project.org> wrote:
> >
> > Dear Sevilay,
> >
> > I am not sure to whom you meant to write (you posted to the mailing list and I
> don't know who 'Steinininger' is), but you might find the following of relevance
> to your question:
> >
> > https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-
> for-meta-analyses-involving-complex-dependency-structures
> >
> > Best,
> > Wolfgang
> >
> >> -----Original Message-----
> >> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On
> Behalf
> >> Of Sevilay Cankaya via R-sig-meta-analysis
> >> Sent: Monday, April 22, 2024 16:46
> >> To: r-sig-meta-analysis using r-project.org
> >> Cc: Sevilay Cankaya <sevilaycankaya97 using gmail.com>
> >> Subject: [R-meta] Question about Meta analysis
> >>
> >> Dear Steinininger,
> >>
> >> I am writing to ask some questions about dependency in meta-analysis. I
> >> read your questions in the meta-analysis group and realised that I have
> >> similar questions. I am currently working on a meta-analysis about the
> >> effectiveness of psychotherapies on juveniles' psychological outcomes. I have
> >> 42 effect sizes from 18 studies, and the difference from your meta-analysis
> >> is that I have multiple outcomes (depression, anger, mindfulness) and I want
> >> to combine them as a single psychosocial outcome. First I tried a three-level
> >> meta-analysis. Then, while researching, I saw that clubSandwich / RVE was
> >> more suitable for my data. But I'm not sure, because it is my first time.
> >> I want to ask how you deal with these issues (model selection).
> >> And do you have any resources or ideas that can help me with this?
> >>
> >> Sincerely,
> >> Sevilay Çankaya

