[R-meta] Is it appropriate to simulate the original data to derive the interactions based on the correlation matrix

Charles FENG |||ch@o @end|ng |rom gm@||@com
Sun Mar 26 04:11:58 CEST 2023


Dear Mike,

I am writing to seek your advice on a meta-analytic structural equation
modeling (MASEM) problem. I am interested in estimating moderated
mediation effects using the metaSEM package in R. However, I do not have
the correlation matrices involving the interaction terms. I wonder whether
it is appropriate to simulate raw data from each study's correlation
matrix, assuming a multivariate normal distribution, compute the
interaction terms from the simulated data, and then produce expanded
correlation matrices (including the interactions) as input for tssem1. I
know that simulation is only an approximation, but I am not sure whether
it is acceptable in model-based meta-analysis.
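
To make it concrete, here is a minimal sketch of what I have in mind for a
single study (the variable names X, W, M, Y, the 4 x 4 correlation matrix R,
and the sample size n are purely hypothetical placeholders):

  library(MASS)     # for mvrnorm()
  library(metaSEM)  # for tssem1()

  ## Hypothetical correlation matrix of X (predictor), W (moderator),
  ## M (mediator), and Y (outcome) reported by one primary study
  R <- matrix(c(1.0, 0.2, 0.3, 0.3,
                0.2, 1.0, 0.2, 0.2,
                0.3, 0.2, 1.0, 0.4,
                0.3, 0.2, 0.4, 1.0),
              nrow = 4,
              dimnames = list(c("X", "W", "M", "Y"), c("X", "W", "M", "Y")))
  n <- 200  # hypothetical sample size of that study

  set.seed(123)
  sim <- as.data.frame(mvrnorm(n, mu = rep(0, 4), Sigma = R))
  sim$XW <- sim$X * sim$W   # interaction term computed from the simulated data

  R_expanded <- cor(sim)    # 5 x 5 correlation matrix including XW
  ## Repeating this for every study would give the input for stage 1, e.g.:
  ## stage1 <- tssem1(Cov = list(R_expanded, ...), n = c(n, ...))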

Thank you very much for your time and attention.

Best regards,
Charles Feng, Ph.D.


On Wed, Sep 21, 2022 at 8:20 AM Mike Cheung <mikewlcheung using gmail.com> wrote:

> Dear Anne,
>
>
> Apart from Wolfgang's excellent explanation of the general issues, there
> are additional issues in analyzing indirect effects. Here are some of them.
>
>
> 1) Interpreting the indirect effect alone may be misleading if we ignore
> the direct effect. It is preferable to include both of them in the
> meta-analysis.
>
>
> 2) It is well known that the sampling distribution of the indirect effect
> is nonnormal. This is why researchers prefer using bootstrap confidence
> intervals when testing indirect effects in primary studies. Because the
> effect size is nonnormally distributed, the accuracy of the meta-analysis
> is questionable. We have yet to see empirical support for it.
>
>
> 3) When we conduct a meta-regression on the indirect effect, there is more
> than one way to interpret the intercept and slope. For example,
>
> (a*b) = β₀ + β₁*x, where a*b is the indirect effect and x is a covariate.
>
> β₁ is usually interpreted as the expected change in the indirect effect
> (a*b) when x increases by 1 unit. However, there are also two equivalent
> interpretations:
>
> (i) a = β₀/b + β₁*(x/b): β₁ is the expected change in a when x increases
> by 1 unit, "given b is 1."
>
> (ii) b = β₀/a + β₁*(x/a): β₁ is the expected change in b when x increases
> by 1 unit, "given a is 1."
>
>
> Meta-analytic structural equation modeling (MASEM) may avoid these issues
> by synthesizing correlation matrices instead of indirect effects. The
> following paper has a more detailed discussion of these issues.
>
>
> Cheung, M. W.-L. (2022). Synthesizing indirect effects in mediation models
> with meta-analytic methods. Alcohol and Alcoholism, 57(1), 5–15.
> https://doi.org/10.1093/alcalc/agab044
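>
> For example, a minimal two-stage sketch with the metaSEM package might look
> like this for a simple mediation model X -> M -> Y (the list of correlation
> matrices, the sample sizes, and the starting values are purely illustrative):
>
>   library(metaSEM)
>   ## my.cor: a list of 3 x 3 correlation matrices (X, M, Y), one per study
>   ## my.n:   the corresponding vector of sample sizes
>   stage1 <- tssem1(Cov = my.cor, n = my.n, method = "REM", RE.type = "Diag")
>
>   ## Paths: X -> M (a), M -> Y (b), X -> Y (cp)
>   A <- create.mxMatrix(c(0,        0,       0,
>                          "0.1*a",  0,       0,
>                          "0.1*cp", "0.1*b", 0),
>                        type = "Full", nrow = 3, ncol = 3, byrow = TRUE,
>                        name = "A")
>   S <- create.mxMatrix(c(1, "0.1*errM", "0.1*errY"), type = "Diag",
>                        name = "S")
>   stage2 <- tssem2(stage1, Amatrix = A, Smatrix = S, diag.constraints = TRUE)
>   summary(stage2)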
>
>
> Best,
>
> Mike
>
> On Tue, Sep 20, 2022 at 5:47 PM Anne Olsen <anne.olsen.1994 using gmail.com>
> wrote:
>
> > Dear Wolfgang,
> > This is an amazing explanation! Thank you so so much!
> > Best,
> > Anne O.
> >
> > On Tue, Sep 20, 2022 at 11:04 AM Viechtbauer, Wolfgang (NP) <
> > wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >
> > > Dear Anne,
> > >
> > > Yes, that is correct.
> > >
> > > And to answer your last question more broadly: As long as one has
> > > estimates (of whatever kind) that are 1) on the same scale (which
> > > either can be achieved by using a 'unitless' / standardized effect
> > > size measure, but would also apply if variables across studies are
> > > measured using the same measurement instrument / scale and one simply
> > > uses the 'raw estimates' directly), 2) are 'about the same
> > > thing/phenomenon' (or to use a slightly fancier term: 'commensurable'),
> > > and 3) one has (estimates of) the corresponding standard errors (or
> > > SE^2 = sampling variances), then one can combine them using standard
> > > meta-analytic methods.
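> > >
> > > As a minimal illustration (with hypothetical column names: yi for the
> > > raw estimates and sei for their standard errors in a data frame dat),
> > > combining them in metafor can be as simple as:
> > >
> > >   library(metafor)
> > >   res <- rma(yi = yi, sei = sei, data = dat)  # or vi = sei^2
> > >   summary(res)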
> > >
> > > To give a counterexample to 2): It would make little sense to combine
> > > a bunch of correlation coefficients between anxiety and depression and
> > > a bunch of correlation coefficients between height and weight in the
> > > same analysis. While they are measured on the same scale (criterion 1)
> > > and one can also compute the corresponding SEs (criterion 3), they are
> > > not reflections of the same underlying phenomenon and hence not
> > > commensurable. But it is actually in the eye of the beholder what is
> > > considered commensurable. In other words, while it is objectively
> > > nonsense to combine a correlation coefficient with a standardized mean
> > > difference or the mean height with a mean weight (they are not on the
> > > same scale; a suitable cartoon I like to use when discussing this
> > > point:
> > > https://condenaststore.com/featured/new-yorker-may-17th-1976-dana-fradon.html),
> > > there isn't an 'objective' way of defining what is commensurable. For
> > > example, Byrnes et al. (1999) did a meta-analysis on gender differences
> > > in risk taking. There are very diverse ways of assessing such gender
> > > differences, for example, through surveys asking about 'risky
> > > behaviors' (driving over the speed limit, smoking, etc.), through
> > > gambling tasks, choice dilemma tasks, etc. etc. One can compute
> > > standardized mean differences based on such diverse assessments of risk
> > > taking, but some might argue that combining them is comparing apples
> > > and oranges. A possible response to this is to empirically assess
> > > whether there are systematic differences between different types of
> > > assessments (via a moderator / meta-regression analysis) - which is
> > > also what Byrnes et al. (1999) did. In fact, one could in principle do
> > > the same with a bunch of correlation coefficients between anxiety and
> > > depression and a bunch of correlation coefficients between height and
> > > weight, although I don't know what such a comparison would really tell
> > > us (and even if the two groups of correlation coefficients happen to
> > > not differ, I still wouldn't be comfortable combining them into an
> > > overall aggregate).
> > >
> > > So, instead of addressing your question directly - which I can't, since
> > > I do not know the specifics of what you mean by "moderation effects" -
> > > you should think about the above and come to your own decision whether
> > > combining these effects makes sense under these criteria.
> > >
> > > Best,
> > > Wolfgang
> > >
> > > >-----Original Message-----
> > > >From: R-sig-meta-analysis [mailto:
> > > r-sig-meta-analysis-bounces using r-project.org] On
> > > >Behalf Of Anne Olsen
> > > >Sent: Tuesday, 20 September, 2022 10:11
> > > >To: r-sig-meta-analysis using r-project.org
> > > >Subject: [R-meta] meta analysis of indirect effects metafor
> > > >
> > > >Hello,
> > > >
> > > >We ran several studies where we had indirect effects, and we would
> > > >like to report them in the form of meta-analyses. In one of the
> > > >threads on Stack Exchange (here:
> > > ><https://stats.stackexchange.com/questions/187463/how-does-one-run-a-meta-analysis-on-indirect-mediated-effects>),
> > > >I found a comment suggesting that, in case all variables and the model
> > > >are the same across these studies, one could just calculate estimates
> > > >and standard errors and put them into some package such as metafor. So
> > > >this would be my case, but I am wondering: what would be the exact
> > > >code in metafor to calculate this?
> > > >
> > > >What I did was calculate the sampling variance (vi = SE^2) and run the
> > > >following code:
> > > >
> > > > res <- rma.uni(yi = Mod_OSC, vi = vi, ni = N, slab = Studies, data = mydata)
> > > > res
> > > >
> > > >Is this correct?
> > > >
> > > >Also, would the same procedure work for moderation effects?
> > > >
> > > >I know this question is basic, but I have no previous experience with
> > > >meta-analysis, and on the internet I could not find a simple solution
> > > >that I am sure is correct.
> > > >
> > > >Thanks!
> > > > Anne O.
> > >
> >
>



