[R-meta] correlation between pre and post test?

YA xinxi813 using 126.com
Fri Aug 27 04:53:45 CEST 2021


Hey guys,


Thank you very much for the valuable input.


Reza:



I do care about the heterogeneity between studies, and having to choose values for the correlation coefficient makes me hesitant to go in that direction.


Philippe and Mike:


I have tried the metamisc::riley function; it is neat, and it treats the problem under the multiple-endpoint random-effects meta-analysis framework. Just to check my understanding: Riley (2008, p. 175) said their alternative model can model the overall correlation directly. This 'overall correlation' is the one that we usually need to input into meta-analysis software and that is usually not reported in the articles (the one we have been talking about in this email thread), right? If so, I could:


first, use the means and SDs of the pre-test and the post-test, respectively, to compute a univariate effect size for each;


second, use metamisc::riley to estimate the correlation between the pre-test and post-test effect sizes;


third, feed the correlation from the second step, together with the means and SDs from the pre- and post-tests, into any meta-analysis software to compute the effect size for the whole experiment-control, pre-post design (a rough sketch of these steps follows below).
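In code, here is a minimal sketch of what I mean, assuming a one-row-per-study data frame dat with hypothetical column names for each group's means, SDs, and sample sizes, and assuming I am reading the riley() input format (columns Y1, vars1, Y2, vars2) correctly:

library(metafor)
library(metamisc)

# Step 1: univariate SMDs (treatment vs. control) at each time point
pre  <- escalc(measure = "SMD", m1i = m_pre_t, sd1i = sd_pre_t, n1i = n_t,
               m2i = m_pre_c, sd2i = sd_pre_c, n2i = n_c, data = dat)
post <- escalc(measure = "SMD", m1i = m_post_t, sd1i = sd_post_t, n1i = n_t,
               m2i = m_post_c, sd2i = sd_post_c, n2i = n_c, data = dat)

# Step 2: Riley's alternative bivariate model on the paired effect sizes
X <- data.frame(Y1 = pre$yi, vars1 = pre$vi, Y2 = post$yi, vars2 = post$vi)
fit <- riley(X)
fit   # the printout includes the estimated overall correlation (rho)

# Step 3: plug the estimated rho back into the pre-post effect size
# computation for the full design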


What do you think?


Best regards,


YA


------------------ Original ------------------
From: "Reza Norouzian" <rnorouzian using gmail.com>
Date: Thu, Aug 26, 2021 07:03 PM
To: "Philippe Tadger" <philippetadger using gmail.com>
Cc: "YA" <xinxi813 using 126.com>; "Michael Dewey" <lists using dewey.myzen.co.uk>; "r-sig-meta-analysis" <r-sig-meta-analysis using r-project.org>; "mikewlcheung" <mikewlcheung using gmail.com>
Subject: Re: [R-meta] correlation between pre and post test?



Dear YA,

If you choose to use SMD, you will perhaps still benefit from knowing
or guesstimating the "correlation among effect sizes" in each study (I
use "correlation among effect sizes" as a broader term, not just to
mean the pre-post correlation that you inquired about). I'll explain
in a bit.

For now, you should consider extracting the means and SDs for each
group (e.g., control and treatment(s)) at each time point (pre- and
post-test).

For example, if a particular study in your pool has a control group
and a treatment group, each of which was tested on one pre-test and
two post-tests, then you'll need to extract 6 sets of means and SDs
from that study:

  time     group
1    0   control
2    1   control
3    2   control
4    0 treatment
5    1 treatment
6    2 treatment

Once you have done this for every study, you can enter the sets of
means and SDs for all studies into metafor's escalc() function to
compute SMD effect size estimates (and their associated sampling
variances). The number of resultant SMD effect sizes will be half the
number of extracted sets of means and SDs (i.e., you'll be computing
3 SMD effect sizes for the study above).
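As a minimal sketch (the wide-format data frame and its column names
are hypothetical, with one row per study and time point):

library(metafor)
dat <- escalc(measure = "SMD",
              m1i = m_treat, sd1i = sd_treat, n1i = n_treat,  # treatment
              m2i = m_ctrl, sd2i = sd_ctrl, n2i = n_ctrl,     # control
              data = dat)
# dat$yi now holds the SMDs and dat$vi their sampling variances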

In the end, you get a meta-analytic dataset perhaps looking like (only
two hypothetical studies shown):

study  effect_size  sampling_variance  time
1      .1           .12                0
1      .2           .15                1
1      .3           .11                2
2      .1           .14                0
2      .6           .10                1
.      .            .                  .
.      .            .                  .

Now, back to the "correlation among effect sizes", in practice there
are more (and often multiple) sources of correlation among the set of
effect sizes in each study beyond simply a pre- post correlation. For
example, if some of your studies have more than 1 treatment group in
them, then all such treatment groups ought to be compared against the
same control group. That means calculating an effect size for each
treatment group requires "repeatedly" using the mean and the standard
deviation of the control group in each effect size, making such effect
sizes "positively" correlated. For example, if you have a
high-performing control group, then all your treatment groups will not
show much improvement over that control group, and if that control
group is low-performing, then, all your treatment groups will show
substantial improvement over that control group. Because they are
compared to the same control, your treatment groups' effects always
vary together in unison to varying degrees.

Although there are formulas for several (but not all) dependencies in
the primary studies, it's often a pain to individually use them for
each study to form what is called a block-diagonal matrix for sampling
dependence.
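If you do go that route, the assembly itself can be done with
metafor's bldiag(); in this sketch, the per-study covariance matrices
V1, V2, V3 are hypothetical and would have to be built from those
study-specific formulas first:

library(metafor)
# stack the per-study sampling covariance matrices on the diagonal
V <- bldiag(list(V1, V2, V3))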

If you ignore such sampling dependence, no major harm is done to your
estimates of the average effects (fixed effects), but your estimate(s)
of how variable your effects are may be systematically biased (i.e.,
even with a very large dataset, you may still not recover the true
value of the heterogeneity).

If you don't care about the heterogeneity of effect sizes, then
knowing about "any correlation among effect sizes" is not necessary,
and you can simply use metafor to estimate the effect size (SMD) for
your treatments at each time point across your studies, perhaps using:

dat$obs <- interaction(dat$study, dat$time)  # observation-level id
m1 <- rma.mv(effect_size ~ factor(time), V = sampling_variance,
             random = list(~ factor(time) | study, ~ 1 | obs),
             struct = "HAR", data = dat)

If you do care about heterogeneity of effect sizes, then you can use
metafor in conjunction with the clubSandwich package to enter a simple
guesstimate of the correlation among effect sizes, perhaps using:

library(clubSandwich)
V <- impute_covariance_matrix(vi = dat$sampling_variance, cluster = dat$study, r = 0.6)

and then entering V into the same metafor model:

m2 <- rma.mv(effect_size ~ factor(time), V = V,
             random = list(~ factor(time) | study, ~ 1 | obs),
             struct = "HAR", data = dat)

Because you have used a guesstimate for V, it would be a good idea to
guard against possible misspecification of your "m2" model. So,
instead of directly reading off the results from metafor, you can use
the clubSandwich package, perhaps using:

coef_test(m2, vcov = "CR2") for the coefficients

conf_int(m2, vcov = "CR2") for the CIs

Wald_test(m2, constraints = constrain_zero(2:3), vcov = "CR2") for
running joint comparisons among your coefficients (note that
Wald_test() requires a constraints argument; constrain_zero(2:3),
which jointly tests the time contrasts, is just one example)

I would also use a range of "r" in my "impute_covariance_matrix()"
call to make sure my final results are not too sensitive to my choice
of "r".

Please take everything I said to be very general in nature; the final
decision regarding how to model your data depends on a whole host of
data-specific considerations.

Kind regards,
Reza



On Thu, Aug 26, 2021 at 4:13 AM Philippe Tadger
<philippetadger using gmail.com> wrote:
>
> Hi YA,
>
> Yes, you can use the SMD of the post-measures, but it's the least
> interesting option (because you drop information).
>
> For Riley's alternative model you can use metamisc::riley
>
> On 26/08/2021 04:52, YA wrote:
> > Hi everyone,
> >
> > Thank you very much for the helpful suggestions.
> >
> > Philippe:
> >
> > By 'If all the studies present pre and post (and you don't have any
> > study with change score) you can do a SMD only using the post
> > measures', do you mean using the post-test SD as the standardizer to
> > calculate the SMD?
> >
> > Michael:
> >
> > 1. If I use change scores as the effect size, can I use the pre and
> > post means and SDs to calculate the effect size and the standard error?
> >
> > 2. By using a range of plausible pre-post correlations for a
> > sensitivity analysis, do you mean providing one correlation
> > coefficient for all of the primary studies and saving the results,
> > then changing the correlation coefficient to another value for all
> > the primary studies, running the analysis again, and seeing whether
> > the two sets of results are significantly different? Is it possible
> > to do a significance test?
> >
> > Mike:
> >
> > I do not have access to SAS or Stata; do you know of any R
> > implementation example code using Riley's (2008) method?
> >
> > Thank you very much guys.
> >
> > Best regards,
> >
> > YA
> >
> >
> > ------------------ Original ------------------
> > *From:* "Philippe Tadger" <philippetadger using gmail.com>;
> > *Date:* Wed, Aug 25, 2021 08:54 PM
> > *To:* "Michael
> > Dewey"<lists using dewey.myzen.co.uk>;"YA"<xinxi813 using 126.com>;"r-sig-meta-analysis"<r-sig-meta-analysis using r-project.org>;
> > *Subject:* Re: [R-meta] correlation between pre and post test?
> >
> > Dear YA
> >
> > You can use correlation imputations from similar studies, or, if
> > those are not available, you can use a mean difference estimation
> > (not SMD) between the post and the change scores. If all the studies
> > present pre and post measures (and you don't have any study with
> > change scores), you can do an SMD using only the post measures. All
> > of these are common practices that you can find in the Cochrane
> > Handbook and basic MA books.
> >
> >
> > On 25/08/2021 14:45, Michael Dewey wrote:
> >> If you are planning to analyse the change scores you will be OK with
> >> the mean change and its standard error. Otherwise try fitting with a
> >> range of plausible correlations and see how sensitive the results are
> >> to the assumed value.
> >>
> >> Michael
> >>
> >> On 25/08/2021 04:28, YA wrote:
> >>> Dear list,
> >>>
> >>>
> >>> I am trying to do a meta-analysis of randomized controlled trial
> >>> research that has experiment and control groups and pre- and
> >>> post-tests. According to the meta-analysis books, for this kind of
> >>> research I need the mean and SD for the experiment and control
> >>> groups at both pre- and post-test; I also need the correlation
> >>> between pre- and post-test. The means and SDs are usually reported
> >>> by the authors, but the correlations are usually not. How do I
> >>> obtain the correlations between the pre- and the post-test?
> >>>
> >>>
> >>> Thank you very much.
> >>>
> >>>
> >>> Best regards,
> >>>
> >>>
> >>> YA
> >>
> --
> Kind regards/Saludos cordiales
> *Philippe Tadger*
> ORCID <https://orcid.org/0000-0002-1453-4105>, ResearchGate
> <https://www.researchgate.net/profile/Philippe-Tadger>
> Phone/WhatsApp: +32498774742