[R-meta] Dear Wolfgang
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Apr 14 22:43:51 CEST 2020
Yes, if the effect size measure is the same, one can make such a comparison. Also, there should not be any overlap in the studies included in the two meta-analyses (otherwise the two estimates are not independent, as the test assumes). And yes, you don't need sample sizes, tau^2 values, or anything else -- just the two estimates and their corresponding standard errors. It also doesn't depend on what random-effects structure was used in the two meta-analyses, assuming the structures used were appropriate for the studies at hand.
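
For illustration, a minimal sketch of such a Wald-type test in R (the estimates and standard errors below are hypothetical placeholders; in practice you would plug in the grand mean lnRR and its standard error as reported by each meta-analysis):

est1 <- 0.35; se1 <- 0.08   # hypothetical grand mean lnRR and SE from meta-analysis 1
est2 <- 0.20; se2 <- 0.06   # hypothetical grand mean lnRR and SE from meta-analysis 2

# Wald-type test of the difference between two independent estimates
diff <- est1 - est2
sed  <- sqrt(se1^2 + se2^2)
zval <- diff / sed
pval <- 2 * pnorm(abs(zval), lower.tail = FALSE)
round(c(diff = diff, se = sed, zval = zval, pval = pval), 4)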
Best,
Wolfgang
>-----Original Message-----
>From: Ju Lee [mailto:juhyung2 using stanford.edu]
>Sent: Tuesday, 14 April, 2020 18:54
>To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis using r-project.org
>Subject: Re: Dear Wolfgang
>
>Dear Wolfgang,
>
>Thanks for your insights.
>I am reaching out to my colleagues to see how they have made such
>transformation.
>
>In the meantime, based on the information you have sent: is it possible to
>compare two different meta-analyses if they use the same effect size measure,
>say lnRR? And, if I understood correctly, can this Wald-type test be performed
>with only the grand mean effect sizes and their standard errors, without
>sample sizes or tau^2 values?
>
>How would this approach actually apply to publications that seemingly used
>similar mixed-effects models, when there is no guarantee that the
>random-effects structures are standardized between the two?
>
>Thank you very much!
>Best,
>JU
>________________________________________
>From: Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl>
>Sent: Tuesday, April 14, 2020 7:04 AM
>To: Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org <r-
>sig-meta-analysis using r-project.org>
>Subject: RE: Dear Wolfgang
>
>Dear Ju,
>
>In principle, this might be of interest to you:
>
>http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
>
>However, a standardized mean difference is given by (m1-m2)/sd, while a
>(log) response ratio is log(m1/m2). I see no sensible way of converting the
>former to the latter.
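>
>For instance (hypothetical numbers, purely for illustration), the same pair
>of means yields a single lnRR but many different SMDs, depending on the sd:
>
>m1 <- 10; m2 <- 8
>log(m1 / m2)      # lnRR = 0.223, regardless of the sd
>(m1 - m2) / 1     # SMD = 2.0 if sd = 1
>(m1 - m2) / 5     # SMD = 0.4 if sd = 5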
>
>Best,
>Wolfgang
>
>>-----Original Message-----
>>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-
>project.org]
>>On Behalf Of Ju Lee
>>Sent: Monday, 13 April, 2020 22:47
>>To: r-sig-meta-analysis using r-project.org
>>Subject: [R-meta] Dear Wolfgang
>>
>>Dear Wolfgang,
>>
>>I hope you are doing well.
>>
>>My research group is currently working on a project where they are trying to
>>compare effect sizes generated from their current mixed-effects meta-analysis
>>with effect sizes (based on similar response variables) calculated in other
>>meta-analysis publications.
>>
>>We are currently using the log response ratio (lnRR) and are trying to make
>>some statement or analysis to compare our grand mean effect sizes with those
>>of other studies. More specifically, we are examining how herbivorous animals
>>control plant growth in degraded environments. There is already a
>>meta-analysis out there that has examined this (in a comparable manner) in
>>natural environments, as opposed to our study.
>>
>>My colleagues want to know if there is a way to make some type of comparison
>>(e.g., whether responses are stronger in degraded vs. natural environments)
>>between the effect sizes from these different studies using statistical
>>approaches.
>>So far, what they have from the other meta-analysis publication is the grand
>>mean Hedges' d and its variance, which they transformed to lnRR and its
>>variance in hopes of comparing it with our lnRR effect sizes.
>>
>>My view is that this is not possible unless we can get their actual raw
>>dataset and run a whole new model combining it with our original raw dataset.
>>But I wanted to reach out to you and the community to ask whether there are
>>alternative approaches to compare mean effect sizes among different
>>meta-analyses that are assumed to have used similar approaches in study
>>selection and modeling (another issue being the different random-effects
>>structures used in different meta-analyses, which may not be very apparent
>>from the method descriptions).
>>
>>Thank you for reading and I hope to hear from you!
>>Best,
>>JU