[R-meta] Compiling different designs in the same meta-analysis

Philippe Tadger philippetadger using gmail.com
Tue May 4 12:04:24 CEST 2021


Thanks Gerta for such a simple and important reminder.

Apart from providing a test for subgroup differences, what other advantages
does a subgroup analysis (with the moderator in a meta-regression) have over
separate meta-analyses? Assume the moderator is categorical.
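
For concreteness, a minimal sketch of the two approaches with metafor (the
data frame and column names dat, yi, vi, design are made up for illustration,
not taken from this thread):

library(metafor)

## (a) separate meta-analyses, one per level of the design factor
res.cg  <- rma(yi, vi, data = dat, subset = (design == "control-group"))
res.pp  <- rma(yi, vi, data = dat, subset = (design == "pre-post"))

## (b) one model with design as a categorical moderator; the QM statistic
##     is the test for subgroup differences
res.mod <- rma(yi, vi, mods = ~ design, data = dat)
res.mod

Note that (b) as written assumes a single tau^2 shared across designs,
whereas the separate analyses in (a) estimate the heterogeneity within each
subgroup.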

On 04/05/2021 11:25, Gerta Ruecker wrote:
> Hi Gladys,
>
> Note that conducting separate meta-analyses is not the same as a subgroup
> analysis. If you do a subgroup analysis (in the Cochrane sense), you have
> design as a moderator and obtain a treatment-design interaction test, which
> you don't get when conducting separate analyses. Therefore I would prefer to
> present everything in one analysis.
>
> Best,
>
> Gerta
>
> On 04.05.2021 at 11:17, Gladys Barragan-Jason wrote:
>> Hi all,
>> Thanks a lot for your responses.
>> Actually, I did not specify it before, but I am using the rma.mv
>> function, since I can have several estimates from several studies of the
>> same lab (random = ~ 1 | lab/study/estid).
>> Following your recommendations, I checked whether the type of design had
>> a significant effect on effect sizes, and it didn't, except for one
>> specific type of intervention for which I do not have much data: 3 papers
>> for each design, containing 7 and 4 effect sizes respectively. In this
>> case, the meta-analysis of the overall estimates is non-significant,
>> while when computing them separately, one is significant (control vs.
>> treatment groups) and the other is not (pre- vs. post-treatment).
>> I do think it would make sense to present the overall meta-analysis as
>> well as the two designs separately. In any case, we would need more data
>> to draw firm conclusions.
>> Best,
>> Gladys
>>
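
A minimal sketch of the multilevel model Gladys describes, with design added
as a moderator (the data frame and the columns yi, vi, design are
hypothetical; the random-effects structure is the one she gives):

library(metafor)

## estimates nested in studies nested in labs, with design as a categorical
## moderator; the QM test for the moderator is the test of whether effect
## sizes differ by design
res <- rma.mv(yi, vi,
              mods   = ~ design,
              random = ~ 1 | lab/study/estid,
              data   = dat)
res
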
>> On Mon, 3 May 2021 at 20:18, Viechtbauer, Wolfgang (SP)
>> <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>>
>>      Agree, but I also want to point to this:
>>
>>      https://www.metafor-project.org/doku.php/tips:computing_adjusted_effects
>>
>>      It discusses the concept of computing adjusted effects, which may
>>      be what you are looking for, Gladys. However, as noted at the end,
>>      some may question the usefulness and interpretability of such an
>>      estimate.
>>
>>      Best,
>>      Wolfgang
>>
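
A rough sketch of the adjusted-effect idea discussed on that page, assuming a
single categorical design moderator and hypothetical column names (yi, vi,
design); see the link above for the full treatment:

library(metafor)

## meta-regression without intercept: one coefficient per design
res <- rma(yi, vi, mods = ~ design - 1, data = dat)

## 'adjusted' overall effect: average the design-specific estimates,
## here weighting by the proportion of estimates observed per design
w <- as.numeric(prop.table(table(dat$design)))
predict(res, newmods = w)
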
>>      >-----Original Message-----
>>      >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Dr. Gerta Rücker
>>      >Sent: Monday, 03 May, 2021 20:09
>>      >To: Gladys Barragan-Jason
>>      >Cc: R meta
>>      >Subject: Re: [R-meta] Compiling different designs in the same meta-analysis
>>      >
>>      >Hi Gladys,
>>      >
>>      >You may pool all effects in one meta-analysis, using "design" as a
>>      >moderator. In meta-analysis, this is called a subgroup analysis (for
>>      >example by Cochrane). You then get both within-subgroup effects and a
>>      >pooled effect, and also a test of the treatment-design interaction,
>>      >which tells you whether the treatment effect differs between designs.
>>      >Thus you have everything you are interested in. However, in your
>>      >interpretation you have to account for the different character of the
>>      >studies: In a two-group parallel design, if it is randomized (you did
>>      >not mention whether it is), you can expect an unbiased estimate of the
>>      >treatment effect. In a pre-post design, you must expect all kinds of
>>      >biases (regression to the mean, to mention only one) and also, as
>>      >Michael said, different variation. Therefore you have to interpret the
>>      >results with caution.
>>      >
>>      >Best, Gerta
>>      >
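
A Cochrane-style subgroup analysis of the kind Gerta describes can be
sketched with the meta package; the data frame and columns TE, seTE, design
are assumptions here, and in older versions of meta the subgroup argument is
called byvar:

library(meta)

## pooled effect, within-subgroup effects, and a test for subgroup
## differences (the treatment-design interaction) in one summary
m <- metagen(TE = TE, seTE = seTE, data = dat, subgroup = design)
summary(m)
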
>>      >On 03.05.2021 at 19:42, Gladys Barragan-Jason wrote:
>>      >> Hi Gerta and Michael,
>>      >> I am not sure I understand. I am not saying that the effect sizes
>>      >> are different. They are comparable, but of course they differ in
>>      >> terms of CIs, since the numbers of studies and participants are
>>      >> different. I would like to know whether we can obtain an overall
>>      >> effect size while controlling for design. So maybe the answer is no.
>>      >> Thanks
>>      >> Gladys
>>
>>
>>
>> -- 
>>
>> ------------------------------------------
>>
>> Gladys Barragan-Jason, PhD.
>> Website: https://sites.google.com/view/gladysbarraganjason/home
>>
>> Station d'Ecologie Théorique et Expérimentale (SETE)
>>
>> CNRS de Moulis
>>
>>
>>
>>
>>
-- 
Kind regards
*Philippe Tadger*




