[R-meta] Pre-test Post-test Control design Different N

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Sat Jan 16 11:18:33 CET 2021


Dear Marianne,

For studies that have unpaired samples, you should not be using Morris' formulas. They are for paired samples.

See below for additional comments.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>On Behalf Of Marianne DEBUE
>Sent: Wednesday, 13 January, 2021 16:01
>To: Michael Dewey
>Cc: r-sig-meta-analysis
>Subject: Re: [R-meta] Pre-test Post-test Control design Different N
>
>Hi Michael,
>
>In fact, there are different situations:
>- For some studies, the same plots are sampled once before and once after
>the treatment so they are paired, and pre-Test nT = post-Test nT.

So here, you can use what is described in Morris (2008) and Becker (1988). The effect size measure is:

d_T = (mean_T_2 - mean_T_1) / SD_T_1
d_C = (mean_C_2 - mean_C_1) / SD_C_1 
d = d_T - d_C

with T/C denoting treatment/control and 1/2 the pre/post-test timepoint. Here, mean_T_1 and mean_T_2 are computed from the exact same plots and the same goes for mean_C_1 and mean_C_2.
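To make this concrete, here is a small R sketch with made-up summary statistics. The pre/post correlations (r_T, r_C) are assumed values that are needed for the sampling variances, and the variance expression is the large-sample one from Becker (1988):

# hypothetical summary statistics (illustration only)
n_T <- 25; mean_T_1 <- 10.2; mean_T_2 <- 14.8; sd_T_1 <- 3.1
n_C <- 25; mean_C_1 <- 10.5; mean_C_2 <- 11.1; sd_C_1 <- 3.4
r_T <- 0.6; r_C <- 0.6   # assumed pre/post correlations within each group

# standardized mean change (raw-score standardization) per group
d_T <- (mean_T_2 - mean_T_1) / sd_T_1
d_C <- (mean_C_2 - mean_C_1) / sd_C_1

# large-sample sampling variances (Becker, 1988)
v_T <- 2 * (1 - r_T) / n_T + d_T^2 / (2 * n_T)
v_C <- 2 * (1 - r_C) / n_C + d_C^2 / (2 * n_C)

# difference between the two changes and its sampling variance
d <- d_T - d_C
v <- v_T + v_C

(Essentially the same per-group values, with a small-sample bias correction, can be obtained with escalc(measure="SMCR") in metafor and then combined as above.)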

>- For some studies, the same plots are sampled twice before and once after
>the treatment so they are paired, but as I group pre-Test data with Cochrane
>formula, pre-Test nT = 2 * post-Test nT.

The formula given in the Cochrane Handbook is for pooling together two *unpaired/independent* samples. If the same plots have been measured twice, then combining the two means/SDs into a single mean/SD is more difficult and requires other formulas (which will also depend on the correlation between the measurements). Unfortunately, I don't have the time right now to look up or derive those equations.
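For reference, here is a small R sketch of the Table 7.7.a formula for combining two *independent* subgroups (with hypothetical inputs); it is not valid when the same plots were measured twice:

# Cochrane Handbook Table 7.7.a: combining two independent subgroups
# (n1/m1/sd1 = first subgroup, n2/m2/sd2 = second subgroup)
combine_groups <- function(n1, m1, sd1, n2, m2, sd2) {
  n <- n1 + n2
  m <- (n1 * m1 + n2 * m2) / n
  s <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2 +
             (n1 * n2 / n) * (m1 - m2)^2) / (n - 1))
  c(n = n, mean = m, sd = s)
}

# hypothetical example: two pre-treatment sampling years
combine_groups(30, 10.4, 3.2, 30, 10.9, 3.0)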

>- For some studies, x plots are sampled pre-treatment and y (x and y
>different) plots are sampled post-treatment so pre-Test nT and post-Test nT
>are different; either some plots are common between pre-treatment and post-
>treatment (so they are paired) and some are only sampled once (not paired),
>or plots are completely different between pre and post-treatment sampling
>(not paired).

If it's a mix of unpaired/paired, it gets even more tricky. But the case where the plots before and the plots after are completely different is relatively simple. Then mean_T_1 and mean_T_2 are independent, since they are based on different plots. In essence, d_T is then a standardized mean difference for two independent samples, and the same goes for d_C. So just compute a regular standardized mean difference (and its sampling variance) for the treatment group plots, a regular standardized mean difference (and its sampling variance) for the control group plots, and then take their difference as above. The sampling variance of this difference is just the sum of the two sampling variances.
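To illustrate, a small R sketch with made-up summary data; the pooled-SD standardization and the usual large-sample variance approximation for an independent-samples SMD are what I assume by "regular" here:

# standardized mean difference for two independent samples (pre vs post)
# with its usual large-sample sampling variance
smd <- function(n1, m1, sd1, n2, m2, sd2) {
  sd_pool <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d <- (m2 - m1) / sd_pool                        # post minus pre
  v <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))
  c(d = d, v = v)
}

# hypothetical summary data: different plots before (group 1) and after (group 2)
trt <- smd(30, 10.2, 3.1, 40, 14.8, 3.3)   # treatment plots
ctl <- smd(30, 10.5, 3.4, 40, 11.1, 3.2)   # control plots

d <- trt["d"] - ctl["d"]   # difference between the two changes
v <- trt["v"] + ctl["v"]   # its sampling variance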

>I agree that I don't really have a paired design, as it depends on the
>studies (and on the plots within studies), but I couldn't find any other
>formula than Morris' to calculate an effect size for Pre-Test Post-Test
>Control design. Because I suppose that even if the plots are not paired, I
>can't consider pre-test and post-test data as independent, am I wrong? 

Why not? That's in essence what we do with unpaired data. Of course, there could still be dependencies for other reasons even in unpaired data, but this is typically not a major concern.

>So that is why I was wondering if it was possible to adapt the formula for 
>a non-paired design. But if you are aware of such formulas (for non-paired
>Pre-Test Post-Test Control design), I'm interested in it!

As described above.

>Regards,
>Marianne
>
>----- Mail original -----
>De: "Michael Dewey" <lists using dewey.myzen.co.uk>
>À: "Marianne DEBUE" <marianne.debue using mnhn.fr>, "r-sig-meta-analysis" <r-sig-
>meta-analysis using r-project.org>
>Envoyé: Mercredi 13 Janvier 2021 15:28:41
>Objet: Re: [R-meta] Pre-test Post-test Control design Different N
>
>Dear Marianne
>
>Perhaps you could clarify something for us. You state that you have a
>pre-post design and you have used methods for paired data. You then go
>on to describe situations in which the number of units before and after
>is not the same. Perhaps it is just me but I do not understand how you
>can then have pairing.
>
>Michael
>
>On 13/01/2021 08:04, Marianne DEBUE wrote:
>> Hi,
>>
>> I'm conducting a meta-analysis in ecology on a Pre-test Post-test Control
>design.
>> I'd like to use Morris "dppc2" formula (in "Estimating effect sizes from
>pretest-posttest-control group designs", https://doi.org/10.1177/1094428106291059)
>in order to take into account the non-independency of the pre-test post-test
>design.
>> This formula applies for paired-observations and depends on the Pre-test
>Control Mean, Post-test Control Mean, Pre-test Treatment Mean, Post-test
>Treatment Mean, Pre-Test Control SD, Pre-test Treatment SD, Treatment Sample
>size nT and Control Sample size nC.
>> Some studies have the same pre-test and post-test nT (and nC) because they
>always sample the same plots. However, some studies have a different pre-test
>nT and post-test nT (and/or pre-test nC and post-test nC), either because of
>the experimental design of the author of the study (for example 30 Before
>samples and 60 After samples), or because we have gathered the Before data if
>they were given for several years using the Cochrane combined group formula
>(https://handbook-5-1.cochrane.org/chapter_7/table_7_7_a_formulae_for_combining_groups.htm)
>(for example, 30 samples taken 2 years before the intervention, 30 samples
>taken one year before the intervention, and 30 samples taken one year after
>the intervention, giving 60 samples taken before the intervention and 30
>taken after).
>>
>> Do you know if it is possible to adapt the formula to take into account a
>possible difference in nT or nC between the Before and After?
>>
>> Best regards,
>> Marianne

