[R-meta] correlation between pre and post test?
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Fri Aug 27 09:40:51 CEST 2021
You will find a discussion of this here:
This doesn't get you around the problem of needing the pre-post correlations. As others have suggested, if you really cannot obtain the correlations from the articles or authors, you can assume some reasonable 'guesstimates' and run sensitivity analyses.
The other things that have been brought up (e.g., the method by Riley, approximate V matrices, cluster-robust variance estimation) are actually addressing a different issue / type of correlation (the one between multiple effects measured in the same sample of subjects).
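For the change-score SMD itself, the assumed pre-post correlation enters through the sampling variance, so a sensitivity analysis can be as simple as recomputing the variances over a grid of plausible correlations. A minimal sketch in Python (the variance approximation is the standard one for a single-group pre-post SMD, e.g. Becker, 1988; the function name is mine):

```python
def var_prepost_smd(d, n, r):
    """Approximate sampling variance of a single-group pre-post SMD,
    given an assumed pre-post correlation r (Becker, 1988)."""
    return 2 * (1 - r) / n + d**2 / (2 * n)

# Sensitivity analysis: how much does the variance change over plausible r?
for r in (0.5, 0.7, 0.9):
    print(r, round(var_prepost_smd(0.5, 30, r), 4))
```

Rerunning the meta-analysis with each set of variances shows how sensitive the pooled estimate is to the guessed correlation.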
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of YA
>Sent: Friday, 27 August, 2021 5:20
>To: Reza Norouzian; Philippe Tadger
>Subject: Re: [R-meta] correlation between pre and post test?
>Another way I found to calculate the SMD for an experimental-control pre-post
>test design is given in:
>Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control
>group designs. Organizational Research Methods, 11(2), 364-386.
>Morris calculated the SMD by
>SMD = cp * (((mean_post_exp - mean_pre_exp) - (mean_post_ctrl - mean_pre_ctrl)) / sd_pre)
>cp = 1 - 3/(4*(n_exp + n_ctrl - 2) - 1)
>sd_pre = sqrt(((n_exp - 1)*sd^2_pre_exp + (n_ctrl - 1)*sd^2_pre_ctrl) / (n_exp + n_ctrl - 2))
>People discuss this approach online. Do you think it is still common practice
>today?
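The quoted Morris (2008) point estimate is straightforward to compute from the summary statistics. A minimal sketch in Python (the function name is mine; arguments follow the notation in the quoted formulas):

```python
import math

def morris_dppc2(m_pre_e, m_post_e, sd_pre_e, n_e,
                 m_pre_c, m_post_c, sd_pre_c, n_c):
    """Morris (2008) SMD for a pretest-posttest-control design:
    difference in pre-to-post changes, standardized by the pooled
    pretest SD, with a small-sample bias correction cp."""
    df = n_e + n_c - 2
    cp = 1 - 3 / (4 * df - 1)                          # bias correction
    sd_pre = math.sqrt(((n_e - 1) * sd_pre_e**2 +
                        (n_c - 1) * sd_pre_c**2) / df)  # pooled pretest SD
    return cp * (((m_post_e - m_pre_e) - (m_post_c - m_pre_c)) / sd_pre)
```

Note that while this point estimate needs no correlation, its sampling variance in Morris (2008) still involves the pre-post correlation, which is exactly the quantity discussed above.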