[R-meta] Effect size calculation in a pretest-posttest control design

Célia Sofia Moreira celiasofiamoreira at gmail.com
Fri Jan 26 23:50:54 CET 2018


Hi!


I am studying a pretest-posttest control group design. I saw the
recommended method (Morris, 2008) for computing the effect sizes, presented in
one of the examples on Prof. Wolfgang's webpage:

http://www.metafor-project.org/doku.php/analyses:morris2008
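If I understood that example correctly, the computation per outcome would look
roughly as follows (this is just my own sketch, assuming hypothetical column
names N for the per-group sample size and ri for the pre-post correlation):

library(metafor)

# Standardized mean change with raw-score (pretest SD) standardization,
# separately for the treatment and control groups:
datT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
               ni=N, ri=ri, data=datT)
datC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
               ni=N, ri=ri, data=datC)

# Difference between the two standardized mean changes, summing the variances:
dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)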



However, I don't have the pretest-posttest correlations. Prof. Wolfgang
suggests that in this case "one can substitute approximate values (...) and
conduct a sensitivity analysis to ensure that the conclusions from the
meta-analysis are unchanged when those correlations are varied". However,
since I have many different outcomes, such a sensitivity analysis would be a
very complex task. So I was wondering whether, instead of measure="SMCR", I
could use measure="SMD". More specifically:



datT <- escalc(measure="SMD", m1i=m_post, m2i=m_pre, sd1i=sd_post, sd2i=
sd_pre, n1i=N1, n2i=N2, vtype="UB" , data=datT)

datC <- escalc(measure="SMD", m1i=m_post, m2i=m_pre, sd1i=sd_post, sd2i=
sd_pre, n1i=N1, n2i=N2, vtype="UB" , data=datC)

dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)
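(To show what I mean about the complexity: my understanding is that the
suggested sensitivity analysis would amount to repeating the SMCR calculation
over a grid of plausible correlations, roughly as sketched below for a single
outcome. The correlations 0.3, 0.5, and 0.7 are placeholders I made up, and
this would have to be repeated for every outcome.)

library(metafor)

# Sketch of a sensitivity analysis for one outcome over assumed correlations:
for (r in c(0.3, 0.5, 0.7)) {
  tmpT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
                 ni=N, ri=rep(r, nrow(datT)), data=datT)
  tmpC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
                 ni=N, ri=rep(r, nrow(datC)), data=datC)
  tmp  <- data.frame(yi = tmpT$yi - tmpC$yi, vi = tmpT$vi + tmpC$vi)
  res  <- rma(yi, vi, data=tmp)
  # check whether the pooled estimate is stable across the assumed correlations
  print(data.frame(r = r, estimate = coef(res), se = res$se))
}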



If the SMD approach is not acceptable, could you please explain what the
problem with it is, and whether there is any other, simpler alternative?



Kind regards



