[R-meta] Effect sizes calculation in Pretest Posttest Control design

Michael Dewey lists at dewey.myzen.co.uk
Sun Jan 28 13:12:10 CET 2018


Comments in line

On 27/01/2018 23:29, Célia Sofia Moreira wrote:
> Dear Prof. Michael Dewey,
> 
> Thank you very much for your encouraging comments. Indeed, I considered 
> different values for the correlation, and the results for the differences 
> (between the two standardised mean change values, "SMCR") were the same 
> for each outcome. Only the variances of these differences varied a bit, 
> according to the following rule: higher correlation --> lower variance. 
> Thus, following your advice, maybe r = .5 is a reasonable choice. Do you 
> agree?
> 

That may depend on your field of research. For what one might loosely 
call psychological variables (attitude, belief, ...), test-retest 
correlations over any reasonable time period would not be expected to be 
much above 0.5. If you were measuring something harder (systolic blood 
pressure, serum creatinine, ...) over a shorter period, then I might 
expect the correlation to be a bit higher.
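
A minimal sketch of that trade-off (hypothetical numbers, using 
metafor's escalc): the SMCR point estimate does not depend on the 
imputed correlation, but its sampling variance shrinks as r grows.

library(metafor)

# same means, SD, and sample size throughout; only the imputed
# correlation varies
for (r in c(0.2, 0.5, 0.8)) {
    es <- escalc(measure="SMCR", m1i=24, m2i=20, sd1i=8, ni=30, ri=r)
    cat("r =", r, " yi =", round(es$yi, 3), " vi =", round(es$vi, 3), "\n")
}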

> Kind regards,
>   celia
> 
> 2018-01-27 13:54 GMT+00:00 Michael Dewey <lists at dewey.myzen.co.uk>:
> 
>     Dear Célia
> 
>     I do not think the sensitivity analysis needs to be quite so complex
>     as you suggest. You can use the same imputed correlation for all
>     your primary studies. Then do it for (say) 0.2, 0.5, 0.8 and see
>     what happens. If the results are very different then use some
>     intermediate values as well to see where it all breaks down.
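
A minimal sketch of that procedure, borrowing the column names from the 
code quoted further down (m_post, m_pre, sd_pre) plus hypothetical ones: 
Ni for each group's sample size, with datT and datC holding the 
treatment and control rows. The same imputed correlation is recycled 
across all studies and the pooled model is refit for each value.

library(metafor)

fits <- lapply(c(0.2, 0.5, 0.8), function(r) {
    esT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
                  ni=Ni, ri=rep(r, nrow(datT)), data=datT)
    esC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
                  ni=Ni, ri=rep(r, nrow(datC)), data=datC)
    dat <- data.frame(yi = esT$yi - esC$yi, vi = esT$vi + esC$vi)
    rma(yi, vi, data=dat)   # random-effects model for this imputed r
})

# pooled estimate and standard error side by side for r = 0.2, 0.5, 0.8
sapply(fits, function(f) c(b = f$b, se = f$se))

If these barely move across r, the conclusions are robust to the imputed 
correlation; if not, intermediate values show where they start to change.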
> 
>     Michael
> 
> 
> 
>     On 26/01/2018 22:50, Célia Sofia Moreira wrote:
> 
>         Hi!
> 
> 
>         I am studying a pretest-posttest control group design. I saw the
>         recommended method (Morris) for computing the effect sizes,
>         presented in one of the examples on Prof. Wolfgang's webpage:
> 
>         http://www.metafor-project.org/doku.php/analyses:morris2008
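
For reference, a minimal sketch along the lines of that example, using 
the column names from the code below plus two hypothetical ones (ri for 
an observed or imputed pre-post correlation, Ni for the group sample 
size): SMCR is computed within each arm and the two change scores are 
then differenced.

library(metafor)

datT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
               ni=Ni, ri=ri, data=datT)   # treatment arm
datC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre,
               ni=Ni, ri=ri, data=datC)   # control arm
dat  <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)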
> 
> 
> 
>         However, I don’t have pretest-posttest correlations. Prof. Wolfgang
>         suggests that in this case “one can substitute approximate values
>         (...) and conduct a sensitivity analysis to ensure that the
>         conclusions from the meta-analysis are unchanged when those
>         correlations are varied”. However, since I have many different
>         outcomes, a sensitivity analysis will be a very complex task. So, I
>         was wondering if, instead of measure="SMCR", I could use
>         measure="SMD". More specifically:
> 
> 
> 
>         # treatment group: post vs pre treated as two independent samples
>         datT <- escalc(measure="SMD", m1i=m_post, m2i=m_pre, sd1i=sd_post,
>                        sd2i=sd_pre, n1i=N1, n2i=N2, vtype="UB", data=datT)
> 
>         # control group: same computation
>         datC <- escalc(measure="SMD", m1i=m_post, m2i=m_pre, sd1i=sd_post,
>                        sd2i=sd_pre, n1i=N1, n2i=N2, vtype="UB", data=datC)
> 
>         # difference between the two standardised mean changes
>         dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)
> 
> 
> 
>         If not, can you please explain the problem with this approach and
>         let me know whether any simpler alternative exists?
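
A quick way to see what changes (hypothetical single-study numbers, not 
from the thread): the leading term of the SMD sampling variance, 
1/n1 + 1/n2, matches the SMCR term 2*(1 - r)/n only when r = 0, so 
treating the pre and post means as two independent groups roughly 
amounts to imputing a zero pre-post correlation, which inflates vi 
whenever the true correlation is positive.

library(metafor)

smd  <- escalc(measure="SMD",  m1i=24, m2i=20, sd1i=8, sd2i=8,
               n1i=30, n2i=30)
smcr <- escalc(measure="SMCR", m1i=24, m2i=20, sd1i=8, ni=30, ri=0.5)
c(vi_SMD = smd$vi, vi_SMCR = smcr$vi)   # the SMD variance is noticeably larger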
> 
> 
> 
>         Kind regards
> 
>         _______________________________________________
>         R-sig-meta-analysis mailing list
>         R-sig-meta-analysis at r-project.org
>         https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> 
> 
>     -- 
>     Michael
>     http://www.dewey.myzen.co.uk/home.html
> 
> 

-- 
Michael
http://www.dewey.myzen.co.uk/home.html


