# [R-meta] differences in within subject standardized mean difference estimated via means / SD's compared with test statistics

Brendan Hutchinson brendan.hutchinson at alumni.anu.edu.au
Sat Apr 22 16:52:51 CEST 2023

```
Hi all,

I'm currently conducting a meta-analysis that will be using primarily within-subject / repeated measures standardised mean difference as the effect size.

As is common, many studies in my sample do not report the means / standard deviations needed to compute the effect size directly, so for some studies I need to derive it from test statistics (p's, t's, F's).

My issue is that, in some instances, the two methods yield considerably different effect sizes. For example, one study in my meta-analysis provides both, which allows me to compare them. When calculated via means / SDs (assuming a correlation of 0.7 between the two repeated measures), the effect size estimate is approx 1.77, whereas the effect size derived from the t statistic is 0.62 (reproducible example below).

I imagine slight differences between the two would be normal, but differences this large have me scratching my head. I'm wondering whether this is a problem on my end (potential sources of error include the assumed correlation value, the SE-to-SD conversion, and the F-to-t conversion), or whether differences of this sort are to be expected. More broadly, in what circumstances would differences between the two methods be within the normal range, and when should they raise a red flag?

If anybody has any insight, advice, or recommended literature they can point me to, that would be much appreciated!

reproducible example:

library(metafor)

mean_condition1 <- 0.197
se_condition1 <- 0.082
mean_condition2 <- -0.350
se_condition2 <- 0.105
sample_size <- 15
assumed_r <- 0.7
F_value <- 5.78
t_value <- sqrt(F_value)  # t = sqrt(F) holds for a 1-df (two-condition) F

# via means / SDs (SEs converted to SDs via sd = se * sqrt(n))
escalc(measure = "SMCC",
       m1i = mean_condition1, sd1i = se_condition1 * sqrt(sample_size),
       m2i = mean_condition2, sd2i = se_condition2 * sqrt(sample_size),
       ni = sample_size, ri = assumed_r)
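# By-hand version of the same calculation (my sketch of the formulas in the
# Borenstein et al. chapter cited in the note below; variable names are mine):
sd1 <- se_condition1 * sqrt(sample_size)
sd2 <- se_condition2 * sqrt(sample_size)
sd_diff <- sqrt(sd1^2 + sd2^2 - 2 * assumed_r * sd1 * sd2)
d_raw <- (mean_condition1 - mean_condition2) / sd_diff  # approx 1.87
J <- 1 - 3 / (4 * (sample_size - 1) - 1)  # small-sample (Hedges) correction
d_raw * J  # approx 1.77, matching the escalc() output above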

# via t (d_z = t / sqrt(n); no small-sample correction applied here)
d_via_t <- t_value / sqrt(sample_size)
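# Consistency check (my own addition, assuming the reported F/t comes from the
# paired contrast): for a paired t-test, t / sqrt(n) = (m1 - m2) / sd_diff, so
# the reported t implies a particular sd_diff, and hence an implied correlation.
# An implied r far from the assumed 0.7, or outside [-1, 1], flags inputs that
# cannot all describe the same within-subject comparison:
sd1 <- se_condition1 * sqrt(sample_size)
sd2 <- se_condition2 * sqrt(sample_size)
sd_diff_implied <- (mean_condition1 - mean_condition2) * sqrt(sample_size) / t_value
r_implied <- (sd1^2 + sd2^2 - sd_diff_implied^2) / (2 * sd1 * sd2)
r_implied  # approx -1.98 with these numbers, i.e. outside [-1, 1]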

(Note that I've used escalc here simply to streamline the reproducible example. I'm actually using formulas derived from Lakens (2013, https://doi.org/10.3389/fpsyg.2013.00863) and Borenstein et al. (2009, https://doi.org/10.1002/9780470743386.ch4), and the estimates are largely similar. Also, the two example estimates are not directly comparable, since d_via_t is uncorrected, but my question stands: that alone doesn't explain a difference of this size.)

Thanks so much,
Brendan


```