[R-meta] Standard Error of an effect size for use in longitudinal meta-analysis

Simon Harmel sim.harmel using gmail.com
Thu Aug 13 01:38:46 CEST 2020


Dear Wolfgang,

Thank you very much for your insightful comments. Regarding my question #2,
I meant: after correcting "dppc" and "dppt" (as well as their sampling
variances "vc" and "vt"), is there any need for a further correction of
d_dif and sqrt(vt + vc)? I think not, right?

Now, am I wrong to think that your large-sample "vc" and "vt" need to be
multiplied by "cfactor(n-1)^2" to become bias-free?

where

    cfactor <- function(df) exp(lgamma(df/2) - log(sqrt(df/2)) - lgamma((df-1)/2))  # df = n - 1
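
For concreteness, here is a minimal sketch of what I have in mind, using the
cfactor above (the n and d values are made up, just for illustration):

    n <- 40                        # illustrative sample size
    d <- 0.4                       # illustrative standardized mean change
    g <- cfactor(n - 1) * d        # bias-corrected point estimate
    v <- 1/n + d^2/(2*n)           # large-sample sampling variance of d
    v_adj <- cfactor(n - 1)^2 * v  # the extra correction I am asking about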

Simon

On Wed, Aug 12, 2020 at 1:36 AM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> 1) Yes, d = (m_1 - m_2) / SD_d (where m_1 and m_2 are the observed means
> at time 1 and 2 and SD_d is the standard deviation of the change scores) is
> a biased estimator of (mu_1 - mu_2)/sigma_D. An approximately unbiased
> estimator is given by:
>
> g = (1 - 3/(4*(n-1) - 1)) * d
>
> See, for example:
>
> Gibbons, R. D., Hedeker, D. R., & Davis, J. M. (1993). Estimation of
> effect size from a series of experiments involving paired comparisons.
> Journal of Educational Statistics, 18(3), 271-279.
>
> where you can also find the exact correction factor.
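>
> For example, the approximate factor can be compared against the exact one
> along these lines (just an illustrative sketch, with df = n - 1):
>
> cfactor_exact  <- function(df) exp(lgamma(df/2) - log(sqrt(df/2)) - lgamma((df-1)/2))
> cfactor_approx <- function(df) 1 - 3/(4*df - 1)
> cfactor_exact(39)   # ~0.9806 for n = 40
> cfactor_approx(39)  # ~0.9806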
>
> Not sure what you mean by correcting the sampling variance (and which
> variance estimate you are referring to). You can find the equation for an
> unbiased estimator of the sampling variance of d and g in:
>
> Viechtbauer, W. (2007). Approximate confidence intervals for standardized
> effect sizes in the two-independent and two-dependent samples design.
> Journal of Educational and Behavioral Statistics, 32(1), 39-60.
>
> In particular, see equations 25 and 26.
>
> 2) I don't understand your question.
>
> 3) The idea of using 'd_dif' for computing an effect size for the
> independent-groups pretest–posttest design is not new. See Gibbons et al.
> (1993) and:
>
> Becker, B. J. (1988). Synthesizing standardized mean-change measures.
> British Journal of Mathematical and Statistical Psychology, 41(2), 257-278.
>
> Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in
> meta-analysis with repeated measures and independent-groups designs.
> Psychological Methods, 7(1), 105-125.
>
> In this context, one also needs to consider whether one should compute d
> or g using raw-score or change-score standardization (the former being
> (m1-m2)/SD_1, where SD_1 is the standard deviation at the first time
> point). Becker (1988) discusses the use of raw-score standardization, while
> Gibbons et al. (1993) focuses on change-score standardization. In any
> case, I think you will find some useful discussions in these articles.
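>
> To make the distinction concrete, a small sketch with made-up numbers:
>
> m1 <- 10; m2 <- 12   # observed means at time 1 and time 2
> sd1 <- 4             # SD of the scores at the first time point
> sdd <- 3             # SD of the change scores
> (m1 - m2)/sd1        # raw-score standardization (Becker, 1988)
> (m1 - m2)/sdd        # change-score standardization (Gibbons et al., 1993)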
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: Simon Harmel [mailto:sim.harmel using gmail.com]
> >Sent: Tuesday, 11 August, 2020 23:22
> >To: Viechtbauer, Wolfgang (SP)
> >Cc: R meta
> >Subject: Re: Standard Error of an effect size for use in longitudinal
> >meta-analysis
> >
> >Dear Wolfgang,
> >
> >Many thanks for your confirmation! I have some follow-up questions.
> >
> >(1) I believe "dppc" is biased and requires a correction, as does its
> >sampling variance, right?
> >
> >(2) If "dppc" and "dppt" (along with their sampling variances) are each
> >biased and must be corrected, then we don't need to correct the sampling
> >variance of "d_dif" (i.e., dppt - dppc) again, right?
> >
> >(3) Do you see any advantage or disadvantage to using "d_dif" in a
> >longitudinal meta-analysis of studies that have a control group?
> >
> >[In terms of advantages:
> > a- I think that, logistically, using "d_dif" will cut in half the number of
> >effect sizes that are otherwise usually computed from such studies,
> > b- "d_dif"'s metric seems to better suit the repeated-measures design used
> >in the primary studies,
> > c- "d_dif" seems to allow for investigating (by using appropriate
> >moderators) the threats to internal validity (regression to the mean,
> >history, maturation, testing), say, when the primary studies didn't use
> >random assignment of subjects (i.e., nonequivalent-groups designs).
> >
> >In terms of disadvantages:
> > a- I think obtaining "r", the correlation between the pre- and post-tests,
> >to compute "dppc" and "dppt" is a bit difficult,
> > b- while "d_dif"'s metric seems to better suit the repeated-measures design
> >used in the primary studies, most applied meta-analysts are not used to such
> >a metric.
> >]
> >
> >On Tue, Aug 11, 2020 at 2:04 PM Viechtbauer, Wolfgang (SP)
> ><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >Dear Simon,
> >
> >Based on what you describe, dppc and dppt appear to be standardized mean
> >changes using what I would call "change-score standardization" (i.e.,
> >(m1 - m2)/SDd, where SDd is the standard deviation of the change scores).
> >
> >I haven't really tried to figure out the details of what you are doing
> >(and I am not familiar with the distr package), but the large-sample
> >sampling variances of dppc and dppt are:
> >
> >vc = 1/nc + dppc^2/(2*nc)
> >
> >and
> >
> >vt = 1/nt + dppt^2/(2*nt)
> >
> >(or to be precise, these are the estimates of the sampling variances,
> >since dppc and dppt have been plugged in for the unknown true values).
> >
> >Hence, the SE of dppt - dppc is simply:
> >
> >sqrt(vt + vc)
> >
> >For the "example use" data, this is:
> >
> >sqrt((1/40 + 0.2^2 / (2*40)) + (1/40 + 0.4^2 / (2*40)))
> >
> >which yields 0.2291288.
> >
> >Running your code yields 0.2293698. The former is a large-sample
> >approximation while you seem to be using an 'exact' approach, so they are
> >not expected to coincide, but should be close, which they are.
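> >
> >As an illustrative sketch, this could be wrapped up as follows (the
> >argument names simply mirror your notation):
> >
> >se_dif <- function(dppc, dppt, nc, nt) {
> >  vc <- 1/nc + dppc^2/(2*nc)  # large-sample sampling variance of dppc
> >  vt <- 1/nt + dppt^2/(2*nt)  # large-sample sampling variance of dppt
> >  sqrt(vt + vc)               # SE of dppt - dppc
> >}
> >se_dif(dppc = 0.2, dppt = 0.4, nc = 40, nt = 40)  # 0.2291288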
> >
> >Best,
> >Wolfgang
> >
> >>-----Original Message-----
> >>From: Simon Harmel [mailto:sim.harmel using gmail.com]
> >>Sent: Tuesday, 11 August, 2020 14:59
> >>To: R meta
> >>Cc: Viechtbauer, Wolfgang (SP)
> >>Subject: Standard Error of an effect size for use in longitudinal
> >>meta-analysis
> >>
> >>Dear All,
> >>
> >>Suppose I know that the likelihood function for an estimate of effect size
> >>(called `dppc`) measuring the change in a "control" group from pre-test to
> >>post-test in the R language is given by:
> >>
> >>    like1 <- function(x) dt(dppc*sqrt(nc), df = nc - 1, ncp = x*sqrt(nc))
> >>
> >>where `dppc` is the observed estimate of effect size, and `nc` is the
> >>"control" group's sample size.
> >>
> >>Similarly, the likelihood function for an estimate of effect size (called
> >>`dppt`) measuring the change in a "treatment" group from pre-test to
> >>post-test in the `R` language is given by:
> >>
> >>    like2 <- function(x) dt(dppt*sqrt(nt), df = nt - 1, ncp = x*sqrt(nt))
> >>
> >>where `dppt` is the observed estimate of effect size, and `nt` is the
> >>"treatment" group's sample size.
> >>
> >>>>>"Question:" Is there any way to find the "Standard Error (SE)" of the
> >>`d_dif = dppt - dppc` (in `R`)?
> >>
> >>Below, I tried to first get the likelihood function of `d_dif` and then
> >>get the *Standard Deviation* of that likelihood function. In a sense, I
> >>assumed I have a Bayesian problem with a "flat prior" and thus "SE" is the
> >>standard deviation of the likelihood of `d_dif`.
> >>
> >>>>> But I am not sure if my work below is at least approximately correct?
> >>-- Thank you, Simon
> >>
> >>    library(distr)
> >>
> >>    d_dif <- Vectorize(function(dppc, dppt, nc, nt) {
> >>
> >>      ## likelihoods of dppc and dppt (based on the noncentral t-distribution)
> >>      like1 <- function(x) dt(dppc*sqrt(nc), df = nc - 1, ncp = x*sqrt(nc))
> >>      like2 <- function(x) dt(dppt*sqrt(nt), df = nt - 1, ncp = x*sqrt(nt))
> >>
> >>      d1 <- distr::AbscontDistribution(d = like1, low1 = -15, up1 = 15,
> >>                                       withStand = TRUE)
> >>      d2 <- distr::AbscontDistribution(d = like2, low1 = -15, up1 = 15,
> >>                                       withStand = TRUE)
> >>
> >>      ## likelihood of the difference, i.e., `d_dif`
> >>      like.dif <- function(x) distr::d(d2 - d1)(x)
> >>
> >>      Mean <- integrate(function(x) x * like.dif(x), -Inf, Inf)[[1]]
> >>      SE   <- sqrt(integrate(function(x) x^2 * like.dif(x), -Inf, Inf)[[1]] - Mean^2)
> >>
> >>      return(c(SE = SE))
> >>    })
> >>
> >>    # EXAMPLE OF USE:
> >>    d_dif(dppc = .2, dppt = .4, nc = 40, nt = 40)
>
