[R-meta] Standard Error of an effect size for use in longitudinal meta-analysis

Simon Harmel sim.harmel at gmail.com
Tue Aug 11 23:22:05 CEST 2020


Dear Wolfgang,

Many thanks for your confirmation! I have some follow-up questions.

(1) I believe "dppc" is biased and requires a correction, and so does its
sampling variance, right?

(2) If "dppc" and "dppt" (along with their sampling variances) are each
biased and must be corrected, then, we don't need to again correct the
sampling variance of "d_dif" (i.e., dppt - dppc)?

(3) Do you see any advantages or disadvantages to using "d_dif" in
longitudinal meta-analyses of studies that have a control group?

[In terms of advantages:
 a- I think that, logistically, using "d_dif" cuts in half the number of
effect sizes that are otherwise computed from such studies,
 b- the metric of "d_dif" seems better suited to the repeated-measures
design used in the primary studies,
 c- "d_dif" seems to allow investigating (via appropriate moderators)
threats to internal validity (regression to the mean, history, maturation,
testing), say, when the primary studies did not randomly assign subjects
(i.e., nonequivalent-groups designs)

In terms of disadvantages:
 a- I think obtaining "r", the correlation between the pre- and post-tests
needed to compute "dppc" and "dppt", is a bit difficult (see the sketch
after this list),
 b- while the metric of "d_dif" seems better suited to the repeated-measures
design used in the primary studies, most applied meta-analysts are not used
to such a metric.
]
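
To make the "r" issue in (a) above concrete, here is a minimal sketch (with
made-up pre/post summary statistics, so purely illustrative) of how I would
compute "dppc" and "dppt" via change-score standardization and then the
large-sample SE of their difference, using the formulas from your reply below:

    ## standardized mean change with change-score standardization:
    ## d = (m_post - m_pre) / SD of the change scores, where the SD of the
    ## change scores is reconstructed from the pre/post SDs and their correlation r
    smc <- function(m_pre, m_post, sd_pre, sd_post, r, n) {
      sd_change <- sqrt(sd_pre^2 + sd_post^2 - 2*r*sd_pre*sd_post) # SD of change scores
      d <- (m_post - m_pre) / sd_change                            # standardized mean change
      v <- 1/n + d^2/(2*n)                                         # large-sample sampling variance
      c(d = d, v = v)
    }

    ## hypothetical summary statistics (not from a real study)
    ctl <- smc(m_pre = 10, m_post = 11, sd_pre = 4, sd_post = 4.2, r = .6, n = 40) # dppc, vc
    trt <- smc(m_pre = 10, m_post = 12, sd_pre = 4, sd_post = 4.5, r = .6, n = 40) # dppt, vt

    d_dif  <- trt["d"] - ctl["d"]        # dppt - dppc
    se_dif <- sqrt(trt["v"] + ctl["v"])  # SE of d_dif

(If I am not mistaken, escalc(measure = "SMCC") in metafor computes this type
of effect size, including a bias correction, directly from such inputs; I only
wrote it out by hand to make the formulas explicit.)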

On Tue, Aug 11, 2020 at 2:04 PM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> Dear Simon,
>
> Based on what you describe, dppc and dppt appear to be standardized mean
> changes using what I would call "change-score standardization" (i.e., (m1 -
> m2)/SDd, where SDd is the standard deviation of the change scores).
>
> I haven't really tried to figure out the details of what you are doing
> (and I am not familiar with the distr package), but the large-sample
> sampling variances of dppc and dppt are:
>
> vc = 1/nc + dppc^2/(2*nc)
>
> and
>
> vt = 1/nt + dppt^2/(2*nt)
>
> (or to be precise, these are the estimates of the sampling variances,
> since dppc and dppt have been plugged in for the unknown true values).
>
> Hence, the SE of dppt - dppc is simply:
>
> sqrt(vt + vc)
>
> For the "example use" data, this is:
>
> sqrt((1/40 + 0.2^2 / (2*40)) + (1/40 + 0.4^2 / (2*40)))
>
> which yields 0.2291288.
>
> Running your code yields 0.2293698. The former is a large-sample
> approximation while you seem to be using an 'exact' approach, so they are
> not expected to coincide, but should be close, which they are.
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: Simon Harmel [mailto:sim.harmel using gmail.com]
> >Sent: Tuesday, 11 August, 2020 14:59
> >To: R meta
> >Cc: Viechtbauer, Wolfgang (SP)
> >Subject: Standard Error of an effect size for use in longitudinal meta-analysis
> >
> >Dear All,
> >
> >Suppose I know that the likelihood function for an estimate of effect size
> >(called `dppc`) measuring the change in a "control" group from pre-test to
> >post-test in R language is given by:
> >
> >    like1 <- function(x) dt(dppc*sqrt(nc), df = nc - 1, ncp = x*sqrt(nc))
> >
> >where `dppc` is the observed estimate of effect size, and `nc` is the
> >"control" group's sample size.
> >
> >Similarly, the likelihood function for an estimate of effect size (called
> >`dppt`) measuring the change in a "treatment" group from pre-test to post-
> >test in `R` language is given by:
> >
> >    like2 <- function(x) dt(dppt*sqrt(nt), df = nt - 1, ncp = x*sqrt(nt))
> >
> >where `dppt` is the observed estimate of effect size, and `nt` is the
> >"treatment" group's sample size.
> >
> >>>>"Question:" Is there any way to find the "Standard Error (SE)" of the
> >`d_dif = dppt - dppc` (in `R`)?
> >
> >Below, I tried to first get the likelihood function of `d_dif` and then get
> >the *Standard Deviation* of that likelihood function. In a sense, I assumed
> >I have a Bayesian problem with a "flat prior" and thus "SE" is the standard
> >deviation of the likelihood of `d_dif`.
> >
> >>>> But I am not sure if my work below is at least approximately correct? --
> >Thank you, Simon
> >
> >    library(distr)
> >
> >    d_dif <- Vectorize(function(dppc, dppt, nc, nt){
> >
> >     like1 <- function(x) dt(dppc*sqrt(nc), df = nc - 1, ncp = x*sqrt(nc))
> >     like2 <- function(x) dt(dppt*sqrt(nt), df = nt - 1, ncp = x*sqrt(nt))
> >
> >      d1 <- distr::AbscontDistribution(d = like1, low1 = -15, up1 = 15, withStand = TRUE)
> >      d2 <- distr::AbscontDistribution(d = like2, low1 = -15, up1 = 15, withStand = TRUE)
> >
> >     like.dif <- function(x) distr::d(d2 - d1)(x) ## Gives likelihood of the difference, i.e., `d_dif`
> >
> >     Mean <- integrate(function(x) x*like.dif(x), -Inf, Inf)[[1]]
> >     SE <- sqrt(integrate(function(x) x^2*like.dif(x), -Inf, Inf)[[1]] - Mean^2)
> >
> >     return(c(SE = SE))
> >    })
> >
> >     # EXAMPLE OF USE:
> >     d_dif(dppc = .2, dppt = .4, nc = 40, nt = 40)
>
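
P.S. For my own notes, here is the large-sample calculation from your reply
side by side with the 'exact' approach, for the example values
(dppc = .2, dppt = .4, nc = nt = 40):

    ## large-sample approximation (formulas from your reply)
    vc <- 1/40 + 0.2^2 / (2*40)  # estimated sampling variance of dppc
    vt <- 1/40 + 0.4^2 / (2*40)  # estimated sampling variance of dppt
    sqrt(vt + vc)                # 0.2291288

    ## 'exact' approach, using the d_dif() function defined above
    d_dif(dppc = .2, dppt = .4, nc = 40, nt = 40)  # 0.2293698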
