# [R-meta] ES and meta-analysis for single-case studies

James Pustejovsky jepusto at gmail.com
Wed Jul 21 16:58:37 CEST 2021

```
Glancing through that paper, it seems
Zcc = (x* - x-bar-C) / s-C,
where
x* is the score for the single subject, x-bar-C is the mean of the control
sample, and s-C is the SD of the control sample. In their earlier paper,
they give the sampling variance of the numerator as
Var(x* - x-bar-C) = s-C^2 * (1 + 1 / n-C),
where n-C is the control group sample size. The variance of Zcc would
therefore be approximately
Var(Zcc) = (1 + 1 / n-C) + Zcc^2 / [2 * (n-C - 1)]

Conceptually, this doesn't seem unreasonable to me. It's a standardized
mean difference, using the control sample to estimate the scale of the
outcome. If the control samples used in the single-case-control designs are
similar to the control groups from classical studies, then using s-C as the
denominator of the effect size seems reasonable enough.

On Wed, Jul 21, 2021 at 9:40 AM Filippo Gambarota <
filippo.gambarota at gmail.com> wrote:

> Thank you James, that's a great suggestion. Basically, my main doubt was
> about calculating an appropriate effect size and variance for that specific
> situation. Of course, putting very different research designs into the same
> meta-analysis could be very complicated. For example, Crawford et al.
> (2010; Cognitive Neuropsychology) proposed a measure called Zcc as an
> effect size index, but to me it does not make sense for a meta-analytic
> approach. It is basically a one-sample Cohen's d where the "population" value
> is the single subject's score and the group mean is the controls' mean. I
> don't know if it is possible to calculate an appropriate effect size in that
> situation.
>
> Filippo
>
> On Wed, 21 Jul 2021 at 17:12, James Pustejovsky <jepusto at gmail.com> wrote:
>
>> Hi Filippo,
>>
>> limitations of evidence from these single-case-control designs relative to
>> the evidence from classical control-patient comparisons. Without
>> understanding the substance of the studies you're looking at, I am not in a
>>
>> That said, one strategy that meta-analysts often use in these sorts of
>> situations is to investigate design differences empirically. That is: go
>> ahead and calculate effect size estimates across both types of designs,
>> then use sub-group analysis (or meta-regression) to look at differences in
>> the distribution of effects between the two types of study designs.
>>
>> James
>>
>> On Tue, Jul 20, 2021 at 11:09 AM Filippo Gambarota <
>> filippo.gambarota at gmail.com> wrote:
>>
>>> Hello!
>>> I would like to perform a meta-analysis in which a lot of studies report a
>>> single-subject analysis compared to a control group (this is very common
>>> in the neuropsychological literature). I've found some literature
>>> (e.g., Crawford-Howell, 1998) where a t-test and the corresponding effect
>>> size are proposed for that kind of analysis; however, I'm not totally sure
>>> whether it's possible to compare classical control-patient studies with
>>> this case-control design. Do you have some suggestions?
>>> Thanks!
>>> Filippo
>>>
>>>
>>> _______________________________________________
>>> R-sig-meta-analysis mailing list
>>> R-sig-meta-analysis at r-project.org
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>>
>>
>
> --
> *Filippo Gambarota*
> PhD Student - University of Padova
> Department of Developmental and Social Psychology
> Website: filippogambarota.netlify.app
> Research Group: Colab <http://colab.psy.unipd.it/>   Psicostat
> <https://psicostat.dpss.psy.unipd.it/>
>


```
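The Zcc computation and its approximate sampling variance described in the reply above can be sketched in R. This is a minimal illustration with made-up numbers; the variable names are my own, not taken from Crawford et al.

```r
# Hypothetical data: one single-case score and a small control sample
x_star  <- 12                                   # score for the single subject (illustrative)
control <- c(18, 21, 17, 20, 19, 22, 18, 20)    # control-group scores (illustrative)

n_C    <- length(control)   # control sample size
xbar_C <- mean(control)     # control mean
s_C    <- sd(control)       # control SD (denominator n_C - 1)

# Zcc: the case's score standardized against the control sample
Zcc <- (x_star - xbar_C) / s_C

# Approximate sampling variance of Zcc, per the formula in the reply:
# Var(Zcc) = (1 + 1/n_C) + Zcc^2 / (2 * (n_C - 1))
V_Zcc <- (1 + 1 / n_C) + Zcc^2 / (2 * (n_C - 1))

Zcc
V_Zcc
```

With Zcc and V_Zcc in hand for each study, the estimates could in principle be passed to a standard meta-analytic routine (e.g., `metafor::rma(yi = Zcc, vi = V_Zcc)`), alongside conventional standardized mean differences, with study design as a moderator as suggested earlier in the thread.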