[R-meta] SMD from three-level nested design (raw data available)
Fabian Schellhaas
fabian.schellhaas at yale.edu
Thu Nov 29 22:41:40 CET 2018
Dear all,
We had a couple of related questions, so I will add to this thread. If
preferred, I'd be happy to start a new thread instead.
1. Another study, for which we obtained the raw data from the authors, has
a nested data structure with two levels – individual participants nested in
clusters. What complicates matters here is that the construct of interest,
which is continuous, was operationalized as a dichotomous measure. In the
non-nested case, we would just compute the transformed log-odds ratio
(e.g., escalc(measure = "OR2DN") in metafor). However, since the data are
nested, we can fit a generalized linear mixed model (GLMM) to predict this
dichotomous outcome. How would we then extract an effect size analogous to
the transformed log-odds ratio from this model? The Hedges chapter and
papers only describe the case of a continuous outcome variable, so we're
not sure about the correct approach.
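To make the question concrete, here is the non-nested computation we have in mind, plus one candidate GLMM-based analogue (assuming lme4; the data object and variable names are illustrative, and whether the normal-approximation conversion can simply be applied to the conditional GLMM coefficient is precisely what we're unsure about):

```r
library(metafor)
library(lme4)

## Non-nested case: log odds ratio transformed to an SMD
## (normal-distribution method), from illustrative 2x2 counts
dat <- escalc(measure = "OR2DN", ai = 30, bi = 20, ci = 18, di = 32)

## Nested case (sketch): logistic GLMM with a random intercept
## for cluster; the treatment coefficient is a (conditional)
## log odds ratio
fit <- glmer(outcome ~ treatment + (1 | cluster),
             data = mydata, family = binomial)
b <- fixef(fit)["treatment"]

## Same conversion that OR2DN uses: d = logOR * sqrt(3) / pi
d <- as.numeric(b) * sqrt(3) / pi
```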
2. We also wondered, more generally, how effect-size calculation from
nested data with a dichotomous outcome would be handled in a meta-analysis
of correlations. In the non-nested case, we could compute the biserial
correlation when the predictor is continuous and the outcome is
dichotomized (e.g., Jacobs & Viechtbauer, 2017). However, how would we
extract an effect size analogous to the biserial correlation from a GLMM,
which could then be combined with correlations from single-level data?
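For the non-nested case we would use something like escalc(measure = "RBIS"). For the nested case, one candidate we've considered (but are not at all sure about) is to standardize on the latent logistic scale, using pi^2/3 as the level-1 residual variance; again, lme4 and all variable names are illustrative:

```r
library(lme4)

## Sketch: dichotomized outcome y, continuous predictor x,
## participants nested in clusters
fit <- glmer(y ~ x + (1 | cluster), data = mydata, family = binomial)
b     <- fixef(fit)["x"]
sig2u <- as.numeric(VarCorr(fit)$cluster)
sdx   <- sd(mydata$x)

## Candidate "latent-scale" correlation, treating pi^2/3 as the
## residual variance of the underlying logistic distribution
r_latent <- (b * sdx) / sqrt(b^2 * sdx^2 + sig2u + pi^2 / 3)
```

Whether r_latent is actually comparable to a biserial correlation from single-level data is exactly what we'd like to know.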
Many thanks for any pointers!
Fabian
---
Fabian M. H. Schellhaas | Ph.D. Candidate | Department of Psychology | Yale
University
On Tue, Nov 6, 2018 at 11:55 AM James Pustejovsky <jepusto using gmail.com> wrote:
> I don't know of any further work on this beyond the Hedges chapter and the
> two papers on which it is based. Anyone else have pointers?
>
> On Tue, Nov 6, 2018 at 10:49 AM Fabian Schellhaas <
> fabian.schellhaas using yale.edu> wrote:
>
>> Dear James,
>>
>> Thanks for these clarifications, this helps a lot. Scenario 1 does indeed
>> apply here, so I will add the within-subjects variance component to the
>> denominator. Is there any further reading you would recommend on this topic?
>>
>> Many thanks,
>> Fabian
>>
>> ---
>> Fabian M. H. Schellhaas | Ph.D. Candidate | Department of Psychology |
>> Yale University
>>
>>
>> On Tue, Nov 6, 2018 at 11:30 AM James Pustejovsky <jepusto using gmail.com>
>> wrote:
>>
>>> Fabian,
>>>
>>> The overarching goal in this context is to choose an effect size
>>> parameter that is as comparable as possible to the other studies in the
>>> synthesis. Three scenarios:
>>>
>>> 1. If those other studies are mostly individually randomized experiments
>>> conducted across multiple contexts, but without the repeated measures
>>> component, then I would argue that d_T (the average effect, standardized
>>> based on the total variance of the outcome) might be more appropriate. The
>>> reason is that the distribution of observed outcomes will comprise
>>> both between-person _and within-person (between-trial)_ variation. If
>>> participants respond to an instrument only once, then there is still some
>>> unreliability in the resulting scores, so the corresponding variance
>>> component should be included in the denominator.
>>>
>>> 2. If the other studies are mostly individually randomized experiments
>>> conducted across narrow contexts, then it might make sense to use d_WS (eq.
>>> 18.35 in Hedges, 2009), which excludes the between-group variation from the
>>> denominator of the effect size. The reasoning here is that if the other
>>> studies use samples that would end up as a single group in the
>>> cluster-randomized trial, then the distribution of observed outcomes in
>>> those studies will not include the between-group variation. For instance,
>>> say that study A randomized at the school level, whereas studies B, C,
>>> D,... used samples from a single school each. Then the latter studies won't
>>> have between-school variation in the outcome, and we would exclude the
>>> between-school component from study A in order to maintain comparability
>>> with the other studies.
>>>
>>> 3. If the other studies mostly DID use repeated measures, but averaged
>>> the scores together before analysis, then the distribution of observed
>>> outcomes in those studies will not include the within-participant variation
>>> (or actually it will but to a much-reduced extent). In this situation, it
>>> would make sense to exclude the within-participant variance component from
>>> the denominator of the effect size (and thus include only the
>>> between-participant or the sum of the between-participant and between-group
>>> variance components, depending on considerations analogous to the above).
>>> But note that Hedges (2009) sees these effect sizes as less likely to be of
>>> general interest (see notes on p. 348).
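>>> In terms of the variance components, the three denominators would be
>>> (purely illustrative numbers, not from any study in the thread):

```r
## Illustrative fixed effect and variance components
b      <- 5.0    # treatment effect
sig2_B <- 200    # between-group
sig2_P <- 150    # between-participant (within group)
sig2_W <- 100    # within-participant (between-trial)

## Scenario 1: standardize on the total variance (d_T)
d_T  <- b / sqrt(sig2_B + sig2_P + sig2_W)

## Scenario 2: exclude between-group variation (d_WS)
d_WS <- b / sqrt(sig2_P + sig2_W)

## Scenario 3: exclude within-participant variation
d_3  <- b / sqrt(sig2_B + sig2_P)
```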
>>>
>>> James
>>>
>>> On Mon, Nov 5, 2018 at 5:31 PM Fabian Schellhaas <
>>> fabian.schellhaas using yale.edu> wrote:
>>>
>>>> Dear James,
>>>>
>>>> Thanks so much for your reply, this is really helpful and made me think
>>>> carefully about the data I'm dealing with. The effect I'm trying to compute
>>>> is defined by Hedges (2009, p. 348) as d_BC, i.e. the treatment effect at
>>>> level 2 of a 3-level design. In "my" dataset, the unit of measurement is
>>>> the allocation decision (level 1), and the unit of randomization is the
>>>> group (level 3). The effect I'm after, however, is the treatment effect at
>>>> the level of the participant (level 2).
>>>>
>>>> Unfortunately, Hedges (2009) does not provide the equation for the
>>>> computation of d_BC using fixed-effect estimates and variance components.
>>>> However, in the context of a 2-level model, Hedges (2009) defines the
>>>> between-cluster effect as
>>>>
>>>> d_B = b / sig_B [Eq. 18.17]
>>>>
>>>> where b is the estimated fixed effect and sig_B^2 is the
>>>> between-cluster variance component. Note that the within-cluster variance
>>>> component is omitted from the denominator. By contrast, the total treatment
>>>> effect is defined as
>>>>
>>>> d_T = b / sqrt(sig_B^2 + sig_W^2) [Eq. 18.23]
>>>>
>>>> where b is again the estimated fixed effect, sig_B^2 is the
>>>> between-cluster variance component, and sig_W^2 is the within-cluster
>>>> variance component. I tried to apply this logic to the study I'm coding, in
>>>> which the effect size of interest is not the total treatment effect, but
>>>> rather the treatment effect at the level of individual participants (level
>>>> 2). As such, I omitted sig_W from the denominator. My understanding is that
>>>> if I add the repeated-measures variance component to the denominator, as
>>>> you suggested, I would get the treatment effect at the level of the
>>>> allocation decision (as per Hedges, 2009, Eq. 18.55). And wouldn't such an
>>>> effect size be incomparable to the other SMDs in the meta-analysis, which
>>>> represent a treatment effect at the level of participants?
>>>>
>>>> Many thanks for your help,
>>>> Fabian
>>>>
>>>> ---
>>>> Reference:
>>>> Hedges, L. V. (2009). Effect sizes in nested designs. In Cooper, H.,
>>>> Hedges, L. V., & Valentine, J. C. (Eds.), The Handbook of Research
>>>> Synthesis and Meta-Analysis (pp. 337-355). New York: Russell Sage
>>>> Foundation.
>>>>
>>>>
>>>> On Sun, Nov 4, 2018 at 10:49 PM James Pustejovsky <jepusto using gmail.com>
>>>> wrote:
>>>>
>>>>> Fabian,
>>>>>
>>>>> Your calculations make sense to me for a two-level model (participants
>>>>> nested within groups), but you've described a three-level model. What
>>>>> happened to the other level (repeated measures, nested within
>>>>> participants)? If you have a positive variance component estimate for it,
>>>>> then I think it would make sense to include it in the denominator of the
>>>>> effect size. If X is the estimated variance of the repeated measures nested
>>>>> within participant, then take
>>>>>
>>>>> d = 6.95 / sqrt(X + 143.64 + 217.17)
>>>>>
>>>>> James
>>>>>
>>>>> On Sat, Nov 3, 2018 at 3:22 PM Fabian Schellhaas <
>>>>> fabian.schellhaas using yale.edu> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I have a question about computing a standardized mean difference (SMD)
>>>>>> from a primary study with a three-level nested design. The study in
>>>>>> question randomly assigned groups of participants to a treatment or
>>>>>> control condition, and then measured individual participants' resource
>>>>>> allocations. While some respondents made only one such decision, others
>>>>>> made two. As such, the data in this study have three levels: resource
>>>>>> allocation decisions, which are nested in participants, which in turn
>>>>>> are nested in groups.
>>>>>>
>>>>>> I would like to compute an effect size that reflects the
>>>>>> between-participant effect of treatment vs. control. I have the raw
>>>>>> data, which the authors luckily made available. As such, I can easily
>>>>>> fit a linear mixed model with a fixed effect for treatment vs. control,
>>>>>> and a nested random effect to account for the three-level design.
>>>>>> However, how do I extract an SMD from the fitted model that is
>>>>>> comparable to SMDs from single-level designs?
>>>>>>
>>>>>> The estimate for the fixed effect is 6.95, with an SE of 6.27. The
>>>>>> variance components of the random effects are 143.64 for participant
>>>>>> nested in group, and 217.17 for group. Based on formula 18.17 in Hedges
>>>>>> (2009), I believe I would compute *d* = 6.95/sqrt(143.64 + 217.17) =
>>>>>> 0.366. However, I would like to confirm that this is indeed the correct
>>>>>> approach before I proceed.
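>>>>>> For reference, the fit and calculation look roughly like this (assuming
>>>>>> lme4; variable names are made up):

```r
library(lme4)

## Linear mixed model: allocation decisions, participants nested in groups
fit <- lmer(allocation ~ condition + (1 | group / participant),
            data = mydata)
b <- fixef(fit)["condition"]                       # 6.95 here

vc     <- as.data.frame(VarCorr(fit))
sig2_P <- vc$vcov[vc$grp == "participant:group"]   # 143.64
sig2_B <- vc$vcov[vc$grp == "group"]               # 217.17

d <- b / sqrt(sig2_P + sig2_B)
## 6.95 / sqrt(143.64 + 217.17) = 0.366
```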
>>>>>>
>>>>>> Many thanks!
>>>>>> Fabian
>>>>>>
>>>>>> ---
>>>>>> Fabian M. H. Schellhaas | Ph.D. Candidate | Department of Psychology
>>>>>> | Yale
>>>>>> University
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> R-sig-meta-analysis mailing list
>>>>>> R-sig-meta-analysis using r-project.org
>>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>>>>>
>>>>>