[R-meta] Combining studies reporting effects at different levels of analysis/aggregation

F S crpt.fs at gmail.com
Sun Oct 14 20:30:04 CEST 2018


Dear Wolfgang,
Thanks for clarifying -- I will attempt this approach then, and also
include study type as a moderator, as per your recommendation.
All the best,
Fabian

On Thu, Oct 11, 2018 at 1:20 PM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> Please always cc the mailing list when replying.
>
> Yes, you could also 'guesstimate' the ICC and use that (and then do a
> sensitivity analysis). Even if you do the correction, I would still
> recommend including study type as a moderator in the analyses.
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: F S [mailto:crpt.fs using gmail.com]
> Sent: Thursday, 11 October, 2018 18:47
> To: Viechtbauer, Wolfgang (SP)
> Subject: Re: [R-meta] Combining studies reporting effects at different
> levels of analysis/aggregation
>
> Hello Wolfgang,
>
> Thank you for your helpful answer. I'm afraid none of the studies in
> question report the ICC, so I guess a precise correction for the inflated d
> won't be possible. However, would it be sensible to instead impute a value
> for rho and perform the adjustment for the design effect using that value?
> Ideally, one would impute ICC values lifted from studies with a similar
> type of aggregation and similar measures, but I suppose one could also
> perform the correction for a range of plausible values of rho and evaluate
> the impact on the overall results via sensitivity analysis. What do you
> think?
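> 
> Roughly, this is the kind of sensitivity analysis I have in mind (just a
> sketch; 'dat' with columns yi, vi, ni, and a group_study indicator stands
> in for my actual data):
> 
> library(metafor)
> 
> fit_with_rho <- function(rho, dat) {
>   # apply the design-effect adjustment only to the aggregated studies
>   adj <- ifelse(dat$group_study == 1,
>                 sqrt((1 + (dat$ni - 1) * rho) / dat$ni), 1)
>   rma(yi = dat$yi * adj, vi = dat$vi * adj^2)
> }
> 
> # refit under a range of plausible ICC values and compare pooled estimates
> sapply(c(0.05, 0.10, 0.20, 0.30), function(r) coef(fit_with_rho(r, dat)))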
>
> Thank you very much,
> Fabian
>
> On Fri, Oct 5, 2018 at 1:17 PM Viechtbauer, Wolfgang (SP) <
> wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> Hi Fabian,
>
> I don't think you have received any responses to your question so far, so
> let me take a stab here.
>
> You did not say what kind of effect size / outcome measure you want to use
> for your meta-analysis, but if it is something like a standardized mean
> difference ('d-values'), then what you describe is definitely an issue. The
> means (i.e., the averaged individual responses within groups) will have a
> lower variance than the responses from individuals, leading to higher
> d-values in studies reporting statistics based on group-level means. That
> makes d-values from the two types of studies pretty much non-comparable. At
> the very least, you should include study type as a moderator in all of the
> analyses.
>
> If you know the ICC of the responses within groups, then one could correct
> for the inflation of the d-values based on the 'variance inflation factor'
> or 'design effect'. In essence, d-values from 'group studies' are then
> adjusted by the multiplicative factor
>
> sqrt((1+(n-1)*rho)/n),
>
> where n is the (average) group size and rho is the ICC. That should make
> the d-values from the two types of studies more directly comparable. The
> sampling variance of a d-value from a 'group study' also needs to be
> adjusted based on the square of the multiplicative factor (this ignores the
> uncertainty in the estimated value of the ICC, but ignoring sources of
> uncertainty when estimating sampling variances happens all the time).
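>
> For illustration, a minimal R sketch of this adjustment (the numbers for
> d, vd, n, and rho below are made up for the example):
>
> # hypothetical inputs for one 'group study'
> d   <- 0.80   # d-value computed from group-level means
> vd  <- 0.05   # its sampling variance
> n   <- 4      # (average) group size
> rho <- 0.15   # assumed ICC of responses within groups
>
> adj    <- sqrt((1 + (n - 1) * rho) / n)  # multiplicative adjustment factor
> d_adj  <- d * adj                        # adjusted d-value
> vd_adj <- vd * adj^2                     # adjusted sampling variance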
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:
> r-sig-meta-analysis-bounces using r-project.org] On Behalf Of F S
> Sent: Tuesday, 18 September, 2018 20:47
> To: r-sig-meta-analysis using r-project.org
> Subject: [R-meta] Combining studies reporting effects at different
> levels of analysis/aggregation
>
> I am currently working on a meta-analysis in the social sciences. All
> studies measured the relevant outcome at the level of participants, but a
> few studies aggregated at a higher level of analysis (e.g., groups) before
> statistics were computed. Can these studies be meta-analyzed together?
>
> More detail: The relevant outcome is a continuous measure, assessed at the
> level of individual participants. The majority of studies report
> statistical effects computed at the level of participants. However, in a
> number of studies, random assignment occurred not at the participant level,
> but at the level of groups (e.g., dyads, 3-person groups, classrooms).
> Although each of these studies did assess the outcome at the participant
> level, just like the other studies, statistical effects were computed at the
> group level. As such, they are different from cluster-randomized studies,
> in which randomization occurs at the group level but results are reported
> at the individual level. By contrast, the studies in question averaged
> individual responses within groups before computing effects with group as
> the unit of analysis.
>
> I'm not sure I can include these studies in my meta-analysis, but could not
> find much work on this question. Ostroff and Harrison (1999) focused
> specifically on correlations computed at different levels of analysis, and
> they make a strong case against combining ES from such studies: "the
> obtained meta-analytic ρ̂ may not be interpretable as an estimate of any
> population parameter because authors have cumulated studies in which
> samples were drawn from different levels" (p. 267).
>
> Can I include these studies reporting effects from aggregated
> observations, and if so, are there specific procedures for doing so? (I'm
> planning to use rma.mv in metafor, with cluster-robust variance estimates,
> using clubSandwich.)
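>
> A bare-bones sketch of the model I have in mind (the column names yi, vi,
> and study_id are placeholders for my actual data):
>
> library(metafor)
> library(clubSandwich)
>
> # random-effects model with effects nested within studies
> # (more levels could be added as needed)
> res <- rma.mv(yi, vi, random = ~ 1 | study_id, data = dat)
>
> # cluster-robust (CR2) standard errors and tests, clustering on study
> coef_test(res, vcov = "CR2", cluster = dat$study_id)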
>
> Many thanks!
> Fabian
>



