[R-meta] Studies with more than one control group

James Pustejovsky jepusto using gmail.com
Thu Jun 24 23:43:52 CEST 2021


The random effect for controlID is capturing any heterogeneity in the
effect sizes across control groups nested within studies, *above and beyond
heterogeneity explained by covariates.* Thus, if you include a covariate to
distinguish among types of control groups, and the differences between
types of control groups are consistent across studies, then the covariate
might explain all (or nearly all) of the variation at that level, which
would obviate the purpose of including the random effect at that level.
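
To make the point concrete, here is a minimal sketch using metafor's
rma.mv(); the data frame dat and the variable names (yi, vi, control_type)
are hypothetical stand-ins for your own data, not anything from the thread.

library(metafor)

# dat is assumed to contain: yi (SMD), vi (sampling variance), studyID,
# controlID (unique within study), and control_type (e.g.,
# business-as-usual vs. waitlist vs. active control).

# Nested random effects only: the controlID-level variance component
# absorbs any systematic differences between control groups within studies.
fit1 <- rma.mv(yi, vi,
               random = ~ 1 | studyID / controlID,
               data = dat)

# Adding a covariate for control-group type: if differences between control
# types are consistent across studies, the covariate soaks up that
# variation and the controlID-level component shrinks toward zero.
fit2 <- rma.mv(yi, vi,
               mods = ~ control_type,
               random = ~ 1 | studyID / controlID,
               data = dat)

# Compare the estimated variance components (sigma^2) across the two fits.
fit1$sigma2
fit2$sigma2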

On Thu, Jun 24, 2021 at 9:56 AM Jack Solomon <kj.jsolomon using gmail.com> wrote:

> Thank you James. On my question 3, I was implicitly referring to my
> previous question (a previous post titled: Studies with independent
> samples) regarding the fact that if I decide to drop 'sampleID', then I
> need to change the coding of the 'studyID' column (i.e., each sample
> should then be coded as an independent study). So, in my question 3, I was
> really asking to confirm that, in the case of 'controlID', removing it
> doesn't require changing the coding of any other columns in my data.
>
> Regarding adding 'controlID' as a random effect, you said: "... an
> additional random effect for controlID will depend on how many studies
> include multiple control groups and whether the model includes a covariate
> to distinguish among types of control groups (e.g., business-as-usual
> versus waitlist versus active control group)."
>
> I understand that the number of studies with multiple control groups is
> important in whether to add a random effect or not. But why is having "a
> covariate to distinguish among types of control groups" important in
> deciding whether to add a random effect or not?
>
> Thanks, Jack
>
> On Thu, Jun 24, 2021 at 9:17 AM James Pustejovsky <jepusto using gmail.com>
> wrote:
>
>> Hi Jack,
>>
>> Responses inline below.
>>
>> James
>>
>>
>>> I have come across a couple of primary studies in my meta-analytic pool
>>> that have used two comparison/control groups (as the definition of
>>> 'control' has been debated in the literature I'm meta-analyzing).
>>>
>>> (1) Given that, should I create an additional column ('control') to
>>> distinguish between effect sizes (SMDs in this case) that have been
>>> obtained by comparing the treated groups to control 1 vs. control 2 (see
>>> below)?
>>>
>>>
>> Yes. Along the same lines as my response to your earlier question, it
>> seems prudent to include ID variables like this in order to describe the
>> structure of the included studies.
>>
>>
>>> (2) If yes, then does the addition of a 'control' column call for the
>>> addition of a random effect for 'control' of the form "~ 1 |
>>> studyID/controlID" (to be empirically tested)?
>>>
>>>
>> I expect you will find differences of opinion here. Pragmatically, the
>> feasibility of estimating a model with an additional random effect for
>> controlID will depend on how many studies include multiple control groups
>> and whether the model includes a covariate to distinguish among types of
>> control groups (e.g., business-as-usual versus waitlist versus active
>> control group).
>>
>> At a conceptual level, omitting random effects for controlID leads to
>> essentially the same results as averaging the ES across both control
>> groups. If averaging like this makes conceptual sense, then omitting the
>> random effects might be reasonable.
>>
>>
>>> (3) If I later decide to drop controlID from my dataset, I think I can
>>> still keep all effect sizes from both control groups intact without any
>>> changes to my coding scheme, right?
>>>
>>
>> I don't understand what your concern is here. Why not just keep
>> controlID in your dataset as a descriptor, even if it doesn't get used in
>> the model?
>>
>
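
For the "averaging" point in the quoted reply above, a rough sketch (again
with assumed variable names, and assuming dat was created with escalc() so
that metafor's aggregate() method applies). This is an illustration of the
approximate equivalence, not an exact one.

library(metafor)

# Model without a controlID random effect
fit_noctrl <- rma.mv(yi, vi,
                     random = ~ 1 | studyID,
                     data = dat)

# Roughly comparable: average the SMDs across control groups within each
# study first, then fit a standard random-effects model. rho is an assumed
# correlation between effect sizes that share a treatment group.
dat_agg <- aggregate(dat, cluster = studyID, rho = 0.6)
fit_agg <- rma(yi, vi, data = dat_agg)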
