[R-meta] How to deal with "dependent" Effect sizes?

James Pustejovsky jepusto at gmail.com
Tue Feb 27 16:34:13 CET 2018


Angeline,

There are several different types of dependence at work here. The main issue
you raised has to do with sampling dependence. For example, the effects
from studies 1, 2, and 3 are dependent because they are estimated from a
common sample of individuals. Dealing with this dependence is my step (1)
from the previous email. Basically, I suggest making an assumption about
the dependence, such as assuming that the various outcomes used in studies
1, 2, and 3 have inter-correlations of 0.5 or 0.6 or something like that.
This assumption leads to a block-diagonal variance-covariance matrix, where
studies 1, 2, and 3 form a block. See the linked blog post for an example
of how to calculate this matrix in R. This matrix becomes the "V" argument
in rma.mv.
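
For instance, here is a minimal sketch using impute_covariance_matrix() from
the clubSandwich package (dat, vi, and sample_id are hypothetical names for
your data frame, the sampling variances, and the sample identifier):

    library(clubSandwich)
    # assume a correlation of 0.6 among effect sizes from the same sample;
    # the result is a block-diagonal covariance structure (one block per
    # sample) that can be passed as the V argument of rma.mv()
    V_list <- impute_covariance_matrix(vi = dat$vi,
                                       cluster = dat$sample_id,
                                       r = 0.6)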

Next is the question of what sort of random effects to put in the model.
Here, I suggest starting by including a random effect for every *sample*.
So studies 1, 2, and 3 would share a common random effect, while study 4
and study 5 would each have their own random effect. These random effects
allow for variation in the true effect size parameters. There would not be
any random slopes in the basic model (because there are no
covariates/predictor variables). Beyond this, you might consider including
an additional random effect for every paper or laboratory. This is worth
considering if the studies that appear together in the same paper tend to
share common operational procedures, treatment manipulations, or the like,
which would lead us to expect that the true effect size parameters for
samples in the same paper would be more similar to each other than to those
from different papers.
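
In metafor syntax, a minimal sketch of both models might look like the
following (sample_id and paper_id are hypothetical identifiers, and V_list
is the "working" covariance matrix constructed in step 1):

    library(metafor)
    # basic model: one random effect per sample, so studies 1, 2, and 3
    # share a random effect while studies 4 and 5 each get their own
    res <- rma.mv(yi, V = V_list, random = ~ 1 | sample_id, data = dat)
    # extended model: samples nested within papers, adding a paper-level
    # random effect for samples reported in the same paper
    res2 <- rma.mv(yi, V = V_list, random = ~ 1 | paper_id/sample_id,
                   data = dat)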

James

On Mon, Feb 26, 2018 at 4:50 PM, Angeline Tsui <angelinetsui at gmail.com>
wrote:

> Dear James,
>
> Thank you very much for your suggestions. For point 2, it looks like you
> are suggesting that I run a hierarchical regression model in which I
> capture dependence across repeated samples by allowing the intercepts (in
> this case, effect sizes) to vary as random effects? And there are no random
> slopes in this model? Am I correct?
>
> But I think I do not understand how to incorporate "the dependence" here,
> because some samples in the study are dependent whereas other samples are
> independent. For example, suppose there are 5 studies in a paper: Studies
> 1, 2, and 3 test the same group of participants, so they are dependent on
> each other. In contrast, Studies 4 and 5 are independent of each other
> because they test different groups of participants; Studies 4 and 5 are
> also not dependent on Studies 1, 2, and 3. In this case, how can I capture
> the dependence?
>
> Sorry for asking more questions and I hope you can give me some directions
> here.
>
> Many thanks,
> Angeline
>
> On Mon, Feb 26, 2018 at 1:33 PM, James Pustejovsky <jepusto at gmail.com>
> wrote:
>
>> Angeline,
>>
>> My generic suggestion would be to do something like the following:
>>
>> 1. Either find information or make an assumption about the degree of
>> dependence among the effect sizes from the same sample, and then use this
>> to construct a "working" variance-covariance matrix for the effect size
>> estimates (see here for more information:
>> http://jepusto.github.io/imputing-covariance-matrices-for-multi-variate-meta-analysis).
>> 2. Use rma.mv to estimate the overall average ES and any
>> meta-regressions of interest. In rma.mv, you should definitely include a
>> random effect for each sample. You might also want to examine whether there
>> is further dependence among samples nested within studies, by including a
>> random effect for each study.
>> 3. Once you've estimated the model with rma.mv, use the functions
>> mentioned above to compute robust variance estimates (RVE), clustering at
>> the level of studies. Using RVE will ensure that the standard errors,
>> hypothesis tests, and CIs for the overall average effect (and/or
>> meta-regression coefficient estimates) are robust to the possibility that
>> the "working" variance-covariance matrix is inaccurate.
>>
>> James
>>
>> On Mon, Feb 26, 2018 at 11:29 AM, Angeline Tsui <angelinetsui at gmail.com>
>> wrote:
>>
>>> Dear James and Wolfgang,
>>>
>>> Thank you so much for your prompt reply. In this meta-analysis, I am
>>> talking about "Cohen's d" for my effect sizes. I have a follow-up question
>>> and I wonder if you can give me some directions:
>>>
>>> James correctly understood the data structure of my meta-analysis.
>>> Indeed, I expect at least 20 to 30 studies in total (maybe more, but I am
>>> not sure yet because I need to contact authors for missing information to
>>> estimate the ES). The problem is that some papers reported several samples
>>> that are dependent on each other (i.e., they tested the same group of
>>> participants), whereas other papers report studies that are totally
>>> independent (i.e., testing totally different groups of participants).
>>> Thus, my concern is how to run a meta-regression (for example, a
>>> random-effects model to estimate the average ES) when some ES in the
>>> dataset are dependent on each other whereas others are independent. Should
>>> I run two meta-regression models: one for the dependent ES only and the
>>> other for the independent ES only? But I really want to combine all
>>> studies together to get a sense of the average ES across all studies.
>>> Also, I am planning to run a moderator analysis to identify how
>>> experimental factors explain variability across studies. So it would be
>>> most useful if I could run the meta-regression and moderator analysis
>>> using the whole data set.
>>>
>>> Please share your thoughts with me.
>>>
>>> Thanks again,
>>> Angeline
>>>
>>> On Mon, Feb 26, 2018 at 12:19 PM, James Pustejovsky <jepusto at gmail.com>
>>> wrote:
>>>
>>>> I interpreted Angeline's original message as describing the data
>>>> structure for one of the papers included in the meta-analysis, but I assume
>>>> that the meta-analysis includes more than a single paper with three
>>>> samples. Angeline, do you know (yet) the total number of papers from which
>>>> you draw effect size estimates? And the number of distinct samples reported
>>>> in those papers?
>>>>
>>>> Incidentally, some colleagues and I have been looking at the techniques
>>>> that have been used in practice to conduct meta-analyses with dependent
>>>> effect sizes (across several different journals in psychology, education,
>>>> and medicine). Along the way, we're noting a number of ways in which the
>>>> reporting of such studies could be improved. One basic thing that we'd love
>>>> to see consistently reported is the total number of studies, the total
>>>> number of (independent) samples, and the total number of effect size
>>>> estimates (preferably also the range) after all inclusion/exclusion
>>>> criteria have been applied. For instance, fill in the blank:
>>>>
>>>> The final sample consisted of XX effect size estimates, drawn from XX
>>>>> distinct samples, reported in XX papers/manuscripts. Each paper reported
>>>>> results from between 1 and XX samples (median = XX) and contributed between
>>>>> 1 and XX effect size estimates (median = XX).
>>>>
>>>>
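>>>> As a sketch, those blanks can be computed from a data frame with one row
>>>> per effect size estimate (paper_id and sample_id are hypothetical column
>>>> names):
>>>>
>>>>     # total numbers of effect size estimates, samples, and papers
>>>>     nrow(dat)
>>>>     length(unique(dat$sample_id))
>>>>     length(unique(dat$paper_id))
>>>>     # samples contributed per paper: range and median
>>>>     samples_per_paper <- tapply(dat$sample_id, dat$paper_id,
>>>>                                 function(x) length(unique(x)))
>>>>     summary(samples_per_paper)
>>>>     # effect size estimates contributed per paper: range and median
>>>>     summary(as.vector(table(dat$paper_id)))
>>>>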
>>>> On Mon, Feb 26, 2018 at 10:55 AM, Viechtbauer Wolfgang (SP) <
>>>> wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>>>>
>>>>> For cluster-robust inference methods, there is the robust() function
>>>>> in metafor. James' clubSandwich package
>>>>> (https://cran.r-project.org/package=clubSandwich) also works nicely
>>>>> together with metafor. However,
>>>>> generally speaking, these methods work *asymptotically*. clubSandwich
>>>>> includes some small-sample corrections, but I doubt that James would
>>>>> advocate their use in such a small k setting. So I don't think
>>>>> cluster-robust inference methods are an appropriate way to handle the
>>>>> dependency here.
>>>>>
>>>>> What kind of 'effect sizes' are we talking about here anyway?
>>>>>
>>>>> Best,
>>>>> Wolfgang
>>>>>
>>>>> >-----Original Message-----
>>>>> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
>>>>> >On Behalf Of Angeline Tsui
>>>>> >Sent: Monday, 26 February, 2018 17:27
>>>>> >To: Mark White
>>>>> >Cc: r-sig-meta-analysis at r-project.org
>>>>> >Subject: Re: [R-meta] How to deal with "dependent" Effect sizes?
>>>>> >
>>>>> >Hello Mark,
>>>>> >
>>>>> >Thanks for sharing your manuscript with me. I will take a look.
>>>>> >
>>>>> >But, if anyone knows how to deal with dependent ES using metafor,
>>>>> please
>>>>> >let me know.
>>>>> >
>>>>> >Best,
>>>>> >Angeline
>>>>> >
>>>>> >On Mon, Feb 26, 2018 at 10:26 AM, Mark White <markhwhiteii at gmail.com>
>>>>> >wrote:
>>>>> >
>>>>> >> I did a meta-analysis that dealt with a lot of studies with dependent
>>>>> >> variables at the participant level. I got a great deal of help from this
>>>>> >> group (and others), and I settled eventually on robust variance
>>>>> >> estimation. See pages 21 to 23 here
>>>>> >> (https://github.com/markhwhiteii/prej-beh-meta/blob/master/docs/manuscript.pdf)
>>>>> >> on how I came to that decision and some great references for using their
>>>>> >> robumeta package. I'm sure there is a way to do this in metafor, as well.
>>>>> >>
>>>>> >> On Mon, Feb 26, 2018 at 10:08 AM, Angeline Tsui
>>>>> >> <angelinetsui at gmail.com> wrote:
>>>>> >>
>>>>> >>> Hello all,
>>>>> >>>
>>>>> >>> I am working on a meta-analysis that may contain dependent effect
>>>>> >>> sizes. For example, there are five studies in a paper. However,
>>>>> >>> studies 1, 2, and 3 tested the same group of participants, whereas
>>>>> >>> studies 4 and 5 tested different groups of participants. This means
>>>>> >>> that the effect sizes in studies 1, 2, and 3 are dependent on each
>>>>> >>> other, whereas studies 4 and 5 are independent of each other. In this
>>>>> >>> case, how should I incorporate these studies in a meta-analysis?
>>>>> >>> Specifically, my concern is that if I put all five studies in a
>>>>> >>> meta-regression, then I am not ensuring that each effect size is
>>>>> >>> independent of the others.
>>>>> >>>
>>>>> >>> Thanks,
>>>>> >>> Angeline
>>>>> >>>
>>>>> >>> --
>>>>> >>> Best Regards,
>>>>> >>> Angeline
>>>>>
>>>>> _______________________________________________
>>>>> R-sig-meta-analysis mailing list
>>>>> R-sig-meta-analysis at r-project.org
>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Angeline
>>>
>>
>>
>
>
> --
> Best Regards,
> Angeline
>
