[R-meta] Question about running a completely "within study" meta analysis
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Jan 22 21:11:46 CET 2019
Whenever multiple estimates are based on the same sample of individuals (or there is at least some overlap -- which covers the 'multiple-treatment study' case discussed here: http://www.metafor-project.org/doku.php/analyses:gleser2009#multiple-treatment_studies), the sampling errors can no longer be assumed to be independent. So this applies whether multiple estimates are obtained from the same sample because multiple outcomes were assessed, or because the same sample was assessed multiple times (or some combination thereof).
In the latter case, autoregressive structures are often appropriate for constructing the var-cov matrix of the sampling errors (and also for the random effects). See help(rma.mv) (see "For meta-analyses of studies reporting outcomes at multiple time points ...") and the two examples linked there. You will notice that this isn't entirely straightforward (a bit of an understatement).
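To make the AR(1) idea concrete, here is a minimal sketch of building such a var-cov matrix by hand for one sample assessed at three waves (the variances in vi, the wave times, and the autocorrelation phi are made-up values for illustration, not numbers from this thread):

```r
# hypothetical sampling variances of three effect sizes obtained from
# the same sample, measured at waves 1, 2, and 3
vi   <- c(0.10, 0.12, 0.08)
time <- c(1, 2, 3)
phi  <- 0.7  # assumed autocorrelation between adjacent waves

# AR(1) var-cov matrix: cov(e_i, e_j) = sqrt(vi_i * vi_j) * phi^|t_i - t_j|
V <- outer(sqrt(vi), sqrt(vi)) * phi^abs(outer(time, time, "-"))
V
```

A matrix like this could then be supplied to rma.mv() via its V argument in place of the usual vector of sampling variances.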
If you also have multiple outcomes, then you would get a combination of this and the Berkey et al. (1998) case. Constructing an appropriate var-cov matrix of the sampling errors is then even more challenging.
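As an illustration of that combined case, the sketch above can be extended so that the correlation between two estimates depends both on whether they share an outcome and on the time lag between waves (all numbers here, including rho for the correlation between different outcomes, are hypothetical):

```r
# hypothetical: two outcomes measured in the same sample at three waves
dat <- expand.grid(outcome = 1:2, time = 1:3)
vi  <- rep(0.10, nrow(dat))
rho <- 0.6  # assumed correlation between different outcomes
phi <- 0.8  # assumed autocorrelation across waves

# correlation: rho when the outcomes differ, damped by phi^|time lag|
R <- ifelse(outer(dat$outcome, dat$outcome, "!="), rho, 1) *
     phi^abs(outer(dat$time, dat$time, "-"))
V <- outer(sqrt(vi), sqrt(vi)) * R
```

In practice rho and phi are rarely known and would have to be guessed (with sensitivity analyses over a range of plausible values).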
I wonder though whether it would not be easier to use an appropriate mixed-effects model on the raw data directly. Essentially, you can do an 'individual participant/patient data meta-analysis' (IPDMA) here. That could even be considered the gold standard approach.
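For what it's worth, a minimal sketch of what such a mixed-effects model on the raw data could look like with lme4 (the variable names -- score, group, wave, id -- and the simulated data are purely hypothetical):

```r
library(lme4)

# simulate toy IPD: 50 children, 3 waves, random intercept per child
set.seed(123)
ipd <- expand.grid(id = 1:50, wave = 1:3)
ipd$group <- rep(0:1, length.out = 50)[ipd$id]      # randomized arm
ipd$score <- 0.3 * ipd$group +                      # treatment effect
             rnorm(50)[ipd$id] +                    # child-level effect
             rnorm(nrow(ipd))                       # residual noise

# treatment effect, allowed to vary across waves, with a random
# intercept per child to account for the repeated measurements
fit <- lmer(score ~ group * wave + (1 | id), data = ipd)
summary(fit)
```

With multiple outcomes one would typically stack them into long format and add outcome (and its interactions) to the model as well.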
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Kate Humphreys
Sent: Tuesday, 22 January, 2019 18:57
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Question about running a completely "within study" meta analysis
We would like to conduct a "within study" meta-analysis. Briefly, we have
been following a group of children who, at infancy, were randomly assigned
to high-quality foster care or to care as usual. In addition to a
baseline assessment, children were again assessed in waves at 30, 42, and
54 months, and at 8, 12, and 16 years. The assessments include a large
number of domains and a number of different types of information sources.
In other words, we want to determine the average "effect" of the
intervention across all outcomes and, of course, whether aspects of the
measurement, assessment wave, or domain might moderate these effects. Only
participants from this single study would be included, and all measurements
come from the same children.
The issue we have run into is with regard to the variance structure.
Specifically, we initially envisioned a multi-level meta-analysis similar
to the example by Konstantopoulos (2011) posted on your website, with
a structure similar to this (prior to testing moderators):
model1 <- rma.mv(yi, vi,
                 random = ~ 1 | wave/domain,
                 method = "REML",
                 data = df)
However, because we wish to estimate the average true effect across waves
and domains for the same individuals, I believe we would be violating the
assumption that the sampling errors of the effect size estimates are
independent. Therefore, we also considered the Berkey et al. (1998) example
(http://www.metafor-project.org/doku.php/analyses:berkey1998); however, it
isn't clear to us whether this would apply to our special case given that
this example deals with multiple outcomes assessed within multiple studies.
We aren't sure how we would calculate the covariances of our observed
effects when we have effect sizes from a single group of individuals nested
within waves rather than studies.
Thank you for considering this request for guidance.
Kathryn L. Humphreys, Ph.D., Ed.M.
Department of Psychology and Human Development
Vanderbilt University, Peabody College
230 Appleton Place #552
Hobbs Building 307B
Nashville, TN 37203
Director: Stress and Early Adversity (SEA) Lab
Jacobs Foundation Research Fellow, 2018-2020
Member, Vanderbilt Kennedy Center <https://vkc.mc.vanderbilt.edu/vkc/>