[R-meta] multi-level meta-analysis with dependent effect sizes - Gleser and Olkin formulae?

James Pustejovsky jepusto at gmail.com
Fri Sep 14 18:21:27 CEST 2018


Andrew,

The Gleser & Olkin formulas are indeed specific to standardized mean
differences. Suppose we have a study (as in my previous example) with two
treatment cells A and B and a control cell C. Let mA, mB, mC be the means
per cell, sA, sB, sC be the standard deviations per cell, and nA, nB, and
nC be the cell sizes. For raw mean differences, and without assuming equal
variances, the variances and covariances would be:

Var(mA - mC) = sA^2 / nA + sC^2 / nC
Var(mB - mC) = sB^2 / nB + sC^2 / nC
Cov(mA - mC, mB - mC) = sC^2 / nC
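
For a concrete (purely hypothetical) illustration, here is how those three
quantities translate into the 2x2 variance-covariance block for such a study;
the values of sA, nA, etc. below are made up:

# hypothetical cell SDs and sizes
sA <- 2.1; sB <- 1.8; sC <- 2.0
nA <- 12;  nB <- 15;  nC <- 10

# every entry shares the control-group term sC^2 / nC
V_md <- matrix(sC^2 / nC, nrow = 2, ncol = 2)
diag(V_md) <- c(sA^2 / nA + sC^2 / nC,
                sB^2 / nB + sC^2 / nC)
V_md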

For the log variability ratios, the lnVR is calculated with a small-sample
bias correction:

lnVR_AC = ln(sA / sC) + 1 / (2 * (nA - 1)) - 1 / (2 * (nC - 1))
lnVR_BC = ln(sB / sC) + 1 / (2 * (nB - 1)) - 1 / (2 * (nC - 1))

with variances (as given by Nakagawa et al., 2015):

Var(lnVR_AC) = 1 / (2 * (nA - 1)) + 1 / (2 * (nC - 1))
Var(lnVR_BC) = 1 / (2 * (nB - 1)) + 1 / (2 * (nC - 1))

and covariance:

Cov(lnVR_AC, lnVR_BC) = 1 / (2 * (nC - 1))

The covariance follows from the fact that

Var( ln(sC) ) ~= 1 / (2 * (nC - 1))

(also stated in Nakagawa et al., 2015) and the fact that covariance is
bilinear, so the terms involving the independent sA and sB drop out, leaving
Cov(lnVR_AC, lnVR_BC) = Var( ln(sC) ).
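
Using the same hypothetical sA, nA, etc. as in the sketch above, the
corresponding 2x2 block for the two log variability ratios would be built
the same way:

lnVR_AC <- log(sA / sC) + 1 / (2 * (nA - 1)) - 1 / (2 * (nC - 1))
lnVR_BC <- log(sB / sC) + 1 / (2 * (nB - 1)) - 1 / (2 * (nC - 1))

# shared control-group term 1 / (2 * (nC - 1)) in every entry
V_lnVR <- matrix(1 / (2 * (nC - 1)), nrow = 2, ncol = 2)
diag(V_lnVR) <- c(1 / (2 * (nA - 1)) + 1 / (2 * (nC - 1)),
                  1 / (2 * (nB - 1)) + 1 / (2 * (nC - 1)))
V_lnVR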

James


Nakagawa, S., Poulin, R., Mengersen, K., Reinhold, K., Engqvist, L.,
Lagisz, M., & Senior, A. M. (2015). Meta-analysis of variation: Ecological
and evolutionary applications and beyond. Methods in Ecology and Evolution,
6, 143–152.


On Fri, Sep 14, 2018 at 10:10 AM Andrew Guerin <
Andrew.Guerin using newcastle.ac.uk> wrote:

> Hi James,
>
> Thanks for your help so far. I think I have got the hang of building a
> vcov matrix based on the code from the worked example here:
> http://www.metafor-project.org/doku.php/analyses:gleser2009
>
> Just a couple of questions remain:
> - as in the example above I have used the Gleser and Olkin formula 19.19
> (p. 365 in G&O 2009) to calculate the covariance: 1/nc + d1*d2/(2*ntotal).
> However, in the book this is given as the formula for standardised mean
> differences, whereas I am using a simple raw mean difference. Does the same
> formula apply? The 1994 edition of the G&O chapter makes it sound like this
> version of the formula is specifically for situations where the
> standardised mean difference, d, is calculated using the pooled SD. G&O
> (1994) give a slightly different formula for situations where the pooled sd
> is not used (i.e., the effect size is Glass's delta): cov = (1 + d1*d2/2)/nc.
> This leaves me with the impression that neither of these formulae apply to
> raw mean differences.
>
> - For a follow-up part of my analysis, I plan to look at whether
> treatments affect sample variability - the data structure will be identical
> but the effect size will be lnVR (variability ratio) - this is available in
> the metafor escalc() function. Is anyone aware of an appropriate formula
> for calculating the covariance for this type of data? G&O 2009 do give some
> formulae for ratios, but these seem to be specific to cases where the
> original data are proportions.
>
> Best Regards
>
> Andrew
>
> From: James Pustejovsky <jepusto using gmail.com>
> Sent: 11 September 2018 19:47
> To: Andrew Guerin
> Cc: r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] multi-level meta-analysis with dependent effect sizes
>
>
> Andrew,
>
>
> Your description of the data structure is very clear. (Bravo for that!
> Many meta-analyses that I have reviewed---even quite recent ones---don't do
> a great job of describing the hierarchical/multivariate structure of the
> data, even though it's quite important.)  Based on your responses, I think
> the best thing to do would be to use the Gleser & Olkin approach to try and
> approximate the covariances among effect size estimates that share a common
> control condition. The reason I think it's best is that you should have
> all the information you need in order to calculate the covariances (when
> the effect sizes are differences in means, all that is needed are means,
> SDs, and Ns per cell), and so there's not a strong rationale for using
> other approximations. For example, the r = 0.7 assumed in
> impute_covariance_matrix() is likely to be a bit too high. Consider an
> example where two
> treatment cells A and B, and the control cell C all have equal sample sizes
> n, and the SDs are also all equal to S. Then
>
>
> Var(A - C) =  S^2 * 2 / n
>
> Var(B - C) = S^2 * 2 / n
> Cov(A - C, B - C) = S^2 / n
> And so Cor(A - C, B - C) = 0.5, rather than 0.7.
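>
> You can verify that numerically with any (purely illustrative) values of S
> and n, for instance:
>
> S <- 2; n <- 10
> V <- matrix(c(2 * S^2 / n,     S^2 / n,
>                   S^2 / n, 2 * S^2 / n), nrow = 2)
> cov2cor(V)  # off-diagonal correlation is 0.5 regardless of S and n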
>
>
> A further reason for going this route is that using good estimates of
> sampling covariances is the only way I know of to get defensible,
> interpretable estimates of the random effects variance components of the
> model. If you use something along the lines  of your model D,
>
>
> nmtD <- rma.mv(yi=md, V=vcvndat,  mods = ~ factor(sampletype) + factor(A)
> ... + factor(H), random = ~1|sid/contrast, data=ndat, method="REML")
>
>
> (but with vcvndat based on Gleser & Olkin), then you would be able to
> interpret the random effects variances on sid and contrast. You noted that
> one of the variance components (not sure which one is sigma2.1 versus
> sigma2.2) does seem to be sensitive to the assumed r, so Gleser & Olkin
> would be a way to pin that down. It would also let you explore other
> potential specifications of the random effects structure, such as allowing
> variance components to differ across levels of sampletype (or other
> moderators).
>
>
>
> All that said, computing the full Gleser & Olkin vcov matrix is a bit
> inconvenient because you have to work with block diagonal matrices. If you
> send a smallish example dataset, I can try to provide some example code for
> how to do these calculations.
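>
> Very roughly, and just as a sketch of the general pattern (cov_c here is a
> hypothetical helper column holding the shared-control term for each row,
> e.g. sC^2 / nC for raw mean differences, and ndat is assumed to be sorted
> by contrast):
>
> # build one var-cov block per contrast group, then stack them block-diagonally
> calc_block <- function(dat) {
>   V <- matrix(dat$cov_c[1], nrow = nrow(dat), ncol = nrow(dat))  # shared-control covariances
>   diag(V) <- dat$mdv                                             # sampling variances on the diagonal
>   V
> }
> block_list <- lapply(split(ndat, ndat$contrast), calc_block)
> vcvndat <- do.call(metafor::bldiag, block_list)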
>
>
> James
>
>
> On Tue, Sep 11, 2018 at 10:54 AM Andrew Guerin <
> Andrew.Guerin using newcastle.ac.uk> wrote:
>
>
> Hi James,
>
> Thanks for the rapid response.
>
> The study is looking at how a preparation technique (acid treatment)
> affects particular measurements (stable isotope signatures) in sample
> materials. Experiments compare the isotope signatures of replicates with
> and  without acid treatment - so it is the differences, rather than the
> absolute values, which are of interest (so of your two suggestions, it
> would have to be option 1).
>
> The hierarchical dependence arises as you have guessed. All the included
> studies are explicit tests of acid treatment. While some studies look at
> the effects on just one type of sample material (say,  crab muscle or fish
> scales)  others might test a selection of materials from various sources.
> So there can be multiple values of 'sampletype' within a study. The other
> moderators (A-H) concern potentially influential differences  in the
> experimental methods used (eg. type of acid, method of application, how
> samples were treated before acid exposure, etc). 'Sampletype' is the main
> moderator of interest - the idea is to be able to say whether or not acid
> treatment is desirable/necessary  for a particular type of material (an
> overall estimate of effect size for all types of sample combined is not
> very interesting). The other moderators are less fundamentally interesting;
> they are there mainly to account for differences in the methods used in
> the studies included in the analysis, though of course it is interesting to
> be able to report whether they make any difference.
>
>
> In terms of variation in sampletype and the other moderators, you could
> put the studies into 4 groups:
>
>
> 1) Single sample type, fixed protocol (all moderators A-H constant; 10
> studies)
> 2) Single sample type, multiple protocols (generally one or two moderators
> will vary, others will remain constant; 2 studies)
> 3) Multiple sample types, fixed protocol (the most common - 25 studies)
> 4) Multiple sample types, multiple protocols (such studies are a lot of
> work, so they are generally limited to 4 or fewer sample types, and a small
> number of variations in one or two moderators;  7 studies)
>
>
> So there is a bit of both between-study and within-study variation in the
> moderators. For the two most common types of study (1 and 3) since there is
> only one protocol there will be multiple independent
> treatment/control pairs, so no 'multiple treatments' type dependence
> (though still potential hierarchical dependence). For type 2 there will
> be one group of treatments per study, but all compared to a single control.
> Studies in group 4 will presumably exhibit  both types of dependence.
>
>
> Cheers,
> Andrew
>
>
>
>
>
>
> From: James Pustejovsky <jepusto using gmail.com>
> Sent: 11 September 2018 15:23
> To: Andrew Guerin
> Cc: r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] multi-level meta-analysis with dependent effect sizes
>
>
> Andrew,
>
>
> In your study, how does the hierarchical dependence arise? Is it that some
> studies report results from multiple (presumably independent) samples?
>
>
> If so, then I would argue that the best thing to do might be one of the
> following:
>
>
> 1. Go full Gleser-Olkin. If multiple-treatments dependence is the *only*
> type of sampling dependence in your data, then it would be better just to
> work through the calculation of the covariances. This would let you then
> treat the sampling variance-covariance  matrix as known, and you could
> avoid the need for cluster-robust standard errors all together (unless
> there are other types of mis-specification to worry about, in which case
> clustered SEs could still be useful).
>
>
> 2. Since you have a common outcome scale, you could also consider using
> the mean outcome as the ES. If different treatment groups within a study
> are independent (i.e., a between-subjects design), then modeling the means
> directly would let you avoid dealing  with the Gleser-Olkin business.
>
>
> Generally, which of these (or the other approaches you described) is most
> defensible will depend on the goals of your analysis and, in particular,
> whether the moderators of interest involve between-study variation,
> within-study variation, or both. For  moderators that vary between-study, I
> have found (based on personal experience) that the approach to
> modeling/handling dependence tends not to matter much. But for moderators
> where there is within-study variation, it is more salient.
>
>
> Does sampletype vary within study? And what about the other factors?
>
>
> James
>
>
> On Tue, Sep 11, 2018 at 8:30 AM Andrew Guerin <
> Andrew.Guerin using newcastle.ac.uk> wrote:
>   I am new to meta-analysis, and have benefited greatly from the excellent
> documentation and resources for meta-analysis in R, especially the websites
> for metafor and clubSandwich. I am planning some extensive future work
> using meta-analysis, so thought it would  be useful to get some experience
> of the whole process by starting with something relatively simple (haha)
> and trying to work it up for a publication. I would like some advice on
> whether I am following the correct approach.
>
>
> I selected a topic with which I am quite familiar and which has a
> relatively limited literature: I did some thorough literature searching and
> came up with ~70 relevant studies, which I will use for a few models
> looking at different aspects (I'll focus on one  here). Once I had coded
> all the data it became clear that the analysis was not going to be as
> simple as I had hoped, since the data have two kinds of dependence that
> should be accounted for:
>
>
> 1. Hierarchical dependence - a lot of the studies contribute multiple
> effect sizes.
>
> 2. Many of the effect sizes share controls - different treatments are
> compared against a common 'untreated' control - Gleser and Olkin's (2009)
> 'multiple treatments' dependence.
>
>
> For this analysis, there are a total of 302 effect sizes, grouped into 244
> 'contrast' groups which share a control, from 43 studies. The 244 contrast
> groups contain 1-8 effect sizes.
>
>
> There are various moderators. The most important is 'sampletype' -
> expected to be significant, as it is expected that treatment will have
> different effects on different sample types. The other factors all relate
> to variations in the experimental treatments  used, to keep things general
> I'll just call them "A" to "H".
>
>
> ndat = data frame
>
> md = effect size measure (raw mean difference, since all data are on a
> common scale, with a limited range)
>
> mdv = sampling variance for the mean difference, obtained using escalc().
>
> sid = study id (a simple numerical identifier)
>
> contrast = identifier showing which effect size measures / treatments
> share a control (eg. study one contributes 5 effect sizes, with values of
> 'contrast' 1A, 1B, 1B, 1C, 1D, indicating that of the 5 effect sizes in
> study 1, 2 share a control).
>
>
> As I see it I have the following options for how to proceed:
>
>
> A) conduct a completely 'cluster naive' mixed effects meta-analysis
>
>
> nmtA <- rma(md, mdv,  mods = ~ factor(sampletype) + factor(A) ... +
> factor(H), data=ndat, method="REML")
>
> anova(nmtA, btt=20:21) ## example hypothesis test for moderator 'D'
>
>
> B) ignore the hierarchical dependence (ie. assume that any samples with
> different values of 'contrast' within the same study are independent), but
> account for the correlated errors within clusters / contrast groups by RVE
> in robumeta, using clubSandwich for  the hypothesis tests.
>
>
> nmtB <- robu(md ~ factor(sampletype) + factor(A) ... + factor(H),
> data=ndat, var.eff.size=mdv, rho = 0.8, modelweights="CORR",
> studynum=contrast)
>
> Wald_test(nmtB, constraints = 20:21, vcov="CR2") ## example hypothesis
> test for moderator 'D'
>
>
>
> C) still ignoring the hierarchical dependence, but now explicitly
> specifying the random effects in  rma.mv, after imputing a
> variance-covariance matrix. Robust parameter estimates and hypothesis test
> are then carried out using clubSandwich.
>
>
> vcvndat <- impute_covariance_matrix(vi = ndat$mdv, cluster=ndat$contrast,
> r=0.7)
>
> nmtC <-  rma.mv(yi=md, V=vcvndat,  mods = ~ factor(sampletype) +
> factor(A) ... + factor(H), random = ~1|contrast, data=ndat, method="REML")
>
> nmtC_robust <- coef_test(nmtC, vcov="CR2")
>
> Wald_test(nmtC, constraints = 20:21, vcov="CR2") ## example hypothesis
> test for moderator 'D'
>
>
> D) attempting to account for the hierarchical and multiple-treatments
> dependence via a multi-level model, setting random effect structure as
> random = ~ 1| sid/contrast, and then using clubSandwich for RVE and
> hypothesis tests.
>
>
> vcvndat <- impute_covariance_matrix(vi = ndat$mdv, cluster=ndat$contrast,
> r=0.7)
>
> nmtD <-  rma.mv(yi=md, V=vcvndat,  mods = ~ factor(sampletype) +
> factor(A) ... + factor(H), random = ~1|sid/contrast, data=ndat,
> method="REML")
>
> nmtD_robust <- coef_test(nmtD, vcov="CR2", cluster=ndat$contrast)
>
> Wald_test(nmtD, constraints = 20:21, vcov="CR2", cluster=ndat$contrast) ##
> example hypothesis test for moderator 'D'
>
>
> My main question really is which (if any) of these approaches is the best
> way forward? For this particular dataset, the outcomes (at least in terms
> of which moderators are significant/non-significant, and most parameter
> estimates) are pretty similar for all  the options (including some other
> options that I tried that I have not mentioned here). All find 'sampletype'
> highly significant, and find similar sets of the other moderators to be
> non-significant (except for the naive analysis, A which finds more
> moderators  to be significant). B, C, and D are all very similar. Although
> D is the only one that explicitly accounts for the hierarchical dependence,
> does the fact that the outcomes from B and C are substantially similar to
> those from D mean that this dependence could be safely overlooked for this
> analysis?
>
>
> I have not yet attempted to build my own vcov matrix using the Gleser and
> Olkin (2009) formula. Is this worth doing instead of using
> impute_covariance_matrix? I picked 0.7 as a plausible value for r in
> impute_convariance_matrix(), but I have raw data from one  study that might
> allow me to derive a better estimate. I did investigate the effect of
> varying r from 0-1. It does not seem to have much impact on parameter
> estimates, the significance/non-significance of moderators, or the estimate
> of sigma^2.2 in the multi-level  model. The intercept moves about a little,
> but the main effect is that when r is less than about 0.5, sigma^2.1 is
> basically 0. Is this indicating a problem?
>
>
> Finally I was wondering whether it is wise to account for multiple
> comparisons when evaluating the hypothesis tests. Since I have several
> factors, I repeat Wald_test() several times. Models nmtB, nmtC, and nmtD
> all have moderators  (aside from 'sampletype')  that are significant at p <
> 0.05, but with p values only just under 0.05. Adjustment for multiple
> comparisons (eg. using p.adjust(method="fdr")) shifts the outcomes of all
> these tests to p > 0.05.
>
>
> I would be grateful for any advice on the above.
>
>
> Many thanks,
>
> Andrew
>
>
>
>
>



