[R-meta] Mean-adjustment for weighting

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Wed Mar 28 10:13:15 CEST 2018


This approach to calculating the sampling variances is already incorporated in metafor, although it is currently undocumented. For correlations, an example is described here:

http://www.metafor-project.org/doku.php/tips:hunter_schmidt_method

As James noted, this idea can be traced back to the work by Hunter & Schmidt. A slight difference is that Doncaster and Spake (2018) just use the unweighted average, while H&S plug the sample-size weighted average of the estimates into the equation for the sampling variances.
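
To make this concrete, the H&S approach for correlations plugs the sample-size weighted average correlation (rather than each observed correlation) into the usual large-sample variance equation. A minimal sketch of the computation (the vectors ri and ni are illustrative):

### illustrative observed correlations and sample sizes
ri <- c(0.30, 0.45, 0.10)
ni <- c(20, 50, 35)

### sample-size weighted average correlation
rbar <- sum(ni * ri) / sum(ni)

### plug rbar (instead of ri) into the usual variance equation
vi <- (1 - rbar^2)^2 / (ni - 1)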

The same idea can be applied to other outcome measures. Here is an example with SMDs:

### load the metafor package and the example data
library(metafor)
dat <- get(data(dat.normand1999))

### calculate standardized mean differences and corresponding sampling variances
dat <- escalc(measure="SMD", m1i=m1i, sd1i=sd1i, n1i=n1i, m2i=m2i, sd2i=sd2i, n2i=n2i, data=dat)
dat

### meta-analysis using a random-effects model
res <- rma(yi, vi, data=dat)
res

### calculate sampling variances by plugging the sample-size weighted
### average of the SMD values into the equation for the sampling variance
dat <- escalc(measure="SMD", m1i=m1i, sd1i=sd1i, n1i=n1i, m2i=m2i, sd2i=sd2i, n2i=n2i, data=dat, vtype="AV")
dat

### meta-analysis using a random-effects model
res <- rma(yi, vi, data=dat)
res
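
To see what vtype="AV" is doing here, one can (approximately) reproduce the adjusted sampling variances by plugging the sample-size weighted average of the SMD values into the usual variance equation. A rough sketch (this illustrates the idea; the internal computation may differ in minor details):

### sample-size weighted average of the SMD values
smd.bar <- with(dat, weighted.mean(yi, n1i + n2i))

### sampling variances with the average plugged into the usual equation
vi.av <- with(dat, 1/n1i + 1/n2i + smd.bar^2 / (2*(n1i + n2i)))

### compare with the variances computed by escalc()
cbind(vi.av, dat$vi)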

Note: Due to a recent change in the naming of the possible options for the 'vtype' argument, you will have to install the development version of the metafor package for this to work. See here for instructions:

https://github.com/wviechtb/metafor#installation
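
In short, this typically comes down to the following (assuming the remotes package; see the link above for full instructions):

### install the development version of metafor from GitHub
install.packages("remotes")
remotes::install_github("wviechtb/metafor")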

At the moment, vtype="AV" is only implemented for a few outcome measures (including SMD, COR, and various measures for proportions, but not ROM).

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Vojtech Brlík
>Sent: Wednesday, 28 March, 2018 9:24
>To: James Pustejovsky
>Cc: r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] Mean-adjustment for weighting
>
>Dear James,
>
>Thank you for your comments and suggestions. Your last point, about
>presenting both sets of calculations, seems very reasonable to me.
>
>Best regards, Vojtech
>
>On 27 March 2018 at 23:16, James Pustejovsky <jepusto at gmail.com> wrote:
>Vojtech,
>
>I do not know enough about the performance of the adjustment to recommend
>unequivocally for or against it. All the same, I will offer a few
>observations in case they are useful to you:
>
>1. The adjustments described by Doncaster & Spake are very similar to
>methods proposed by Hunter & Schmidt in their book, Methods of Meta-
>Analysis. So they are not entirely unknown.
>2. This adjustment should only matter much if you are dealing with
>exceedingly small sample sizes, which as Doncaster & Spake demonstrate
>are not uncommon in ecology. If your sample sizes are much larger (say,
>smallest total sample sizes are in the 20s, not the single digits), then
>perhaps it is less of a concern.
>3. The range of effect size estimates is also a consideration. In
>psychology and education, I don't usually think about standardized mean
>differences bigger than 1 or 1.5. For SMDs larger than 3, I often start
>to wonder whether a different effect size metric might be more
>appropriate.
>4. An ideal way to address your question about whether to use the
>adjustment method would be to run some simulations that emulate the
>conditions (sample sizes, ranges of effects, number of studies) you
>observed in your meta-analysis. The authors provide R code for their
>simulations, which could be modified to resemble the conditions in your
>meta. But of course nobody has unlimited time and resources so this might
>not be feasible.
>5. I think it would be useful to also report standard errors/confidence
>intervals based on other techniques, such as the Knapp-Hartung adjustment
>or Sidik & Jonkman's robust standard errors. Reporting results based on
>these other techniques would, I think, help to build the reader's
>confidence that your ultimate findings are credible rather than being
>contingent on use of an uncommon set of methods. The Knapp-Hartung
>adjustment is available in metafor using test = "knha". Robust standard
>errors can be calculated using robust() in metafor or coef_test() in the
>clubSandwich package. In either case, you would specify a unique id
>variable for the cluster = argument, as sketched below.
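>
>For instance, something along these lines (a sketch; 'res' is a fitted
>rma model and 'id' a placeholder for your study identifier):
>
>res.knha <- rma(yi, vi, data=dat, test="knha")
>res.rob <- robust(res, cluster=dat$id)
>library(clubSandwich)
>res.cr2 <- coef_test(res, vcov="CR2", cluster=dat$id)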
>
>James
>
>On Tue, Mar 27, 2018 at 6:56 AM, Vojtěch Brlík <vojtech.brlik at gmail.com> wrote:
>Dear all,
>
>I have conducted a meta-analysis for my bachelor thesis (which means I am
>highly inexperienced) using the unbiased standardized mean difference
>(Hedges' g) as the measure of effect size. I recently noticed a published
>study (https://doi.org/10.1111/2041-210X.12927) suggesting an adjustment
>to the standard error calculation, since the weights of the effect sizes
>do not correspond symmetrically to their sample sizes. This inequality
>biases the estimate of the pooled effect size variance.
>
>I decided to use this adjustment, but it does not affect all same-sized
>studies equally, as the differences between the adjusted and non-adjusted
>errors are not symmetric (see the attached plots for the four categories
>of effect I want to recalculate).
>
>Please write me in case you cannot see the figures.
>
>However, the effect sizes remain unchanged and the variances become
>wider, as Doncaster & Spake (2018) suggested.
>
>What is your opinion about this study? Do you recommend using the
>adjustment for the standard error calculation or not?
>
>Thank you for your advice and comments.
>
>With kind regards,
>
>Vojtech Brlik

