[R-meta] questions on some functions in metafor and clubsandwich

Farzad Keyhan f.keyhaniha using gmail.com
Thu Feb 17 06:20:53 CET 2022


And James, even when I set smooth_vi to TRUE, I still see negative weights!

Is this possible and avoidable?

library(metafor)       # rma.mv()
library(clubSandwich)  # impute_covariance_matrix()

dat <- read.csv("https://raw.githubusercontent.com/ilzl/i/master/j.csv")

# impute the within-study covariance matrix with r = .6 and smoothed sampling variances
V <- with(dat, impute_covariance_matrix(vi, study, .6, smooth_vi = TRUE))

# separate intercepts per outcome, random effects for studies and effect sizes within studies
z <- rma.mv(yi ~ 0 + factor(outcome), V, random = ~ 1 | study/es, data = dat)

# weighting matrix implied by the model (one column per coefficient)
W <- weights(z, type = "matrix")
X <- model.matrix(z)
WX <- W %*% X
B <- solve(t(X) %*% WX)
weighting_mat <- WX %*% B

any(weighting_mat < 0) #### TRUE
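
For what it's worth, here is how I am locating the negative entries (just reusing the objects above; dat has the columns study, outcome, and vi used in the model):

neg <- which(weighting_mat < 0, arr.ind = TRUE)  # observation/coefficient pairs with negative weight
dat[neg[, "row"], c("study", "outcome", "vi")]   # which estimates are involved
weighting_mat[neg]                               # how far below zero the weights go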

On Wed, Feb 16, 2022 at 10:45 PM Farzad Keyhan <f.keyhaniha using gmail.com> wrote:
>
> Thanks a lot, James! I also tried `any(colSums(W) / sum(W) < 0)`; I
> assume this would only be appropriate when the model doesn't contain
> any predictors?
>
> Thanks for the prompt response and the useful code,
> Fred
>
> On Wed, Feb 16, 2022 at 10:32 PM James Pustejovsky <jepusto using gmail.com> wrote:
> >
> > Hi Fred,
> >
> > Your intuition here is spot on, although the weight calculations are a bit more complicated with this model. Using weights(z) returns only the diagonal entries of the weight matrix, but rma.mv models can have off-diagonal entries too. For a model with separate intercepts, I would approach the weight calculations as follows:
> >
> > dat <- read.csv("https://raw.githubusercontent.com/ilzl/i/master/j.csv")
> > V <- with(dat, impute_covariance_matrix(vi, study, .6))
> > z <- rma.mv(yi ~ 0 + factor(outcome), V, random = ~ 1 |study/es, data = dat)
> > W <- weights(z, type="matrix")
> > X <- model.matrix(z)
> > WX <- W %*% X
> > B <- solve(t(X) %*% WX)
> > weighting_mat <- WX %*% B
> >
> > The result is a matrix with one row per observation and one column per coefficient in the model (so here, one column for each level of factor(outcome)). A negative entry in weighting_mat[i,j] means that observation i gets negative weight in calculating the estimate of coefficient j.
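> >
> > (As a quick sanity check, reusing the objects above: in this separate-intercepts model each column of weighting_mat should sum to 1, and any negative entries can be located directly.)
> >
> > colSums(weighting_mat)                   # each column should sum to (approximately) 1
> > which(weighting_mat < 0, arr.ind = TRUE) # observation/coefficient pairs with negative weight, if any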
> >
> > For more on the details behind these calculations, see
> > https://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models
> >
> > James
> >
> > On Wed, Feb 16, 2022 at 7:34 PM Farzad Keyhan <f.keyhaniha using gmail.com> wrote:
> >>
> >> Hi James,
> >>
> >> Thank you for the clarification. Per your insightful response, I would
> >> like to run some sanity checks on my model to make sure that my use of
> >> a single within-study r has not led to weird stuff happening in my
> >> data.
> >>
> >> Are the following sufficient to capture the occurrence of weird stuff?
> >> (I'm sure this will be useful to a lot of the list members.)
> >>
> >> Many thanks,
> >> Fred
> >>
> >> dat <- read.csv("https://raw.githubusercontent.com/ilzl/i/master/j.csv")
> >>
> >> V <- with(dat, impute_covariance_matrix(vi, study, .6))
> >>
> >> z <- rma.mv(yi ~ factor(outcome), V, random = ~ 1 |study/es, data = dat)
> >>
> >> any(weights(z) < 0)
> >>
> >> any(z$b[,1] < min(dat$yi, na.rm = TRUE))
> >>
> >> any(z$b[,1] > max(dat$yi, na.rm = TRUE))
> >>
> >>
> >> On Tue, Feb 15, 2022 at 4:27 PM James Pustejovsky <jepusto using gmail.com> wrote:
> >> >
> >> > Hi Fred,
> >> >
> >> > Unfortunately, we did not really get into the details behind smoothing the sampling variances in that paper (mainly due to page restrictions--even as written, we were well over the recommended page count for that journal).
> >> >
> >> > If you are using an imputed covariance matrix with model-based standard errors/inferential results, then the key thing is to make the assumptions as realistic and defensible as is feasible. If your clusters of correlated effect size estimates arise from having multiple measures of a common outcome construct or set of constructs, then I would guess that using smooth_vi = TRUE will usually be pretty reasonable (because the sampling variances from a given study are probably all quite similar anyways, so averaging them together won't really change much).
> >> >
> >> > For more complex cases, such as when you have multiple measures of a common outcome, each assessed at several points in time, with multiple treatment groups compared to a common control group, I would want to be more cautious about smoothing the variances and, more generally, more cautious in constructing the imputed covariance matrix, such as by using the new vcalc() function in metafor (https://wviechtb.github.io/metafor/reference/vcalc.html).
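> >> >
> >> > (A minimal sketch of the vcalc() route, assuming a data frame dat with effect sizes yi, sampling variances vi, identifiers study and es, and a working correlation of .6:)
> >> >
> >> > library(metafor)
> >> > V <- vcalc(vi, cluster = study, obs = es, rho = 0.6, data = dat)
> >> > z <- rma.mv(yi ~ 0 + factor(outcome), V, random = ~ 1 | study/es, data = dat)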
> >> >
> >> > My comments about "weird stuff happening when using inverse-variance weights and a correlated effect structure" pertain to what happens with the _weights_ assigned to each effect size estimate. Thus, they're relevant both to model-based and robust inference approaches.
> >> >
> >> > James
> >> >
> >> >
> >> > On Wed, Feb 9, 2022 at 9:45 PM Farzad Keyhan <f.keyhaniha using gmail.com> wrote:
> >> >>
> >> >> Dear James,
> >> >>
> >> >> Thanks for this information. Did you possibly reflect on/emphasize
> >> >> this in your paper [https://doi.org/10.1007/s11121-021-01246-3]?
> >> >>
> >> >> I ask this for two reasons.
> >> >>
> >> >> First, some folks may not want to apply an RVE after fitting an
> >> >> rma.mv() call and instead use the model-based results (i.e., they
> >> >> solely want to account for their correlated errors).
> >> >>
> >> >> Second, some folks cannot apply an RVE after fitting an rma.mv() call
> >> >> because their model contains a pair of random effects that are crossed
> >> >> with each other, but still want to account for their correlated
> >> >> errors.
> >> >>
> >> >> Should we possibly be concerned about our final results when using
> >> >> smooth_vi = TRUE, if we fall into these two categories?
> >> >>
> >> >> Many thanks for your attention,
> >> >> Fred
> >> >>
> >> >> On Wed, Feb 9, 2022 at 8:58 PM James Pustejovsky <jepusto using gmail.com> wrote:
> >> >> >
> >> >> > Hi Brendan,
> >> >> >
> >> >> > The option to "smooth" the sampling variances (i.e., averaging them
> >> >> > together across effect size estimates from the same sample) can be helpful
> >> >> > for two reasons. The main one (as discussed in the original RVE paper by
> >> >> > Hedges, Tipton, and Johnson, 2010) is that effect size estimates from the
> >> >> > same sample often tend to have very similar sampling variances, and the
> >> >> > main reason for differences in sampling variances could be effectively
> >> >> > random error in their estimation. Smoothing them out within a given sample
> >> >> > might therefore cut down on the random error in the sampling variance
> >> >> > estimates. Further, if inference is based on RVE, then we don't need
> >> >> > sampling variances that are exactly correct anyways, so we have a fair
> >> >> > amount of "wiggle room" here.
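> >> >> >
> >> >> > (As a minimal illustration, assuming a data frame dat with sampling variances vi and a sample identifier study, the smoothing amounts to replacing each variance by its within-sample mean, which is roughly what smooth_vi = TRUE does for you:)
> >> >> >
> >> >> > dat$vi_smooth <- ave(dat$vi, dat$study)  # within-sample average of the sampling variances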
> >> >> >
> >> >> > A secondary reason that smoothing can be helpful is that it avoids some
> >> >> > weird behavior that can happen when you use inverse-variance weights (which
> >> >> > is what we usually do) and a correlated effect structure with *dis-similar*
> >> >> > sampling variances. If the sampling variances of the effect size estimates
> >> >> > from a given sample are far from equal, then you can end up in a situation
> >> >> > where the effect sizes with the largest sampling variances end up getting
> >> >> > *negative* weight in the overall meta-analysis. I gave an example of this
> >> >> > recently in the context of aggregating effect sizes prior to analysis:
> >> >> > https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2022-January/003728.html
> >> >> > But effectively the same thing can happen also implicitly in a
> >> >> > meta-analytic model.
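> >> >> >
> >> >> > (A tiny numerical illustration with made-up numbers: two correlated estimates from one sample with very unequal sampling variances, where the inverse-variance (GLS) weights for their average give the noisier estimate negative weight.)
> >> >> >
> >> >> > rho <- 0.9
> >> >> > v <- c(0.01, 0.25)                 # very unequal sampling variances
> >> >> > V_pair <- rho * sqrt(outer(v, v))  # assumed correlated-errors covariance matrix
> >> >> > diag(V_pair) <- v
> >> >> > w <- solve(V_pair, c(1, 1))        # un-normalized GLS weights for the simple average
> >> >> > w / sum(w)                         # roughly 1.21 and -0.21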
> >> >> >
> >> >> > James
> >> >> >
> >> >> > On Wed, Feb 9, 2022 at 6:49 AM Brendan Hutchinson <
> >> >> > Brendan.Hutchinson using anu.edu.au> wrote:
> >> >> >
> >> >> > > Dear Wolfgang,
> >> >> > >
> >> >> > > Thank you very much for your quick response! Your responses are very
> >> >> > > helpful and appreciated.
> >> >> > >
> >> >> > > In relation to the second question, this is precisely what I thought it
> >> >> > > might be doing. However, I'm still a bit confused. To be more precise, if
> >> >> > > you examine this code sample from Pustejovsky and Tipton (2021) (
> >> >> > > https://osf.io/z27wt/), in particular the CHE model, they have set
> >> >> > > smooth_vi = TRUE and specified a random effects model with effect sizes
> >> >> > > nested within studies. This is what is confusing me - would you not wish to
> >> >> > > retain the differences in sampling variance in such a model, rather than
> >> >> > > setting them all to the average?
> >> >> > >
> >> >> > > Best,
> >> >> > > Brendan
> >> >> > >
> >> >> > >
> >> >> > > Brendan Hutchinson
> >> >> > > Research School of Psychology
> >> >> > > ANU College of Medicine, Biology and Environment
> >> >> > > Building 39 University Ave | The Australian National University | ACTON
> >> >> > > ACT 2601 Australia
> >> >> > > T: +61 2 6125 2716 | E: brendan.hutchinson using anu.edu.au | W:  Brendan
> >> >> > > Hutchinson | ANU Research School of Psychology<
> >> >> > > https://psychology.anu.edu.au/people/students/brendan-hutchinson>
> >> >> > >
> >> >> > > ________________________________
> >> >> > > From: Viechtbauer, Wolfgang (SP) <
> >> >> > > wolfgang.viechtbauer using maastrichtuniversity.nl>
> >> >> > > Sent: Wednesday, 9 February 2022 7:06 PM
> >> >> > > To: Brendan Hutchinson <Brendan.Hutchinson using anu.edu.au>;
> >> >> > > r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
> >> >> > > Subject: RE: [R-meta] questions on some functions in metafor and
> >> >> > > clubsandwich
> >> >> > >
> >> >> > > Dear Brendan,
> >> >> > >
> >> >> > > Please see below.
> >> >> > >
> >> >> > > Best,
> >> >> > > Wolfgang
> >> >> > >
> >> >> > > >-----Original Message-----
> >> >> > > >From: R-sig-meta-analysis [mailto:
> >> >> > > r-sig-meta-analysis-bounces using r-project.org] On
> >> >> > > >Behalf Of Brendan Hutchinson
> >> >> > > >Sent: Wednesday, 09 February, 2022 7:54
> >> >> > > >To: r-sig-meta-analysis using r-project.org
> >> >> > > >Subject: [R-meta] questions on some functions in metafor and clubsandwich
> >> >> > > >
> >> >> > > > Hi mailing list,
> >> >> > > >
> >> >> > > >Thanks in advance for any help regarding my questions - I have two and they
> >> >> > > >concern the metafor and clubsandwich packages, and multilevel modelling.
> >> >> > > >
> >> >> > > >1. My first question concerns the difference between the robust() function in
> >> >> > > >metafor and the coef_test() function in clubsandwich - I'm a little confused as
> >> >> > > >to the precise difference between these. Do they not perform the same operation?
> >> >> > > >Are there any situations in which one would be preferred over the other?
> >> >> > >
> >> >> > > coef_test() in itself is just a function for testing coefficients. The
> >> >> > > real difference between robust() and clubSandwich is the kind of
> >> >> > > adjustments they provide for the var-cov matrix and how they estimate the
> >> >> > > dfs. Note that metafor can now directly interface with clubSandwich. See:
> >> >> > >
> >> >> > > https://wviechtb.github.io/metafor/reference/robust.html
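> >> >> > >
> >> >> > > (A minimal sketch, assuming a fitted rma.mv object z and a study identifier dat$study as in the other messages in this thread; both routes should give essentially the same CR2 / Satterthwaite results:)
> >> >> > >
> >> >> > > library(clubSandwich)
> >> >> > > robust(z, cluster = dat$study, clubSandwich = TRUE)  # metafor calling clubSandwich internally
> >> >> > > coef_test(z, vcov = "CR2", cluster = dat$study)      # clubSandwich directly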
> >> >> > >
> >> >> > > >2. Second, in order to control for correlated effect sizes and correlated
> >> >> > > >sampling variance in my own dataset, I will need to produce a variance-covariance
> >> >> > > >matrix for my data using the impute_covariance_matrix() function in clubsandwich,
> >> >> > > >which will then be fed into a multilevel model (effect sizes nested within
> >> >> > > >studies) specified in the metafor function rma.mv().
> >> >> > > >
> >> >> > > >My question here concerns the "smooth_vi" input of the impute_covariance_matrix()
> >> >> > > >function. I am a little unclear as to its use. The help page specifies "If
> >> >> > > >smooth_vi = TRUE, then all of the variances within cluster j will be set equal to
> >> >> > > >the average variance of cluster j".
> >> >> > > >
> >> >> > > >I interpreted this as though it is simply removing variance within clusters
> >> >> > > >(i.e. studies) via averaging, which I suspect would be inappropriate for a
> >> >> > > >multi-level meta-analysis in which we would want to capture that variance -
> >> >> > > >indeed, is this not the reason we specify a multilevel structure in the first
> >> >> > > >place? What is confusing to me is that the only example code I have seen online
> >> >> > > >appears to set smooth_vi = TRUE when specifying a multi-level model (in which
> >> >> > > >effects are nested within studies), so I am a little lost.
> >> >> > >
> >> >> > > I think you are misunderstanding this option. Say you have two effect
> >> >> > > sizes with sampling variances equal to .01 and .03 within a cluster. Then
> >> >> > > with smooth_vi=TRUE, the sampling variances would be set to .02 and .02 for
> >> >> > > the two estimates.
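> >> >> > >
> >> >> > > (A minimal sketch of that, using those two made-up variances; impute_covariance_matrix() with return_list = FALSE returns the assembled covariance matrix directly:)
> >> >> > >
> >> >> > > library(clubSandwich)
> >> >> > > V_ex <- impute_covariance_matrix(vi = c(0.01, 0.03), cluster = c(1, 1),
> >> >> > >                                  r = 0.6, smooth_vi = TRUE, return_list = FALSE)
> >> >> > > diag(V_ex)  # both sampling variances set to 0.02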
> >> >> > >
> >> >> > > >Once again, any help on the above is greatly appreciated!
> >> >> > > >
> >> >> > > >Brendan
> >> >> > >