[R-meta] questions on some functions in metafor and clubsandwich
Farzad Keyhan
Thu Feb 10 04:44:53 CET 2022
Dear James,
Thanks for this information. Did you reflect on or emphasize this point
in your paper [https://doi.org/10.1007/s11121-021-01246-3]? I ask for
two reasons.
First, some folks may not want to apply RVE after fitting an rma.mv()
model and would instead use the model-based results (i.e., they solely
want to account for their correlated errors).
Second, some folks cannot apply RVE after fitting an rma.mv() model
because it contains a pair of random effects that are crossed with each
other, yet they still want to account for their correlated errors.
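For concreteness, the second scenario I have in mind looks roughly like
this (just a sketch; dat, yi, vi, study, esid, and outcome are
placeholders for an arbitrary dataset, and r = 0.6 is an assumed
within-study correlation):

library(metafor)
library(clubSandwich)

# smoothed, correlated sampling variances within studies
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study,
                              r = 0.6, smooth_vi = TRUE)

# random effects for studies/estimates plus a crossed random effect for
# outcome type, so applying RVE afterwards is not straightforward
res <- rma.mv(yi, V, random = list(~ 1 | study/esid, ~ 1 | outcome),
              data = dat)
summary(res)  # model-based (non-robust) results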
Should we be concerned about our final results when using
smooth_vi = TRUE if we fall into either of these two categories?
Many thanks for your attention,
Fred
On Wed, Feb 9, 2022 at 8:58 PM James Pustejovsky <jepusto using gmail.com> wrote:
>
> Hi Brendan,
>
> The option to "smooth" the sampling variances (i.e., averaging them
> together across effect size estimates from the same sample) can be helpful
> for two reasons. The main one (as discussed in the original RVE paper by
> Hedges, Tipton, and Johnson, 2010) is that effect size estimates from the
> same sample often tend to have very similar sampling variances, and the
> main reason for differences in sampling variances could be effectively
> random error in their estimation. Smoothing them out within a given sample
> might therefore cut down on the random error in the sampling variance
> estimates. Further, if inference is based on RVE, then we don't need
> sampling variances that are exactly correct anyways, so we have a fair
> amount of "wiggle room" here.
>
> A secondary reason that smoothing can be helpful is that it avoids some
> weird behavior that can happen when you use inverse-variance weights (which
> is what we usually do) and a correlated effect structure with *dis-similar*
> sampling variances. If the sampling variances of the effect size estimates
> from a given sample are far from equal, then you can end up in a situation
> where the effect sizes with the largest sampling variances end up getting
> *negative* weight in the overall meta-analysis. I gave an example of this
> recently in the context of aggregating effect sizes prior to analysis:
> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2022-January/003728.html
> But effectively the same thing can also happen, implicitly, within a
> meta-analytic model.
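>
> If you want to see whether this is happening in your own model, one rough
> way (with placeholder names dat, yi, vi, study, esid, and an assumed
> r = 0.8) is to look at the row sums of the weight matrix implied by an
> intercept-only fit:
>
> V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.8)
> res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
> W <- weights(res, type = "matrix")  # full weight matrix used by the model
> rowSums(W)  # negative values = estimates receiving negative overall weight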
>
> James
>
> On Wed, Feb 9, 2022 at 6:49 AM Brendan Hutchinson <
> Brendan.Hutchinson using anu.edu.au> wrote:
>
> > Dear Wolfgang,
> >
> > Thank you very much for your quick response! Your responses are very
> > helpful and appreciated.
> >
> > In relation to the second question, this is precisely what I thought it
> > might be doing. However, I'm still a bit confused. To be more precise, if
> > you examine the code sample from Pustejovsky et al. (2021) (
> > https://osf.io/z27wt/), in particular the CHE model, they set
> > smooth_vi to TRUE and specify a random-effects model with effect sizes
> > nested within studies. This is what confuses me - would you not wish to
> > retain the differences in sampling variance in such a model, rather than
> > setting them all to the average?
> >
> > Best,
> > Brendan
> >
> >
> > Brendan Hutchinson
> > Research School of Psychology
> > ANU College of Medicine, Biology and Environment
> > Building 39 University Ave | The Australian National University | ACTON
> > ACT 2601 Australia
> > T: +61 2 6125 2716 | E: brendan.hutchinson using anu.edu.au | W: Brendan
> > Hutchinson | ANU Research School of Psychology<
> > https://psychology.anu.edu.au/people/students/brendan-hutchinson>
> >
> > ________________________________
> > From: Viechtbauer, Wolfgang (SP) <
> > wolfgang.viechtbauer using maastrichtuniversity.nl>
> > Sent: Wednesday, 9 February 2022 7:06 PM
> > To: Brendan Hutchinson <Brendan.Hutchinson using anu.edu.au>;
> > r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
> > Subject: RE: [R-meta] questions on some functions in metafor and
> > clubsandwich
> >
> > Dear Brendan,
> >
> > Please see below.
> >
> > Best,
> > Wolfgang
> >
> > >-----Original Message-----
> > >From: R-sig-meta-analysis [mailto:
> > r-sig-meta-analysis-bounces using r-project.org] On
> > >Behalf Of Brendan Hutchinson
> > >Sent: Wednesday, 09 February, 2022 7:54
> > >To: r-sig-meta-analysis using r-project.org
> > >Subject: [R-meta] questions on some functions in metafor and clubsandwich
> > >
> > > Hi mailing list,
> > >
> > >Thanks in advance for any help regarding my questions - I have two, and
> > >they concern the metafor and clubSandwich packages and multilevel modelling.
> > >
> > >1. My first question concerns the difference between the robust() function
> > >in metafor and the coef_test() function in clubSandwich - I'm a little
> > >confused as to the precise difference between these. Do they not perform
> > >the same operation? Are there any situations in which one would be
> > >preferred over the other?
> >
> > coef_test() in itself is just a function for testing coefficients. The
> > real difference between robust() and clubSandwich is the kind of
> > adjustments they provide for the var-cov matrix and how they estimate the
> > dfs. Note that metafor can now directly interface with clubSandwich. See:
> >
> > https://wviechtb.github.io/metafor/reference/robust.html
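> >
> > For example, with a fitted rma.mv object called res and a study identifier
> > dat$study (placeholder names), the following two calls should give
> > essentially the same CR2-type results:
> >
> > # metafor's robust(), delegating the adjustment to clubSandwich
> > robust(res, cluster = dat$study, clubSandwich = TRUE)
> >
> > # clubSandwich directly
> > coef_test(res, vcov = "CR2", cluster = dat$study)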
> >
> > >2. Second, in order to control for correlated effect sizes and correlated
> > >sampling variances in my own dataset, I will need to produce a
> > >variance-covariance matrix for my data using the impute_covariance_matrix()
> > >function in clubSandwich, which will then be fed into a multilevel model
> > >(effect sizes nested within studies) specified with the metafor function
> > >rma.mv().
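> > >
> > >For reference, the kind of code I have in mind is roughly the following
> > >(dat, yi, vi, study, and esid are placeholders for my actual data, and
> > >r = 0.6 is just an assumed value):
> > >
> > >V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study,
> > >                              r = 0.6, smooth_vi = TRUE)
> > >res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)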
> > >
> > >My question here concerns the "smooth_vi" argument of the
> > >impute_covariance_matrix() function. I am a little unclear as to its use.
> > >The help page specifies: "If smooth_vi = TRUE, then all of the variances
> > >within cluster j will be set equal to the average variance of cluster j."
> > >
> > >I interpreted this as though it is simply removing variance within
> > >clusters (i.e., studies) via averaging, which I suspect would be
> > >inappropriate for a multilevel meta-analysis in which we would want to
> > >capture that variance - indeed, is this not the reason we specify a
> > >multilevel structure in the first place? What is confusing to me is that
> > >the only example code I have seen online appears to set smooth_vi to TRUE
> > >when specifying a multilevel model (in which effects are nested within
> > >studies), so I am a little lost.
> >
> > I think you are misunderstanding this option. Say you have two effect
> > sizes with sampling variances equal to .01 and .03 within a cluster. Then
> > with smooth_vi=TRUE, the sampling variances would be set to .02 and .02 for
> > the two estimates.
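> >
> > In code, using those numbers (and an arbitrary r = 0.6 for the
> > within-cluster correlation):
> >
> > library(clubSandwich)
> > V <- impute_covariance_matrix(vi = c(0.01, 0.03), cluster = c(1, 1),
> >                               r = 0.6, smooth_vi = TRUE)
> > V  # the variances on the diagonal are now 0.02 and 0.02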
> >
> > >Once again, any help on the above is greatly appreciated!
> > >
> > >Brendan
> >
>
>
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis