[R-meta] Inquiry on metafor trimfill and weightr

James Pustejovsky jepusto at gmail.com
Wed Dec 19 18:41:56 CET 2018


For trim-and-fill, it could be that non-convergence is due to
non-convergence of the random effects model used to initialize the
algorithm. In my experience, applying the R0 trim-and-fill algorithm after
fitting a fixed (common) effect model does not have convergence problems.
Not sure about the L0 version.
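As a minimal sketch of that suggestion (using metafor's built-in dat.bcg data purely for illustration), initializing trim-and-fill from a fixed-effect fit looks like:

```r
library(metafor)

# compute log risk ratios and sampling variances from the BCG vaccine data
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)

# fit a fixed (common) effect model rather than a random-effects model
fe <- rma(yi, vi, data = dat, method = "FE")

# apply trim-and-fill with the R0 estimator of the number of missing studies
tf <- trimfill(fe, estimator = "R0")
summary(tf)
```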

For weightr, there are several reasons that you can get non-convergence or
NPD hessian: it could be because the variance component estimate approaches
zero or because the weight parameters (the probability that an effect is
observed, given its significance level, relative to the probability that
significant effects are observed) are near zero. The latter happens when
one of the significance ranges has very few observed effects in it. For
instance, in the simplest 3-parameter model with one threshold at p = .025,
non-convergence tends to occur when there are no significant effects or all
significant effects. Either of these cases will be more or less likely
depending on the average effect size and degree of heterogeneity in the
underlying population.
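A minimal sketch of that situation, assuming simulated data: with a single threshold at p = .025, weightr's weightfunct() fits the 3-parameter model, and tabulating the p-values first shows when one significance interval is empty and trouble is likely:

```r
library(weightr)

set.seed(42)
k  <- 30
vi <- runif(k, 0.05, 0.2)                        # sampling variances
yi <- rnorm(k, mean = 0.2, sd = sqrt(vi + 0.1))  # true tau^2 = 0.1

# two-sided p-values for each simulated effect
pvals <- 2 * (1 - pnorm(abs(yi) / sqrt(vi)))
table(pvals < .05)  # if one cell is empty, expect convergence trouble

# 3-parameter selection model: one cutpoint at p = .025 (one-tailed)
res <- weightfunct(yi, vi, steps = c(0.025, 1))
res
```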

I've done some simulations with both of these methods, reported in this
forthcoming paper: https://psyarxiv.com/ea6kz/
The code for the simulations is here: https://osf.io/xqcra/
In particular, see the function fit_3PSM() for an example of one way to
handle the various non-convergence cases (though there are surely better
approaches).

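Along those lines, a simple base-R sketch of one way to keep a simulation loop from stopping is to wrap each fit in tryCatch() and record failures for later inspection (safe_fit here is a hypothetical helper, not the fit_3PSM() from the repository):

```r
# hypothetical wrapper: returns the fitted object, or NA plus the error
# message, so a simulation loop can continue past non-convergent replicates
safe_fit <- function(expr) {
  tryCatch(
    list(fit = eval(expr), error = NA_character_),
    error   = function(e) list(fit = NA, error = conditionMessage(e)),
    warning = function(w) list(fit = NA, error = conditionMessage(w))
  )
}

res <- safe_fit(quote(stop("Trim and fill algorithm did not converge.")))
is.na(res$fit)  # TRUE: the loop can log the failure and move on
```

Keeping the error messages (rather than silently skipping) makes it possible to report how often, and why, each method failed alongside the coverage results.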

On Wed, Dec 19, 2018 at 9:31 AM Michael Dewey <lists at dewey.myzen.co.uk> wrote:

> Dear Chishio
> Do the two methods stop on the same data sets? As well as possibly
> giving you a clue as to what is happening that would, in my opinion,
> make leaving those sets out more justifiable for your comparison.
> Michael
> On 19/12/2018 12:50, Chishio Furukawa wrote:
> > Dear metafor community,
> >
> > I am writing to seek your advice on using the metafor trimfill (Duval
> > and Tweedie 2000) and weightr (Vevea and Hedges 1995) methods.
> >
> > Currently I am conducting a simulation study comparing their performance
> > across various models of publication selection. However, when I looped
> > over many simulated meta-analysis data sets, I encountered various
> > errors that stopped the loop. I am writing to ask your advice on what I
> > can do to address them.
> >
> > 1) trimfill
> > - for some data, I receive an error message "Trim and fill algorithm did
> > not converge."
> > 2) weightr
> > - for some data, there is an error from "optim" reporting a "non-finite
> > finite-difference value."
> > - for some data, the Hessian is singular, so I cannot calculate the
> > standard error of each parameter.
> >
> > May I ask if anyone could advise me on (i) what features of data create
> > any of these problems, and (ii) if there are ways to let the computation
> > go through without having the code break? I am thinking, after all, I
> > could skip the few simulation data sets that generate the problems, and
> > compute the coverage probabilities and interval lengths on the data sets
> > that do not have these problems (although that might not be the most
> > ideal way to address them).
> >
> > Thank you so much in advance for your advice.
> >
> > Best regards,
> > Chishio
> >
> >
> >
> >
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis

