[R-meta] Inquiry on metafor trimfill and weightr

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Wed Dec 19 15:39:05 CET 2018

Dear Chishio,

As for trimfill(), I cannot tell you what particular features of the data might make (non)convergence more or less likely. But you could take a problematic dataset and try:

trimfill(res, verbose=TRUE)

What do you see in the output? You could also try increasing the maximum number of iterations:

trimfill(res, maxiter=1000)

But usually trim and fill converges quickly, so if it doesn't converge after 100 iterations (the default), then I doubt it will after 1000 (or more). But it is worth trying.

More generally, in your simulation code, you need to use the try() function. So, something like this:

for (...) {

   sav <- try(trimfill(res))

   if (!inherits(sav, "try-error")) {
      [do stuff if it converges]
   }

}
This way, the code/loop will continue even if the function inside try() throws an error.
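To make the pattern concrete, here is a minimal, self-contained sketch of such a loop. The fit_trimfill() function below is a hypothetical stand-in for trimfill(res); it deliberately errors on some iterations to mimic the non-convergence error, so the sketch runs without metafor installed.

```r
set.seed(42)

# Hypothetical stand-in for trimfill(res): errors on every 4th run
# to mimic "Trim and fill algorithm did not converge."
fit_trimfill <- function(i) {
  if (i %% 4 == 0) stop("Trim and fill algorithm did not converge.")
  list(estimate = rnorm(1))  # placeholder for a fitted model object
}

results <- rep(NA_real_, 12)  # NA marks runs that failed to converge

for (i in seq_along(results)) {
  sav <- try(fit_trimfill(i), silent = TRUE)
  if (!inherits(sav, "try-error")) {
    results[i] <- sav$estimate  # [do stuff if it converges]
  }
}

sum(is.na(results))  # number of skipped (non-converged) runs
```

The failed runs simply stay NA, so afterwards you can compute coverage or interval lengths on the successful runs (and report how many were skipped) with, e.g., mean(results, na.rm = TRUE).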


-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Chishio Furukawa
Sent: Wednesday, 19 December, 2018 13:50
To: R meta
Subject: [R-meta] Inquiry on metafor trimfill and weightr

Dear community of metafor users,

I am writing to seek your advice on using metafor's trimfill() (Duval and
Tweedie 2000) and weightr (Hedges and Vevea 2005).

Currently I am conducting a simulation study comparing their performance
across various models of publication selection. However, when I looped over
many simulated meta-analysis data sets, I encountered various errors that
stopped the loop. I am writing to ask your advice on what I can do to
address them.

1) trimfill
- for some data sets, I receive the error message "Trim and fill algorithm
did not converge."
2) weightr
- for some data sets, the optim() optimizer fails with a "non-finite
finite-difference value" error.
- for some data sets, the Hessian is singular, so I cannot calculate the
standard error of each parameter.

May I ask if anyone could advise me on (i) what features of the data create
any of these problems, and (ii) whether there are ways to let the computation
run through without the code breaking? I am thinking that, after all, I
could skip the few simulated data sets that generate these problems and
compute the coverage probabilities and interval lengths on the data sets
that do not (although that might not be the most ideal way to address them).

Thank you so much in advance for your advice.

Best regards,

Chishio Furukawa
PhD Candidate
Department of Economics
Massachusetts Institute of Technology
Cell: 617-767-1209
Email: chishio using mit.edu
