[R-meta] Error: Ratio of largest to smallest sampling variance extremely large
Florencia Miguel
mflormiguel at gmail.com
Fri Feb 15 18:40:10 CET 2019
I understand. Thanks so much for the feedback.
Best
Florencia
On Fri, Feb 15, 2019 at 2:21 PM, James Pustejovsky (
jepusto using gmail.com) wrote:
> I agree with Wolfgang's suggestion that it would be important to conduct
> sensitivity analysis here. Specifically, I would want to see what happens
> if the smallest sampling variances are set to some lower bound. For
> example, if there is just one effect size with a very small sampling
> variance, try replacing the actual sampling variance with the next largest
> sampling variance (or if there are several very small, outlying variances,
> replace them all with the minimum of the remaining sampling variances).
> Hopefully this won't have much consequence on the overall average effect
> size. But I would anticipate that the between-study heterogeneity estimate
> could be quite sensitive.
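>
> As a rough sketch of that replacement (assuming the sampling variances are in
> mdata.all$var.es and, purely for illustration, flagging anything more than 1e7
> times smaller than the largest variance as an outlier):
>
> vi <- mdata.all$var.es
> too.small <- vi < max(vi) / 1e7           # flag the outlying small variances
> vi.floor  <- min(vi[!too.small])          # minimum of the remaining variances
> mdata.all$var.es.trunc <- ifelse(too.small, vi.floor, vi)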
>
> James
>
> On Fri, Feb 15, 2019 at 10:08 AM Viechtbauer, Wolfgang (SP) <
> wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
>> Hi Florencia,
>>
>> As the warning says, the results may not be stable. To be precise, the
>> function checks if:
>>
>> max(vi) / min(vi) >= 1e7
>>
>> So, that would be the case, for example, if the largest sampling variance
>> is 1 and the smallest is .0000001 (or even smaller). For many effect size
>> measures, such a discrepancy is usually indicative of a data entry/coding
>> error, which is why James suggested checking the data.
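>>
>> To see where your data stand relative to that cutoff, you can compute the
>> ratio directly (with var.es being the variable holding your sampling
>> variances):
>>
>> with(mdata.all, max(var.es) / min(var.es))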
>>
>> You mentioned using log response ratios, where the usual equation for the
>> sampling variance is:
>>
>> vi <- sd1i^2/(n1i*m1i^2) + sd2i^2/(n2i*m2i^2)
>>
>> Another way to write this is:
>>
>> vi <- cv1i^2 / n1i + cv2i^2 / n2i
>>
>> where cv1i and cv2i are the coefficients of variation of the two groups
>> (i.e., cv1i = sd1i/m1i and cv2i = sd2i/m2i). So, either the sample sizes
>> vary by many orders of magnitude across the studies in your data (e.g., one
>> study with 10 observations, another with 100000000) or the coefficient of
>> variation values differ greatly (or some combination of the two). The former
>> seems unlikely to me; the latter would make me question whether the
>> corresponding lrr values should even be combined.
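>>
>> If your dataset contains the group means, SDs, and sample sizes, a quick
>> check along those lines could look like this (the column names are just the
>> ones from the formula above; adjust them to whatever your data actually use):
>>
>> cv1i <- with(mdata.all, sd1i / m1i)    # coefficient of variation, group 1
>> cv2i <- with(mdata.all, sd2i / m2i)    # coefficient of variation, group 2
>> summary(cv1i)
>> summary(cv2i)
>> summary(mdata.all[, c("n1i", "n2i")])  # do the sample sizes span many orders of magnitude?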
>>
>> This aside, I cannot tell you whether the warning can be ignored or not
>> in your particular case. The reason why I put in the check is that I saw
>> some numerical issues with the internal algorithms in rma.mv() that were
>> sometimes triggered by having a V matrix that is very ill-conditioned (
>> https://en.wikipedia.org/wiki/Condition_number). Using
>> max(vi) / min(vi) >= 1e7 is a very simplified check for this.
>>
>> If you have just a few cases where vi is very, very small (or large), you
>> could see what happens to the results if you nudge those vi values up (or
>> down) a bit. Hopefully the results do not change drastically; that would be
>> a good sign that things are alright.
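>>
>> A rough sketch of such a sensitivity check (again using var.es; the floor of
>> max(vi) * 1e-7 is just an illustration, and the first fit may still trigger
>> the warning):
>>
>> vi      <- mdata.all$var.es
>> vi.sens <- pmax(vi, max(vi) * 1e-7)    # pull the very small variances up to a floor
>> fit1 <- rma.mv(yi = lrr, V = vi,      mods = ~ aridity.index,
>>                random = ~ 1 | ID, data = mdata.all)
>> fit2 <- rma.mv(yi = lrr, V = vi.sens, mods = ~ aridity.index,
>>                random = ~ 1 | ID, data = mdata.all)
>> cbind(coef(fit1), coef(fit2))          # fixed effects will hopefully be close
>> c(fit1$sigma2, fit2$sigma2)            # the variance component may be more sensitive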
>>
>> Best,
>> Wolfgang
>>
>> -----Original Message-----
>> From: florencia miguel [mailto:mflormiguel using gmail.com]
>> Sent: Friday, 15 February, 2019 16:11
>> To: Viechtbauer, Wolfgang (SP)
>> Cc: James Pustejovsky; R meta
>> Subject: Re: [R-meta] Error: Ratio of largest to smallest sampling
>> variance extremely large
>>
>> Thanks James and Wolfgang.
>>
>> James, I've checked the data and I do have some very small sampling variances.
>> But these are not errors; that is the structure of the data we are working with.
>>
>> Wolfgang, I installed that version of metafor. I did get warnings when
>> running the models; is that OK?
>>
>> Best
>> Florencia
>>
>> On Thu, Feb 14, 2019 at 7:29 PM, Viechtbauer, Wolfgang (SP) (
>> wolfgang.viechtbauer using maastrichtuniversity.nl) wrote:
>> Hi Florencia,
>>
>> If you install the 'devel' version of metafor (
>> https://wviechtb.github.io/metafor/#installation), then you should get a
>> warning but no longer an error. However, the warning is there for a reason;
>> the results might not be trustworthy.
>>
>> Best,
>> Wolfgang
>>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:
>> r-sig-meta-analysis-bounces using r-project.org] On Behalf Of James Pustejovsky
>> Sent: Thursday, 14 February, 2019 22:46
>> To: florencia miguel
>> Cc: R meta
>> Subject: Re: [R-meta] Error: Ratio of largest to smallest sampling
>> variance extremely large
>>
>> Florencia,
>>
>> I think the issue might have more to do with your data than with the
>> estimation procedures. Could you try running one or all of the following
>> lines:
>>
>> summary(mdata.all$var.es)
>> plot(density(mdata.all$var.es))
>> dplyr::top_n(mdata.all, -5, var.es)  # the five rows with the smallest var.es
>>
>> These will give you a summary of the distribution of the sampling variances
>> of your effect size estimates. If some of them are very, very small, that
>> would explain the error you encountered. It might be worth checking
>> the summary statistics for the effect sizes with very small variances, to
>> see if there are data entry errors, or reporting errors in the primary
>> sources.
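>>
>> For example, in base R (without any extra packages):
>>
>> idx <- order(mdata.all$var.es)[1:5]   # rows with the five smallest sampling variances
>> mdata.all[idx, ]                      # inspect the corresponding means, SDs, and sample sizes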
>>
>> James
>>
>> On Thu, Feb 14, 2019 at 1:57 PM florencia miguel <mflormiguel using gmail.com>
>> wrote:
>>
>> > Dear all, I am running a meta-analysis with the main aim of comparing three
>> > different kinds of interventions and four kinds of outcomes. I want to fit
>> > separate models for the interventions and outcomes. I could run random-effects
>> > models using the package meta but, as I need to include moderators in the
>> > models, I tried the metafor package.
>> >
>> > The problem is that I obtained this error when running the rma.uni and
>> > rma.mv functions:
>> > "Error in rma.mv(yi = lrr, V = var.es, mods = ~aridity.index, method =
>> > "REML", : Ratio of largest to smallest sampling variance extremely large.
>> > Cannot obtain stable results."
>> >
>> > I am using the log response ratio as the effect size. I know that the data
>> > are very heterogeneous (some rows with high variance values and others with
>> > low variances) because I am comparing different kinds of measures. So, I
>> > fitted models by subgroups (subsetting by intervention) and obtained the
>> > same type of error.
>> >
>> > Here is some of my code:
>> >
>> > mod1 <- rma(lrr, var.es, mods = ~ aridity.index, data = mdata.all,
>> >             subset = intervention == "vegetation")
>> >
>> > mod.2 <- rma.mv(yi = lrr, V = var.es, mods = ~ aridity.index, method = "REML",
>> >                 test = "t", random = ~ 1 | ID, data = mdata.all, sparse = TRUE)
>> >
>> > mod.3 <- rma(lrr, var.es, mods = ~ intervention, data = mdata.all,
>> >              subset = paradigm == "active")
>> >
>> > ## filtering by intervention
>> > mdata.veg <- mydata %>%
>> >   filter(intervention == "vegetation") %>%
>> >   filter(!is.na(lrr)) %>%
>> >   filter(!is.na(var.es))
>> >
>> > mod <- rma(lrr, var.es, mods = ~ aridity.index, digits = 4, data = mdata.veg)
>> >
>> > I don't know why I am getting the same error after subsetting or filtering
>> > by groups.
>> >
>> > Thank you in advance!
>> > Florencia
>>
>