[R-sig-ME] ZINB model validation and interpretation

Ben Bolker bbolker at gmail.com
Mon Oct 9 19:07:24 CEST 2017


 Please keep r-sig-mixed-models in the Cc: (this is borderline since
the questions are a bit off topic, but the more important [to me]
point is that I don't want to engage in off-list conversations about
stats help outside of direct collaborations ...)

On Fri, Oct 6, 2017 at 4:11 PM,  <miriam.alzate at unavarra.es> wrote:
> Hi Ben,
>
> Many thanks for the answer and sorry for the delay.
>
> The first part is OK. Would you compute the BIC as well? The point is
> that I am getting an NA when I compute the BIC in R for my ZINB and ZIP
> models, although it works correctly for the P and NB models. The AIC,
> Vuong, and likelihood ratio tests are OK.

  BIC, AIC, likelihood ratio, and Vuong tests are all reasonable
approaches to model comparison. They all answer slightly different
questions, and you should be aware of the differences; e.g. see
http://emdbolker.wikidot.com/blog:aic-vs-bic . I don't know why you
get NA values; a reproducible example would be helpful (you should
probably post it to the glmmTMB issues list,
<https://github.com/glmmTMB/glmmTMB/issues>).
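In the meantime, if BIC() returns NA for a fitted object, the value can be
computed by hand from the log-likelihood. A minimal sketch, assuming a
hypothetical fitted object fit_zinb (e.g. from pscl::zeroinfl() or
glmmTMB::glmmTMB()) for which logLik() works:

```r
## Hedged sketch: computing BIC by hand when BIC() returns NA.
## 'fit_zinb' is a hypothetical zero-inflated fit; anything with a
## working logLik() method will do.
ll  <- logLik(fit_zinb)
k   <- attr(ll, "df")    # number of estimated parameters
n   <- nobs(fit_zinb)    # fall back to nrow(your_data) if nobs() fails
bic <- -2 * as.numeric(ll) + k * log(n)
bic
```

If the hand-computed value is fine while BIC() returns NA, that narrows the
problem down (e.g. to a missing nobs() or df method), which is useful
information to include in the issue report.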

>
> The second part is quite unfamiliar to me. Could you let me know which
> package, or what kind of code, I should use for it? I have read something
> about bootstrapping.

  You can use the simulate() function for a sort of posterior
predictive simulation (although not one that takes uncertainty in the
parameter estimates into account ...); see e.g. Gelman and Hill 2007.
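For example, a hedged sketch checking the proportion of zeros; 'fit' is a
hypothetical zero-inflated fit from glmmTMB (which provides a simulate()
method), and the quantity checked is one you would pick yourself:

```r
## Posterior-predictive-style check of the zero proportion via simulate().
## 'fit' is a hypothetical glmmTMB fit; parameter uncertainty is ignored.
sims <- simulate(fit, nsim = 1000)                  # one column per simulation
sim_zero <- sapply(sims, function(y) mean(y == 0))  # simulated zero proportions
obs_zero <- mean(model.response(model.frame(fit)) == 0)
hist(sim_zero, main = "Simulated proportion of zeros")
abline(v = obs_zero, col = "red")                   # where the data fall
mean(sim_zero >= obs_zero)                          # rough tail probability
```

If the observed value sits far in the tail of the simulated distribution,
the model is failing to reproduce that feature of the data.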
>
> Thanks a lot
>
> Miriam
>
> On Wed, 20 September 2017, 20:24, Ben Bolker wrote:
>> This isn't actually a mixed-model question as far as I can tell, but
>> I'll take a stab at it.  (https://stats.stackexchange.com is probably
>> the best option for follow-ups, as R-help isn't for general statistics
>> questions.)
>>
>> Your approach seems not-crazy to me, although I would probably be
>> lazier/sloppier and compare all four cases (P, NB, ZIP, ZINB) in a
>> single AIC(c) table. In any case, there are very basic issues with
>> either P vs NB or ZIP vs ZINB tests based on any of the standard
>> approaches (Vuong, *IC, likelihood ratio test) that come from the fact
>> that one of the pair of models is on the boundary of the feasible
>> space, see e.g.
>> https://stats.stackexchange.com/questions/182020/zero-inflated-poisson-regression-vuong-test-raw-aic-or-bic-corrected-results/217869
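A single table of all four candidates could be built as follows (a minimal
sketch; the objects fit_p, fit_nb, fit_zip, fit_zinb, the formula y ~ x,
and the data frame 'dat' are all hypothetical placeholders):

```r
## Hedged sketch: all four candidate count models in one AIC table.
## The formula y ~ x and the data 'dat' are placeholders.
fit_p    <- glm(y ~ x, family = poisson, data = dat)
fit_nb   <- MASS::glm.nb(y ~ x, data = dat)
fit_zip  <- pscl::zeroinfl(y ~ x | x, data = dat)                   # ZIP
fit_zinb <- pscl::zeroinfl(y ~ x | x, dist = "negbin", data = dat)  # ZINB
AIC(fit_p, fit_nb, fit_zip, fit_zinb)
## or sorted, with model weights:
## bbmle::AICtab(fit_p, fit_nb, fit_zip, fit_zinb, weights = TRUE)
```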
>>
>> For validity and robustness, I would suggest more "impressionistic"
>> diagnostics (inspect residuals for independence of predictors, lack of
>> heteroscedasticity; look for influential/outlier residuals; compare
>> patterns of predictions with patterns in raw data for evidence of
>> unexpected patterns). If you want more formal tests, try generating
>> posterior predictive simulations of quantities that are important to
>> you and see if they match the observed values of those quantities.
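Those informal residual diagnostics could be sketched as follows (hedged;
'fit' is a hypothetical fitted count model and 'dat$x' a hypothetical
predictor):

```r
## Hedged sketch of informal residual diagnostics for a count-model fit.
## 'fit' and 'dat$x' are hypothetical placeholders.
r <- residuals(fit, type = "pearson")
plot(fitted(fit), r); abline(h = 0, lty = 2)  # trends/fanning suggest trouble
plot(dat$x, r)                                # residuals vs a predictor
which(abs(r) > 3)                             # crude flag for outlying points
```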
>>
>> On Mon, Sep 18, 2017 at 6:25 PM,  <miriam.alzate at unavarra.es> wrote:
>>> Hello,
>>> I am working with a ZINB model in R. To validate it, I first ran a Vuong
>>> test to compare it with a standard NB model; the result is that the ZINB
>>> is better than the NB. Then I compared the ZINB to a ZIP model using
>>> the AIC and the log-likelihood, and I again found that the ZINB fits
>>> better than the ZIP.
>>>
>>> However, I would like to know if I should take other tests into
>>> consideration to show the validity and robustness of my model.
>>>
>>> On the other hand, I would like to know whether I can interpret the
>>> coefficients directly from the model output, or whether I should compute
>>> odds ratios.
>>>
>>> Thanks a lot,
>>>
>>> Miriam
>>>
>>> _______________________________________________
>>> R-sig-mixed-models at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
>
>


