[R-sig-ME] Poor mixing and autocorrelation of ZIP data in MCMCglmm

Pierre de Villemereuil  pierre.devillemereuil at ephe.psl.eu
Fri Jun 19 10:43:55 CEST 2020


Hi,

> I wanted to double-check if I understand the prior suggestion correctly.
> While I do understand the basics of priors to some extent, I still find it
> difficult to look at a model and say which prior would be appropriate. I
> thought that the probability density distribution of the prior is flatter
> the smaller nu is, i.e. the smaller nu, the weaker the prior. By
> recommending to make the prior stronger, do you mean to increase nu? If so,
> I have nu=1000 in the G part of the prior formula, which I thought was a
> very strong prior already? Or did you mean to increase nu in the R-part of
> the prior specification?

I think the prior is OK as it is.
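
For reference, here is a rough sketch of what a parameter-expanded prior of
the kind you describe often looks like for a zero-inflated Poisson, fitted in
MCMCglmm as a bivariate (Poisson + zi) model. The dimensions and values below
are illustrative assumptions, not your actual specification:

    prior <- list(
        R = list(V = diag(2), nu = 0.002, fix = 2),  # the zi residual variance
                                                     # is not identifiable, so
                                                     # it is usually fixed
        G = list(G1 = list(V = diag(2), nu = 1000,
                           alpha.mu = rep(0, 2),
                           alpha.V  = diag(2))))

With parameter expansion (i.e. when alpha.mu and alpha.V are given), the
marginal prior on each standard deviation is a folded scaled t with nu degrees
of freedom, so a large nu makes it roughly half-normal with a scale set by
alpha.V, rather than an ever "stronger" inverse-Wishart.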

> I read somewhere that the ratio (NITT-BURN)/THIN should be kept between
> 1000 and 2000. So, my question is: as long as this ratio stays in that range,
> is it OK to run the model for very long, or is that trying too hard to fit
> the model?

That's not the best kind of advice, in my opinion. The raw number of stored iterations is fairly meaningless without accounting for autocorrelation, which is exactly what effective sample size was invented to do. Simply decide on a minimal target for effective sample size, depending on the precision you would like for your estimates.
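
For instance, a minimal sketch using coda (attached automatically with
MCMCglmm; 'mod' stands in for your fitted model object):

    # Effective sample size per parameter accounts for autocorrelation,
    # unlike the raw count of stored samples, (nitt - burnin) / thin:
    effectiveSize(mod$VCV)   # (co)variance components
    effectiveSize(mod$Sol)   # fixed effects (plus random effects if pr = TRUE)

You could, say, aim for at least 1000 for every component whose credible
interval you want to report.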

> I ran a model with NITT = 920000, BURN = 20000, THIN = 600, i.e.
> (NITT-BURN)/THIN = 1500. I attached the details below and sent you the
> trace and density plots (I removed them from the list email as they would
> make the message too large). The ZI-related variance in the baitIntake is
> slightly larger again (12 compared to 9 in the previous model where I
> excluded some animals).  The effective sample sizes are larger than 200 for
> all ZI-related variances but still rather small for zi_baitIntake.id (412)
> and zi_baitIntake.event (303). The autocorrelation is also rather large for
> some of the ZI-related variances.

If you are still unsatisfied by the effective sample size, you simply need to run the model for longer.
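
Concretely, something along these lines, where the formula, data and prior are
placeholders rather than your actual model; only nitt, burnin and thin matter
for this point:

    mod_long <- MCMCglmm(y ~ trait - 1,
                         random = ~ idh(trait):id + idh(trait):event,
                         rcov   = ~ idh(trait):units,
                         family = "zipoisson", data = dat, prior = prior,
                         nitt = 2000000, burnin = 20000, thin = 200,
                         verbose = FALSE)

This stores (2000000 - 20000) / 200 = 9900 samples instead of 1500.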

> However, I also read some more publications this morning that used MCMCglmm
> and looked at their nitt, burn and thin parameters. A lot of them result in
> (NITT-BURN)/THIN = 1000. I therefore re-ran my model with parameters that
> also gave a ratio of 1000.

Again, this is somewhat meaningless, as it depends on your autocorrelation level, which in turn depends on both the model and the data. If anything, I believe you should save many more than 1000 samples if you want to increase your effective sample size without having to run your MCMC for eons. The way you set everything up with this MULT variable might not be the best way forward.
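
A sketch of the alternative (the MULT-style setup replaced by letting the
observed autocorrelation guide thin; the numbers are illustrative):

    nitt   <- 1000000
    burnin <- 20000
    thin   <- 100                  # smaller thin => more stored samples
    (nitt - burnin) / thin         # 9800 stored samples, well above 1000
    autocorr.diag(mod$VCV)         # if autocorrelation at lag thin is still
                                   # high, increase nitt rather than thin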

> On a different note: the prior question at the beginning of the message
> might not be appropriate for the list as I realize it is a rather basic
> question. I tried to find some more resources online to get a better
> understanding about choosing and defining appropriate priors in the future.
> I found some webpages and forum entries but wanted to ask if you have
> resources you would recommend for better understanding priors and
> prior choice?

There is this page from Andrew Gelman, but not many of the priors presented there apply directly to MCMCglmm:
https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations

Cheers,
Pierre


