[R-sig-ME] mcmcglmm Priors: Auto correlation and extreme post mean values
sree datta
@reedt@8 @end|ng |rom gm@||@com
Fri May 15 16:24:44 CEST 2020
Hi Alexander,
Prior to running the model with MCMCglmm, have you tried a log-linear
model, a mixed model with nlme or lme4, or a decision tree? I ask
because these simpler univariate and multivariate approaches would
reveal a lot about the association between your dependent variable and
your predictor variables.
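As a concrete illustration (a sketch only; the data here are simulated, and the variable names are taken from the model calls in your message below), a binomial GLMM fit with lme4 will immediately flag factor levels that have no successes at all:

```r
library(lme4)

# Simulated stand-in for the real data (NOT Alex's data): ~176 trap
# records, 5 human-presence levels, with two levels (HP2, HP4) having
# no successes, mimicking the separation described below.
set.seed(1)
d <- data.frame(
  Human_presence = factor(sample(paste0("HP", 0:4), 176, replace = TRUE)),
  Site           = factor(sample(letters[1:8], 176, replace = TRUE))
)
d$Jackal_trapped <- ifelse(d$Human_presence %in% c("HP2", "HP4"), 0,
                           rbinom(nrow(d), 1, 0.3))

# A zero column for successes in this cross-tabulation indicates
# complete separation: those effects are not identified by the
# likelihood, no matter how long the chain runs.
with(d, table(Human_presence, Jackal_trapped))

# The frequentist mixed model will show huge estimates and standard
# errors (often with convergence warnings) for the separated levels.
m0 <- glmer(Jackal_trapped ~ Human_presence + (1 | Site),
            family = binomial, data = d)
summary(m0)
```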
These are the exploratory models I would run to work out what
reasonable priors for the Bayesian models would look like. If you could
share more of your explorations with the data, that would be helpful.
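For what it's worth, for a binary response a commonly recommended MCMCglmm setup (following Hadfield's course notes) is to fix the residual variance, since it is not identifiable from binary data, and to put a parameter-expanded prior on the random effect. A sketch only, with names adapted from the model calls in your message below:

```r
# Sketch of a prior often recommended for binary (threshold/ordinal)
# responses in MCMCglmm: fix the residual variance at 1 and use a
# parameter-expanded prior on the site variance.
prior_bin <- list(
  R = list(V = 1, fix = 1),
  G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000))
)

# Hypothetical call reusing the names from the message below:
# library(MCMCglmm)
# m <- MCMCglmm(Jackal_trapped ~ Human_presence + MP + Landuse,
#               random = ~ Site, family = "threshold", prior = prior_bin,
#               data = data, nitt = 500000, thin = 100, burnin = 50000)
```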
Sree
On Fri, May 15, 2020 at 9:29 AM Alexander Botha <alexbotha555 using gmail.com>
wrote:
> Good day List,
> My name is Alex. I am currently using the package MCMCglmm to determine the
> impact of lunar cycle, human presence and land use type (agricultural vs
> protected) on the trapping success of meso-predators for my PhD. I am new
> to MCMCglmm and I was wondering if you could assist with my problem.
>
> Structure of data: I am testing whether and how lunar cycle (factor) and
> human presence (factor) affect trapping success. Trapping success is count
> data, but because we trapped at most one individual, the data can also be
> treated as binary. Human presence (HP) is split into 5 categories.
>
> Model: I am running Human presence, Moon phase and Land use type as fixed
> effects and the site name as a random effect.
>
> Problem: I have quantified human presence at various locations; in areas
> exposed to intense human pressure there is a complete absence of
> successful trappings (HP2 and HP4 have no trapping success, resulting in
> only zeros). After running MCMCglmm, these parameters display
> autocorrelation in their trace plots (whether I set them as fixed or
> random effects) as well as in the Geweke and Gelman diagnostics, but the
> others show almost perfect mixing, especially when I run 5-20 million
> iterations. I also see poorly mixed VCV traces if the degree of belief is
> less than 20. I have used a variety of priors, see below, with no success.
> I have also increased the number of iterations to 20 million, decreased
> the thinning factor, increased the burn-in and used poisson, zapoisson,
> zipoisson, ordinal and threshold distributions, with threshold and ordinal
> having the best DIC values. In addition, I am seeing extreme posterior
> mean values for the models with the lowest DIC values, which is not
> something I see anywhere in the literature (see below).
>
> I have read Jarrod Hadfield's excellent course and tutorial notes, as well
> as other literature in my field, online tutorials, information documents,
> the vignettes and help pages in R, and the correspondence on this email
> list between Jarrod Hadfield and other users, but I cannot figure out why
> the above is happening. My data set is quite small (176 entries in total,
> with about 40 entries per HP category) and, according to what I have read,
> priors can have a large impact on small to moderately sized data sets.
>
> Questions:
> 1. Does the problem lie with my priors? And if so, do you have any tips on
> how to solve it?
> 2. Should I be using the ordinal or threshold family? I have read that the
> mixing of zi- and za-poisson models can be poor, and from my attempts with
> them this seems to be the case.
> 3. Are either the extreme posterior mean values or the autocorrelation in
> certain parameters avoidable in this case?
>
> *Examples of priors*
> prior1 <- list(R=list(V=diag(1)*1e-8, nu=0.2), G=list(G1=list(V=diag(1)*1e-6, nu=0.2)))
> prior2 <- list(R=list(V=diag(1), nu=0.2), G=list(G1=list(V=diag(1), nu=2)))
> G1 <- list(V=diag(2), nu=2, alpha.mu=c(0,0), alpha.V=diag(2)*1000)
> prior3 <- list(R=list(V=diag(2), nu=0, fix=2), G=list(G1=G1))
> prior4:
> A note: I have run these with the degree of belief ranging from 0.0002 to
> 20, with the best mixing (apart from the autocorrelation in certain
> parameters) with nu set at 20 for the R structure. If I set the G
> structure's degree of belief to less than 20, the random effect mixes
> extremely poorly. I have also used variances ranging from 1e-1 to 1e-20
> and fixed the variances at 1 and 2 using various different priors.
> *Examples of model scripts:*
> OM1.1 <- MCMCglmm(Jackal_trapped ~ Human_presence + MP + Landuse,
>     random = ~ Site, data = data, prior = prior1, family = "ordinal",
>     nitt = 5000000, thin = 500, burnin = 500000,
>     verbose = FALSE, pl = TRUE, DIC = TRUE)
> model1.1 <- MCMCglmm(Jackal_trapped ~ trait,
>     random = ~ us(trait):Human_presence + us(trait):Landuse,
>     family = "zapoisson", rcov = ~ idh(trait):units,
>     data = data, prior = prior,
>     nitt = 500000, thin = 500, burnin = 200000, verbose = FALSE)
>
> model1.2 <- MCMCglmm(Jackal_trapped ~ trait,
>     random = ~ idh(trait):Human_presence + idh(trait):Landuse,
>     family = "zapoisson", rcov = ~ idh(trait):units,
>     data = data, prior = prior,
>     nitt = 500000, thin = 500, burnin = 200000, verbose = FALSE)
>
> *Example of large post mean summary*
> family: I have used zapoisson, ordinal and zipoisson distributions
> prior1 <- list(R=list(V=diag(1)*1e-4, nu=0.2), G=list(G1=list(V=diag(1)*1e-2, nu=2)))
> OM1.1 <- MCMCglmm(Jackal_trapped ~ Human_presence + MP + Landuse,
>     random = ~ Site, data = data, prior = prior1, family = "family",
>     nitt = 5000000, thin = 500, burnin = 500000,
>     verbose = FALSE, pl = TRUE, DIC = TRUE)
> Iterations = 500001:4999501
> Thinning interval = 500
> Sample size = 9000
>
> DIC: 0.003093406
>
> G-structure: ~Site
>
> post.mean l-95% CI u-95% CI eff.samp
> Site 0.1943 0.001154 0.1913 9000
>
> R-structure: ~units
>
> post.mean l-95% CI u-95% CI eff.samp
> units 1.755e+10 1.72e+09 3.97e+10 3632
>
> Location effects: Jackal_trapped ~ Human_presence + MP + Landuse
>
>                 post.mean l-95% CI u-95% CI eff.samp  pMCMC
> (Intercept)       -107789  -229797     9915     4666 0.0498 *
> Human_presence1     49459   -12241   117809     6957 0.1002
> Human_presence2    -49280  -153682    46279     8478 0.3184
> Human_presence3     18642   -66603   104880     8529 0.6684
> Human_presence4   -214029  -340008   -89259     6098 <1e-04 ***
> MPFQ               -44627  -119345    24385     7328 0.1856
> MPNM                12362   -52404    72882     9000 0.6691
> MPTQ               -43907  -122103    22055     8357 0.1942
> Landusereserve     -51105  -138603    35033     9000 0.2302
>
> *Example of model outputs with autocorrelation*
> family: I have used zapoisson, ordinal and zipoisson distributions
> prior2 <- list(R=list(V=diag(1)*1e-6, nu=20), G=list(G1=list(V=diag(1)*1e-6, nu=20)))
> OM2.1 <- MCMCglmm(Jackal_trapped ~ Human_presence + MP + Landuse,
>     random = ~ Site, data = data, prior = prior2, family = "family",
>     nitt = 5000000, thin = 500, burnin = 500000,
>     verbose = FALSE, pl = TRUE, DIC = TRUE)
> Iterations = 500001:4999501
> Thinning interval = 500
> Sample size = 9000
>
> DIC: 182.6921
>
> G-structure: ~Site
> post.mean l-95% CI u-95% CI eff.samp
> Site 1.114e-06 5.171e-07 1.911e-06 8515
>
> R-structure: ~units
>
> post.mean l-95% CI u-95% CI eff.samp
> units 1.106e-06 5.129e-07 1.87e-06 9000
>
> Location effects: Jackal_trapped ~ Human_presence + MP + Landuse
>
>                 post.mean l-95% CI u-95% CI eff.samp   pMCMC
> (Intercept)      -0.70349 -1.20259 -0.28723    2.027 < 1e-04 ***
> Human_presence1   0.53752  0.41033  0.67240    6.030 < 1e-04 ***
> Human_presence2  -0.44909 -0.65858 -0.03042    2.600 0.00289 **
> Human_presence3  -0.11236 -0.48808  0.30079    1.683 0.53556
> Human_presence4  -1.38806 -1.74144 -0.93892    2.829 < 1e-04 ***
> MPFQ             -0.38918 -0.57513 -0.23967    2.300 < 1e-04 ***
> MPNM              0.14273 -0.05171  0.28830    1.625 0.14356
> MPTQ             -0.25338 -0.52899 -0.03162    1.477 0.01222 *
> Landusereserve   -0.64469 -1.06522 -0.18880    2.239 < 1e-04 ***
>
> *Example: Using my variables as random effects*
> *family: I have used zapoisson, ordinal and zipoisson distributions*
> model1.1 <- MCMCglmm(Jackal_trapped ~ trait,
>     random = ~ us(trait):Human_presence + us(trait):Landuse,
>     family = "family", rcov = ~ idh(trait):units,
>     data = data, prior = prior,
>     nitt = 500000, thin = 500, burnin = 200000, verbose = FALSE)
> Iterations = 200001:499501
> Thinning interval = 500
> Sample size = 600
>
> DIC: 175.5933
>
> G-structure:  ~us(trait):Human_presence
>
>                                                               post.mean   l-95% CI  u-95% CI eff.samp
> traitJackal_trapped:traitJackal_trapped.Human_presence        1.179e-06  5.584e-07 2.051e-06    472.7
> traitza_Jackal_trapped:traitJackal_trapped.Human_presence    -1.243e-08 -5.964e-07 5.494e-07    600.0
> traitJackal_trapped:traitza_Jackal_trapped.Human_presence    -1.243e-08 -5.964e-07 5.494e-07    600.0
> traitza_Jackal_trapped:traitza_Jackal_trapped.Human_presence  1.172e-06  5.252e-07 1.883e-06    507.3
>
>               ~us(trait):Landuse
>
>                                                        post.mean   l-95% CI  u-95% CI eff.samp
> traitJackal_trapped:traitJackal_trapped.Landuse        1.143e-06  4.876e-07 2.029e-06      600
> traitza_Jackal_trapped:traitJackal_trapped.Landuse     7.237e-09 -5.758e-07 4.980e-07      600
> traitJackal_trapped:traitza_Jackal_trapped.Landuse     7.237e-09 -5.758e-07 4.980e-07      600
> traitza_Jackal_trapped:traitza_Jackal_trapped.Landuse  1.157e-06  4.896e-07 2.022e-06      600
>
> R-structure: ~idh(trait):units
>
> post.mean l-95% CI u-95% CI eff.samp
> traitJackal_trapped.units 9.855e-07 4.689e-07 1.685e-06 704.6
> traitza_Jackal_trapped.units 1.017e-06 4.490e-07 1.625e-06 600.0
>
> Location effects: Jackal_trapped ~ trait
>
> post.mean l-95% CI u-95% CI eff.samp pMCMC
> (Intercept) 0.7789 0.7730 0.7875 8.419 <0.002 **
> traitza_Jackal_trapped -2.8911 -2.9052 -2.8741 6.982 <0.002 **
>
> I hope I have shared enough info regarding my problem and that it makes
> sense. I hope my post meets the requirements of the list and that it does
> not seem as if I am making my problem somebody else's; I am honestly just
> lost at the moment.
> I welcome any constructive criticism and any other help you can provide.
>
> Thank you for your help.
> I look forward to your responses.
>
> With kind regards,
>
> Alexander Edward Botha
> alexbotha555 using gmail.com 082 414 9030
> PhD candidate
> Mammal ecology
>
>
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>