[R-sig-ME] zero-inflated models in MCMCglmm

Gustaf Granath gustaf.granath at gmail.com
Thu Dec 1 00:30:17 CET 2016


Hi,

Thanks for looking at this Jarrod. I have included R code below and data 
is loaded from github.

I realized that I had missed one thing though. I must have suppressed 
warnings, because I now see that my zip model gives a warning about not 
being able to estimate all fixed effects (which are thus removed). I get 
no warnings for the poisson and zap models. The warning is telling me 
that there is a reason why the contrasts are messed up, but I haven't 
been able to pinpoint what's going on (I have checked that the 
experimental design is correct, etc.). This is likely very case-specific, 
but I'd appreciate any qualified guess that might help me.

Cheers

Gustaf

#################
# test data
require(RCurl)
zero.dat <- read.csv(text = getURL("https://raw.githubusercontent.com/ggranath/zero_inf_models/master/test_data.csv"), header = TRUE)
#zero.dat <- read.csv("test_data.csv")

#plots nested in site (X3 treatments at plot-level)
zero.dat$nested_plot <- factor(with(zero.dat, paste(site, plot, sep = "_")))

#zip model
nitt = 2000 #low for testing
thin = 10 #low for testing
prior = list( R = list(V = diag(2), nu = 0.002, fix = 2),
               G=list(G1=list(V=1, nu=0.002,alpha.mu=0, alpha.V=625^2)))
zip <- MCMCglmm(y ~ trait -1 + at.level(trait, 1):(X1*X2*X3_nest),
                 random = ~ nested_plot, rcov = ~idh(trait):units,
                 data=zero.dat, family = "zipoisson",  nitt = nitt,
                 burnin = 100, thin=thin, prior=prior, pr = TRUE, pl = TRUE)
summary(zip)
# Gives a warning!! And the coef table seems messed up.
# Something is going on here... don't know what, though.
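A quick way to chase down that warning (my own sketch, not from the original post, using toy data rather than the github file): MCMCglmm drops fixed effects it cannot estimate when the design matrix is rank-deficient, so check the factor levels and the rank directly. The toy factors below mimic the post's two-level X1/X2, with a deliberately planted stray trailing space of the kind Jarrod mentions later in the thread.

```r
# Toy data: X1 has a trailing space planted in "yes " to show how a
# stray level sneaks in (e.g. from an Excel export).
d <- data.frame(X1 = c("no", "yes ", "no", "yes "),
                X2 = c("empty", "full", "full", "empty"),
                stringsAsFactors = TRUE)

levels(d$X1)                       # "no" "yes " -- the stray space shows up here

# Clean the levels, then check that the fixed-effects design is full rank;
# FALSE would mean some contrasts are aliased and will be dropped.
d$X1 <- factor(trimws(as.character(d$X1)))
X <- model.matrix(~ X1 * X2, d)
qr(X)$rank == ncol(X)              # TRUE once no contrasts are aliased
```

The same check applied to the real design (`model.matrix(~ X1 * X2 * X3_nest, zero.dat)`) should reveal whether the zip warning comes from aliased contrasts rather than from the zipoisson family itself.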

#zap model
zap <- MCMCglmm(y ~ trait*(X1*X2*X3_nest),
                     random = ~ nested_plot, rcov = ~idh(trait):units,
                     data=zero.dat, family = "zapoisson",  nitt = nitt,
                     burnin = 100, thin=thin,  prior=prior, pr = TRUE, pl = TRUE)
summary(zap)
# No warning and everything looks good

# Poisson model
prior = list( R = list(V = diag(1), nu = 0.002),
               G=list(G1=list(V=1, nu=0.002,alpha.mu=0, alpha.V=625^2)))
pois <- MCMCglmm(y ~ X1*X2*X3_nest,
                 random = ~ nested_plot,
                 data=zero.dat, family = "poisson",  nitt = nitt,
                 burnin = 100, thin=thin,  prior=prior, pr = TRUE, pl = TRUE)
summary(pois)
# No warning

# Some marginal predictions
p.zip <- predict(zip, marginal = ~ nested_plot, type = "response",
                 posterior = "mean")
p.zap <- predict(zap, marginal = ~ nested_plot, type = "response",
                 posterior = "mean")
p.pois <- predict(pois, marginal = ~ nested_plot, type = "response",
                  posterior = "mean")
cbind(aggregate(y ~ X1*X2*X3_nest, zero.dat, mean),
      zip = aggregate(p.zip ~ X1*X2*X3_nest, zero.dat, mean)$p.zip,
      zap = aggregate(p.zap ~ X1*X2*X3_nest, zero.dat, mean)$p.zap,
      pois = aggregate(p.pois ~ X1*X2*X3_nest, zero.dat, mean)$p.pois)
# poisson model gives extreme (unrealistic) values.
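A minimal sketch of why those marginal predictions explode (my own illustration, not from the thread; the variance value is made up): under a log link, marginalising over a normal random effect with variance sigma^2 multiplies the conditional (average-site) mean by exp(sigma^2 / 2), so large variance components push response-scale predictions far outside the data range.

```r
# Conditional ("average site") treatment mean of 40 on the response scale.
beta   <- log(40)
# Hypothetical combined random-effect + overdispersion variance (link scale);
# the post reports units ~ 5, so 4 is not extreme.
sigma2 <- 4

exp(beta)               # conditional prediction: 40
exp(beta + sigma2 / 2)  # marginal prediction: ~296, outside the data range
```

This is consistent with the "treatment mean of 40 predicted as ~300 counts" pattern described further down the thread, and suggests looking at the size of the variance components before suspecting the prediction code.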


On 2016-11-30 21:21, Jarrod Hadfield wrote:
> Hi Gustaf,
>
> Is it possible to post your data/code so I can have a look? I can't 
> think of any reason why the base-line level of X1 would change for the 
> main effect but be the same for the interaction.
>
> Cheers,
>
> Jarrod
>
>
> On 28/11/2016 11:11, Gustaf Granath wrote:
>> Jarrod,
>>
>> singular.ok=TRUE was the problem. I did some copy-paste and didn't 
>> clean up the code. However, I'm still confused about the coefs. Say 
>> that X1 and X2 each have two levels: no/yes and empty/full 
>> respectively. For a "zipoisson" I get the coef table:
>>
>> traity
>> traitzi_y
>> at.level(trait, 1):X1no
>> at.level(trait, 1):X2full
>> at.level(trait, 1):X1yes:X2full
>>
>> But when running "zapoisson" I get:
>>
>> traity
>> traitza_y
>> at.level(trait, 1):X1yes
>> at.level(trait, 1):X2full
>> at.level(trait, 1):X1yes:X2full
>> + traitza_...
>>
>> The "zapoisson" output makes sense to me, but the "zipoisson" output 
>> looks odd.
>>
>> Predictions: OK, estimates actually changed after I updated the 
>> package. It didn't make a huge difference though, and it doesn't 
>> clear things up completely. The strange thing is that the conditional 
>> predictions capture the data really well - for zi, za and hu alike. 
>> However, the estimated marginal predictions are sensible for hu and 
>> za (i.e. conditional and marginal predictions are quite similar), 
>> while for zi they are way too large (giving values much higher than 
>> you find in the observed data). Yes, marginal predictions should be 
>> larger by definition, and may be quite a bit higher if you have some 
>> sites with very high means that pull the expected "random site" 
>> prediction above the "average site" estimate. This crazy large 
>> "blow up" effect that I see (predictions outside the data range) can, 
>> I guess, happen when random effects are large. But what are actually 
>> "good" predictions here? If all classic model diagnostics look good 
>> and conditional predictions are close to raw (pooled) treatment 
>> means, but marginal predictions are way too high, what is then 
>> "wrong" with the model? Where do you start looking to identify 
>> what's going on? Predicted estimates (conditional or marginal) are 
>> rarely reported in papers, and I'm trying to get my head around how 
>> they can be better used in model checking.
>>
>> Tom> Thanks for your input. I will look more closely at your paper.
>>
>> Gustaf
>>
>>
>> On 2016-11-26 08:59, Jarrod Hadfield wrote:
>>> Hi,
>>>
>>> If X1 has two levels then it is odd that you have an intercept 
>>> AND 2 X1 contrasts. Have you used singular.ok=TRUE? If not, are 
>>> you sure there is not a third (rare) level to X1? Try 
>>> unique(your_data$X1): trailing whitespace from Excel spreadsheets is 
>>> a common cause of such a problem.
>>>
>>> Regarding the predictions, the issue may be with the above. However, 
>>> note that there was a bug with posterior="mean" in the call to 
>>> predict.MCMCglmm (the first posterior iteration of the fixed effects 
>>> was used rather than their mean). This bug is fixed in the current 
>>> version; there was no bug in the default posterior="all".
>>>
>>> Cheers,
>>>
>>> Jarrod
>>>
>>>
>>>
>>> On 25/11/2016 13:16, Gustaf Granath wrote:
>>>> Hi,
>>>> A few questions regarding zero-inflated models using MCMCglmm.
>>>>
>>>> 1. Odd contrasts are created when using, e.g. :
>>>> y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zipoisson" #X1 
>>>> and X2 are factors with 2 levels
>>>> COEF TABLE
>>>> traity
>>>> traitzi_y
>>>> at.level(trait, 1):X1lev1
>>>> at.level(trait, 1):X1lev2
>>>> at.level(trait, 1):X2lev2
>>>>
>>>> It doesn't look like this in the course notes (from what I can 
>>>> see). Is at.level(trait, 1):X1lev1 the reference level for 
>>>> everything below? I also get very high (>1000) estimates for 
>>>> traity, at.level(trait, 1):X1lev1 and at.level(trait, 1):X1lev2.
>>>>
>>>> 2. I made four models
>>>> a) y ~ X1*X2, family = "poisson"
>>>> Large overdispersion (units ~ 5) but everything looks fine 
>>>> (traceplots, random effects (including units)). Predictive checks 
>>>> show that the model predicts too few zeros, though. More alarming 
>>>> is that predictions are really bad, e.g. a treatment mean of 40 is 
>>>> predicted to have 300 counts.
>>>>
>>>> b) y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zipoisson"
>>>> Model looks OK, but again, predictions are way too high for the 
>>>> higher treatment means.
>>>>
>>>> c:d) y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zapoisson" 
>>>> or "hupoisson"
>>>> Predicted values are much better. But do you get predictions the 
>>>> same way here? Is it possible to get predictions from the separate 
>>>> models (poisson/binomial) in the hurdle case? (code available?)
>>>> #get e.g. treatment predictions?
>>>> predict.MCMCglmm(model, marginal = ~ random_factor, 
>>>> type="response", posterior = "mean")
>>>>
>>>> The "zapoisson" model performs best but I don't understand why the 
>>>> "zipoisson" is so bad. Any typical things to look for when it 
>>>> looks like this? Interpreting "zapoisson" isn't easy; any good 
>>>> literature/tutorials on this model?
>>>>
>>>>
>>>> Cheers
>>>>
>>>> Gustaf
>>>>
>>>>
>>>
>>>
>>



More information about the R-sig-mixed-models mailing list