[R-sig-ME] zero-inflated models in MCMCglmm

Jarrod Hadfield j.hadfield at ed.ac.uk
Wed Nov 30 21:21:29 CET 2016


Hi Gustaf,

Is it possible to post your data/code so I can have a look? I can't
think of any reason why the baseline level of X1 would change for the
main effect but be the same for the interaction.

Cheers,

Jarrod


On 28/11/2016 11:11, Gustaf Granath wrote:
> Jarrod,
>
> singular.ok=TRUE was the problem. I did some copy-paste and didn't
> clean up the code. However, I'm still confused about the coefs. Say
> that X1 and X2 each have two levels: no/yes and empty/full
> respectively. For a "zipoisson" I get the coef table:
>
> traity
> traitzi_y
> at.level(trait, 1):X1no
> at.level(trait, 1):X2full
> at.level(trait, 1):X1yes:X2full
>
> But when running "zapoisson" I get:
>
> traity
> traitza_y
> at.level(trait, 1):X1yes
> at.level(trait, 1):X2full
> at.level(trait, 1):X1yes:X2full
> + traitza_...
>
> The "zapiosson" output makes sense to me but the "zipoisson" looks odd 
> to me.
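>
> Something like this minimal sketch reproduces the comparison (the data
> frame dat and the prior are placeholders; fixing the second residual
> variance follows the course notes, since it is not identifiable, and
> the random term is omitted for brevity):
>
> library(MCMCglmm)
> prior <- list(R = list(V = diag(2), nu = 0.002, fix = 2))
> m_zi <- MCMCglmm(y ~ trait - 1 + at.level(trait, 1):(X1 * X2),
>                  rcov = ~ idh(trait):units, family = "zipoisson",
>                  prior = prior, data = dat, verbose = FALSE)
> m_za <- MCMCglmm(y ~ trait - 1 + at.level(trait, 1):(X1 * X2),
>                  rcov = ~ idh(trait):units, family = "zapoisson",
>                  prior = prior, data = dat, verbose = FALSE)
> colnames(m_zi$Sol)  # contrast naming for the zipoisson fit
> colnames(m_za$Sol)  # contrast naming for the zapoisson fit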
>
> Predictions: OK, the estimates actually changed after I updated the
> package. It didn't make a huge difference though, and it doesn't clear
> things up completely. The strange thing is that the conditional
> predictions capture the data really well - for zi, za and hu alike.
> However, the marginal predictions are only sensible for hu and za
> (i.e. conditional and marginal predictions are quite similar), while
> for zi they are way too large (values much higher than anything in the
> observed data). Yes, marginal predictions should be larger by
> definition, and may be much higher if some sites have very high means
> that pull the expected "random site" prediction above the "average
> site" estimate. But I guess the dramatic "blow-up" effect I see
> (predictions outside the data range) can happen when the random-effect
> variances are large. So what are actually "good" predictions here? If
> all the classic model diagnostics look good and the conditional
> predictions are close to the raw (pooled) treatment means, but the
> marginal predictions are way too high, what is then "wrong" with the
> model? Where do you start looking to identify what's going on?
> Predicted estimates are rarely reported in papers (neither conditional
> nor marginal), and I'm trying to get my head around how they could be
> better used in model checking.
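>
> For reference, this is roughly how I compare the two kinds of
> prediction (the model and random-effect names are placeholders;
> marginal = NULL needs the model to have been fitted with pr = TRUE so
> the random-effect solutions are stored). Under a log link,
> marginalising a normal random effect inflates the mean by roughly
> exp(sigma^2/2), which gets large quickly:
>
> p_cond <- predict(m, marginal = NULL,   type = "response")  # condition on the fitted site effects
> p_marg <- predict(m, marginal = ~ site, type = "response")  # integrate over the site distribution
> summary(p_marg / p_cond)  # inflation factor; roughly exp(sigma^2/2) under a log link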
>
> Tom> Thanks for your input. I will take a closer look at your paper.
>
> Gustaf
>
>
> On 2016-11-26 08:59, Jarrod Hadfield wrote:
>> Hi,
>>
>> If X1 has two levels then it is odd that you have an intercept
>> AND two X1 contrasts. Have you used singular.ok=TRUE? If not, are
>> you sure there is not a third (rare) level to X1? Try
>> unique(your_data$X1): trailing whitespace from Excel spreadsheets is
>> a common cause of such a problem.
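>>
>> A quick check along those lines (the column name is a placeholder):
>>
>> unique(your_data$X1)                  # look for a stray third level, e.g. "yes "
>> table(your_data$X1, useNA = "ifany")  # counts per level, including NAs
>> your_data$X1 <- factor(trimws(your_data$X1))  # strip whitespace; refactoring drops ghost levels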
>>
>> Regarding the predictions, the issue may be related to the above.
>> However, note that there was a bug with posterior="mean" in the call
>> to predict.MCMCglmm (the first posterior iteration of the fixed
>> effects was used rather than their mean). This bug is fixed in the
>> current version; there was no bug in the default posterior="all".
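>>
>> After updating, a quick sanity check is to compare the two settings
>> (the model and random-effect names are placeholders). They will not
>> be identical, since averaging predictions over the posterior differs
>> from plugging in the posterior mean under a nonlinear link, but they
>> should no longer be wildly different:
>>
>> p_all  <- predict(m, marginal = ~ site, type = "response")  # default posterior = "all"
>> p_mean <- predict(m, marginal = ~ site, type = "response", posterior = "mean")
>> summary(p_all / p_mean)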
>>
>> Cheers,
>>
>> Jarrod
>>
>>
>>
>> On 25/11/2016 13:16, Gustaf Granath wrote:
>>> Hi,
>>> A few questions regarding zero-inflated models using MCMCglmm.
>>>
>>> 1. Odd contrasts are created when using, e.g.:
>>> y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zipoisson" # X1
>>> and X2 are factors with 2 levels
>>> COEF TABLE
>>> traity
>>> traitzi_y
>>> at.level(trait, 1):X1lev1
>>> at.level(trait, 1):X1lev2
>>> at.level(trait, 1):X2lev2
>>>
>>> It doesn't look like this in the course notes (from what I can
>>> see). Is at.level(trait, 1):X1lev1 the reference level for
>>> everything below? I also get very high (>1000) estimates for
>>> traity, at.level(trait, 1):X1lev1 and at.level(trait, 1):X1lev2.
>>>
>>> 2. I made four models:
>>> a) y ~ X1*X2, family = "poisson"
>>> Large overdispersion (units ~ 5) but everything looks fine
>>> (traceplots, random effects (including units)). Predictive checks
>>> show that the model predicts too few zeros though. More alarming,
>>> the predictions are really bad: e.g. a treatment with an observed
>>> mean of 40 is predicted to have counts around 300.
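>>>
>>> A sketch of the kind of predictive check I mean (object and data
>>> names are placeholders, and simulate() for MCMCglmm objects needs a
>>> recent version of the package):
>>>
>>> nz_obs <- sum(dat$y == 0)                          # zeros in the raw data
>>> sims   <- simulate(m_pois, nsim = 1000)            # posterior predictive draws, one column per data set
>>> nz_sim <- apply(sims, 2, function(s) sum(s == 0))  # zeros per simulated data set
>>> mean(nz_sim >= nz_obs)                             # predictive p-value for the zero count
>>> hist(nz_sim); abline(v = nz_obs, col = "red")      # compare simulated and observed zeros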
>>>
>>> b) y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zipoisson"
>>> The model looks OK, but again, predictions are way too high for the
>>> higher treatment means.
>>>
>>> c:d) y ~ trait-1 + at.level(trait, 1):(X1+X2), family = "zapoisson"
>>> or "hupoisson"
>>> Predicted values are much better. But are predictions obtained the
>>> same way here? Is it possible to get predictions from the separate
>>> processes (Poisson/binomial) in the hurdle case? (Is code
>>> available?)
>>> # get e.g. treatment predictions?
>>> predict(model, marginal = ~ random_factor, type = "response",
>>>         posterior = "mean")
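>>>
>>> For the hurdle case, a by-hand sketch from the posterior of the
>>> fixed effects could look like the following (m_hu and the
>>> coefficient names are placeholders matching the fit above; in
>>> "hupoisson" the hu_y trait models the probability of a zero on the
>>> logit scale, and these are conditional predictions with the random
>>> effects set to zero):
>>>
>>> beta <- m_hu$Sol                # posterior samples of the fixed effects
>>> ## linear predictors for, say, X1 = "yes" and X2 = "full":
>>> l1 <- beta[, "traity"] + beta[, "at.level(trait, 1):X1yes"] +
>>>       beta[, "at.level(trait, 1):X2full"]
>>> l2 <- beta[, "traithu_y"]       # hurdle part: intercept only here
>>> lambda <- exp(l1)               # mean of the untruncated Poisson process
>>> p0 <- plogis(l2)                # probability of a zero
>>> mu <- (1 - p0) * lambda / (1 - exp(-lambda))  # expected count: Pr(y > 0) times zero-truncated Poisson mean
>>> quantile(mu, c(0.025, 0.5, 0.975))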
>>>
>>> The "zapoisson" model performs best but I dont understand why the 
>>> "zipoisson" is so bad. Any typical things to look for when it looks 
>>> like this?
>>> Interpreting "zapoisson" isnt easy, any good literature/tutorials on 
>>> this model?
>>>
>>>
>>> Cheers
>>>
>>> Gustaf
>>>
>>>
>>
>>
>


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.


