[R-sig-ME] unreliable DIC vs. how to evaluate complex interactions in MCMCglmm

Steven Long stevenlong75x at gmail.com
Sun Jan 22 14:10:32 CET 2017


Dear List Members,

I have a mixed-effects model setup that would ideally incorporate a
pedigree, which is why I started running the models in MCMCglmm. As I am
unfamiliar with that approach, I first fitted everything with lme4 and
used AIC to compare the models, and then reran everything in MCMCglmm.
Neither model set included the pedigree at this stage.

My main concern is the large discrepancy between the AIC-based and the
DIC-based model selection results.

I've attached my dataset. See my code here.

library(MCMCglmm)

d2 <- read.csv("d2.csv")
d2$f1 <- as.factor(d2$f1)
d2$f2 <- as.factor(d2$f2)

n = 8
prior = list(R = list(V = n, fix = 1),
             G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000)),
             B = list(mu = rep(0, n), V = diag(n) * (1 + pi^2/3)))
m0 <- MCMCglmm(y ~ x1 + f1 + f2, random = ~ R,
               data = d2, family = 'gaussian', prior = prior,
               nitt = 101000, burnin = 1000, thin = 10)

n = 9
prior = list(R = list(V = n, fix = 1),
             G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000)),
             B = list(mu = rep(0, n), V = diag(n) * (1 + pi^2/3)))
m1 <- MCMCglmm(y ~ x1 + f1 + f2 + x2, random = ~ R,
               data = d2, family = 'gaussian', prior = prior,
               nitt = 101000, burnin = 1000, thin = 10)

n = 12
prior = list(R = list(V = n, fix = 1),
             G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000)),
             B = list(mu = rep(0, n), V = diag(n) * (1 + pi^2/3)))
m2 <- MCMCglmm(y ~ x1 + f1 + f2*x2, random = ~ R,
               data = d2, family = 'gaussian', prior = prior,
               nitt = 101000, burnin = 1000, thin = 10)

n = 15
prior = list(R = list(V = n, fix = 1),
             G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000)),
             B = list(mu = rep(0, n), V = diag(n) * (1 + pi^2/3)))
m3 <- MCMCglmm(y ~ x1 + f1*x1 + f2*x2, random = ~ R,
               data = d2, family = 'gaussian', prior = prior,
               nitt = 101000, burnin = 1000, thin = 10)

m0$DIC; m1$DIC; m2$DIC; m3$DIC

# m0  -684.7527
# m1  -741.8908
# m2  -880.8456
# m3  -987.9276

library(lme4)

M0 <- lmer(y ~ x1 + f1 + f2 + (1|R), REML = FALSE, data = d2)
M1 <- lmer(y ~ x1 + f1 + f2 + x2 + (1|R), REML = FALSE, data = d2)
M2 <- lmer(y ~ x1 + f1 + f2*x2 + (1|R), REML = FALSE, data = d2)
M3 <- lmer(y ~ x1 + f1*x2 + f2*x2 + (1|R), REML = FALSE, data = d2)

AIC(M0); AIC(M1); AIC(M2); AIC(M3)

# M0  -122.76
# M1  -122.8364
# M2  -118.4911
# M3  -113.0851

In summary, lme4 indicates that x2 has no effect on y, either on its own
or in interactions. This is consistent with likelihood-ratio tests and is
roughly what I expected.
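To be concrete, by the likelihood-ratio tests I mean nested comparisons of
the ML fits, roughly along these lines:

anova(M0, M1)   # does adding x2 improve the fit?
anova(M1, M2)   # does the f2:x2 interaction add anything?
anova(M2, M3)   # does the f1:x2 interaction add anything on top of that?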

The MCMCglmm fits, in contrast, favour the most complex model: x2
substantially improves the fit in every combination, and the more complex
the model, the smaller the DIC.

The MCMCglmm course notes mention that DIC has a large sampling variance,
so I tried running the same models with 10 million iterations, but the
results are the same.
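Would a between-run check along these lines be the right way to assess that
sampling variance? (This just refits one model a few times and compares its
DIC across runs; it assumes the n = 15 prior defined above is still in scope.)

# refit m3 a few times and compare the DICs across runs (rough check only)
dic_runs <- replicate(3, MCMCglmm(y ~ x1 + f1*x1 + f2*x2, random = ~ R,
                                  data = d2, family = 'gaussian', prior = prior,
                                  nitt = 101000, burnin = 1000, thin = 10,
                                  verbose = FALSE)$DIC)
dic_runs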

So basically I have two questions:
1. DIC seems very unreliable here. It feels as though the penalty for
additional estimated parameters is very low, so the most complex model is
sometimes selected. Does this match the experience of more seasoned
MCMCglmm users, or did I do something wrong in the analyses?
2. Other than DIC, how can I evaluate the importance of the interaction
terms and decide whether to keep them in the models?
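For example, would it be reasonable to judge the interactions from their
posterior summaries instead, e.g. for the f2:x2 terms in m2:

summary(m2)$solutions   # posterior means, 95% credible intervals and pMCMC
HPDinterval(m2$Sol)     # HPD intervals for the fixed-effect posteriors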

Any advice or help would be much appreciated.

Kind regards,
Steve

