[R-sig-ME] Apologies + a couple of data points (Re: lme4a, glmer and all that)
Emmanuel Charpentier
charpent at bacbuc.dyndns.org
Fri Mar 5 23:13:37 CET 2010
Dear list,
First and foremost, an apology for yesterday's post: re-reading
myself, I found *quite obvious* faults in my initial proposal.
I *swear* I wasn't drunk! But this teaches me never, ever, to post
anything on lme4.devel while running a fever over 39°C (102°F)...
I succeeded in fitting in JAGS a couple of models reproducing analyses
similar to those created in lme4/lme4a and discussed by Bates and
Calboli. They might be useful before deciding to "go with the majority
rule", which strikes me as a curious way to settle a scientific
question...
The fits did not exhibit any convergence problems (trace plots + Gelman
diagnostics + autocorrelation plots); adaptation and run lengths were
chosen so that the second figure after the decimal mark of the
estimates can be "reasonably believed" (see the "Time-series SE"
column in the summaries).
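For the curious, this is not the code I actually ran, but a minimal base-R illustration of the Gelman-Rubin diagnostic mentioned above (the fake chains are made up for the example; with the coda package, gelman.diag() and autocorr.plot() give these diagnostics directly on an mcmc.list):

```r
## Gelman-Rubin "potential scale reduction factor", computed by hand.
## chains: an n x m matrix, one chain per column.
rhat <- function(chains) {
  n <- nrow(chains)
  means <- colMeans(chains)
  B <- n * var(means)                # between-chain variance
  W <- mean(apply(chains, 2, var))   # within-chain variance
  V <- (n - 1) / n * W + B / n       # pooled variance estimate
  sqrt(V / W)                        # values near 1 suggest convergence
}

set.seed(1)
x <- matrix(rnorm(3 * 10000), ncol = 3)  # three well-mixed fake chains
rhat(x)                                  # close to 1
```

A value drifting well above 1 (say, above 1.1) would have sent me back to lengthen the adaptation and run.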
The first model goes to some length to reproduce the lmer/Stata
parametrization:
model {
    for(k in 1:nobs) {
        incidence[k] ~ dbin(p[k], size[k])
        logit(p[k]) <- alpha[herd[k]] + Period[period[k]] + Intercept
    }
    Intercept ~ dnorm(0, tau.Intercept)
    tau.Intercept <- pow(sigma.Intercept, -2)
    sigma.Intercept ~ dunif(0, 100)
    for(i in 1:nherd) {
        alpha[i] ~ dnorm(0, tau.alpha)
    }
    tau.alpha <- pow(sigma.alpha, -2)
    sigma.alpha ~ dunif(0, 100)
    Period[1] <- 0
    for(j in 2:nperiod) {
        Period[j] ~ dnorm(0, 1.0E-6)
    }
}
This gives the following results:
> summary(Mod[[2]])
Iterations = 2001:12000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 10000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
                Mean     SD Naive SE Time-series SE
Intercept    -1.4073 0.2684 0.001550       0.006564
Period[1]     0.0000 0.0000 0.000000       0.000000
Period[2]    -0.9954 0.3113 0.001797       0.003520
Period[3]    -1.1444 0.3321 0.001917       0.003328
Period[4]    -1.6243 0.4405 0.002543       0.004272
deviance    170.4377 6.0311 0.034821       0.060296
sigma.alpha   0.7746 0.2383 0.001376       0.004547
tau.alpha     2.2092 1.6582 0.009573       0.028883
[ ... Confidence intervals: snip ... ]
The fixed-effect estimates correlate as follows:
> cor(as.matrix(Mod[[2]][,c(1,3:5)]))
           Intercept  Period[2]  Period[3]  Period[4]
Intercept  1.0000000 -0.3196581 -0.2910961 -0.2264680
Period[2] -0.3196581  1.0000000  0.2787671  0.2046347
Period[3] -0.2910961  0.2787671  1.0000000  0.1810498
Period[4] -0.2264680  0.2046347  0.1810498  1.0000000
The second model tries to be "symmetric" in its handling of the fixed
effects: the beta parameters directly estimate the (logit of the)
incidence *rate* during a given period, while the dbeta parameters
estimate the corresponding effects parametrized "à la lmer/Stata":
model {
    for(k in 1:nobs) {
        incidence[k] ~ dbin(p[k], size[k])
        logit(p[k]) <- alpha[herd[k]] + beta[period[k]]
    }
    moy ~ dnorm(0, 1.0E-6)
    # tau.moy <- pow(sigma.moy, -2)
    # sigma.moy ~ dunif(0, 100)
    for(i in 1:nherd) {
        alpha[i] ~ dnorm(0, tau.alpha)
    }
    tau.alpha <- pow(sigma.alpha, -2)
    sigma.alpha ~ dunif(0, 100)
    for(j in 1:nperiod) {
        beta[j] ~ dnorm(moy, tau.beta)
        dbeta[j] <- beta[j] - beta[1]
    }
    tau.beta <- pow(sigma.beta, -2)
    sigma.beta ~ dunif(0, 100)
}
Results:
> summary(Mod2[[2]])
Iterations = 2001:12000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 10000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
                Mean     SD Naive SE Time-series SE
beta[1]      -1.4591 0.2641 0.001525       0.005610
beta[2]      -2.4047 0.3243 0.001872       0.005246
beta[3]      -2.5295 0.3448 0.001991       0.005736
beta[4]      -2.9033 0.4301 0.002483       0.006491
dbeta[1]      0.0000 0.0000 0.000000       0.000000
dbeta[2]     -0.9456 0.3073 0.001774       0.002710
dbeta[3]     -1.0704 0.3274 0.001890       0.002899
dbeta[4]     -1.4443 0.4229 0.002442       0.004340
deviance    170.2087 5.9780 0.034514       0.058695
moy          -2.3220 0.9743 0.005625       0.007056
sigma.alpha   0.7736 0.2275 0.001314       0.003476
tau.alpha     2.1946 1.7145 0.009899       0.028458
[ ... re-Snip ... ]
> cor(as.matrix(Mod2[[2]][,c(10,6:8)]))
                moy   dbeta[2]   dbeta[3]   dbeta[4]
moy      1.00000000 0.04219075 0.04847081 0.06753645
dbeta[2] 0.04219075 1.00000000 0.33301136 0.30281680
dbeta[3] 0.04847081 0.33301136 1.00000000 0.28874007
dbeta[4] 0.06753645 0.30281680 0.28874007 1.00000000
> cor(as.matrix(Mod2[[2]][,c(10,1:4)]))
              moy   beta[1]   beta[2]   beta[3]   beta[4]
moy     1.0000000 0.1812401 0.1875507 0.1848307 0.1776862
beta[1] 0.1812401 1.0000000 0.4698507 0.4471592 0.3340112
beta[2] 0.1875507 0.4698507 1.0000000 0.3999593 0.3426471
beta[3] 0.1848307 0.4471592 0.3999593 1.0000000 0.3297085
beta[4] 0.1776862 0.3340112 0.3426471 0.3297085 1.0000000
There is no obvious, deviance-based reason to prefer one of these
models over the other:
> DIC(Mod[[2]])
Deviance pD DIC
170.43770 18.18215 188.61985
> DIC(Mod2[[2]])
Deviance pD DIC
170.2087 17.8677 188.0764
(The homebrew DIC function implements the algorithm described in the
vignette of the R2WinBUGS package, which is inspired by Gelman et al.
(2004).)
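For concreteness, here is a minimal base-R sketch of that DIC recipe (DIC = Dbar + pD, with pD = Dbar - D(posterior mean)); the single binomial observation and the posterior draws below are made up for illustration, not taken from the models above:

```r
## Toy example: one binomial observation, conjugate fake "posterior" draws.
set.seed(42)
y <- 7; size <- 20
p.draws <- rbeta(10000, 1 + y, 1 + size - y)        # fake posterior draws of p
dev <- -2 * dbinom(y, size, p.draws, log = TRUE)     # deviance at each draw

Dbar <- mean(dev)                                    # posterior mean deviance
Dhat <- -2 * dbinom(y, size, mean(p.draws), log = TRUE)  # deviance at posterior mean
pD   <- Dbar - Dhat                                  # effective number of parameters
DIC  <- Dbar + pD
c(Dbar = Dbar, pD = pD, DIC = DIC)
```

With a single free parameter, pD comes out close to 1, as expected.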
Both models give "reasonable" estimates of the fixed effects, but with
a slightly different partitioning of the variance. They might be useful
as a "sanity check" in the lme4/lme4a/Stata arbitration.
Of course, another class of models might offer another, quite different
class of solutions: model p as drawn from a beta(a, b) distribution,
and model a and b as functions of herd and period. The resulting
estimates would probably have no easy correspondence with
logistic-regression estimates, but might be more useful for prediction.
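For the record, one possible (untested!) JAGS sketch of that beta idea; the saturated parametrization of a and b by herd × period cell, and the vague gamma priors, are arbitrary choices, not something I have fitted:

```
model {
    for(k in 1:nobs) {
        incidence[k] ~ dbin(p[k], size[k])
        p[k] ~ dbeta(a[herd[k], period[k]], b[herd[k], period[k]])
    }
    for(i in 1:nherd) {
        for(j in 1:nperiod) {
            a[i, j] ~ dgamma(1.0E-3, 1.0E-3)  # vague priors: arbitrary
            b[i, j] ~ dgamma(1.0E-3, 1.0E-3)
        }
    }
}
```

One would probably want to regress log(a) and log(b) on herd and period effects instead of the cell-by-cell priors above, to get some pooling.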
HTH,
Emmanuel Charpentier
recovering...
On Thursday, 04 March 2010 at 12:06 -0600, Douglas Bates wrote:
> On Thu, Mar 4, 2010 at 11:32 AM, Federico Calboli
> <f.calboli at imperial.ac.uk> wrote:
> > On 4 Mar 2010, at 16:11, Douglas Bates wrote:
> > <cut>
> >>
> >> which, now I see are more consistent with the results from lme4, not
> >> lme4a. I misunderstood the message I received regarding the Stata
> >> results.
> >>
> >> OK, this might have all been a red herring. I'll look into it some more.
> >>
> >> At one time on Saturday Night Live Gilda Radner would play a citizen
> >> commentator on weekend news update who was confused about the issue
> >> and, after a long harangue, would end up realizing she got it wrong.
> >> She ended by saying "Never mind". That might be the case here too.
> >
> >
> > Just out of curiosity, how do we know stata and lme4 are correct and lme4a is not? why is stata the benchmark? Please note I am only being facetious.
>
> In cases like this we usually go with majority rule. I'm still
> verifying and validating.
>
> > _______________________________________________
> > R-sig-mixed-models at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> >
>