[R-sig-ME] random effects CIs
Greg Dropkin
gregd at gn.apc.org
Mon Aug 28 11:54:05 CEST 2017
Hi Phillip
Thanks. Yes, I was aware of your points, but what surprised me was that the
R2jags CI was closer to the use.u = TRUE interval.
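
For reference, one quick way to put the four sets of intervals side by side
(a minimal sketch, assuming the cim matrix from my code below has already
been built) is to compare the interval widths directly:

ciwidth<-cim[,"high"]-cim[,"low"]
round(cbind(cim,width=ciwidth),1)
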
Greg
> Hi Greg,
>
> I haven't checked your code for correctness, but the pattern you've
> found is typical. I don't know if there is a "best" answer, but broader
> confidence intervals are more conservative and so should have better
> coverage, if that's what you're going for. The biggest difference with
> use.u=FALSE is in how the random effects are handled; from the Details
> section of ?bootMer:
>
>> If ‘use.u’ is ‘FALSE’ and ‘type’ is ‘"parametric"’, each
>> simulation generates new values of both the “_spherical_”
>> random effects u and the i.i.d. errors epsilon, using
>> ‘rnorm()’ with parameters corresponding to the fitted model
>> ‘x’.
>>
>> If ‘use.u’ is ‘TRUE’ and ‘type=="parametric"’, only the
>> i.i.d. errors (or, for GLMMs, response values drawn from the
>> appropriate distributions) are resampled, with the values of
>> u staying fixed at their estimated values.
>
> In other words, use.u=FALSE generates new levels of the random effects,
> so in the case of Dyestuff, it simulates new, unseen Batches based on
> the estimates of the between-Batch variance. In the use.u=TRUE case, it
> re-uses the observed Batches and just simulates the observation-level /
> residual error. So the use.u=FALSE case introduces "new" between-Batch
> variation in addition to the residual variation, and therefore gives
> broader confidence intervals. If I'm not mistaken, this is related to
> the difference between prediction intervals and confidence intervals.
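>
> A quick way to see this directly (just a minimal sketch, using the fm01
> fit from your code below) is to simulate from the fitted model under each
> setting and compare how much the simulated Batch means move around:
>
> set.seed(1)
> simF<-simulate(fm01,nsim=1000,use.u=FALSE) #new Batch effects drawn each time
> simT<-simulate(fm01,nsim=1000,use.u=TRUE)  #observed Batch effects held fixed
> batchA<-Dyestuff$Batch=="A"
> #spread of the simulated Batch A mean across the simulations
> sd(colMeans(simF[batchA,]))
> sd(colMeans(simT[batchA,]))
> #the use.u=FALSE spread is larger, because between-Batch variation is re-drawn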
>
> Best,
> Phillip
>
>
>
> On 08/21/2017 10:21 PM, Greg Dropkin wrote:
>> Dear list
>>
>> apologies if these questions have already been answered here - if so,
>> just point me to them.
>>
>> Am I implementing these 4 approaches to CIs for the random effects
>> correctly (for Dyestuff), and if so, which is best (or is "best"
>> meaningless)? bootMer with use.u=FALSE seems to be recommended in the
>> lme4 manual, but gives quite different results from the other 3, which
>> are roughly comparable. Is there a better way?
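>>
>> (In case it is relevant: below is a rough, untested sketch of the kind of
>> coverage check I had in mind for the conditional-sd interval in #1, using
>> fm01 from the code further down and simulating new data from the fitted
>> model - the object names are just for illustration.)
>>
>> beta0<-fixef(fm01)[1]
>> vc<-as.data.frame(VarCorr(fm01))
>> sdb<-vc$sdcor[1]   #between-Batch sd
>> sde<-vc$sdcor[2]   #residual sd
>> cover<-replicate(200,{
>> u<-rnorm(6,0,sdb)
>> ysim<-beta0+u[as.integer(Dyestuff$Batch)]+rnorm(30,0,sde)
>> fmsim<-refit(fm01,ysim)
>> est<-ranef(fmsim,condVar=TRUE)$Batch[1,]
>> sd1s<-attr(ranef(fmsim,condVar=TRUE)$Batch,"postVar")[[1]]^0.5
>> (est-qnorm(0.975)*sd1s < u[1]) & (u[1] < est+qnorm(0.975)*sd1s)
>> })
>> mean(cover)   #empirical coverage of the +/- 1.96*sd1 interval for Batch A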
>>
>> thanks
>>
>> Greg Dropkin
>>
>> --
>>
>> library(lme4)
>> library(lattice)
>> library(boot)
>> library(R2jags)
>>
>> #1
>> #as in Ch 1 of Douglas Bates' book on lme4:
>> fm01<-lmer(Yield~1+(1|Batch),data=Dyestuff)
>> dotplot(ranef(fm01,condVar=TRUE))
>>
>> attr(ranef(fm01,condVar=TRUE)$Batch,"postVar")
>> #the conditional variances ("postVar") are the same for every Batch
>> sd1<-attr(ranef(fm01,condVar=TRUE)$Batch,"postVar")[[1]]^0.5
>> sd1
>> #the estimates are ranef(fm01), and the CIs shown in the dotplot are
>> #+/- sd1*qnorm(0.975)
>> #e.g. for Batch A:
>> ranef(fm01)$Batch[1,]
>> ranef(fm01)$Batch[1,]+c(-1,1)*sd1*qnorm(0.975)
>>
>> #2
>> #as recommended in the lme4 manual
>> #bootMer with use.u=FALSE
>> g<-function(.) ranef(.)$Batch[1,]
>> bootg<-bootMer(fm01,g,seed=1,nsim=1000,use.u=FALSE,.progress="txt",PBargs=list(style=3))
>> boot.ci(bootg,type=c("norm","basic","perc"))
>>
>> #3
>> #bootMer with use.u=TRUE
>> bootgu<-bootMer(fm01,g,seed=1,nsim=1000,use.u=TRUE,.progress="txt",PBargs=list(style=3))
>> boot.ci(bootgu,type=c("norm","basic","perc"))
>>
>> #4
>> #R2jags: fit the same one-way random-effects model to standardised
>> #Yield, then rescale the results by sdY
>> I<-1:30               #observation index
>> J<-1+floor((I-1)/5)   #batch index (5 observations per batch)
>>
>> zY<-with(Dyestuff,(Yield-mean(Yield))/sd(Yield))
>> sdY<-with(Dyestuff,sd(Yield))
>>
>> mod1.data<-list("zY","I","J")
>> mod1.parameters<-c("Con.adj","a.adj","mu.a","sigma.a","sig")
>>
>> mod1.mod<-function()
>> {
>> Con.adj<-Con+mean(a[])          #overall intercept plus mean batch effect
>> for (j in 1:6)
>> {
>> a[j]~dnorm(mu.a,1/sigma.a^2)    #batch effects (dnorm takes a precision)
>> a.adj[j]<-a[j]-mean(a[])        #centred batch effects, comparable to ranef()
>> }
>>
>> for (i in I)
>> {
>> zY[i]~dnorm(lp[i],1/sig^2)
>> lp[i]<-Con+a[J[i]]
>> }
>> Con~dnorm(0,0.000001)           #vague priors
>> mu.a~dnorm(0,0.0001)
>> sigma.a~dunif(0,100)
>> sig~dunif(0,100)
>> }
>>
>> set.seed(1)
>> mod1.mod.fit<-jags(data=mod1.data, inits=NULL,
>> parameters.to.save=mod1.parameters, n.chains=3, n.iter=50000,
>> n.burnin=10000, model.file=mod1.mod)
>> mod1.mod.fit
>> mat1<-mod1.mod.fit$BUGSoutput$sims.matrix
>>
>> mean(mat1[,"Con.adj"]+mat1[,"a.adj[1]"])*sdY
>> quantile(mat1[,"Con.adj"]+mat1[,"a.adj[1]"],c(0.025,0.975))*sdY
>>
>> #comparison
>>
>> cim<-matrix(0,8,3)
>> rownames(cim)<-c("conditional sd","bootMer use.u=F norm",
>> "bootMer use.u=F basic","bootMer use.u=F perc","bootMer use.u=T norm",
>> "bootMer use.u=T basic","bootMer use.u=T perc","jags")
>> colnames(cim)<-c("estimate","low","high")
>> cim[1:7,1]<-ranef(fm01)$Batch[1,]
>> cim[1,2:3]<-ranef(fm01)$Batch[1,]+c(-1,1)*sd1*qnorm(0.975)
>> cim[2,2:3]<-boot.ci(bootg,type=c("norm","basic","perc"))$norm[,2:3]
>> cim[3,2:3]<-boot.ci(bootg,type=c("norm","basic","perc"))$basic[,4:5]
>> cim[4,2:3]<-boot.ci(bootg,type=c("norm","basic","perc"))$perc[,4:5]
>> cim[5,2:3]<-boot.ci(bootgu,type=c("norm","basic","perc"))$norm[,2:3]
>> cim[6,2:3]<-boot.ci(bootgu,type=c("norm","basic","perc"))$basic[,4:5]
>> cim[7,2:3]<-boot.ci(bootgu,type=c("norm","basic","perc"))$perc[,4:5]
>> cim[8,1]<-mean(mat1[,"Con.adj"]+mat1[,"a.adj[1]"])*sdY
>> cim[8,2:3]<-quantile(mat1[,"Con.adj"]+mat1[,"a.adj[1]"],c(0.025,0.975))*sdY
>>
>> cim
>>
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>
>