[R] Simulation confidence interval
Scott Raynaud
scott.raynaud at yahoo.com
Wed Feb 1 16:45:57 CET 2012
The following is a code snippet from a power-simulation
program that I'm using:
estbeta <- fixef(fitmodel)              # fixed-effect estimates from the lmer fit
sdebeta <- sqrt(diag(vcov(fitmodel)))   # their standard errors
for (l in 1:betasize) {
  # confidence-interval bound on the side nearer zero
  # (sgnbeta, z1score, beta, powaprox, sdepower are defined elsewhere)
  cibeta <- estbeta[l] - sgnbeta[l] * z1score * sdebeta[l]
  # count a rejection when the interval excludes zero on the same side as beta[l]
  if (beta[l] * cibeta > 0) powaprox[[l]] <- powaprox[[l]] + 1
  sdepower[l, iter] <- as.numeric(sdebeta[l])
}
estbeta recovers the fixed effects from a model fitted using lmer.
beta is defined elsewhere; it is a user-specified input
that relates the data generated in the simulation to an outcome.
So, it seems pretty clear that the if() line is a clever test of
whether the confidence interval traps 0. My question is: why use
beta[l]*cibeta > 0 rather than estbeta[l]*cibeta > 0? Is that
because in the long run the model parameter estimates tend toward
the betas specified by the user? In other words, what really
matters is the standard errors, right?
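For anyone reading along, here is a minimal standalone sketch of the interval test with made-up values (z1score, beta, estbeta, and sdebeta below are placeholders, not output from the poster's simulation):

```r
# Hypothetical values for one simulation replicate
z1score <- qnorm(0.975)   # critical value for a two-sided 5% test
beta    <- 0.5            # true effect specified by the user
estbeta <- 0.62           # estimate recovered from the fitted model
sdebeta <- 0.20           # its standard error
sgnbeta <- sign(beta)

# CI bound on the side nearer zero
cibeta <- estbeta - sgnbeta * z1score * sdebeta

# TRUE when the interval excludes zero on the same side as beta,
# i.e. the replicate counts toward power
beta * cibeta > 0
```

With these numbers cibeta is 0.62 - 1.96 * 0.20 = 0.228, so the test returns TRUE and the replicate would be counted.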