[R-sig-ME] within subjects, 3 conditions, 3 lines
Stanislav Aggerwal
stan.aggerwal at gmail.com
Wed Jan 21 11:07:11 CET 2015
I have revised the simulation so that each subject now has autocorrelated
errors, and the autocorrelation is substantial. Still, the three methods,
including the obviously "wrong" and simple lm() method, all give essentially
the same parameter estimates and very similar SEs.
Thanks for any insights. Should I really just ignore the repeated-measures
design and use lm() for problems like this? (Funny how lme() cannot fit
these data.)
Cheers,
Stan
The simulation:
######### previous example assumed indep errors. Now use errors that
# are correlated within each subject
set.seed(1234)
nsubj<-30
z<-expand.grid(x=1:5,cond=c("a","b","c"),subj=as.factor(1:nsubj))
x<-z$x
cond<-z$cond
subj<-z$subj
y<-rep(0,nrow(z))
a1<- rep(1+rnorm(nsubj,mean=0,sd=.1),each=5)  # intercept for cond a (inta = 1), varying by subject
a2<- rep(1+rnorm(nsubj,mean=0,sd=.1),each=5)  # intercept difference intb - inta = 2 - 1 = 1
a3<- rep(2+rnorm(nsubj,mean=0,sd=.1),each=5)  # intercept difference intc - inta = 3 - 1 = 2
a4<- rep(3+rnorm(nsubj,mean=0,sd=.1),each=5)  # slope for cond a (slopea = 3), varying by subject
a5<- rep(-1+rnorm(nsubj,mean=0,sd=.1),each=5) # slope difference slopeb - slopea = -1
a6<- rep(-2+rnorm(nsubj,mean=0,sd=.1),each=5) # slope difference slopec - slopea = -2
y[cond=="a"]<-a1 + a4*x[cond=="a"]
y[cond=="b"]<-a1+a2 + (a4+a5)*x[cond=="b"]
y[cond=="c"]<-a1+a3 + (a4+a6)*x[cond=="c"]
# add AR(1) autocorrelated errors (coefficient 0.5) within each subject
for(i in 1:nsubj)
{
y[subj==i]<-y[subj==i] +
as.numeric(filter(rnorm(5*3,mean=0,sd=.5),filter=0.5,method="recursive"))
}
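As a rough check (just a sketch, not part of the results below) that the
induced autocorrelation really is substantial, one can look at the ACF of a
single subject's residuals around that subject's own three lines:
## rough check of the within-subject autocorrelation for subject 1
r1 <- residuals(lm(y[subj==1] ~ cond[subj==1]*x[subj==1]))
acf(r1)   # lag-1 autocorrelation expected to be positive (AR(1), coef 0.5)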
plot(x[cond=="a"],y[cond=="a"],xlab="x", ylab="y")
points(x[cond=="b"],y[cond=="b"],col='red')
points(x[cond=="c"],y[cond=="c"],col='green')
## ignore the within-subjects design and treat the errors as independent
fit<-lm(y~cond*x)
summary(fit)
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 1.00355 0.13628 7.364 8.75e-13 ***
#condb 0.84320 0.19273 4.375 1.51e-05 ***
#condc 2.03207 0.19273 10.544 < 2e-16 ***
#x 2.99975 0.04109 73.005 < 2e-16 ***
#condb:x -0.95972 0.05811 -16.516 < 2e-16 ***
#condc:x -1.98477 0.05811 -34.156 < 2e-16 ***
b<-coef(fit)
abline(b[1],b[4]) #line for cond a
abline(b[1]+b[2],b[4]+b[5],col='red') #line for cond b
abline(b[1]+b[3],b[4]+b[6],col='green') #line for cond c
## use Linear Mixed Effects to take account of repeated measures
# lmer() from lme4 package
library(lme4)
fit2<-lmer(y~x*cond + (x*cond|subj)) # maximal random-effects structure: per-subject intercepts and slopes for all terms
summary(fit2)
# no p-values. Same parameter estimates as fit, and very similar SEs
#Fixed effects:
# Estimate Std. Error t value
#(Intercept) 1.00355 0.12632 7.94
#x 2.99975 0.03463 86.62
#condb 0.84320 0.21619 3.90
#condc 2.03207 0.18783 10.82
#x:condb -0.95972 0.05558 -17.27
#x:condc -1.98477 0.04671 -42.50
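If p-values for the fixed effects are wanted from the lmer fit, one option
(untested here, just a sketch) is the lmerTest package, which refits the
same model and adds Satterthwaite-approximated df and p-values to the summary:
## sketch: p-values for the fixed effects via lmerTest (assumes it is installed)
library(lmerTest)
fit2t <- lmer(y~x*cond + (x*cond|subj))
summary(fit2t)   # fixed-effects table now includes df and Pr(>|t|)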
## lme from nlme package
#library(nlme)
#fit3<-lme(y~x*cond, random=~(x*cond)|subj)
##does not converge
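My guess (only a guess) is that random=~(x*cond)|subj asks lme to estimate a
full 6x6 random-effects covariance matrix from 30 subjects, which is a lot.
Two simpler nlme specifications that model the within-subject correlation
more directly might be worth a try; this is an untested sketch, and dat is
just a data frame built from the simulated vectors above:
## sketch only: simpler nlme models for the same data
library(nlme)
dat <- data.frame(y=y, x=x, cond=cond, subj=subj)
## (a) no random effects, but AR(1)-correlated errors within each subject
fit3a <- gls(y~x*cond, data=dat, correlation=corAR1(form=~1|subj))
## (b) random intercept and slope per subject, plus AR(1) errors
fit3b <- lme(y~x*cond, data=dat, random=~x|subj,
             correlation=corAR1(form=~1|subj))
summary(fit3a)
summary(fit3b)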
## fit a line to each subject; get the mean and SE of the parameters across subjects
beta<-matrix(0,ncol=6,nrow=nsubj)
for(i in 1:nsubj)
{
f<-lm(y[subj==i]~cond[subj==i]*x[subj==i])
beta[i,]<-coef(f)
}
m<-colMeans(beta)                # mean of each coefficient across subjects
s<-apply(beta,2,sd)/sqrt(nsubj)  # SE of the mean across subjects
z<-m/s  # z statistic (this overwrites the expand.grid data frame z, no longer needed)
pval<-pnorm(abs(z),lower.tail=F)*2 #two-tailed
#pval<-pt(abs(z),df=nsubj-1,lower.tail=F)*2 #two-tailed
cbind(m,s,z,pval)
# m s z pval
#[1,] 1.0035461 0.12250006 8.192209 2.564748e-16
#[2,] 0.8432030 0.21703074 3.885178 1.022551e-04
#[3,] 2.0320664 0.19155457 10.608290 2.726940e-26
#[4,] 2.9997546 0.03330906 90.058219 0.000000e+00
#[5,] -0.9597171 0.05593306 -17.158318 5.446869e-66
#[6,] -1.9847668 0.05045750 -39.335421 0.000000e+00
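To put the three sets of results side by side (again just a sketch, not new
output), one can print the fixed-effects tables next to each other; note that
lmer orders and names the interaction terms as x:condb and x:condc rather
than condb:x and condc:x:
## sketch: compare estimates and SEs across the three approaches
coef(summary(fit))    # lm, ignoring the repeated measures
coef(summary(fit2))   # lmer fixed effects (different term order/names)
cbind(m, s)           # per-subject means and SEs, ordered as in coef(fit)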