[R-sig-ME] Simulate type I error

Chao Liu p@ych@o||u @end|ng |rom gm@||@com
Thu Jan 27 19:00:00 CET 2022


Dear R-sig-mixed-models community,

I would like to simulate the type I error rate for a random-effects model I
have specified.

The statistics of interest are the standard deviations of the random
intercept and the random slope. Specifically, for the random intercept,
H_{0}: lambda_{0} = 2 and H_{1}: lambda_{0} not equal to 2; for the random
slope, H_{0}: lambda_{1} = 1 and H_{1}: lambda_{1} not equal to 1. I assume
the appropriate test is a likelihood ratio test, but please correct me if I
am wrong. How do I assess the type I error rate for the random-effects
model specified below?

library(MASS)  # for mvrnorm()
library(lme4)  # for lmer()

set.seed(323)
# The following function specifies the structure and parameters of the
# random-effects model
dtfunc = function(nsub){
  time = 0:9
  rt = c()
  time.all = rep(time, nsub)
  subid.all = as.factor(rep(1:nsub, each = length(time)))

  # Step 1:  Specify the lambdas.
  G = matrix(c(2^2, 0, 0, 1^2), nrow = 2)
  int.mean = 251
  slope.mean = 10
  sub.ints.slopes = mvrnorm(nsub, c(int.mean, slope.mean), G)
  sub.ints = sub.ints.slopes[,1]
  time.slopes = sub.ints.slopes[,2]

  # Step 2:  Use the intercepts and slopes to generate RT data
  sigma = 30
  for (i in 1:nsub){
    rt.vec = sub.ints[i] + time.slopes[i]*time +
      rnorm(length(time), sd = sigma)
    rt = c(rt, rt.vec)
  }

  dat = data.frame(rt, time.all, subid.all)
  return(dat)
}

# Fit one random-effects model to a simulated dataset
set.seed(10)
dat = dtfunc(16)
lmer(rt~time.all + (1+time.all |subid.all), dat)
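From this single fit, one way I can think of to operationalize the test (a sketch, not necessarily the right approach; `lmer` cannot refit with a variance component fixed at a nonzero value, so instead of an explicit likelihood ratio test this uses the duality between tests and profile confidence intervals) is to extract the estimated SDs and check whether a profile interval covers the hypothesized value. The compact `dtfunc` below is just a self-contained copy of the function above; the `.sig01`/`.sig03` parameter names are lme4's labels for the intercept and slope SDs in a model that also estimates their correlation:

```r
library(MASS)  # mvrnorm()
library(lme4)  # lmer(), VarCorr(), confint()

# Compact copy of dtfunc() from above (true SDs: intercept 2, slope 1)
dtfunc = function(nsub){
  time = 0:9
  G = matrix(c(2^2, 0, 0, 1^2), nrow = 2)
  b = mvrnorm(nsub, c(251, 10), G)
  rt = c(t(b[, 1] + outer(b[, 2], time) +
             matrix(rnorm(nsub * length(time), sd = 30), nsub)))
  data.frame(rt = rt,
             time.all  = rep(time, nsub),
             subid.all = factor(rep(1:nsub, each = length(time))))
}

set.seed(10)
dat = dtfunc(16)
fit = lmer(rt ~ time.all + (1 + time.all | subid.all), dat)

# Estimated SDs of the random intercept and random slope
vc = as.data.frame(VarCorr(fit))
sd_int   = vc$sdcor[vc$var1 == "(Intercept)" & is.na(vc$var2)][1]
sd_slope = vc$sdcor[vc$var1 == "time.all"    & is.na(vc$var2)][1]

# 95% profile confidence intervals for the two SD parameters
# (.sig01 = intercept SD, .sig03 = slope SD; .sig02 is the correlation)
ci = confint(fit, parm = c(".sig01", ".sig03"), method = "profile")

# "Reject" H0: lambda_0 = 2 if 2 falls outside the intercept-SD interval
reject_int = ci[".sig01", 1] > 2 | ci[".sig01", 2] < 2
```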

Assuming the significance test is a likelihood ratio test, I want, in the
end, to run the test 1000 times and see what the probability of rejecting
the null hypothesis is when it is TRUE. Also, how do I plot the behavior of
the type I error rate as the values of the standard deviations change?
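Here is the sort of loop I have in mind (a sketch only; it substitutes a profile-confidence-interval check for the likelihood ratio test, since `lmer` cannot fit the constrained null with an SD fixed at 2, and it extends `dtfunc` with SD arguments so the true value can be varied — both of those choices are mine):

```r
library(MASS)  # mvrnorm()
library(lme4)  # lmer(), confint()

# dtfunc() from above, extended so the true SDs can be varied
dtfunc = function(nsub, sd_int = 2, sd_slope = 1){
  time = 0:9
  G = diag(c(sd_int^2, sd_slope^2))
  b = mvrnorm(nsub, c(251, 10), G)
  rt = c(t(b[, 1] + outer(b[, 2], time) +
             matrix(rnorm(nsub * length(time), sd = 30), nsub)))
  data.frame(rt = rt,
             time.all  = rep(time, nsub),
             subid.all = factor(rep(1:nsub, each = length(time))))
}

# One replicate: TRUE if the 95% profile CI for the intercept SD excludes
# the true value used to simulate (a false rejection under H0), NA if the
# profile fails
one_rep = function(nsub = 16, sd_int = 2){
  dat = dtfunc(nsub, sd_int = sd_int)
  fit = suppressWarnings(lmer(rt ~ time.all + (1 + time.all | subid.all), dat))
  ci = tryCatch(confint(fit, parm = ".sig01", method = "profile",
                        quiet = TRUE),
                error = function(e) NULL)
  if (is.null(ci)) return(NA)
  ci[1, 1] > sd_int | ci[1, 2] < sd_int
}

set.seed(323)
nsim = 10   # use 1000 in practice; kept small here so the sketch runs quickly
type1 = mean(replicate(nsim, one_rep()), na.rm = TRUE)

# Type I error as the true intercept SD varies (each value is both the
# simulation truth and the H0 value, so every rejection is a type I error)
sds = c(1, 2, 4)
rates = sapply(sds, function(s)
  mean(replicate(nsim, one_rep(sd_int = s)), na.rm = TRUE))
plot(sds, rates, type = "b",
     xlab = "true SD of random intercept",
     ylab = "estimated type I error rate")
abline(h = 0.05, lty = 2)
```

With `nsim = 1000` the rejection proportion should settle near the nominal 5% level if the interval-based test is well calibrated at that sample size.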

Any help is appreciated!

Best,

Chao



