[R-sig-ME] IRT: discrimination parameters from LMER?
Paul Johnson
pauljohn32 at gmail.com
Mon Nov 1 21:40:17 CET 2010
On Sat, Oct 30, 2010 at 4:54 PM, Doran, Harold <HDoran at air.org> wrote:
> IRT people can be odd. Yes, your interpretation of the differences between the ltm package and lme4 is correct. lmer estimates the standard deviation of the population distribution, but ltm fixes it and then estimates a discrimination value that is constant for all items. It is currently not possible to estimate the 2PL using lmer, but that model can be easily estimated using other functions, such as those in ltm.
Thanks for answering!
I'm glad to learn I'm not grossly misunderstanding IRT/lmer. What do
you think about my observation that the difficulty estimates from ltm
seem to be systematically different from, and better than, the lmer
estimates?
Do you care to weigh in on whether lmer could ever be made to fit a
2PL or 3PL model? I mean, could I write a family for lmer that would
do that, or is the required calculation just too different?
If 2PL or 3PL really is a different structure altogether, I don't mind
using "ltm." But that defeats the ideal you proposed in the 2007 JSS
article (which I really like!): that we might be better off fitting
specialized models with general-purpose software.
Here's a similar example. Sometimes I have wanted to fit ordinal
dependent variables, and have learned that those models do not fall
into the GLM family, so a tool other than lmer is needed for them
(something like the cumulative-link mixed model sketched below). I
wish it weren't so, but I have accepted reality.
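For example, a cumulative-link mixed model (the ordinal analogue of
what lmer does) can be sketched with the ordinal package's clmm(); the
data frame here is hypothetical, not part of the script quoted below:

library(ordinal)   ## cumulative-link mixed models
## dford is hypothetical: "rating" is an ordered factor, "item" a factor, "id" a grouping variable
mord <- clmm(rating ~ item + (1 | id), data = dford)
summary(mord)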
pj
> Hello, everybody.
>
> I've fallen into a crowd of IRT users. I remembered seeing the IRT
> with lmer paper.
>
> ## Doran, Harold, Douglas Bates, Paul Bliese,
> ## Maritza Dowling, 2007. “Estimating the
> ## Multilevel Rasch Model: With the lme4 Package.”
> ## Journal of Statistical Software 20(2): 1–18.
>
> I had not understood that a Rasch model is a very specialized kind of
> logistic regression. The items are all assumed to have the same
> discrimination parameter.
>
> I started to wonder if I could get a 2 parameter logistic IRT from
> lmer, and I'm a bit stuck. I'm comparing against the output from the
> rasch function in the "ltm" package. I'm pasting in the code below
> for comparison, and after that I have the output so you can compare.
>
> I found a nice BUGS survey of the various kinds of IRT models, in case
> you wonder what all of these are. The argument there is the same one
> that Doran et al. make, in favor of using general-purpose software
> for IRT.
>
> Curtis, S. McKay. 2010. “BUGS Code for Item Response Theory.” Journal
> of Statistical Software, Code Snippets 36(1): 1–34.
>
> In the most restrictive form of the Rasch model, the discrimination
> parameter is assumed to be 1, but there is an estimate of the standard
> deviation of the latent ability coefficient. In other implementations
> of Rasch, it is assumed the latent trait is N(0,1) and then a
> discrimination coefficient is estimated.
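>
> Written out (just a sketch of the two linear predictors, for item j and
> person i):
>
>   form 1: logit P(y_ij = 1) = easiness_j + theta_i,        theta_i ~ N(0, sigma^2)
>   form 2: logit P(y_ij = 1) = easiness_j + disc * theta_i, theta_i ~ N(0, 1)
>
> with sigma playing the role of the common discrimination disc.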
>
> I *believe* that is the point of comparison between ltm's rasch and
> lmer's random-effect estimate. ltm's rasch assumes the individual
> ability parameter is N(0,1) and estimates a discrimination coefficient
> common to all items:
>
> disc * N(0,1)
>
> and the lmer estimates disc as the standard deviation of the random effect.
>
> N(0, disc^2).
>
> So the random effect standard deviation estimated by lmer is one
> estimate of that same disc parameter.
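>
> A tiny numerical check of that equivalence (a sketch, independent of
> the script below):
>
> set.seed(1)
> disc <- 2.2
> c(sd(disc * rnorm(100000)), sd(rnorm(100000, sd = disc)))  ## both approx 2.2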
>
> This little simulation shows that lmer's fixed effects for the items
> are very close to the difficulty parameters from rasch. The single
> discrimination estimate from ltm is usually closer to the "true"
> discrimination value than the standard deviation of the random effect
> in lmer, but both are in the ballpark.
>
> Partly I'm curious to know if you agree I'm comparing the numbers properly.
>
> But, also, I want to estimate a different discrimination parameter for
> each item, so I can get an IRT 2PL model. It would be nice to get a
> 3PL to allow a guessing parameter.
>
> Once I realized this mismatch between lmer and the more general IRT, I
> started to think either 1) we really do need specialized IRT software,
> or 2) perhaps I need to learn to write family code for lmer.
>
> PJ
>
> ## Paul Johnson Oct. 30, 2010
> ## Item Response Theory, compare the
> ## IRT-specific R package "ltm" with the mixed-effects GLM
> ## perspective of lmer in "lme4" package.
>
> ## The argument against IRT-specific software is that it
> ## restricts the user; it may be better to adapt general-purpose
> ## programs to estimate the same models. That's argued in
> ## Doran et al. (2007) and Curtis (2010), cited above.
>
> library(ltm)
>
> library(lme4)
>
>
> ### Create Data: 300 respondents with 3 items each.
> N <- 300
> Adiff <- c(-1, 2, 2) ### actually, easiness
> Bdisc <- rep(2.2, 3) ### rasch requires all same!
>
> z <- rnorm( N, 0, sd=1)
>
> #create one long column for latent trait
> z3 <- kronecker(Adiff, rep(1,N)) - kronecker(Bdisc, z)
> ## Caution about the signs of the parameters. Very
> ## confusing: standard IRT writes the linear predictor
> ## as Bdisc * (theta - difficulty). So with easiness
> ## Adiff = c(-1, 2, 2), the IRT-scale difficulty
> ## estimates should be -Adiff/Bdisc, roughly
> ## c(0.45, -0.91, -0.91)
>
>
> pr <- 1/(1 + exp(-1*(z3)))
> y <- rbinom(length(pr), prob=pr, size=1)
>
> id <- rep(1:N, 3)
> item <- c( rep(1, N), rep(2, N), rep(3, N))
>
> ### the item matrix, N by 3
> maty <- matrix(y, ncol=3)
>
>
> ### Long version of data frame for use with lmer
> dflong <- data.frame("id"=id, "item"=item, "value"=y)
>
> ### convert "item" to factor variable
> dflong$item <- as.factor(dflong$item)
> ### create indicators for items, just experimenting
> dflong$item1 <- ifelse(dflong$item == "1",1,0)
> dflong$item2 <- ifelse(dflong$item == "2",1,0)
> dflong$item3 <- ifelse(dflong$item == "3",1,0)
>
> ### Fit with rasch in ltm.
> ### IRT.param=F reports estimates in a form comparable
> ### to lmer, i.e. as b0 + b1*theta rather than the
> ### IRT convention b1*(theta - b0)
>
> dfrasch <- rasch(maty, IRT.param=F)
> summary(dfrasch)
>
> ### Compare to IRT standard coding, for fun
> dfrasch2 <- rasch(maty)
> summary(dfrasch2)
> ### Note the discrimination parameter is the same as in dfrasch,
> ### but the difficulties are parameterized differently. The dfrasch2
> ### difficulties convert to the dfrasch intercepts when multiplied
> ### by minus the discrimination.
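> ### A quick numerical check of that conversion (a sketch; the exact
> ### layout of coef() output may differ across ltm versions):
> cf2 <- coef(dfrasch2)                 ## columns Dffclt and Dscrmn
> -cf2[, "Dffclt"] * cf2[, "Dscrmn"]    ## should reproduce the dfrasch intercepts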
>
>
> ### Use lmer; the fixed-effect estimates for
> ### "item1", "item2", "item3" are the "difficulty"
> ### parameters.
>
> dflmer <- lmer (value ~ -1 + item1 + item2 + item3 + (1|id) ,
> data=dflong, family=binomial)
> summary(dflmer)
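>
> ### Note: newer versions of lme4 fit binomial mixed models with glmer()
> ### rather than lmer(..., family=); a sketch of the same call:
> ### dflmer <- glmer(value ~ -1 + item1 + item2 + item3 + (1|id),
> ###                 data=dflong, family=binomial)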
>
> ### How to estimate the "discrimination parameter"?
> ### I'm just guessing now. rasch assumes all
> ### discrimination parameters for all items are same.
> ### Recall x ~ N(0, s^2), same as x ~ s * N(0,1).
> ### So estimated standard deviation of random effect
> ### is same as one discrimination parameter "s" in
> ### above. I think.
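>
> ### A sketch for pulling that number out and lining it up with ltm
> ### (VarCorr() is the lme4 accessor; the exact object layout varies
> ### by version):
> vc <- VarCorr(dflmer)
> sqrt(vc$id[1, 1])               ## random-effect SD, one common "discrimination"
> coef(dfrasch2)[1, "Dscrmn"]     ## ltm's common discrimination, for comparison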
>
>
> dflmer2 <- lmer (value ~ -1 + item + (1|id) , data=dflong, family=binomial)
> summary(dflmer2)
>
>
> ### 2PL: ltm() estimates a separate discrimination for each item
> ldfy <- ltm(maty ~ z1, IRT.param=F)
> summary(ldfy)
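>
> ### For the 3PL with a guessing parameter, ltm also provides tpm();
> ### a sketch, not run here:
> ### ldf3 <- tpm(maty)
> ### summary(ldf3)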
>
> ##################################################
>
>
> Here's the output:
>
>
>> library(ltm)
> Loading required package: MASS
> Loading required package: msm
> Loading required package: mvtnorm
> Loading required package: polycor
> Loading required package: sfsmisc
>
> This is package 'ltm' version '0.9-5'
>
>> library(lme4)
> Loading required package: Matrix
> Loading required package: lattice
>
> Attaching package: 'Matrix'
>
> The following object(s) are masked from 'package:base':
>
> det
>
>
> Attaching package: 'lme4'
>
> The following object(s) are masked from 'package:stats':
>
> AIC
>
>> N <- 300
>> Adiff <- c(-1, 2, 2) ### actually, easiness
>> Bdisc <- rep(2.2, 3) ### rasch requires all same!
>> z <- rnorm( N, 0, sd=1)
>> z3 <- kronecker(Adiff, rep(1,N)) - kronecker(Bdisc, z)
>> pr <- 1/(1 + exp(-1*(z3)))
>> y <- rbinom(length(pr), prob=pr, size=1)
>> id <- rep(1:N, 3)
>> item <- c( rep(1, N), rep(2, N), rep(3, N))
>> maty <- matrix(y, ncol=3)
>> dflong <- data.frame("id"=id, "item"=item, "value"=y)
>> dflong$item <- as.factor(dflong$item)
>> dflong$item1 <- ifelse(dflong$item == "1",1,0)
>> dflong$item2 <- ifelse(dflong$item == "2",1,0)
>> dflong$item3 <- ifelse(dflong$item == "3",1,0)
>> dfrasch <- rasch(maty, IRT.param=F)
>> summary(dfrasch)
> Call:
> rasch(data = maty, IRT.param = F)
>
> Model Summary:
> log.Lik AIC BIC
> -483.7193 975.4385 990.2537
>
> Coefficients:
> value std.err z.vals
> Item1 -1.1239 0.2029 -5.5392
> Item2 1.8955 0.2371 7.9957
> Item3 1.7567 0.2305 7.6227
> z 1.8959 0.2336 8.1157
>
> Integration:
> method: Gauss-Hermite
> quadrature points: 21
>
> Optimization:
> Convergence: 0
> max(|grad|): 0.003
> quasi-Newton: BFGS
>
>
>> dfrasch2 <- rasch(maty)
>> summary(dfrasch2)
>
> Call:
> rasch(data = maty)
>
> Model Summary:
> log.Lik AIC BIC
> -483.7193 975.4385 990.2537
>
> Coefficients:
> value std.err z.vals
> Dffclt.Item 1 0.5928 0.1082 5.4799
> Dffclt.Item 2 -0.9998 0.1249 -8.0022
> Dffclt.Item 3 -0.9266 0.1211 -7.6502
> Dscrmn 1.8959 0.2336 8.1157
>
> Integration:
> method: Gauss-Hermite
> quadrature points: 21
>
> Optimization:
> Convergence: 0
> max(|grad|): 0.003
> quasi-Newton: BFGS
>
>
>> dflmer <- lmer (value ~ -1 + item1 + item2 + item3 + (1|id) , data=dflong, family=binomial)
>> summary(dflmer)
> Generalized linear mixed model fit by the Laplace approximation
> Formula: value ~ -1 + item1 + item2 + item3 + (1 | id)
> Data: dflong
> AIC BIC logLik deviance
> 984 1003 -488 976
> Random effects:
> Groups Name Variance Std.Dev.
> id (Intercept) 2.9100 1.7059
> Number of obs: 900, groups: id, 300
>
> Fixed effects:
> Estimate Std. Error z value Pr(>|z|)
> item1 -1.1155 0.1754 -6.358 2.04e-10 ***
> item2 1.8382 0.1938 9.486 < 2e-16 ***
> item3 1.7049 0.1898 8.984 < 2e-16 ***
> ---
> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> Correlation of Fixed Effects:
> item1 item2
> item2 0.247
> item3 0.255 0.310
>>
>
>
>
>
> --
> Paul E. Johnson
> Professor, Political Science
> 1541 Lilac Lane, Room 504
> University of Kansas
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
--
Paul E. Johnson
Professor, Political Science
1541 Lilac Lane, Room 504
University of Kansas