[R-sig-ME] Rasch with lme4
Andy Fugard
andy.fugard at sbg.ac.at
Tue Jun 9 12:59:15 CEST 2009
Dear all,
What happens in practice when you compare the two approaches: item as a
fixed effect versus a random effect?
Consider:
M1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
M2 <- lm(Reaction ~ Days + factor(Subject), sleepstudy)
The slope estimates for Days are practically identical, but the mean
intercepts differ:
For M1:
...
Fixed effects:
            Estimate Std. Error t value
(Intercept) 251.4051     9.7459   25.80
Days         10.4673     0.8042   13.02
...
For M2:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 295.0310    10.4471  28.240  < 2e-16 ***
Days         10.4673     0.8042  13.015  < 2e-16 ***
...
I didn't look at the estimates for Subject; e.g., for M2 the coefficients are:
factor(Subject)309 -126.9008    13.8597  -9.156 2.35e-16 ***
factor(Subject)310 -111.1326    13.8597  -8.018 2.07e-13 ***
factor(Subject)330  -38.9124    13.8597  -2.808 0.005609 **
...
But it could be done...
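Fleshing that out as a sketch (the lme4 accessors ranef, fixef and coef all exist; the recentring step is my own assumption about how to put the two sets of estimates on a common scale, since M2's dummies are deviations from the reference subject while M1's BLUPs are deviations from the fixed intercept):

```r
library(lme4)  # sleepstudy ships with lme4

M1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
M2 <- lm(Reaction ~ Days + factor(Subject), sleepstudy)

## Conditional modes (BLUPs) of the subject intercepts from M1:
re <- ranef(M1)$Subject[, "(Intercept)"]

## M2's dummy coefficients, with a 0 prepended for the reference
## subject so both vectors cover all 18 subjects:
fe <- c(0, coef(M2)[grep("factor\\(Subject\\)", names(coef(M2)))])

## Recentre both sets and plot one against the other; shrinkage of
## the BLUPs toward zero shows up as points tilted off the 1:1 line.
plot(fe - mean(fe), re - mean(re),
     xlab = "M2 fixed-effect deviations", ylab = "M1 BLUPs")
abline(0, 1)
```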
Is there a paper on these sorts of comparisons? How does the
mixed-effects approach differ from a standard regression model with a
heap of categorical predictors representing, e.g., deviations from the
mean intercept?
Presumably this comparison could also be made for item estimates, e.g.,
in binary logistic models and beyond.
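For the binary case, a hedged sketch of the two Rasch parameterisations with glmer, on simulated data (the data frame and its column names person, item, resp are purely illustrative):

```r
library(lme4)

## Simulate long-format Rasch data: 100 persons x 10 items.
set.seed(1)
d <- expand.grid(person = factor(1:100), item = factor(1:10))
ability  <- rnorm(100)[d$person]  # person abilities
easiness <- rnorm(10)[d$item]     # item easinesses
d$resp <- rbinom(nrow(d), 1, plogis(ability + easiness))

## Item as fixed effect (the -1 drops the intercept, so each item
## gets its own easiness parameter):
R_fixed  <- glmer(resp ~ -1 + item + (1 | person),
                  data = d, family = binomial)

## Item as random effect (the Doran et al. style):
R_random <- glmer(resp ~ 1 + (1 | item) + (1 | person),
                  data = d, family = binomial)

## Item easiness estimates to compare:
fixef(R_fixed)
ranef(R_random)$item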
Cheers,
Andy
Ken Beath wrote:
> On 09/06/2009, at 8:58 AM, Stuart Luppescu wrote:
>
>> On Tue, 2009-06-09 at 08:04 +1000, Ken Beath wrote:
>>> The model treats item as a random effect when it should be a fixed effect.
>>
>> Hmm. In Doran, Bates, Bliese and Dowling (2007), the authors treat the
>> item as random.
>>
>
> It can be argued that the items are a sample from a population of items
> which is possibly reasonable for educational testing where there might
> be a population of questions which can be asked. Even so, assumptions
> about the distribution are optimistic and most items are used because
> they test something obvious. Maybe others have a different philosophy. A
> more pedantic argument is that this isn't the model Rasch used.
>
>> [snip]
>>> Another question to ask is whether the Rasch model is appropriate. If
>>> an IRT is more sensible it would cause some problems with the second
>>> model.
>>
>> Sorry, but I don't understand this at all.
>>
>
> By an IRT model I mean the two-parameter version, where there is a
> discrimination parameter that varies among items, in contrast to the
> Rasch model, where it is constant. It probably gives problems with the
> other model as well, but the second model should have more problems.
>
> I don't like the idea of assuming a Rasch model at all; its popularity
> seems to derive from an era when fitting anything else was difficult.
> Modern software offers proper solutions, unfortunately at a cost, but
> that shouldn't be a consideration.
>
> Ken
>
>
>> --
>> Stuart Luppescu -=- slu .at. ccsr.uchicago.edu
>> University of Chicago -=- CCSR
>> 才文と智奈美の父 -=- Kernel 2.6.28-gentoo-r5
>> Drusilla: How do you feel about eternal life?
>> Xander: We couldn't just start with coffee?
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
--
Andy Fugard, Post-doc, ESF LogICCC (LcpR) project
Fachbereich Psychologie, Universitaet Salzburg
Hellbrunnerstr. 34, 5020 Salzburg, Austria
+43 (0)680 2199 346 http://figuraleffect.googlepages.com