[R-sig-ME] Fwd: questions about mixed logit models with R

Douglas Bates bates at stat.wisc.edu
Tue Jan 17 17:38:49 CET 2012


Yet another occasion when I said I would cc: the list and forgot to.


---------- Forwarded message ----------
From: Douglas Bates <bates at stat.wisc.edu>
Date: Tue, Jan 17, 2012 at 10:38 AM
Subject: Re: questions about mixed logit models with R
To: Angel Tabullo <angeltabullo at yahoo.com>


I suggest that you send such a request to the
R-SIG-Mixed-Models at R-project.org mailing list, which I am copying on
this reply.  There are several experts who read that list and may be
able to provide help more readily than I can.

On Tue, Jan 17, 2012 at 9:04 AM, Angel Tabullo <angeltabullo at yahoo.com> wrote:
> Dear professor Bates
>
> My name is Angel Tabullo; I'm a PhD student currently working in
> neurolinguistics and experimental psychology. I'm trying to run a mixed
> effects model analysis on some behavioral data with R, but I'm quite new to
> this kind of statistics and I'm having trouble interpreting the results. I'm
> writing to you because I found your tutorial on the web and it was very
> helpful.  I also wrote to the R-lang mailing list. I would be very thankful
> for any advice you could give me in this matter.
>
> In my experiment, subjects were exposed to artificial languages with
> different word orders (two of them frequent among the world's languages, SOV
> and SVO, and two of them infrequent, VSO and OSV). After training, subjects
> had to classify new sentences as "correct" or "incorrect", according to what
> they had learned. Sentences could either be correct, contain a syntax
> violation, or contain a semantic violation (a mismatch between a scene and
> the sentence describing it). The dependent variables were response latency
> and accuracy (right or wrong answer). I'm trying to analyze the accuracy
> data (1 = right answer, 0 = wrong answer) using a mixed logit model with
> "word order" (OSV, SVO, SOV, VSO) and "type of sentence" (correct, semantic
> violation, syntax violation) as fixed factors, and subject as a random
> factor. Word order is a between-subjects variable, while type of sentence
> is a repeated measures factor.
>
> My questions are:
>
> 1) In order to contrast each level of each factor with all the others, as
> well as their interactions: should I run different models, changing the
> reference category each time? Does this mean I should run 4 x 3 = 12 models?
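On question 1: changing the reference category does not require writing out
every model by hand; the usual device is relevel(), which only changes which
level the coefficients are contrasted against, not the fitted model itself.
A minimal sketch, reusing the data and variable names (DatosAngel, grupo,
tipoF, sujeto, respuest) from the model quoted later in this message:

```r
## Sketch only: refit the same model with SOV as the reference level of
## 'grupo'.  Names are taken from the poster's model; relevel() is the
## standard way to pick a different baseline for a factor.
library(lme4)
DatosAngel$grupo <- relevel(DatosAngel$grupo, ref = "SOV")
m_SOVref <- glmer(respuest == "1" ~ grupo * tipoF + (1 | sujeto),
                  data = DatosAngel, family = binomial)
```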
> 2) Would it be correct to compare interaction levels with post hoc Tukey
> contrasts (for instance: OSV - correct vs. OSV - semantic violation, SVO -
> correct vs. OSV - correct, and so on)?
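On question 2: one commonly suggested device for cell-by-cell Tukey
comparisons is to recode the two factors as a single factor whose levels are
the interaction cells, and then ask the multcomp package for all pairwise
contrasts. A hedged sketch — the 'cell' variable and model name here are
invented for illustration, not from the original post:

```r
## Sketch: pairwise (Tukey) comparisons among the grupo x tipoF cells,
## via multcomp::glht on a one-factor recoding of the design.
library(lme4)
library(multcomp)
DatosAngel$cell <- interaction(DatosAngel$grupo, DatosAngel$tipoF)
m_cells <- glmer(respuest == "1" ~ cell + (1 | sujeto),
                 data = DatosAngel, family = binomial)
summary(glht(m_cells, linfct = mcp(cell = "Tukey")))
```

With 12 cells this yields many comparisons, so the multiplicity adjustment
matters; whether all of them are of scientific interest is a separate
question.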
> 3) How do I interpret a significant interaction? For instance:
>
> ModeloAngel <- glmer(respuest == "1" ~ grupo * tipoF + (1 | sujeto),
>                      data = DatosAngel, family = binomial)
>
> Fixed effects:
>                        Estimate Std. Error z value Pr(>|z|)
> (Intercept)             1.79585    0.19196   9.356  < 2e-16 ***
> grupoOSV                0.25816    0.26740   0.965   0.3343
> grupoSOV                0.70875    0.29315   2.418   0.0156 *
> grupoSVO                0.59607    0.26769   2.227   0.0260 *
> tipoFVsemanti          -1.01756    0.14765  -6.892 5.51e-12 ***
> tipoFVsintact          -1.46088    0.14566 -10.029  < 2e-16 ***
> grupoOSV:tipoFVsemanti -0.29214    0.20841  -1.402   0.1610
> grupoSOV:tipoFVsemanti -0.39714    0.23265  -1.707   0.0878 .
> grupoSVO:tipoFVsemanti  0.03181    0.21459   0.148   0.8821
> grupoOSV:tipoFVsintact  0.83284    0.21107   3.946 7.95e-05 ***
> grupoSOV:tipoFVsintact  0.42079    0.23408   1.798   0.0722 .
> grupoSVO:tipoFVsintact  0.16667    0.21136   0.789   0.4304
>
> If the reference levels are VSO and "correct": does this mean that the
> performance of OSV in syntax-violation trials is better than that of VSO in
> syntax-violation trials? Or does it mean that OSV syntax-violation
> performance is better than VSO "correct" performance?
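On question 3: neither reading is quite right. On the log-odds scale the
interaction coefficient is a difference of differences — it says how much
the OSV-minus-VSO gap changes when moving from "correct" to syntax-violation
trials. The predicted log-odds for any single cell are obtained by summing
the relevant coefficients; for example, using the estimates printed above
for the OSV / syntax-violation cell:

```r
## Predicted log-odds of a correct response for the OSV / syntax-violation
## cell, assembled from the fixed-effects estimates quoted above:
## (Intercept) + grupoOSV + tipoFVsintact + grupoOSV:tipoFVsintact
eta <- 1.79585 + 0.25816 - 1.46088 + 0.83284   # = 1.42597
plogis(eta)   # back-transform to a probability; roughly 0.81
```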
>
> Thank you again for your kind attention, I look forward to your answer.



