[R-sig-ME] contradictory odds ratios--a problem with the equation or the interpretation?

Johnathan Jones johnathan.jones at gmail.com
Sun May 16 00:23:39 CEST 2021

Hi all,

Thanks to everyone who contributed to this thread. Each comment and query
was helpful, not just for my immediate needs, but for a general
troubleshooting of mixed models.

I've modified the original equation somewhat by removing Language as a
predictor and allowing association to vary by participant. For anyone
interested, the best-fitting model--both conceptually and statistically--is:

  correct response to a sentential listening prompt ~
    isolated speech task 1 + isolated speech task 2 + association +
    (association | participant) + (1 | item)

This yields the following:
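The formula above can be sketched as an lme4 call; the variable and data
frame names here (correct, task1, task2, association, participant, item,
dat) are placeholders for illustration, not the author's actual column
names:

```r
# Logistic mixed model: by-participant random slope for association,
# random intercept for item (a sketch, assuming lme4 and a binary outcome).
library(lme4)

fit <- glmer(
  correct ~ task1 + task2 + association +
    (association | participant) + (1 | item),
  data   = dat,
  family = binomial(link = "logit")
)
summary(fit)
```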
*Predictors              Odds Ratios   CI            p*
(Intercept)              3.25          1.82 – 5.79
bVt Transcription        1.03          1.01 – 1.05   0.006
Oddity                   1.04          1.02 – 1.06
Association—opposite     0.34          0.21 – 0.55   <0.001
Association—same         3.33          2.03 – 5.47   <0.001
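For anyone reproducing a table like this: glmer reports estimates on the
logit scale, so the odds ratios above come from exponentiating the fixed
effects. A minimal sketch, assuming a fitted model object named `fit`:

```r
# Convert logit-scale fixed effects to odds ratios with Wald CIs.
# `fit` is assumed to be the glmer fit from the formula above.
or <- exp(fixef(fit))
ci <- exp(confint(fit, parm = "beta_", method = "Wald"))
cbind(OR = or, ci)
```

Profile CIs (`method = "profile"`) are more accurate but slower; Wald
intervals are usually fine for a quick table.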

John, Simpson's paradox is a keen observation and something to keep an eye
on in subsequent related work. For unbalanced designs, though, wouldn't you
say mixed models handle this relatively well? Is that not one of their main
advantages over a traditional (or repeated-measures) analysis of variance?

Ben, I really like this approach. We've controlled pretty well for "word
type" and have a fairly tight understanding of which (types of) words lead
to perceptual issues in second language learners, but this could perhaps be
something to use down the line in another capacity. Thanks for the
suggestion and please feel free to email me if this is something you're
interested in.

All the best,

John Jones
E: johnathan.jones at gmail.com
SM: linkedin.com/in/johnathanjones

