[R-sig-ME] Post-hoc tests for model comparison, unbalanced binomial repeated data

Losecaat Vermeer, A.B. (Annabel) a.losecaatvermeer at fcdonders.ru.nl
Tue Aug 13 20:54:29 CEST 2013


Dear R-list members,


I have a question concerning post-hoc tests for unbalanced repeated data, and hope you can help me out. 

I have analysed my data, which come from a 2x3x3 design, using the glmer() function, and found a significant two-way interaction (Outcome*Group). Now I would like to test which differences between the levels of these factors are significant. What is the best way to approach this?

My data look like this (schematically):
Within-subject fixed factors: Outcome (2 levels: Reward, Punishment), ChoiceType (3 levels: G, N, B)
Between-subject fixed factor: Group (3 levels: A, B, C)

DV (binary response): gambled (1) or not (0)
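Schematically, the model I fit looks something like the following (the data-frame and variable names d, gamble, and Subject are placeholders for my actual ones, and I only use a random intercept per subject):

```r
library(lme4)

## Sketch of the fitted model -- d, gamble, Subject are placeholder names
m <- glmer(gamble ~ Outcome * ChoiceType * Group + (1 | Subject),
           data = d, family = binomial)
summary(m)
```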


How can I correctly test which of the differences described by the interaction (Outcome*Group) are significant? That is, do the groups differ from each other within each Outcome (do A and B differ significantly in gambling for Reward, and likewise A-C and B-C?), and does each group separately differ in gambling between Outcomes (e.g. Group A: Reward vs. Punishment)?
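One approach I have been wondering about (I am not sure it is appropriate) is to combine the two interacting factors into a single cell factor and test all pairwise cell contrasts with the multcomp package, which does adjust for multiplicity; again the variable names are placeholders:

```r
library(lme4)
library(multcomp)

## Combine Outcome and Group into one 6-level cell factor (assumption:
## this recoding is a valid way to get at the interaction contrasts)
d$OutGrp <- interaction(d$Outcome, d$Group)
m2 <- glmer(gamble ~ OutGrp + ChoiceType + (1 | Subject),
            data = d, family = binomial)

## Tukey-style all-pairwise comparisons among the six cells,
## with multiple-comparison adjustment
summary(glht(m2, linfct = mcp(OutGrp = "Tukey")))
```

Would this be a defensible way to unpack the interaction, or is there a better one?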

I have tried several things, such as reducing the data via drop.levels() to a single level of Outcome and then running model comparisons with glmer() and anova() again. However, this does not correct for multiple comparisons. I have also used pairwise.t.test(), but that ignores the repeated-measures nature of the data.
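Concretely, the subset-and-refit attempt looked something like this (placeholder names again; I show base R's droplevels(), which does the same job as drop.levels() from gdata):

```r
library(lme4)

## Keep only the Reward observations and drop the unused factor level
dr <- droplevels(subset(d, Outcome == "Reward"))

## Compare models with and without Group within the Reward data
m_full <- glmer(gamble ~ Group + ChoiceType + (1 | Subject),
                data = dr, family = binomial)
m_red  <- glmer(gamble ~ ChoiceType + (1 | Subject),
                data = dr, family = binomial)
anova(m_full, m_red)   # likelihood-ratio test for Group within Reward
```

This tells me whether Group matters within Reward overall, but not which pairs of groups differ, and repeating it per Outcome level inflates the error rate.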


In addition, does anyone know which test could be used if I had exactly the same design as above but with only within-subject factors (again repeated measures, with all factors slightly unbalanced)? I run into errors that my x and y are not equal in length when trying some post-hoc tests.


Any help is much appreciated!

Many thanks,
Annabel Vermeer


