[R-sig-ME] LME model comparison - likelihood ratio tests

Paraskevi Argyriou pargyriou at gmail.com
Fri Feb 6 16:42:28 CET 2015


Hi there,

I was hoping I could get some advice on the following:

*Info of design*
DV = continuous
IV1 = categorical predictor with two levels
IV2 = categorical predictor with two levels
Both factors are manipulated within participants and within items
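
For concreteness, the data are in long format, one row per trial. A
minimal sketch of the layout (the data frame name "dat" and all values
are hypothetical):

## hypothetical long-format layout: one row per trial
dat <- data.frame(
  Participant = factor(rep(paste0("p", 1:2), each = 4)),
  Item        = factor(rep(paste0("i", 1:2), times = 4)),
  IV1         = factor(rep(c("a", "b"), each = 2, times = 2)),
  IV2         = factor(rep(c("x", "y"), times = 4)),
  DV          = rnorm(8)   # continuous outcome
)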

Following Barr et al. (2013) on keeping the random-effects structure
maximal in within-subjects/within-items psycholinguistic designs, and
because my research question requires assessing the interaction between
the two predictors, I built the following model:

library(lme4)

model1 <- lmer(DV ~ IV1 + IV2 + IV1:IV2 +
               (1 + IV1:IV2 | Participant) + (1 + IV1:IV2 | Item),
               data = dat, REML = FALSE)

*Other models*
model2 <- lmer(DV ~ IV1*IV2 +
               (1 + IV1*IV2 | Participant) + (1 + IV1*IV2 | Item),
               data = dat, REML = FALSE)

model.null.1 <- lmer(DV ~ IV1 + IV2 +
                     (1 + IV1:IV2 | Participant) + (1 + IV1:IV2 | Item),
                     data = dat, REML = FALSE)

model.null.2 <- lmer(DV ~ 1 +
                     (1 + IV1*IV2 | Participant) + (1 + IV1*IV2 | Item),
                     data = dat, REML = FALSE)
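
(My understanding is that IV1*IV2 expands to IV1 + IV2 + IV1:IV2, so
model1 and model2 have the same fixed effects; they differ only in the
random part, where (1 + IV1*IV2 | ...) also gives slopes for the IV1
and IV2 main effects. A quick check of the fixed-effects expansion:)

d <- expand.grid(IV1 = factor(c("a", "b")), IV2 = factor(c("x", "y")))
all.equal(model.matrix(~ IV1 * IV2, d),
          model.matrix(~ IV1 + IV2 + IV1:IV2, d))
## TRUE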

*Questions*
1. Is model1 the correct one?
2. What is the best comparison for the likelihood ratio tests to assess
whether the interaction improves the model fit? Would it be
anova(model.null.1, model1)? Does it make sense to use a null model like
model.null.2 and compare it with model2? (See the first sketch below.)
3. If the interaction turns out not to be important for the model fit,
is it acceptable to go on to explore the simple main effects and
contrasts (using the glht() function)? (See the second sketch below.)
4. Is it good practice to center categorical predictors? How do we
perform contrasts with centered predictors? (See the last sketch below.)
5. Why does R give different results with string categorical predictors
versus dummy-coded versus centered ones? (The last sketch shows the
three codings I mean.)
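
For question 2, the comparisons I have in mind are (both models in each
pair already fitted with REML = FALSE, as above):

anova(model.null.1, model1)  # LRT: does adding IV1:IV2 improve the fit?
anova(model.null.2, model2)  # LRT: do the fixed effects jointly improve the fit?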
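
For question 3, the kind of follow-up I mean, using glht() from the
multcomp package on the model without the interaction:

library(multcomp)
## pairwise comparisons of the levels of each predictor
summary(glht(model.null.1, linfct = mcp(IV1 = "Tukey")))
summary(glht(model.null.1, linfct = mcp(IV2 = "Tukey")))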
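
For questions 4 and 5, the three codings I am comparing (a sketch,
using "dat" from above):

## (a) default: a factor gets treatment (dummy) coding, 0/1
dat$IV1 <- factor(dat$IV1)
contrasts(dat$IV1)                 # contr.treatment

## (b) sum coding, -1/+1 ("centered" when the design is balanced)
contrasts(dat$IV1) <- contr.sum(2)

## (c) a centered numeric predictor, -0.5/+0.5
dat$IV1.c <- ifelse(dat$IV1 == levels(dat$IV1)[1], -0.5, 0.5)

My understanding is that these are reparameterisations of the same
model, so the overall fit is unchanged; what changes is what the
individual coefficients mean, which is why the printed results differ.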

I am sorry for all these questions; please excuse my ignorance.
Many thanks in advance for any help.
