[R-sig-ME] Related fixed and random factors and planned comparisons in a 2x2 design

Houslay, Tom T.Houslay at exeter.ac.uk
Mon Jun 6 19:10:18 CEST 2016

Hi Paul,

I don't think anyone's responded to this yet, but my main point would be that you should check out Schielzeth & Nakagawa's 2012 paper 'Nested by design' ( http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210x.2012.00251.x/abstract ) for a nice rundown on structuring your model for this type of data. 

It may also be worth thinking about how random intercepts work in a visual sense. There are a variety of tools that help you do this from a fitted model (the sjPlot, visreg, and broom packages, for example), or you can plot the different levels yourself: consider plotting the means for the AP, AQ, BP, and BQ cells; then the same with the mean values from each individual overplotted around these group means; and even the group means with all points shown, perhaps coloured by individual. ggplot2 is really useful for putting this type of figure together quickly.
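For instance, a minimal base-R sketch of that cell-means figure (with made-up data standing in for your voltage measurements, and a balanced 2x2 layout assumed) might look like:

```r
# Toy data standing in for the real voltage measurements (assumption:
# a balanced 2x2 layout of group A/B by item P/Q).
set.seed(7)
d <- data.frame(group   = rep(c("A", "B"), each = 20),
                item    = rep(c("P", "Q"), times = 20),
                voltage = rnorm(40))

# One mean per cell: AP, AQ, BP, BQ.
cell_means <- aggregate(voltage ~ group + item, data = d, FUN = mean)
print(cell_means)

# Quick base-graphics version of the group-by-item figure.
with(d, interaction.plot(item, group, voltage, ylab = "mean voltage"))
```

From there you can layer on the per-individual means and raw points (which is where ggplot2 earns its keep).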

As to some of your other questions:

1) You need to keep participant ID in. I'm not 100% sure of your data structure from the question, but you certainly seem to have repeated measures for individuals (I'm assuming that groups A and B each contain multiple individuals, none of whom were in both groups, and each of whom was shown both items P and Q, in a random order). It's not surprising that the effects of group are weakened if you remove participant ID, because you're then effectively introducing pseudoreplication into your model (i.e., telling your model that all the data points within a group are independent, when that isn't the case).
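To see why that matters, here's a small base-R simulation (purely illustrative, with invented numbers): there is no true group effect at all, but each participant contributes 9 correlated channel measurements, and a naive test that treats all 180 points as independent claims far more information than is really there.

```r
set.seed(1)
n_part <- 20   # participants (10 per group)
n_chan <- 9    # repeated measures (channels) per participant

participant <- rep(seq_len(n_part), each = n_chan)
group       <- rep(rep(c("A", "B"), each = n_part / 2), each = n_chan)

# Participant-level intercepts induce correlation within a participant;
# there is no true group effect in the data-generating process.
u       <- rep(rnorm(n_part, sd = 1), each = n_chan)
voltage <- u + rnorm(n_part * n_chan, sd = 0.5)

# Naive test: pretends all 180 points are independent (pseudoreplication).
naive <- t.test(voltage ~ group)

# Honest test: one mean per participant, the true unit of replication.
part_mean  <- tapply(voltage, participant, mean)
part_group <- tapply(group, participant, function(g) g[1])
honest     <- t.test(part_mean ~ part_group)

# The naive test claims far more degrees of freedom than really exist.
c(naive_df = unname(naive$parameter), honest_df = unname(honest$parameter))
```

The mixed model with participant as a random intercept gets this right automatically, which is exactly why the group effect looks "weaker" once you put participant ID back in.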

2) I think channel should be nested within participant, with a model something like:

model <- lmer(voltage ~ group * item + (1|participant/channel), data = ...)
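A note on what the nesting does: (1|participant/channel) expands to (1|participant) + (1|participant:channel), so each participant gets their own copy of every channel label. A tiny base-R illustration of the difference in grouping factors:

```r
# Two participants, each measured at the same three channel labels.
participant <- factor(rep(c("p1", "p2"), each = 3))
channel     <- factor(rep(c("c1", "c2", "c3"), times = 2))

# Crossed coding, (1|participant) + (1|channel): channel "c1" is treated
# as the same effect in every participant -- 3 channel levels in total.
nlevels(channel)

# Nested coding, (1|participant/channel), which expands to
# (1|participant) + (1|participant:channel): each participant-channel
# combination is its own level -- 6 levels in total.
nlevels(interaction(participant, channel, drop = TRUE))
```

So even though the channel labels are shared across participants, the nested coding says "channel c1 in participant p1 is not the same thing as channel c1 in participant p2", which is what you want if the channel effects are participant-specific.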

3) This really depends on what your interest is. If you simply want to show that there is an overall interaction effect, then the p-value from a likelihood ratio test comparing the models with and without the interaction term gives the significance of that interaction, and a plot of predicted values for the fixed effects (with the data overplotted if possible) should show the trends. You could also use binary dummy variables to make more explicit contrasts, but it's worth reading up on these a bit more first. I don't really use these types of comparisons very much, so I can't comment further, I'm afraid.
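The mechanics of that test are just a comparison of nested fits. Here's the shape of it using lm() on toy data so the sketch runs without lme4; with your lmer models (both fitted with REML = FALSE), the same anova(m0, m1) call gives a chi-square likelihood ratio test:

```r
set.seed(42)
d <- data.frame(group = rep(c("A", "B"), each = 50),
                item  = rep(c("P", "Q"), times = 50))
# Invented effect: only group B differs between items P and Q.
d$voltage <- rnorm(100) + ifelse(d$group == "B" & d$item == "Q", 1, 0)

m0 <- lm(voltage ~ group + item, data = d)  # no interaction
m1 <- lm(voltage ~ group * item, data = d)  # with interaction

# Nested-model comparison; the interaction adds exactly one parameter.
anova(m0, m1)
```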

4) Your item variable is like a treatment in this case - you appear to be more interested in the effect of the different items (rather than in how much variation 'item' explains), so keep it as a fixed effect rather than a random effect.

Hope some of this is useful,

Tom



Message: 1
Date: Fri, 3 Jun 2016 14:28:59 +0200
From: paul <graftedlife at gmail.com>
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] Related fixed and random factors and planned comparisons in a 2x2 design

Dear All,

I am trying to use mixed-effects modeling to analyze brain wave data from
two groups of participants who were presented with two distinct
stimuli. The data points (scalp voltage) were gathered from the same set
of 9 nearby channels for each participant. And so I have the following:

   - voltage: the dependent variable
   - group: the between-participant/within-item variable for groups A and B
   - item: the within-participant variable (note there are exactly two
   items, P and Q)
   - participant: identifying each participant across the two groups
   - channel: identifying each channel (note that data from these channels
   in a nearby region tend to display similar, thus correlated, patterns in
   the same participant)

The hypothesis is that only group B will show a difference between P and Q
(i.e., there should be an interaction effect). So I fitted a
mixed-effects model using the lme4 package in R:

model <- lmer(voltage ~ 1 + group + item + group:item + (1|participant) + (1|channel),
              data = data, REML = FALSE)



   1. I'm not sure if it is reasonable to add in participant as a random
   effect, because it is related to group and seems to weaken the effects of
   group. Would it be all right if I don't add it in?

   2. Because the data from nearby channels of the same participant tend to be
   correlated, I'm not sure if modeling participant and channel as crossed
   random effects is all right. But meanwhile it also seems strange to treat
   channel as nested within participant, because they are the same set of
   channels across participants.

   3. The interaction term is significant. But how should planned comparisons
   be done (e.g., differences between groups A and B for item P), or is it even
   necessary to run planned comparisons? I saw suggestions for t-tests,
   lsmeans, glht, or for more complicated methods such as breaking down the
   model and subsetting the data:

   data[, P_True:=(item=="P")]
       , data=data[item=="P"]
       , subset=data$P_True
       , REML=FALSE)

   But especially here, comparing only between the two groups while modeling
   participant as a random effect seems detrimental to the group effects, and
   I'm not sure if it is really OK to do so. On the other hand, because the
   data still contain non-independent data points (from nearby channels), I'm
   not sure if simply using t-tests is all right. Will non-parametric tests
   (e.g., Wilcoxon tests) do in such cases?

   4. I suppose I don't need to model item as a random effect because there
   are only two levels, one for each item, right?

I would really appreciate your help!!

Best regards,


