[R-sig-ME] multiple comparison (many to one) of fixed effect and random slope in lmer model
bbolker at gmail.com
Tue Apr 12 16:03:34 CEST 2011
On 04/12/2011 09:50 AM, Yuan-Ye Zhang wrote:
> 2011/4/11 Yuan-Ye Zhang <zhangyuanye0706 at gmail.com>
>> Dear list,
>> Sorry, I am not sure whether I have sent this mail to the list several times,
>> but I kept receiving failed-delivery notifications.
>> I have a fixed effect (treatment) in my lmer model with 3 levels (C, D and
>> N). My question is whether treatment D or treatment N each has a significant
>> effect compared to treatment C.
>> My model is written as
>> model <- lmer(Y ~ harvest.time + treatment + (1|table) + (treatment|family))
>> Y: continuous
>> harvest.time: continuous, positively influence Y
>> treatment: categorical, levels =3 (C, D, N)
>> table: categorical, levels=6 (C1, C2, D1, D2, N1, N2) each treatment has
>> two tables
>> family: categorical factor, levels=134 (1, 2, 3, ..., 134); each family has 3
>> replicates on each table, hence 6 replicates per treatment, but I
>> have several missing values.
>> If I use an LRT, anova(model, update(model, ~ . - treatment)), I get an idea
>> of whether treatment overall has a significant effect on Y.
>> *But how can I know whether this significance is due to the significant
>> effect of D versus C, or the effect of N versus C?*
>> Similarly, if I compare the model via anova() against model.1 <- lmer(Y ~
>> harvest.time + treatment + (1|table) + (1|family)), I get an idea of whether
>> there is a significant random slope of families among treatments (i.e., a
>> family*treatment interaction).
>> But how can I know whether there is a significant family*treatment
>> interaction between C and D, or between C and N?
>> One solution that I can think of is just to divide the dataset into two
>> subsets, C&D and C&N. Is this correct?
>> And is there any solution for lmer models comparable to TukeyHSD or Dunnett's
>> test for multiple comparisons?
I have three answers to this:
(1) I'm not wild about _post hoc_ pairwise tests in any case. They have
their place, but I think it is often sufficient to show that there is an
overall significant pattern, and then to interpret the pattern of
coefficients and confidence intervals on them, rather than specifically
fishing for p-values on particular contrasts.
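For example, a minimal sketch of inspecting the coefficients and intervals directly (assuming the questioner's fit is stored as `model`; note that confint() methods for merMod objects are only available in newer versions of lme4):

```r
library(lme4)
## treatmentD and treatmentN are estimated contrasts against baseline C
fixef(model)
## quick Wald confidence intervals for the fixed effects
confint(model, method = "Wald")
```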
(2) For fixed-effect comparisons, the multcomp package seems to behave
sensibly (use with caution; there may be some issues with finite-sample
corrections). For example, with the cbpp data from lme4:
library(lme4)
## observation-level random effect ('obs') to allow for overdispersion
cbpp$obs <- 1:nrow(cbpp)
gm2 <- glmer(cbind(incidence, size - incidence) ~ period +
                 (1 | herd) + (1 | obs),
             family = binomial, data = cbpp)
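The multcomp call for the gm2 fit above might then look like the following sketch (assuming glht() and mcp() from multcomp; the same pattern applies to the questioner's treatment factor in the lmer fit):

```r
library(multcomp)
## Dunnett-type "many to one" contrasts: each period level vs. the baseline,
## with p-values adjusted for the multiple comparisons
summary(glht(gm2, linfct = mcp(period = "Dunnett")))
```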
(3) I'm not sure about the comparison you want to make above, which
involves random effects. You want to know "whether there is significant
family*treatment interaction between C and D" ... I think your idea of
dividing the data into subsets is a good one in this case.
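The subsetting idea might be sketched like this (untested; 'mydata' is a placeholder for the original data frame): fit the random-slope and random-intercept models on the C-and-D subset only, and compare them with a likelihood-ratio test.

```r
library(lme4)
## restrict to treatments C and D, dropping the unused N level
cd <- droplevels(subset(mydata, treatment %in% c("C", "D")))
m.slope <- lmer(Y ~ harvest.time + treatment + (1 | table) +
                    (treatment | family), data = cd)
m.int   <- lmer(Y ~ harvest.time + treatment + (1 | table) +
                    (1 | family), data = cd)
## LRT for the family-by-treatment random interaction within C vs. D;
## note the p-value is conservative for variance-component tests
anova(m.slope, m.int)
```

Repeating the same comparison on the C-and-N subset addresses the second question.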