[R-sig-ME] multiple comparisons (many to one) of fixed effect and random slope in lmer model

Ben Bolker bbolker at gmail.com
Tue Apr 12 16:54:01 CEST 2011


On 04/12/2011 10:26 AM, Yuan-Ye Zhang wrote:
> Dear Ben,
> 
> Thank you very much for your kind explanation. 
> 
> And this also reassured me that my mail was not rejected, because I
> kept receiving rejection mails.... :(
> 
> (1) Maybe a further related question: in lmer I get the SEs of the
> fixed effects in the summary.  Can I use a simple X +/- 1.96 * SE for
> the confidence intervals, or do I need to do a parametric bootstrap?

  It depends on your sample size.  Since you have 134 families, I would
guess that your sample is large enough that the +/- 1.96 SE (Wald)
confidence intervals will be very close to the parametric bootstrap
results.  (You might try one example to convince yourself; see the
sketch below.)
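
  A minimal sketch of that check (this assumes a recent lme4, whose
confint() methods postdate this thread; "model" stands in for your
fitted lmer object):

library(lme4)
## Wald intervals, i.e. estimate +/- 1.96 * SE (fixed effects only)
confint(model, method = "Wald")
## parametric bootstrap intervals (nsim kept small for speed)
confint(model, method = "boot", nsim = 500)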

> And if I
> have two datasets analyzed using the same model, how can I compare
> whether the treatment effect (or the treatment * family interaction)
> in data.1 is larger than in data.2?  What kind of bootstrap should I
> use?

  Comparing across data sets is sometimes tricky because the standard
model comparison recipes don't work.  I would think the most rigorous
approach would be to combine the two data sets into a single data set
with an additional variable that marks the origin of each data point
(e.g. rbind(data.frame(data1,orig=1),data.frame(data2,orig=2))) and then
run an analysis that includes interactions with "orig" and see if the
treatment:orig interaction (for example) is significant.  More crudely,
you could assume normal sampling distributions for the parameters and do
a t-test for equality of the parameters.
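
  A minimal sketch of the combined-data approach, using your model
formula (data1, data2, and the fitted objects here are placeholders; I
assume the two data frames have identical columns):

library(lme4)
## combine the two data sets, tagging each row with its origin
both <- rbind(data.frame(data1, orig = 1),
              data.frame(data2, orig = 2))
both$orig <- factor(both$orig)  ## origin as a factor, not a number
## (you may need to recode table/family so that codes are unique
##  across the two data sets)
## full model: treatment effects may differ between origins
m1 <- lmer(Y ~ harvest.time + treatment * orig + (1|table) +
             (treatment|family), data = both)
## reduced model: same treatment effects in both data sets
m0 <- lmer(Y ~ harvest.time + treatment + orig + (1|table) +
             (treatment|family), data = both)
anova(m1, m0)  ## LRT for the treatment:orig interaction
## (the cruder alternative: z = (b1 - b2)/sqrt(se1^2 + se2^2), with b
##  and se taken from the two separate model summaries)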

> 
> (2) I will try it. Many thanks for the code.
> 
> Best,
> yuanye
> 
> 
> 2011/4/12 Ben Bolker <bbolker at gmail.com>
> 
>     On 04/12/2011 09:50 AM, Yuan-Ye Zhang wrote:
>     > 2011/4/11 Yuan-Ye Zhang <zhangyuanye0706 at gmail.com>
>     >
>     >> Dear list,
>     >>
>     >> Sorry, I am not sure whether I sent this mail to the list
>     >> several times, but I kept receiving failed-delivery notices...
>     >
>     >
>     >> Hi,
>     >>
>     >> I have a fixed effect (treatment) in my lmer model with 3
>     >> levels (C, D, and N).  My question is whether treatment D or N
>     >> has a significant effect compared to treatment C.
>     >>
>     >> My model is written as
>     >>
>     >> model <- lmer(Y ~ harvest.time + treatment + (1|table) +
>     >>                 (treatment|family))
>     >>
>     >> Y: continuous
>     >> harvest.time: continuous, positively influences Y
>     >> treatment: categorical, 3 levels (C, D, N)
>     >> table: categorical, 6 levels (C1, C2, D1, D2, N1, N2); each
>     >> treatment has two tables
>     >> family: categorical factor, 134 levels (1, 2, 3, ..., 134);
>     >> each family has 3 replicates on each table, hence 6 replicates
>     >> per treatment, but I have several missing values.
>     >>
>     >>
>     >> If I use the LRT anova(model, update(model, ~ . - treatment)),
>     >> I get an idea of whether treatment overall has a significant
>     >> effect on Y.  *But how can I know whether this significance is
>     >> due to a significant effect of D versus C, or of N versus C?*
>     >> Similarly, if I anova() the model against model.1 <- lmer(Y ~
>     >> harvest.time + treatment + (1|table) + (1|family)), I get an
>     >> idea of whether there is a significant random slope of families
>     >> across treatments (i.e. a family * treatment interaction).
>     >> But how can I know whether there is a significant family *
>     >> treatment interaction between C and D, or between C and N?
>     >>
>     >> One solution that I can think of is just to divide the dataset
>     >> into two subsets, C&D and C&N.  Is that correct?
>     >> And is there any solution for lmer models comparable to
>     >> TukeyHSD- or Dunnett-style multiple comparisons?
> 
>      I have three answers to this:
> 
>      (1) I'm not wild about _post hoc_ pairwise tests in any case. They have
>     their place, but I think it is often sufficient to show that there is an
>     overall significant pattern, and then to interpret the pattern of
>     coefficients and confidence intervals on them, rather than specifically
>     fishing for p-values on particular contrasts.
> 
>      (2) For fixed-effect comparisons, the multcomp package seems to
>     behave sensibly (use with caution; there may be some issues with
>     finite-sample corrections):
> 
>     library(lme4)
>     ## observation-level random effect to allow for overdispersion
>     cbpp$obs <- 1:nrow(cbpp)
>     gm2 <- glmer(cbind(incidence, size - incidence) ~ period +
>                    (1|herd) + (1|obs),
>                  family = binomial, data = cbpp)
>     library(multcomp)
>     ## all pairwise comparisons among the levels of period
>     glht(gm2, linfct = mcp(period = "Tukey"))
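> 
>     Since your subject line asks about many-to-one comparisons, the
>     Dunnett contrasts may be closer to what you want.  A minimal
>     sketch on your own fit (assuming C is the baseline level of
>     treatment, and "model" is your fitted lmer object):
> 
>     ## each treatment vs. the control C (i.e. D - C and N - C),
>     ## with multiplicity-adjusted p-values
>     summary(glht(model, linfct = mcp(treatment = "Dunnett")))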
> 
>     (3) I'm not sure about the comparison you want to make above, which
>     involves random effects.  You want to know "whether there is significant
>     family*treatment interaction between C and D" ...  I think your idea of
>     dividing the data into subsets is a good one in this case.
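> 
>     A minimal sketch of that subsetting approach (object names here
>     are placeholders; "dat" stands in for your full data frame):
> 
>     ## keep only C and D, dropping the now-unused level N
>     dCD <- droplevels(subset(dat, treatment %in% c("C", "D")))
>     ## full model: random treatment slopes by family
>     mCD1 <- lmer(Y ~ harvest.time + treatment + (1|table) +
>                    (treatment|family), data = dCD)
>     ## reduced model: random intercepts only
>     mCD0 <- lmer(Y ~ harvest.time + treatment + (1|table) +
>                    (1|family), data = dCD)
>     ## LRT for the C-vs-D random slope; repeat with C&N for C vs. N
>     anova(mCD1, mCD0)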