[R-sig-ME] glht different results for different hypotheses?

538280 at gmail.com
Fri Feb 10 18:36:05 CET 2012


The more comparisons/tests you run, the more opportunities you have
to make a type I error.  Multiple comparisons procedures adjust for
the number of comparisons so that the overall probability of making
at least one type I error (the family-wise error rate) stays fixed.
So the more comparisons, the larger the adjustment that needs to be
made.
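To put a number on it: if each test is run at level alpha = 0.05 and the tests were independent, the chance of at least one type I error in a family of m tests is 1 - (1 - alpha)^m.  A quick sketch in R (the independence assumption is an idealization; correlated contrasts inflate the rate somewhat less):

```r
alpha <- 0.05
m <- c(1, 4, 7, 10, 20)
# probability of at least one type I error across m independent tests
fwer <- 1 - (1 - alpha)^m
round(fwer, 3)
# 0.050 0.185 0.302 0.401 0.642
```

So a family of 7 tests already carries roughly a 30% chance of at least one false positive if no adjustment is made.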

Think of this simple example.  You are playing a game where you try
to throw a wadded-up piece of paper into a basket, and you win if you
get it in at least once.  What are your chances of winning with 10
tries compared to 20 tries (from the same spot)?  If you want the
same chance of winning with 20 tries as you had with 10 tries (or 1
try), then you need to move further away or accept some other
penalty.

So with glht there is a bigger penalty when you test more comparisons,
since there are more opportunities to make a type I error.
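glht uses a single-step max-t adjustment by default, but the same family-size effect shows up with the simpler Bonferroni correction, which just multiplies each raw p-value by the number of tests in the family.  A minimal sketch in base R, with made-up raw p-values for illustration:

```r
p_raw  <- c(0.010, 0.020, 0.030, 0.040)   # four hypothetical raw p-values
p_more <- c(p_raw, 0.150, 0.200, 0.250)   # the same four plus three extra tests
# Bonferroni: adjusted p = min(1, raw p * family size)
p.adjust(p_raw,  method = "bonferroni")[1]   # 0.01 * 4 = 0.04 -> still < 0.05
p.adjust(p_more, method = "bonferroni")[1]   # 0.01 * 7 = 0.07 -> no longer < 0.05
```

The same contrast crosses from significant to non-significant purely because the family grew, which is what happens between the 4-hypothesis and 7-hypothesis families in the example below.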

On Thu, Feb 9, 2012 at 3:25 AM, m.fenati at libero.it <m.fenati at libero.it> wrote:
>
>
> Dear R users,
> I would like to understand a simple problem related to glht() multeplicity correction and linear Hypotheses testing. Given a simple lme model with two predictors (group = 3 levels; time =  2 levels) and their interaction with treatment contrast, I see that the p-values are lower and higher when I test few or many hypotheses respectively. Because I dont't have a deep knowledge of multiple comparison theory, I ask you some suggestion or explanation about the different obtained results.
> As you can see in the example below, "m1" and "m2" test a different number of hypotheses but comparing the same hypothesis a different results occurred.
>
>
> library(nlme)      # for lme()
> library(multcomp)  # for glht()
> time<-rep(c(rep(0,8),rep(1,8)),3)
> group<-c(rep(0,16),rep(1,16),rep(2,16))
> id<-c(rep(1:8,2),rep(9:16,2),rep(17:24,2))
> w<-c(172.9, 185.8, 173.1, 187.3, 161.6, 167.1, 168.4, 161.1, 166.5, 175.3, 167.1, 181.9, 163.0, 167.7, 172.1, 170.3, 167.2, 183.3, 160.7,167.8, 149.6, 159.1, 164.2, 171.0, 168.6, 173.5, 161.8, 166.5, 148.4, 167.1, 166.8, 166.6, 150.6, 178.4, 166.4, 159.2, 163.2, 167.8, 136.6, 161.8, 166.1, 175.8, 175.6, 166.2, 168.5, 170.5, 152.0, 164.4)
> dati<-data.frame(time,group,id,w)
> dati$time<-as.factor(dati$time)
> dati$group<-as.factor(dati$group)
> dati$id<-as.factor(dati$id)
>
>
>
>
> kp<-rbind("after Treatment: Group 1 - Controls"=c(0,1,0,0,0,0),
>         "after Treatment: Group 2 - Controls"=c(0,0,1,0,0,0),
>         "before Treatment: Group 1 - Controls"=c(0,1,0,0,1,0),
>         "before Treatment: Group 2 - Controls"=c(0,0,1,0,0,1),
>         "Controls: time trend (T1 - T0)"=c(0,0,0,-1,0,0),
>         "Group 1: time trend (T1 - T0)"=c(0,0,0,-1,-1,0),
>         "Group 2: time trend (T1 - T0)"=c(0,0,0,-1,0,-1))
>
>
> k<-rbind("after Treatment: Group 1 - Controls"=c(0,1,0,0,0,0),
>         "after Treatment: Group 2 - Controls"=c(0,0,1,0,0,0),
>         "before Treatment: Group 1 - Controls"=c(0,1,0,0,1,0),
>         "before Treatment: Group 2 - Controls"=c(0,0,1,0,0,1)
>         )
>
>
>
>
> w.lme<-lme(w~group*time,data=dati,random=~1|id)
> m1<-summary(glht(w.lme,kp))
> m2<-summary(glht(w.lme,k))
>
>
> Thanks in advance for your suggestions
>
>
> Massimo
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



-- 
Gregory (Greg) L. Snow Ph.D.
538280 at gmail.com



