[R-sig-ME] mcmcpvalue and contrasts

Steven McKinney smckinney at bccrc.ca
Tue Feb 26 20:21:23 CET 2008


> -----Original Message-----
> From: Ken Beath [mailto:kjbeath at kagi.com]
> Sent: Mon 2/25/2008 8:46 PM
> To: Steven McKinney
> Cc: Hank Stevens; Help Mixed Models
> Subject: Re: [R-sig-ME] mcmcpvalue and contrasts
>  
> On 26/02/2008, at 1:22 PM, Steven McKinney wrote:
> 
> >
> > Hi Hank,
> >
> >>
> >> -----Original Message-----
> >> From: r-sig-mixed-models-bounces at r-project.org on behalf of Ken Beath
> >> Sent: Mon 2/25/2008 4:05 PM
> >> To: Hank Stevens
> >> Cc: Help Mixed Models
> >> Subject: Re: [R-sig-ME] mcmcpvalue and contrasts
> >>
> >> On 26/02/2008, at 9:42 AM, Hank Stevens wrote:
> >>
> >>> Hi Folks,
> >>> I wanted to double check that my intuition makes sense.
> >>>
> >>> Examples of mcmcpvalue that I have seen use treatment "contrast"
> >>> coding.
> >>> However, in more complex designs, testing overall effects of a  
> >>> factor
> >>> might be better done with other contrasts, such as sum or Helmert
> >>> contrasts.
> >>>
> >>> My Contention:
> >>> Different contrasts test different hypotheses, and therefore
> >>> result in
> >>> different P-values. This consequence of contrasts differs from
> >>> analysis of variance, as in anova( lm(Y ~ X1*X2) ).
> >>>
> >>> *** This is right, isn't it? ***
> >>>
> >>
> >> The main problem is testing for a main effect in the presence of
> >> interaction. While it looks like it gives sensible results in some
> >> cases like balanced ANOVA, they really aren't sensible and the effect
> >> of parameterisation in other cases makes that clear.
> >>
> >> The difference for the interaction is probably just sampling
> >> variation; increasing the sample size fixes this.
> >>
> >> Ken
> >
> > Ken is correct - testing the main effect terms under the different
> > parameterizations arising from the differing contrast structures
> > will yield different results (though those results will in general
> > be somewhat meaningless if the corresponding interaction term is in
> > the model and you do not have a balanced orthogonal design).
> >
> 
> Even with orthogonal designs there is still a problem with  
> interpretation. If we have a model with A*B and the interaction and B  
> are significant, then it seems that the conclusion about B is limited  
> to the choice of A in the experiment.  An assumption that the effect  
> of B will be the same with a different set of A seems rather risky,  
> although it seems to be what the FDA expect for multi-centre trials.  
> If there is some need to generalise then a model allowing for a
> random effect for B seems more sensible.
> 
> Ken
> 

Yes, in general if an interaction A*B is significant, then both main
effects that comprise the interaction are 'significant' in the sense
that they are the variables defining the significant interaction.  The
interaction is significant because the relationship between the
response and one of the main effect terms depends on the level of the
other main effect term, so it is important that both main effects
remain in the model so that the model can properly characterize this
complex relationship among the three variables involved.
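
As a rough illustration (simulated data; the names A, B, Y and d below
are made up for this sketch, not taken from anyone's analysis), compare
a model with both main effects and the interaction to one with a main
effect dropped:

  set.seed(1)
  d <- expand.grid(A = factor(1:2), B = factor(1:3), rep = 1:10)
  d$Y <- with(d, 0.5 * (A == "2") + 1.0 * (B == "3") +
                 1.5 * (A == "2" & B == "3") + rnorm(nrow(d)))

  full    <- lm(Y ~ A * B, data = d)    # both main effects plus interaction
  dropped <- lm(Y ~ A + A:B, data = d)  # B main effect omitted (not advised)

  coef(full)
  coef(dropped)  # the A:B coefficients now also absorb the omitted B
                 # effect, so they no longer estimate pure interaction
                 # contrasts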

Deleting either main effect from a model containing the interaction is
rarely advisable as doing so can yield biased estimates of the
interaction terms.  If the interaction term is significant, no more
testing need be done - both the main effects that comprise the
interaction are important and necessary.  Testing the main effects
that comprise a significant interaction using models that contain the
interaction seldom makes sense.
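
To make Hank's original point concrete, here is a small simulated
sketch (variable names are again illustrative): with the interaction in
the model, the single-coefficient "main effect" tests depend on the
contrast coding, while the sequential table from anova(), as in his
anova( lm(Y ~ X1*X2) ) example, does not.

  set.seed(2)
  d <- expand.grid(A = factor(1:2), B = factor(1:2), rep = 1:8)
  d$Y <- with(d, as.numeric(A) + as.numeric(B) +
                 0.8 * (A == "2") * (B == "2") + rnorm(nrow(d)))

  fit.trt <- lm(Y ~ A * B, data = d,
                contrasts = list(A = "contr.treatment", B = "contr.treatment"))
  fit.sum <- lm(Y ~ A * B, data = d,
                contrasts = list(A = "contr.sum", B = "contr.sum"))

  ## the single-coefficient tests differ across codings:
  summary(fit.trt)$coefficients["A2", ]  # A at the reference level of B
  summary(fit.sum)$coefficients["A1", ]  # A averaged over the levels of B
  ## the sequential F tests do not depend on the contrast coding:
  anova(fit.trt)
  anova(fit.sum)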

The second issue you raise involves extrapolating beyond the range of
the available data.  Conclusions about A and B are indeed limited to
the range of values covered by variables A and B.  An assumption that
the effect of B will be the same with a different set of A is indeed
rather risky.  Only if one can soundly argue that the set of
institutions in a multicentre trial truly reflects the full range of
populations that will be offered the treatment under study can one
make such generalizations.
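
One way to set up the kind of model Ken suggests: if the centres are
viewed as a sample from the larger population of interest, centre can
enter as a grouping factor whose intercept and treatment effect vary
randomly.  A minimal lme4 sketch (the data frame 'trial_data' and the
columns 'response', 'treatment' and 'centre' are hypothetical):

  library(lme4)
  ## random intercept per centre, plus a random treatment effect per
  ## centre, so the treatment-by-centre interaction is treated as random
  fit <- lmer(response ~ treatment + (1 + treatment | centre),
              data = trial_data)
  summary(fit)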

Steve



