[R-sig-ME] mcmcpvalue and contrasts

Ken Beath kjbeath at kagi.com
Tue Feb 26 05:46:22 CET 2008

On 26/02/2008, at 1:22 PM, Steven McKinney wrote:

> Hi Hank,
>> -----Original Message-----
>> From: r-sig-mixed-models-bounces at r-project.org on behalf of Ken Beath
>> Sent: Mon 2/25/2008 4:05 PM
>> To: Hank Stevens
>> Cc: Help Mixed Models
>> Subject: Re: [R-sig-ME] mcmcpvalue and contrasts
>> On 26/02/2008, at 9:42 AM, Hank Stevens wrote:
>>> Hi Folks,
>>> I wanted to double check that my intuition makes sense.
>>> Examples of mcmcpvalue that I have seen use treatment "contrast"
>>> coding.
>>> However, in more complex designs, testing overall effects of a factor
>>> might be better done with other contrasts, such as sum or Helmert
>>> contrasts.
>>> My Contention:
>>> Different contrasts test different hypotheses, and therefore result in
>>> different P-values. This consequence of contrasts differs from
>>> analysis of variance, as in anova( lm(Y ~ X1*X2) ).
>>> *** This is right, isn't it? ***
>> The main problem is testing for a main effect in the presence of an
>> interaction. While this looks like it gives sensible results in some
>> cases, such as balanced ANOVA, the results really aren't sensible, and
>> the effect of parameterisation in other cases makes that clear.
>> The difference for the interaction is probably just sampling
>> variation; increasing the number of samples fixes this.
>> Ken
> Ken is correct - testing some of the main-effect terms resulting from
> different parameterizations due to the differing contrast structures
> will yield different results (though in general they will be somewhat
> meaningless if the corresponding interaction term is in the model
> and you do not have a balanced orthogonal design).
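The contrast-dependence can be seen directly in R. This is a minimal sketch with simulated data (the factor names and effect sizes are made up for illustration): the individual main-effect coefficient tests a different hypothesis under each coding, while the (sequential) ANOVA table for this balanced design is unchanged.

```r
set.seed(1)
## Simulated balanced two-way layout with an interaction
d <- expand.grid(X1 = factor(c("a", "b")),
                 X2 = factor(c("p", "q", "r")),
                 rep = 1:10)
d$Y <- with(d, 0.5 * (X1 == "b") + (X1 == "b") * (X2 == "q") + rnorm(nrow(d)))

## The same model under two contrast codings
fit_trt <- lm(Y ~ X1 * X2, data = d,
              contrasts = list(X1 = "contr.treatment", X2 = "contr.treatment"))
fit_sum <- lm(Y ~ X1 * X2, data = d,
              contrasts = list(X1 = "contr.sum", X2 = "contr.sum"))

## The 'X1' row tests a different hypothesis under each coding:
## under treatment contrasts it is the effect of b at the baseline level of X2,
## under sum contrasts it is a deviation from the grand mean,
## so the estimate and p-value differ
coef(summary(fit_trt))["X1b", ]
coef(summary(fit_sum))["X11", ]

## ...while the sequential ANOVA table is unaffected in this balanced design
anova(fit_trt)
anova(fit_sum)
```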

Even with orthogonal designs there is still a problem with
interpretation. If we have a model with A*B and both the interaction
and B are significant, then the conclusion about B is limited to the
levels of A used in the experiment. Assuming that the effect of B will
be the same for a different set of A levels seems rather risky,
although it seems to be what the FDA expects for multi-centre trials.
If there is some need to generalise, then a model allowing for a
random effect for B seems more sensible.
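As a rough sketch of that last suggestion, one way to fit such a model is with lme() from the recommended nlme package; the factor names and simulated data below are hypothetical, with A playing the role of treatment and B the grouping factor treated as random:

```r
library(nlme)  # ships with R as a recommended package

set.seed(2)
## Hypothetical multi-centre-style setup: treatment A crossed with grouping factor B
d <- expand.grid(A = factor(c("ctl", "trt")),
                 B = factor(paste0("grp", 1:8)),
                 rep = 1:5)
grp_eff <- rnorm(nlevels(d$B), sd = 0.5)      # random shift for each level of B
d$Y <- as.numeric(d$A == "trt") + grp_eff[as.integer(d$B)] + rnorm(nrow(d))

## Fixed effect for A, random intercept for each level of B
fit <- lme(Y ~ A, random = ~ 1 | B, data = d)
summary(fit)
```

Treating B as random lets the inference about A generalise to the population of B levels, rather than being conditional on the particular levels sampled.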


More information about the R-sig-mixed-models mailing list