[R] Helmert contrasts for repeated measures and split-plot expts

Spencer Graves spencer.graves at pdf.com
Fri Oct 13 17:19:58 CEST 2006


<comments in line>      

Roy Sanderson wrote:
> Dear R-help
>
> I have two separate experiments, one a repeated-measures design, the other
> a split-plot.  In a standard ANOVA I have usually undertaken a
> multiple-comparison test on a significant factor with e.g TukeyHSD, but as
> I understand it such a test is inappropriate for repeated measures or
> split-plot designs.
>
> Is it therefore sensible to use Helmert contrasts for either of these
> designs?  Whilst not providing all the pairwise comparisons of TukeyHSD,
> presumably the P-statistic for each Helmert contrast will indicate clearly
> whether that contrast is significant and should be retained in the model.
> (This seems to come with the disadvantage that the parameter values are
> harder to interpret than with Treatment contrasts.)  In the
> repeated-measures design the factor in question has three levels, whilst in
> the split-plot design it has four.
>   
      You don't need to restrict yourself to Helmert vs. treatment 
contrasts:  You can use any set of "contrasts" that will provide 
estimates of (k-1) parameters for a factor with k levels and interpret 
the p values as you suggest.  I see two issues with doing this:  
correlation among parameter estimates and individual vs. group p values. 
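
      For example, here is a minimal sketch (simulated data and 
hypothetical variable names) of assigning Helmert contrasts to a 
three-level within-subject factor and reading one p value per 
contrast from the coefficient table of an 'lme' fit: 

library(nlme)

set.seed(1)
dat <- data.frame(
  subject = factor(rep(1:10, each = 3)),
  trt     = factor(rep(c("A", "B", "C"), times = 10)),
  y       = rnorm(30)
)

## assign Helmert contrasts to the factor;  any full-rank set of
## (k-1) contrasts could be substituted here
contrasts(dat$trt) <- contr.helmert(3)

fit <- lme(y ~ trt, random = ~ 1 | subject, data = dat)
summary(fit)$tTable   # one row (and one p value) per contrast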

CORRELATED PARAMETER ESTIMATES:  Helmert contrasts are orthogonal for a 
balanced design but will produce correlated parameter estimates with an 
unbalanced design.  This will generally increase the p values due to 
"variance inflation" created by the correlation.  If one or more 
correlations are too large, you may wish to try custom contrasts that 
produce parameter estimates that are essentially uncorrelated;  this 
should give you the smallest p value you could expect for that 
comparison.  If I were interested in, e.g., 2*k comparisons, I might run 
the same analysis several times with different contrasts, taking the p 
value for each comparison from an analysis in which the coefficient for 
that comparison had a low correlation with the remaining (k-2) 
coefficients for that k-level factor. 
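
      To check how correlated those estimates actually are, you can 
convert the coefficient covariance matrix into a correlation matrix 
and look at the off-diagonal entries for the factor terms (using the 
hypothetical 'fit' object from the sketch above;  the same call works 
for lm or aov fits): 

## correlations among the estimated coefficients;  values near 0 mean
## essentially orthogonal contrasts, while large values signal the
## "variance inflation" mentioned above
round(cov2cor(vcov(fit)), 2)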

INDIVIDUAL VS. GROUP p VALUES:  In many but not all cases, under the 
null hypothesis of no effect, a p value will follow a uniform 
distribution.  Thus, if we compute 1,000 p values using a typical 
procedure when nothing is going on, we can expect roughly 50 of them to 
be less than 0.05 by chance alone.  The Bonferroni inequality suggests 
that if we do m comparisons, we should multiply the smallest p value by 
m to convert it to a family- or group-wise p value.  This is known to be 
conservative, and with more than (k-1) comparisons among k levels of a 
factor, it is extremely conservative.  In that case, I would be inclined 
to multiply the smallest p value by (k-1), even if I considered many 
more than (k-1) comparisons among the k levels.  I don't know a 
reference for doing this, and if I were going to do it for a 
publication, I might do some simulations to check it.  Perhaps someone 
else might enlighten us both on how sensible this might be. 
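
      A small sketch of both points:  under the null, simulated p 
values are roughly uniform (about 5% fall below 0.05 by chance), and 
a Bonferroni-style family-wise adjustment just multiplies each raw p 
value by the number of comparisons, capped at 1 (the p values below 
are hypothetical): 

## roughly 5% of null p values fall below 0.05 by chance alone
set.seed(2)
p.null <- replicate(1000, t.test(rnorm(10), rnorm(10))$p.value)
mean(p.null < 0.05)

## hypothetical per-contrast p values and their Bonferroni adjustment
p.raw <- c(0.012, 0.048, 0.20)
pmin(1, p.raw * length(p.raw))          # by hand
p.adjust(p.raw, method = "bonferroni")  # same thing via p.adjust()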

      Hope this helps. 
      Spencer Graves
> Many thanks in advance
> Roy
> -----------------------------------------------------------------------------------
> Roy Sanderson
> Institute for Research on Environment and Sustainability
> Devonshire Building
> University of Newcastle
> Newcastle upon Tyne
> NE1 7RU
> United Kingdom
>
> Tel: +44 191 246 4835
> Fax: +44 191 246 4999
>
> http://www.ncl.ac.uk/environment/
> r.a.sanderson at newcastle.ac.uk
>
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


