[R] a kinder view of Type III SS

Bill.Venables at csiro.au
Fri Feb 8 02:35:33 CET 2008


Frank Harrell has already added some comments, with which I agree.

As one of the people who did become rather heated in the discussion, let
me add a few points in a (fairly) calm and considered way.

1. The primary objection I had to all of this is that it encourages
people to think of analysis of variance in such a simplistic way, i.e.
in terms of 'sums of squares'.  This leads to silly questions like
"Well, if you don't like Type III sums of squares, what type do you
like?" as if the concept of multiple "types" of sum of squares had any
meaning.  There is only one type and it represents a squared distance in
the sample space.  The real question is how to interpret the vector in
sample space of which any particular sum of squares is a squared length.
For that you need to be very clear about both the null hypothesis
implicitly being tested, and the outer hypothesis being assumed.  This
issue can become very subtle when interactions are involved, as you
point out.  
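
In R terms, any single sum of squares is nothing more than a comparison of
two nested models, and its meaning changes with the pair of models chosen.
A sketch, assuming a hypothetical data frame dat with response y and
factors A and B:

  anova(lm(y ~ 1, data = dat), lm(y ~ A, data = dat))      # 'SS for A', ignoring B
  anova(lm(y ~ B, data = dat), lm(y ~ A + B, data = dat))  # 'SS for A', adjusted for B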

2. Most of the heat came from resentment that SAS should insinuate its
way into the statistical community in such a Microsoft-like way, i.e.
trying to make both its black-box software and defective terminology
some kind of industry standard.


Bill Venables
CSIRO Laboratories
PO Box 120, Cleveland, 4163
AUSTRALIA
Office Phone (email preferred): +61 7 3826 7251
Fax (if absolutely necessary):  +61 7 3826 7304
Mobile:                         +61 4 8819 4402
Home Phone:                     +61 7 3286 7700
mailto:Bill.Venables at csiro.au
http://www.cmis.csiro.au/bill.venables/ 

-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On Behalf Of Bernard Leemon
Sent: Friday, 8 February 2008 7:42 AM
To: r-help at r-project.org
Subject: [R] a kinder view of Type III SS

A young colleague (Matthew Keller) who is an ardent fan of R is teaching me
much about R and the discussions surrounding its use.  He recently showed me
some of the sometimes heated discussions about Type I and Type III sums of
squares that have taken place over the years on this mailing list.  I'm
presumptuous enough to believe I might add a little clarity.  I write this
from the perspective of someone old enough to have been grateful that the
stat programmers (sometimes me, coding in Fortran) thought to provide me
with model tests I had not asked for, back when I carried heavy boxes of
punched cards across campus to the card reader window only to be told to
come back a day or two later for my output.  I'm also modern enough to know
that anova(model1, model2), where model2 is a proper subset of model1, is
all that I need: it allows me to ask any question of my data that I want to
ask, rather than being constrained to those questions that the SAS or SPSS
programmer thought I might want to ask.

I could end there, and we would probably all agree with what I have said to
this point, but I want to push the issue a bit and say: it seems that Type
III sums of squares are being unfairly maligned among the R cognoscenti.
The practical ramification is that this creates a good deal of confusion
among those migrating from SAS/SPSS land into R - not that this should ever
be a reason to introduce a flawed technique into R, but my argument is that
Type III sums of squares are not a flawed technique.
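
For example, the model comparison might look like this (a minimal sketch,
assuming a hypothetical data frame dat with a response y and predictors x
and w):

  model_full <- lm(y ~ x + w + x:w, data = dat)  # full model
  model_sub  <- lm(y ~ x + w,       data = dat)  # proper subset: drop x:w
  anova(model_sub, model_full)                   # F test for the x:w term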

In my reading of the prior discussions on this list, my conclusion is that
the Type I/Type III issue is a red herring that has generated unnecessary
heat.  Base R readily provides both types.  summary(lm(y ~ x + w + z))
provides estimates and tests consistent with Type III sums of squares (it
doesn't provide the SS directly, but they are easily derived from the
output), and anova(lm(y ~ x + w + z)) provides tests consistent with Type I
sums of squares.  The names Type I and Type III are dreadful "gifts" from
SAS and others.  I'd prefer "conditional tests" for those provided by
summary(), because what is estimated and tested are x|w,z, w|x,z, and z|x,w
[read these as "x conditional on w and z being in the model"], and
"sequential tests" for those provided by anova(), being x, w|x, and z|x,w.
None of these tests is more or less valid or useful than any of the others;
it depends on which questions researchers want to ask of their data.
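
Concretely, the two sets of tests (and the conditional SS) can be obtained
as follows, again assuming a hypothetical data frame dat:

  fit <- lm(y ~ x + w + z, data = dat)

  summary(fit)   # conditional tests: x|w,z   w|x,z   z|x,w
  anova(fit)     # sequential tests:  x,  w|x,  z|x,w

  ## each conditional SS equals t^2 times the residual mean square
  t_vals <- coef(summary(fit))[-1, "t value"]
  ss_conditional <- t_vals^2 * summary(fit)$sigma^2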

Things get more interesting when z represents the interaction between x and
w, such that z = x * w = xw.  Fundamentally everything is the same in terms
of the above tests.  However, one must be careful to understand what the
coefficients and tests for x|w,xw and w|x,xw mean.  That is, x|w,xw tests
the relationship between x and y when, and only when, w = 0.  A very, very
common mistake, due to an overgeneralization of traditional anova models, is
to refer to x|w,xw as the "main effect."  In my list of ten statistical
commandments I include "Thou shalt never utter the phrase main effect,"
because it causes so much unnecessary confusion.  In this case, x|w,xw is
the SIMPLE effect of x when w = 0.  This means, among other things, that if
we instead use w' = w - k, so as to change the 0 point on the w' scale, we
will get a different estimate and test for x|w',xw'.  Many correctly argue
that a main effect is largely meaningless in the presence of an interaction,
because the interaction implies there is no single common effect.  However,
that does not invalidate x|w,xw, because it is NOT a "main" (sense
"principal" or "chief") effect but only a "simple" effect at a particular
level of w.  A useful strategy for testing a variety of simple effects is
therefore to subtract different constants k from w, shifting its 0 value so
as to focus the test on the particular simple effect of interest.
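
In R this is just a matter of refitting with a shifted copy of w.  A minimal
sketch, assuming continuous x and w in the hypothetical data frame dat and
some value k of interest:

  k <- 2                                     # hypothetical value of w at which to test x
  fit_k <- lm(y ~ x * I(w - k), data = dat)
  summary(fit_k)   # the x coefficient is now the simple effect of x at w = k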


If x and w are both contrast codes (-1 or 1) for the two factors of a 2 x 2
design, then x|w,xw is the simple effect of x when w = 0.  While w itself
never equals 0, in a balanced design w does equal 0 on average.  In that one
very special case the simple effect of x when w = 0 equals the average of
all the simple effects, and only in that case might one call it the "main
effect."  In all other situations it is just the simple effect when w = 0.
If we discard the term "main effect", a lot of unnecessary confusion goes
away.  Again, if one is interested in the simple effect of x for a
particular level of w, then one might want to use, instead of a contrast
code, a dummy code in which 0 is assigned to the level of w of interest and
1 to the other level.
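
With R's factor machinery, the two codings might be set up along these lines
(a sketch, assuming x is already a numeric -1/1 code and w is stored as a
two-level factor in the hypothetical data frame dat):

  dat$w <- factor(dat$w)

  contrasts(dat$w) <- c(-1, 1)   # contrast code: with balance, x is the average simple effect
  fit_contrast <- lm(y ~ x * w, data = dat)

  contrasts(dat$w) <- c(0, 1)    # dummy code: x is the simple effect at the level coded 0
  fit_dummy <- lm(y ~ x * w, data = dat)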

When factors have multiple levels, it is best to use orthogonal contrast
codes to provide 1-df tests of the questions of interest.  Products of those
codes are easily interpreted as the simple difference for one contrast when
the other contrast is fixed at some level.  Multiple degree of freedom
omnibus tests are troublesome, and they are only of interest if we are
fixated on concepts like 'main effect.'
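
For instance, for a hypothetical three-level factor g in dat, one set of
orthogonal 1-df contrast codes might be:

  dat$g <- factor(dat$g)
  contrasts(dat$g) <- cbind(c(-1, 0, 1),   # level 3 vs level 1
                            c(-1, 2, -1))  # level 2 vs the average of levels 1 and 3
  fit_g <- lm(y ~ x * g, data = dat)
  summary(fit_g)   # 1-df tests for each contrast and for its product with x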

gary mcclelland (aka bernie leemon)
colorado




