[BioC] lowess vs. loess
Gordon Smyth
smyth at wehi.edu.au
Wed Sep 3 11:58:49 MEST 2003
Local regression is a statistical method for robustly fitting smoothing
curves without prior assumptions about the shape or form of the curve.
'lowess' and 'loess' are two different functions in R which implement local
regression. As you've noted, 'loess' is the more recent program and it is
essentially equivalent to 'lowess' except that it has more features.
The R function 'loess' has four things which 'lowess' doesn't:
1. It accepts a formula specifying the model rather than the x and y vectors.
2. As you've noted, it can be used with more than one predictor.
3. It accepts prior weights.
4. It will estimate the "equivalent number of parameters" implied by the
fitted curve.
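To make this concrete, here is a small made-up illustration (the variables
x1, x2, w and fit are invented for the example) exercising all four features
at once:

> x1 <- rnorm(100); x2 <- rnorm(100)
> y <- x1 + x2^2 + rnorm(100)
> w <- runif(100)                               # illustrative prior weights
> fit <- loess(y ~ x1 + x2, weights=w)          # formula, two predictors, weights
> summary(fit)                                  # reports the "Equivalent Number of Parameters"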
On the other hand, 'loess' is much slower than 'lowess' and occasionally
fails when 'lowess' succeeds, so both programs are kept in R.
When there is only one predictor variable and no prior weights, 'lowess'
and 'loess' are in principle exactly equivalent. However, the default
settings of the two programs are very different. Here is an example in
which I force 'lowess' and 'loess' to do precisely the same numerical
calculation:
> y <- rnorm(1000)
> x <- 1:1000
> out.lowess <- lowess(x,y,f=0.3,iter=3,delta=0)
> out.lowess$y[1:5]
[1] 0.1816632 0.1799619 0.1782683 0.1765826 0.1749048
> out.loess <- loess(y~x, span=0.3, degree=1, family="symmetric",
+                    iterations=4, surface="direct")
> fitted(out.loess)[1:5]
[1] 0.1816632 0.1799619 0.1782683 0.1765826 0.1749048
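To check that the agreement holds for all 1000 fitted values, not just the
first five (note that 'lowess' returns its fit sorted by x, which here
matches the data order because x is already increasing):
> all.equal(out.lowess$y, as.vector(fitted(out.loess)))
[1] TRUE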
Things to note here:
1. 'f' in 'lowess' is the equivalent of the 'span' argument in 'loess'.
2. 'loess' does quadratic (degree=2) local regression by default instead of
linear (degree=1)
3. Unless you specify family="symmetric", loess will fit the curve by least
squares, i.e., won't do any robustness iterations at all.
4. 'lowess' and 'loess' count iterations differently: 'iter' in 'lowess'
means the number of robustness iterations, while 'iterations' in 'loess'
means the total number of iterations including the initial least squares
fit, i.e., iterations = iter + 1.
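Following on from notes 3 and 4, the pure least squares fits should also
agree: setting iter=0 in 'lowess' corresponds to the 'loess' default
family="gaussian" with no robustness iterations. A quick check, using the
same x and y as above:

> out0.lowess <- lowess(x, y, f=0.3, iter=0, delta=0)
> out0.loess <- loess(y~x, span=0.3, degree=1, surface="direct")
> all.equal(out0.lowess$y, as.vector(fitted(out0.loess)))   # should be TRUE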
The only aspect in which it is not possible to make 'loess' and 'lowess'
agree exactly is their treatment of large data sets. When x and y are very
long, say tens of thousands of observations, it is impractical and
unnecessary to do the local regression calculation exactly; rather, it is
usual to interpolate between observations which are very close together.
This interpolation is controlled by the 'delta' argument to 'lowess' and
the 'cell' and 'surface' arguments to 'loess'.
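For example, using the same x and y as above, you can see how much the
default interpolation changes the 'lowess' fit (the size of the discrepancy
will depend on the data):

> out.exact <- lowess(x, y, f=0.3, delta=0)   # exact: local fit at every x value
> out.approx <- lowess(x, y, f=0.3)           # default delta = 1% of the range of x
> max(abs(out.exact$y - out.approx$y))        # typically small relative to the noise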
When there are a large number of observations, 'lowess' groups together
those x-values which are closer than a certain distance apart. Although
grouping observations based on distance is in principle the best approach,
this is impractical for 'loess' because 'loess' is designed to accept many
x-variables. So 'loess' instead groups observations based on the number of
observations falling in a cell of the predictor space rather than on
distances. Because of this small difference, 'lowess' and 'loess' will
almost always give slightly different numerical results for large data
sets. The difference is not generally important.
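As an illustration (made-up data; the exact size of the difference will
vary), compare the two functions at the matched settings from before but
with their default large-data approximations left on:

> xl <- 1:50000
> yl <- rnorm(50000)
> fit.lo <- lowess(xl, yl, f=0.3, iter=3)    # default delta interpolation
> fit.loe <- loess(yl~xl, span=0.3, degree=1, family="symmetric",
+                  iterations=4)             # default surface="interpolate"
> max(abs(fit.lo$y - fitted(fit.loe)))       # small, but almost never exactly zero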
Gordon
At 12:28 AM 3/09/2003, nataraja at mit.edu wrote:
>Hello! I have noticed a distinction being made
>between lowess and loess for the normalization
>of microarray data, but I'm not quite clear about
>what the difference is between the two techniques.
>From William Cleveland's website, it seems that the
>major difference is that lowess uses only one
>predictor variable, whereas loess can be used with
>more than one predictor:
>http://cm.bell-labs.com/cm/ms/departments/sia/wsc/smoothsoft.html
>For intensity-based normalization (one predictor) wouldn't
>the two algorithms boil down to the same thing?
>Any insight would be greatly appreciated!!
>
>Thank you,
>Sripriya Natarajan
>Graduate Student, Center for Vascular Biology
>Dept. of Pathology, Brigham and Women's Hospital