# [R] linear regression: evaluating the result Q

Liaw, Andy andy_liaw at merck.com
Tue Dec 14 20:35:47 CET 2004

It looks just like the classical F-test for lack of fit, using an estimate of
'pure error' from replicates, doesn't it?  This should be in most applied
regression books.  The power (i.e., the probability of detecting lack of fit
when it exists) of such tests will depend on the data.

Andy
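
For reference, the classical lack-of-fit F-test described above can be run in R by comparing the straight-line fit against a saturated one-way model, whose residual sum of squares is the pure error from the replicates. This is a minimal sketch with made-up data; the variable names are illustrative, not from the original post:

```r
## Replicated design: 4 observations at each of 5 x values.
set.seed(1)
x <- rep(1:5, each = 4)
y <- 2 + 0.5 * x + rnorm(length(x), sd = 0.3)   # truly linear here

fit.lin  <- lm(y ~ x)           # straight-line model under test
fit.full <- lm(y ~ factor(x))   # saturated model: residual SS = pure error

## Lack-of-fit F-test: does the saturated model fit significantly better?
## A small p-value indicates lack of fit of the straight line.
anova(fit.lin, fit.full)
```

With truly linear data, as simulated here, the test should usually be non-significant; replacing the mean function with something curved (e.g. `y <- sin(x) + rnorm(...)`) should tend to produce a small p-value, subject to the power caveat above.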

> From: RenE J.V. Bertin
>
> Hello,
>
> I'd like to come back to this question, which I received some
> valuable help with a while ago.
>
> I just came across a paper applying a seemingly rather
> clever/elegant technique to assess the extent to which a
> linear fit is appropriate, given one's data. The authors
> apply an ANOVA to the residuals, and take a non-significant
> result as an indication that the fitted relationship is
> indeed (sufficiently) linear.
>
> But is this a clever/elegant technique, and is it good and robust?
> A rather pathological example where it fails (I think):
>
> ##> kk <- data.frame( s = ordered(factor(rep(1:25, each = 11))),
> ##>                   x = ordered(factor(rep(0:10, 25))),
> ##>                   y = sin(pi * jitter(rep(0:10, 25))) )
> ##> summary( aov(y ~ x + Error(s), data = kk) )
>
> Error: s
>           Df Sum Sq Mean Sq F value Pr(>F)
> Residuals 24  2.592   0.108
>
> Error: Within
>            Df Sum Sq Mean Sq F value Pr(>F)
> x          10  1.174   0.117   0.974  0.467
> Residuals 240 28.924   0.121
>
> (It doesn't fail when using a cosine instead of a sine, of
> course: sin(pi*k) is zero at every integer k, so the jittered
> responses are essentially noise around zero, whereas cos(pi*k)
> alternates between -1 and 1 and the effect of x is detected.)
>
> And if so, before I reinvent the wheel in implementing it
> myself: is anyone here aware of an existing implementation of
> a test that does just that?
>
> Thanks,
> RenE Bertin
>
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help