[R] accuracy of test cases
Uwe Ligges
ligges at statistik.uni-dortmund.de
Fri Apr 29 13:27:40 CEST 2005
Robin Hankin wrote:
>
> On Apr 29, 2005, at 11:51 am, Uwe Ligges wrote:
>
>> Robin Hankin wrote:
>
>
> [snip]
>
>>> The tolerance should be as small as possible, but if I make it too
>>> small, the test may fail when executed on a machine with a different
>>> architecture from mine. How do I deal with this?
>>
>>
>> See ?all.equal
>>
>> Uwe Ligges
>>
>
> Hi Uwe
>
> Thanks for this. But sometimes my tests fail (right at the edge of a
> very wibbly wobbly function's domain, for example) even with
> all.equal()'s default tolerance.
>
> Maybe I should only include tests where all.equal() passes
> "comfortably" on my machine, and have done with it. Yes, this is the
> way to think about it: I was carrying out tests where one might expect
> them to fail (entrapment?). My mistake was to focus on the magnitude
> of "tol" and to blithely include tests where all.equal() failed, or
> came close to failing.
>
> Unfortunately, all the interesting stuff happens at the boundary.
>
> I guess (thinking about it again) that in such circumstances, there is
> no generic answer.
[We might want to move to R-devel for further discussion...]
Yes, of course the cases at the boundary are the interesting ones.
Unfortunately, it is extremely hard (even if the underlying algorithms 
are known, and if it is possible at all) to calculate the "expected" 
inaccuracy once the algorithms become quite complex.
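For illustration (my own example, not from the thread): all.equal() 
compares the mean relative difference against its tolerance argument, 
which defaults to sqrt(.Machine$double.eps), roughly 1.5e-8, so a 
boundary case with a somewhat larger relative error needs an explicitly 
loosened tolerance:

```r
## all.equal() reports the mean relative difference when it exceeds
## 'tolerance'; it returns TRUE only when the values agree within it.
x <- 1.0000001
y <- 1.0
all.equal(x, y)                    # not TRUE: relative difference ~1e-7
all.equal(x, y, tolerance = 1e-6)  # TRUE: within the loosened tolerance
```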
It would also be possible to intentionally include a test that gives 
differences - though I don't know what Kurt et al. would think about 
that (if we are talking about a CRAN package).
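A sketch of what an explicit, documented tolerance might look like in a 
package's tests/ directory (the file name, the quantile 1.96, and the 
tolerance value are all hypothetical choices of mine):

```r
## Hypothetical tests/boundary.R: choose a tolerance that passes
## "comfortably" across architectures, rather than relying on
## all.equal()'s tight default, and fail loudly beyond it.
got <- integrate(dnorm, -Inf, 1.96)$value  # numerical result under test
ref <- pnorm(1.96)                         # reference value
stopifnot(isTRUE(all.equal(got, ref, tolerance = 1e-6)))
```

R CMD check runs every .R file under tests/, so a stopifnot() failure 
there shows up as a check error.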
Best,
Uwe
>
> best wishes
>
> rksh
>
>
> --
> Robin Hankin
> Uncertainty Analyst
> Southampton Oceanography Centre
> European Way, Southampton SO14 3ZH, UK
> tel 023-8059-7743