[R] precision problems in testing with Intel compilers
Samuelson, Frank*
FWS4 at CDRH.FDA.GOV
Thu Aug 19 18:11:00 CEST 2004
I compiled the 1.9.1 src.rpm with the standard GNU tools and it works.
When I compile the same 1.9.1 src.rpm with the Intel 8 C and Fortran
compilers, it bombs out during the testing phase:
comparing 'd-p-q-r-tests.Rout' to './d-p-q-r-tests.Rout.save' ...267c267
< df = 0.5[1] "Mean relative difference: 5.001647e-10"
---
> df = 0.5[1] TRUE
make[3]: *** [d-p-q-r-tests.Rout] Error 1
make[3]: Leaving directory `/usr/src/redhat/BUILD/R-1.9.1/tests'
make[2]: *** [test-Specific] Error 2
make[2]: Leaving directory `/usr/src/redhat/BUILD/R-1.9.1/tests'
make[1]: *** [test-all-basics] Error 1
make[1]: Leaving directory `/usr/src/redhat/BUILD/R-1.9.1/tests'
make: *** [check-all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.63044 (%build)
Looking at the differences between the failed output and the saved reference file, I get:
[fws@wolf tests] diff d-p-q-r-tests.Rout.save d-p-q-r-tests.Rout.fail
3c3
< Version 1.9.0 Patched (2004-04-19), ISBN 3-900051-00-3
---
> Version 1.9.1 (2004-06-21), ISBN 3-900051-00-3
281c281
< df = 0.5[1] TRUE
---
> df = 0.5[1] "Mean relative difference: 5.001647e-10"
935c935
< Time elapsed: 7.83 0.04 16.1 0 0
---
> Time elapsed: 2.49 0.01 2.55 0 0
Besides being about 3 times faster, the Intel build is failing on the following code:
for(df in c(0.1, 0.5, 1.5, 4.7, 10, 20, 50, 100)) {
    cat("df =", formatC(df, wid = 3))
    xx <- c(10^-(5:1), .9, 1.2, df + c(3, 7, 20, 30, 35, 38))
    pp <- pchisq(xx, df = df, ncp = 1)  # print(pp)
    dtol <- 1e-12 * (if(2 < df && df <= 50) 64 else if(df > 50) 20000 else 500)
    print(all.equal(xx, qchisq(pp, df = df, ncp = 1), tol = dtol))  # TRUE
    ## or print(mapply(rErr, xx, qchisq(pp, df = df, ncp = 1)), digits = 3)
}
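For what it's worth, the failing comparison can be reproduced interactively for
the df = 0.5 case; this is just a sketch of mine, not part of the test file:

df <- 0.5
xx <- c(10^-(5:1), .9, 1.2, df + c(3, 7, 20, 30, 35, 38))
pp <- pchisq(xx, df = df, ncp = 1)
qq <- qchisq(pp, df = df, ncp = 1)
all.equal(xx, qq, tol = 0)           # with tol = 0, any difference is reported
mean(abs(xx - qq)) / mean(abs(xx))   # roughly the "mean relative difference" it prints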
For df = 0.5, the dtol used by all.equal works out to 5e-10,
which the Intel-compiled build misses by about 1.6e-13.
This tolerance value seems a bit arbitrary.
The gcc-compiled version passes the same test with a 9.3e-11 error.
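Spelled out, the tolerance arithmetic is (the last line is only my recollection
of all.equal's usual default, shown for comparison):

1e-12 * c(500, 64, 20000)     # dtol for df <= 2, 2 < df <= 50, df > 50: 5e-10, 6.4e-11, 2e-08
5.001647e-10 - 5e-10          # the Intel build misses the df = 0.5 tolerance by ~1.6e-13
sqrt(.Machine$double.eps)     # ~1.5e-8, the default all.equal tolerance, much looser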
I am using the -mp option for the Intel compilers, which was recommended
on this mailing list previously and makes sense given the docs:
Floating Point Optimization Options
-mp   Maintain floating-point precision (disables some optimizations).
      The -mp option restricts optimization to maintain declared
      precision and to ensure that floating-point arithmetic conforms
      more closely to the ANSI and IEEE standards. For most programs,
      specifying this option adversely affects performance. If you are
      not sure whether your application needs this option, try
      compiling and running your program both with and without it to
      evaluate the effects on both performance and precision.
Has anyone else encountered this?
-Frank