[Rd] significant digits (PR#9682)

simon.urbanek at r-project.org simon.urbanek at r-project.org
Tue Jun 3 22:40:16 CEST 2008


On Jun 3, 2008, at 2:48 PM, Duncan Murdoch wrote:

> On 6/3/2008 11:43 AM, Patrick Carr wrote:
>> On 6/3/08, Duncan Murdoch <murdoch at stats.uwo.ca> wrote:
>>>
>>> because signif(0.90, digits=2) == 0.9.  Those two objects are  
>>> identical.
>> My text above that is poorly worded. They're identical internally,
>> yes. But in terms of the number of significant digits, 0.9 and 0.90
>> are different. And that matters when the number is printed, say as an
>> annotation on a graph. Passing it through sprintf() or format() later
>> requires you to specify the number of digits after the decimal, which
>> is different from the number of significant digits, and requires case
>> testing for numbers of different orders of magnitude.
>> The original complainant (and I) expected this behavior from signif(),
>> not merely rounding. As I said before, I wrote my own workaround, so
>> this is somewhat academic, but I don't think we're alone.
>>> As far as I know, rounding is fine in Windows:
>>>
>>> > round(1:10 + 0.5)
>>> [1]  2  2  4  4  6  6  8  8 10 10
>>>
>> It might not be the rounding, then. (windows xp sp3)
>>   > signif(12345,digits=4)
>>   [1] 12340
>>   > signif(0.12345,digits=4)
>>   [1] 0.1235
>
> It's easy to make mistakes in this, but a little outside-of-R
> experimentation suggests those are the right answers.  The number
> 12345 is exactly representable, so it is exactly half-way between
> 12340 and 12350, so 12340 is the right answer by the unbiased
> round-to-even rule.  The number 0.12345 is not exactly representable,
> but (I think) it is represented by something slightly closer to
> 0.1235 than to 0.1234.  So it looks as though Windows gets it right.
>
>
>> OS X (10.5.2/intel) does not have that problem.
>
> Which would seem to imply OS X gets it wrong.

This has nothing to do with OS X; you get the same answer on pretty
much all other platforms (Intel/Linux, MIPS/IRIX, Sparc/Sun, ...).
Windows is the only one delivering the incorrect result here.


>  Both are supposed to be using the 64-bit floating point standard,
> so they should both give the same answer:

Should, yes, but Windows doesn't. In fact 10000.0 is exactly
representable, and so is 1234.5, which is the correct intermediate
result that all platforms except Windows get. I don't have a Windows
box handy, so I can't tell why, but if you go through fprec, this is
what you get on the platforms I tested (log10 may vary slightly, but
that's irrelevant here):

x = 0.123450000000000004174439
l10 = -0.908508905732048899217546, e10 = 4
pow10 = 10000.000000000000000000000000
x*pow10 = 1234.500000000000000000000000

Cheers,
Simon


>  but the actual arithmetic is being done by run-time libraries that
> are outside our control to a large extent, and it looks as though
> the one on the Mac is less accurate than the one on Windows.
>
>
> But (on both windows and OS X):
>>   > signif(12345.12345,digits=10)
>>   [1] 12345.12
>
> This is a different problem.  The number is correctly computed as
> 12345.12345 (or at least a representable number quite close to
> that), and then the default display rounds it some more.  Set
> options(digits=19) to see it in its full glory.
>
> Duncan Murdoch
>
> ______________________________________________
> R-devel at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>
>
