[Rd] Floating point issue
sdirkse using gams.com
Tue Jul 12 03:37:15 CEST 2022
Taras makes several good points, especially this one:
On Mon, Jul 11, 2022 at 12:31 PM Taras Zakharko <taras.zakharko using uzh.ch> wrote:
> Available precision can affect numerical properties of algorithms but
> should not affect things like decimal to binary or vice versa conversion —
> it either produces an accurate enough number or it doesn’t.
This is a key point: there is no need to rely on platform-specific
properties of floating-point representations or operations to get correct
(*) decimal-to-binary or binary-to-decimal conversions. These tasks can be
done correctly in a way that uses only the guarantees provided by IEEE
floats or doubles, and some additional work using big integers (something
like GNU MP) in some cases. There are freely available libraries that do
the conversions in a platform-independent, correct, and efficient way. An even
easier solution in one direction is strtod(): decades ago it was not 100%
correct, but I haven't seen any flaws in recent versions of glibc or on
Windows. Certainly strtod() can be relied on to do a better job than
"multiply by 10 and add the next digit".
(*) What is correct? The easy direction is decimal to binary, staying in
the range of positive normalized numbers. There are a finite number of
rational numbers that are exactly representable as IEEE doubles. The
correct double representation of a decimal number (also a rational) is the
IEEE double that is closest. In the event of a tie, use the round-to-even
rule: pick the candidate whose last significand bit is zero.
> As a side note, I agree with Andre that relying on Intel’s extended
> precision in this day and age is not a good idea. This is a legacy feature
> from over forty years ago, x86 CPUs have been using SSE instructions for
> floating point computation for over a decade. The x87 instructions are slow
> and prevent compiler optimisations. Overall, I believe that R would benefit
> from dropping this legacy cruft. Not that there are too many places where
> it is used from what I see…
> — Taras Zakharko
> > On 11 Jul 2022, at 13:48, GILLIBERT, Andre <Andre.Gillibert using chu-rouen.fr> wrote:
> >> From my current experience, I dare say that the M1, with all its
> >> speed, is just a tad less reliable numerically than the Intel/AMD
> >> floating-point implementations.
> > 80-bit floating point (FP) numbers are great, but I think we cannot
> rely on them for the future.
> > I expect the market share of ARM CPUs to grow. It's hard to predict, but
> ARM may spread to desktop computers within a timeframe of 10 years, and I would
> not expect it to gain extended-precision FP.
> > Moreover, FP80 performance is not a priority for Intel. FP80 operations in
> recent Intel microprocessors are very slow when using special representations
> (NaN, NA, Inf, -Inf) or denormal numbers.
> > Therefore, it may be wise to update R algorithms to make them work well
> with 64-bit FP.
> > --
> > Sincerely
> > André GILLIBERT
> > ______________________________________________
> > R-devel using r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
Steven Dirkse, Ph.D.
GAMS Development Corp.