[Rd] Bug in optim for specific orders of magnitude
GILLIBERT, Andre
Andre.Gillibert at chu-rouen.fr
Tue Jan 3 12:56:37 CET 2023
J C Nash wrote:
> Extreme scaling quite often ruins optimization calculations. If you think available methods are capable of doing this, there's a bridge I can sell you in NYC.
Given that optim() seems to have problems with denormals but works correctly with values greater than 1e-308, I think it can hardly be considered a real bug.
There is no miracle solution, but there are a few tricks to work with extreme values.
What I call a "flog" is an alternative representation of a number x as the pair (log(abs(x)), sign(x)). It supports very large and very small numbers, though the precision of a flog is very poor for very large numbers. Functions for addition, subtraction, multiplication, etc., can easily be written, as in the sketch below.
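For instance, here is a minimal sketch in R of such a representation (the names flog, flog_mul and flog_add are mine, for illustration only); addition uses the usual log-sum-exp trick:

flog <- function(x) list(l = log(abs(x)), s = sign(x))  # store x as (log|x|, sign(x))

flog_mul <- function(a, b) list(l = a$l + b$l, s = a$s * b$s)  # multiply: add the logs

flog_add <- function(a, b) {
  if (a$l < b$l) { tmp <- a; a <- b; b <- tmp }  # let a carry the larger magnitude
  if (a$s == b$s || b$s == 0)
    list(l = a$l + log1p(exp(b$l - a$l)), s = a$s)   # same signs: magnitudes add
  else
    list(l = a$l + log1p(-exp(b$l - a$l)), s = a$s)  # opposite signs: they cancel
}

flog_value <- function(a) a$s * exp(a$l)  # back to a plain double (may over/underflow)

p <- flog_mul(flog(1e-200), flog(3e-150))
p$l  # about -804.8: the product 3e-350 underflows as a double, but its flog is fine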
For values that lie between 0 and 1 and can be extremely close to either endpoint, a cloglog transformation can be used.
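A sketch of that trick in R: y = log(-log(p)) maps (0, 1) onto the whole real line (the GLM convention is log(-log(1 - p)); either direction works here), so values unrepresentably close to 0 or 1 become ordinary doubles on the transformed scale:

cloglog     <- function(p) log(-log(p))   # (0, 1) -> real line
inv_cloglog <- function(y) exp(-exp(y))   # inverse transform

cloglog(1e-300)   # about 6.54: a near-zero p has a comfortable image
inv_cloglog(-40)  # prints 1: the true value 1 - 4.2e-18 is only representable on the y scale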
Another trick is to detect "hot points": values that can get extremely close to a hot point should be expressed as differences from the hot point, and the algorithm should be rewritten to work with those differences, which is not always easy to do. For instance, the function log1p can be seen as the modification of the log function for the hot point 1.
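A two-line R illustration with the hot point 1: near 1, the addition 1 + x rounds the small part away before log ever sees it, while log1p works directly with the difference from the hot point:

x <- 1e-17
log(1 + x)  # 0: 1 + x already rounded to exactly 1
log1p(x)    # 1e-17: correct to full double precision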
In my experience, the algorithm should be rewritten before denormals ever come into play, because denormals expand the range of floating-point values by only a small amount.
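This is easy to quantify in R: denormals only stretch the bottom of the range from about 2.2e-308 down to about 4.9e-324, and at the cost of progressively fewer bits of precision:

.Machine$double.xmin          # 2.225074e-308, smallest positive *normal* double
.Machine$double.xmin / 2^52   # 4.940656e-324, the smallest denormal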
--
Sincerely
André GILLIBERT