# [R] Optimal use of optim?

J C Nash
Wed Feb 14 02:40:46 CET 2024

This won't answer the questions, but it will point out that I wrote the Nelder-Mead,
BFGS (I call it Variable Metric) and CG methods in BASIC in 1974. They were re-coded
many times and then incorporated into R around 1995, as I recall (Brian Ripley did the
incorporation). There are some great 50-year-old cars, but ....

You might want to look at some of the more recent evolutions of the codes (all in R)
in optimx. Note that not all the optimx methods are "good"; many are present to
allow for comparison. Rvmmin and Rcgmin have proved quite reliable; while rarely
the outright winners, they tend to come in close to the front. Both do, however,
need gradients. Note that the derivative of abs(x) can be defined as 0 at x = 0 (even
if that offends purists). If you must use numerical derivatives, avoid forward or
backward approximations in favour of central or higher-order ones.
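[Editor's note: a minimal sketch of the last point, not part of the original post. It compares forward and central difference approximations of the derivative of a smooth function; the test function x^3, the step size, and the evaluation point are arbitrary choices for illustration.]

```r
# Approximate f'(1) for f(x) = x^3, where the true derivative is 3.
f  <- function(x) x^3
x0 <- 1
h  <- 1e-5

# Forward difference: truncation error is O(h)
fwd <- (f(x0 + h) - f(x0)) / h
# Central difference: truncation error is O(h^2)
ctr <- (f(x0 + h) - f(x0 - h)) / (2 * h)

abs(fwd - 3)   # about 3e-5, i.e. roughly h
abs(ctr - 3)   # on the order of 1e-10, i.e. roughly h^2
```

With the same number of extra function evaluations per gradient step as a backward difference, the central formula buys several more digits of accuracy, which matters when the optimizer's convergence tests (e.g. pgtol in L-BFGS-B) are tight.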

JN

On 2024-02-13 17:16, Leo Mada via R-help wrote:
> Dear R-Users,
>
> I am interested in the optimal strategy for optim:
>
> Q: How to formulate the optimization problem?
> Q1: Are there benefits for abs(f(x)) vs (f(x))^2?
> Q2: Are there any limitations for using abs(...)?
>
> Regarding Q1: my feeling is that the gradients should be more robust with the abs(...) formulation, but I have not tested it.
>
> Regarding Q2: Obviously, abs(f(x)) is not differentiable at its zeros. This is a limitation from a mathematical point of view, but the gradient should still be computable using a finite-difference method. Are there any other limitations?
>
> The question is based on the following example:
> solving gamma(x) = y for some given y.
>
> Function multiroot in package rootSolve fails when y < 1. The version using optim fares much better. However, I had to tweak the parameters somewhat in order to get higher precision. This made me curious about the optimal strategy; unfortunately, I do not have much experience with optimization.
>
> The example is also available on GitHub:
> https://github.com/discoleo/R/blob/master/Math/Integrals.Gamma.Inv.R
>
> Sincerely,
>
> Leonard
>
> ### Example:
> gamma.invN = function(x, x0, lim, ...,
>             ndeps = 1E-5, rtol = 1E-9, gtol = 0.0001, digits = 10) {
>       v = x;
>       if(length(lim) == 1) lim = c(lim - 1 + gtol, lim - gtol);
>       cntr = c(list(...), ndeps = ndeps, pgtol = rtol);
>       # abs(): should provide greater precision than ()^2 when computing the gradients;
>       # param ndeps: needed to improve accuracy;
> 	r = optim(x0, \(x) { abs(gamma(x) - v); }, lower = lim[1], upper = lim[2],
> 		method = "L-BFGS-B", control = cntr);
> 	if( ! is.null(digits)) print(r$par, digits = digits);
> 	return(r$par);
> }
>
> ### Example
> Euler   = 0.57721566490153286060651209008240243079;
> x = gamma.invN(Euler, -3.5, lim = -3)
> gamma(x) - Euler
> x = gamma.invN(Euler, -3.9, lim = -3)
> gamma(x) - Euler
>
>
> ______________________________________________
> R-help at r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help