[Rd] non-differentiable evaluation points in nlminb(), follow-up of PR#15052
Spencer Graves
spencer.graves at prodsyse.com
Fri Sep 28 10:53:41 CEST 2012
On 9/26/2012 2:13 AM, Sebastian Meyer wrote:
> This is a follow-up question for PR#15052
> <http://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15052>
>
> There is another thing I would like to discuss with respect to how
> nlminb() should proceed with NAs. The question is: what is a good way
> to deal with an evaluation point of the objective function where the
> gradient and the Hessian are not well defined?
>
> If the gradient and the Hessian both return NA values (assuming R <
> r60789, e.g. R 2.15.1), and likewise if both return +Inf values, nlminb
> steps to an NA parameter vector.
> Here is a really artificial one-dimensional example for demonstration:
>
> f <- function (x) {
>   cat("evaluating f(", x, ")\n")
>   if (is.na(x)) { Inf  # to prevent an infinite loop for R < r60789
>   } else abs(x)
> }
> gr <- function (x) if (abs(x) < 1e-5) Inf else sign(x)
> hess <- function (x) matrix(if (abs(x) < 1e-5) Inf else 0, 1L, 1L)
> trace(gr)
> trace(hess)
> nlminb(5, f, gr, hess, control=list(eval.max=30, trace=1))
>
> Thus, if nlminb reaches a point where the derivatives are not defined,
> the optimization is effectively lost. Is there a way to handle such
> points in nlminb? Otherwise, the objective function is forced to call
> stop() as soon as it receives NA parameters, because nlminb will not
> recover regardless of what the objective function subsequently returns.
> As far as I can assess the situation, nlminb is currently not capable
> of optimizing objective functions with non-differentiable points.
Are you familiar with the CRAN Task View on Optimization and
Mathematical Programming? I ask because, as far as I know, "nlminb" is
one of the oldest nonlinear optimizers in R. If I understand the
history, it was ported from S-Plus after at least one member of the
R Core team decided it was better for a certain task than "optim", and
it seemed politically too difficult to enhance "optim". Other nonlinear
optimizers have been developed more recently and are available in
specialized packages.
In my opinion, a function like "nlminb" should never stop merely
because it gets NA for a derivative at some point -- unless that point
honestly happens to be a local optimum. If a function like "nlminb"
computes an NA derivative at a point that is not a local optimum, it
could fall back to a derivative-free optimizer and then try to compute
the derivative again at whatever local optimum that finds.
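     For illustration only, here is a minimal sketch of that kind of
fallback written as a user-level wrapper rather than as a change to
nlminb itself; the name nlminbWithFallback and the choice of
optim(method = "Nelder-Mead") for the derivative-free step are my own,
not anything in base R.

## Sketch of a user-level fallback (not part of base R): run nlminb, and
## if it ends at non-finite parameters or at a point where the supplied
## gradient is not finite, restart from the original start values with
## the derivative-free Nelder-Mead method in optim().
nlminbWithFallback <- function(start, objective, gradient = NULL, ...) {
  fit <- nlminb(start, objective, gradient = gradient, ...)
  bad <- any(!is.finite(fit$par)) ||
    (!is.null(gradient) && any(!is.finite(gradient(fit$par))))
  if (bad) {
    fit2 <- optim(start, objective, method = "Nelder-Mead")
    fit <- list(par = fit2$par, objective = fit2$value,
                convergence = fit2$convergence,
                message = "restarted with derivative-free Nelder-Mead")
  }
  fit
}
## e.g. nlminbWithFallback(5, f, gradient = gr, hessian = hess)
## (optim() will warn that Nelder-Mead is unreliable in one dimension;
##  for a 1-d problem, optimize() would be the natural fallback instead)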
Also, any general optimizer that uses analytic derivatives should
check that the analytic derivatives it is given are reasonably close to
numeric derivatives. This can easily be done with the compareDerivatives
function in the maxLik package.
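     As a rough illustration with Sebastian's example (evaluated away
from the kink at zero), a hand-rolled check using numDeriv::grad might
look like the following; with maxLik installed, compareDerivatives(f,
gr, t0 = 2) should print a similar comparison, but see
?compareDerivatives for the exact interface.

## Compare the analytic gradient gr() with a numeric gradient of f()
## at a differentiable test point (x0 = 2, away from the kink at 0).
library(numDeriv)
x0 <- 2
c(analytic = gr(x0), numeric = grad(f, x0))
## both values should be close to 1 here; a large discrepancy would
## point to a bug in the analytic gradient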
Hope this helps.
Spencer
> Best regards,
> Sebastian Meyer
--
Spencer Graves, PE, PhD
President and Chief Technology Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph: 408-655-4567
web: www.structuremonitoring.com