[Rd] Buglet in optim() SANN

Ravi Varadhan RVaradhan at jhmi.edu
Mon Oct 26 15:05:56 CET 2009

Dear John,

First, let me apologize for not taking this off-list: I feel that the
issues you have raised are very important for optimizeRs to think about.

I completely agree with all your points.  Even though Brian Ripley is correct
in pointing out that for SANN `maxit' is the only stopping criterion, it is
still misleading when the convergence indicator says `0'.  By convention, a
zero indicator is taken to mean successful convergence to a local optimum,
i.e. to a point where the KKT conditions are satisfied.  If there is no
reasonable way to terminate SANN (or other stochastic search algorithms), it
might be more appropriate to indicate convergence as `NA', rather than `0'. 
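By way of illustration, such a remapping could be done in a thin wrapper
around optim().  This is only a sketch; the wrapper name `optim_sann_na`
is made up for this example and is not part of optim() or optimx:

```r
## Sketch: wrap optim() so that SANN's "convergence" value of 0 is
## reported as NA, since `maxit' is its only stopping rule and a 0
## cannot mean verified convergence to a local optimum.
optim_sann_na <- function(par, fn, ..., method = "Nelder-Mead",
                          control = list()) {
  res <- optim(par, fn, ..., method = method, control = control)
  if (method == "SANN") {
    res$convergence <- NA  # termination, not verified convergence
  }
  res
}
```

For other methods the wrapper is transparent, so existing code that
tests `convergence == 0` keeps working unless SANN is requested.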

Of course, for non-smooth and noisy functions we do not have the KKT
conditions to guide us in evaluating whether we have a good solution or not.
In these situations, the quality of the solution needs to be evaluated based
on problem-specific knowledge.  Hopefully, we can address this issue to some
extent in the future versions of "optimx". For example, if a user tries to
use SANN for a smooth objective function, then she should be forewarned.  



Ravi Varadhan, Ph.D.

Assistant Professor, The Center on Aging and Health

Division of Geriatric Medicine and Gerontology 

Johns Hopkins University

Ph: (410) 502-2619

Fax: (410) 614-9625

Email: rvaradhan at jhmi.edu




-----Original Message-----
From: r-devel-bounces at r-project.org [mailto:r-devel-bounces at r-project.org]
On Behalf Of Prof. John C Nash
Sent: Sunday, October 25, 2009 6:34 PM
To: Prof Brian Ripley
Cc: r-devel at r-project.org
Subject: Re: [Rd] Buglet in optim() SANN

Indeed Brian is correct about the functioning of SANN and the R
documentation. I'd misread the "maxit" warning. Things can stay as they
are for now.

The rest of this msg is for information and an invitation to off-list
discussion.

I realize my posting opens up the can of worms about what "convergence"
means. As someone who has occasionally published discussions on
convergence versus termination, I'd certainly prefer to set the
'convergence' flag to 1 for SANN, since we have only a termination at
the maximum number of function evaluations and not necessarily a result
that can be presumed to be "optim"al. Or perhaps add a note to the
description of the 'convergence' flag warning of the potential
misinterpretation with SANN, where the user must check externally
whether the result is likely to be usable as an optimum.
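One plausible external check (a sketch suggested here, not something
from this thread) is to polish the SANN result with a derivative-based
method and see whether the objective drops appreciably; if it does,
SANN had not reached a local optimum despite reporting 0:

```r
## Generalized Rosenbrock test function, as in the example below.
f <- function(x) {
  n <- length(x)
  1.0 + sum(100 * (x[1:(n - 1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
}
x0 <- rep(pi, 4)
set.seed(1)  # SANN is stochastic; fix the seed for reproducibility
sann <- optim(x0, f, method = "SANN", control = list(maxit = 2000))
## Polish from SANN's final point with BFGS (numerical gradient).
polish <- optim(sann$par, f, method = "BFGS")
## A large gap between sann$value and polish$value indicates that
## SANN's convergence code of 0 did not mean a local optimum.
```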

It may be better to call the non-zero results for "convergence" a
"termination indicator" rather than an "error code". Some related
packages like ucminf give more than one non-zero indicator for results
that are generally usable as optima. They are informational rather than
errors. Writing our optimx wrapper for a number of methods has forced us
to think about how such information is returned and reported through a
flag like "convergence". There are several choices and plenty of room
for confusion.

Right now a few of us are working on improvements for optimization, but
the first goal is to get things working OK for smooth, precisely defined
functions. However, we have been looking at methods like SANN for
multimodal and noisy (i.e., imprecisely defined) functions. For those
problems, knowing when you have an acceptable or usable result is never
easy.
Comments and exchanges welcome -- off-list of course.

Cheers, JN

Prof Brian Ripley wrote:
> As the posting guide says, please read the help carefully before
> posting.  It does say:
>      'maxit' The maximum number of iterations. Defaults to '100' for
>           the derivative-based methods, and '500' for '"Nelder-Mead"'.
>           For '"SANN"' 'maxit' gives the total number of function
>           evaluations. There is no other stopping criterion.
>                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>           Defaults to '10000'.
> so this is indicating 'successful convergence' as documented.
> On Tue, 20 Oct 2009, Prof. John C Nash wrote:
>> I think SANN method in optim() is failing to report that it has not
>> converged. Here is an example
>> genrose.f <- function(x, gs = NULL) { # objective function
>>   ## One generalization of the Rosenbrock banana valley function
>>   ## (n parameters)
>>   n <- length(x)
>>   if (is.null(gs)) gs <- 100.0
>>   fval <- 1.0 + sum(gs * (x[1:(n - 1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
>>   return(fval)
>> }
>> xx <- rep(pi, 10)
>> test <- optim(xx, genrose.f, method = "SANN",
>>               control = list(maxit = 1000, trace = 1))
>> print(test)
>> Output is:
>>> source("tsann.R")
>> sann objective function values
>> initial       value 40781.805639
>> iter      999 value 29.969529
>> final         value 29.969529
>> sann stopped after 999 iterations
>> $par
>> [1] 1.0135254 0.9886862 1.1348609 1.0798927 1.0327997 1.1087146 1.1642130
>> [8] 1.3038754 1.8628391 3.7569285
>> $value
>> [1] 29.96953
>> $counts
>> function gradient
>>    1000       NA
>> $convergence
>> [1] 0
> It _should_ be 0 according to the help page.
>> $message
>> NULL
>> Not terribly important, but maybe fixable.
>> Cheers,
>> John Nash
>> ______________________________________________
>> R-devel at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
