[R] Constrained Nonlinear Optimization - lack of convergence

mcgrete mcgrete.drs at gmail.com
Wed May 18 06:18:44 CEST 2011


Hello,

I am attempting to utilize the 'alabama' package to solve a constrained
nonlinear optimization problem.

The problem has both equality and inequality constraints (heq and hin
functions are used).  All constraints are smooth, i.e. I can differentiate
easily to produce heq.jac and hin.jac functions. 

My initial solution is feasible; I am attempting to maximize a function,
phi.  As such, I create an objective or cost function '-phi', and the
gradient of the cost function '-dphi/dz'.  

I will gladly provide the detailed code, but perhaps an overview of the
problem may be sufficient.

0.  I installed 'alabama' and was successful at solving the example problem.

1.  My constraints are:
z=0 (for several elements in the vector z)
z>=0 (for remaining elements in vector z)
Z - sum(z) >=0, where Z is a constant real number.
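
In auglag() terms, these constraints might be coded roughly as below; 'fix0' (the set of indices pinned at zero) and the value of Z are placeholders I am assuming for illustration:

```r
## Sketch of the constraint functions for auglag(); fix0 and Z are guesses.
Z    <- 1000
fix0 <- c(1, 2)                             # example: z[1] = z[2] = 0
heq  <- function(z) z[fix0]                 # equality: z[i] == 0 for i in fix0
hin  <- function(z) c(z[-fix0], Z - sum(z)) # z[i] >= 0 elsewhere; Z - sum(z) >= 0
heq.jac <- function(z) {
  J <- matrix(0, length(fix0), length(z))
  J[cbind(seq_along(fix0), fix0)] <- 1      # d z[fix0] / d z
  J
}
hin.jac <- function(z) {
  J <- diag(length(z))[-fix0, , drop = FALSE]  # rows for z[i] >= 0
  rbind(J, rep(-1, length(z)))                 # row for Z - sum(z) >= 0
}
```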

2.  My cost function to maximize is phi (equivalently, minimize -phi):
     phi == SUM[ p[i]*LN{f[i]} ], where the sum runs over i = 1:length(z),
     and where f[i] == {(1-r)*SUM[z+s] - z[i] - s[i]}*z[i]/(z[i]+s[i]) + Z - sum(z),
     and where s, p are constant vectors of length(z), with all elements of p & s > 0,
     and where (1-r) is a scalar > 0.
     Note: under the constraints listed above, f[i] should always be >= 0.
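
For concreteness, the objective can be coded in R along these lines, using the example constants from 2a below (the function names are mine):

```r
## Sketch of the cost function; s, p, Z, r take the example values from 2a.
s <- c(200, 500, 400, 300, 100); p <- c(0.1, 0.2, 0.4, 0.1, 0.2)
Z <- 1000; r <- 0.2                        # so that (1 - r) = 0.8
f_vec <- function(z) ((1 - r) * sum(z + s) - z - s) * z / (z + s) + Z - sum(z)
fn    <- function(z) -sum(p * log(f_vec(z)))  # auglag() minimizes, so -phi
```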
 

3.  I can readily calculate the gradient of phi.  Since each f[j] depends on
every element of z (through SUM[z+s] and sum(z)), the i-th component is
dphi/dz[i] == SUM over j of p[j]*(df[j]/dz[i])/f[j].
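
In case it helps cross-checking, here is a hand-derived gradient of -phi for the example constants from 2a, validated against a numerical gradient from the CRAN package numDeriv; the derivation is my own, so it should be verified before relying on it:

```r
## Hand-derived gradient of -phi, checked numerically at an interior point.
library(numDeriv)
s <- c(200, 500, 400, 300, 100); p <- c(0.1, 0.2, 0.4, 0.1, 0.2)
Z <- 1000; r <- 0.2
f_vec <- function(z) ((1 - r) * sum(z + s) - z - s) * z / (z + s) + Z - sum(z)
fn    <- function(z) -sum(p * log(f_vec(z)))
gr <- function(z) {
  h <- z / (z + s)                         # h[i] = z[i]/(z[i]+s[i])
  g <- (1 - r) * sum(z + s) - z - s        # g[i], the bracketed factor
  w <- p / f_vec(z)                        # w[i] = p[i]/f[i]
  common <- sum(w * ((1 - r) * h - 1))     # df[j]/dz[i] part shared by all i
  -(common + w * (-h + g * s / (z + s)^2)) # add the j == i term; negate for -phi
}
z0 <- c(10, 10, 200, 10, 10)               # a strictly interior test point
max(abs(gr(z0) - grad(fn, z0)))            # small if the derivation is right
```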

4.  I created functions for inequality and equality constraints and their
jacobians, the cost function, and the grad of cost function.
 
5.  I utilize the alabama package and the 'auglag' function.  As a first
attempt, I used only a single inequality constraint for z>=0, with all other z
constraints as z=0, plus the Z - sum(z) >= 0 inequality constraint.  I used
default settings, except for trying various inner 'method' options, e.g.
BFGS, Nelder-Mead.
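
For reference, the call shape I would expect looks like this; par0 is a feasible starting point and the other arguments are the user-supplied functions from steps 2-4:

```r
## Typical auglag() call; par0, fn, gr, hin, hin.jac, heq, heq.jac are
## the user-supplied pieces described in steps 2-4 above.
library(alabama)
ans <- auglag(par = par0, fn = fn, gr = gr,
              hin = hin, hin.jac = hin.jac,
              heq = heq, heq.jac = heq.jac,
              control.outer = list(method = "BFGS"))
ans$convergence          # 0 indicates the outer loop converged
c(ans$kkt1, ans$kkt2)    # first- and second-order KKT checks
```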

Review of the alabama package source code leads me to believe that the code
automatically builds the Lagrangian of the cost function augmented with
Lagrange multipliers and penalty terms, along with the gradient of that
augmented Lagrangian.  Hence, I assume (perhaps incorrectly) that auglag is
effectively forming the dual problem and attempts to solve it as a sequence
of unconstrained subproblems by calling 'optim'.

MY ISSUE:
The code often runs successfully (converges), sometimes satisfying both
KKT1 and KKT2 (TRUE), sometimes only one of the two.  Sometimes it fails to
converge at all.  When it does converge, I do not obtain the same optimum
from different initial conditions.  When it fails to converge, I often end
up with a NaN, generated when attempting to take log(f[i]), meaning that
f[i]<0; I interpret and observe that some or all of the elements of the
vector z are less than zero, despite my constraints.

QUESTION
Other than the obvious - review my code for typos, etc., which I believe
have been resolved...
1.  Can the alabama procedure take a solution path that does not satisfy the
constraints?  If not, then I must have an error in my code despite my
attempts to eliminate one, and I must review yet again.

2.  If the path may not satisfy all of the constraints (perhaps due to
steep gradients), how can I avoid this situation?
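
For what it is worth, one defensive pattern I have seen suggested (not part of alabama itself) is to keep the cost function finite even at infeasible iterates, so that optim()'s line search can back off instead of producing NaN; a sketch, reusing the example constants from 2a:

```r
## Guarded cost: returns a large finite penalty instead of NaN when an
## iterate makes some f[i] <= 0.  Constants are the 2a example values.
s <- c(200, 500, 400, 300, 100); p <- c(0.1, 0.2, 0.4, 0.1, 0.2)
Z <- 1000; r <- 0.2
f_vec <- function(z) ((1 - r) * sum(z + s) - z - s) * z / (z + s) + Z - sum(z)
fn_safe <- function(z) {
  fz <- f_vec(z)
  if (any(!is.finite(fz)) || any(fz <= 0)) return(1e10)  # arbitrary big value
  -sum(p * log(fz))
}
```

This is a workaround for the NaN symptom, not a cure for the underlying infeasibility.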

2a.  I presume that some of the issues may stem from differences in scaling,
e.g. say s=[200,500,400,300,100], p=[0.1,0.2,0.4,0.1,0.2], Z=1000,
(1-r)=0.8, and an initial starting point of z=[0,0,200,0,0].  However, I am
not experienced at scaling these variables or the constraints.  Any
suggestions?
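
As one concrete (unverified) idea, the whole problem can be rescaled so the variables are O(1), e.g. by optimizing u = z/Z; here fn stands for the cost function of step 2:

```r
## Rescaled problem in u = z / Z: variables, budget, and start are all O(1).
## fn is assumed to be the original cost function from step 2.
Z  <- 1000
fn_u  <- function(u) fn(Z * u)         # objective in the scaled variable
hin_u <- function(u) c(u, 1 - sum(u))  # z >= 0 becomes u >= 0; sum(u) <= 1
u0 <- c(0, 0, 200, 0, 0) / Z           # scaled starting point (sums to 0.2)
```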

2b.  I am not an expert in optimization, but I have some background in
math/engineering.  I suspect and hope that something as simple as relaxing
the constraints on z=0 to z=delta, where delta is a small positive number,
may help - any comments?  I admit I am lazy for not trying this, as I just
thought of it while writing this post.
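
The delta idea, applied to the inequalities, would look roughly like this; delta is a tuning guess, and fix0 (the indices pinned at zero) and Z are placeholders:

```r
## Tightened inequalities: keep the free z[i] and the slack Z - sum(z) at
## least delta away from zero, so log(f) stays defined along the path.
Z     <- 1000
fix0  <- c(1, 2)                   # placeholder: indices with z[i] = 0
delta <- 1e-6                      # small positive margin (a tuning guess)
hin_delta <- function(z) c(z[-fix0] - delta, Z - sum(z) - delta)
```

Note the starting point must then also satisfy the tightened constraints.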

2c.  I am dangerously knowledgeable that penalty functions exist, but I am
uncertain how to utilize them and how to select the term 'sig0'.
Suggestions?
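
If I read ?auglag correctly, sig0 is the initial penalty parameter in control.outer; raising it weights constraint violation more heavily from the first outer iteration, at the cost of a harder inner problem.  The value below is only a starting point to experiment with:

```r
## Same call as before, with a larger initial penalty parameter.
## sig0 = 1e4 is an experiment, not a recommendation; par0, fn, gr,
## hin, heq are the pieces from steps 2-5.
library(alabama)
ans <- auglag(par = par0, fn = fn, gr = gr, hin = hin, heq = heq,
              control.outer = list(sig0 = 1e4))
```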

2d.  Thinking about it more, I have not rigorously attempted to modify the
convergence tolerance; I suspect my issue is the solution path not remaining
within the constraints rather than the convergence itself.  Am I incorrect
in thinking so?

I would appreciate any assistance that someone can provide.  Again, if the
code is required, I will share it, but I hope that I have defined my problem
well enough above to avoid anyone having to sort through / debug my own
code.

Much appreciated,
Tim




