[R] Combinatorial Optimisation

Detlef Steuer Detlef.Steuer at unibw-hamburg.de
Thu Oct 24 08:58:08 CEST 2002


Somehow I missed this thread.

If not mentioned before:
simulated annealing is implemented in optim().
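
For example (an untested sketch on a toy one-dimensional function; for a
combinatorial problem you would pass your own candidate-generating move
function as the gr argument, which method = "SANN" uses to propose points):

## minimise a bumpy function with optim()'s simulated annealing method
f <- function(x) (x - 3)^2 + 10 * sin(5 * x)   # toy loss with local minima
res <- optim(par = 0, fn = f, method = "SANN",
             control = list(maxit = 20000, temp = 10))
res$par   # approximate minimiser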

dst

On 23-Oct-2002 Phil Saunders wrote:
> Cheers Jim
> 
> Your response is exactly what I was hoping for, confirming my suspicions that
> (for once) R does not have a ready-made function for the purpose, and that
> what I actually need is one of the various combinatorial optimisation
> algorithms dotted around the web, such as simulated annealing, genetic
> algorithms, taboo search, memetic algorithms, neural networks, and so on.
> 
> I did manage to cobble together some R code for a taboo search, but I'm no
> expert in optimisation so I'm not convinced that it is working as efficiently
> as it should be, and would therefore be very interested in your simulated
> annealing code as I'll bet it works better than mine!
> 
> The SPSA algorithm is also very interesting, especially if it really is as
> efficient as the website claims, so if you have any code for this then again
> it would be very welcome, but otherwise I'll have a go at translating the
> MATLAB code in the overview document into R.
> 
> Thanks very much for your response - it's already been very helpful.
> 
> Phil
> 
> -----Original Message-----
> From: Jim_Garrett at bd.com [mailto:Jim_Garrett at bd.com] 
> Sent: 22 October 2002 14:45
> To: Phil Saunders
> Subject: Re: [R] Combinatorial Optimisation
> 
> 
> 
> Hello Phil,
> 
> There are a number of possibilities that are not hard to implement, though
> I don't know of anything out-of-the-box.  You will probably have to do some
> coding.
> 
> First, if you can implement a "steepest descent" algorithm in which,
> beginning from some candidate point, you move to a neighboring point if
> your criterion suggests it's better, simply run this algorithm many times
> beginning each run from a different randomly-selected starting point.
> Simple probability theory indicates that this will converge (in the number
> of runs) to the true solution.  In the "consider the neighbors" step, you
> could consider all neighbors and pick the best, or just pick one at random
> (which might actually do better--it will be less likely to get stuck in
> poor local optima).  In many problems this works very quickly; it's just
> not very sexy.
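> 
> In outline (an untested sketch; loss(), neighbours(), and random_start()
> are placeholder names for routines you would write for your problem):
> 
> multistart_descent <- function(loss, neighbours, random_start, n_starts = 50) {
>   best <- NULL; best_val <- Inf
>   for (i in seq_len(n_starts)) {
>     x <- random_start()                    # fresh random starting point
>     f_x <- loss(x)
>     repeat {
>       nb <- neighbours(x)                  # list of neighbouring points
>       vals <- sapply(nb, loss)
>       if (min(vals) >= f_x) break          # no better neighbour: local optimum
>       x <- nb[[which.min(vals)]]           # move to the best neighbour
>       f_x <- min(vals)
>     }
>     if (f_x < best_val) { best <- x; best_val <- f_x }
>   }
>   best
> }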
> 
> Second, you can generalize this a bit and try "simulated annealing" in
> which neighboring candidate points are randomly selected.  However, whereas
> in steepest descent the algorithm will never move to a point that's worse,
> with simulated annealing this happens with probability that decreases as
> the candidate point looks worse.  I.e., if the candidate point is just a
> little worse, the algorithm moves with moderate to high probability.  But
> if the point is a lot worse, the algorithm will make this move with small
> probability.  Furthermore, the weight placed on "badness" grows as the run
> proceeds (the "temperature" is lowered), so that eventually the algorithm
> will not move to even slightly worse points.  I have R code to do this,
> but it requires that you write
> some routines that are specific to your problem, mainly defining your loss
> function and the function that runs the random selection of neighbors (and
> hence implicitly defines what a neighbor is).  If you'd like to try this
> out, let me know.
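> 
> The core loop, in outline (untested; loss() and propose() are placeholder
> names -- propose() draws a random neighbour and so implicitly defines what
> a neighbour is):
> 
> sann <- function(x, loss, propose, n_iter = 10000, temp0 = 1) {
>   f_x <- loss(x)
>   for (k in seq_len(n_iter)) {
>     temp <- temp0 / log(k + 1)             # cooling schedule
>     y <- propose(x)                        # random neighbour
>     f_y <- loss(y)
>     ## always accept improvements; accept a worse point with probability
>     ## exp(-(f_y - f_x) / temp), which shrinks as the temperature drops
>     if (f_y < f_x || runif(1) < exp(-(f_y - f_x) / temp)) {
>       x <- y
>       f_x <- f_y
>     }
>   }
>   x
> }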
> 
> As with steepest descent, I would try several simulated annealing runs.
> But simulated annealing is more likely to get a good optimum in a small
> number of runs.
> 
> One nice thing about simulated annealing is that you can handle complex
> constraints by coding the point-selection routine to never select an
> impermissible point.  However, this could cause the optimization to be
> trapped.  Another nice feature is that (depending again on the
> point-selection routine) neighbors involve single-component changes, and
> if you can write
> your loss function so that it can be updated rather than recomputed from
> scratch, simulated annealing can be relatively fast.
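> 
> For instance, a point-selection routine for fixed-size subsets that never
> proposes an impermissible point might look like this (hypothetical sketch):
> 
> propose <- function(x, all_items) {
>   out <- x[sample(length(x), 1)]           # drop one current member ...
>   pool <- setdiff(all_items, x)
>   ins <- pool[sample(length(pool), 1)]     # ... and add one non-member
>   c(setdiff(x, out), ins)                  # the subset size is preserved
> }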
> 
> Third, there's an extremely efficient continuous optimizer that is not
> known as well as it ought to be, called Simultaneous Perturbation
> Stochastic Approximation (SPSA); there's a web site at
> www.jhuapl.edu/spsa/.  I've adapted it to subset selection (in the context
> of determining which regression inputs to use).  A potential downside is
> that it will randomly partition the components and apply the loss function
> to each half, so initially roughly half of the components would be used.
> Suppose, however, that the optimal solution involves 5 components and you
> have 95 others: the loss function will still be evaluated on subsets
> containing many of those 95.  In some situations this can
> be computationally difficult or mathematically impossible.  I have some
> code that I could probably generalize pretty well; again, you'll have to
> write the loss function.  Again, let me know if you'd like to give it a
> try.
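> 
> For the continuous case the basic SPSA iteration is short enough to sketch
> (untested; the gain-sequence exponents follow values commonly suggested in
> the SPSA literature, and loss() is again your own routine):
> 
> spsa <- function(theta, loss, n_iter = 1000,
>                  a = 0.1, cc = 0.1, A = 100, alpha = 0.602, gamma = 0.101) {
>   ## 'cc' rather than 'c' to avoid masking base R's c()
>   for (k in seq_len(n_iter)) {
>     ak <- a / (k + A)^alpha                # step-size gain sequence
>     ck <- cc / k^gamma                     # perturbation-size sequence
>     delta <- sample(c(-1, 1), length(theta), replace = TRUE)  # random +/-1
>     ## two loss evaluations give the whole gradient estimate at once
>     ghat <- (loss(theta + ck * delta) - loss(theta - ck * delta)) /
>             (2 * ck * delta)
>     theta <- theta - ak * ghat
>   }
>   theta
> }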
> 
> Finally, as you probably already know, everybody and their brother has some
> combinatorial optimization algorithm.  Some that come to mind are ant-colony
> searches, genetic algorithms, and taboo search.  They will all probably
> work if used correctly.  But I can't offer much help on them!
> 
> Again, I'm happy to offer code, though it's not going to be an
> out-of-the-box solution.  I can get you simulated annealing code faster
> than SPSA code.  Let me know if you're interested.
> 
> Good luck,
> 
> Jim Garrett
> Becton Dickinson Diagnostic Systems
> Baltimore, Maryland, USA
> 

"There is no way to peace, peace is the way." -- Ghandi

Detlef Steuer --- http://fawn.unibw-hamburg.de/steuer.html
***** Encrypted mail preferred *****