[R-SIG-Finance] DEoptim and guarantees

Enrico Schumann es at enricoschumann.net
Mon Aug 25 09:11:25 CEST 2014


On Sun, 24 Aug 2014, alexios ghalanos <alexios at 4dscape.com> writes:

> On 24/08/2014 19:24, Enrico Schumann wrote:
>> On Fri, 22 Aug 2014, alexios ghalanos <alexios at 4dscape.com> writes:
>> 
>>> 1. DEoptim is a nonlinear global optimization solver. Global
>>> optimization is usually reserved for hard-to-solve non-convex
>>> problems with many local minima. There is no guarantee of
>>> optimality, not even for convex problems, nor any idea of
>>> whether the answer you are getting is anything other than a
>>> local optimum.
>> 
>> There is no *mathematical* guarantee.  But that does not imply
>> that you cannot use Differential Evolution (the method that
>> DEoptim implements) with confidence.  Just because you cannot
>> prove something does not mean that it is not the case.
>> 
> Accepted, but I'm sure you are not saying that a convex problem which
> can be confidently and quickly solved by convex solvers should instead
> be solved by differential evolution or other global optimization solvers?

I can think of (and remember) situations in which using a
heuristic is fine, even if one could have used a more-specialised
or more-efficient algorithm.  The key advantage of a heuristic is
flexibility.  Even if a model is convex initially, it may quickly
become non-convex once a constraint is added.  At least in my
world, this happens all the time.
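
A minimal sketch (my own toy example, not part of the original
thread; the data are simulated and the penalty weight of 0.1 is
arbitrary): a long-only minimum-variance portfolio is convex, but
adding a cardinality constraint ("hold at most 5 of 20 assets")
makes it non-convex.  With Differential Evolution the new
constraint is just one more penalty term in the objective
function; nothing else needs to change.

    library("DEoptim")

    na <- 20                        ## number of assets
    R  <- matrix(rnorm(60 * na, sd = 0.02), ncol = na)
    Sigma <- crossprod(R) / 60      ## simulated covariance matrix

    of <- function(w) {
        w <- w / sum(w)                    ## scale to full investment
        pen <- max(sum(w > 0.01) - 5, 0)   ## at most 5 positions > 1%
        sqrt(c(w %*% Sigma %*% w)) + 0.1 * pen
    }

    res <- DEoptim(of, lower = rep(0, na), upper = rep(1, na),
                   control = DEoptim.control(NP = 100, itermax = 500,
                                             trace = FALSE))
    w <- res$optim$bestmem / sum(res$optim$bestmem)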

>> You do not need mathematical proofs to make meaningful statements
>> about whether or how well an optimisation method works.[*] For a
>> given model class (such as particular portfolio-selection
>> models), you can run experiments.  Experimental results are no
>> general proof, of course; but they are evidence of how a method
>> performs for that particular type of model, and typically that is
>> all that we care about when we apply a method.  In other words,
>> you may not be able to mathematically prove that a method works,
>> but you can have empirical evidence that it does.
>>
> Yes, but if the problem is convex, then there is one solution, and this
> can usually be attained quite quickly with specialized convex solvers.
>> 
>> In practical optimisation, it is not useful to think of "the
>> [optimal] solution" to a model, and "all the rest".  An
>> appropriate way to think of it is "no solution, some solution, a
>> better solution, an even better solution, ..."  and so on.  That
>> is, think of "iterative improvement", not of optimisation.
>> 
> If by "practical optimization" you mean problems which are non-convex or
> particularly difficult to solve (e.g. mixed integer), then perhaps. But
> for convex problems, there is only one solution (by definition).
> Whether that solution turns out to be sub-optimal out of sample
> is down to uncertainty, quality of inputs, etc.

There is one solution to the model.  But with a sensible model, a
solution close to the optimum should be better (in terms of its
objective-function value) than a randomly-chosen solution.  So
the quality of a solution should not be judged as simply optimal
or non-optimal; there are "shades of grey", even for convex
problems.  These shades of grey exist just as well in the
out-of-sample period.  This may be hard to square with
optimisation theory, but it makes sense for the application.
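
To illustrate (again only a sketch, reusing na, Sigma and the
DEoptim solution w from the example above): sample random
long-only portfolios and compare their volatilities with the
optimised one.  The random solutions are not simply "wrong"; they
span a whole range of quality.

    random_w <- function(na) {
        w <- runif(na)
        w / sum(w)                  ## random long-only portfolio
    }
    vols <- replicate(1000, {
        w0 <- random_w(na)
        sqrt(c(w0 %*% Sigma %*% w0))
    })
    summary(vols)                 ## a continuum of solution quality
    sqrt(c(w %*% Sigma %*% w))    ## the optimised solution, for comparison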

> However, I have seen a tendency to equate "practical" optimization, with
> a simply lazy consideration of an optimization problem without making
> the effort to see whether that problem can be put in a convex form.
> Plug-and-play, without making some effort to understand the problem is
> never, IMHO, a good way to do things.

By "practical optimisation" I mean optimisation in which the
results are actually used for financial decision-making, and
hence the results and their interpretation are the main priority.

Practical optimisation is conditional, of course, on having
trustworthy algorithms.  As I said, empirical evidence goes a
long way when it comes to creating trust and confidence.  But
here is my point: an optimisation algorithm is the tool, and once
we have established that the tool works sufficiently well (ie, is
trustworthy), we can stop.  Finding the optimal algorithm is not
the goal in "practical optimisation"; "good enough" is enough.
The actual goal is to find good models.
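
One simple way to build such trust (a sketch only, reusing the
objective function "of" from the first example): restart the
solver several times and look at the spread of the best
objective-function values.  If that spread is negligible for the
application, the tool is good enough.

    best <- replicate(10,
        DEoptim(of, lower = rep(0, na), upper = rep(1, na),
                control = DEoptim.control(NP = 100, itermax = 500,
                                          trace = FALSE))$optim$bestval)
    summary(best)   ## a small spread suggests the heuristic is reliable here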

In my experience, the thing that one often lacks in "practical
optimisation" is time.  I do not mean the time that is saved when
running a faster algorithm, but the time to rewrite code that
works (just to have a more efficient algorithm).

Understanding a model, then, has less to do with how one solves
the model than with analysing how results change when inputs
change, how results look for different datasets, or how specific
statistical properties of the data affect the results.  Or (to
come back to what I said before) how different in-sample
"shades of grey" relate to out-of-sample results. [For a sensible
model, I would want to see a positive correlation between the
in-sample quality of solutions and their out-of-sample quality.]
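
Such a check might look like this (purely illustrative; the
returns are simulated, and random_w and na come from the sketches
above): evaluate a population of candidate portfolios on two
disjoint data windows and correlate the two quality rankings.

    returns    <- matrix(rnorm(500 * na, sd = 0.01), ncol = na)
    in_sample  <- returns[1:250, ]
    out_sample <- returns[251:500, ]
    W <- replicate(500, random_w(na))     ## candidate solutions (columns)
    q_in  <- apply(W, 2, function(w) sd(c(in_sample  %*% w)))
    q_out <- apply(W, 2, function(w) sd(c(out_sample %*% w)))
    cor(q_in, q_out, method = "spearman") ## should be positive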


>> 
>> [*] If you need an example other than Differential Evolution for
>>     that, then look at Nelder--Mead.  You cannot prove anything,
>>     and yet the method "just works".
>>
> Regards,
>
> Alexios
>

Cheers,
-- 
Enrico Schumann
Lucerne, Switzerland
http://enricoschumann.net


