[R-SIG-Finance] Sharpe's algorithm for portfolio improvement

Brian G. Peterson brian at braverock.com
Mon Aug 8 01:41:12 CEST 2011


[comments in-line below]

Regards,

   - Brian

On Sat, 2011-08-06 at 18:19 -0400, John P. Burkett wrote:
> Brian,
> 
> Thank you again for your script and thoughtful comments.  After running 
> the script and trying some variations, I have a few questions about the 
> optimize.portfolio function and its output.
> 
> When optimize_method='random', I suppose a user assesses convergence (or 
> at least the practical value of the output) by increasing search_size 
> and observing what if anything happens to the objective function value 
> and the weights.  Is that correct?

Yes, this is correct.

Random portfolios, with any set of objectives and constraints, let you
strictly bound the amount of computing power you apply to a given
problem, tinker with your objective, etc., before devoting more
computing time to getting to the 'true' global optimum.

One major use that I have for random portfolio evaluations is to show me
the feasible space.  If trace=TRUE, there should be values in the
objective value output that can be sent to plot() methods. 

In fact, a set of chart.* functions is dispatched by plot():

plot -> plot.optimize.portfolio -> plot.optimize.portfolio.random ->
charts.RP -> chart.Scatter.RP & chart.Weights

All this is covered in the documentation, at least peripherally, but the
various chart.* functions can be called directly as well.
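
For example, a minimal sketch, assuming R is an xts object of asset
returns and constr is a constraint/objective specification you have
already built (both names are placeholders):

library(PortfolioAnalytics)

opt_rp <- optimize.portfolio(R=R, constraints=constr,
            optimize_method='random', search_size=2000, trace=TRUE)
plot(opt_rp)           # dispatches through plot.optimize.portfolio.random
chart.Weights(opt_rp)  # a chart.* function called directly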

Another related function you may wish to examine is extractStats().
This will take the object output by optimize.portfolio() and pull out
the weights and objective measures in a tabular (data.frame) format.
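
A rough usage sketch, continuing with the hypothetical opt_rp object
from above:

stats <- extractStats(opt_rp)  # one row per portfolio tested
head(stats)                    # weights plus objective measures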

> When optimize_method='DEoptim', the user may have some opportunities to 
> adjust DEoptim's default choices for the arguments of the package's 
> functions.  My original frustration with DEoptim was that the default 
> choices were not producing convergence in reasonable time and I was 
> uncertain about what alternative choices might hasten convergence. 
> PortfolioAnalytics (PA) provides a nice wrapper for DEoptim but still 
> leaves me wondering about function arguments and their effects on 
> convergence.  In particular, the following questions occur to me:
> 1. What choices does PA make by default for the DEoptim arguments called 
> F, itermax, VTR, reltol, steptol, NP, and strategy?  Can a user of PA 
> override the defaults for these arguments?

Internally, we make a call to DEoptim.control.   The user may pass any
parameters in dots (...) to the optimize.portfolio command that they
wish to pass on to the optimization engine.  For DEoptim, we now use by
default:

if(!hasArg(strategy)) DEcformals$strategy=6 
# use DE/current-to-p-best/1

if(!hasArg(reltol)) DEcformals$reltol=.000001 
# 1/1000 of 1% change in objective is significant

if(!hasArg(steptol)) DEcformals$steptol=round(N*1.5) 
# number of assets times 1.5 tries to improve

if(!hasArg(c)) DEcformals$c=.4 
# JADE mutation parameter, this could maybe use some adjustment

I've found these parameters to speed convergence by about an order of
magnitude, relative to the DEoptim defaults, on portfolio problems of
100-300+ assets, though as the code comment indicates, I think the 'c'
parameter could still use some adjustment.

I'll also note that we're using JADE adaptive DE by default in
PortfolioAnalytics because I have found it to converge so much faster
than the classical DE algorithms.  Much of the work on adding JADE to
DEoptim (Joshua Ulrich wrote all that code) was motivated by interest in
using it for portfolio optimization.  DEoptim can apply the parameter
self-adaptive properties of JADE to any strategy= setting, not just
strategy 6 (DE/current-to-p-best/1), though I believe that to be the
best strategy for these types of problems.
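
As a hedged sketch of overriding those defaults (again with the
placeholder R and constr objects), anything that DEoptim.control()
accepts can be passed through the dots of optimize.portfolio():

opt_de <- optimize.portfolio(R=R, constraints=constr,
            optimize_method='DEoptim', search_size=20000, trace=TRUE,
            strategy=6,   # JADE-style DE/current-to-p-best/1
            c=0.4,        # JADE adaptation parameter
            reltol=1e-6, steptol=50)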

> 2.  Can a PA user obtain information about why iterations stopped?

Most likely reltol and steptol: DEoptim stops when the best objective
value has not improved by at least reltol (in relative terms) over
steptol consecutive generations, or when itermax is reached.  The
default reltol is why I multiplied your objective by 1000 in the script
that I sent, and steptol sometimes needs to be increased as well.
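
If you are writing your own objective, one purely illustrative way to do
that scaling (my_objective is a hypothetical user-written function of
the weight vector, not part of PortfolioAnalytics):

scaled_objective <- function(weights, ...) {
  1000 * my_objective(weights, ...)  # same optimum; objective changes are 1000x larger
}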

> 3.  How does search_size affect the DEoptim arguments?  I tried 
> increasing search_size from 10,000 to 20,000 but noted that 1550 
> 'iterations' were performed in both cases. I'm not sure that convergence 
> was achieved in either case.  Does search_size ever affect the number of 
> iterations?

20000 is the default for search_size.

When using random portfolios, search_size is precisely that: how many
portfolios to test.  You need to make sure the feasible weight sequence
you build with generatesequence() can actually supply search_size unique
portfolios, typically by setting the 'by' parameter to something smaller
than .01 (I often use .002, as .001 seems like overkill).
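
A small sketch, assuming the generatesequence() helper and its min/max/by
arguments:

weight_seq <- generatesequence(min=0, max=1, by=.002)  # finer weight grid
length(weight_seq)  # many more distinct values than a .01 grid would give
# supply weight_seq as the feasible weight sequence in your constraint
# specification (that interface detail is assumed here, not shown)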

When using DE, search_size is decomposed into two interacting
parameters: NP and itermax.

NP, the number of members in each population, defaults to ten times the
number of parameters (assets/weights) and is capped at 2000 in DEoptim.

itermax, if not passed in dots, defaults to the number of parameters
(assets/weights) times 50.  That is why DE stopped at 1550 generations
in your example (31*50).  You did actually test 20000 portfolios:
increasing search_size enlarged the population in each generation rather
than the number of generations.  You can pass itermax in dots to get a
larger number of generations.

Be aware that there is quite a lot of interaction going on here.  In DE,
the rule of thumb is to make each population contain ten times as many
members as you have parameters; sometimes 5x is enough for portfolio
problems, sometimes not.  20000/310 would give only 64 generations,
which is almost certainly not enough.  20000/1550 gives only ~12
population members, which probably doesn't provide enough opportunities
to mutate/converge.  31*10*1550 = 480,500 evaluations, which is likely
too many, but I don't know your utility function well enough to say that
definitively.
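
To make the interaction concrete, a small numeric sketch of the 31-asset
example (plain arithmetic, not PortfolioAnalytics code):

n_assets    <- 31
NP_rule     <- n_assets * 10   # 310 members, per the 10x rule of thumb
itermax_def <- n_assets * 50   # 1550 generations by default
search_size <- 20000

search_size / NP_rule        # ~64 generations: almost certainly too few
search_size / itermax_def    # ~12.9 members per generation: probably too few
NP_rule * itermax_def        # 480,500 evaluations to honor both defaults

# to get more generations, pass itermax (and, if needed, NP) in dots, e.g.:
# optimize.portfolio(..., optimize_method='DEoptim', itermax=3000, NP=310)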

This brings me full circle to the utility of random portfolios, as
pointed out years ago by Pat Burns and others.  You can use a (small)
finite set of random portfolios to gain an intuition for your utility
function, and even to get a very close approximation of the 'true'
optimal portfolio (certainly within the bounds of your estimation
error!), resorting to the global optimizer only when you want to get
even closer, or when your examination of the random portfolio output
indicates that the feasible space is so non-smooth that you need the
mutation properties of the global optimizer to help you out.

Regards,

   - Brian


