[R-sig-finance] Noise in portfolio optimization (was: Random Numbers)

Patrick Burns patrick at burns-stat.com
Sat Nov 19 11:26:04 CET 2005


In the optimizations we are talking about, there is noise
in the expected returns and noise in the variance matrix.

Unless you are using a sample estimate of the variance
rather than something more stable like a factor model,
the error in the variance matrix will be minimal compared
to the error in the expected returns.  Hence a reasonable
approach to error in the variance matrix is not to worry
about it.
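
To make the contrast concrete, here is a minimal sketch (my own
illustration, not something from the original discussion) of a
sample covariance next to a one-factor, market-model covariance
built from the same hypothetical return matrix:

## A hypothetical T x N matrix of asset returns
set.seed(1)
returns <- matrix(rnorm(60 * 10, sd = 0.02), nrow = 60, ncol = 10)

## Plain sample estimate
V.sample <- cov(returns)

## One-factor (market-model) estimate: regress each asset on an
## equal-weight "market" return, then rebuild the covariance from
## the betas, the market variance and the residual variances
market    <- rowMeans(returns)
fits      <- lapply(seq_len(ncol(returns)),
                    function(j) lm(returns[, j] ~ market))
betas     <- sapply(fits, function(f) coef(f)[2])
resid.var <- sapply(fits, function(f) var(residuals(f)))
V.factor  <- outer(betas, betas) * var(market) + diag(resid.var)

The factor estimate has far fewer free parameters, which is what
makes it the more stable of the two.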

I think the proper way to deal with noise in the expected
returns is to increase the trading cost for each asset based on
how noisy its expected return is.

First, note that 'portfolio optimization' is really a misnomer.
What we are (or should be) optimizing is the trade.

We are also in a classic James-Stein shrinkage setting in
which we care about the overall outcome, not the individual
pieces.  If in reality the actual best trade is MSFT=-143,
IBM=78, and so on, we don't get any extra benefit for selling
exactly 143 of MSFT.  We benefit from the trade as a whole
being good. 

Given that we have noise, theory tells us to shrink towards
something.  The question is: shrink towards what?  I think the
answer has to be to shrink towards where we are, that is,
towards less trading.  The way to accomplish this is to increase
the trading cost based on the amount of noise in each asset's
expected return.
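
Here is a minimal sketch of that idea (my own illustration; it
uses a quadratic rather than a proportional trading cost purely
so the solution stays in closed form).  The penalty on trading
each asset is scaled by the standard error of its alpha, so the
trade is shrunk towards the current holdings most strongly where
the alpha is noisiest:

set.seed(2)
n      <- 5
V      <- crossprod(matrix(rnorm(n * n), n)) / n   # toy covariance matrix
alpha  <- rnorm(n, 0, 0.02)                        # noisy expected returns
se     <- runif(n, 0.005, 0.03)                    # noise level of each alpha
w0     <- rep(1 / n, n)                            # current weights
lambda <- 4                                        # risk aversion
k      <- 50                                       # overall cost scaling

## Per-asset trading penalty: larger standard error => dearer to trade
C <- diag(k * se)

## Maximise  alpha'w - (lambda/2) w'Vw - (w - w0)' C (w - w0)
w <- solve(lambda * V + 2 * C, alpha + 2 * C %*% w0)

## The noisier an asset's alpha, the closer w stays to w0 for that asset
round(cbind(w0 = w0, w = drop(w), trade = drop(w) - w0, se = se), 4)

As the penalties grow, the solution collapses to w0, that is, to
no trade at all, which is exactly the shrinkage target argued for
above.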

Patrick Burns
patrick at burns-stat.com
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and "A Guide for the Unwilling S User")

Kris wrote:

>OK, there are several things going on here:
>i)   Michaud's resampling algorithm.
>ii)  What I described, which is doing some sort of bias reduction in the historical covariance estimation by resampling; if you think there is serial-correlation information, then do block bootstrapping (see the sketch after this list).
>iii) What Patrick describes, which is to do the resampling on the alpha.
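
A bare-bones version of the block bootstrap mentioned in (ii),
as a sketch only (mine, not Kris's code): resample fixed-length
blocks of rows so that serial correlation within a block
survives, and look at the spread of the resulting covariance
estimates.  Here 'returns' is a hypothetical T x N matrix of
asset returns.

set.seed(3)
returns <- matrix(rnorm(120 * 4, sd = 0.01), nrow = 120, ncol = 4)

block.cov <- function(R, block.len = 10) {
  T.obs  <- nrow(R)
  starts <- sample(seq_len(T.obs - block.len + 1),
                   ceiling(T.obs / block.len), replace = TRUE)
  idx <- as.vector(sapply(starts, function(s) s:(s + block.len - 1)))[1:T.obs]
  cov(R[idx, ])
}

boot.covs <- replicate(500, block.cov(returns), simplify = FALSE)

The tsboot() function in the boot package does the same job with
more options (for example geometrically distributed block lengths).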
>
>I don't know if this is related to the original question, but what is the preferred method for detecting,
>for a medium/long-term investor, that things have changed enough to rebalance?
>
>-----Original Message-----
>From: Patrick Burns <patrick at burns-stat.com>
>Sent: Nov 18, 2005 12:42 PM
>To: "L.Isella" <L.Isella at myrealbox.com>
>Cc: kriskumar at earthlink.net, r-sig-finance at stat.math.ethz.ch
>Subject: Re: [R-sig-finance] Random Numbers
>
>I think there are several problems with the resampled
>efficient frontier; here is one: the procedure as I understand
>it is to bootstrap the mean of the historical returns.  What
>should be bootstrapped is the alpha generation process.
>One hopes that there are few fund managers who use the
>historical mean as their expected return.  Bootstrapping the
>actual alpha generation process is likely to be non-trivial.
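
For concreteness, the step being criticised looks something like
this (a sketch of mine, with 'returns' a hypothetical T x N
matrix of historical returns): each bootstrap replicate simply
resamples rows and takes column means, and so says nothing about
how the manager's alphas are actually produced.

boot.means <- t(replicate(1000,
    colMeans(returns[sample(nrow(returns), replace = TRUE), ])))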
>
>Patrick Burns
>patrick at burns-stat.com
>+44 (0)20 8525 0696
>http://www.burns-stat.com
>(home of S Poetry and "A Guide for the Unwilling S User")
>
>L.Isella wrote:
>
>>On 11/18/05, Kris <kriskumar at earthlink.net> wrote:
>>
>>>I don't quite follow what you mean.  People do the resampled efficient frontier with bootstrapping (or bootstrapping plus jackknife), but this is done on the correlation/covariance estimation process.
>>>If all you need is a correlated RNG, take a look at V&R's MASS package, mvrnorm in particular.  Alternatively you can use rnorm with chol to get the correlated RNG.
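
Both routes mentioned above, sketched with made-up numbers (my
example, not Kris's code): correlated normal draws via
MASS::mvrnorm, and via rnorm() pushed through a Cholesky factor
of the target covariance.

library(MASS)

mu    <- c(0.01, 0.02)
Sigma <- matrix(c(0.04,  0.018,
                  0.018, 0.09), 2, 2)

## Route 1: MASS::mvrnorm
x1 <- mvrnorm(n = 1000, mu = mu, Sigma = Sigma)

## Route 2: independent normals times chol(Sigma), plus the means
z  <- matrix(rnorm(1000 * 2), ncol = 2)
x2 <- sweep(z %*% chol(Sigma), 2, mu, "+")

## Both should reproduce Sigma up to sampling error
cov(x1); cov(x2)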
>>Well, I mean the idea of resampled efficiency as expressed by Michaud in his book: you assume that the returns of the stocks in your portfolios are normally distributed (which is a reasonable approximation for the stocks I deal with).
>>You come up with some guesses about the "true" expected returns and the "true" covariance matrix of these assets.
>>In other words, you assume that your historical data are a sample from a multivariate normal distribution with certain correlations.
>>Then you take random draws from this distribution and simulate several (actually plenty of) sets of returns.
>>Each simulated set of returns provides you with some average returns and correlations, and you optimize a portfolio on the basis of these data.
>>Oversimplifying, you repeat this procedure many times, obtain some average portfolio weights along the simulated efficient frontiers, and use these weights to generate the resampled efficient frontier by means of the "true" covariance matrix and "true" expected returns.
>>At least this is how I understood it.  Has anyone understood it differently?
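
A stripped-down version of the recipe just described, as I read
it (my sketch only, not Michaud's actual algorithm; a single
unconstrained mean-variance portfolio stands in for averaging
along the whole frontier, so the weights need not sum to one):

library(MASS)
set.seed(4)

mu0    <- c(0.08, 0.10, 0.12)            # the "true" expected returns (annual)
Sigma0 <- diag(c(0.04, 0.09, 0.16))      # the "true" covariance matrix
Sigma0[1, 2] <- Sigma0[2, 1] <- 0.03
n.obs  <- 60                             # length of each simulated history
lambda <- 4                              # risk aversion

one.draw <- function() {
  R <- mvrnorm(n.obs, mu0 / 12, Sigma0 / 12)   # one simulated monthly history
  solve(lambda * cov(R), colMeans(R))          # unconstrained MV weights
}

## Average the weights over many simulated histories
w.resampled <- rowMeans(replicate(1000, one.draw()))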
>>Cheers
>>
>>Lorenzo
>>
>>_______________________________________________
>>R-sig-finance at stat.math.ethz.ch mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-sig-finance


