[R-SIG-Finance] Monte Carlo Convergence test

Michael Weylandt michael.weylandt at gmail.com
Mon Oct 26 15:32:34 CET 2015


> On Oct 26, 2015, at 2:47, Amelia Marsh via R-SIG-Finance <r-sig-finance at r-project.org> wrote:
> 
> Dear Forum, 
> 
> I have a series of prices for, say, 100 (say equity) instruments. From these prices, for each of these 100 instruments, I generate returns using ln(current price / previous price). 
> 
> Assuming originally I had 251 prices available for each of these 100 instruments over the last one-year period, I have a matrix of 250 x 100 returns. 
> 
> I assume that these returns follow a multivariate normal distribution. Using the returns, I generate a mean vector of returns 'M' and the variance-covariance matrix of returns 'S'. 
> 
> Then, using the MASS library, I simulate say 10000 returns for each of the 100 instruments as: 
> 
> sim_rates = mvrnorm(10000, M, S) 
> 
> This gives me 10000 simulated returns for each of the 100 instruments, and using these simulated returns I carry out further analysis. 
> 
> My query is: how do I carry out a convergence test in R to arrive at a sufficient number of simulations? 

Hi Amelia,

It's not clear to me what you're asking. 

It sounds like you might be confusing the variance of MC (Monte Carlo -- plain simulation) estimates with convergence of MCMC (Markov Chain Monte Carlo -- a means of simulating from a distribution you can't write down explicitly). Since it doesn't sound like you're doing MCMC, I'm going to go ahead and assume you're asking how to estimate the variance of your MC estimates. Once you can do that, you can just bump up the number of simulations until your estimate is 'good enough' for your goal. 
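To make that concrete, here's a rough sketch of the pipeline I think you're describing, ending in a placeholder 'further analysis' step. The `prices` matrix and the equally weighted portfolio statistic are stand-ins for whatever you actually have, not a claim about your setup:

    ## prices: hypothetical 251 x 100 matrix of daily prices
    library(MASS)

    returns <- diff(log(prices))        # 250 x 100 matrix of log returns
    M <- colMeans(returns)              # mean vector
    S <- cov(returns)                   # variance-covariance matrix

    n_sim     <- 10000
    sim_rates <- mvrnorm(n_sim, M, S)   # n_sim x 100 simulated returns

    ## Placeholder final statistic: equally weighted portfolio returns
    port_sim <- rowMeans(sim_rates)

Everything below is about how noisy a number computed from port_sim (or whatever your real output is) will be, and how that noise shrinks as n_sim grows.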

Since you've assumed a distribution and are sampling directly from it, you've already converged.  End of story. (If your starting point and destination are the same, your travel time is quite short.) The estimates you make from those draws, however, will still have Monte Carlo variance -- a single run won't necessarily give you the right result. (This isn't specific to MC -- the same issue comes up with the variance of estimates from small amounts of observed data.) The key point here is that you need to look at the variability of the final result of your analysis, not of the simulated data you feed into it. 
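One blunt but effective way to see this: re-run the whole simulation a handful of times and look at how much your final number moves around. Here the 'final number' is a hypothetical 1% quantile of the simulated portfolio returns -- substitute whatever your analysis actually reports:

    ## Spread of the final statistic across independent simulation runs
    run_once <- function(n_sim) {
      sim_rates <- mvrnorm(n_sim, M, S)
      quantile(rowMeans(sim_rates), probs = 0.01)
    }

    replicate(20, run_once(10000))   # the spread of these 20 values is MC noise

If that spread is small relative to the precision you need, you have enough simulations; if not, increase n_sim and repeat.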

MC estimates are typically sample means, and the standard error of a sample mean is the sample standard deviation divided by the square root of the number of simulations, so you can estimate the variance of your estimator directly from the simulated output. (This should be covered in any intro stats course -- https://en.m.wikipedia.org/wiki/Standard_error#Standard_error_of_the_mean) A bit more advanced would be to divide your simulations into multiple 'chunks' and see how your estimate varies across them. 
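In code, continuing from the first sketch (port_sim is the vector of simulated portfolio-level values; the 10 chunks are an arbitrary choice):

    ## Standard error of an MC estimate of a mean
    mc_mean <- mean(port_sim)
    mc_se   <- sd(port_sim) / sqrt(length(port_sim))

    ## 'Chunking': split the simulations into 10 batches and compare
    ## the per-batch estimates
    chunks      <- split(port_sim, cut(seq_along(port_sim), 10))
    chunk_means <- sapply(chunks, mean)
    sd(chunk_means)   # how much the estimate varies from batch to batch

Note the sqrt(length(port_sim)) in the denominator: quadrupling the number of simulations only halves the standard error.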

(Paul Glasserman's "Monte Carlo Methods in Financial Engineering" would be a good reference for these sorts of questions if you have access to a good library; Sheldon Ross's "Simulation" is more introductory, but not bad. Additionally, these issues are discussed in most Bayesian textbooks, but those books also have to worry about MCMC convergence, and that gets more air-time.)

A final word of caution: increasing the number of simulations isn't going to make your analyses more accurate, only more precise. (In machine-learning speak, by increasing the number of samples you're reducing variance only -- not touching bias.)  Multivariate Gaussianity is a very strong assumption. If you're worried about it, I'd look into resampling methods. 
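As a sketch of what I mean by resampling: draw rows of your observed return matrix with replacement instead of drawing from the fitted Gaussian. This keeps the empirical (possibly fat-tailed) return shapes and the observed cross-sectional dependence, though this simple i.i.d. version ignores any serial dependence -- a block bootstrap (e.g. boot::tsboot) would be the next step if that matters:

    ## Bootstrap the historical return rows instead of assuming Gaussianity
    boot_idx   <- sample(nrow(returns), size = 10000, replace = TRUE)
    boot_rates <- returns[boot_idx, ]   # 10000 x 100 resampled returns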

 Michael


> 
> With regards 
> 
> Amelia
> 


