[R-SIG-Finance] rugarch VaR calculation "manually"

alexios ghalanos alexios at 4dscape.com
Tue May 7 13:35:27 CEST 2013


Hello,

On 07/05/2013 12:15, Neuman Co wrote:
> I am using the rugarch package in R and I have some questions:
>
> I want to use the rugarch package to calculate the VaR.
>
> I used the following code to fit a certain model:
>
> spec2 <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
>     mean.model = list(armaOrder = c(5, 5), include.mean = FALSE),
>     distribution.model = "norm",
>     fixed.pars = list(ar1 = 0, ar2 = 0, ar3 = 0, ma1 = 0, ma2 = 0, ma3 = 0))
>
> model2<-ugarchfit(spec=spec2,data=mydata)
>
> Now I can look at the 2.5% VaR with the following command:
> plot(model2)
>
> and then choosing the second plot.
>
> Now my first question is: how can I get the 1.0% VaR VALUES, i.e. the
> numbers themselves rather than just the plot?
Use the 'quantile' method, i.e. 'quantile(model2, 0.01)'. This also applies 
to uGARCHforecast, uGARCHsim etc. It IS documented.
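
For example, something along these lines (a minimal sketch using the 
model2 object from your code above):

# in-sample 1% VaR, one value per observation
VaR01  <- quantile(model2, 0.01)
# the 2.5% VaR series shown in the second plot of plot(model2)
VaR025 <- quantile(model2, 0.025)
head(VaR01)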
>
> In the case of the normal distribution, one can easily calculate the VaR
> using the forecasted conditional volatility and the forecasted
> conditional mean:
>
> I use the ugarchforecast command and with that I can get the cond. volatility
> and cond. mean (my mean equation is a modified ARMA(5,5), see the spec
> command above):
>
> forecast = ugarchforecast(spec, data, n.roll = , n.ahead = , out.sample=)
>
> # conditional mean
> cmu = as.numeric(as.data.frame(forecast, which = "series", rollframe = "all", aligned = FALSE))
> # conditional sigma
> csigma = as.numeric(as.data.frame(forecast, which = "sigma", rollframe = "all", aligned = FALSE))
>
NO. 'as.data.frame' has long been deprecated. Use the 'sigma' and 
'fitted' methods (and make sure you upgrade to the latest version of rugarch).
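
Something along these lines (a minimal sketch; 'forc' and the other names 
are just illustrative, and the normal distribution is assumed as in your 
spec):

forc <- ugarchforecast(model2, n.ahead = 1)
# conditional mean and sigma forecasts via the extractor methods
cmu    <- fitted(forc)
csigma <- sigma(forc)
# manual 1% VaR under the normal assumption, as in your formula below
VaR_manual <- cmu + qnorm(0.01) * csigma
# which should agree with the quantile method applied to the forecast object
VaR_method <- quantile(forc, 0.01)
all.equal(as.numeric(VaR_manual), as.numeric(VaR_method))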

> I can calculate the VaR by using the property that the normal distribution
> belongs to the location-scale family of distributions:
>
> # use location+scaling transformation property of normal distribution:
> VaR = qnorm(0.01)*csigma + cmu
>
> My second question concerns the n.roll and out.sample arguments. I had a
> look at the description
> http://www.inside-r.org/packages/cran/rugarch/docs/ugarchforecast
> but I did not understand them. I want to calculate the daily VaR, so I
> need one-step-ahead predictions and I do not want to re-estimate the
> model at every time step. So what does it mean "to roll the forecast
> 1 step" and what is out.sample?
out.sample, which is specified at the estimation stage, retains 'out.sample' 
data points (i.e. they are not used in the estimation) so that a rolling 
forecast can then be performed on them. 'Rolling' means using data at time 
T-p (p=lag) to create a conditional 1-ahead forecast at time T. For the 
ugarchforecast method, this means using only the parameter estimates based 
on the first length(data)-out.sample observations. There is no 
re-estimation taking place (that is only done in the ugarchroll method).
For n.ahead>1, this becomes an unconditional forecast. Equivalently, you 
can append new data to your old dataset and use the ugarchfilter method.
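
To make this concrete, a sketch along the lines of your setup (the choice 
of 100 retained observations is just illustrative):

# keep the last 100 observations out of the estimation sample
fit  <- ugarchfit(spec = spec2, data = mydata, out.sample = 100)
# 1-ahead forecasts, rolled forward over the retained data
# (the parameters estimated on the first length(mydata)-100 points are
#  reused throughout; no re-estimation takes place)
forc <- ugarchforecast(fit, n.ahead = 1, n.roll = 99)
# 100 one-day-ahead 1% VaR values (rolls 0 to 99)
VaR_roll <- quantile(forc, 0.01)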
>
> My third question(s) is (are): how do I calculate the VaR in the case of
> a standardized hyperbolic distribution? Can I still calculate it as in
> the normal case, or does that no longer work (I am not sure whether the
> sdhyp belongs to the location-scale family)?
>
> Even if it does work (i.e. if the sdhyp belongs to the family of
> location-scale distributions), how do I calculate it in the case of a
> distribution which does not belong to the location-scale family?
> (I mean, if I cannot calculate it via VaR = uncmean + sigma * z_alpha,
> how do I have to calculate it?) Which distributions implemented in the
> rugarch package do not belong to the location-scale family?
>
ALL distributions in the rugarch package are represented in a location- 
and scale-invariant parameterization, since this is a key property 
required when working with the standardized residuals in the density 
function (i.e. the subtraction of the mean and scaling by the volatility).
The standardized Generalized Hyperbolic distribution does indeed have 
this property, and details are available in the vignette. See also the 
paper by Blaesild (http://biomet.oxfordjournals.org/content/68/1/251.short) 
for the linear transformation (aX+b) property.
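
In other words, the quantile of the shifted and scaled standardized 
distribution is the shifted and scaled quantile, which is exactly the 
VaR = mu + sigma*z_alpha formula you used. A small check using the public 
qdist function (the skew/shape/lambda values are just the illustrative 
ones from the example further below):

library(rugarch)
p   <- 0.01
mu  <- 0.001   # location
sig <- 0.02    # scale
# quantile of the standardized GH, shifted and scaled "by hand" ...
q1 <- mu + sig * qdist("ghyp", p, mu = 0, sigma = 1, skew = 0.9, shape = 0.4, lambda = -1)
# ... should equal the quantile obtained by passing mu and sigma directly
q2 <- qdist("ghyp", p, mu = mu, sigma = sig, skew = 0.9, shape = 0.4, lambda = -1)
all.equal(q1, q2)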

If working with the standard (NOT standardized) version of the GH 
distribution (\lambda, \alpha, \beta, \delta, \mu), you need to apply the 
location/scaling transformation to the parameters themselves, as the 
example below shows; this is equivalent to simply applying the 
location/scaling transformation to the standardized version:
#################################################
library(rugarch)
# zeta = shape
zeta = 0.4
# rho = skew
rho = 0.9
# lambda = GIG mixing distribution shape parameter
lambda=-1
# POSITIVE scaling factor (sigma is in any case always positive)
# sigma
scaling = 0.02
# mean (location)
location = 0.001

# standardized transformation based on (0,1,\rho,\zeta)
# parameterization:
x1 = scaling*qdist("ghyp", seq(0.001, 0.5, length.out=100), shape = zeta,
	skew = rho, lambda = lambda) + location
# Equivalent to standard transformation:
# First obtain the standard parameters (which have a mean of zero
# and sigma of 1).
parms = rugarch:::.paramGH(zeta, rho, lambda)
x2 = rugarch:::.qgh(seq(0.001, 0.5, length.out=100),
	alpha = parms[1]/abs(scaling), beta = parms[2]/abs(scaling),
	delta = abs(scaling)*parms[3], mu = (scaling)*parms[4] + location,
	lambda = lambda)
all.equal(x1, x2)
# Notice the approximation error in the calculation of the quantile for 
# which there is no closed form solution (and rugarch uses a tolerance
# value of .Machine$double.eps^0.25)
# Load the GeneralizedHyperbolic package of Scott to adjust the
# tolerance:
library(GeneralizedHyperbolic)

x1 = location + scaling*qghyp(seq(0.001, 0.5, length.out=100), mu = parms[4],
	delta = parms[3], alpha = parms[1], beta = parms[2], lambda = lambda,
	lower.tail = TRUE, method = c("spline", "integrate")[2],
	nInterpol = 501, subdivisions = 500, uniTol = 2e-12)
# equivalent to:
x2 = qghyp(seq(0.001, 0.5, length.out=100), mu = (scaling)*parms[4] + location,
	delta = abs(scaling)*parms[3], alpha = parms[1]/abs(scaling),
	beta = parms[2]/abs(scaling), lambda = lambda, lower.tail = TRUE,
	method = c("spline", "integrate")[2], nInterpol = 501,
	subdivisions = 500, uniTol = 2e-12)

all.equal(x1, x2)
#################################################
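
Tying this back to the VaR question: with a fit using 
distribution.model = "ghyp", something along these lines should work 
(a sketch; the names spec_gh/fit_gh/forc_gh are just illustrative, and 
the coefficient names "skew", "shape" and "ghlambda" are my recollection, 
so check names(coef(fit_gh)) to be sure):

spec_gh <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
	mean.model = list(armaOrder = c(5, 5), include.mean = FALSE),
	distribution.model = "ghyp",
	fixed.pars = list(ar1 = 0, ar2 = 0, ar3 = 0, ma1 = 0, ma2 = 0, ma3 = 0))
fit_gh  <- ugarchfit(spec = spec_gh, data = mydata)
forc_gh <- ugarchforecast(fit_gh, n.ahead = 1)
cf <- coef(fit_gh)
# manual 1% VaR using the estimated skew, shape and GIG lambda
VaR_gh <- fitted(forc_gh) + sigma(forc_gh) *
	qdist("ghyp", 0.01, skew = cf["skew"], shape = cf["shape"], lambda = cf["ghlambda"])
# which should again agree with the quantile method
quantile(forc_gh, 0.01)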


> Thanks a lot for your help!
>

Regards,

Alexios
> _______________________________________________
> R-SIG-Finance at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions should go.
>


