[R-SIG-Finance] Spline GARCH
Bastian Offermann
bastian2507hk at yahoo.co.uk
Fri Feb 7 20:40:58 CET 2014
There was an error in the previous code; this should be correct...
library('BB')       # loaded for alternative optimizers (not used below)
library('alabama')
library('nloptr')
library('numDeriv') # needed for hessian() at the end
r = rnorm(13551, 0.0004, 0.016) # pseudo returns
### Spline GARCH ###
### Specifying time trend
k = 3 # number of knots
bounds = floor(1:k * 13551/k) # partition of time horizon
bounds = c(0, bounds[1:(k-1)])
time.lin = 0:13550 # linear time trend
time.nonlin <- matrix(rep(time.lin, k), length(time.lin), k) # quadratic time trend
for(i in 1:k) {
  time.nonlin[,i] <- time.nonlin[,i] - bounds[i]  # shift so the i-th piece starts at knot i
  time.nonlin[which(time.nonlin[,i] < 0), i] <- 0 # zero out observations before the knot
  time.nonlin[, i] <- time.nonlin[, i]^2          # square to get the quadratic spline term
}
time.trend = cbind(time.lin, time.nonlin)
for(i in 1:ncol(time.trend)) time.trend[,i] <-
  time.trend[,i]/time.trend[nrow(time.trend), i] # normalize each column to end at 1
head(time.trend)
tail(time.trend)
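A quick sanity check on the trend matrix (not in the original post, but cheap to run): there should be one linear column plus k quadratic columns, each starting at 0 and normalized to end at 1.

stopifnot(ncol(time.trend) == k + 1)                 # linear term plus k knot terms
stopifnot(all(time.trend[1, ] == 0))                 # every column starts at 0
stopifnot(all(time.trend[nrow(time.trend), ] == 1))  # and ends at 1 after normalizing
matplot(time.trend, type = 'l', lty = 1)             # visual check of the spline basis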
### Spline function
splgarch <- function(para) {
  mu    <- para[1]
  omega <- para[2]
  alpha <- para[3]
  beta  <- para[4]
  cc    <- para[5]
  w     <- para[6:(k+6)]                                       # spline weights w0..wk
  Tau   <- cc * exp(as.vector(time.trend %*% w))               # exponential spline trend
  e2    <- (r - mu)^2                                          # squared demeaned returns
  e2t   <- omega + alpha * c(mean(e2), e2[-length(r)]) / Tau^2 # lagged, detrended ARCH term
  s2    <- filter(e2t, beta, "recursive", init = mean(e2))     # s2[t] = e2t[t] + beta*s2[t-1]
  sig2  <- s2 * Tau                                            # recombine with the trend
  0.5 * sum(log(2*pi) + log(sig2) + e2/sig2)                   # Gaussian negative log-likelihood
}
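For anyone mapping the code to the paper: in Engle & Rangel (2008) the conditional variance is split into a slow-moving spline trend and a unit-variance GARCH component, roughly (in the paper's notation)

\sigma_t^2 = \tau_t \, g_t, \qquad
g_t = \omega + \alpha \frac{(r_{t-1}-\mu)^2}{\tau_{t-1}} + \beta g_{t-1}, \qquad
\tau_t = c \exp\!\Big( w_0 t + \sum_{i=1}^{k} w_i \big( (t - t_{i-1})_+ \big)^2 \Big),

where (x)_+ = max(x, 0) and t_0, ..., t_{k-1} are the knots ('bounds' above). Tau, e2t, s2 and sig2 in the function compute this decomposition (the code divides by Tau^2 inside the recursion, so the correspondence holds up to that normalization), and the last line is the Gaussian negative log-likelihood that nlminb minimizes below.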
### Spline parameter initialization
mu <- mean(r)
small <- 1e-6
alpha <- 0.1
beta <- 0.8
omega <- (1-alpha-beta)
para <- c(mu, omega, alpha, beta, 1, rep(small, k+1))      # start: c = 1, w near 0
lo <- c(-10*abs(mu), small, small, small, rep(-10, k+2))   # last k+2 entries bound c and w0..wk
hi <- c(10*abs(mu), 100*abs(mu), 1-small, 1-small, rep(10, k+2))
### Spline optimization
fit <- nlminb(start = para, objective = splgarch, lower = lo, upper = hi,
              control = list(x.tol = 1e-8, trace = 0))
# nlminb() never returns a hessian: its 'hessian' argument expects a function
# (and is only used together with an analytic gradient), so 'hessian = TRUE'
# in the original call was silently ignored; numDeriv is used below instead
names(fit$par) <- c('mu', 'omega', 'alpha', 'beta', 'c', paste('w', 0:k, sep = ''))
round(fit$par, 6)
# method = "complex" does not work for this objective: stats::filter() coerces
# its input to double, so splgarch() cannot be evaluated at complex arguments
# (a common source of NaN); use the default Richardson extrapolation instead
fit.hessian = hessian(splgarch, fit$par, method = "Richardson")
fit.hessian
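Because splgarch() returns the negative log-likelihood, fit.hessian is the observed information matrix, so approximate standard errors follow by inverting it. A minimal sketch, assuming the estimate is an interior optimum and the hessian is positive definite (otherwise solve() fails or produces negative diagonal entries, which is exactly the NaN symptom discussed below):

fit.se <- sqrt(diag(solve(fit.hessian)))  # inverse observed information
round(cbind(estimate = fit$par, std.error = fit.se), 6)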
On 2/7/2014 10:28 AM, Paul Gilbert wrote:
>
>
> On 02/07/2014 08:19 AM, Bastian Offermann wrote:
>> Hi all,
>>
>> I am currently implementing the Engle & Rangel (2008) Spline GARCH
>> model. I use the nlminb optimizer, which unfortunately does not provide
>> a hessian for computing the standard errors of the coefficients. I can
>> get around this using the 'hessian' function in numDeriv, but usually
>> get NaN values for the omega parameter.
>
> Do you know why this happens, or can you provide a simple example? An
> NaN value from hessian() is often because the function fails to
> evaluate in a small neighbourhood of the point where it is being
> calculated, that is, at your parameter estimate. Are you on the
> boundary of the feasible region?
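A quick way to run both of these checks on the fit above (a sketch using the objects already defined):

which(abs(fit$par - lo) < 1e-6 | abs(fit$par - hi) < 1e-6)  # estimates sitting on a bound
eps <- 1e-5                                                 # small perturbation
sapply(seq_along(fit$par), function(i) {                    # likelihood finite nearby?
  p <- fit$par; p[i] <- p[i] + eps
  is.finite(splgarch(p))
})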
>>
>> Can anybody recommend additional optimizers that directly return a
>> hessian?
>
> A hessian returned by an optimizer is usually one that is built up by
> some approximation during the optimization process. One of the
> original purposes of hessian() was to try to do something that is
> usually better than that, specifically because you want a good
> approximation if you are going to use it to calculate standard errors.
> (And, of course, you want the conditions to hold for the hessian to be
> an approximation of the variance.) Just because an optimizer returns
> something for the hessian, it is not clear that you would want to use
> it to calculate standard errors. The purpose of the hessian built up
> by an optimizer is to speed the optimization, not necessarily to
> provide a good approximation to the hessian. In the case where
> hessian() is returning NaNs I would be concerned that anything
> returned by an optimizer could be simply bogus.
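To see the difference in practice, one can refit with optim(), which does return a hessian when asked (a numerically differentiated one, computed at the solution rather than built up along the path), and compare it with the numDeriv result. A sketch, starting from the nlminb estimate so both hessians are evaluated at the same point:

fit2 <- optim(fit$par, splgarch, method = 'L-BFGS-B', lower = lo, upper = hi, hessian = TRUE)
max(abs(fit2$hessian - hessian(splgarch, fit2$par)))  # element-wise discrepancy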
>
>> How sensitive are the coefficients to the initial starting values?
>
> This depends on a number of things, the optimizer you use being one of
> them. Most optimizers have some mechanism to specify something
> different from the default for the stopping criteria and you can, for
> a problem without local optimum issues (e.g. one with convex level sets),
> reduce sensitivity to the starting value by tightening the stopping
> criteria. The more serious problem is when you have local optimum
> issues. Then you will get false convergence and thus extreme
> sensitivity to starting values. Even for a parameter space that is
> generally good, there are often parameter values for which the
> optimization is a bit sensitive. And, of course, all this also depends
> on your dataset. Generally, the sensitivity will increase with short
> datasets.
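For the model above, a simple way to gauge this sensitivity is to refit from a handful of perturbed starting vectors and look at the spread of the estimates. A sketch; the +/-50% perturbation size is arbitrary:

set.seed(1)
starts <- lapply(1:5, function(i)
  pmin(pmax(para * (1 + runif(length(para), -0.5, 0.5)), lo), hi))
fits <- sapply(starts, function(p)
  nlminb(p, splgarch, lower = lo, upper = hi)$par)
round(apply(fits, 1, sd), 6)  # per-coefficient spread across restarts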
>
> The previous paragraph is about the coefficient estimate. At the same
> coefficient estimate hessian() will return the same thing, but a
> hessian built up by an optimizer will depend on the path, and
> generally needs a fairly large number of final steps in the vicinity
> of the optimum to give a good approximation. Thus, somewhat
> counterintuitively, if you do an optimization starting with values for the
> coefficients that are very close to the optimum you will get quick
> convergence but often a bad hessian approximation from the optimizer.
>
> Paul
>>
>> Thanks in advance!
>>