[R-SIG-Finance] Spline GARCH

Alexios Ghalanos alexios at 4dscape.com
Fri Feb 7 16:49:20 CET 2014


You might also want to consider the issue of scaling when it comes to the intercepts (mean or variance), as well as your positivity and stationarity conditions/constraints.
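
For instance, a minimal sketch of the scaling and constraint point ('prices' is a placeholder series, and this covers only the plain GARCH(1,1) piece, not the spline component):

r <- 100 * diff(log(prices))   # percentage returns keep omega away from ~1e-6
## optimize an unconstrained theta and map it back, so that positivity
## and stationarity (alpha + beta < 1) hold by construction
to.natural <- function(theta) {
  omega <- exp(theta[1])             # omega > 0
  ab    <- exp(theta[2:3])
  ab    <- ab / (1 + sum(ab))        # alpha, beta > 0 and alpha + beta < 1
  c(omega = omega, alpha = ab[1], beta = ab[2])
}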

-Alexios

> On 7 Feb 2014, at 15:28, Paul Gilbert <pgilbert902 at gmail.com> wrote:
> 
> 
> 
>> On 02/07/2014 08:19 AM, Bastian Offermann wrote:
>> Hi all,
>> 
>> I am currently implementing the Engle & Rangel (2008) Spline GARCH
>> model. I use the nlminb optimizer, which unfortunately does not
>> return a hessian that I could use to compute standard errors for the
>> coefficients. I can get around this with the 'hessian' function in
>> numDeriv, but I usually get NaN values for the omega parameter.
> 
> Do you know why this happens, or can you provide a simple example? A NaN value from hessian() often occurs because the function fails to evaluate in a small neighbourhood of the point where it is being calculated, that is, at your parameter estimate. Are you on the boundary of the feasible region?
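> 
> A quick way to check is to perturb each parameter slightly and see whether the likelihood still evaluates. A minimal sketch ('negll' and 'fit' stand in for your negative log-likelihood and your nlminb result, which I have not seen):
> 
> eps <- 1e-5
> for (i in seq_along(fit$par)) {
>   for (s in c(-1, 1)) {
>     p <- fit$par
>     p[i] <- p[i] + s * eps * max(1, abs(p[i]))
>     v <- tryCatch(negll(p), error = function(e) NaN)
>     if (!is.finite(v)) cat("objective not finite perturbing parameter", i, "\n")
>   }
> }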
>> 
>> Can anybody recommend additional optimizers that directly return a
>> hessian?
> 
> A hessian returned by an optimizer is usually one that is built up by some approximation during the optimization process. One of the original purposes of hessian() was to try to do something that is usually better than that, specifically because you want a good approximation if you are going to use it to calculate standard errors. (And, of course, you want the conditions to hold for the inverse hessian to be an approximation of the variance.) Just because an optimizer returns something for the hessian, it is not clear that you would want to use it to calculate standard errors: the purpose of the hessian built up by an optimizer is to speed the optimization, not necessarily to provide a good approximation to the hessian. In a case where hessian() is returning NaNs, I would be concerned that anything returned by an optimizer could be simply bogus.
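> 
> For what it is worth, the usual pattern is something like this (again, 'negll' and 'fit' are placeholders for your objective and nlminb output; numDeriv's Richardson extrapolation is generally more accurate than a one-shot finite-difference hessian):
> 
> library(numDeriv)
> H <- hessian(negll, fit$par)  # observed information, if negll is the
>                               # negative log-likelihood at the optimum
> V <- solve(H)                 # its inverse approximates the covariance
> se <- sqrt(diag(V))           # NaNs or negative diagonals signal trouble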
> 
>> How sensitive are the coefficients to the initial starting values?
> 
> This depends on a number of things, the optimizer you use being one of them. Most optimizers have some mechanism for specifying stopping criteria other than the defaults, and for a problem without local-optimum issues (e.g. convex level sets) you can reduce sensitivity to the starting value by tightening those criteria. The more serious problem is when you do have local-optimum issues: then you will get false convergence and thus extreme sensitivity to starting values. Even in a parameter space that is generally well behaved, there are often parameter values for which the optimization is a bit sensitive. And, of course, all of this also depends on your dataset; generally, sensitivity increases with shorter datasets.
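> 
> With nlminb that means something along these lines (a sketch; 'negll' and the alternative start vectors start1, start2, start3 are placeholders):
> 
> ctrl <- list(rel.tol = 1e-12, x.tol = 1e-10, iter.max = 2000, eval.max = 5000)
> starts <- list(start1, start2, start3)  # several dispersed starting values
> fits <- lapply(starts, function(s) nlminb(s, negll, control = ctrl))
> sapply(fits, function(f) c(f$objective, f$convergence))  # compare the optima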
> 
> The previous paragraph is about the coefficient estimate. At the same coefficient estimate hessian() will return the same thing, but a hessian built up by an optimizer depends on the path taken, and generally needs a fairly large number of final steps in the vicinity of the optimum to give a good approximation. Thus, somewhat counterintuitively, if you start an optimization with coefficient values very close to the optimum, you will get quick convergence but often a bad hessian approximation from the optimizer.
> 
> Paul
>> 
>> Thanks in advance!
>> 
> 


