[R] saddle points in optim
Ravi Varadhan
rvaradhan at jhmi.edu
Sun Nov 7 18:27:13 CET 2010
The Hessian from `optim' is not as accurate as the one from `numDeriv' (which by default uses Richardson extrapolation), so I would trust numDeriv's Hessian over optim's. However, without seeing what you actually did, this is only a surmise.
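For what it's worth, here is a minimal base-R sketch of the kind of check I have in mind. The data, model, and starting values below are made up purely for illustration; in real use you would replace fit$hessian with numDeriv::hessian(rss, fit$par):

```r
## Hypothetical least-squares example: fit y = a * exp(b * x).
set.seed(1)
x <- seq(0, 2, length.out = 50)
y <- 2 * exp(0.5 * x) + rnorm(50, sd = 0.05)

## residual sum of squares to be minimised
rss <- function(p) sum((y - p[1] * exp(p[2] * x))^2)

fit <- optim(c(1, 1), rss, method = "BFGS", hessian = TRUE)

## optim's Hessian is a crude finite-difference estimate;
## numDeriv::hessian(rss, fit$par) is usually more accurate
h  <- fit$hessian
ev <- eigen(h, symmetric = TRUE)$values

if (all(ev > 0)) {
  ## positive definite: a genuine local minimum, so the
  ## sqrt(diag(solve(h))) recipe from the question applies
  se <- sqrt(diag(solve(h)))
} else {
  ## a non-positive eigenvalue means a saddle point (or premature
  ## convergence); the "standard errors" are meaningless there
  se <- rep(NaN, length(fit$par))
}
```

If any eigenvalue of the Hessian is non-positive, solve() can still return numbers, but they do not estimate anything useful; the eigenvalue check should come first.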
Ravi.
____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu
----- Original Message -----
From: Jonathan Phillips <994phij at gmail.com>
Date: Sunday, November 7, 2010 3:45 am
Subject: [R] saddle points in optim
To: R-help at r-project.org
> Hi,
> I've been trying to use optim to minimise least squares for a
> function, and then to estimate the errors using the Hessian matrix
> (calculated with numDeriv::hessian, which I read in another R-help
> post is meant to be more accurate than the Hessian given by optim).
>
> To get the standard error estimates, I'm calculating
> sqrt(diag(solve(x))) - I hope that's correct.
>
> I've found that using numDeriv's Hessian gives me NaNs for some of
> the errors, whereas the one from optim gives numbers for all
> parameters. If I compute the eigenvalues of the numDeriv Hessian, I
> get two negative values (and six positive - I'm fitting eight
> parameters). Does this mean that optim hasn't converged correctly and
> has hit a saddle point? If so, is there any way I could assist it to
> find the minimum?
>
> Thanks,
> Jon Phillips
>
> ______________________________________________
> R-help at r-project.org mailing list
>
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
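On the question of assisting optim off a saddle point: one common trick is to step along the eigenvector belonging to the most negative eigenvalue of the Hessian, which is a direction of negative curvature, and restart optim from there. The sketch below uses a made-up toy objective, and escape_saddle is a hypothetical helper name, not an existing function:

```r
## Hypothetical helper: restart an optimisation from just off a saddle.
escape_saddle <- function(f, par, H, step = 1e-2, ...) {
  e <- eigen(H, symmetric = TRUE)
  if (min(e$values) >= 0) return(par)          # already a minimum
  v <- e$vectors[, which.min(e$values)]        # negative-curvature direction
  cand <- list(par + step * v, par - step * v)
  best <- cand[[which.min(sapply(cand, f))]]   # pick the downhill side
  optim(best, f, ...)$par
}

## Toy objective with a saddle at (0, 0) and minima at (0, +/- 1/sqrt(2))
f  <- function(p) p[1]^2 + p[2]^4 - p[2]^2
H0 <- diag(c(2, -2))                           # Hessian of f at (0, 0)
p  <- escape_saddle(f, c(0, 0), H0)
```

After the restart, it is worth recomputing the Hessian (e.g. with numDeriv::hessian) at the new solution and checking the eigenvalues again before trusting any standard errors.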