[R] How to obtain final gradient estimation from optim
Stéphane Luchini
luchini at ehess.cnrs-mrs.fr
Thu Mar 27 15:22:56 CET 2003
On Thursday 27 March 2003 12:52, ripley at stats.ox.ac.uk wrote:
> On Thu, 27 Mar 2003, Stéphane Luchini wrote:
> > I use optim to compute maximum likelihood estimates without giving an
> > analytical gradient to optim. However, I would like to get an output of
> > the final numerical gradient vector and the final matrix of contributions
> > to the gradient, but I did not find any mention of this kind of output in
> > the help pages. Does anyone know how to do that?
>
> No, and optim does not even necessarily calculate a gradient.
> But if it does, it is supposed to be zero at a maximum....
It is zero at the theoretical level but not at the numerical level, and it
can be used to compute the outer product of the gradient (OPG) as an
estimator of the information matrix instead of the inverse Hessian. For long
computations, this gives an estimate of the standard errors without
computing the Hessian matrix, which can take a long time.
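For instance, here is a minimal sketch (not from optim itself; the toy
normal model and the names contribs, negll, and eps are all my own) of how
one can recover the numerical gradient and the OPG standard errors after an
optim run, using central finite differences:

set.seed(1)
y <- rnorm(100, mean = 2, sd = 1.5)

## per-observation contributions to the log-likelihood
## (normal model, parameterized with the log standard deviation)
contribs <- function(theta, y)
  dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE)
negll <- function(theta, y) -sum(contribs(theta, y))

fit <- optim(c(0, 0), negll, y = y, method = "BFGS")

## matrix of contributions to the gradient, G[t, i] = d l_t / d theta_i,
## by central finite differences (the step eps is an arbitrary choice)
eps <- 1e-6
G <- sapply(seq_along(fit$par), function(i) {
  h <- replace(rep(0, length(fit$par)), i, eps)
  (contribs(fit$par + h, y) - contribs(fit$par - h, y)) / (2 * eps)
})

colSums(G)                  ## total gradient: near zero, not exactly zero

## OPG estimator of the covariance matrix, and the standard errors
opg.vcov <- solve(crossprod(G))
sqrt(diag(opg.vcov))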
>
> I don't know what you mean by `contributions to the gradient': optim works
> with the (I presume) log-likelihood.
The matrix of contributions to the gradient has typical element $G_{ti}$
defined as follows:
$$G_{ti}(y,\theta) = \partial \ell_t(y,\theta) / \partial \theta_i$$
where $\ell_t$ is the contribution of observation $t$ to the log-likelihood.
Using such a matrix,
one can verify convergence using a Gauss-Newton regression of the form
$$\iota = G b + \text{residuals}$$
where $\iota$ is a vector of ones; the estimated $b$ should be close to zero
if the parameter estimates are close to the maximum.
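In R this check can be run directly on the G matrix from the sketch above
(again my own sketch, not existing optim output):

ones <- rep(1, nrow(G))
gnr <- lm(ones ~ G - 1)      ## no intercept: ones = G b + residuals
summary(gnr)$coefficients    ## the estimates of b should be close to zero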
This is why it is useful to get the gradient and the G matrix as output
(when they are available). Moreover, when analytical gradients are not
straightforward to derive, such output lets one compare the results of two
procedures (one giving R the analytical gradient and one without it).
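As an illustration of that comparison (again reusing my hypothetical negll
and data from the first sketch; the analytical gradient below is specific
to that toy normal model):

negll.grad <- function(theta, y) {
  mu <- theta[1]; s <- exp(theta[2])
  c(-sum(y - mu) / s^2,            ## d(-logL)/d mu
    -sum((y - mu)^2 / s^2 - 1))    ## d(-logL)/d log(sd)
}
fit2 <- optim(c(0, 0), negll, gr = negll.grad, y = y, method = "BFGS")
max(abs(fit2$par - fit$par))       ## should be small if both converged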