# [R] How to obtain final gradient estimation from optim

Thomas Lumley tlumley at u.washington.edu
Thu Mar 27 16:54:15 CET 2003

On Thu, 27 Mar 2003, Stéphane Luchini wrote:

> On Thursday 27 March 2003 at 12:52, ripley at stats.ox.ac.uk wrote:
> >
> > I don't know what you mean by 'contributions to the gradient': optim works
> > with the (I presume) log-likelihood.
>
> The matrix of contributions to the gradient, with typical element $G_{ti}$
> defined as follows:
>
> $$G_{ti}(y,\theta) = \partial \ell_t(y,\theta) / \partial \theta_i$$
>
> where $\ell_t$ is the contribution of observation $t$ to the log-likelihood.
> Using such a matrix, one can verify convergence with a Gauss-Newton
> regression of the form
>
> $$Intercept = G b + residuals$$
>

As Brian pointed out, optim() works only with the objective function (eg the
loglikelihood).  In your notation, optim never sees anything with a t
subscript, so it can't possibly compute this matrix for you.

What's more, the loglikelihood need not be computed as a sum of independent
terms.  There are several examples in the R distribution where R is used to
maximise a likelihood that isn't of this form.
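A minimal sketch (not from the original thread, and assuming a likelihood that *is* a sum of per-observation terms) of how one could build the $G_{ti}$ matrix yourself after an optim() fit, using central finite differences on each observation's contribution; the function names `ll_t`, `negll`, and `grad_contrib` are illustrative, not part of any package:

```r
set.seed(1)
y <- rnorm(50, mean = 2, sd = 1.5)

## Per-observation log-likelihood contributions l_t(y, theta);
## theta = (mean, log(sd)) for a simple normal model
ll_t <- function(theta, y) dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE)

## optim() only ever sees this summed objective, never the t-indexed terms
negll <- function(theta, y) -sum(ll_t(theta, y))

fit <- optim(c(0, 0), negll, y = y, method = "BFGS")

## n x p matrix G of gradient contributions, by central differences
grad_contrib <- function(theta, y, h = 1e-6) {
  sapply(seq_along(theta), function(i) {
    e <- replace(numeric(length(theta)), i, h)
    (ll_t(theta + e, y) - ll_t(theta - e, y)) / (2 * h)
  })
}
G <- grad_contrib(fit$par, y)

## At the maximum, the column sums of G (the total score) should be near zero
colSums(G)
```

The Gauss-Newton convergence check described above would then be the regression of a column of ones on G, e.g. `lm(rep(1, length(y)) ~ G - 1)`.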

-thomas
