[R] Hessian from optim()

Dimitris Rizopoulos dimitris.rizopoulos at med.kuleuven.be
Tue Mar 21 17:56:19 CET 2006


I think it should be the first: for BFGS and L-BFGS-B (the only optim() 
methods that maintain an approximation to the Hessian during the search) 
it is known that the Hessian update at convergence of the parameters may 
still be a poor approximation of the true Hessian.
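
A quick way to check is to recompute the Hessian independently at the 
solution; here is a minimal sketch (only an illustration, and it assumes 
the numDeriv package is installed) with a two-parameter gamma fit:

## sketch: compare the Hessian returned by optim() with an independent
## finite-difference Hessian (assumes the numDeriv package is available;
## the gamma example is purely illustrative)
library(numDeriv)

set.seed(1)
x <- rgamma(200, shape = 2, rate = 0.5)

## negative log-likelihood, parameters on the log scale to keep them positive
negll <- function(logpar, data) {
    shape <- exp(logpar[1]); rate <- exp(logpar[2])
    -sum(dgamma(data, shape = shape, rate = rate, log = TRUE))
}

fit <- optim(c(0, 0), negll, data = x, method = "BFGS", hessian = TRUE)

fit$hessian                       # Hessian reported by optim()
hessian(negll, fit$par, data = x) # independent finite-difference Hessian
sqrt(diag(solve(fit$hessian)))    # standard errors of log(shape), log(rate)

The two matrices should agree closely, which would not be the case if 
optim() simply returned the internal BFGS update; a second small check 
against a closed-form observed information is sketched below the quoted 
messages.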

Best,
Dimitris

----
Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
     http://www.student.kuleuven.be/~m0390867/dimitris.htm


----- Original Message ----- 
From: "Ingmar Visser" <I.Visser at uva.nl>
To: "Thomas Lumley" <tlumley at u.washington.edu>; "Gregor Gorjanc" 
<gregor.gorjanc at bfro.uni-lj.si>
Cc: <r-help at r-project.org>
Sent: Tuesday, March 21, 2006 5:41 PM
Subject: Re: [R] Hessian from optim()


>
>>> Hello!
>>>
>>> Looking at how people use optim to get MLEs, I also noticed that one can
>>> use the returned Hessian to get the corresponding standard errors, i.e.
>>> something like
>>>
>>> result <- optim(<< snip >>, hessian=T)
>>> result$par                  # point estimates
>>> vc <- solve(result$hessian) # var-cov matrix
>>> se <- sqrt(diag(vc))        # standard errors
>>>
>>> What is the Hessian actually representing here? I apologize for the lack
>>> of knowledge, but ... the attached PDF shows the problem I am facing with
>>> this issue.
>>>
>>
>> The Hessian is the second derivative of the objective function, so if the
>> objective function is minus a log-likelihood the Hessian is the observed
>> Fisher information. The inverse of the Hessian is thus an estimate of the
>> variance-covariance matrix of the parameters.
>>
>> For some models this is exactly I/n in your notation, for others it is
>> just close (and there are in fact theoretical reasons to prefer the
>> observed information). I don't remember whether the two-parameter gamma
>> family is one where the observed and expected information are identical.
>
>
>
> The optim help page says:
>
> hessian     Logical. Should a numerically differentiated Hessian matrix be
> returned?
>
> I interpret this as providing a finite-difference approximation of the
> Hessian (possibly based on exact gradients?). Is that the case, or is it a
> Hessian that results from the optimization process?
>
> Best, Ingmar
>
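
As a further sketch of the point above that the inverse Hessian estimates 
the variance-covariance matrix (again only an illustration; the exponential 
model is used because its observed information has the simple closed form 
n / rate^2):

set.seed(2)
y <- rexp(500, rate = 2)

## negative log-likelihood for the exponential rate parameter
negll.exp <- function(rate, data) -sum(dexp(data, rate = rate, log = TRUE))

fit2 <- optim(1, negll.exp, data = y, method = "L-BFGS-B",
              lower = 1e-8, hessian = TRUE)

fit2$hessian                     # observed information from optim()
length(y) / fit2$par^2           # closed form n / rate.hat^2
sqrt(diag(solve(fit2$hessian)))  # standard error of the rate
fit2$par / sqrt(length(y))       # closed form rate.hat / sqrt(n)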





