[R] Fisher Scoring v/s Coordinate Descent for MLE in R

peter dalgaard pdalgd at gmail.com
Fri Jul 4 11:04:13 CEST 2014


There are books on this, can't repeat them here...

Roughly speaking, Fisher scoring is quadratically convergent, hence requires far fewer iterations than gradient descent methods, which are generally only linearly convergent, and sometimes very slowly so (usually in highly collinear cases). I.e., it is a matter of extra work per iteration against more iterations. Besides, glm() wants the information matrix for the variance-covariance matrix of the estimates anyway.
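To make the iteration-count point concrete, here is a minimal sketch in base R (the simulated data, step size, and tolerance below are arbitrary choices for illustration): glm() converges in a handful of Fisher-scoring iterations and yields the variance-covariance matrix as a by-product, while plain gradient ascent on the same log-likelihood needs many more (cheaper) steps.

set.seed(1)
n <- 500
x <- rnorm(n)
X <- cbind(1, x)                           # design matrix with intercept
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))

## Fisher scoring (what glm() does); trace = TRUE prints the deviance at each
## iteration; usually only a handful are needed.
fit <- glm(y ~ x, family = binomial,
           control = glm.control(trace = TRUE))

## The information matrix is a by-product of the scoring iterations, so the
## variance-covariance matrix of the estimates comes essentially for free:
vcov(fit)

## Plain gradient ascent on the same log-likelihood with a fixed step size;
## convergence is only linear and takes many more iterations.
beta <- c(0, 0)
step <- 1                                  # ad hoc step size
for (it in 1:10000) {
  p    <- drop(plogis(X %*% beta))         # fitted probabilities
  grad <- drop(crossprod(X, y - p)) / n    # average score (gradient)
  beta <- beta + step * grad
  if (sqrt(sum(grad^2)) < 1e-8) break
}
it                                         # number of gradient steps taken
rbind(gradient = beta, fisher = coef(fit)) # both reach (about) the same MLE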

-pd

On 03 Jul 2014, at 19:32, Vijay goel <bgoelv at gmail.com> wrote:

> The base R function glm() uses Fisher scoring for MLE, while glmnet uses
> coordinate descent to solve the same kind of problem, right? Coordinate
> descent is more time-efficient than Fisher scoring, since Fisher scoring
> computes the second-derivative (information) matrix and other matrix
> operations that make it expensive in space and time, while coordinate
> descent can do the same task in O(np) time.
> 
> Why does the base R function use Fisher scoring, and does this method have
> an advantage over other optimization methods? How do coordinate descent and
> Fisher scoring compare? I am relatively new to this field, so any help or
> resources would be appreciated.
> 
> Regards
> Vij
> 
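As for the coordinate-descent side of the question, here is a minimal sketch of the kind of cyclic soft-thresholding update used for the (Gaussian) lasso: each coordinate update touches one column of X, so a full sweep over the p coefficients costs O(np). The data, the value of lambda, and the soft() helper are made up for illustration; glmnet's actual implementation is considerably more refined.

soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)   # soft-threshold operator

set.seed(2)
n <- 200; p <- 10
X <- scale(matrix(rnorm(n * p), n, p))                 # centred predictors
y <- drop(X %*% c(3, -2, rep(0, p - 2)) + rnorm(n))
y <- y - mean(y)                                       # centred response, so no intercept

lambda <- 0.1
b <- rep(0, p)
r <- y                                                 # residual for b = 0
for (pass in 1:1000) {
  b_old <- b
  for (j in 1:p) {                                     # one O(n) update per coordinate
    xj   <- X[, j]
    zj   <- mean(xj * r) + mean(xj^2) * b[j]
    bj   <- soft(zj, lambda) / mean(xj^2)
    r    <- r - xj * (bj - b[j])                       # keep the residual current
    b[j] <- bj
  }
  if (max(abs(b - b_old)) < 1e-8) break                # stop when a sweep barely changes b
}
b                                                      # penalized estimates at this lambda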

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd.mes at cbs.dk  Priv: PDalgd at gmail.com


