[R] Levenberg-Marquardt algorithm
Douglas Bates
bates at stat.wisc.edu
Fri Jan 12 05:57:58 CET 2001
Hiroyuki Kawakatsu <kawa at aris.ss.uci.edu> writes:
> There is a recent article^(1)
> that compares the numerical accuracy of NLS estimates in canned
> statistical packages using the NLS problems posted at NIST's
> statistical reference datasets page at
>
> http://www.nist.gov/itl/div898/strd/nls/nls_main.shtml
>
> It is my experience that, at least for the dense NIST problems, the
> specialized algorithms (for example, TOMS algorithm 573, NL2SOL
> written by Dennis, Gay, and Welsch in 1981) return more accurate
> estimates than general minimization routines that do not exploit the
> least squares structure. I do not recall whether R was tested in the
> article cited below, but it would be great if someone could volunteer
> to test the accuracy of the R optimizer for the NIST problems.
>
> (1) McCullough, B. D. "Assessing the Reliability of Statistical Software."
>     Part I: The American Statistician, 52(4), Nov. 1998, 358-366.
>     Part II: The American Statistician, 53(2), May 1999, 149-159.
Bruce McCullough does not compare R with the other software, but he
does include S-PLUS in his comparisons, and the algorithms in S-PLUS
and in R are the same, although the implementations are different. I
know because I wrote both of them.
There is an R package on CRAN called NISTnls that contains the data
sets from the NIST URL you give. If you run individual examples from
that package, you can see how the nls function performs on each
example. To run all the examples you can use

    R CMD check NISTnls
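
Within an R session the individual examples can be run with
example(). Roughly (a sketch, assuming the package is installed from
CRAN and that each data set's help page carries the fitting code as
its example):

    library(NISTnls)      # data sets plus documented nls() calls
    example(Bennett5)     # run the fits shown on the Bennett5 help page
    example(Eckerle4)     # likewise for Eckerle4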
NIST gives two sets of starting estimates for each problem.
Frequently the example in the NISTnls documentation for a data set
will show four or more attempts at converging: two without using the
"plinear" algorithm that takes advantage of partial linearity and two
with "plinear". Generally all four converge, but the ones using
"plinear" are more stable. For example, on the Bennett5 data:

  - the first attempt (without "plinear") does not converge;
  - the second attempt (without "plinear") does converge, but has to
    reduce the residual sum of squares from an initial value of
    57261.1 down to 0.000524 at convergence;
  - the third attempt (using "plinear") converges from an initial RSS
    of 0.53381;
  - the fourth attempt (using "plinear") also converges, from an
    initial RSS of 2.075 (this value is wrong in the 0.9-5 version of
    the package but will be corrected in 0.9-6).
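
In code, the contrast looks roughly like this (a sketch rather than
the exact documentation example; it assumes the NIST model form
y = b1*(b2 + x)^(-1/b3), the first set of NIST starting values, and a
data frame with columns named x and y):

    library(NISTnls)
    data(Bennett5)
    ## default Gauss-Newton iterations, all three parameters nonlinear;
    ## wrapped in try() because this attempt may fail to converge
    try(nls(y ~ b1 * (b2 + x)^(-1/b3), data = Bennett5,
            start = c(b1 = -2000, b2 = 50, b3 = 0.8)))
    ## "plinear": b1 enters linearly, so it is dropped from the formula
    ## and from the starting values and is profiled out at each iteration
    nls(y ~ (b2 + x)^(-1/b3), data = Bennett5,
        start = c(b2 = 50, b3 = 0.8), algorithm = "plinear")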
Some of these test problems are made artificially difficult by
choosing ridiculous starting estimates. Consider the Eckerle4
example. They want to fit the location and scale parameters for a
Gaussian curve as well as a scale factor. A plot of the data clearly
shows that the peak is around x = 450 and the half-height bandwidth
would be around 5. These would be the natural starting estimates for
the parameters b2 and b3 in their form of the model. Instead they
start at b2 = 10 and b3 = 500. That is, they are starting the
estimate for the location parameter about 10 standard deviations away
from the peak of the curve. This is nonsense.
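
A sketch of the more sensible approach (assuming a data frame with
columns x and y and the NIST form of the model,
y = (b1/b2)*exp(-0.5*((x - b3)/b2)^2)):

    library(NISTnls)
    data(Eckerle4)
    plot(y ~ x, data = Eckerle4)  # peak near x = 450, width near 5
    ## b1 enters linearly, so fit only b2 (scale) and b3 (location),
    ## starting from values read off the plot
    fit <- nls(y ~ exp(-0.5 * ((x - b3)/b2)^2)/b2, data = Eckerle4,
               start = c(b2 = 5, b3 = 450), algorithm = "plinear")
    summary(fit)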
One of the interesting points that Bruce McCullough noted in his
comparisons is that S-PLUS is one of only two packages that does not
declare convergence falsely. R also has this property. S-PLUS and R
use an orthogonality convergence criterion that gives an unambiguous
indication of convergence. They don't always converge, but when they
do declare convergence they can be trusted to have converged. Most
other software packages use a criterion based on the progress of the
algorithm, and such criteria can be fooled.
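
The tolerance for this criterion and the iteration limit can be
adjusted through nls.control(); for instance (an illustration only,
not a recommendation of particular values):

    ## tighten the convergence tolerance and allow more iterations
    nls(y ~ exp(-0.5 * ((x - b3)/b2)^2)/b2, data = Eckerle4,
        start = c(b2 = 5, b3 = 450), algorithm = "plinear",
        control = nls.control(maxiter = 100, tol = 1e-7))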
For some of the examples the convergence criterion in R is not met
because the example has an "exact" solution and the numerical
imprecision in the evaluation of derivatives exceeds the size of the
residuals. Such difficulties are encountered only in artificial
examples. It amazes me that numerical analysts talk about the
properties of algorithms on "large residual" problems and then test
the algorithms on artificial data that is generated without error.