jlindsey at alpha.luc.ac.be
Thu Jan 13 18:45:51 CET 2000
> > I have discovered that nlme is extremely sensitive to starting
> > values. I have never experienced this with any other nonlinear
> > optimization involving nlm.
> I have, much of the time, and not anything like so much with other
> optimizers (such as those now added for the development version).
The above statement is true even for models that I can fit with my
functions as well as nlme.
I am very much looking forward to the new optimizers. But I trust that
the original nlm algorithm will not disappear so that I can still
reproduce older consulting work and results in publications.
> > One of the reasons for some of the weird error messages was that the
> > default value of stepmax for nlm is far too large.
> Well, it is an absolute value, and too large for some problems, too small
> for others. We have changed the logic a bit for 0.99.0 so that nlm does a
> better job of finding an initial step, and have hopes of improving further.
> nlme should probably set stepmax to ca 10. The problem stems from the
> assumptions nlm is making about the initial Hessian, without checking them.
I would agree with 10 if it has to be a fixed constant. In my
functions, all arguments to nlm have default values calculated from
the initial estimates supplied by the user, which may partially
explain why I have fewer problems with nlm.
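As a minimal sketch of what is being discussed (my own illustration, not code
from either poster): R's nlm defaults stepmax to
max(1000 * sqrt(sum((p/typsize)^2)), 1000), which is indeed very large, but it
accepts an explicit stepmax, and a data-dependent value can be derived from
the user's initial estimates. The objective function and the particular
scaling rule below are hypothetical examples, not Lindsey's actual defaults.

```r
## Hypothetical example problem: least-squares fit of an exponential decay.
set.seed(1)
x <- 1:10
y <- 5 * exp(-0.3 * x) + rnorm(10, sd = 0.05)
fn <- function(p) sum((y - p[1] * exp(-p[2] * x))^2)

p0 <- c(4, 0.5)  # user-supplied initial estimates

## One possible data-driven choice (an assumption, for illustration only):
## scale the maximum step to the size of the initial estimates instead of
## accepting nlm's default of max(1000 * sqrt(sum((p0/typsize)^2)), 1000).
stepmax0 <- max(10 * sqrt(sum(p0^2)), 1)

fit <- nlm(fn, p0, stepmax = stepmax0)
fit$estimate  # should be near c(5, 0.3) for this simulated data
```

The same idea extends to typsize (set from abs(p0)) and fscale (set from
fn(p0)), which is one way a wrapper can make nlm much less sensitive to the
scale of the problem.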
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject!) to: r-help-request at stat.math.ethz.ch