[R] Starting estimates for nls Exponential Fit
Prof. John C Nash
nashjc at uottawa.ca
Wed Dec 2 22:00:19 CET 2009
Kate is correct. The parameter scaling helps quite a bit, but not enough
to render the problem "nice" so that many "reasonable" starting points
will give useful results. Indeed, a run using "all.methods=TRUE" in our
optimx package (on r-forge at
http://r-forge.r-project.org/R/?group_id=395) gives
                                par  fvalues method   fns grs itns conv  KKT1  KKT2
4      2.194931, 1.000001, 1.000027 566407.6    nlm    NA  NA    3    0 FALSE FALSE
1  2.1949335, -9.0413113, 0.7516435 566407.6   BFGS    37   6 NULL    0  TRUE FALSE
2   2.1950009, 0.2548318, 1.1163498 566404.6 Nelder    82  NA NULL    0 FALSE FALSE
3      1.974226, 1.829957, 1.681338 2409.771   SANN 10000  NA NULL    0 FALSE  TRUE
6   1.9904045, 0.6151421, 1.7559401 1696.497 ucminf    51  51 NULL    0 FALSE  TRUE
5   1.9906488, 0.6012996, 1.7575365 1696.248 nlminb    80 166   51    0 FALSE  TRUE
A bit of a dog's dinner.
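A run of this kind can be set up along the following lines (a sketch only: it
assumes the objective fun and start pa from the script quoted further down;
the exact objective and start behind the table above are not shown in this
message):

library(optimx)                               # from the r-forge URL above
ans <- optimx(par=pa, fn=fun, control=list(all.methods=TRUE))
print(ans)   # one row per method, including the KKT optimality checks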
On the positive side, this is a simple but nasty problem that illustrates
many of the issues that arise in nonlinear parameter estimation.
JN
Katharine Mullen wrote:
> You used starting values:
> pa <- c(1,2,3)
>
> but both algorithms (port and Gauss-Newton) fail if you use the slightly
> different values:
> pa <- c(1,2,3.5)
>
> Scaling does not fix the underlying sensitivity to starting values.
> pa[3] in particular cannot be above ~3.15 for GN and ~3.3 for port; both
> algorithms also fail if there is insufficient spread (e.g., both fail for
> pa <- c(1,1.25,1.5) ).
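> For instance, something along these lines (a sketch reusing the scaled
> formula, lo, and up from the script quoted further down; pa2 is just an
> illustrative name) shows the failure from the shifted start:
>
> pa2 <- c(Y0=1, a=2, E=3.5)   # only the third value changed from 3 to 3.5
> try(nls(ExponValues ~ (1000*Y0 + 0.001*a*(E^ExponCycles)),
>         start=pa2, algorithm='port', lower=lo, upper=up,
>         control=list(maxiter=1000)))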
>
> For the record, DE uses (by default at least) a random start, and for bad
> starts it will sometimes fail to give good results; you can decrease the
> probability that this happens by increasing itermax from the default of 200, as in:
>
> ss <- DEoptim(fun, lower=rep(0,3), upper=c(10e7, 10, 10),
> control=list(trace=FALSE, itermax=1000))
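>
> ## A sketch of pulling results out of such a run: DEoptim stores the best
> ## parameter vector and objective value in the returned list, which is what
> ## makes it handy for generating nls starting values.
> best <- ss$optim$bestmem      # best parameter set found
> bestval <- ss$optim$bestval   # corresponding objective value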
>
> On Wed, 2 Dec 2009, Prof. John C Nash wrote:
>
>> Kate Mullen showed one approach to this problem by using DEoptim to get
>> some good starting values.
>>
>> However, I believe that the real issue is scaling (Warning: well-ridden
>> hobby-horse!).
>>
>> With appropriate scaling, as below, nls does fine. This isn't saying nls
>> is perfect -- I've been trying to figure out how to do a nice job of
>> helping folk to scale their problems. Ultimately, it would be nice to
>> have an nls version that will do the scaling and also watch for some
>> other situations that give trouble.
>>
>> Cheers, JN
>>
>>
>> ## JN test
>> rm(list=ls())
>>
>> ExponValues <- c(2018.34, 2012.54, 2018.85, 2023.52, 2054.58, 2132.61,
>>                  2247.17, 2468.32, 2778.47)
>>
>> ExponCycles <- c(17,18,19,20,21,22,23,24,25)
>>
>> mod <- function(x) x[1] + x[2]*x[3]^ExponCycles      # unscaled model: Y0 + a*E^cycle
>>
>> modj <- function(x) (1000*x[1] + 0.001*x[2]*x[3]^ExponCycles)   # rescaled model (same scaling as in the nls formulas below)
>>
>> fun <- function(x) sum((ExponValues-mod(x))^2)   # sum-of-squares objective (as used in the DEoptim call above)
>>
>>
>>
>> pa<-c(1,2,3)
>> lo<-c(0,0,0)
>> up<-c(20,20,20)
>> names(pa) <- c("Y0", "a", "E")
>>
>> ## fit w/port and GN
>> Emodjn <- nls(ExponValues ~ (1000*Y0 + 0.001*a*(E^ExponCycles)),
>>               start=pa, algorithm='port', lower=lo, upper=up,
>>               control=list(maxiter=1000, trace=TRUE, warnOnly=TRUE))
>>
>> Emodjn1 <- nls(ExponValues ~ (1000*Y0 + 0.001*a*(E^ExponCycles)), start=pa,
>>                control=list(maxiter=1000, trace=TRUE, warnOnly=TRUE))
>>
>> ## plot the data and the two sets of fitted values
>> matplot(cbind(ExponValues, fitted(Emodjn), fitted(Emodjn1)),type="l")
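>>
>> ## For comparison (a sketch; Emodjn0 is just an illustrative name): the
>> ## unscaled form of the same model, which is what the rescaling above is
>> ## intended to address; try() is used in case this form stops with an error
>> ## rather than a warning.
>> Emodjn0 <- try(nls(ExponValues ~ Y0 + a*(E^ExponCycles), start=pa,
>>                control=list(maxiter=1000, trace=TRUE, warnOnly=TRUE)))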
>>