[R] Finding starting values for the parameters using nls() or nls2()

dave fournier davef at otter-rsch.com
Thu Oct 27 17:48:39 CEST 2016


 >
 >> On Oct 25, 2016, at 9:29 AM, dave fournier <davef at otter-rsch.com> wrote:
 >>
 >>
 >>
 >> Unfortunately this problem does not appear to be well posed.
 >>
 >>    Retention = (b0*Area^(th+1))^b
 >>
 >> If b0, th, and b are the parameters, only the product (th+1)*b is
 >> determined.
 >>
 >> This comes from noting that powers satisfy
 >>
 >>
 >>
 >>    (a^b)^c  =  a^(b*c)
 >>
 >>
 >>
 >> So your model can be written as
 >>
 >>
 >>
 >>          (b0^b) * Area^((th+1)*b)
 >>
 >
 >... which nicely completes the thread since one model had:
 >
 >th1   =  9.1180e-01
 > b01=    5.2104e+00
 > b11 =  -4.6725e-04
 >(th1+1)*b11
 >[1] -0.0008932885
 >
 >
 > b0  = 5.2104466   ;    b1 =   -0.0004672   ;  th =  0.9118029
 >((th+1)*b1)
 >[1] -0.0008931943
 >
 >So both R's nls2 and AD Model Builder yield the same predictions
 >for any given data point, at least up to four decimal places.
 >
 >So perhaps there was some other physical interpretation of those
 >parameters, and there exists an underlying theory that would support
 >adding some extra constraints?
 >
 >--
 >David Winsemius
 >Alameda, CA, USA

I'm not really sure what your point is. The OP has two models. One is
well posed and the other is not. I was discussing the solution of the
former, well-posed model. The form of the model, using a, b, and t for
the parameters and x, y for the data to simplify the expression, is

         y = exp( a * exp(b * x^t) )
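
In R notation this is simply (function name mine):

    ## the well-posed model
    f <- function(x, a, b, t) exp(a * exp(b * x^t))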

My point is that there are many models like this where the obvious
parameterization (something like the parameters the user is interested
in) leads to parameter estimates with highly correlated errors. This
does not necessarily mean that the model is badly determined. The model
exists independent of the particular parameterization. To fix the
ideas, assume there are n observations (x_i, y_i), and simplify by
assuming that x_1 <= x_2 <= ... <= x_n (but, one hopes, not all equal).

A stable parameterization of the model can often be obtained by
picking as new parameters y1 and yn, where y1 and yn are the predicted
values for y_1 and y_n, that is

            y1 = exp( a * exp(b * x_1^t) )
            yn = exp( a * exp(b * x_n^t) )

Then one solves for some of the original model parameters in terms of
y1 and yn. The particular way this is done depends on the model. Often
the model has some linear parameters and the procedure is easy. For
this model, as I showed, one can solve for a and b in terms of y1 and
yn. Then one can fit the model easily with AD Model Builder or nls2
using this parameterization. nls2 provides the standard errors for the
parameter estimates. The AD Model Builder solution provides the
estimated variance-covariance matrix of the parameter estimates via
the standard maximum likelihood Hessian calculations. It also provides
the covariance for any desired dependent variable. So one can fit the
model using y1, yn, and t and get the covariance matrix for a, b, and
t in one step. In nls2 one needs to fit the model using y1 and yn,
then solve for a and b, and run the model again from that point. That
is no big deal, and I showed how to do it, but it is one more step for
the user.
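
To make this concrete, here is a minimal sketch of the stable fit using
plain nls(). The helper pred_stable and the data frame dat (columns x
and y) are names I am introducing here, not the OP's, and the back-solve
assumes a > 0 with predicted values above 1 so the inner logs are
positive:

    ## Back-solve a and b from (y1, yn, logt), then predict y.
    ## Taking logs twice: log(log(y1)) - log(log(yn)) = b*(x_1^t - x_n^t).
    pred_stable <- function(x, y1, yn, logt, x1, xn) {
      t <- exp(logt)                    # t = exp(logt) keeps t > 0
      b <- log(log(y1) / log(yn)) / (x1^t - xn^t)
      a <- log(y1) * exp(-b * x1^t)
      exp(a * exp(b * x^t))
    }

    x1 <- min(dat$x)    # dat: hypothetical data frame with columns x, y
    xn <- max(dat$x)

    ## The observed endpoints are natural starting values for y1 and yn,
    ## which is the whole point of this parameterization.
    fit <- nls(y ~ pred_stable(x, y1, yn, logt, x1, xn),
               data  = dat,
               start = list(y1 = dat$y[which.min(dat$x)],
                            yn = dat$y[which.max(dat$x)],
                            logt = 0))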

It is interesting to see the correlation matrix for the parameter estimates
and the dependent variables.
            estimate     std dev    correlation
1  logt  -9.233e-02   3.4026e-01    1.0000
2  c      9.7164e-01  3.7006e-02   -0.2690  1.0000
3  d      1.1010e+00  1.7852e-01   -0.7495  0.0325  1.000
4  t      9.1180e-01  3.1025e-01    1.0000 -0.2690 -0.749  1.0000
5  a      5.2104e+00  4.3404e-01   -0.9781  0.4191  0.621 -0.9781  1.000
6  b     -4.6725e-04  1.1714e-03    0.9994 -0.2836 -0.726  0.9994 -0.984  1.00

Here the independent variables (the parameters actually estimated) are
c, d, and logt, where y1 = c*y_1 and yn = d*y_n. That is just an extra
rescaling so that one can start with c = 1 and d = 1. Also logt is used,
where t = exp(logt), which makes sure that t > 0. Notice that the
correlation between c and d is 0.0325, which is small.
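
For an nls() fit like the sketch above, the analogous correlation
matrix of the parameter estimates can be printed with summary():

    ## correlation matrix of the nls parameter estimates
    s <- summary(fit, correlation = TRUE)
    round(s$correlation, 4)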


If a and b (together with t) were the parameters of the model, the
corresponding block of the correlation matrix would be

4   t    9.1180e-01 3.1025e-01   1.0000
5   a    5.2104e+00 4.3404e-01   -0.9781  1.000
6   b   -4.6725e-04 1.1714e-03    0.9994 -0.984  1.00

One can see how close to 1 and -1 the correlations are.

Notice that the parameter b is very badly determined: its standard
error (1.1714e-03) is more than twice the magnitude of the estimate
itself. So rather than saying the model is no good, one sees that the
model is no good if one wants to get information about the parameter b.
In contrast, the parameter a is fairly well determined, and the
parameter t is "kind of" determined.

Because of this parameter confounding, nls2 is extremely sensitive
to the starting parameter values under this parameterization.
If I change the parameters by about 15% or less, as below,

   # original estimates:
   #b0_start=5.210445
   #b1_start=-0.0004672373
   #th_start=0.911804
   # perturbed by roughly 15%:
   b0_start=5.7
   b1_start=-0.00055
   th_start=1.05

I get the dreaded

Error in (function (formula, data = parent.frame(), start, control = 
nls.control(),  :
   singular gradient
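
(For reference, the failing call looks roughly like this. The formula
is my reading of the well-posed model, and dat is the same hypothetical
data frame used in the earlier sketch.)

    library(nls2)
    ## naive (b0, b1, th) parameterization; started from the perturbed
    ## values above, this stops with the "singular gradient" error
    fit_naive <- nls2(y ~ exp(b0 * exp(b1 * x^th)),
                      data  = dat,
                      start = data.frame(b0 = b0_start, b1 = b1_start,
                                         th = th_start))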

So there is nothing wrong with either the data or the model. The model
just needs to be given a stable parameterization.


