[R] How to coerce a parameter in nls?
ProfJCNash
profjcnash at gmail.com
Sun Sep 20 20:56:44 CEST 2015
I posted a suggestion to use the nlmrt package (the function nlxb, to be
precise), which supports masked (fixed) parameters. There are examples in
my 2014 book, Nonlinear Parameter Optimization Using R Tools. However, I'm
travelling just now, or I would give this a try myself.
JN
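
A rough sketch of that idea, untested here while travelling: nlxb() has a
masked argument (a character vector of parameter names held fixed at their
starting values). Since nlxb does not handle nls-style indexed parameters
like Rm[ref], the sketch writes the grouped model out with one named Rm per
group via 0/1 indicator columns; please check this against the nlmrt help
pages before relying on it.

## Untested sketch: fix Rm6 at 1 via nlxb's masked parameters.
## Indicator columns g1..g6 replace the Rm[ref] indexing used with nls.
library(nlmrt)

dp <- dproot
for (k in 1:6) dp[[paste0("g", k)]] <- as.numeric(dp$ref == k)

fitm <- nlxb(den ~ (Rm1*g1 + Rm2*g2 + Rm3*g3 + Rm4*g4 + Rm5*g5 + Rm6*g6) /
                     (1 + (depth/d50)^c),
             data   = dp,
             start  = c(Rm1 = 1.01, Rm2 = 1.01, Rm3 = 1.01,
                        Rm4 = 6.65, Rm5 = 1.01, Rm6 = 1,
                        d50 = 20, c = -1),
             masked = c("Rm6"))   # Rm6 held at its starting value of 1
print(fitm)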
On 15-09-20 01:19 PM, Jianling Fan wrote:
> No, I am fitting a regression to 6 groups of data, with 2 parameters shared
> across groups and 1 group-specific parameter for each group. The parameter I
> want to coerce belongs to just one group, and I don't know how to do it.
> Any suggestions?
>
> Thanks!
>
> On 19 September 2015 at 13:33, Jeff Newmiller <jdnewmil at dcn.davis.ca.us> wrote:
>> Why not rewrite the function so that the value is not a parameter?
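
One way to read that suggestion, as an untested sketch rather than anything
Jeff wrote: append the fixed value 1 to the estimated Rm vector inside the
formula, so group 6 uses the constant and nls only estimates the other five
Rm values plus d50 and c.

## Untested sketch: make the group-6 value a constant, not a parameter.
## c(Rm, 1)[ref] picks an estimated Rm for groups 1-5 and the fixed 1 for group 6.
fitdp2 <- nls(den ~ c(Rm, 1)[ref] / (1 + (depth/d50)^c),
              data  = dproot,
              start = list(Rm = c(1.01, 1.01, 1.01, 6.65, 1.01),
                           d50 = 20, c = -1))
summary(fitdp2)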
>>
>> On September 18, 2015 9:54:54 PM PDT, Jianling Fan <fanjianling at gmail.com> wrote:
>>> Hello, everyone,
>>>
>>> I am fitting an nls regression to data from 6 groups. I am trying to
>>> coerce one parameter to 1 by using upper and lower bounds, but I always
>>> get the error below:
>>>
>>> Error in ifelse(internalPars < upper, 1, -1) :
>>> (list) object cannot be coerced to type 'double'
>>>
>>> does anyone know how to fix it?
>>>
>>> thanks in advance!
>>>
>>> My code is below:
>>>
>>>
>>>
>>>> dproot
>>> depth den ref
>>> 1 20 0.5730000 1
>>> 2 40 0.7800000 1
>>> 3 60 0.9470000 1
>>> 4 80 0.9900000 1
>>> 5 100 1.0000000 1
>>> 6 10 0.6000000 2
>>> 7 20 0.8200000 2
>>> 8 30 0.9300000 2
>>> 9 40 1.0000000 2
>>> 10 20 0.4800000 3
>>> 11 40 0.7340000 3
>>> 12 60 0.9610000 3
>>> 13 80 0.9980000 3
>>> 14 100 1.0000000 3
>>> 15 20 3.2083491 4
>>> 16 40 4.9683383 4
>>> 17 60 6.2381133 4
>>> 18 80 6.5322348 4
>>> 19 100 6.5780660 4
>>> 20 120 6.6032064 4
>>> 21 20 0.6140000 5
>>> 22 40 0.8270000 5
>>> 23 60 0.9500000 5
>>> 24 80 0.9950000 5
>>> 25 100 1.0000000 5
>>> 26 20 0.4345774 6
>>> 27 40 0.6654726 6
>>> 28 60 0.8480684 6
>>> 29 80 0.9268951 6
>>> 30 100 0.9723207 6
>>> 31 120 0.9939966 6
>>> 32 140 0.9992400 6
>>>
>>>> fitdp<-nls(den~Rm[ref]/(1+(depth/d50)^c),data=dproot,
>>> + start = list(Rm=c(1.01, 1.01, 1.01, 6.65,1.01,1), d50=20, c=-1))
>>>> summary(fitdp)
>>>
>>> Formula: den ~ Rm[ref]/(1 + (depth/d50)^c)
>>>
>>> Parameters:
>>> Estimate Std. Error t value Pr(>|t|)
>>> Rm1 1.12560 0.07156 15.73 3.84e-14 ***
>>> Rm2 1.57643 0.11722 13.45 1.14e-12 ***
>>> Rm3 1.10697 0.07130 15.53 5.11e-14 ***
>>> Rm4 7.23925 0.20788 34.83 < 2e-16 ***
>>> Rm5 1.14516 0.07184 15.94 2.87e-14 ***
>>> Rm6 1.03658 0.05664 18.30 1.33e-15 ***
>>> d50 22.69426 1.03855 21.85 < 2e-16 ***
>>> c -1.59796 0.15589 -10.25 3.02e-10 ***
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> Residual standard error: 0.1094 on 24 degrees of freedom
>>>
>>> Number of iterations to convergence: 8
>>> Achieved convergence tolerance: 9.374e-06
>>>
>>>> fitdp1<-nls(den~Rm[ref]/(1+(depth/d50)^c),data=dproot,
>>> algorithm="port",
>>> + start = list(Rm=c(1.01, 1.01, 1.01, 6.65, 1.01, 1), d50=20, c=-1),
>>> + lower = list(Rm=c(1.01, 1.01, 1.01, 6.65, 1.01, 1), d50=20, c=-1),
>>> + upper = list(Rm=c(2.1, 2.2, 2.12, 12.5, 2.3, 1), d50=50, c=1))
>>>
>>> Error in ifelse(internalPars < upper, 1, -1) :
>>> (list) object cannot be coerced to type 'double'
>>>
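
For what it's worth, the error itself comes from passing lists as bounds:
with algorithm = "port", nls() expects lower and upper to be plain numeric
vectors in the order of the flattened start values (Rm1..Rm6, d50, c). An
untested sketch of that repair is below, keeping the bounds from the call
above; treating equal lower and upper bounds as a way to pin Rm6 at 1 is an
assumption, and widening that pair to c(1, 1 + 1e-8) is a fallback if port
objects.

## Untested sketch: numeric-vector bounds for algorithm = "port",
## ordered Rm1..Rm6, d50, c; equal bounds on the sixth Rm hold it at 1.
fitdp1 <- nls(den ~ Rm[ref] / (1 + (depth/d50)^c), data = dproot,
              algorithm = "port",
              start = list(Rm = c(1.01, 1.01, 1.01, 6.65, 1.01, 1),
                           d50 = 20, c = -1),
              lower = c(1.01, 1.01, 1.01, 6.65, 1.01, 1, 20, -1),
              upper = c(2.10, 2.20, 2.12, 12.5, 2.30, 1, 50,  1))
summary(fitdp1)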
>>
>
>
>